Filip Woj: Strongly typed direct routing link generation in ASP.NET Web API with Drum

ASP.NET Web API provides an IUrlHelper interface and the corresponding UrlHelper class as a general, built-in mechanism you can use to generate links to your Web API routes. In fact, it’s similar in ASP.NET MVC, so this pattern has been … Continue reading

The post Strongly typed direct routing link generation in ASP.NET Web API with Drum appeared first on StrathWeb.


Taiseer Joudeh: ASP.NET Web API Documentation using Swagger

Recently I was working on designing and implementing a large-scale RESTful API using ASP.NET Web API. This RESTful API contains a large number of endpoints with different data models used in the request/response payloads.

Proper documentation and a solid API explorer (playground) are crucial for your API's success and its likability by developers. There is a very informative slide deck by John Musser which highlights the top 10 reasons that make developers hate your API, and unsurprisingly the top one is the lack of documentation and the absence of interactive documentation; so to avoid this pitfall I decided to document my Web API using Swagger.

Swagger is basically a framework for describing, consuming, and visualizing RESTful APIs. The nice thing about Swagger is that it keeps the documentation system, the client, and the server code in sync at all times; in other words, the documentation of methods, parameters, and models is tightly integrated into the server code.

So in this short post I decided to add documentation using Swagger for a simple ASP.NET Web API project which contains a single controller with different HTTP methods. The live demo API explorer can be accessed here, and the source code can be found on GitHub. The final result for the API explorer will look like the image below:

Asp.Net Web Api Swagger

Swagger is language-agnostic, so there are different implementations for different platforms. For the ASP.NET Web API world there is a solid open source implementation named “Swashbuckle”. This implementation simplifies adding Swagger to any Web API project by following the simple steps I’ll highlight in this post.

The nice thing about Swashbuckle is that it has no dependency on ASP.NET MVC, so there is no need to include any MVC NuGet packages in order to enable API documentation. As well, Swashbuckle contains an embedded version of swagger-ui which will automatically be served up once Swashbuckle is installed.

Swagger-ui is basically a dependency-free collection of HTML, JavaScript, and CSS files that dynamically generates documentation and a sandbox from a Swagger-compliant API; it is a Single Page Application which forms the playground shown in the image above.

Steps to Add Swashbuckle to ASP.NET Web API

The sample ASP.NET Web API project I want to document is built using Owin middleware and hosted on IIS. I’ll not go into details on how I built the Web API, but I’ll focus on how I added Swashbuckle to the API.

For brevity the API contains a single controller named “StudentsController”; the source code for the controller can be viewed here, and it also contains a simple “Student” model class which can be viewed here.

Step 1: Install the needed Nuget Package

Open NuGet Package Manager Console and install the below package:

Install-Package Swashbuckle

Once this package is installed it will add a bootstrapper file (App_Start/SwaggerConfig.cs) which initiates Swashbuckle on application start-up using WebActivatorEx.

Step 2: Call the bootstrapper in “Startup” class

Now we need to call the Swashbuckle bootstrapper in the “Startup” class, so open the Startup class and replace it with the code below (note the Swashbuckle.Bootstrapper.Init(config) line):

using System.Web.Http;
using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(WebApiSwagger.Startup))]
namespace WebApiSwagger
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            HttpConfiguration config = new HttpConfiguration();

            WebApiConfig.Register(config);

            // Initialize Swashbuckle so the Swagger spec and UI are served
            Swashbuckle.Bootstrapper.Init(config);

            app.UseWebApi(config);
        }
    }
}

Step 3: Enable generating XML documentation

This is not a required step to use “Swashbuckle”, but I believe it is very useful for API consumers, especially if you have complex data models. To enable XML documentation, right-click on your Web API project -> “Properties”, then choose the “Build” tab, scroll down to the “Output” section, check the “XML documentation file” checkbox, and set the file path to “bin\[YourProjectName].XML” as in the image below. This will add an XML file to the bin folder which contains all the XML comments you added as annotations to the controllers or data models.

Swagger XML Comments

Step 4: Configure Swashbuckle to use XML Comments

By default Swashbuckle doesn’t include the annotated XML comments on the API controllers and data models in the generated specification and the UI; to include them we need to configure it, so open the file “SwaggerConfig.cs” and update it with the code below:

public class SwaggerConfig
    {
        public static void Register()
        {
            Swashbuckle.Bootstrapper.Init(GlobalConfiguration.Configuration);

            // NOTE: If you want to customize the generated swagger or UI, use SwaggerSpecConfig and/or SwaggerUiConfig here ...

            SwaggerSpecConfig.Customize(c =>
            {
                c.IncludeXmlComments(GetXmlCommentsPath());
            });
        }

        protected static string GetXmlCommentsPath()
        {
            return System.String.Format(@"{0}\bin\WebApiSwagger.XML", System.AppDomain.CurrentDomain.BaseDirectory);
        }
    }

Step 5: Start annotating your API methods and data models

Now we can start adding XML comments to the API methods. For example, if we take a look at the HTTP POST method in the Students controller, as in the code below:

/// <summary>
        /// Add new student
        /// </summary>
        /// <param name="student">Student Model</param>
        /// <remarks>Insert new student</remarks>
        /// <response code="400">Bad request</response>
        /// <response code="500">Internal Server Error</response>
        [Route("")]
        [ResponseType(typeof(Student))]
        public IHttpActionResult Post(Student student)
        {
            //Method implementation goes here....
        }

You will notice that each XML tag is reflected in the Swagger properties, as in the image below:
Swagger XML Tags Mapping

Summary

You can add Swashbuckle seamlessly to any Web API project, and then you can easily customize the generated specification along with the UI (swagger-ui). You can check the documentation and troubleshooting section here.

That’s it for now. Hopefully this post will be useful for anyone looking to document their ASP.NET Web API. If you have any questions, or you have used another tool for documentation, please drop me a comment.

The live demo API explorer can be accessed here, and the source code can be found on GitHub.

Follow me on Twitter @tjoudeh

The post ASP.NET Web API Documentation using Swagger appeared first on Bit of Technology.


Darrel Miller: A drive by review of the Uber API

Uber recently announced the availability of a public API.  I decided to take it for a spin and provide some commentary.  The quick version is that it is a fairly standard HTTP API and that is both a good thing and a bad thing.

You can find the official documentation for the Uber API on their developer site.  In general the documentation is easy to read and it seems fairly comprehensive for what is, at the moment, a pretty simple API.  From what I can see, it is just a read-only API at the moment, but they do advertise that "more endpoints are coming".

uber

Versioning

The API reference document starts with this,

All API requests are made to https://api.uber.com/<version>, where version is currently v1

It is unfortunate that the Uber API developers have not spent more time at conferences like API Craft, RESTFest, APIDays, and EndpointCon, where it is becoming increasingly common knowledge that putting a v1 at the front of your URI is not a best practice.  There are many other more finely grained ways of doing versioning, when you really must introduce breaking changes.  Having a version number as a first path segment creates a huge cliff where potentially everything changes in v2, and it can become a nightmare for client and server developers.
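The post doesn't prescribe a particular alternative, but one of those finer-grained options is to negotiate the version through the media type rather than the URI. A minimal sketch, assuming a hypothetical host and a made-up "v" media type parameter:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class MediaTypeVersioningSketch
{
    public static Task<HttpResponseMessage> GetProductsAsync(HttpClient client)
    {
        var request = new HttpRequestMessage(HttpMethod.Get, "https://api.example.org/products");

        // The version travels as a media type parameter, so a breaking change can be
        // introduced per representation rather than for the whole URI space at once.
        var accept = new MediaTypeWithQualityHeaderValue("application/json");
        accept.Parameters.Add(new NameValueHeaderValue("v", "2"));
        request.Headers.Accept.Add(accept);

        return client.SendAsync(request);
    }
}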

HTTP Status Codes

The documentation then lists the HTTP status codes.  I'm really not sure why the writers of the documentation feel the need to rewrite what is supposed to be standardized.  However, it is interesting that they chose to use 422 to distinguish between "invalid content" and a "malformed request".   It will be interesting to see how they make that distinction.

It does amuse me that they only include the 500 error in the 5XX range.  I'm pretty sure most clients are going to need to account for 502, 503 and 504 as well.  Enumerating the response messages that a server supports is very misleading to client developers.  Clients need to be built to account for every HTTP status code, because you never know what other status-code-returning intermediary is going to be sitting between you and the origin server.  There is no harm in defining actions based on ranges of status code values.  It's not as if every status code needs explicit handling code.
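To illustrate the point about ranges, here is a rough client-side sketch of acting on status code families rather than enumerating every individual code; the request URI is only illustrative.

using System.Net.Http;
using System.Threading.Tasks;

class StatusRangeHandlingSketch
{
    public static async Task HandleAsync(HttpClient client)
    {
        var response = await client.GetAsync("https://api.uber.com/v1/products");
        var code = (int)response.StatusCode;

        if (code >= 200 && code < 300)
        {
            // Success family: read and process the body.
        }
        else if (code >= 400 && code < 500)
        {
            // Client error family (400, 401, 404, 422, 429, ...): inspect the error payload.
        }
        else if (code >= 500)
        {
            // Server error family (500, 502, 503, 504, ...): back off and retry,
            // even though the documentation only lists 500.
        }
    }
}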

Error Messages

Once again, we have another API that has decided it needs to re-invent the "error message" wheel.  There are a number of efforts underway to create a standardized format, such as http-problem and vnd-error.  API providers should get behind one and not invent their own.

404

Rate Limiting

It is great that Uber has chosen to use the new 429 status code to indicate that rate limits have been exceeded and it is commendable that they have re-used the headers from the Twitter API for indicating the rate limit status.  However, it would have been nice if they could have dropped the X- as has been recommended in RFC 6648.

I'm not really sure why there is a need to return the rate limiting information on every request.  Why not provide a resource I can query when I want to know what the rate limit is?  Why send those extra bytes over the wire?

Authentication

Reading the authentication section is almost a delight, because I almost don't need to read it.  For resources that are specific to a particular user, it is OAuth 2.  This is the way it is supposed to be with HTTP APIs.  By taking advantage of the uniform interface, all the infrastructure issues are just supposed to work the same way.  I don't need to learn anything new.  At least I shouldn't.  Unfortunately when testing the API we identified a number of nits related to the OAuth2 process.  I suspect they will be ironed out soon enough though.

For resources that are not specific to a user, you use a server_token which is supplied when you register your application.  Interestingly the documentation states that you MUST pass the token using an Authorization header with the authorization scheme Token.  This is awesome, just the way it should be done.  However, the reality is a bit different.  If you look at the tutorial example, the server_token is passed as a query string parameter.  This is convenient, but you may find your server_token littered throughout other people's log files.  Both ways work, so just do the right thing!
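For illustration, a minimal sketch of sending the server_token in an Authorization header with the Token scheme rather than on the query string; the token value and request URI are placeholders.

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ServerTokenAuthSketch
{
    public static Task<HttpResponseMessage> GetProductsAsync(HttpClient client, string serverToken)
    {
        // Sends "Authorization: Token <server_token>" ...
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Token", serverToken);

        // ...rather than appending ?server_token=<server_token> to the URI, which
        // tends to end up in other people's log files.
        return client.GetAsync("https://api.uber.com/v1/products");
    }
}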

Resources

The API documentation chooses to call these "Endpoints" in the heading but then uses the term Resource within the actual documentation page.  It's one of those things that doesn't matter once you already know what they mean, but it can be confusing for people who are new and trying to understand which words are significant and which are not.

I am never very good at getting a grasp of APIs from looking through a list of URIs.  I much prefer a visual representation.  I'm used to building hypermedia APIs which are a bit easier to draw pictures of because they naturally have links between resources.  However, for non-hypermedia APIs like the Uber API I have found that a hierarchical diagram of the URI space by path segment can be a good visualization tool.

image

From what I understand from the documentation, this is how the URI space is defined.  The documentation does a decent job at explaining what each of these resources represent.

Media Types

receipts

Unsurprisingly, the API simply returns application/json for all representations.  This means that it would be difficult for clients to do anything other than use the request URI to identify the structure and semantics of the response.  Even sniffing the content to determine semantics is tricky because resources like /v1/me don't have a root object name.  Coupling representation structure to the request URI is commonplace within the HTTP API world, I just wish developers had a better understanding of the price that they are paying for taking this shortcut.

Caching

I am disappointed that we don't see any caching directives on the responses coming back from the API.  As all of the requests require an Authorization header, it makes sense to choose not to make the responses publicly cacheable.  However, private caching would be perfectly viable for responses, especially for things like the list of products, and even more so considering the current performance of the API.  Retrieving the list of locations is currently taking around 350ms in my tests, and once an HTTPS connection is established it drops down to around 150ms.  For a single request that isn't terrible, but I am doing this on a desktop with a decent broadband connection.

One mitigating factor is that there is an Etag on the products list, so at least it allows clients to perform conditional GET requests and spare the bytes over the wire.  However, the current list of products weighs in at around 1KB so, chances are, it would all fit in a network packet anyway!
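For what it's worth, a sketch of a conditional GET that reuses the ETag from a previous products response; the URI is assumed and the caching of the earlier representation is left out for brevity.

using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ConditionalGetSketch
{
    public static async Task<HttpResponseMessage> GetProductsAsync(HttpClient client, EntityTagHeaderValue cachedEtag)
    {
        var request = new HttpRequestMessage(HttpMethod.Get, "https://api.uber.com/v1/products");

        if (cachedEtag != null)
        {
            request.Headers.IfNoneMatch.Add(cachedEtag);
        }

        var response = await client.SendAsync(request);

        if (response.StatusCode == HttpStatusCode.NotModified)
        {
            // 304: nothing crossed the wire; serve the previously cached representation.
        }

        return response;
    }
}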

Summary

In general, it is a fairly simple HTTP API, that uses techniques that most API developers will be familiar with.  I'm happy that we are starting to see some stabilization around security mechanisms, but I do wish that there was a little more forward thinking in API design in general.

Hypermedia potential

Interestingly, there is quite a bit of opportunity here to take advantage of hypermedia to define workflows in this kind of API.  There are likely a few fairly well defined paths through the process of selecting a product, verifying the prices, determining time to arrival and then selecting a car.  Along the way, the client application accumulates state: where the user is located, what type of car they want, which driver they want, and finally accepting a request for pickup.  This is an ideal scenario for hypermedia as the engine of application state to accumulate the information selected by the user whilst leading them on to the next step of the process.

 

Image credit: Uber https://flic.kr/p/evc5a9
Image credit: 404 sign https://flic.kr/p/ca95Ad
Image credit: receipts https://flic.kr/p/35oDiF


Filip Woj: ASP.NET Web API 2: Recipes is out!

My long promised book, ASP.NET Web API 2: Recipes, was published by Apress last week. I announced the book a while ago, when I also tried to explain the general idea behind the book. In short, I really wanted … Continue reading

The post ASP.NET Web API 2: Recipes is out! appeared first on StrathWeb.


Taiseer Joudeh: ASP.NET Web API 2 external logins with Facebook and Google in AngularJS app

Ok, so it is time to enable ASP.NET Web API 2 external logins such as Facebook and Google and then consume them in our AngularJS application. In this post we’ll add support for logging in using the Facebook and Google+ external providers, then we’ll associate those authenticated social accounts with local accounts.

Once we complete the implementation in this post we’ll have an AngularJS application that uses OAuth bearer tokens to access any secured back-end API end point. This application will allow users to login using the resource owner password flow (Username/Password) along with refresh tokens, and support using external login providers. So to follow along with this post I highly recommend reading the previous parts:

You can check the demo application, play with the back-end API for learning purposes (http://ngauthenticationapi.azurewebsites.net), and check the source code on Github.

AngularJs External Logins

There is a great walkthrough by Rick Anderson which shows how you can enable end users to login using social providers. The walkthrough uses the VS 2013 MVC 5 template with individual accounts; to be honest it is quite easy and straightforward to implement this if you want to use that template, but in my case, and if you’re following along with the previous parts, you will notice that I’ve built the Web API from scratch and added the needed middleware manually.

The MVC 5 template with individual accounts is quite complex; there is middleware for cookies, external cookies, bearer tokens… so it takes time to digest how this template works, and maybe you do not need all these features in your API. There are two good posts describing how external logins are implemented in this template, by Brock Allen and Dominick Baier.

So what I tried to do at the beginning was to add the features and middleware needed to support external logins to my back-end API (I wanted to use the minimal possible code to implement this). Everything went well, but the final step, where I should receive something called an external bearer token, is not happening; I ended up with the same scenario as in this SO question. It would be great if someone could fork my repo and try to find a solution for this issue without adding NuGet packages for ASP.NET MVC and ending up with all the binaries used in the VS 2013 MVC 5 template.

I didn’t want to stop there, so I decided to do a custom implementation for external providers with a little help from the MVC 5 template. This implementation follows a different flow from the one in the MVC 5 template, so I’m open to suggestions, enhancements, and best practices on what we can add to the implementation I’ll describe below.

Sequence of events which will take place during external login:

  1. The AngularJS application sends an HTTP GET request to an anonymous end point (/ExternalLogin) defined in our back-end API, specifying client_id, redirect_uri, and response_type. The GET request will look like this: http://ngauthenticationapi.azurewebsites.net/api/Account/ExternalLogin?provider=Google&response_type=token&client_id=ngAuthApp&redirect_uri=http://ngauthenticationweb.azurewebsites.net/authcomplete.html (more on the redirect_uri value later in the post).
  2. Once the end point receives the GET request, it will check if the user is authenticated; let's assume he is not authenticated, so it will notify the middleware responsible for the requested external provider to take responsibility for this call, in our case Google.
  3. The consent screen for Google will be shown, and the user will provide his Google credentials to authenticate.
  4. Google will call back our back-end API and will set an external cookie containing the outcome of the authentication from Google (it contains all the claims from the external provider for the user).
  5. The Google middleware will be listening for an event named “Authenticated” where we’ll have the chance to read all external claims set by Google. In our case we’ll be interested in reading the claim named “AccessToken” which represents a Google access token; the issuer for this claim is not LOCAL AUTHORITY, so we can’t use this access token directly to authorize calls to our secure back-end API endpoints.
  6. Then we’ll set the external provider's access token as a custom claim named “ExternalAccessToken”, and the Google middleware will redirect back to the end point (/ExternalLogin).
  7. Now the user is authenticated using the external cookie, so we need to check that the client_id and redirect_uri set in the initial request are valid and that this client is configured to allow redirects to the specified URI (more on this later).
  8. Now the code checks whether the external user_id along with the provider is already registered as a local database account (with no password); in both cases the code will issue a 302 redirect to the URI specified in the redirect_uri parameter. This URI will contain the following (“External Access Token”, “Has Local Account”, “Provider”, “External User Name”) as a URL hash fragment, not a query string.
  9. Once the AngularJS application receives the response, it will decide, based on whether the user has a local database account or not, to issue a request to one of the end points (/RegisterExternal or /ObtainLocalAccessToken). Both end points accept the external access token, which will be used for verification and then used to obtain a local access token issued by LOCAL AUTHORITY. This local access token can be used to access our back-end API secured end points (more on external token verification and the two end points later in the post).

Sounds complicated, right? Hopefully I’ll be able to simplify this in the steps below, so let’s jump to the implementation:

Step 1: Add new methods to repository class

The methods I’ll add now will add support for creating a user without a password, as well as finding an external social user account and linking it with a local database account. They are self-explanatory methods and there is nothing special about them, so open the file “AuthRepository” and paste the code below:

public async Task<IdentityUser> FindAsync(UserLoginInfo loginInfo)
        {
            IdentityUser user = await _userManager.FindAsync(loginInfo);

            return user;
        }

        public async Task<IdentityResult> CreateAsync(IdentityUser user)
        {
            var result = await _userManager.CreateAsync(user);

            return result;
        }

        public async Task<IdentityResult> AddLoginAsync(string userId, UserLoginInfo login)
        {
            var result = await _userManager.AddLoginAsync(userId, login);

            return result;
        }

Step 2: Install the needed external social provider NuGet packages

In our case we need to add the Google and Facebook external providers, so open the NuGet Package Manager Console and install the two packages below:

Install-Package Microsoft.Owin.Security.Facebook -Version 2.1.0
Install-Package Microsoft.Owin.Security.Google -Version 2.1.0

Step 3: Create Google and Facebook Application

I won’t go into details about this step; we need to create two applications, one for Google and another for Facebook. I’ve followed exactly the steps used to create the apps mentioned here, so you can do the same. We need to obtain the AppId and Secret for both social providers and will use them later; below are the settings for my two apps:

AngularJS Facebook App AngularJS Google App

Step 4: Add the challenge result

Now add a new folder named “Results”, then add a new class named “ChallengeResult” which implements “IHttpActionResult”, then paste the code below:

public class ChallengeResult : IHttpActionResult
    {        
        public string LoginProvider { get; set; }
        public HttpRequestMessage Request { get; set; }

        public ChallengeResult(string loginProvider, ApiController controller)
        {
            LoginProvider = loginProvider;
            Request = controller.Request;
        }

        public Task<HttpResponseMessage> ExecuteAsync(CancellationToken cancellationToken)
        {
            Request.GetOwinContext().Authentication.Challenge(LoginProvider);

            HttpResponseMessage response = new HttpResponseMessage(HttpStatusCode.Unauthorized);
            response.RequestMessage = Request;
            return Task.FromResult(response);
        }
    }

The code we’ve just implemented above is responsible for issuing the challenge, passing the name of the external provider we’re using (Google, Facebook, etc.). This challenge won’t fire unless our API sets the HTTP status code to 401 (Unauthorized); once that is done, the external provider middleware will communicate with the external provider system, and the external provider system will display an authentication page for the end user (a UI page by Facebook or Google where you’ll enter your username/password for authentication).

Step 5: Add Google and Facebook authentication providers

Now we want to add two new authentication provider classes where we’ll override the “Authenticated” method so we have the chance to read the external claims set by the external provider. That set of external claims will contain information about the authenticated user, and we’re interested in the claim named “AccessToken”.

As I mentioned earlier, this token is for the external provider and is issued by Google or Facebook; after we’ve received it we need to assign it to a custom claim named “ExternalAccessToken” so it will be available in the request context for later use.

So add two new classes named “GoogleAuthProvider” and “FacebookAuthProvider” to the folder “Providers” and paste the code below into the corresponding class:

public class GoogleAuthProvider : IGoogleOAuth2AuthenticationProvider
    {
        public void ApplyRedirect(GoogleOAuth2ApplyRedirectContext context)
        {
            context.Response.Redirect(context.RedirectUri);
        }

        public Task Authenticated(GoogleOAuth2AuthenticatedContext context)
        {
            context.Identity.AddClaim(new Claim("ExternalAccessToken", context.AccessToken));
            return Task.FromResult<object>(null);
        }

        public Task ReturnEndpoint(GoogleOAuth2ReturnEndpointContext context)
        {
            return Task.FromResult<object>(null);
        }
    }

public class FacebookAuthProvider : FacebookAuthenticationProvider
    {
        public override Task Authenticated(FacebookAuthenticatedContext context)
        {
            context.Identity.AddClaim(new Claim("ExternalAccessToken", context.AccessToken));
            return Task.FromResult<object>(null);
        }
    }

Step 6: Add social providers middleware to Web API pipeline

Now we need to introduce some changes to the “Startup” class; the changes are as below:

- Add three static properties as the code below:

public class Startup
    {
        public static OAuthBearerAuthenticationOptions OAuthBearerOptions { get; private set; }
        public static GoogleOAuth2AuthenticationOptions googleAuthOptions { get; private set; }
        public static FacebookAuthenticationOptions facebookAuthOptions { get; private set; }

        //More code here...
    }

- Add support for using external cookies; this is important so the external social providers will be able to set external claims. Also initialize the “OAuthBearerOptions” property, as in the code below:

public void ConfigureOAuth(IAppBuilder app)
        {
            //use a cookie to temporarily store information about a user logging in with a third party login provider
            app.UseExternalSignInCookie(Microsoft.AspNet.Identity.DefaultAuthenticationTypes.ExternalCookie);
            OAuthBearerOptions = new OAuthBearerAuthenticationOptions();

//More code here.....
}

- Lastly we need to wire up the Google and Facebook social provider middleware into our Owin server pipeline and set the AppId/Secret for each provider. The values are removed below, so replace them with your app values obtained in step 3:

//Configure Google External Login
            googleAuthOptions = new GoogleOAuth2AuthenticationOptions()
            {
                ClientId = "xxx",
                ClientSecret = "xxx",
                Provider = new GoogleAuthProvider()
            };
            app.UseGoogleAuthentication(googleAuthOptions);

            //Configure Facebook External Login
            facebookAuthOptions = new FacebookAuthenticationOptions()
            {
                AppId = "xxx",
                AppSecret = "xxx",
                Provider = new FacebookAuthProvider()
            };
            app.UseFacebookAuthentication(facebookAuthOptions);

Step 7: Add “ExternalLogin” endpoint

As we stated earlier, this end point will accept GET requests originating from our AngularJS app, in the form: http://ngauthenticationapi.azurewebsites.net/api/Account/ExternalLogin?provider=Google&response_type=token&client_id=ngAuthApp&redirect_uri=http://ngauthenticationweb.azurewebsites.net/authcomplete.html

So open the class “AccountController” and paste the code below:

private IAuthenticationManager Authentication
        {
            get { return Request.GetOwinContext().Authentication; }
        }        

// GET api/Account/ExternalLogin
        [OverrideAuthentication]
        [HostAuthentication(DefaultAuthenticationTypes.ExternalCookie)]
        [AllowAnonymous]
        [Route("ExternalLogin", Name = "ExternalLogin")]
        public async Task<IHttpActionResult> GetExternalLogin(string provider, string error = null)
        {
            string redirectUri = string.Empty;

            if (error != null)
            {
                return BadRequest(Uri.EscapeDataString(error));
            }

            if (!User.Identity.IsAuthenticated)
            {
                return new ChallengeResult(provider, this);
            }

            var redirectUriValidationResult = ValidateClientAndRedirectUri(this.Request, ref redirectUri);

            if (!string.IsNullOrWhiteSpace(redirectUriValidationResult))
            {
                return BadRequest(redirectUriValidationResult);
            }

            ExternalLoginData externalLogin = ExternalLoginData.FromIdentity(User.Identity as ClaimsIdentity);

            if (externalLogin == null)
            {
                return InternalServerError();
            }

            if (externalLogin.LoginProvider != provider)
            {
                Authentication.SignOut(DefaultAuthenticationTypes.ExternalCookie);
                return new ChallengeResult(provider, this);
            }

            IdentityUser user = await _repo.FindAsync(new UserLoginInfo(externalLogin.LoginProvider, externalLogin.ProviderKey));

            bool hasRegistered = user != null;

            redirectUri = string.Format("{0}#external_access_token={1}&provider={2}&haslocalaccount={3}&external_user_name={4}",
                                            redirectUri,
                                            externalLogin.ExternalAccessToken,
                                            externalLogin.LoginProvider,
                                            hasRegistered.ToString(),
                                            externalLogin.UserName);

            return Redirect(redirectUri);

        }

By looking at the code above you can see there are a lot of things happening, so let’s describe what this end point is doing:

a) By looking at this method’s attributes you will notice that it is configured to ignore bearer tokens, and it can be accessed if there is an external cookie set by an external authority (Facebook or Google) or it can be accessed anonymously. It is worth mentioning here that this end point will be called twice during the authentication: the first call will be anonymous and the second will happen once the external cookie has been set by the external provider.

b) Now the code flow will check if the user has been authenticated (the external cookie has been set); if that is not the case then the ChallengeResult defined in step 4 will be returned.

c) Time to validate that the client and redirect URI set by the AngularJS application are valid and that this client is configured to allow redirects to this URI; we do not want to end up allowing redirection to any URI set by a caller application, which would open a security threat (visit this post to learn more about clients and Allowed Origin). To do so we need to add a private helper function named “ValidateClientAndRedirectUri”; you can add this to the “AccountController” class as in the code below:

private string ValidateClientAndRedirectUri(HttpRequestMessage request, ref string redirectUriOutput)
        {

            Uri redirectUri;

            var redirectUriString = GetQueryString(Request, "redirect_uri");

            if (string.IsNullOrWhiteSpace(redirectUriString))
            {
                return "redirect_uri is required";
            }

            bool validUri = Uri.TryCreate(redirectUriString, UriKind.Absolute, out redirectUri);

            if (!validUri)
            {
                return "redirect_uri is invalid";
            }

            var clientId = GetQueryString(Request, "client_id");

            if (string.IsNullOrWhiteSpace(clientId))
            {
                return "client_Id is required";
            }

            var client = _repo.FindClient(clientId);

            if (client == null)
            {
                return string.Format("Client_id '{0}' is not registered in the system.", clientId);
            }

            if (!string.Equals(client.AllowedOrigin, redirectUri.GetLeftPart(UriPartial.Authority), StringComparison.OrdinalIgnoreCase))
            {
                return string.Format("The given URL is not allowed by Client_id '{0}' configuration.", clientId);
            }

            redirectUriOutput = redirectUri.AbsoluteUri;

            return string.Empty;

        }

        private string GetQueryString(HttpRequestMessage request, string key)
        {
            var queryStrings = request.GetQueryNameValuePairs();

            if (queryStrings == null) return null;

            var match = queryStrings.FirstOrDefault(keyValue => string.Compare(keyValue.Key, key, true) == 0);

            if (string.IsNullOrEmpty(match.Value)) return null;

            return match.Value;
        }

d) After we validate the client and redirect URI we need to get the external login data along with the “ExternalAccessToken” which has been set by the external provider, so we need to add a private class named “ExternalLoginData”. To do so, add this class to the same “AccountController” class as in the code below:

private class ExternalLoginData
        {
            public string LoginProvider { get; set; }
            public string ProviderKey { get; set; }
            public string UserName { get; set; }
            public string ExternalAccessToken { get; set; }

            public static ExternalLoginData FromIdentity(ClaimsIdentity identity)
            {
                if (identity == null)
                {
                    return null;
                }

                Claim providerKeyClaim = identity.FindFirst(ClaimTypes.NameIdentifier);

                if (providerKeyClaim == null || String.IsNullOrEmpty(providerKeyClaim.Issuer) || String.IsNullOrEmpty(providerKeyClaim.Value))
                {
                    return null;
                }

                if (providerKeyClaim.Issuer == ClaimsIdentity.DefaultIssuer)
                {
                    return null;
                }

                return new ExternalLoginData
                {
                    LoginProvider = providerKeyClaim.Issuer,
                    ProviderKey = providerKeyClaim.Value,
                    UserName = identity.FindFirstValue(ClaimTypes.Name),
                    ExternalAccessToken = identity.FindFirstValue("ExternalAccessToken"),
                };
            }
        }

e) Then we need to check if this social login (external user id with external provider) is already linked to a local database account or if this is the first time the user authenticates; based on this we set the flag “hasRegistered” which will be returned to the AngularJS application.

f) Lastly we need to issue a 302 redirect to the “redirect_uri” set by the client application along with 4 values (“external_access_token”, “provider”, “hasLocalAccount”, “external_user_name”). Those values will be added as a URL hash fragment, not as a query string, so they can be accessed only by JS code running on the redirect_uri page.

Now the AngularJS application has those values, including the external access token which can’t be used to access our secured back-end endpoints. To solve this issue we need to issue a local access token with the help of this external access token. To do this we need to add two new end points which will accept this external access token, validate it, and then generate a local access token.

Why did we add two end points? Because if the external user is not linked to a local database account, we need to issue an HTTP POST request to the new endpoint “/RegisterExternal”, and if the external user is already linked to a local database account then we just need to obtain a local access token by issuing an HTTP GET to the endpoint “/ObtainLocalAccessToken”.

Before adding the two end points we need to add two helper methods which are used by these two endpoints.

Step 8: Verifying external access token

We’ve received the external access token from the external provider and returned it to the front-end AngularJS application; now we need to validate and make sure that the external access token sent back from the front-end app to our back-end API is valid. Valid means a correct access token, not expired, and issued for the same client id configured in our back-end API. It is very important to check that the external access token is issued to the same client id, because we do not want our back-end API to end up accepting valid external access tokens generated from different apps (client ids). You can read more about this here.

To do so we need to add a class named “ExternalLoginModels” under the folder “Models” and paste the code below:

namespace AngularJSAuthentication.API.Models
{
    public class ExternalLoginViewModel
    {
        public string Name { get; set; }

        public string Url { get; set; }

        public string State { get; set; }
    }

    public class RegisterExternalBindingModel
    {
        [Required]
        public string UserName { get; set; }

         [Required]
        public string Provider { get; set; }

         [Required]
         public string ExternalAccessToken { get; set; }

    }

    public class ParsedExternalAccessToken
    {
        public string user_id { get; set; }
        public string app_id { get; set; }
    }
}

Then we need to add a private helper function named “VerifyExternalAccessToken” to the class “AccountController”; the implementation of this function is as below:

private async Task<ParsedExternalAccessToken> VerifyExternalAccessToken(string provider, string accessToken)
        {
            ParsedExternalAccessToken parsedToken = null;

            var verifyTokenEndPoint = "";

            if (provider == "Facebook")
            {
                //You can get it from here: https://developers.facebook.com/tools/accesstoken/
                //More about debug_token here: http://stackoverflow.com/questions/16641083/how-does-one-get-the-app-access-token-for-debug-token-inspection-on-facebook

                var appToken = "xxxxx";
                verifyTokenEndPoint = string.Format("https://graph.facebook.com/debug_token?input_token={0}&access_token={1}", accessToken, appToken);
            }
            else if (provider == "Google")
            {
                verifyTokenEndPoint = string.Format("https://www.googleapis.com/oauth2/v1/tokeninfo?access_token={0}", accessToken);
            }
            else
            {
                return null;
            }

            var client = new HttpClient();
            var uri = new Uri(verifyTokenEndPoint);
            var response = await client.GetAsync(uri);

            if (response.IsSuccessStatusCode)
            {
                var content = await response.Content.ReadAsStringAsync();

                dynamic jObj = (JObject)Newtonsoft.Json.JsonConvert.DeserializeObject(content);

                parsedToken = new ParsedExternalAccessToken();

                if (provider == "Facebook")
                {
                    parsedToken.user_id = jObj["data"]["user_id"];
                    parsedToken.app_id = jObj["data"]["app_id"];

                    if (!string.Equals(Startup.facebookAuthOptions.AppId, parsedToken.app_id, StringComparison.OrdinalIgnoreCase))
                    {
                        return null;
                    }
                }
                else if (provider == "Google")
                {
                    parsedToken.user_id = jObj["user_id"];
                    parsedToken.app_id = jObj["audience"];

                    if (!string.Equals(Startup.googleAuthOptions.ClientId, parsedToken.app_id, StringComparison.OrdinalIgnoreCase))
                    {
                        return null;
                    }

                }

            }

            return parsedToken;
        }

The code above is not the prettiest code I’ve written and it could be written in a better way, but the essence of this helper method is to validate the external access token. For example, if we take a look at the Google case you will notice that we are issuing an HTTP GET request to an endpoint defined by Google which is responsible for validating this access token. If the token is valid we’ll read the app_id (client_id) and the user_id from the response, then we need to make sure that the app_id returned from this request is exactly the same app_id used to configure the Google app in our back-end API. If there are any differences then we’ll consider this external access token invalid.

Note about Facebook: to validate a Facebook external access token you need to obtain another token for your application, named appToken; you can get it from here.

Step 9: Generate local access token

Now we need to add another helper function which will be responsible for issuing the local access token that can be used to access our secure back-end API end points. The response for this token needs to match the response we obtain when we call the end point “/token”. There is nothing special in this method; we are only setting the claims for the local identity and then calling “OAuthBearerOptions.AccessTokenFormat.Protect”, which will generate the local access token. So add a new private function named “GenerateLocalAccessTokenResponse” to the “AccountController” class as in the code below:

private JObject GenerateLocalAccessTokenResponse(string userName)
        {

            var tokenExpiration = TimeSpan.FromDays(1);

            ClaimsIdentity identity = new ClaimsIdentity(OAuthDefaults.AuthenticationType);

            identity.AddClaim(new Claim(ClaimTypes.Name, userName));
            identity.AddClaim(new Claim("role", "user"));

            var props = new AuthenticationProperties()
            {
                IssuedUtc = DateTime.UtcNow,
                ExpiresUtc = DateTime.UtcNow.Add(tokenExpiration),
            };

            var ticket = new AuthenticationTicket(identity, props);

            var accessToken = Startup.OAuthBearerOptions.AccessTokenFormat.Protect(ticket);

            JObject tokenResponse = new JObject(
                                        new JProperty("userName", userName),
                                        new JProperty("access_token", accessToken),
                                        new JProperty("token_type", "bearer"),
                                        new JProperty("expires_in", tokenExpiration.TotalSeconds.ToString()),
                                        new JProperty(".issued", ticket.Properties.IssuedUtc.ToString()),
                                        new JProperty(".expires", ticket.Properties.ExpiresUtc.ToString())
        );

            return tokenResponse;
        }

Step 10: Add “RegisterExternal” endpoint to link social user with local account

After we’ve added the needed helper methods, it is time to add a new API end point which will be used to add the external user as a local account and then link the external login with the newly created local account. This method should be accessible anonymously because we do not have a local access token yet; so add a new method named “RegisterExternal” to the class “AccountController” as in the code below:

// POST api/Account/RegisterExternal
        [AllowAnonymous]
        [Route("RegisterExternal")]
        public async Task<IHttpActionResult> RegisterExternal(RegisterExternalBindingModel model)
        {

            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }

            var verifiedAccessToken = await VerifyExternalAccessToken(model.Provider, model.ExternalAccessToken);
            if (verifiedAccessToken == null)
            {
                return BadRequest("Invalid Provider or External Access Token");
            }

            IdentityUser user = await _repo.FindAsync(new UserLoginInfo(model.Provider, verifiedAccessToken.user_id));

            bool hasRegistered = user != null;

            if (hasRegistered)
            {
                return BadRequest("External user is already registered");
            }

            user = new IdentityUser() { UserName = model.UserName };

            IdentityResult result = await _repo.CreateAsync(user);
            if (!result.Succeeded)
            {
                return GetErrorResult(result);
            }

            var info = new ExternalLoginInfo()
            {
                DefaultUserName = model.UserName,
                Login = new UserLoginInfo(model.Provider, verifiedAccessToken.user_id)
            };

            result = await _repo.AddLoginAsync(user.Id, info.Login);
            if (!result.Succeeded)
            {
                return GetErrorResult(result);
            }

            //generate access token response
            var accessTokenResponse = GenerateLocalAccessTokenResponse(model.UserName);

            return Ok(accessTokenResponse);
        }

By looking at the code above you will notice that we need to issue an HTTP POST request to the end point http://ngauthenticationapi.azurewebsites.net/api/account/RegisterExternal where the request body will contain a JSON object with “userName”, “provider”, and “externalAccessToken” properties.

The code in this method is doing the following:

  • Once we receive the request, we’ll call the helper method “VerifyExternalAccessToken” described earlier to ensure that this external access token is valid and was generated using our Facebook or Google application defined in our back-end API.
  • We need to check if the user_id obtained from the external access token and the provider combination is already registered in our system; if this is not the case we’ll add a new user, using the username passed in the request body and without a password, to the table “AspNetUsers”.
  • Then we need to link the local account created for this user to the provider and provider key (external user_id) and save this to the table “AspNetUserLogins”.
  • Lastly we’ll call the helper method named “GenerateLocalAccessTokenResponse” described earlier, which is responsible for generating the local access token and returning it in the response body, so the front-end application can use this access token to access our secured back-end API endpoints.

The request body will look like the image below, and a sample client-side call is sketched after it:

External Login Request
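For illustration only, a hypothetical client-side call that produces such a request body; the user name and external access token values are placeholders, not values from the demo.

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class RegisterExternalClientSketch
{
    public static async Task<string> RegisterAsync(HttpClient client)
    {
        var body = "{\"userName\":\"SampleUser\",\"provider\":\"Facebook\",\"externalAccessToken\":\"CAAKEF...\"}";

        var content = new StringContent(body, Encoding.UTF8, "application/json");
        var response = await client.PostAsync(
            "http://ngauthenticationapi.azurewebsites.net/api/account/RegisterExternal", content);

        // On success the response body contains the local access token response
        // generated by "GenerateLocalAccessTokenResponse".
        return await response.Content.ReadAsStringAsync();
    }
}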

Step 11: Add “ObtainLocalAccessToken” for linked external accounts

Now this endpoint will be used to generate local access tokens for external users who have already registered a local account and linked their external identities to it. This method is accessed anonymously and will accept 2 query strings (Provider, ExternalAccessToken). The end point can be accessed by issuing an HTTP GET request to the URI: http://ngauthenticationapi.azurewebsites.net/api/account/ObtainLocalAccessToken?provider=Facebook&externalaccesstoken=CAAKEF……….

So add a new method named “ObtainLocalAccessToken” to the controller “AccountController” and paste the code below:

[AllowAnonymous]
        [HttpGet]
        [Route("ObtainLocalAccessToken")]
        public async Task<IHttpActionResult> ObtainLocalAccessToken(string provider, string externalAccessToken)
        {

            if (string.IsNullOrWhiteSpace(provider) || string.IsNullOrWhiteSpace(externalAccessToken))
            {
                return BadRequest("Provider or external access token is not sent");
            }

            var verifiedAccessToken = await VerifyExternalAccessToken(provider, externalAccessToken);
            if (verifiedAccessToken == null)
            {
                return BadRequest("Invalid Provider or External Access Token");
            }

            IdentityUser user = await _repo.FindAsync(new UserLoginInfo(provider, verifiedAccessToken.user_id));

            bool hasRegistered = user != null;

            if (!hasRegistered)
            {
                return BadRequest("External user is not registered");
            }

            //generate access token response
            var accessTokenResponse = GenerateLocalAccessTokenResponse(user.UserName);

            return Ok(accessTokenResponse);

        }

By looking at the code above you will notice that we’re doing the following:

  • Make sure that Provider and ExternalAccessToken query strings are sent with the HTTP GET request.
  • Verifying the external access token as we did in the previous step.
  • Check if the user_id and provider combination is already linked in our system.
  • Generate a local access token and return this in the response body so front-end application can use this access token to access our secured back-end API endpoints.

Step 12: Updating the front-end AngularJS application

I’ve updated the live front-end application to support social logins, as in the image below:

AngularJS Social Login

Now once the user clicks on one of the social login buttons, a popup window will open targeting the URI: http://ngauthenticationapi.azurewebsites.net/api/Account/ExternalLogin?provider=Google&response_type=token&client_id=ngAuthApp&redirect_uri=http://ngauthenticationweb.azurewebsites.net/authcomplete.html and the steps described earlier in this post will take place.

What is worth mentioning here is that the “authcomplete.html” page is an empty page which has a JS function responsible for reading the URL hash fragments and passing them to the AngularJS controller in a callback function. Based on the value of the fragment (hasLocalAccount), the AngularJS controller will decide to call the end point “/ObtainLocalAccessToken” or to display a view named “associate” where it will give the end user the chance to set a preferred username only and then call the endpoint “/RegisterExternal”.

The “associate” view will look as the image below:

AngularJS Social Login Associate

That is it for now, folks! Hopefully this implementation will be useful for integrating social logins with your ASP.NET Web API while keeping the API isolated from any front-end application. I believe that by using this approach we can easily integrate native iOS or Android apps that use the Facebook SDKs with our back-end API; usually the Facebook SDK will return a Facebook access token, and we can then obtain a local access token as we described earlier.

If you have any feedback, comments, or a better way to implement this, then please drop me a comment. Thanks for reading!

You can check the demo application, play with the back-end API for learning purposes (http://ngauthenticationapi.azurewebsites.net), and check the source code on Github.

Follow me on Twitter @tjoudeh

References

  1. Informative post by Brock Allen about External logins with OWIN middleware.
  2. Solid post by Dominick Baier on Web API Individual Accounts.
  3. Great post by Rick Anderson on VS 2013 MVC 5 template with individual accounts.
  4. Helpful post by Valerio Gheri on How to login with Facebook access token.

The post ASP.NET Web API 2 external logins with Facebook and Google in AngularJS app appeared first on Bit of Technology.


Darrel Miller: Centralized exception handling using a ASP.NET Web API MessageHandler

ASP.NET Web API 2.1 introduced some significant improvements to the mechanisms that support global error handling. Before this release there were a number of different types of errors that would be handled directly by the runtime and there was no easy way to intercept these errors and add your own custom behavior.  The standard guidance suggests you register these new handlers as services, but I prefer a different approach that seems more natural to me.

The pipe that all flows through

Every request that is processed by Web API goes through the HttpMessageHandler pipeline until it is dispatched to a controller.  The message handler pipeline works in a Russian doll approach where the request goes through each handler from first to last and then the response returns back through the same handlers in reverse. 

pipe

A trap at the end of the pipe

If a controller cannot process a request for some reason and decides that the appropriate response is to throw an exception, then my gut tells me that somewhere up the call stack should be the place where that exception is caught.  If I want to have a place where I can catch all exceptions generated by message handlers, the framework dispatching code, or the controllers themselves, then having an ErrorHandlerMessageHandler at the front of the message handler pipeline seems like the most natural place.

    public class ErrorHandlerMessageHandler : DelegatingHandler
    {
        protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request,
            CancellationToken cancellationToken)
        {
            try
            {
                return await base.SendAsync(request, cancellationToken);
            }
            catch (Exception ex)
            {
                return GlobalErrorHandler.ConvertExceptionToHttpResponse(ex);
            }
        }
    }
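For completeness, a minimal wiring sketch; inserting the handler at position 0 keeps it at the front (outermost) of the pipeline, and GlobalErrorHandler is assumed to be a helper of your own, as in the snippet above.

// Register the handler as the outermost handler on the Web API configuration
config.MessageHandlers.Insert(0, new ErrorHandlerMessageHandler());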

It will not work when…

With this approach there are two types of exceptions that my message handler will not catch: the first is connectivity errors detected by the HttpServer.  For example the client closed the TCP connection whilst waiting for the response.  The second is serialization errors generated when the HttpServer finally asks the HttpContent to serialize its bytes to a stream and for some reason it fails.  I discussed a workaround to this scenario in another post about debugging serialization errors.

Wipeout

Turn off the default behaviour

By default, if you have registered an IExceptionHandler and IExceptionLogger service, those services will be called when exceptions are trapped in framework code.  The expected behavior is that the registered services will create an HttpResponseMessage and set the context object's Result property to direct the framework to return that response.  That response will be returned back up the pipeline and the message handlers in the pipeline will be somewhat ignorant of the failed request.

public class GlobalErrorHandlerService : IExceptionHandler
{
    public Task HandleAsync(ExceptionHandlerContext context, CancellationToken cancellationToken)
    {
        if (context.CatchBlock != ExceptionCatchBlocks.HttpServer)
        {
            context.Result = null;
        }
        return Task.FromResult(0);
    }
}

However, if your registered services explicitly set the Result property to null, then the framework will take that as an indication that your services have not handled the error and will re-throw the exception that was initially raised.  This will allow each message handler in the pipeline the opportunity to catch the exception and do any processing before allowing the error handling message handler to convert the exception to an actual HTTP response message.  If the exception is caught up in the HttpServer framework code, then it is too late for the message handler to catch it.

 SurfStyle

It's a style thing

I'm not going to claim any major benefits of this approach to centralizing error handling over the use of the IExceptionHandler service mechanism.  The advantage I see is that it is just one less interaction protocol to deal with.  I already deal with the message handler pipeline, I understand how it works, and I know how to intercept requests and responses using it, and there is nothing really about it that is Web API specific.  With the new mechanisms introduced by Web API, I feel like I need to learn a new set of interfaces, how the framework interacts with those interfaces and what my responsibilities are as an interface implementer.  I get especially frustrated by the pattern of passing in context objects that require me to set properties as return values.  It just feels hokey.

One slightly less touchy-feely difference between the two approaches relates to how exception handling message handlers are able to intercept some of the error responses generated by the Web API framework. For example, when the routing mechanism fails to identify an appropriate action method, no exception is thrown; instead a response is created with a 404 status code and an HTTP body that contains certain property values.  An error handling message handler could intercept those responses and update the body of the response to be something more standardized, something like http-problem for example, as sketched below.
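A rough sketch (not from the original post) of that idea: a handler that rewrites framework-generated 404 bodies into a more standardized shape. The payload and media type below are illustrative stand-ins for a format such as http-problem.

using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

public class NotFoundRewritingHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request,
        CancellationToken cancellationToken)
    {
        var response = await base.SendAsync(request, cancellationToken);

        if (response.StatusCode == HttpStatusCode.NotFound)
        {
            // Replace the framework-generated body with a standardized error document.
            response.Content = new StringContent(
                "{\"title\":\"Resource not found\",\"detail\":\"" + request.RequestUri.AbsolutePath + "\"}",
                Encoding.UTF8,
                "application/json");
        }

        return response;
    }
}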

Web API your way

Although this approach is not going to appeal to everyone, it does demonstrate an important aspect of the Web API framework.  ASP.NET Web API was never intended to be a particularly opinionated framework.  It was designed to allow people to build HTTP based applications in many different ways.  There are many different extensibility points and components that can be completely swapped out and replaced.  This is a key strength of the framework.  You should take advantage of it.

Image Credit: Pipe https://flic.kr/p/9GuHGn
Image Credit: Wipeout https://flic.kr/p/8vfZYQ
Image Credit: Surf Style https://flic.kr/p/5GNhLA


Darrel Miller: These 8 lines of code can make debugging your ASP.Net Web API a little bit easier.

I think most of us ASP.NET Web API developers have, at some point, experienced the problem where our API is returning a 500 Internal Server Error, but tracing through with Visual Studio doesn't reveal any exceptions in our code.  This problem is often caused when a MediaTypeFormatter is unable to serialize an object.  This simple message handler can take away some of the pain of debugging these scenarios.

ButterflyNet

The code

I'll start with the solution, and try and explain why the problem occurs and show how this MessageHandler solves the problem.

public class BufferNonStreamedContentHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request,
        CancellationToken cancellationToken)
    {
        var response = await base.SendAsync(request, cancellationToken);
        if (response.Content != null)
        {
            var services = request.GetConfiguration().Services;
            var bufferPolicy = (IHostBufferPolicySelector)services.GetService(typeof(IHostBufferPolicySelector));

            // If the host is going to buffer it anyway
            if (bufferPolicy.UseBufferedOutputStream(response))
            {
                // Buffer it now so we can catch the exception
                await response.Content.LoadIntoBufferAsync();
            }
        }
        return response;
    }
}

This message handler is installed by adding it to the collection of MessageHandlers on the Web API configuration object.

config.MessageHandlers.Add(new BufferNonStreamedContentHandler());

Just do it sooner

HighSpeedTrain

The message handler uses the IHostBufferPolicySelector to determine if the Host will buffer the response body before sending it over the wire.  For the vast majority of cases this will be true, and definitely will be true if an API returns a CLR object that will be serialized using the JsonMediaTypeFormatter or the XmlMediaTypeFormatter.

If we know the Host is going to buffer it, then we force the content to be buffered early so that we can trigger, while user code is still on the stack, any exceptions that will occur during serialization.  This will allow Visual Studio to trap the exception.  Or, if you are trapping errors with a message handler, the exception can be caught and dealt with there.

Why does Web Api do that?

Normally, this check against IHostBufferPolicySelector is only done once all the message handlers have completed and control is returned to the HTTP Server Host.  At this point there is no longer any user code anywhere in the call stack and therefore Visual Studio doesn't break unless you have configured it to break in framework code.

The buffering of content is deferred by default just in case the host decides it wants to stream the content directly to the network stream.  HTTP headers need to be written out before body bytes can be written out and message handlers might want to change HTTP headers, so Web API has to wait until all the message handlers have completed before it can start streaming content.

It's virtually free

If we have buffered the content early, then the Host is smart enough to know not to try and serialize the content again, so adding this handler will add a minimal amount of overhead.  We are paying the price of checking IHostBufferPolicySelector an extra time so that we can catch exceptions for content that will be buffered.

FreeHugs

This solution does not help if you really are serializing content directly to the network stream.  However, if you are already writing body bytes over the wire, then the status code has already been sent and there is little that can be done at that point to handle an exception cleanly. 

It can't hurt to try it

I suspect this little trick has saved me hours of head scratching, hopefully it will do the same for others.

Image Credit: Butterfly hunter https://flic.kr/p/nqGeY7
Image Credit: High speed train https://flic.kr/p/7suAc4
Image Credit: Free hugs https://flic.kr/p/4k3eAY


Dominick Baier: Announcing Thinktecture IdentityServer v3 – Beta 1

It’s done – and I am happy (and a bit exhausted) – a few minutes ago I closed the last open issue for Beta 1.

What’s new
It’s been 424 commits since we released Preview 1 – so there is quite a lot of new stuff, but the big features are:

  • A completely revamped configuration system that allows replacing bits and pieces of the core server itself – including a DI system
  • A plugin infrastructure that allows adding new endpoints (WS-Federation support is a plugin e.g.)
  • Refresh tokens
  • Infrastructure that allows customizing views and UI assets for re-branding as well as extensibility to insert your own workflows like registration or EULAs
  • Support for CORS and HTTP strict transport security
  • Content Security Policy (CSP) & stricter caching rules
  • Protection against click-jacking
  • Control over cookies (lifetime, naming etc)
  • more extensibility points

Repositories
We also split up the solution into separate repos and nugets for better composability:

Samples and documentation
We also have a separate repo for samples and added quite a bit of content to the wiki.

With that I am now leaving for holidays! ;) Give IdentityServer a try and give us feedback. A big thanks to all contributors and the people that engaged with us over the various channels!!!

Have a nice summer!


Filed under: .NET Security, ASP.NET, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, Uncategorized, WebAPI


Filip Woj: Dependency injection directly into actions in ASP.NET Web API

There is a ton of great material on the Internet about dependency injection in ASP.NET Web API. One thing that I have not seen anywhere though, is any information about how to inject dependencies into the action, instead of a … Continue reading

The post Dependency injection directly into actions in ASP.NET Web API appeared first on StrathWeb.


Darrel Miller: Everything you need to know about HTTP Header syntax but were afraid to ask

If you use HTTP then the chances are good that you have to deal with HTTP headers.  The syntax of HTTP headers has a long and tortured history, originating from the syntax of email headers.  All too often I see headers that don't conform to the specifications.  This makes everyone's job a little bit harder.  The recent releases of the HTTP specifications have done a fair amount of clarification and consolidation to make it easier to get the syntax right.

RaiseHand

Two stage parsing

HTTP Header parsing is broken down into two phases.  In phase one, the headers are extracted from the HTTP message into a set of name/value pairs.   The name is a case-insensitive token that is defined in the HTTP specification and registered in the Message Header registry, and the value is a string whose syntax is defined specifically for that header name in the HTTP specification.  The syntax of a HTTP header always looks like this:

header-field   = field-name ":" OWS field-value OWS

OWS means "optional whitespace" and header fields are delimited using CRLF.  Note that RFC7230 now recommends that there should be no whitespace between the header name and the colon.

It is a header, that's all I need to know

The ability to generically parse HTTP headers into these name value pairs without concern for the syntax of the field value is critical for performance and extensibility.  Not every component needs to look at every header, so the requirement for every intermediary in the path of the HTTP request to parse the header value contents would be wasteful.  Also, new headers are created regularly, so needing to update HTTP libraries whenever new header definitions are added would be problematic.

Wrapped lines are no more

There are a couple of additional issues relating to parsing the header lines.  In the past it was possible to allow header values to wrap onto the next line by prefixing the wrapped line with a whitespace character[1].  The most recent HTTP header specifications recommend no longer doing this[2].

Multiple header instances

The second major issue is related to headers that contain lists of values separated by a comma.  These headers can appear multiple times in a message.  A header parsing routine can aggregate these multiple headers into a single list of values.  The one exception to this rule is the Set-Cookie header.  Set-Cookie headers are allowed to appear multiple times despite not being a comma separated list.

The building blocks

Splitting the header field names and values is the easy part.  Unfortunately, that is where most HTTP frameworks that I've seen seem to give up, or only provide support for the most commonly used headers.  They lay the burden of parsing the semantics out of headers on the application developer.  This can be painful because every header field-value has its own syntax definition.  Fortunately, the HTTP specification provides some basic building blocks for defining the syntax of HTTP headers.

buildingblocks

Most headers, when simplified down to primitives, are a combination of token, quoted-string, comment, OWS (optional whitespace), RWS (required whitespace), and literals.  There are also several rules for defining lists of expressions.

  • Tokens are strings of characters that avoid certain delimiter characters.
  • Quoted-strings are surrounded by double quotes and have a slightly different set of rules for what characters are allowed.
  • Comments are strings surrounded by parentheses with yet again another set of valid characters.

For a quick review of which characters are allowed and which are not, I created a chart below[3].

Syntax Rules

As I mentioned, each header has its own syntax rules.  For example, User-Agent is defined as,

User-Agent = product *( RWS ( product / comment ) )

Where,

product         = token ["/" product-version]
product-version = token

and *(x) means "zero or more x", (x/y) means "either x or y" and [x] means "x is optional".  This is a standardized syntax definition language called ABNF.  Many of the headers rules are summarized in appendices of the specification in which they are defined.
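
To make the token rule concrete, here is a small sketch of my own (not from any library mentioned here) that checks whether a string is a valid RFC 7230 token:

// Illustrative only: true if every character is a valid RFC 7230 tchar.
// tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" / "." /
//         "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA
public static bool IsValidToken(string value)
{
    if (string.IsNullOrEmpty(value)) return false;
    const string extraChars = "!#$%&'*+-.^_`|~";
    foreach (char c in value)
    {
        bool isTchar = (c < 128 && char.IsLetterOrDigit(c)) || extraChars.IndexOf(c) >= 0;
        if (!isTchar) return false;
    }
    return true;
}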

Lists of things

The most common way that HTTP headers allow you to specify lists of tokens is using the #(x) syntax[4], which means you can have zero or more x delimited with a comma.  As you can see from the example above, you can also have whitespace delimited lists. Parameters, which are often lists within a single element of a comma delimited list, will be delimited by semi-colons.

An implementation

The various HTTP frameworks that I have reviewed have varying support for parsing of header values.  Considering that it is just a bunch of grammar rules for parsing and production of strings, it would seem useful to me if there was a library that allowed developers to ensure that the headers they are sending conform to the specification without needing to constantly refer to the specification rules.

I believe the key requirements of a .Net framework library for HTTP header parsing and generating are:

  • support for all standard headers
  • support for creating new headers that use the standard header primitives
  • allow for parsing/generating individual headers
  • generate warnings for invalid headers when parsing and do best-guess parsing
  • have no external dependencies other than the .net framework
  • make efficient use of memory and be fast enough that its usage is negligible on the overall processing time of the message.

I'm currently working on a OSS library to do this, with the hope of being able to use it within OWIN middleware.  I'll blog about it when I have something to show.

[1] I suspect the reason header wrapping exists is based on the fact that headers were originally defined to be part of email messages that were limited to an 80 character line length.  It's really not relevant for headers in an HTTP message where there is no need to wrap.

[2] When specifications make a change to deprecate certain behaviour it is important to remember Postel's law.  When parsing headers, we have to assume that old clients/servers are going to continue doing the line folding, so it is essential that we write components that accept folded headers but we should never do it when generating headers.

[3]

Allowed Characters

Hex     Chars      Token   Quoted String   Comment
00-08
09      <tab>              y               y
1A-1F
20      <Space>
21      !          y       y               y
22      "                                  y
23-27   #$%&'      y       y               y
28-29   ()                 y
2A-2B   *+         y       y               y
2C      ,                  y               y
2D-2E   -.         y       y               y
2F      /          y                       y
30-39   <digit>    y       y               y
3A-40   :;<=>?@            y               y
41-5A   <ALPHA>    y       y               y
5B      [                  y               y
5C      \
5D      ]                  y               y
5E-60   ^_`        y                       y
61-7A   <alpha>    y       y               y
7B      {                  y               y
7C      |          y       y               y
7D      }                  y               y
7E      ~          y       y               y

 

[4] The #() list syntax is an extension to the standard ABNF rules that is defined with the HTTP specification.

Image credit: Raise hand https://flic.kr/p/2vgyWN
Image credit: Package https://flic.kr/p/8sgnwu


Henrik F. Nielsen: More Updates to Azure Mobile Services (Link)

Just posted the blog Azure Mobile Services .NET Updates describing new features now available for Azure Mobile Services including:

  • Support for CORS using ASP.NET Web API CORS enabling first class support for specifying CORS policy at a per-service, per-controller, or per-action level.
  • An extensible authentication model enabling you to control which authentication mechanisms are available for your mobile service clients. For example, you can add your own authentication mechanisms in addition to or in place of the default support for Azure Active Directory, Twitter, Facebook, Google, and Microsoft Account.
  • Support for Azure Active Directory Authentication using a server-side flow simplifying client authentication significantly.

If you are building cloud connected mobile apps then check out Microsoft Azure Mobile Services and let us know what you think!

Have fun!

Henrik


Radenko Zec: How to fake session object / HttpContext for integration tests

Sometimes when we write integration tests we need to fake the HttpContext to test some functionality the proper way.

In one of my projects I needed the ability to fake some session variables such as userState and maxId.

The project tested in this example is an ASP.NET Web API with added session support, but you can use the same approach in any ASP.NET / MVC project.

Implementation is very simple. First, we need to create FakeHttpContext.

 

 public HttpContext FakeHttpContext(Dictionary<string, object> sessionVariables,string path)
{
    var httpRequest = new HttpRequest(string.Empty, path, string.Empty);
    var stringWriter = new StringWriter();
    var httpResponse = new HttpResponse(stringWriter);
    var httpContext = new HttpContext(httpRequest, httpResponse);
    httpContext.User = new GenericPrincipal(new GenericIdentity("username"), new string[0]);
    Thread.CurrentPrincipal = new GenericPrincipal(new GenericIdentity("username"), new string[0]);
    var sessionContainer = new HttpSessionStateContainer(
      "id",
      new SessionStateItemCollection(),
      new HttpStaticObjectsCollection(),
      10,
      true,
      HttpCookieMode.AutoDetect,
      SessionStateMode.InProc,
      false);

   foreach (var var in sessionVariables)
   {
      sessionContainer.Add(var.Key, var.Value);
   }

   SessionStateUtility.AddHttpSessionStateToContext(httpContext, sessionContainer);
   return httpContext;
}


 

After we have FakeHttpContext we can easily use it in any integration tests like this.

 

 [Test]
public void DeleteState_AcceptsCorrectExpandedState_DeletesState()
{
   HttpContext.Current = this.FakeHttpContext(
      new Dictionary<string, object> { { "UserExpandedState", 5 }, 
                                        { "MaxId", 1000 } },    
                                          "http://localhost:55024/api/v1/");

   string uri = "DeleteExpandedState";
   var result = Client.DeleteAsync(uri).Result;
   result.StatusCode.Should().Be(HttpStatusCode.OK);

   var state = statesRepository.GetState(5);
   state.Should().BeNull();
}

 

This is an example of an integration test that tests the Delete method of the Web API.

The Web API in this example is hosted in memory for the purpose of integration testing.
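
The in-memory hosting setup itself is not shown here; a minimal sketch of what it typically looks like (the route template, port, and the Client field are assumptions for illustration) is:

// Hypothetical in-memory host used by the tests above.
var config = new HttpConfiguration();
config.Routes.MapHttpRoute("DefaultApi", "api/v1/{controller}/{id}",
    new { id = RouteParameter.Optional });

var server = new HttpServer(config);
Client = new HttpClient(server) { BaseAddress = new Uri("http://localhost:55024/api/v1/") };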

If you like this article don’t forget to subscribe to this blog and make sure you don’t miss new upcoming blog posts.

 
 

The post How to fake session object / HttpContext for integration tests appeared first on RadenkoZec blog.


Radenko Zec: Replace JSON.NET with Jil JSON serializer in ASP.NET Web API

I have recently come across a comparison of fast JSON serializers in .NET, which shows that Jil JSON serializer is one of the fastest.

Jil is created by Kevin Montrose, a developer at Stack Overflow, and it is apparently heavily used by Stack Overflow.

This is only one of many benchmarks you can find on the GitHub project website.

JilSpeed

You can find more benchmarks and the source code at this location https://github.com/kevin-montrose/Jil

In this short article I will cover how to replace the default JSON serializer in Web API with Jil.

Create Jil MediaTypeFormatter

First, you need to grab Jil from NuGet

PM> Install-Package Jil

 

 

After that, create the JilFormatter using the code below.

    public class JilFormatter : MediaTypeFormatter
    {
        private readonly Options _jilOptions;
        public JilFormatter()
        {
            _jilOptions=new Options(dateFormat:DateTimeFormat.ISO8601);
            SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/json"));

            SupportedEncodings.Add(new UTF8Encoding(encoderShouldEmitUTF8Identifier: false, throwOnInvalidBytes: true));
            SupportedEncodings.Add(new UnicodeEncoding(bigEndian: false, byteOrderMark: true, throwOnInvalidBytes: true));
        }
        public override bool CanReadType(Type type)
        {
            if (type == null)
            {
                throw new ArgumentNullException("type");
            }
            return true;
        }

        public override bool CanWriteType(Type type)
        {
            if (type == null)
            {
                throw new ArgumentNullException("type");
            }
            return true;
        }

        public override Task<object> ReadFromStreamAsync(Type type, Stream readStream, System.Net.Http.HttpContent content, IFormatterLogger formatterLogger)
        {
            var task= Task<object>.Factory.StartNew(() => (this.DeserializeFromStream(type,readStream)));
            return task;
        }


        private object DeserializeFromStream(Type type,Stream readStream)
        {
            try
            {
                using (var reader = new StreamReader(readStream))
                {
                    MethodInfo method = typeof(JSON).GetMethod("Deserialize", new Type[] { typeof(TextReader),typeof(Options) });
                    MethodInfo generic = method.MakeGenericMethod(type);
                    return generic.Invoke(this, new object[]{reader, _jilOptions});
                }
            }
            catch
            {
                return null;
            }

        }


        public override Task WriteToStreamAsync(Type type, object value, Stream writeStream, System.Net.Http.HttpContent content, TransportContext transportContext)
        {
            // Write synchronously, flush, and leave the underlying stream open;
            // the host owns the lifetime of writeStream.
            using (var streamWriter = new StreamWriter(writeStream, new UTF8Encoding(false), 1024, leaveOpen: true))
            {
                JSON.Serialize(value, streamWriter, _jilOptions);
                streamWriter.Flush();
            }
            return Task.FromResult<object>(null);
        }
    }

 

This code uses reflection for deserialization of JSON.

Replace default JSON serializer

In the end, we need to remove the default JSON serializer.

Place this code at the beginning of WebApiConfig:

config.Formatters.RemoveAt(0);
config.Formatters.Insert(0, new JilFormatter());
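
If you would rather not rely on the JSON.NET formatter being at index 0, an equivalent approach (a small variation, not from the original post) removes it by reference first:

// Remove the built-in JSON.NET formatter explicitly, then register Jil.
config.Formatters.Remove(config.Formatters.JsonFormatter);
config.Formatters.Insert(0, new JilFormatter());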

 

Feel free to test Jil with Web API, and don’t forget to subscribe here so you don’t miss a new blog post.

 
 

The post Replace JSON.NET with Jil JSON serializer in ASP.NET Web API appeared first on RadenkoZec blog.


Darrel Miller: Hypermedia as the engine of application state, the client-server dance

We are currently seeing a significant amount of discussion about building hypermedia APIs.  However, the server side only plays part of the role in a hypermedia driven system.  To take full advantage of the benefits of hypermedia, the client must allow the server to take the lead and drive the state of the client.  As I like to say, it takes two to Tango.

So you think you can dance?

Tango

Soon after I was married, my wife convinced me to take dance lessons with her.  Over the couple of years we spent taking lessons, I learned there were three types of people who join a dance studio.  There are people who want to get better at dancing, there are couples who are getting married who don't want to look like idiots during their 'first dance' and there are divorcees.  I'll leave it to you to figure out why the divorcees are there as I'll just be focusing on the other two groups.

The couples who are preparing for their weddings usually are under time and budgetary constraints, so they usually opt to learn a choreographed sequence of steps to a particular song.  They both learn the sequence of steps and, to an extent, dance their own independent steps whilst hanging on to each other.  It is a process that serves a purpose. It meets the goals of the couple, but it is a vastly inferior result to the approach taken when people's goal is simply to learn how to dance.

FirstDance

In order to learn to dance there are a number of basic fundamentals that are required. It is essential to be able to follow the beat of whatever music you are trying to dance to.  There are a set of basic dance primitives that can be combined to make up dance sequences.  It is also important to understand the role of the man vs that of the woman when dancing (Note: these are traditional role names, and no longer necessarily correlate with gender). The man leads the dance, the woman follows. 

RumbaSteps

As the dance progresses the man chooses the sequence of primitives to perform and uses hand signals, body position and weight change to communicate to the woman what steps are coming next.  There is no predefined, choreographed set of sequences. The man basically does whatever he wants within the constraints of the dance style. The woman follows.

When done right, it looks like magic

Watching a talented couple do this freestyle dance is often indistinguishable from a choreographed dance.  When people learn to dance this way, they can dance to any piece of music as long as the beat matches a style of dance they know and they can dance with any partner.   Whereas, couples who learn their wedding dance, know one sequence to one piece of music and can only dance it with one partner.

The Client-Server dance

Building a client that can consume a HTTP API can be done in different ways.  You can build your application to be like a choreographed dance, where both the client and server know in advance what is going to happen.  When the client makes an HTTP request to a particular resource it knows in advance how the server will respond.  The challenge with this approach is that both parties need to have knowledge of the sequences, and more importantly, where they are up to in the sequence. If someone decides to make any changes the other party is likely to get confused by the unplanned change.

A choreographed client

The last twenty years of building clients for distributed applications has taught us how to build highly choreographed clients.  We first learn the API that the server exposes and then teach our clients an intricate pattern of interactions in order to achieve our desired goals.  Our application is the dance we perform.

Frequently when building clients like this we will create a facade over the remote service and a view model to manage the state of the sequence of interactions. Consider the following example of a distributed application that is designed to perform the dance of turning on and off a light switch.

The service facade:

    public class SwitchService
    {
        private const string SwitchStateResource = "switch/state";
        private const string SwitchOnResource = "switch/on";
        private const string SwitchOffResource = "switch/off";

        private readonly HttpClient _client;

        public SwitchService(HttpClient client)
        {
            _client = client;
        }

        public async Task<bool> GetSwitchStateAsync()
        {
            var result = await _client.GetStringAsync(SwitchStateResource).ConfigureAwait(false);

            return bool.Parse(result);
        }

        public Task SetSwitchStateAsync(bool newstate)
        {
            if (newstate)
            {
                return _client.PostAsync(SwitchOnResource,null);
            }
            else
            {
                return _client.PostAsync(SwitchOffResource, null);
            }
        }
    }

The view model:

    public class SwitchViewModel : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;
        
        private readonly SwitchService _service;
        private bool _switchState;

        public SwitchViewModel(SwitchService service)
        {
            _service = service;
            _switchState = service.GetSwitchStateAsync().Result;
        }

        
        private bool SwitchState {
            get
            {
                return _switchState; 
            }
             set
            {
                _service.SetSwitchStateAsync(value).Wait();
                _switchState = value; 
                OnPropertyChanged();
                OnPropertyChanged("CanTurnOn");
                OnPropertyChanged("CanTurnOff");
            }
        }

        public bool On
        {
            get { return SwitchState; }
        }

        public void TurnOff()
        {
            SwitchState = false;
        }

        public void TurnOn()
        {
            SwitchState = true;
        }


        public bool CanTurnOn
        {
            get { return SwitchState == false; }
        }

        public bool CanTurnOff
        {
            get { return SwitchState; }
        }
        

        protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = null)
        {
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }

In our client view model we maintain the current SwitchState.  The client needs to know at any point in time, whether the switch is on or off.  This information will be provided to the View to present a visual representation to the user and it is also used to drive the application logic that determines if we are allowed to turn the switch on or off again.  Our application wishes to prevent someone from trying to turn on the switch if it is already on and turn off the switch if it is already off.  This is an extremely simple example but will be sufficient to illustrate differences between the two approaches.

The important point to note is that, just like our engaged couple doing their dance, both the client and server must keep track of the current application state in order to know what they can and must do next.

SkippingRope

Sometimes you just have to let go

In this next example, we take away the responsibility from the client of keeping track of state that is already being tracked by the server.  The client simply follows the lead of the server and trusts the server to provide it the necessary guidance.

basejump

We no longer need to provide a facade over the server API and instead we focus on understanding the messages communicated by the server.  For that we have created a class called SwitchDocument that allows the client to parse and interpret the message.

    public class SwitchDocument
    {
        public static SwitchDocument Load(Stream stream)
        {
            var switchStateDocument = new SwitchDocument();
            var jObject = JObject.Load(new JsonTextReader(new StreamReader(stream)));
            foreach (var jProp in jObject.Properties())
            {
                switch (jProp.Name)
                {
                    case "On":
                        switchStateDocument.On = (bool)jProp.Value;
                        break;
                    case "TurnOnLink":
                        switchStateDocument.TurnOnLink = new Uri((string)jProp.Value, UriKind.RelativeOrAbsolute);
                        break;
                    case "TurnOffLink":
                        switchStateDocument.TurnOffLink = new Uri((string)jProp.Value, UriKind.RelativeOrAbsolute);
                        break;
                }
            }
            return switchStateDocument;
        }

        public bool On { get; private set; }
        public Uri TurnOnLink { get; set; }
        public Uri TurnOffLink { get; set; }

        public static Uri SelfLink
        {
            get { return new Uri("switch/state", UriKind.Relative); }
        }
    }
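
Based on the property names that the Load method looks for, a representation returned by the server might look something like this (the exact wire format is an assumption; the post does not show it):

{
  "On": false,
  "TurnOnLink": "switch/on"
}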

Our view model now has the reduced role of simply presenting the information contained in the SwitchDocument to the view and providing a way to interact with the affordances described in the SwitchDocument.

    public class SwitchHyperViewModel : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;
        private readonly HttpClient _client;
        private SwitchDocument _switchStateDocument = new SwitchDocument();

        public SwitchHyperViewModel(HttpClient client)
        {
            _client = client;
            _client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/switchstate+json"));
            _client.GetAsync(SwitchDocument.SelfLink).ContinueWith(t => UpdateState(t.Result)).Wait();
        }

        public bool On
        {
            get { return _switchStateDocument.On; }
        }

        public bool CanTurnOn
        {
            get { return _switchStateDocument.TurnOnLink != null; }
        }

        public bool CanTurnOff
        {
            get { return _switchStateDocument.TurnOffLink != null; }
        }

        public void TurnOff()
        {
            _client.PostAsync(_switchStateDocument.TurnOffLink, null).ContinueWith(t => UpdateState(t.Result));
        }

        public void TurnOn()
        {
            _client.PostAsync(_switchStateDocument.TurnOnLink, null).ContinueWith(t => UpdateState(t.Result));
        }

        private void UpdateState(HttpResponseMessage httpResponseMessage)
        {
            if (httpResponseMessage.StatusCode == HttpStatusCode.OK)
            {
                _switchStateDocument = SwitchDocument.Load(httpResponseMessage.Content.ReadAsStreamAsync().Result);

                OnPropertyChanged();
                OnPropertyChanged("CanTurnOn");
                OnPropertyChanged("CanTurnOff");
            }
        }

        protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = null)
        {
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
        }

    }

This new hypermedia driven View Model has the same interface as the choreographed one and can be easily connected to the same simple user interface to display the state of the light switch and provide controls that can change the light switch.   The difference is in the way the application state is managed.  In this case, the view model determines if the switch can be turned on or off based on the presence of links that will turn on and off the switch.  Attempting to turn on or off the switch involves making a HTTP request to the server and using the response as a completely new state for the view model. 

You can find a complete WPF example in the Github repository.

The similarity is only skin deep

On the surface it appears that the two different approaches produce pretty much the same results.  It is almost the same amount of code, with a similar level of complexity.  The question has to be asked, what are the benefits?

FakePurse

If you watch a couple dance who have learned a choreographed dance, you may think they are very capable dancers.  You may not even be able to tell the difference between them and others doing the same dance who have a much more fundamental understanding of how to dance.  The differences only begin to appear when you introduce change.  Changing dance partners, changing music or adding new steps will quickly reveal the differences.

The impact of change

The same is true with our sample application.  Consider the scenario where new requirements are introduced where a switch could only be turned on during a certain time period, or only users with certain permissions could turn on the switch. 

In the choreographed application, we would need to add a number of other server resources that would allow a client to inquire if a user has permission, or if the time of day permits turning on or off the switch.  The client must call those resources to retrieve the information, which in itself is not terribly complex.  However, deciding when to make those requests can be tricky.  Calling them frequently adds a significant performance hit, but caching the values locally can introduce problems with keeping the local state consistent with the server based resource state.

Windmills

In the hypermedia driven client, neither of our new business requirements requires additional resources to be created or server roundtrips.  In fact the client code does not need to change.  All the logic that is used to determine if a client can turn the switch on or off can be embedded into the server logic for determining whether to include a "TurnOn" link or a "TurnOff" link.

The links are always refreshed along with the state of the switch so the client state is always consistent.  The state may be stale, but that is fine because HTTP has all kinds of mechanism for refreshing stale state.  The key thing is the client does not need to deal with the complexity of the permissions being an hour old, the timing schedule being ten minutes old and state of the switch being ten seconds old.

Some constraints can lead to unimagined possibilities

The fact that our client application does not need to change to accommodate these new requirements is far more significant than our analogy might lead us to believe.  When ballroom dancing there is usually just one man and one woman.  The implications of making changes to the dance are limited.  In distributed systems, it is not uncommon for a single API to have thousands of client instances and perhaps multiple different types of clients.  The client applications are often created by different teams, in different countries, with completely different time constraints.

Being able to make logic changes on the server that would normally be embedded into the client can potentially have huge benefits.  The example I have shown only scratches the surface of the techniques that can be applied using hypermedia, but hopefully it hints at the possibilities.

Hangglider

Image Credit: Tango https://flic.kr/p/7BY638
Image Credit: First Dance https://flic.kr/p/d3vMxG
Image Credit: Rumba Steps https://flic.kr/p/dMrKk
Image Credit: Skipping Rope https://flic.kr/p/6pZSGX
Image Credit: Base jump https://flic.kr/p/df5Nwd
Image Credit: Fake Gucci purse: https://flic.kr/p/9w5Qji
Image Credit: Windmills https://flic.kr/p/96iTMv
Image Credit: Hang Glider https://flic.kr/p/6jfG9m


Radenko Zec: 8 ways to improve ASP.NET Web API performance

ASP.NET Web API is a great piece of technology. Writing Web API is so easy that many developers don’t take the time to structure their applications for great performance.

In this article, I am going to cover 8 techniques for improving ASP.NET Web API performance.

1) Use fastest JSON serializer

JSON serialization can affect the overall performance of ASP.NET Web API significantly. A year and a half ago I switched from the JSON.NET serializer to ServiceStack.Text on one of my projects.

I have measured around a 20% performance improvement on my Web API responses. I highly recommend that you try out this serializer. Here is a recent performance comparison of popular serializers.

SerializerPerformanceGraf

Source: theburningmonk

UPDATE: It seems that Stack Overflow uses what they claim is an even faster JSON serializer called Jil. You can view some benchmarks on the Jil GitHub page.

2) Manual JSON serialize from DataReader

I have used this method on my production project and gained performance benefits.

Instead of reading values from the DataReader into objects, and then reading the values from those objects again to produce JSON with some JSON serializer, you can create the JSON string manually from the DataReader and avoid the unnecessary creation of objects.

You produce the JSON using a StringBuilder, and in the end you return StringContent as the content of your response in Web API:

var response = Request.CreateResponse(HttpStatusCode.OK);
response.Content = new StringContent(jsonResult, Encoding.UTF8, "application/json");
return response;
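
As a rough sketch of how the jsonResult string above might be produced (the column layout and the use of a SqlCommand are illustrative assumptions only; string escaping uses System.Web's HttpUtility):

// Illustrative only: build a JSON array directly from a data reader,
// skipping intermediate objects entirely.
var json = new StringBuilder("[");
using (var reader = command.ExecuteReader())
{
    bool first = true;
    while (reader.Read())
    {
        if (!first) json.Append(",");
        json.AppendFormat("{{\"Id\":{0},\"Name\":\"{1}\"}}",
            reader.GetInt32(0),
            HttpUtility.JavaScriptStringEncode(reader.GetString(1)));
        first = false;
    }
}
var jsonResult = json.Append("]").ToString();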

 

You can read more about this method on Rick Strahl’s blog

3) Use other formats if possible (protocol buffer, message pack)

If you can use other formats like Protocol Buffers or MessagePack in your project instead of JSON, do it.

You will get huge performance benefits, not only because the Protocol Buffers serializer is faster, but also because the format is smaller than JSON, which will result in smaller and faster responses.

4) Implement compression

Use GZIP or Deflate compression on your ASP.NET Web API.

Compression is an easy and effective way to reduce the size of packages and increase the speed.

This is a must have feature. You can read more about this in my blog post ASP.NET Web API GZip compression ActionFilter with 8 lines of code.

5) Use caching

If it makes sense, use output caching on your Web API methods. For example, when a lot of users are accessing the same response that will change maybe once a day.

If you want to implement manual caching, such as caching users' tokens in memory, please refer to my blog post Simple way to implement caching in ASP.NET Web API.
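
As a rough illustration of the manual in-memory variant (this is not the code from the linked post, and the helper method is hypothetical):

// Hypothetical example: cache a user's token in memory for 30 minutes.
var cache = MemoryCache.Default;
var token = cache.Get(userName) as string;
if (token == null)
{
    token = LoadTokenFromDatabase(userName);   // hypothetical helper
    cache.Add(userName, token, DateTimeOffset.UtcNow.AddMinutes(30));
}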

6) Use classic ADO.NET if possible

Hand coded ADO.NET is still the fastest way to get data from the database. If the performance of your Web API is really important to you, don’t use ORMs.

You can see one of the latest performance comparisons of popular ORMs below.

ORMMapper

Dapper and the hand-written fetch code are very fast; as expected, the full ORMs are slower.

LLBLGen with resultset caching is very fast, but it fetches the resultset once and then re-materializes the objects from memory.

7) Implement async on methods of Web API

Using asynchronous Web API services can increase the number of concurrent HTTP requests Web API can handle.

Implementation is simple. The operation is simply marked with the async keyword and the return type is changed to Task.

[HttpGet]  
public async Task OperationAsync()  
{   
    await Task.Delay(2000);  
}

8) Return Multiple Resultsets and combined results

Reduce the number of round-trips, not only to the database but to the Web API as well. You should use the multiple resultsets functionality whenever it is possible.

This means you can extract multiple resultsets from the DataReader like in the example below:

// read the first resultset 
var reader = command.ExecuteReader(); 

// read the data from that resultset 
while (reader.Read()) 
{ 
	suppliers.Add(PopulateSupplierFromIDataReader( reader )); 
} 

// read the next resultset 
reader.NextResult(); 

// read the data from that second resultset 
while (reader.Read()) 
{ 
	products.Add(PopulateProductFromIDataReader( reader )); 
}

 

Return as many objects as you can in one Web API response. Try combining objects into one aggregate object like this:

public class AggregateResult
{
     public long MaxId { get; set; }
     public List<Folder> Folders{ get; set; }
     public List<User>  Users{ get; set; }
}

 

This way you will reduce the number of HTTP requests to your Web API.

Thank you for reading this article.

Leave a comment below and let me know what other methods you have found to improve Web API performance.

 
 

The post 8 ways to improve ASP.NET Web API performance appeared first on RadenkoZec blog.


Dominick Baier: NDC London: Identity and Access Control for modern Web Applications and APIs

I am happy to announce that NDC will host our new workshop in London in December!

Join us to learn everything that is important to secure modern web applications and APIs using Microsoft’s current and future web stack! Looking forward to it!

course description / ndc london / tickets


Filed under: .NET Security, ASP.NET, IdentityModel, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Taiseer Joudeh: Enable OAuth Refresh Tokens in AngularJS App using ASP .NET Web API 2, and Owin

After my previous Token Based Authentication post I’ve received many requests to add OAuth Refresh Tokens to the OAuth Resource Owner Password Credentials flow which I’m currently using in the previous tutorial. To be honest, adding support for refresh tokens adds a noticeable level of complexity to your Authorization Server. As well, most of the available resources on the net don’t provide the full picture of how to implement this by introducing clients, nor how to persist the refresh tokens to a database.

So I’ve decided to write a detailed post with a live demo application which builds on what we’ve built in the previous posts, so I recommend you read at least part 1 to follow along with this post. This detailed post will cover adding Clients, persisting refresh tokens, dynamically configuring refresh token expiry dates, and revoking refresh tokens.

You can check the demo application, play with the back-end API for learning purposes (http://ngauthenticationapi.azurewebsites.net), and check the source code on Github.

AngularJS OAuth Refresh Tokens

Enable OAuth Refresh Tokens in AngularJS App using ASP .NET Web API 2, and Owin

Before starting the implementation I would like to discuss when and how refresh tokens should be used, and what database structure is needed to implement a complete solution.

Using Refresh Tokens

The idea of using refresh tokens is to issue a short lived access token in the first place and then use the refresh token to obtain new access tokens. The user needs to authenticate himself by providing a username and password along with client info (we’ll talk about clients later in this post), and if the information provided is valid, a response containing a short lived access token is returned along with a long lived refresh token (this is not an access token, it is just an identifier for the refresh token). Once the access token expires we can use the refresh token identifier to try to obtain another short lived access token, and so on.
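
In concrete terms, the client first calls the token endpoint with the resource owner password credentials grant and later exchanges the refresh token identifier for a new access token; a rough sketch (the endpoint path, credentials and values are illustrative, and this code would live inside an async method) looks like this:

// Illustrative only: the two calls a client typically makes against the token endpoint.
var client = new HttpClient { BaseAddress = new Uri("http://ngauthenticationapi.azurewebsites.net/") };

// 1. Initial sign-in using the resource owner password credentials grant.
var loginResponse = await client.PostAsync("token", new FormUrlEncodedContent(new Dictionary<string, string>
{
    { "grant_type", "password" },
    { "username", "Alex" },
    { "password", "secret" },
    { "client_id", "ngAuthApp" }
}));

// 2. Later, once the access token expires, exchange the refresh token identifier for a new access token.
var refreshResponse = await client.PostAsync("token", new FormUrlEncodedContent(new Dictionary<string, string>
{
    { "grant_type", "refresh_token" },
    { "refresh_token", "<refresh token id from the first response>" },
    { "client_id", "ngAuthApp" }
}));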

But why are we adding this complexity? Why not issue long lived access tokens in the first place?

In my own opinion there are three main benefits to using refresh tokens, which are:

  1. Updating access token content: as you know, access tokens are self contained tokens; they contain all the claims (information) about the authenticated user once they are generated. Now if we issue a long lived token (1 month for example) for a user named “Alex” and enroll him in the “Users” role, then this information is contained in the token which the Authorization Server generated. If you decide later on (2 days after he obtained the token) to add him to the “Admin” role, there is no way to update this information contained in the already generated token; you would need to ask him to re-authenticate himself so the Authorization Server can add this information to a newly generated access token, and this is not feasible in most cases. You might not be able to reach users who obtained long lived access tokens. So to overcome this issue we need to issue short lived access tokens (30 minutes for example) and use the refresh token to obtain a new access token; once the new access token is generated, the Authorization Server will be able to add the new claim for user “Alex” which assigns him to the “Admin” role.
  2. Revoking access from authenticated users: Once the user obtains a long lived access token he’ll be able to access the server resources as long as his access token is not expired; there is no standard way to revoke access tokens unless the Authorization Server implements custom logic which forces you to store generated access tokens in the database and do database checks with each request. But with refresh tokens, a system admin can revoke access by simply deleting the refresh token identifier from the database, so once the system requests a new access token using the deleted refresh token, the Authorization Server will reject this request because the refresh token is no longer available (we’ll come back to this in more detail).
  3. No need to store or ask for username and password: Using refresh tokens allows you to ask the user for his username and password only once, when he authenticates for the first time; then the Authorization Server can issue a very long lived refresh token (1 year for example) and the user will stay logged in for all this period unless the system admin revokes the refresh token. You can think of this as a way to do offline access to server resources; this can be useful if you are building an API which will be consumed by a front end application where it is not feasible to keep asking for the username/password frequently.

Refresh Tokens and Clients

In order to use refresh tokens we need to bind the refresh token to a Client. A Client means the application that is attempting to communicate with the back-end API, so you can think of it as the software which is used to obtain the token. Each Client should have a Client Id and Secret; usually we obtain the Client Id/Secret once we register the application with the back-end API.

The Client Id is a unique piece of public information which identifies your application among other apps using the same back-end API. The client id can be included in the source code of your application, but the client secret must stay confidential, so in case we are building JavaScript apps there is no need to include the secret in the source code because there is no straightforward way to keep this secret confidential in a JavaScript application. In this case we’ll be using the client Id only for identifying which client is requesting the refresh token so it can be bound to this client.

In our case I’ve divided clients into two types, (JavaScript – Non-confidential) and (Native – Confidential), which means that for confidential clients we can store the client secret in a confidential way (valid for desktop apps, mobile apps, server side web apps), so any request coming from such a client asking for an access token should include the client id and secret.

Binding the refresh token to a client is very important; we do not want any refresh token generated by our Authorization Server to be used by another client to obtain an access token. Later we’ll see how we make sure that the refresh token is bound to the same client once it is used to generate a new access token.

Database structure needed to support OAuth Refresh Tokens

It is obvious that we need to store the clients which will communicate with our back-end API in a persistent medium. The schema for the Clients table will be as in the image below:

OAuth 2.0 Clients

The Secret column is hashed so anyone who has access to the database will not be able to see the secrets. The Application Type column with value (1) means it is a Native – Confidential client, which should send the secret once the access token is requested.

The Active column is very useful; if the system admin decides to deactivate a client, any new requests asking for an access token from this deactivated client will be rejected. The Refresh Token Life Time column is used to set, in minutes, when the refresh token (not the access token) will expire, so for the first client it will expire in 10 days. It is a nice feature because now you can control the refresh token expiry for each client.

Lastly, the Allowed Origin column is used to configure CORS and to set “Access-Control-Allow-Origin” on the back-end API. It is only useful for JavaScript applications using XHR requests, so in my case I’m setting the allowed origin for client id “ngAuthApp” to the origin “http://ngauthenticationweb.azurewebsites.net/”, and this turned out to be very useful. If any malicious user obtains my client id from my JavaScript app, which is very trivial to do, he will not be able to use this client to build another JavaScript application using the same client id, because all preflighted requests will fail and return a 405 HTTP status (Method not allowed): all XHR requests coming from his JavaScript app will be from a different domain. This is valid for JavaScript application types only; for other application types you can set this to “*”.

Note: For testing the API the secret for client id “consoleApp” is “123@abc”.

Now we need to store the refresh tokens; this is important to facilitate the management of refresh tokens. The schema for the Refresh Tokens table will be as in the image below:

Refresh Tokens

The Id column contains the hashed value of the refresh token id; the API consumer will receive and send the plain refresh token Id. The Subject column indicates which user this refresh token belongs to, and the same applies to the Client Id column. By having these columns we can revoke the refresh token for a certain user on a certain client and keep the other refresh tokens for the same user, obtained by different clients, available.

The Issued UTC and Expires UTC columns are for display purposes only; I’m not building my refresh token expiration logic based on these values.

Lastly, the Protected Ticket column contains a magically signed string which contains a serialized representation of the ticket for a specific user; in other words it contains all the claims and ticket properties for this user. The Owin middleware will use this string to build the new access token auto-magically (we’ll see how this takes place later in this post).

Now I’ll walk you through implementing the refresh tokens; as I stated before, you can read the previous post to be able to follow along with me:

Step 1: Add the new Database Entities

Add a new folder named “Entities”; inside the folder you need to define 2 classes named “Client” and “RefreshToken”. The definitions for the classes are as below:

public class Client
    {
        [Key] 
        public string Id { get; set; }
        [Required]
        public string Secret { get; set; }
        [Required]
        [MaxLength(100)]
        public string Name { get; set; }
        public ApplicationTypes ApplicationType { get; set; }
        public bool Active { get; set; }
        public int RefreshTokenLifeTime { get; set; }
        [MaxLength(100)]
        public string AllowedOrigin { get; set; }
    }

public class RefreshToken
    {
        [Key]
        public string Id { get; set; }
        [Required]
        [MaxLength(50)]
        public string Subject { get; set; }
        [Required]
        [MaxLength(50)]
        public string ClientId { get; set; }
        public DateTime IssuedUtc { get; set; }
        public DateTime ExpiresUtc { get; set; }
        [Required]
        public string ProtectedTicket { get; set; }
    }

Then we need to add a simple Enum which defines the Application Type, so add a class named “Enum” inside the “Models” folder as in the code below:

public enum ApplicationTypes
    {
        JavaScript = 0,
        NativeConfidential = 1
    };

To make these entities available on the DbContext, open the file “AuthContext” and paste the code below:

public class AuthContext : IdentityDbContext<IdentityUser>
    {
        public AuthContext()
            : base("AuthContext")
        {
     
        }

        public DbSet<Client> Clients { get; set; }
        public DbSet<RefreshToken> RefreshTokens { get; set; }
    }

Step 2: Add new methods to repository class

The methods I’ll add now will add support for manipulating the tables we’ve added; they are self-explanatory methods and there is nothing special about them, so open the file “AuthRepository” and paste the code below:

public Client FindClient(string clientId)
        {
            var client = _ctx.Clients.Find(clientId);

            return client;
        }

        public async Task<bool> AddRefreshToken(RefreshToken token)
        {

           var existingToken = _ctx.RefreshTokens.Where(r => r.Subject == token.Subject && r.ClientId == token.ClientId).SingleOrDefault();

           if (existingToken != null)
           {
             var result = await RemoveRefreshToken(existingToken);
           }
          
            _ctx.RefreshTokens.Add(token);

            return await _ctx.SaveChangesAsync() > 0;
        }

        public async Task<bool> RemoveRefreshToken(string refreshTokenId)
        {
           var refreshToken = await _ctx.RefreshTokens.FindAsync(refreshTokenId);

           if (refreshToken != null) {
               _ctx.RefreshTokens.Remove(refreshToken);
               return await _ctx.SaveChangesAsync() > 0;
           }

           return false;
        }

        public async Task<bool> RemoveRefreshToken(RefreshToken refreshToken)
        {
            _ctx.RefreshTokens.Remove(refreshToken);
             return await _ctx.SaveChangesAsync() > 0;
        }

        public async Task<RefreshToken> FindRefreshToken(string refreshTokenId)
        {
            var refreshToken = await _ctx.RefreshTokens.FindAsync(refreshTokenId);

            return refreshToken;
        }

        public List<RefreshToken> GetAllRefreshTokens()
        {
             return  _ctx.RefreshTokens.ToList();
        }

Step 3: Validating the Client Information

Now we need to implement the logic responsible for validating the client information sent when the application requests an access token or uses a refresh token to obtain a new access token, so open the file “SimpleAuthorizationServerProvider” and paste the code below:

public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
        {

            string clientId = string.Empty;
            string clientSecret = string.Empty;
            Client client = null;

            if (!context.TryGetBasicCredentials(out clientId, out clientSecret))
            {
                context.TryGetFormCredentials(out clientId, out clientSecret);
            }

            if (context.ClientId == null)
            {
                //Remove the comments from the below line context.SetError, and invalidate context 
                //if you want to force sending clientId/secrects once obtain access tokens. 
                context.Validated();
                //context.SetError("invalid_clientId", "ClientId should be sent.");
                return Task.FromResult<object>(null);
            }

            using (AuthRepository _repo = new AuthRepository())
            {
                client = _repo.FindClient(context.ClientId);
            }

            if (client == null)
            {
                context.SetError("invalid_clientId", string.Format("Client '{0}' is not registered in the system.", context.ClientId));
                return Task.FromResult<object>(null);
            }

            if (client.ApplicationType == Models.ApplicationTypes.NativeConfidential)
            {
                if (string.IsNullOrWhiteSpace(clientSecret))
                {
                    context.SetError("invalid_clientId", "Client secret should be sent.");
                    return Task.FromResult<object>(null);
                }
                else
                {
                    if (client.Secret != Helper.GetHash(clientSecret))
                    {
                        context.SetError("invalid_clientId", "Client secret is invalid.");
                        return Task.FromResult<object>(null);
                    }
                }
            }

            if (!client.Active)
            {
                context.SetError("invalid_clientId", "Client is inactive.");
                return Task.FromResult<object>(null);
            }

            context.OwinContext.Set<string>("as:clientAllowedOrigin", client.AllowedOrigin);
            context.OwinContext.Set<string>("as:clientRefreshTokenLifeTime", client.RefreshTokenLifeTime.ToString());

            context.Validated();
            return Task.FromResult<object>(null);
        }

By looking at the code above you will notice that we are performing the following validation steps:

  1. We are trying to get the client id and secret from the Authorization header using the Basic scheme, so one way to send the client_id/client_secret is to base64 encode “client_id:client_secret” and send it in the Authorization header. The other way is to send the client_id/client_secret as “x-www-form-urlencoded” form fields. In my case I’m supporting both approaches so the client can set those values using either option (a hedged sketch of both options follows this list).
  2. We are checking whether the consumer didn’t set any client information at all; if you want to enforce always sending the client id then you need to invalidate the context here. In my case I’m allowing requests without a client id for the sake of keeping the old post and demo working correctly.
  3. After we receive the client id we need to check our database to see whether the client is already registered with our back-end API; if it is not registered we’ll invalidate the context and reject the request.
  4. If the client is registered we need to check its application type: if it is a “JavaScript – Non Confidential” client we’ll not check or ask for the secret, but if it is a “Native – Confidential” app then the client secret is mandatory and it will be validated against the hashed secret stored in the database.
  5. Then we’ll check that the client is active; if it is not, we’ll invalidate the request.
  6. Lastly we need to store the client’s allowed origin and refresh token lifetime value on the Owin context so they will be available once we generate the refresh token and set its expiry.
  7. If all is valid we mark the context as validated, which means the client check has passed and the code flow can proceed to the next step.
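
To make options 1 and 2 above concrete, here is a minimal sketch (my illustration, not part of the original sample) of how a consuming application could send its credentials to the “/token” endpoint using HttpClient. The client id “consoleApp” is the one mentioned later in this post; the secret and resource owner password values are placeholders:

// Sketch only: sending client credentials via the Basic scheme or as form fields.
// Requires: System, System.Collections.Generic, System.Net.Http, System.Net.Http.Headers, System.Text
public static HttpRequestMessage BuildTokenRequest(bool useBasicScheme)
{
    var request = new HttpRequestMessage(HttpMethod.Post, "http://ngauthenticationapi.azurewebsites.net/token");

    var fields = new Dictionary<string, string>
    {
        { "grant_type", "password" },
        { "username", "Razan" },                 // resource owner credentials
        { "password", "resourceOwnerPassword" }  // placeholder value
    };

    if (useBasicScheme)
    {
        // Option 1: base64(client_id:client_secret) in the Authorization header.
        var raw = Encoding.UTF8.GetBytes("consoleApp:placeholderSecret");
        request.Headers.Authorization = new AuthenticationHeaderValue("Basic", Convert.ToBase64String(raw));
    }
    else
    {
        // Option 2: client_id/client_secret as x-www-form-urlencoded fields.
        fields.Add("client_id", "consoleApp");
        fields.Add("client_secret", "placeholderSecret");
    }

    request.Content = new FormUrlEncodedContent(fields);
    return request;
}

Either form of the request will be picked up by TryGetBasicCredentials or TryGetFormCredentials in the provider shown above.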

Step 4: Validating the Resource Owner Credentials

Now we need to modify the method “GrantResourceOwnerCredentials” to validate that the resource owner’s username/password is correct and to bind the client id to the generated access token, so open the file “SimpleAuthorizationServerProvider” and paste the code below:

public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
        {

            var allowedOrigin = context.OwinContext.Get<string>("as:clientAllowedOrigin");

            if (allowedOrigin == null) allowedOrigin = "*";

            context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { allowedOrigin });

            using (AuthRepository _repo = new AuthRepository())
            {
                IdentityUser user = await _repo.FindUser(context.UserName, context.Password);

                if (user == null)
                {
                    context.SetError("invalid_grant", "The user name or password is incorrect.");
                    return;
                }
            }

            var identity = new ClaimsIdentity(context.Options.AuthenticationType);
            identity.AddClaim(new Claim(ClaimTypes.Name, context.UserName));
            identity.AddClaim(new Claim("sub", context.UserName));
            identity.AddClaim(new Claim("role", "user"));

            var props = new AuthenticationProperties(new Dictionary<string, string>
                {
                    { 
                        "as:client_id", (context.ClientId == null) ? string.Empty : context.ClientId
                    },
                    { 
                        "userName", context.UserName
                    }
                });

            var ticket = new AuthenticationTicket(identity, props);
            context.Validated(ticket);

        }

 public override Task TokenEndpoint(OAuthTokenEndpointContext context)
        {
            foreach (KeyValuePair<string, string> property in context.Properties.Dictionary)
            {
                context.AdditionalResponseParameters.Add(property.Key, property.Value);
            }

            return Task.FromResult<object>(null);
        }

By looking at the code above you will notice that we are doing the following:

  1. Reading the allowed origin value for this client from the Owin context, then using this value to add the “Access-Control-Allow-Origin” header to the Owin context response. By doing this, for any JavaScript application we prevent the same client id from being used to build another JavaScript application hosted on a different domain, because the origin of all requests coming from that app will be a different domain and the back-end API will return a 405 status.
  2. We’ll check whether the resource owner’s username/password is valid, and if so we’ll generate a set of claims for this user along with authentication properties containing the client id and userName; those properties are needed for the next steps.
  3. Now the access token will be generated behind the scenes when we call “context.Validated(ticket)”.

Step 5: Generating the Refresh Token and Persisting it

Now we need to generate the refresh token and store it in our database inside the table “RefreshTokens”. To do this we need to add a new class named “SimpleRefreshTokenProvider” under the folder “Providers” which implements the interface “IAuthenticationTokenProvider”, so add the class and paste the code below:

public class SimpleRefreshTokenProvider : IAuthenticationTokenProvider
    {

        public async Task CreateAsync(AuthenticationTokenCreateContext context)
        {
            var clientid = context.Ticket.Properties.Dictionary["as:client_id"];

            if (string.IsNullOrEmpty(clientid))
            {
                return;
            }

            var refreshTokenId = Guid.NewGuid().ToString("n");

            using (AuthRepository _repo = new AuthRepository())
            {
                var refreshTokenLifeTime = context.OwinContext.Get<string>("as:clientRefreshTokenLifeTime"); 
               
                var token = new RefreshToken() 
                { 
                    Id = Helper.GetHash(refreshTokenId),
                    ClientId = clientid, 
                    Subject = context.Ticket.Identity.Name,
                    IssuedUtc = DateTime.UtcNow,
                    ExpiresUtc = DateTime.UtcNow.AddMinutes(Convert.ToDouble(refreshTokenLifeTime)) 
                };

                context.Ticket.Properties.IssuedUtc = token.IssuedUtc;
                context.Ticket.Properties.ExpiresUtc = token.ExpiresUtc;
                
                token.ProtectedTicket = context.SerializeTicket();

                var result = await _repo.AddRefreshToken(token);

                if (result)
                {
                    context.SetToken(refreshTokenId);
                }
             
            }
        }
 }

As you can notice, this class implements the interface “IAuthenticationTokenProvider”, so we add our refresh token generation logic inside the method “CreateAsync”. Looking at the code above we can note the following:

  1. We are generating a unique identifier for the refresh token; I’m using a Guid here, which is enough for this, or you can use your own unique string generation algorithm (a sketch of one alternative follows this list).
  2. Then we are reading the refresh token lifetime value from the Owin context, where we set it when we validated the client; this value (in minutes) determines how long the refresh token will be valid.
  3. Then we are setting the IssuedUtc and ExpiresUtc values on the ticket; setting those properties determines how long the refresh token will be valid.
  4. After setting all the context properties we call “context.SerializeTicket();” which serializes the ticket content so that we can store this magical serialized string in the database.
  5. After this we build a token record which will be saved in the RefreshTokens table. Note that I’m making sure the token saved in the database is unique for this Subject (user) and client; if it is not unique I delete the existing one and store the new refresh token. It is better to hash the refresh token identifier before storing it, so anyone with access to the database will not see the real refresh tokens.
  6. Lastly we send back the refresh token id (without hashing it) in the response body.
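
As a sketch of the alternative mentioned in point 1 (my assumption, not code from the original post), a cryptographically random identifier could be generated instead of a Guid:

// Sketch: a cryptographically random refresh token identifier (alternative to Guid.NewGuid()).
// Requires: System, System.Security.Cryptography
public static string GenerateRefreshTokenId(int byteLength = 32)
{
    var bytes = new byte[byteLength];
    using (var rng = new RNGCryptoServiceProvider())
    {
        rng.GetBytes(bytes);
    }
    // URL-safe base64 so the identifier can be sent safely in form fields and query strings.
    return Convert.ToBase64String(bytes).TrimEnd('=').Replace('+', '-').Replace('/', '_');
}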

The “SimpleRefreshTokenProvider” class should be registered with the “OAuthAuthorizationServerOptions”, so open the “Startup” class and replace the code used to set “OAuthAuthorizationServerOptions” in the “ConfigureOAuth” method with the code below; notice that we are now setting the access token lifetime to a short period (30 minutes) instead of 24 hours.

OAuthAuthorizationServerOptions OAuthServerOptions = new OAuthAuthorizationServerOptions() {
            
                AllowInsecureHttp = true,
                TokenEndpointPath = new PathString("/token"),
                AccessTokenExpireTimeSpan = TimeSpan.FromMinutes(30),
                Provider = new SimpleAuthorizationServerProvider(),
                RefreshTokenProvider = new SimpleRefreshTokenProvider()
            };

Once this is done we can now test obtaining a refresh token and storing it in the database. To do so open your favorite REST client (I’ll use Postman) and compose a POST request against the endpoint http://ngauthenticationapi.azurewebsites.net/token; the request will be as in the image below:

Generate Refresh Token Request

What is worth mentioning here is that we are setting the client_id parameter in the request body, and once the “/token” endpoint receives this request it will go through all the validation we’ve implemented in the method “ValidateClientAuthentication”. You can check how the validation takes place if you set the client_id to “consoleApp”: the client_secret becomes mandatory and must be provided with the request because the application type for this client is “Native – Confidential”.

As well, by looking at the response body you will notice that we’ve obtained a “refresh_token” which should be used to obtain a new access token (we’ll see this later in this post); this token is bound to the user “Razan” and to the client “ngAuthApp”. Note that the “expires_in” value relates to the access token, not the refresh token, and this access token will expire in 30 minutes. A code sketch of an equivalent request follows.
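
For readers who prefer code over screenshots, here is a rough equivalent of that Postman request using HttpClient (a sketch only); the user “Razan” and client “ngAuthApp” are the values mentioned above, while the password value is a placeholder:

// Sketch: requesting an access token + refresh token using the resource owner password grant.
// Requires: System.Collections.Generic, System.Net.Http, System.Threading.Tasks
public static async Task<string> RequestTokenAsync()
{
    using (var client = new HttpClient())
    {
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            { "grant_type", "password" },
            { "username", "Razan" },
            { "password", "placeholderPassword" },
            { "client_id", "ngAuthApp" }   // JavaScript (non-confidential) client, no secret needed
        });

        var response = await client.PostAsync("http://ngauthenticationapi.azurewebsites.net/token", form);

        // The JSON body contains access_token, expires_in (access token lifetime),
        // refresh_token, plus the extra properties added in TokenEndpoint (as:client_id, userName).
        return await response.Content.ReadAsStringAsync();
    }
}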

Step 6: Generating an Access Token using the Refresh Token

Now we need to implement the logic that runs once we receive the refresh token so we can generate a new access token. To do so open the class “SimpleRefreshTokenProvider” and implement the “ReceiveAsync” method using the code below:

public async Task ReceiveAsync(AuthenticationTokenReceiveContext context)
        {

            var allowedOrigin = context.OwinContext.Get<string>("as:clientAllowedOrigin");
            context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { allowedOrigin });

            string hashedTokenId = Helper.GetHash(context.Token);

            using (AuthRepository _repo = new AuthRepository())
            {
                var refreshToken = await _repo.FindRefreshToken(hashedTokenId);

                if (refreshToken != null )
                {
                    //Get protectedTicket from refreshToken class
                    context.DeserializeTicket(refreshToken.ProtectedTicket);
                    var result = await _repo.RemoveRefreshToken(hashedTokenId);
                }
            }
        }

What we’ve implemented in this method is the following:

  1. We need to set the “Access-Control-Allow-Origin” header by getting the value from the Owin context. I spent more than an hour figuring out why my requests to issue an access token using a refresh token returned a 405 status code; it turned out that we need to set this header in this method because the method “GrantResourceOwnerCredentials”, where we previously set this header, never gets executed when we request an access token using a refresh token (grant_type=refresh_token).
  2. We get the refresh token id from the request, hash this id, and look for the token using the hashed refresh token id in the table “RefreshTokens”. If the refresh token is found, we use the magical signed string, which contains a serialized representation of the ticket, to rebuild the ticket and identities for the user mapped to this refresh token.
  3. We remove the existing refresh token from the table “RefreshTokens” because in our logic we allow only one refresh token per user and client.

Now the request context contains all the claims stored previously for this user. We need to add logic that allows us to issue new claims or update existing claims and include them in the new access token before sending it to the user. To do so, open the class “SimpleAuthorizationServerProvider” and implement the method “GrantRefreshToken” using the code below:

public override Task GrantRefreshToken(OAuthGrantRefreshTokenContext context)
        {
            var originalClient = context.Ticket.Properties.Dictionary["as:client_id"];
            var currentClient = context.ClientId;

            if (originalClient != currentClient)
            {
                context.SetError("invalid_clientId", "Refresh token is issued to a different clientId.");
                return Task.FromResult<object>(null);
            }

            // Change auth ticket for refresh token requests
            var newIdentity = new ClaimsIdentity(context.Ticket.Identity);
            newIdentity.AddClaim(new Claim("newClaim", "newValue"));

            var newTicket = new AuthenticationTicket(newIdentity, context.Ticket.Properties);
            context.Validated(newTicket);

            return Task.FromResult<object>(null);
        }

What we’ve implemented above is simple and can be explained in the points below:

  1. We are reading the client id value from the original ticket; this is the client id that got stored in the magical signed string. We then compare this client id against the client id sent with the request, and if they differ we reject the request, because we need to make sure that the refresh token used here is bound to the same client it was generated for.
  2. We now have the chance to add new claims or remove existing claims; this was not achievable without refresh tokens. Then we call “context.Validated(newTicket)” which generates a new access token and returns it in the response body.
  3. Lastly, after this method executes successfully, the code flow hits the method “CreateAsync” in the class “SimpleRefreshTokenProvider” and a new refresh token is generated and returned in the response along with the new access token.

To test this out we need to issue an HTTP POST request to the endpoint http://ngauthenticationapi.azurewebsites.net/token; the request will be as in the image below:

Generate Access Token Request

Notice how we set the “grant_type” to “refresh_token” and pass the “refresh_token” value and the client id with the request; if all goes well we’ll receive a new access token and a new refresh token. A code sketch of this request follows.
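
As a code-based companion to the screenshot, a sketch of the same refresh token request might look like this (the refresh token value is whatever un-hashed id was returned at login):

// Sketch: exchanging a refresh token for a new access token (grant_type=refresh_token).
// Requires: System.Collections.Generic, System.Net.Http, System.Threading.Tasks
public static async Task<string> RefreshTokenAsync(HttpClient client, string refreshTokenId)
{
    var form = new FormUrlEncodedContent(new Dictionary<string, string>
    {
        { "grant_type", "refresh_token" },
        { "refresh_token", refreshTokenId },   // the un-hashed id received when logging in
        { "client_id", "ngAuthApp" }
    });

    var response = await client.PostAsync("http://ngauthenticationapi.azurewebsites.net/token", form);

    // On success the response contains a new access_token and a new refresh_token.
    return await response.Content.ReadAsStringAsync();
}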

Step 7: Revoking Refresh Tokens

The idea here is simple: all you need to do is delete the refresh token record from the table “RefreshTokens”. Once the user tries to request a new access token using the deleted refresh token, the request will fail and the user will need to authenticate again with a username/password in order to obtain a new access token and refresh token. With this feature your system admin has control over revoking access from logged-in users.

To do this we need to add a new controller named “RefreshTokensController” under the folder “Controllers” and paste the code below:

[RoutePrefix("api/RefreshTokens")]
    public class RefreshTokensController : ApiController
    {

        private AuthRepository _repo = null;

        public RefreshTokensController()
        {
            _repo = new AuthRepository();
        }

        [Authorize(Users="Admin")]
        [Route("")]
        public IHttpActionResult Get()
        {
            return Ok(_repo.GetAllRefreshTokens());
        }

        //[Authorize(Users = "Admin")]
        [AllowAnonymous]
        [Route("")]
        public async Task<IHttpActionResult> Delete(string tokenId)
        {
            var result = await _repo.RemoveRefreshToken(tokenId);
            if (result)
            {
                return Ok();
            }
            return BadRequest("Token Id does not exist");
            
        }

        protected override void Dispose(bool disposing)
        {
            if (disposing)
            {
                _repo.Dispose();
            }

            base.Dispose(disposing);
        }
    }

Nothing special is implemented here: we have one action which lists all the refresh tokens stored in the database, and another which accepts a query string named “tokenId” that should contain the hashed value of the refresh token; this method is used to delete/revoke refresh tokens.

For the sake of the demo, I’ve authorized only the user named “Admin” to execute the “Get” method and obtain all refresh tokens, and I’ve allowed anonymous access to the “Delete” method so you can try to revoke your own refresh tokens; in production you need to secure this endpoint.

To hash your refresh tokens you use the helper class below, so add a new class named “Helper” and paste the code below:

public static class Helper
{
    public static string GetHash(string input)
    {
        // SHA256CryptoServiceProvider lives in System.Security.Cryptography.
        using (HashAlgorithm hashAlgorithm = new SHA256CryptoServiceProvider())
        {
            byte[] byteValue = System.Text.Encoding.UTF8.GetBytes(input);

            byte[] byteHash = hashAlgorithm.ComputeHash(byteValue);

            return Convert.ToBase64String(byteHash);
        }
    }
}

So to test this out and revoke a refresh token we need to issue a DELETE request to the endpoint "http://ngauthenticationapi.azurewebsites.net/api/refreshtokens"; the request will be as in the image below:

Revoke Refresh Token

Once the refresh token has been removed, if the user tries to use it they will fail to obtain a new access token until they authenticate again with a username/password. A minimal code sketch of the revoke call is shown below.
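
Here is a minimal sketch (my illustration, matching the Delete action above) which passes the hashed token id as the tokenId query string:

// Sketch: revoking a refresh token by DELETEing it from the RefreshTokens endpoint.
// Requires: System, System.Net.Http, System.Threading.Tasks
public static async Task<bool> RevokeRefreshTokenAsync(HttpClient client, string refreshTokenId)
{
    // The table stores the hashed identifier, so hash it the same way (Helper.GetHash above) before sending.
    string hashedId = Helper.GetHash(refreshTokenId);

    var response = await client.DeleteAsync(
        "http://ngauthenticationapi.azurewebsites.net/api/refreshtokens?tokenId=" + Uri.EscapeDataString(hashedId));

    return response.IsSuccessStatusCode; // 200 OK when the token existed and was removed, 400 otherwise
}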

Step 8: Updating the front-end AngularJS Application

I’ve updated the live front-end application to support using refresh tokens along with the Resource Owner Password Credentials flow, so when you log in, if you want to use refresh tokens you need to tick the “Use Refresh Tokens” checkbox as in the image below:

Login with Refresh Tokens

Once you log in successfully, you will find a new tab named “Refresh Tokens” in the top right corner. This tab provides a view which allows you to obtain a new access token using the refresh token you obtained on login; the view will look as in the image below:

Refresh Token AngularJS

Update (11-08-2014): Thanks to Nikolaj for forking the repo and adding support for seamless refresh token requests; you can check it here.

Lastly I’ve added a new view named “/tokens” which is accessible only to the username “Admin”. This view lists all available refresh tokens along with each token’s issue and expiry date, and it allows the admin to revoke refresh tokens; the view will look as in the image below:

List Refresh Tokens

I will write another post which shows how I’ve implemented this in the AngularJS application, for now you can check the code on my GitHub repo.

Conclusion

Security is really hard! You need to think about all the ins and outs. This post turned out to be quite long and took way longer to compile and write than I expected, but hopefully it will be useful for anyone looking to implement refresh tokens. I would like to hear your feedback and comments if there is something missing or something we can improve in this post.

You can check the demo application, play with the back-end API for learning purposes (http://ngauthenticationapi.azurewebsites.net), and check the source code on Github.

Follow me on Twitter @tjoudeh

References

  1. Special thanks go to Dominick Baier for his detailed posts about OAuth; this post in particular was very useful. I also highly recommend checking out Thinktecture.IdentityServer.
  2. Great post by Andrew Timney on persisting refresh tokens.

The post Enable OAuth Refresh Tokens in AngularJS App using ASP .NET Web API 2, and Owin appeared first on Bit of Technology.


Dominick Baier: Updated IdentityServer v3 Roadmap (and Refresh Tokens)

Brock and I have been pretty busy the last months and we did not find as much time to work on IdentityServer as we wanted.

So we have updated our milestones on github and are currently planning a Beta 1 for beginning of August.

You can check the github issue tracker (or open new issues when you find bugs or have suggestions) or you can have an alternative view on our current work using Huboard.

I just checked in initial support for refresh tokens, and it would be great if you could give that a try and let us know if it works for you – see here.

That’s it – back to work.


Filed under: ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI


Filip Woj: Building a strongly typed route provider for ASP.NET Web API

ASP.NET Web API 2.2 was released last week, and one of the key new features is the ability to extend and plug in your own custom logic into the attribute routing engine. Commonly known as “attribute routing”, it’s actually officially … Continue reading

The post Building a strongly typed route provider for ASP.NET Web API appeared first on StrathWeb.


Darrel Miller: The Insanity of the Vary Header

In my first deep dive into an HTTP header, on the User-Agent header, I said that I would try to produce a series of posts going under the covers of certain HTTP headers.  This post is about the Vary header.  The Vary header is both wonderful and sad at the same time.  I'll discuss how to make it work for you and where it fails miserably.

The Vary header is used for HTTP caching.  If you want the really gory details of HTTP caching, you can find them here in,  Caching is hard, draw me a picture.  The short and pertinent part of that story is, when you make an HTTP request, it is possible that the response will come from a cache, rather than being generated by the origin server.  For the cache to know whether it can satisfy a response it needs a cache key.

Anatomy of a cache key

Cache entries have a primary cache key and potentially a secondary cache key.  The primary cache key is made up of an HTTP method and a URL.  For the vast majority of cases the HTTP method is a GET.  So, for the purposes of our discussion about the Vary header, we can assume that the primary cache key is the URL of the HTTP resource.

Assuming a resource identifies itself as cacheable, or at least, does not explicitly prevent it, a cache that sits somewhere between the client and the origin server, could hold on to a copy of the representation returned from the origin server and store it for satisfying future requests to the same URL. 

But we have variants

The challenge we have is that other HTTP headers can be used to request variations of the representation.  If we were to send Accept-Encoding: gzip in our request, we are telling the server that we can handle the response being compressed.  What should the cache do?  Should it ignore the request and pass it along to the server?  Should it return the uncompressed version?  For compressed content it might not be a big deal, because if the client can handle compressed responses it can also handle uncompressed responses, so whatever happens the client will be happy.  But what should the cache do with a compressed response that comes back from the origin server? Should it update the representation stored in the cache with the new compressed one?  That would be a problem for future requests from clients that do not have the ability to decompress responses.

The example of Accept-Encoding has lots of possible solutions.  However, a header like Accept-Language is more challenging.  If one user asks for a French version of a resource and another asks for an English version of a resource, only one can be stored in the cache if we limit ourselves to just the primary cache key.

We have the same problem if we just use the Accept header to do transparent negotiation between media types.  If one user asks for application/calendar+json and another asks for application/calendar+xml then we can only cache one of these at once.

Vary to the rescue

So far we have mentioned three different HTTP headers that could cause different variations of the resource to be returned.  We can use the Vary header in a response from a server to indicate which HTTP headers were used to produce the variation.

When an HTTP cache goes to store the representation it needs to look at the Vary header and, for each header listed, look at the corresponding headers of the request that generated the response.  The values of those request headers are used as the secondary cache key.  The cache then uses this secondary cache key to store multiple variants for the same primary cache key.

When trying to satisfy a request from those stored, the cache will use the headers named in the Vary header of the stored variant to generate a new secondary cache key from the request.  If the secondary cache key generated from the request matches that of the stored representation, then the stored representation can be served to the client.

Bingo! We can now cache all kinds of variants of our resource and the cache will know which one to serve up based on what the request asks for.
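
To make the mechanics concrete, here is a rough sketch (mine, not taken from any particular cache implementation) of how a cache might build a secondary cache key from the header names listed in a stored response's Vary header:

// Sketch: building a secondary cache key from the request headers named by Vary.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;

static class SecondaryCacheKey
{
    public static string Build(HttpRequestMessage request, IEnumerable<string> varyHeaderNames)
    {
        var parts = new List<string>();
        // Sort the header names so "Vary: A, B" and "Vary: B, A" produce the same key.
        foreach (var name in varyHeaderNames.OrderBy(n => n, StringComparer.OrdinalIgnoreCase))
        {
            IEnumerable<string> values;
            var value = request.Headers.TryGetValues(name, out values)
                ? string.Join(",", values)
                : string.Empty;
            // Deliberately simplistic normalization: trim and lower-case only.
            parts.Add(name.ToLowerInvariant() + "=" + value.Trim().ToLowerInvariant());
        }
        return string.Join(":", parts);
    }
}

With this naive scheme a request carrying "Accept-Encoding: gzip,deflate" and one carrying "Accept-Encoding: gzip" produce different keys, even though either client could happily use the gzipped representation, which is exactly the problem described next.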

Sounds like a great idea, but...

Consider this request

> GET /test HTTP/1.1
> Host: example.com
> Accept-Encoding: gzip,deflate
>
< HTTP/1.1 200 OK
< Vary: Accept-Encoding
< Content-Encoding: gzip
< Content-Type: application/json
< Content-Length: 230
< Cache-Control: max-age=10000000

followed by

> GET /test HTTP/1.1 
> Host: example.com 
> Accept-Encoding: gzip 
> 
< HTTP/1.1 200 OK 
< Vary: Accept-Encoding
< Content-Encoding: gzip 
< Content-Type: application/json 
< Content-Length: 230 
< Cache-Control: max-age=10000000 

The first request would generate a secondary cache key of "gzip,deflate" because the Vary header declared by the server says that the representation was affected by the value of the Accept-Encoding header. 

In the second request, the Accept-Encoding header is different, because this client does not support the "deflate" method of compression.  Even though the cache is holding onto a perfectly good copy of the representation that is gzip compressed and the second client can process gzipped representations, the second client will not get that stored response served because the Accept-Encoding header of the request does not match the value in the secondary cache key.

Translated into English: if you don't ask for exactly the same thing, you won't get the cached copy even if it is what you want.

Wait, it gets worse

Time passes, representations are cached, the origin server code is updated to be multilingual, and now the vary header that is returned includes both Accept-Encoding and Accept-Language.

A client makes the following request,

> GET /test HTTP/1.1
> Host: example.com
> Accept-Encoding: gzip,deflate
> Accept-Language: fr
>
< HTTP/1.1 200 OK
< Vary: Accept-Encoding, Accept-Language
< Content-Encoding: gzip
< Content-Language: fr
< Content-Type: application/json
< Content-Length: 230
< Cache-Control: max-age=10000000

The cache stores the representation using a secondary cache key of "gzip,deflate:fr".  The same client then makes exactly the same request. Can you see a problem?

If we assume that the representation we stored, back when the vary header only contained Accept-Encoding, is still fresh then we now have two stored representations that match.  This is because when we compare this new request with the old stored representation, the vary header of the old representation only tells us to look at the Accept-Encoding header.

The guidance provided by the HTTP Caching specification tells us that we  MUST use the most recent matching response to satisfy these ambiguous requests.   This isn't really a major problem for developers writing clients and servers, but it's a pain for people trying to write caches.  In fact, I haven't found a private cache implementation that actually does this yet.

It's not as simple as I make it out to be

I glossed over a number of additional issues mentioned in the spec.  When the vary header contains an asterisk, no variants are allowed to match.  I'm still trying to figure out why you would want to store a variant that will never match a request.

Also, I talked about generating the secondary cache key from the values in the request header.  Technically, before creating the secondary cache key, those header values should be normalized, which is a fancy term for stripping unnecessary whitespace, removing differences in letter casing when a header value is deemed case insensitive, and other more insane requirements like re-ordering field values where the order is not significant.  You can imagine doing a vary on an Accept header that lists a bunch of different media types and having to parse them and sort them before being able to do a comparison!

If you think the specification is bad, you should see the implementations

I can't speak for implementations on all platforms, but the support for the vary header on the Windows platform is less than ideal.  Eric Lawrence covers the details of Vary in IE in a blog post.  It would not surprise me in the slightest if other platforms are similarly limited in their support for Vary.

Is there a point to this post?

I believe there are three points to this post: 

  • Vary is a widely used HTTP header, so ideally developers should understand how it is supposed to work.
  • Lots of people are gung-ho on transparent content negotiation.  Without a good working vary implementation, caching is going to be difficult.  That's not good for performance.
  • I'd like to point to a proposed alternative that solves many of the problems of the vary header, the Key Response Http Header.  I'll have to save discussion of this solution for a future post.

Image Credit: Key https://flic.kr/p/56DLot
Image Credit: Rescue https://flic.kr/p/9w9doc
Image Credit: Accident https://flic.kr/p/5XfRKk
Image Credit: Road https://flic.kr/p/8GokGE


Radenko Zec: Simple way to share Dependency Resolvers between MVC and Web API

I have had several projects in the past using ASP.NET MVC 4 and 5 and ASP.NET Web API residing in the same project. When you want to share a DI container between MVC and Web API, things can become complicated.

The reason for this is that ASP.NET MVC 5 uses the interface System.Web.Mvc.IDependencyResolver for implementing a dependency resolver, while ASP.NET Web API uses the System.Web.Http.Dependencies.IDependencyResolver interface, which has the same name but resides in a different namespace.

These two interfaces are different, even though they have the same name.

Why should you use MVC and Web API in the same project

There could be many reasons for this. For me the main reason was simplifying the way of securing Web API access by implementing session support in ASP.NET Web API, even though it is not recommended.

This way we can detect requests coming from our MVC application without the need to protect the API Key and API Secret which we use for mobile access to the Web API. Protecting the API Key and API Secret in a web application can be very complicated.
 
If you want to learn more about ASP.NET Web API I recommend a great book that I love, written by real experts:
Designing Evolvable Web APIs with ASP.NET

Simple implementation

I am usually using Ninject as DI container of my choice. First we need to reference Ninject in our project.

After that we create a class called NinjectRegistrations which inherits from NinjectModule and will be used for registering types into the container.

public class NinjectRegistrations : NinjectModule
    {
        public override void Load()
        {
            Bind<IApiHelper>().To<ApiHelper>();
        }
    }

 

After that we will create the class NinjectDependencyResolver that implements both of the required interfaces mentioned above:

 public class NinjectDependencyResolver : NinjectDependencyScope, IDependencyResolver, System.Web.Mvc.IDependencyResolver
    {
        private readonly IKernel kernel;

        public NinjectDependencyResolver(IKernel kernel)
            : base(kernel)
        {
            this.kernel = kernel;
        }

        public IDependencyScope BeginScope()
        {
            return new NinjectDependencyScope(this.kernel.BeginBlock());
        }
    }

 

 
We also need to implement the class NinjectDependencyScope, which NinjectDependencyResolver inherits from:

    public class NinjectDependencyScope : IDependencyScope
    {
        private IResolutionRoot resolver;

        internal NinjectDependencyScope(IResolutionRoot resolver)
        {
            Contract.Assert(resolver != null);

            this.resolver = resolver;
        }

        public void Dispose()
        {
            var disposable = this.resolver as IDisposable;
            if (disposable != null)
            {
                disposable.Dispose();
            }

            this.resolver = null;
        }

        public object GetService(Type serviceType)
        {
            if (this.resolver == null)
            {
                throw new ObjectDisposedException("this", "This scope has already been disposed");
            }

            return this.resolver.TryGet(serviceType);
        }

        public IEnumerable<object> GetServices(Type serviceType)
        {
            if (this.resolver == null)
            {
                throw new ObjectDisposedException("this", "This scope has already been disposed");
            }

            return this.resolver.GetAll(serviceType);
        }
    }

 

That is the entire implementation. Usage is very simple: to use the same container in the MVC and Web API project we just create the Ninject container in Global.asax.cs and pass the same Ninject resolver to MVC and Web API.

 
NinjectModule registrations = new NinjectRegistrations();
var kernel = new StandardKernel(registrations);
var ninjectResolver = new NinjectDependencyResolver(kernel);

DependencyResolver.SetResolver(ninjectResolver); // MVC
GlobalConfiguration.Configuration.DependencyResolver = ninjectResolver; // Web API
 

You can use the same approach for any kind of DI container, not only Ninject.

And if you liked this post, go ahead and subscribe here to my RSS Feed to make sure you don’t miss other content like this.

 
 

The post Simple way to share Dependency Resolvers between MVC and Web API appeared first on RadenkoZec blog.


Darrel Miller: REST Agent - Lessons learned in building generic hypermedia clients

Several years ago, when I was beginning to work on a large hypermedia-driven client, I recognized frequently recurring patterns and decided that I could raise the level of abstraction from an HTTP client to a hypermedia client.  It took about a year to realize that it was not as good an idea as I had first thought.

Introducing RESTAgent

Four years ago I developed the RESTAgent library, created some samples, wrote some blog posts (an intro, basic usage, and more advanced) to try and make it easier to deal with hypermedia on the client by providing generic support for any hypermedia type.  It was designed to provide two methods for navigating links.  Navigate(Link link) and Embed(Link link).  The returned representation is stored as state in the RESTAgent class.

If the media type was recognized then RESTAgent would parse the content looking for additional links contained in the response.  In order for RESTAgent to know how to construct an HTTP request when following a link, a Link class was developed that would encapsulate all the link attributes defined in RFC5988.  This Link class could then be used to construct a complete HTTP request.

The combination of the Link class, the RESTAgent request generator and pluggable media type parsers allowed RESTAgent to navigate all over a hypermedia API using just a few simple methods.

Abstractions need to make things simpler

The challenge, as I discovered, is that HTTP is an extremely flexible protocol.  There are many different valid ways of dealing with the wide variety of HTTP status codes.  Dealing with client errors, redirects and server errors are all situations where the right solution is, "it depends".  Trying to encapsulate that behaviour inside RESTAgent left me with two choices.  Be opinionated about the behaviour, or build a sophisticated interface to allow users of RESTAgent, to configure or replace the default RESTAgent behaviour. 

Being opinionated about the way HTTP is used is fine for server frameworks, but for a client library you need to be able to accommodate the chosen approaches of all of the APIs that you want to interact with.  Flexibility is a critical feature for clients.

Trying to build a lot of flexibility into an abstraction ends up with a solution just as complex as the thing that you are trying to abstract.  At this point the value that you are adding is negated by requiring people to learn the new abstraction.

Generic content is generic

Another significant limitation of a generic hypermedia client is that the only semantics it understands are links.  Links can add a huge amount of value to distributed systems, but it is quite difficult to do useful tasks with only links.  Client applications need some domain semantics to work with.  Having RESTAgent only partially able to handle content meant that content ended up being managed in multiple places, which is less than ideal.

The Link is where the magic is

Further experimentation showed me that the real value of the RESTAgent library was in the Link abstraction.  Its ability to provide a place to encapsulate the interaction semantics related to a particular link relation type was critical to making dealing with hypermedia easier.

By creating various media type parsers that expose their links using this standardized link class, we can obtain a large amount of the value that was provided by RESTAgent without needing to abstract away the HTTP interface.  My recent efforts in this area have been on building out a more useful Link class; a rough sketch of the idea follows.
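
As a rough sketch of that idea (my reconstruction, not the actual RESTAgent or Link library code), a link type might look something like this:

// Sketch: a Link class that carries RFC 5988-style attributes and can create an HTTP request.
using System;
using System.Net.Http;

public class Link
{
    public Uri Target { get; set; }          // the link's href
    public string Relation { get; set; }     // link relation type, e.g. "next" or "search"
    public string Type { get; set; }         // hint about the target media type
    public HttpMethod Method { get; set; }   // how the link should be followed

    public Link()
    {
        Method = HttpMethod.Get;
    }

    // Relation-specific link classes can override this to set Accept headers,
    // resolve URI template parameters, attach request bodies, and so on.
    public virtual HttpRequestMessage CreateRequest()
    {
        var request = new HttpRequestMessage(Method, Target);
        if (!string.IsNullOrEmpty(Type))
        {
            request.Headers.Accept.ParseAdd(Type);
        }
        return request;
    }
}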

Managing Client State

The other work that RESTAgent was trying to help with was managing client state.  However, I realized that in a full-blown client application there tend to be lots of "mini" client state machines that have different lifetimes and work together to form the complete client application.  I initially described these mini state machines as missions.  They used a single RESTAgent to achieve a goal through a set of client-server interactions.  Once I actually formalized the notion of this encapsulating mission into an actual implementation, I decided that the mission itself was a better place to manage state than within the RESTAgent, where all state had to be handled generically.  I will write more on the concept of missions in a future post.

My advice

I've seen a few people recently talking about building generic hypermedia clients.  Hopefully, if they read about my experiences, they may avoid making the same mistakes I did, and get a better result.  I would definitely be interested to hear if anyone else is experimenting with concepts similar to Links and Missions.

Photo credit: Chain https://flic.kr/p/3fdHLE
Photo credit: Package https://flic.kr/p/4A15jm
Photo credit: Mountain top https://flic.kr/p/883WpP
Photo credit: Light switch https://flic.kr/p/6mZCc4


Taiseer Joudeh: AngularJS Authentication with Auth0 & ASP .Net OWIN

This is guest post written originally to Auth0.

Recently I’ve blogged about using tokens to authenticate users in single page applications, where I used ASP.NET Web API, Owin middleware, and ASP.NET Identity to store local accounts in a database. I didn’t tap into social identity logins such as Google, Microsoft Accounts, Facebook, etc., because each provider will not supply the same information (profile schema) about the logged-in user; there might be properties missing or with different names, so handling this manually and storing those different schemas would not be a straightforward process.

I was introduced to Auth0 by Matias Woloski. Basically, Auth0 is a feature-rich identity management system which supports local database accounts, integration with more than 30 social identity providers, and enterprise identity providers such as AD, Office 365, Google Apps, etc. You can check the full list here.

In this post I’ll implement the same set of features I’ve implemented previously, this time using the Auth0 management system, and I’ll integrate authentication with multiple social identity providers using less code in the back-end API and in the front-end application, which will be built using AngularJS. So let’s jump to the implementation.

I’ll split this post into two sections, the first section will be for creating the back-end API and configuring Auth0 application, and the second section will cover creating the SPA and Auth0 widget.

The demo application we’ll build in this post can be accessed from (http://auth0web.azurewebsites.net). The source code is available on my GitHub Repo.

Section 1: Building the Back-end API

Step 1.1: Create new Application in Auth0

After you register with Auth0 you need to create an application. Auth0 comes with a set of application types with easy-to-integrate SDKs; in our case we’ll select the “ASP.NET (OWIN)” application as in the image below:

OWIN Application

After you give the application a friendly name (in my case I named it “ASP.NET (OWIN)”), a popup window will show up asking which connections you want to enable for this application. A “connection” in Auth0 means an identity provider you want to allow in the application; in my case I’ll allow database local accounts, Facebook, Google, GitHub, and Microsoft Accounts as in the image below. Usually the social accounts will be disabled for all applications; to enable them navigate to the “Connections” tab, choose “Social”, then enable the social providers you would like for your application.

Social Connections

Once the application is created, you can navigate to application “settings” link where you will find all the needed information (Domain, Client Id, Client Secret, Callback URLs, etc…) to configure the Web API we’ll build in the next step.

App Settings

Step 1.2: Create the Web API

In this tutorial I’m using Visual Studio 2013 and .Net framework 4.5, you can follow along using Visual Studio 2012 but you need to install Web Tools 2013.1 for VS 2012 by visiting this link.

Now create an empty solution and name it “AuthZero”, then add a new ASP.NET Web application named “AuthZero.API”; the selected template for the project will be the “Empty” template with no core dependencies, as in the image below:

WebAPIProject

Step 1.3: Install the needed NuGet Packages

This project is empty so we need to install the NuGet packages needed to setup our Owin server and configure ASP.NET Web API to be hosted within an Owin server, so open NuGet Package Manager Console and install the below packages:

Install-Package Microsoft.AspNet.WebApi -Version 5.1.2
Install-Package Microsoft.AspNet.WebApi.Owin -Version 5.1.2
Install-Package Microsoft.Owin.Host.SystemWeb -Version 2.1.0
Install-Package Microsoft.Owin.Security.Jwt -Version 2.1.0
Install-Package Microsoft.Owin.Cors -Version 2.1.0
Update-Package System.IdentityModel.Tokens.Jwt

The first package “Microsoft.AspNet.WebApi” contains everything we need to host our API on IIS, the second package “Microsoft.AspNet.WebApi.Owin” will allow us to host Web API within an Owin server.

The third package “Microsoft.Owin.Host.SystemWeb” will be used to enable our Owin server to run our API on IIS using ASP.NET request pipeline as eventually we’ll host this API on Microsoft Azure Websites which uses IIS.

The fourth package “Microsoft.Owin.Security.Jwt” will enable the Owin server middleware to protect and validate JSON Web Tokens (JWT).

The last package “Microsoft.Owin.Cors” will be responsible to allow CORS for our Web API so it will accept requests coming from any origin.

Note: The “System.IdentityModel.Tokens.Jwt” package that gets installed by default is old (version 1.0.0), so we need to update it to the latest (version 3.0.2).

Step 1.4: Add Auth0 settings to Web.config

We need to read the Auth0 settings for the application we created earlier to configure our API, so open the Web.config file and add the keys below; do not forget to replace the values of those keys with the correct ones you obtained when you created the application on Auth0.

<appSettings>
     <add key="auth0:ClientId" value="YOUR_CLIENT_ID" />
     <add key="auth0:ClientSecret" value="YOUR_CLIENT_SECRET" />
     <add key="auth0:Domain" value="YOUR_DOMAIN" />
  </appSettings>

Step 1.5: Add Owin “Startup” Class

Now we want to add a new class named “Startup”. It will contain the code below:

[assembly: OwinStartup(typeof(AuthZero.API.Startup))]
namespace AuthZero.API
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            HttpConfiguration config = new HttpConfiguration();

            ConfigureAuthZero(app);

            WebApiConfig.Register(config);
            app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll);
            app.UseWebApi(config);

        }
    }
}

What we’ve implemented above is simple: this class will be fired once our server starts; notice the “assembly” attribute which states which class to fire on start-up. The “Configuration” method accepts a parameter of type “IAppBuilder”; this parameter will be supplied by the host at run-time. This “app” parameter is an interface which will be used to compose the application for our Owin server.

The implementation of the method “ConfigureAuthZero” will be covered in the next step; this method is responsible for configuring our Web API to generate JWTs using the Auth0 application we created earlier.

The “HttpConfiguration” object is used to configure API routes, so we’ll pass this object to method “Register” in “WebApiConfig” class.

Lastly, we’ll pass the “config” object to the extension method “UseWebApi” which will be responsible to wire up ASP.NET Web API to our Owin server pipeline.

The implementation of “WebApiConfig” is simple, all you need to do is to add this class under the folder “App_Start” then paste the code below:

public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // Web API routes
            config.MapHttpAttributeRoutes();

            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            );

            var jsonFormatter = config.Formatters.OfType<JsonMediaTypeFormatter>().First();
            jsonFormatter.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
        }
    }

Step 1.6: Configure generating JWT using Auth0

Now we want to configure our Web API to use the Auth0 application created earlier to generate JSON Web Tokens which will be used to allow authenticated users to access the secured methods in our Web API. To implement this, open the class “Startup” and add the code below:

//Rest of Startup class implementation is here

private void ConfigureAuthZero(IAppBuilder app)
        {
            var issuer = "https://" + ConfigurationManager.AppSettings["auth0:Domain"] + "/";
            var audience = ConfigurationManager.AppSettings["auth0:ClientId"];
            var secret = TextEncodings.Base64.Encode(TextEncodings.Base64Url.Decode(ConfigurationManager.AppSettings["auth0:ClientSecret"]));

            // Api controllers with an [Authorize] attribute will be validated with JWT
            app.UseJwtBearerAuthentication(
                new JwtBearerAuthenticationOptions
                {
                    AuthenticationMode = Microsoft.Owin.Security.AuthenticationMode.Active,
                    AllowedAudiences = new[] { audience },
                    IssuerSecurityTokenProviders = new IIssuerSecurityTokenProvider[]
                    {
                        new SymmetricKeyIssuerSecurityTokenProvider(issuer, secret)
                    }
                });
        }

What we’ve implemented in this method is the following:

  • Read the Auth0 application settings stored in the web.config file.
  • We’ve added the JSON Web Token bearer middleware to our Owin server. This class accepts a set of options; as you can see we’ve set the authentication mode to “Active”, which configures the middleware to check every incoming request and attempt to authenticate the call, and if it is successful, it will create a principal that represents the current user and assign that principal to the hosting environment.
  • We’ve set the issuer of our JSON Web Token (Domain Name) along with the base64 encoded symmetric key (Client Secret) which will be used to sign the generated JSON Web Token.

Now if we want to secure any endpoint in our Web API, all we need to do is decorate the Web API controller with the “[Authorize]” attribute, so only requests containing a valid bearer token can access it.

Note: The JWT token expiration time can be set from the Auth0 application settings as in the image below; the default value is 36000 seconds (10 hours).

JWT Expiration

Step 1.7: Add a Secure Shipments Controller

Now we want to add a secure controller to serve our shipments. We’ll assume that this controller returns shipments only for authenticated users; to keep things simple we’ll return static data. So add a new controller named “ShipmentsController” and paste the code below:

[RoutePrefix("api/shipments")]
    public class ShipmentsController : ApiController
    {
        [Authorize]
        [Route("")]
        public IHttpActionResult Get()
        {
            return Ok(Shipment.CreateShipments());
        }
    }

    #region Helpers

    public class Shipment
    {
        public int ShipmentId { get; set; }
        public string Origin { get; set; }
        public string Destination { get; set; }

        public static List<Shipment> CreateShipments()
        {
            List<Shipment> ShipmentList = new List<Shipment> 
            {
                new Shipment {ShipmentId = 10248, Origin = "Amman", Destination = "Dubai" },
                new Shipment {ShipmentId = 10249, Origin = "Dubai", Destination = "Abu Dhabi"},
                new Shipment {ShipmentId = 10250,Origin = "Dubai", Destination = "New York"},
                new Shipment {ShipmentId = 10251,Origin = "Boston", Destination = "New Jersey"},
                new Shipment {ShipmentId = 10252,Origin = "Cairo", Destination = "Jeddah"}
            };

            return ShipmentList;
        }
    }

    #endregion

Notice how we added the “Authorize” attribute on the method “Get”, so if you try to issue an HTTP GET request to the endpoint “http://localhost:port/api/shipments” you will receive HTTP status code 401 Unauthorized, because the request so far doesn’t contain a JWT in the Authorization header. You can check it using this endpoint: http://auth0api.azurewebsites.net/api/shipments

Step 1.8: Add new user and Test the API

The Auth0 dashboard allows you to manage users registered in the applications you created under Auth0, so to test the API we’ve created we first need to create a user. I’ll jump back to the Auth0 dashboard, navigate to the “Users” tab, then click “New”; a popup window will appear as in the image below. This window allows us to create a local database user, so fill in the form to create a new user; I’ll use the email “taiseer.joudeh@hotmail.com” and password “87654321”.

Auth0 New User

Once the user is created we need to generate a JWT so we can access the secured endpoint. To generate the JWT we can send an HTTP POST request to the endpoint https://tjoudeh.auth0.com/oauth/ro; this endpoint works only for local database connections and AD/LDAP. You can check the Auth0 API playground here. To get the JWT open your favorite REST client and issue a POST request as in the image below:

JWT Request

Notice that the content type and payload type is “x-www-form-urlencoded”, so the payload body contains URL-encoded form fields. Also notice that we’ve specified the “Connection” for this token and the “grant_type”. If all is correct we’ll receive a JWT (id_token) in the response.

Note: The “grant_type” indicates the type of grant being presented in exchange for an access token, in our case it is password.

Now we want to use this token to request the secured data using the endpoint http://auth0api.azurewebsites.net/api/shipments, so we’ll issue a GET request to this endpoint and pass the bearer JWT in the Authorization header; for any secured endpoint we have to pass this bearer token along with each request to authenticate the user.

The GET request will be as the image below:

Get Secure

If all is correct we’ll receive HTTP status 200 OK along with the secured data in the response body; if you change any character of the signed token you’ll directly receive HTTP status code 401 Unauthorized. A rough code sketch of these two calls follows.
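
For reference, here is a rough sketch of those two calls from C# code. The connection name and the “scope” value are my assumptions (they are not shown in the post), and you would extract the id_token from the JSON response with your preferred JSON library:

// Sketch: obtaining a JWT from Auth0's /oauth/ro endpoint and calling the secured API with it.
// Requires: System, System.Collections.Generic, System.Net.Http, System.Net.Http.Headers, System.Threading.Tasks
public static async Task<string> GetTokenResponseAsync()
{
    using (var client = new HttpClient())
    {
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            { "client_id", "YOUR_CLIENT_ID" },
            { "connection", "Username-Password-Authentication" }, // assumed name of the DB connection
            { "grant_type", "password" },
            { "username", "taiseer.joudeh@hotmail.com" },
            { "password", "87654321" },
            { "scope", "openid" }                                  // assumption: request an id_token
        });

        var response = await client.PostAsync("https://tjoudeh.auth0.com/oauth/ro", form);
        return await response.Content.ReadAsStringAsync();        // JSON containing the id_token
    }
}

public static async Task CallSecuredApiAsync(string idToken)
{
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", idToken);

        var response = await client.GetAsync("http://auth0api.azurewebsites.net/api/shipments");
        // 200 OK with the shipments list for a valid token, 401 Unauthorized otherwise.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}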

By now our back-end API is ready to be consumed by the front-end app we’ll build in the next section.

Section 2: Building the Front-end SPA

Now we want to build the SPA using AngularJS which will communicate with the back-end API created earlier. Auth0 provides a very neat and feature-rich JavaScript plugin named “Widget”. This widget is very easy to implement in any web app. When you use it you get features out of the box such as integration with social providers, integration with AD, and sign up/forgot password features. The widget plugin looks as in the image below:

Auth0Widget

The enabled social login providers can be controlled from the Auth0 dashboard using the “Connections” tab, so for example if you want to enable LinkedIn, you just need to activate it on your Auth0 application and it will show up directly in the Widget.

Step 2.1: Add the Shell Page (Index.html)

First of all, we need to add the “single page” which is the container for our application. This page contains a link to sign in, a section which shows up for authenticated users only, and the “ng-view” directive which is used to load partial views. The HTML for this page will be as below:

<!DOCTYPE html>
<html>
    <head>
        <link href="content/app.css" rel="stylesheet" />
        <link href="https://netdna.bootstrapcdn.com/bootstrap/3.0.3/css/bootstrap.min.css" rel="stylesheet">
    </head>
    <body ng-app="auth0-sample" class="home" ng-controller="mainController">
        <div class="login-page clearfix">
          <div class="login-box auth0-box" ng-hide="loggedIn">
            <img src="https://i.cloudup.com/StzWWrY34s.png" />
            <h3>Auth0 Example</h3>
            <p>Zero friction identity infrastructure, built for developers</p>
             <a ng-click="login()" class="btn btn-primary btn-lg btn-block">SignIn</a>
          </div>
             <!-- This log in page would normally be done using routes
          but for the simplicity of the exercise, we're using ng-show -->
          <div ng-show="loggedIn" class="logged-in-box auth0-box">
            <img ng-src="{{auth.profile.picture}}" class="avatar"/>
            <h3>Welcome {{auth.profile.name}}</h3>
            <div class="profile-info row">
                <div class="profile-info-label col-xs-6">Nickname</div>
                <div class="profile-info-content col-xs-6">{{auth.profile.nickname}}</div>
            </div>
            <div class="profile-info row">
                <div class="profile-info-label col-xs-6">Your JSON Web Token</div>
                <div class="profile-info-content col-xs-6">{{auth.idToken | limitTo:12}}...</div>
                
            </div>
                 <div class="profile-info row">
                    <a ng-href ="#/shipments" class="btn btn-success btn-sm btn-block">View my shipments</a>
               </div>
               <div class="profile-info row">
                    <a ng-click="logout()" class="btn btn-danger btn-sm btn-block">Log out</a>
               </div>
          </div>
          <div data-ng-view="">
        </div>
        </div>
        <script src="https://code.angularjs.org/1.2.16/angular.min.js" type="text/javascript"></script>
        <script src="https://code.angularjs.org/1.2.16/angular-cookies.min.js" type="text/javascript"></script>
        <script src="https://code.angularjs.org/1.2.16/angular-route.min.js" type="text/javascript"></script>
        <script src="app/app.js" type="text/javascript"></script>
        <script src="app/controllers.js" type="text/javascript"></script>
    </body>
</html>

Step 2.2: Add reference for Auth0-angular.js and Widget libraries

Now we need to add a reference to the file Auth0-angular.js. This file is an AngularJS module which allows us to trigger the authentication process and parse the JSON Web Token using the “ClientID” we obtained when we created the Auth0 application.

As well, we need to add a reference to the “Widget” plugin which is responsible for showing the nice login popup we showed earlier. To implement this, add references to those new JS files at the bottom of the “index.html” page (lines 5 and 6) as in the code snippet below:

<!--rest of HTML is here--> 
<script src="https://code.angularjs.org/1.2.16/angular.min.js" type="text/javascript"></script>
<script src="https://code.angularjs.org/1.2.16/angular-cookies.min.js" type="text/javascript"></script>
<script src="https://code.angularjs.org/1.2.16/angular-route.min.js" type="text/javascript"></script>
<script src="https://cdn.auth0.com/w2/auth0-widget-4.js"></script>
<script src="https://cdn.auth0.com/w2/auth0-angular-0.4.js"> </script>
<script src="app/app.js" type="text/javascript"></script>
<script src="app/controllers.js" type="text/javascript"></script>

Step 2.3: Booting up our AngularJS app

We’ll add a file named “app.js”; this file is responsible for configuring our AngularJS app, so add this file and paste the code below:

var app = angular.module('auth0-sample', ['auth0-redirect', 'ngRoute']);

app.config(function (authProvider, $httpProvider, $routeProvider) {
    authProvider.init({
        domain: 'tjoudeh.auth0.com',
        clientID: '80YvW9Bsa5P67RnMZRJfZv8jEsDSerDW',
        callbackURL: location.href
    });
});

By looking at the code above you will notice that we’ve injected the dependency “auth0-redirect” into our module. Once it is injected we can use the “authProvider” service, which we’ll use to configure the widget, so we need to set the values for the domain, clientID, and callbackURL. Those values are obtained from the Auth0 application we created earlier. Once you set the callbackURL you need to visit the Auth0 application settings and set the same callbackURL as in the image below:

Auth0 CallBack URL

The callbackURL is used once the user is successfully authenticated; Auth0 will redirect the user to the callbackURL with a hash containing an access_token and the JSON Web Token (id_token).

Step 2.4: Showing up the Widget Plugin

Now it is time to show the Widget once the user clicks on the SignIn link, so we need to add a file named “controllers.js”. Inside this file we’ll define a controller named “mainController”; its implementation is as the code below:

app.controller('mainController', ['$scope', '$location', 'auth', 'AUTH_EVENTS',
  function ($scope, $location, auth, AUTH_EVENTS) {

    $scope.auth = auth;
    $scope.loggedIn = auth.isAuthenticated;

    $scope.$on(AUTH_EVENTS.loginSuccess, function () {
      $scope.loggedIn = true;
      $location.path('/shipments');
    });
    $scope.$on(AUTH_EVENTS.loginFailure, function () {
      console.log("There was an error");
    });

    $scope.login = function () {
        auth.signin({popup: false});
    }

    $scope.logout = function () {
        auth.signout();
        $scope.loggedIn = false;
        $location.path('/');
    };

}]);

You can notice that we are injecting the “auth” service and “AUTH_EVENTS”. Inside the “login” function we are just calling “auth.signin” and passing “popup: false” as an option. The nice thing here is that the Auth0-angularjs module broadcasts events related to successful/unsuccessful logins, so all child scopes which are interested in this event can handle the login response. So in our case, when the login is successful (the user is authenticated) we set a flag named “loggedIn” to true and redirect the user to a secure partial view named “shipments” which we’ll add in the following steps.

Once the user is authenticated, the JSON Web Token and the access token are stored automatically using the AngularJS cookie store, so you don’t have to worry about refreshing the single page application and losing the authenticated user context; all this is done by the Widget and the Auth0-angularjs module.

Note: To check how to customize the Widget plugin check the link here.

As well, we’ve added support for signing users out: all we need to do is call “auth.signout” and redirect the user to the application root; the widget will take care of clearing the stored token from the cookie store.

Step 2.5: Add Shipments View and Controller

Now we want to add a view which should be accessed by authenticated users only, so we need to add a new partial view named “shipments.html”. It only renders the static data returned from the endpoint http://auth0api.azurewebsites.net/api/shipments when issuing a GET request. The HTML for the partial view is below:

<div class="row">
    <div class="col-md-4">
        &nbsp;
    </div>
    <div class="col-md-4">
        <h5><strong>My Secured Shipments</strong> </h5>
        <table class="table table-striped table-bordered table-hover">
            <thead>
                <tr>
                    <th>Shipment ID</th>
                    <th>Origin</th>
                    <th>Destination</th>
                </tr>
            </thead>
            <tbody>
                <tr data-ng-repeat="shipment in shipments">
                    <td>
                        {{ shipment.shipmentId }}
                    </td>
                    <td>
                        {{ shipment.origin }}
                    </td>
                    <td>
                        {{ shipment.destination }}
                    </td>
                </tr>
            </tbody>
        </table>
    </div>
    <div class="col-md-4">
        &nbsp;
    </div>
</div>

Now open the “controllers.js” file and add the implementation for “shipmentsController” as the code below:

app.controller('shipmentsController', ['$scope', '$http', '$location', function ($scope, $http, $location) {
    var serviceBase = "http://auth0api.azurewebsites.net/";
    $scope.shipments = [];
    init();

    function init() {
        getShipments();
    }
    function getShipments() {
        var shipmentsSuccess = function (response) {
            $scope.shipments = response.data;
        }
        var shipmentsFail = function (error) {
            if (error.status === 401) {
                $location.path('/');
            }
        }
        $http.get(serviceBase + 'api/shipments').then(shipmentsSuccess, shipmentsFail);
    }
}]);

The implementation for this controller is pretty straightforward. We are just sending an HTTP GET request to the secured endpoint http://auth0api.azurewebsites.net/api/shipments; if the call succeeds we set the returned shipments on a $scope property named “shipments”, and if it fails because the user is unauthorized (HTTP status code 401) we redirect the user to the application root.

Now, to be able to access the secured endpoint we have to send the JSON Web Token in the Authorization header of the request. As you notice, we are not setting the token value inside this controller. The right way to do this is to use an AngularJS interceptor, and thanks to the Auth0-angularjs module implementing this is very simple. The interceptor allows us to capture every XHR request and manipulate it before sending it to the back-end API, so we can set the bearer token if it exists in the cookie store (i.e. the user is authenticated).

Step 2.6: Add the Interceptor and Configure Routes

All you need to do to add the interceptor is push it onto the $httpProvider service’s interceptors array; setting the token with each request is handled by the Auth0-angularjs module.

As well, to configure the shipments route we need to map the “shipmentsController” to the “shipments” partial view using the $routeProvider service, so open the “app.js” file again and replace all the code in it with the code snippet below:

var app = angular.module('auth0-sample', ['auth0-redirect', 'ngRoute', 'authInterceptor']);

app.config(function (authProvider, $httpProvider, $routeProvider) {
    authProvider.init({
        domain: 'tjoudeh.auth0.com',
        clientID: '80YvW9Bsa5P67RnMZRJfZv8jEsDSerDW',
        callbackURL: location.href
    });

    $httpProvider.interceptors.push('authInterceptor');

    $routeProvider.when("/shipments", {
        controller: "shipmentsController",
        templateUrl: "/app/views/shipments.html"
    });

    $routeProvider.otherwise({ redirectTo: "/" });
});

By completing this step we are ready to run the application and see how Auth0 simplifies and enriches the user authentication experience.

If all is implemented correctly, after you are authenticated using a social login or a database account, your profile information and the secured shipments view will look as in the image below:

LogedIn

That’s it for now folks!

I hope this step-by-step post will help you integrate Auth0 with your applications. If you have any questions please drop me a comment.

The post AngularJS Authentication with Auth0 & ASP .Net OWIN appeared first on Bit of Technology.


Darrel Miller: Composing API responses for maximum reuse with ASP.NET Web API

In Web API 2.1 a new mechanism was introduced for returning HTTP messages that appeared to be a cross between HttpResponseMessage and the ActionResult mechanism from ASP.NET MVC.  At first I wasn't a fan of it at all.  It appeared to add little new value and just provide yet another alternative that would be a source of confusion.  It wasn't until I saw an example produced by Brad Wilson that I was convinced that it had value.

The technique introduced by Brad was to enable a chain of action results to be created in the controller action method that would then be processed when the upstream framework decided it needed to have an instance of a HttpResponseMessage.

A composable factory

BuildingBlocks

The IHttpActionResult interface defines a single asynchronous method that returns a HttpResponseMessage.  It is effectively a HttpResponseMessage factory.  Being able to chain together different implementations of IHttpActionResult allows you to compose the exact behaviour you want in your controller method with re-usable classes.  With a little bit of extension method syntactic sugar you can create controller actions that look like this,

public IHttpActionResult Get()
{
    var home = new HomeDocument();

    home.AddResource(TopicsLinkHelper.CreateLink(Request).WithHints());
    home.AddResource(DaysLinkHelper.CreateLink(Request).WithHints());
    home.AddResource(SessionsLinkHelper.CreateLink(Request).WithHints());
    home.AddResource(SpeakersLinkHelper.CreateLink(Request).WithHints());

    return new OkResult(Request)
        .WithCaching(new CacheControlHeaderValue() { MaxAge = new TimeSpan(0, 0, 60) })
        .WithContent(new HomeContent(home));
}

The OkResult class implements IHttpActionResult, and the WithCaching and WithContent methods build a chain with other implementations.  This is similar to the way HttpMessageHandler and DelegatingHandler work together.
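The implementation of WithCaching is shown further down; WithContent is not shown in the post, but a sketch following the same chaining pattern (the ContentResult class and extension method here are illustrative, not part of Web API) might look like this:

// Illustrative sketch of a content-setting result and its extension method,
// following the same chaining pattern as the CachingResult shown later.
public class ContentResult : IHttpActionResult
{
    private readonly IHttpActionResult _actionResult;
    private readonly HttpContent _content;

    public ContentResult(IHttpActionResult actionResult, HttpContent content)
    {
        _actionResult = actionResult;
        _content = content;
    }

    public Task<HttpResponseMessage> ExecuteAsync(CancellationToken cancellationToken)
    {
        return _actionResult.ExecuteAsync(cancellationToken)
            .ContinueWith(t =>
            {
                t.Result.Content = _content;
                return t.Result;
            }, cancellationToken);
    }
}

public static class ContentResultExtensions
{
    public static IHttpActionResult WithContent(this IHttpActionResult actionResult, HttpContent content)
    {
        return new ContentResult(actionResult, content);
    }
}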

Apply your policies

In the example above, the CachingResult didn’t add a whole lot of value over just setting the CacheControl header directly. However, I think it could be a good place to start defining consistent policies across your API.  Consider writing a bit of infrastructure code to allow you to do,

public IHttpActionResult Get()
{
    var image = /* ... Get some image */

    return new OkResult(Request)
        .WithCaching(CachingPolicy.Image)
        .WithImageContent(image);
}

This example is a bit of a thought experiment, however the important point is, you can create standardized, re-usable classes that apply a certain type of processing to a generated HttpResponseMessage.

Dream a little

If I were to let my imagination run wild, I could see all kinds of uses for these methods,

new CreatedResult(Request)
.WithLocation(Uri uri)
.WithContent(HttpContent content)
.WithEtag()

new ServerNotAvailableResult(Request)
.WithRetry(Timespan timespan)
.WithErrorDetail(exception)

new AcceptResult(Request)
.WithStatusMonitor(Uri monitorUrl)

new OkResult(Request)
 .WithNegotiatedContent(object body)
 .WithLastModified(DateTime lastModified)
 .WithCompression()

Some of these methods might be created as completely general purpose “framework level” methods, but I see them more as a way to apply your API standards to controller responses.

More testable

One significant advantage of using this approach over using ActionFilter attributes is that when unit testing your controllers, you will be able to test the effect of the ActionResult chain.  With ActionFilter attributes, you need to create an in-memory host to get the full framework infrastructure support in order for the ActionFilter objects to execute.

Another advantage of this approach over action filters for defining action-level behaviour is that by constructing the chain, you have explicit control over the order in which the changes are applied to your response.

More reusable

It may also be possible to take the abstraction one step further and create a factory for predefined chains.  If the same sets of behaviour are applicable to many different actions then you can create those chains in one place and re-use a factory method.  This can be useful in controllers where there are a variety of different overloads of a method that take different parameters, but the response is always constructed in a similar way.   The method for creating the action result chain can live in the controller itself.
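A minimal sketch of such a factory method, assuming the OkResult/WithCaching/WithContent extension methods shown earlier, might be:

// Hypothetical factory for a predefined action result chain, living in the controller.
private IHttpActionResult CachedOk(HttpContent content)
{
    return new OkResult(Request)
        .WithCaching(new CacheControlHeaderValue { MaxAge = TimeSpan.FromSeconds(60) })
        .WithContent(content);
}

// Every overload of an action can then build its response the same way:
// return CachedOk(new HomeContent(home));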

The implementation

The CachingResult class looks like this,

public class CachingResult : IHttpActionResult
    {
        private readonly IHttpActionResult _actionResult;
        private readonly CacheControlHeaderValue _cachingHeaderValue;


        public CachingResult(IHttpActionResult actionResult, CacheControlHeaderValue cachingHeaderValue)
        {
            _actionResult = actionResult;
            _cachingHeaderValue = cachingHeaderValue;
        }

        public Task<HttpResponseMessage> ExecuteAsync(CancellationToken cancellationToken)
        {

            return _actionResult.ExecuteAsync(cancellationToken)
                .ContinueWith(t =>
                {
                    t.Result.Headers.CacheControl = _cachingHeaderValue;
                    return t.Result;
                }, cancellationToken);
        }
    }

and the extension method is simply,

  public static IHttpActionResult WithCaching(this IHttpActionResult actionResult, CacheControlHeaderValue cacheControl)
  {
            return new CachingResult(actionResult, cacheControl);
  }

A more refined implementation might be to create an abstract base class that takes care of the common housekeeping.  It can also take care of actually instantiating a default HttpResponseMessage if the ActionResult is at the end of the chain.  Something like this,

public abstract class BaseChainedResult : IHttpActionResult
    {
        private readonly IHttpActionResult _NextActionResult;

        protected BaseChainedResult(IHttpActionResult actionResult)
        {
            _NextActionResult = actionResult;
        }

        public Task<HttpResponseMessage> ExecuteAsync(CancellationToken cancellationToken)
        {
            if (_NextActionResult == null)
            {
                var response = new HttpResponseMessage();
                ApplyAction(response);
                return Task.FromResult(response);
            }
            else
            {

                return _NextActionResult.ExecuteAsync(cancellationToken)
                    .ContinueWith(t =>
                    {
                        var response = t.Result;
                        ApplyAction(response);
                        return response;
                    }, cancellationToken);
            }
        }

        public abstract void ApplyAction(HttpResponseMessage response);
    
    }
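With such a base class in place, the CachingResult shown earlier could shrink to something like the following sketch:

// Sketch: CachingResult rewritten on top of the BaseChainedResult base class.
public class CachingResult : BaseChainedResult
{
    private readonly CacheControlHeaderValue _cachingHeaderValue;

    public CachingResult(IHttpActionResult actionResult, CacheControlHeaderValue cachingHeaderValue)
        : base(actionResult)
    {
        _cachingHeaderValue = cachingHeaderValue;
    }

    public override void ApplyAction(HttpResponseMessage response)
    {
        response.Headers.CacheControl = _cachingHeaderValue;
    }
}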

A more complete usage example

If you are interested in seeing these classes in use in a complete project you can take a look at the Conference API project on Github.

Image credit: Building blocks https://flic.kr/p/4r9zSK


Darrel Miller: An HTTP Resource is a lot simpler than you might think

Unfortunately, I still regularly run into articles on the web that misunderstand the concept of an HTTP resource.  Considering it is a core piece of web architecture, having a clear understanding of what it means can make many other pieces of web architectural guidance considerably easier to understand.

To try and keep this post as practical as possible, let’s first consider some URLs.

http://example.org/customers/10 
http://example.org/customers?id=10 
http://example.org/customers 
http://example.org/customers?active=true 
http://example.org/customers/10/edit

How many resources are being identified by these five URLs? 

DogTags

If all of those requests return some kind of 2XX response, then there are five distinct information resources.  The presence of query string parameters, or path parameters, or verbs or nouns, or plurals or singular terms, has no bearing on the fact that these are distinct web identifiers that identify distinct resources.

Time to set the record straight

Let me give a few examples that seem to confuse people:

Resources > Entities

Another common misconception is that resources are limited to exposing entities from your application domain.  Here are some examples of resources that are perfectly valid despite what you might read on the internet:

http://example.org/dashboard
http://example.org/printer 
http://example.org/barcodeprocessor
http://example.org/invoice/32/status 
http://example.org/searchform
http://example.org/calculator

It’s very simple

A resource is some piece of information that is exposed on the web using a URL as an identifier.  Try not to read anything more into the definition than that.


Radenko Zec: Simple way to implement caching in ASP.NET Web API

This article is not about caching the output of ApiControllers.
Often in ASP.NET Web API you will need to cache something temporarily in memory, usually to improve performance.

There are several nice libraries that allow you to do just that.

One very popular library is CacheCow, and here you will find a very nice article on how to use it.

But what if you want to do caching without a third-party library, in a very simple way?
 
If you want to learn more about ASP.NET Web API I recommend a great book that I love written by real experts:
 
BookWebAPIEvolvable
Designing Evolvable Web APIs with ASP.NET 

Caching without third party library in a very simple way

I had a need for this. I implemented some basic token authentication for the ASP.NET Web API and wanted to cache the tokens temporarily in memory to avoid going to the database for every HTTP request.

The solution is very simple.
You just need to use the Microsoft class called MemoryCache, which resides in the System.Runtime.Caching assembly.

 

I created a simple helper class for caching with a few lines of code:

 

using System.Runtime.Caching;

public class MemoryCacher
{
  public object GetValue(string key)
  {
    MemoryCache memoryCache = MemoryCache.Default;
    return memoryCache.Get(key);
  }

  public bool Add(string key, object value, DateTimeOffset absExpiration)
  {
    MemoryCache memoryCache = MemoryCache.Default;
    return memoryCache.Add(key, value, absExpiration);
  }

  public void Delete(string key)
  {
    MemoryCache memoryCache = MemoryCache.Default;
    if (memoryCache.Contains(key))
    {
       memoryCache.Remove(key);
    }
  }
}

When you want to cache something:

var memCacher = new MemoryCacher();
memCacher.Add(token, userId, DateTimeOffset.UtcNow.AddHours(1));

 

And to get something from the cache:

var result = memCacher.GetValue(token);

 

 
NOTE: You should be aware that the MemoryCache will be disposed every time IIS recycles the application pool.

The app pool will be recycled, and the cache lost, if your Web API:

  • does not receive any request for more than 20 minutes,
  • or hits the default IIS app pool recycle interval of 1740 minutes,
  • or you copy a new version of your ASP.NET Web API project into the IIS folder (this auto-triggers an app pool recycle).

So you should have a workaround for this: if you don’t get a value from the cache, grab it (for example from the database) and then store it again in the cache:

 

 var result = memCacher.GetValue(token);
 if (result == null)
 {
     // for example get token from database and put grabbed token
     // again in memCacher Cache
 }

 

And if you liked this post, go ahead and subscribe here to my RSS Feed to make sure you don’t miss other content like this.

 
 

The post Simple way to implement caching in ASP.NET Web API appeared first on RadenkoZec blog.


Radenko Zec: ASP.NET Web API GZip compression ActionFilter with 8 lines of code

If you are building high performance applications that consume a Web API, you often need to enable GZip / Deflate compression on the Web API.

Compression is an easy way to reduce the size of payloads and at the same time increase the speed of communication between client and server.

Two common algorithms used to perform this on the Web are GZip and Deflate. These two algorithms are recognized by all web browsers, and GZip and Deflate HTTP responses are automatically decompressed.

The picture below shows the benefits of GZip compression.

 

compression

Source : Effects of GZip compression

How to implement compression on ASP.NET Web API

There are several ways to do this. One is to do it at the IIS level; this way all responses from ASP.NET Web API will be compressed.

Another way is to implement a custom delegating handler. I have found several examples of how to do this on the internet, but it often requires writing a lot of code.

The third way is to implement your own ActionFilter which can be applied at the method level, the controller level, or to the entire Web API.
 

 

Implement ASP.NET Web API GZip compression ActionFilter

For this example, with around 8 lines of code, I will use the very popular compression/decompression library called DotNetZip. This library can easily be downloaded from NuGet.

Now we implement the Deflate compression ActionFilter.

 public class DeflateCompressionAttribute : ActionFilterAttribute
 {

    public override void OnActionExecuted(HttpActionExecutedContext actContext)
    {
        var content = actContext.Response.Content;
        var bytes = content == null ? null : content.ReadAsByteArrayAsync().Result;
        var zlibbedContent = bytes == null ? new byte[0] : 
        CompressionHelper.DeflateByte(bytes);
        actContext.Response.Content = new ByteArrayContent(zlibbedContent);
        actContext.Response.Content.Headers.Remove("Content-Type");
        actContext.Response.Content.Headers.Add("Content-encoding", "deflate");
        actContext.Response.Content.Headers.Add("Content-Type","application/json");
        base.OnActionExecuted(actContext);
      }
  }

 

We also need a helper class to perform compression.

public class CompressionHelper
{ 
        public static byte[] DeflateByte(byte[] str)
        {
            if (str == null)
            {
                return null;
            }

            using (var output = new MemoryStream())
            {
                using (
                    var compressor = new Ionic.Zlib.DeflateStream(
                    output, Ionic.Zlib.CompressionMode.Compress, 
                    Ionic.Zlib.CompressionLevel.BestSpeed))
                {
                    compressor.Write(str, 0, str.Length);
                }

                return output.ToArray();
            }
        }
}

 

For the GZipCompressionAttribute the implementation is exactly the same; you only need to call GZipStream instead of DeflateStream in the helper method implementation, as the sketch below shows.
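A sketch of that GZip variant (same structure as the Deflate code above, with the helper folded into the attribute for brevity and the Content-Encoding set to gzip) could look like this:

// Sketch of the GZip counterpart; identical to the Deflate versions above
// except for the stream class and the Content-Encoding value.
public class GZipCompressionAttribute : ActionFilterAttribute
{
    public override void OnActionExecuted(HttpActionExecutedContext actContext)
    {
        var content = actContext.Response.Content;
        var bytes = content == null ? null : content.ReadAsByteArrayAsync().Result;
        var zippedContent = bytes == null ? new byte[0] : GZipByte(bytes);
        actContext.Response.Content = new ByteArrayContent(zippedContent);
        actContext.Response.Content.Headers.Remove("Content-Type");
        actContext.Response.Content.Headers.Add("Content-encoding", "gzip");
        actContext.Response.Content.Headers.Add("Content-Type", "application/json");
        base.OnActionExecuted(actContext);
    }

    private static byte[] GZipByte(byte[] str)
    {
        if (str == null)
        {
            return null;
        }

        using (var output = new MemoryStream())
        {
            using (var compressor = new Ionic.Zlib.GZipStream(
                output, Ionic.Zlib.CompressionMode.Compress,
                Ionic.Zlib.CompressionLevel.BestSpeed))
            {
                compressor.Write(str, 0, str.Length);
            }

            return output.ToArray();
        }
    }
}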

If we want to mark a method in a controller to be deflated, we just put the ActionFilter attribute above the method like this:

public class V1Controller : ApiController
{
  
    [DeflateCompression]
    public HttpResponseMessage GetCustomers()
    {

    }

}

 

If you find a better way to do this, please let me know.

 

The post ASP.NET Web API GZip compression ActionFilter with 8 lines of code appeared first on RadenkoZec blog.


Darrel Miller: Single purpose media types and reusability

The one great thing about twitter is that you quickly find out what you failed to explain clearly :-)  My efforts in advocating for single purpose media types failed to clarify that by single purpose, I am not suggesting that these media types should not be re-usable.  Let me try and explain.

My realization was triggered by @inadarei's tweet to me,

I'm going to take the liberty of assuming that this tweet was slightly tongue-in-cheek.  However, it did highlight to me the part of my explanation that I was missing.

It is a whacky world out there

For those who have better things to do than keep up with daily craziness in Silicon Valley, the significance of this image may not be immediately apparent.  A new start-up called Yo is making the news with an application that simply sends the message "Yo" to someone in your social network.  The joke of this tweeted image is that many people are likely to copy this idea and we will have many apps on our phone where each one will send just a single one word message.

My response to @inadarei included these two tweets,

Although still not a particularly serious suggestion, the example is extremely illustrative.  By single purpose media types, I am not suggesting that each and every API should invent media types that are designed to solve just their specific problems.  It is critical to realize that media types should be targeted at solving a single generic problem that is faced by many APIs.  It is only by doing this that we can achieve the level of re-usability and interoperability that makes the web successful.

Generalize the problem

By generalizing our problem space to the notion of a "salutation" we can define a single hypothetical media type and name it application/salutation+json.

To satisfy the functional requirements of an application that simply displays a single word message to an end user, all we need is a payload that looks something like:

{
   "message" : "Yo"
}

This media type could then be delivered by any API that wishes to transmit a single word message.  Now that the message has been generalized, there is no need to have a different client for each API.  In the same way that RSS feeds allowed us to use one client application to consume blog posts from many different blogs, so our salutation media type enables just a single client application for all APIs that support the format.

But that was a dumb example

Let me provide a more realistic scenario.   For many years I've been building ERP systems, and for the last 7 years I've been working on building an ERP API that uses media types and link relations to communicate information to a desktop client application.  For a period of time, I considered inventing my own media types for Invoices, Sales Orders, Customers, Purchase Orders, etc.  However, I quickly realized that the overhead of creating written specifications for the 500 or so entities that were in our ERP product would be a mammoth and very painful task.  I would also never likely see any re-use out of those media types.  It is highly unlikely that I could convince any other ERP vendor that they should re-use our Invoice format, as it would likely contain many capabilities that were specific to our API, and I'm sure they would believe that their format is better!

Don't those standards already exist?

As an industry we have tried on numerous occasions to create universal standards for the interchange of business documents (e.g. EDIFACT, ANSI X12, UBL).  None of these efforts have come close to having the adoption that HTML has, despite having massive amounts of money poured into convincing people to adopt them.

The problem with these previous efforts is that they tried to bite off a bigger problem than they could chew.  In order to manage the size of the problem they were trying to address, they first invented a meta-format and then applied schemas to that format to produce consistent documents for each business scenario.  The result was horribly complicated, with a huge barrier to entry.  If you don't believe me, here is the list of fields used by a UBL invoice.

Be less ambitious

Now imagine a much simpler scenario for just invoicing information.  Imagine if we could agree on a simple read-only format containing invoice information.  If services like Amazon, NewEgg, Staples, Zappos, etc, could expose their invoice information in this shared format, as well as the normal HTML and PDF, then this new format could then be consumed easily by services like Expensify, Mint, Quicken and what ever other services might help you manage your finances.   This scenario would be an achievable goal in my opinion and one that could quickly deliver real value.

Profiles can do that

There is no theoretical reason why this same goal could not be achieved by using generic hypermedia types and profiles.  However, you could now have the situation where Amazon likes using Hal, Staples decides on Siren for their company standard and NewEgg chooses JSON-LD.  The theory of profiles, as I understand it, is that everyone should be able to consistently apply the same "invoice" profile across multiple base generic media types.  However, consumers like Expensify and Mint are now faced with dealing with all these different base formats combined with profiles.  This seems like a significant amount of complexity for minimal gain.

But what about exploding media types

A common objection to creating media types that have a limited scope is the fear of "an explosion of media types".  The main concern here is multiple people creating media types that address the same purpose but have trivial differences.  Having one invoice format that has a property called "InvoiceDate" and another that has "dateInvoiced" is really not helping anyone. 

bigbang

Ironically, the competing generic hypermedia types that were partly a result of people trying to avoid creating an explosion of media types are now suffering from the "trivially different" problem.  Also, it is quite likely that the wide usage of schemas and profiles will suffer from APIs defining their own variants of semantically similar documents.

The important goal

Regardless of whether developers choose to use single purpose media types or generic hypermedia types with profiles, it is essential that we focus on creating artifacts that can be re-used across multiple APIs to enable sharing and integration that is "web scale".

Image Credit: Big bang https://flic.kr/p/cS8Hed 


Dominick Baier: Resource/Action based Authorization for OWIN (and MVC and Web API)

Authorization is hard – much harder than authentication because it is so application specific. Microsoft went through several iterations of authorization plumbing in .NET, e.g. PrincipalPermission, IsInRole, Authorization configuration element and AuthorizeAttribute. All of the above are horrible approaches and bad style since they encourage you to mix business and authorization logic (aka role names inside your business code).

WIF’s ClaimsPrincipalPermission and ClaimsAuthorizationManager tried to provide better separation of concerns – while this was a step in the right direction, the implementation was “sub-optimal” – based on a CLR permission attribute, exception based, no async, bad for unit testing etc…

In the past Brock and I worked on more modern versions that integrate nicely with frameworks like Web API and MVC, but with the advent of OWIN/Katana there was a chance to start over…

Resource Authorization Manager & Context
We are mimicking the WIF resource/action based authorization approach – which proved to be general enough to build your own logic on top of. We removed the dependency on System.IdentityModel and made the interface async (since you will probably need to do I/O at some point). This is the place where you will centralize your authorization policy:

public interface IResourceAuthorizationManager
{
    Task<bool> CheckAccessAsync(ResourceAuthorizationContext context);
}

 

(there is also a ResourceAuthorizationManager base class with some easy to use helpers for returning true/false and evaluations)

The context allows you to describe the actions and resources as lists of claims:

public class ResourceAuthorizationContext
{
    public IEnumerable<Claim> Action { get; set; }
    public IEnumerable<Claim> Resource { get; set; }
    public ClaimsPrincipal Principal { get; set; }
}

 

Middleware
The corresponding middleware makes the authorization manager available in the OWIN environment:

public void Configuration(IAppBuilder app)
{
    var cookie = new CookieAuthenticationOptions
    {
        AuthenticationType = "Cookie",
        ExpireTimeSpan = TimeSpan.FromMinutes(20),
        LoginPath = new PathString("/Login"),
    };
    app.UseCookieAuthentication(cookie);
 
    app.UseResourceAuthorization(new ChinookAuthorization());
}

 

Usage
Since the authorization manager is now available from the environment (key: idm:resourceAuthorizationManager), you can get hold of it from anywhere in the pipeline, construct the context, and call the CheckAccessAsync method.

The Web API and MVC integration packages provide a ResourceAuthorize attribute for declarative checks:

[ResourceAuthorize(ChinookResources.AlbumActions.View, ChinookResources.Album)]

 

And several extension methods for HttpContextBase and HttpRequestMessage, e.g.:

if (!HttpContext.CheckAccess(
    ChinookResources.AlbumActions.Edit,
    ChinookResources.Album,
    id.ToString()))
{
    return new HttpUnauthorizedResult();
}

 

or..

var result = Request.CheckAccess(
    ChinookResources.AlbumActions.Edit,
    ChinookResources.Album,
    id.ToString());

 

Testing authorization policy
Separating authorization policy from controllers and business logic is a good thing; centralizing the policy in a single place also has the nice benefit that you can now write unit tests against your authorization rules, e.g.:

[TestMethod]
public void Authenticated_Admin_Can_Edit_Album()
{
    var ctx = new ResourceAuthorizationContext(User("test", "Admin"),
        ChinookResources.AlbumActions.Edit,
        ChinookResources.Album);
    Assert.IsTrue(subject.CheckAccessAsync(ctx).Result);
}

 

or…

[TestMethod]
public void Authenticated_Manager_Cannot_Edit_Track()
{
    var ctx = new ResourceAuthorizationContext(User("test", "Manager"),
        ChinookResources.TrackActions.Edit,
        ChinookResources.Track);
    Assert.IsFalse(subject.CheckAccessAsync(ctx).Result);
}

 

Code, Samples, Nuget
The authorization manager, context, middleware and integration packages are part of Thinktecture.IdentityModel – see here.

The corresponding Nuget packages are:

..and here’s a sample using MVC (if anyone wants to add a Web API to it – send me a PR).


Filed under: ASP.NET, IdentityModel, Katana, OWIN, WebAPI


Darrel Miller: Single purpose media types and caching

My recent post asking people to refrain from creating more generic hypermedia types sparked some good conversation on Twitter between @mamund, @cometaj2, @mogsie, @inadarei and others.  Whilst thinking some more on the potential benefits of single purpose media types versus generic hypermedia types, I realized there is a correlation between single purpose media types and representation lifetimes.  I thought it might be worth adding to the conversation, but there is no way I could fit it in 140 chars, so I'm posting it here.

Age matters

When I say representation lifetime, I am referring to how long a representation can be considered fresh, from the perspective of caching.   The lifetime of a representation can be explicitly defined either by setting the Expires header or by setting the max-age parameter of the Cache-Control header.  If neither of these values is set, then a client can choose to use heuristics to determine how long the response might still be fresh.
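As a concrete illustration, in ASP.NET Web API terms (the framework most posts in this roundup use) a controller action can declare that lifetime explicitly; a minimal sketch:

// Sketch: the representation is declared fresh for one hour from the
// perspective of the client and any intermediary caches.
public HttpResponseMessage Get()
{
    var response = Request.CreateResponse(HttpStatusCode.OK, "some representation");
    response.Headers.CacheControl = new CacheControlHeaderValue
    {
        Public = true,
        MaxAge = TimeSpan.FromHours(1)
    };
    return response;
}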

BestBeforeDate

When using single purpose media types, those types often hint at representations that should have different lifetimes.  If you are building an HTML dashboard that contains images of charts showing KPIs (key performance indicators), then the HTML will likely have a much longer lifetime than that of the images.

However, if you are building a dynamic HTML application, then the HTML is likely to have a shorter lifespan than the images displayed on it.  Media types that provide metadata like CSS, or API discovery documents are likely to have long lifetimes, whereas media types that display status information are likely to be very short lived.

Hints for client heuristics

When I was reading the specification for apisjson I noticed that they had explicitly added a clause to say that if no lifetime information was provided then a lifetime of 7 days would be an appropriate value.  I think the idea of a media type specification explicitly stating suggested lifetime values is very interesting.  When chatting with Steven Willmott at the API Craft Meetup SF he mentioned that they had got the idea from the standard convention of caching robots.txt for 24 hours.

Just a thought

As I mentioned, this is simply an overly long tweet, rather than a blog post.  Nothing ground breaking, but I find that developers often forget about the value of setting caching headers.  If media type designers can provide guidance on what appropriate values might be then developers might pay a little more attention and the Internet might just get a little faster.

Image Credit: Best before date https://flic.kr/p/6MAMum


Darrel Miller: XSLT is easy, even for transforming JSON!

Most developers I talk to will cringe if they hear the acronym XSLT.  I suspect that reaction is derived from some past experience where they have seen some horrendously complex XML/XSLT combination.  There is certainly lots of that around. However, for certain types of document transformations, XSLT can be a very handy tool and with the right approach, and as long as you avoid edge cases, it can be fairly easy.

When I start building an XSLT transform, I always start with the “identity” transform,

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" >
    <xsl:output method="xml" indent="yes" />

    <xsl:template match="/ | @* | node()">
        <xsl:copy>
            <xsl:apply-templates select="* | @* | node()" />
        </xsl:copy>
    </xsl:template>
</xsl:stylesheet>

OptimusPrime

The identity transform simply traverses the nodes in the input document and copies them into the output document.

Make some changes

In order to make changes to the output document you need to add templates that will do something other than simply copy the existing node.

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" >
    <xsl:output method="xml" indent="yes" />

    <xsl:template match="/ | @* | node()">
        <xsl:copy>
            <xsl:apply-templates select="* | @* | node()" />
        </xsl:copy>
    </xsl:template>

    <!-- Change something -->
    <xsl:template match="foo">
        <xsl:element name="bar">
            <xsl:apply-templates select="* | @* | node()" />
        </xsl:element>
    </xsl:template>
</xsl:stylesheet>

This template matches any element named foo and in its place creates an element named bar that contains a copy of everything that foo contained.  XSLT will always use the template that matches most specifically.

Given the above XSLT, an input XML document like this,

<baz>
    <foo value="10" text="Hello World" />
</baz>

would be transformed into

<baz>
    <bar value="10" text="Hello World" />
</baz>

And add more changes…

The advantage of XSLT over doing transformations in imperative code is that adding more transformations to the document doesn’t make the XSLT more complex, just longer. For example, I could move the foo element inside another element by adding a new matching template.  The original parts of the template stay unchanged.

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" >
        <xsl:output method="xml" indent="yes" />

        <xsl:template match="/ | @* | node()">
            <xsl:copy>
                <xsl:apply-templates select="* | @* | node()" />
           </xsl:copy>
        </xsl:template>

        <xsl:template match="foo">
           <xsl:element name="bar">
                <xsl:apply-templates select="* | @* | node()" />
           </xsl:element>
        </xsl:template>

        <!-- Additional template that does not change previous -->
        <xsl:template match="baz">
            <xsl:element name="splitz">
               <xsl:copy>
                 <xsl:apply-templates select="* | @* | node()" />
              </xsl:copy>
            </xsl:element>
        </xsl:template>

</xsl:stylesheet>

With imperative code, adding additional transformations often requires refactoring existing code.  Due to XSLT's more functional/declarative heritage, it tends to stay cleaner when you add more to it.

Just because I can…

And for those of you who love JSON too much to ever go near XSLT, below is some sample code that takes a JSON version of my sample document and applies the XSLT transform to it using the XML support in JSON.NET.

Starting with this JSON

{
    "baz": {
        "foo": {
            "value": "10",
            "text": "HelloWorld"
        }
    }
}

we end up with

{
    "splitz": {
        "baz": {
            "bar": {
                "value": "10",
                "text": "Hello World"
            }
        }
    }
}

Here’s the code I used to do this.  Please don’t take this suggestion too seriously.  I suspect the performance of this approach would be pretty horrific.  However, if you have a big chunk of JSON that you need to do a complex, offline transformation on, it might just prove useful.

var jdoc = JObject.Parse("{ 'baz' : { 'foo' : { 'value' : '10', 'text' : 'Hello World' } }}");
            
// Convert Json to XML
var doc = (XmlDocument)JsonConvert.DeserializeXmlNode(jdoc.ToString());

// Create XML document containing Xslt Transform
var transform = new XmlDocument();
transform.LoadXml(@"<?xml version='1.0' encoding='utf-8'?>
                    <xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform' >
                        <xsl:output method='xml' indent='yes' omit-xml-declaration='yes' />

                        <xsl:template match='/ | @* | node()'>
                            <xsl:copy>
                                <xsl:apply-templates select='* | @* | node()' />
                            </xsl:copy>
                        </xsl:template>

                        <xsl:template match='foo'>
                            <xsl:element name='bar'>
                                <xsl:apply-templates select='* | @* | node()' />
                            </xsl:element>
                        </xsl:template>
                        <xsl:template match='baz'>  
                            <xsl:element name='splitz'>
                                <xsl:copy>
                                    <xsl:apply-templates select='* | @* | node()' />
                                </xsl:copy>
                        </xsl:element></xsl:template>
                    </xsl:stylesheet>");

//Create compiled transform object that will actually do the transform.
var xslt = new XslCompiledTransform();
xslt.Load(transform.CreateNavigator());

// Transform our Xml-ified JSON
var outputDocument = new XmlDocument();
var stream = new MemoryStream();
xslt.Transform(doc, null, stream);
stream.Position = 0;
outputDocument.Load(stream);

// Convert back to JSON :-)
string jsonText = JsonConvert.SerializeXmlNode(outputDocument);

Duck-billed Platypus

Image credit: Transformer https://flic.kr/p/2LK2ph
Image credit: Platypus https://flic.kr/p/8tYptD


Darrel Miller: Please, no more generic hypermedia types

This opinion has been stewing for a couple of years now, but following an excellent conversation I had with Ted Young the other evening at the API Craft San Francisco event, I think it is time to have more discussion around this subject.

A little bit of history

I have been a big supporter of the HAL media type ever since Mike Kelly showed me his first sample on the freenode #REST IRC channel. I actively pushed for the use of HAL within several teams at Microsoft and a number of other large corporations. I saw HAL as an opportunity to introduce people to hypermedia using a simple, flexible, standardized format.  HAL has its flaws, but it served my purpose.

Since HAL was introduced, a number of other hypermedia types have been created, including Collection+JSON, Siren, JSON-LD, JsonAPI, Mason, UBER, OData and probably a few others that I have forgotten.

I think it is excellent that there is experimentation going on in this area. Much of this work can be used as the basis for future media type designs. The more people who have experience creating media types the better. However, there is a trend emerging where we are repeatedly creating general purpose hypermedia types that attempt to provide a single solution for all the resources in a API.

VanillaIceCream

What's the problem?

Developers who wish to build a hypermedia API are now required to make a decision up front on whether they want to use HAL, or Siren, or Mason, or JSON-LD, or OData, or UBER. All these designs are only incrementally different than each other and simply reflect the preferences and priorities of their authors.

The fragmentation of our already fairly small community of developers who are using hypermedia formats is really not helping us advance the adoption of hypermedia. Additionally, by trying to build hypermedia types that are sufficiently generic to support a wide range of use-cases, we have been forced to introduce secondary mechanisms to convey application semantics. Once a developer has chosen a hypermedia format, they must now choose a profile format, or a schema language, or create extension link relations in order to communicate the remainder of the semantics that the general purpose hypermedia formats do not support.

IceCreamChoices

I intentionally left Collection+JSON out of the list of competing media types because I see C+J as different, as it was designed to represent simple lists of things. This is a very focused goal and yet one that almost every application has a need for.

So what does work?

I realize that this example is going to turn off many people, but there is an elephant in the room that we just can't continue to ignore. The human web is built around a fairly small set of media types that have narrowly focused goals.

  • text/html provides read-only textual content to be presented to an end user.
  • application/x-www-form-urlencoded provides a way of transmitting user input to an origin server.
  • text/css provides hints to the user-agent on how to render the content of the HTML document.
  • image/(png|jpeg|gif) provides bitmap based images
  • image/svg provides vector based images
  • application/javascript provides source code for client side scripts

It is the combination of these narrowly focused media types that enable web applications to produce a huge variety of user experiences.

The UNIX philosophy

In my opinion the best media types are the ones that are designed to solve a single problem, solve it well and then allow the combination of those media types. This allows the media type to remain fairly simple. Simple to create, simple to parse and simple to replace if a better format comes along. It is also easier to support multiple similar formats. The web only supports one text/html format, but it supports several image formats.

IceCreamToppings

Examples of potential media types

During my years of building hypermedia based systems, I have run into many situations that would have benefited from having specialized media types. Here are some examples:

  • Discovery document : An entry point document that allows a client to discover other resources that exist within a system. e.g json-home
  • List of things : A tabular list of pieces of data. e.g. Collection+Json, text/csv
  • Data entry form : A representation of a set of data elements that have types, constraints, validation rules, intended for capturing information from the user. e.g. HTML forms are a very primitive example
  • Captured event stream : A sequence of events captured by a client application potentially used for replaying on some remote system. e.g. ActivityStreams, Json-patch
  • Error document : document used for reporting errors resulting from a HTTP Request. e.g. http-problem, vnd.error+(xml|json)
  • Operation Status document : Document which describes the current status of a long running operation. application/status+(xml|json)
  • Banded Report document : Representation of the set of data used by report writing tools to generate printed reports
  • Query description : Query language that selects and filters a subset of data e.g. application/sql, application/opensearchdescription+xml
  • Datapoints : for driving dashboard widgets and graphs
  • Security permissions management : tasks, claims, users
  • Patch format : Apply a delta set of updates to a target resource Json-patch

Sundae

Call to action

I'd like to see those developers who do have experience building hypermedia types working on these kinds of focused media types instead of creating competing, one-size-fits-all formats.
I'd like to see widget and component developers building native support for consuming these specialized media types.
I'd like to see more application developers identifying opportunities for creating these specialized media types.
I'd like to see API Commons filled with a wide variety of media type specifications that I can pick and choose as building blocks for building my API.

I want to have a future where I can expose my data in a standardized format and then simply connect hypermedia enabled client components that will consume those formats. Now that would improve productivity.

That's my opinion, I'd love to hear yours.

Image Credit: Vanilla Ice cream https://flic.kr/p/9JyfVG
Image Credit: Sundae https://flic.kr/p/3aK9TU
Image Credit: Toppings https://flic.kr/p/fuY7GZ


Darrel Miller: Self-descriptive, isn't. Don't assume anything.

With the recent surge of interest in hypermedia APIs I am beginning to see the term “self-descriptive” thrown around quite frequently.  Unfortunately, the meaning of self-descriptive is not exactly self-descriptive, leading to misuse of the term. 

Consider the following HTTP requests,

Example 1:

GET /address/99
=> 200 OK
Content-Type: application/json
Content-Length: 508

{
"_meta" : {
	"street" : { 	"type" : "string", 
		     	"length" : 80, 
		     	"description" : "Street address"},
	"city" :   { 	"type" : "string", 
		     	"length" : 20, 
		     	"description" : "City name"},
	"postcode" : { 	"type" : "string", 
		       	"length" : 11, 
		       	"description" : "Postal Code / Zip Code"},
	"country" : {	"type" : "string", 
			"length" : 15, 
			"description" : "Country name"},
	},
"address" : {
	"street" : "1 youville way",
	"city" : "Mysteryville",
	"postcode" : "H3P 2Z9",
	"country" : "Canada"
	}
}


Example 2:

GET /address/99
=> 200 OK
Content-Type: application/vnd.gabba.berg
Content-Length: 90

<berg>
   <blurp filk="iggy">ababa</blurp>
   <bop>
      <bip>yerk</bip>
   </bop>
</berg>


I suspect a fair number of people will be surprised when I make the claim that from the perspective of self-descriptive HTTP messages, the first message is not self-descriptive and the second one is. 

The first may contain more descriptive content, but it doesn't use the standardized methods provided to us by HTTP to identify the semantics of the content.  The client is forced to make assumptions.  The second one is explicit about identifying the meaning of the payload.

Nametag

Identify yourself

Self-descriptive in HTTP does not mean the message describes itself.  It means that the message depends on semantic identifiers, using mechanisms defined by HTTP (e.g. media-types and link relations) to convey the complete meaning of the message. This allows client application to know whether it can understand the incoming message. 

The first example contains all kinds of metadata which attempts to describe the actual data in the message.  However, how can the client know if it is able to interpret the metadata?   It reminds me of the first French language course I took in Quebec, where the teacher started providing instruction in French!  Fortunately, humans are pretty intelligent creatures, software applications, not so much. 

Declare your semantics

In example 1, the media type in the content type header is declared as "application/json".  Unfortunately that tells me nothing about the meaning of the information in the message body.  The client can process the content as JSON, but the message is telling it nothing else about the meaning of the message.  Allowing a client to assume that because you have retrieved the representation from /address/99 that the response will contain information about an address is a violation of the self-descriptive constraint.

Why yes, I do speak Klingon

Example 2, which at first glance appears completely unintelligible to a developer, provides a media type which, in theory, should be registered with IANA, and therefore I should be able to find a written specification that explains what all those weird attributes and elements mean.  Once I have read the specification, I can write code in my client application to process content that is identified as "application/vnd.gabba.berg".

There is no magic

I get the impression that some developers perceive hypermedia and self-descriptiveness as some magical property that will allow clients to perform tasks that they previously had no idea how to do. 

A client can only process media types it understands.  A web browser knows how to render HTML, follow links, fill forms and run script.  The browser is completely ignorant of the fact that one HTML page might be doing banking transactions and another submitting an order for a year's supply of Shamwow products. 

The effect might be magical, but the reality is that the hypermedia driven clients can only do exactly what they have been coded to do. 

 

Image credit: Name tag https://flic.kr/p/27Y1J9


Darrel Miller: Distributed Web API discovery

The site apisjson.org defines a specification for creating a document that declares the existence of an API on the web.   This document provides some identification information about the API and links to documentation and to the actual API root URL.  It also supports pointing to other resources like API design metadata documents and contact information for maintainers of the API.

earth

Here is a fairly minimal version of what the declaration document might look like for the Runscope API.

{
  "Name": "Runscope APIs",
  "Description": "APIs for interacting with Runscope Tools",
  "Tags": ["testing", "http", "web api", "debugging","monitoring"],                            
  "Created": "06/12/2014",                                                                  
  "Modified": "06/12/2014",
  "Url": "http://api.runscope.com/apis.json",
  "SpecificationVersion" : "0.14",
  "Apis": [
    {
      "Name": "Runscope API",
      "Description": "An API for interacting with the Runscope HTTP debugging, testing and monitoring tools",
      "humanUrl": "https://www.runscope.com/docs/api/overview",
      "baseUrl": "https://api.runscope.com/",
      }]
    }
  ]
}

There is an experimental search engine available here http://apis.io/ that maintains a list of links to these API declaration documents and makes the contents searchable.  If I were to take the JSON above and expose it at http://api.runscope.com/apis.json and then submit that URL, the search engine would index my declaration file and allow users to find it.

The important bit

One of the best parts of this specification, regardless of the fact that it is still immature and I'm sure has many more revisions in front of it, is that the authors are really trying to embrace the way the web is supposed to work. 

  • They have identified the focused scenario of "api discovery" that can take advantage of a standardized solution. 
  • They are taking no dependencies on tooling or frameworks. 
  • They are considering registering this as a standardized IANA media type.
  • They are recommending the use of a standardized link relation type, described-by, to allow APIs to point to their own declaration document to help make APIs more self-describing. 
  • They are considering making use of well-known URIs as a way to allow crawlers to find these API declaration documents without needing them to be submitted manually.
  • The specification is being developed in the open. They have a github repo and a Google group where they are accepting and acting on feedback they receive and they are iterating quickly.   

This is how work on the web should be done: by building on the conventions and standards that others have already produced instead of re-inventing and fragmenting.  Kudos to Kin Lane and Steven Willmott for pioneering this effort.  I look forward to many more people getting involved to make this successful.

Now if only we could find a better name for the format :-)

Image credits: Earth https://flic.kr/p/8FnV8


Darrel Miller: Returning raw JSON content from ASP.NET Web API

In a previous post I talked about how to send raw JSON to a web API and consume it easily.  This is a non-obvious process because ASP.NET Web API is optimized for sending and receiving arbitrary CLR objects that then get serialized by the formatters in the request/response pipeline.  However, sometimes you just want to have more direct control over the format that is returned in your response.  This post talks about some ways you can regain that control.

Serialize your JSON document

If the only thing you want to do is take plain old JSON and return it, then you can rely on the fact that the default JsonMediaTypeFormatter knows how to serialize JToken objects.

public class JsonController : ApiController
{
    public JToken Get()
    {
        JToken json = JObject.Parse("{ 'firstname' : 'Jason', 'lastname' : 'Voorhees' }");
        return json;
    }

}

If you want a bit more control over the returned message then you can peel off a layer of convenience and return an HttpResponseMessage with an HttpContent object.

Derive from HttpContent for greater control

The HTTP object model that is used by ASP.NET Web API is quite different than many other web frameworks because it makes an explicit differentiation between the response message and the payload body that is contained in the response message.  The HttpContent class is designed as an abstract base class for the purpose of providing a standard interface to any kind of payload body.

Out of the box there are a number of specialized HttpContent classes: StringContent, ByteArrayContent, FormUrlEncodedContent, ObjectContent and a number of others.  However, it is fairly straightforward to create our own derived classes to deliver different kinds of payloads.

 

To return JSON content you can create a JsonContent class that looks something like this,

public class JsonContent : HttpContent
{
    private readonly JToken _value;

    public JsonContent(JToken value)
    {
        _value = value;
        Headers.ContentType = new MediaTypeHeaderValue("application/json");
    }

    protected override Task SerializeToStreamAsync(Stream stream,
        TransportContext context)
    {
        var jw = new JsonTextWriter(new StreamWriter(stream))
        {
            Formatting = Formatting.Indented
        };
        _value.WriteTo(jw);
        jw.Flush();
        return Task.FromResult<object>(null);
    }

    protected override bool TryComputeLength(out long length)
    {
        length = -1;
        return false;
    }
}


This JsonContent class can then be used like this,

public class JsonContentController : ApiController
{
    public HttpResponseMessage Get()
    {
        JToken json = JObject.Parse("{ 'firstname' : 'Jason', 'lastname' : 'Voorhees' }");
        return new HttpResponseMessage()
        {
            Content = new JsonContent(json)
        };
    }
}

This approach provides the benefit of allowing you to either manipulate the HttpResponseMessage in the controller action, or if there are other HttpContent.Headers that you wish to set then you can do that in JsonContent class.

Making a request for this resource would look like this,

> GET /JsonContent HTTP/1.1
> User-Agent: curl/7.28.1
> Host: 127.0.0.1:1001
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 43
< Content-Type: application/json; charset=utf-8
< Server: Microsoft-HTTPAPI/2.0
< Date: Wed, 11 Jun 2014 21:27:50 GMT
<
{"firstname":"Jason","lastname":"Voorhees"}

Wrapping HttpContent for Additional Transformations

Another nice side-effect of using HttpContent classes is that they can be used as wrappers to perform transformations on content.  These wrappers can be layered and will just automatically work whether the content is buffered or streamed directly over the network.  For example, the following is possible,

public HttpResponseMessage Get()
{
    JToken json = JObject.Parse("{ 'property' : 'value' }");
    return new HttpResponseMessage()
    {
        Content = new EncryptedContent(new CompressedContent(new JsonContent(json)))
    };
}

This would create the stream of JSON content, compress it, encrypt it and set all the appropriate headers.
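
The CompressedContent and EncryptedContent classes are not shown in this post, but a minimal sketch of my own of what such a wrapper might look like (assuming gzip is an acceptable encoding, and requiring System.IO.Compression) follows the same pattern as JsonContent; an EncryptedContent wrapper would be the same shape, writing through a CryptoStream instead of a GZipStream:

public class CompressedContent : HttpContent
{
    private readonly HttpContent _innerContent;

    public CompressedContent(HttpContent innerContent)
    {
        _innerContent = innerContent;
        // Preserve the inner content's headers (e.g. Content-Type) and declare the encoding
        foreach (var header in innerContent.Headers)
        {
            Headers.TryAddWithoutValidation(header.Key, header.Value);
        }
        Headers.ContentEncoding.Add("gzip");
    }

    protected override async Task SerializeToStreamAsync(Stream stream,
        TransportContext context)
    {
        using (var gzip = new GZipStream(stream, CompressionMode.Compress, leaveOpen: true))
        {
            await _innerContent.CopyToAsync(gzip);
        }
    }

    protected override bool TryComputeLength(out long length)
    {
        // The compressed length is not known until the content is written
        length = -1;
        return false;
    }
}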

Separation of concerns promotes reuse

By focusing on the payload as an element independent of the response message it becomes easier to re-use the specialized content classes for different resources.  In the APIs I have built I have had success creating many other content classes of this kind.

Image credit : Jason https://flic.kr/p/gQEp5X
Image credit : Knobs and switches https://flic.kr/p/2YJC3Y


Ali Kheyrollahi: BeeHive Series - Part 3: BeeHive 0.5, RabbitMQ and more

Level [T4]

BeeHive is a friction-free library to do Reactor Cloud Actors - effortlessly. It defines abstractions for the message, queue and the actors and all you have to do is to define your actors and connect their dots using subscriptions. If it is the first time you read about BeeHive, you could have a look at previous posts but basically a BeeHive Actor (technically Processor Actor) is very simple:

[ActorDescription("TopicName-SubscriptionName")]
public class MyActor : IProcessorActor
{
public Task<IEnumerable<Event>> ProcessAsync(Event evnt)
{
// impl
}
}
All you do is consume a message, do some work and then return typically one, sometimes zero and rarely many events back.
A few key things to note here.

Event

First of all, an Event is an immutable, unique and timestamped message which documents a significant business event. It has a string body which normally is a JSON serialisation of the actual message object - but it does not have to be.

Messages are usually arbitrary bytes, so why is it a string here? While it was possible to use byte[], if you need to send binary blobs or you need custom serialisation, you are probably doing something wrong. Bear in mind, BeeHive is targeted at solutions that require scale, High Availability and linearisation. If you need to attach a big binary blob, just drop it in a key value store using IKeyValueStore and put the link in your message. If it is small, use Base64 encoding. Also, your messages need to be very simple DTOs (and by simple I do not mean small, but a class with getters and setters); if you are having problems serialising them then, again, you are doing something wrong.

Queue naming

BeeHive uses a naming convention for queues, topics and subscriptions. Basically it is in the format of TopicName-SubscriptionName. There are a few rules with this:
  • Understandably, TopicName or SubscriptionName should not contain hyphens
  • If the value of TopicName and SubscriptionName is the same, it is a simple queue and not a publish-subscribe queue. For example, "OrderArrived-OrderArrived"
  • If you leave off the SubscriptionName then you are referring to the topic. For example "OrderArrived-".
Queue names are represented by the QueueName class. If you need to construct queue names, use its static methods:

var n1 = QueueName.FromSimpleQueueName("SimpleQ"); // "SimpleQ-SimpleQ"
var n2 = QueueName.FromTopicName("Topic"); // "Topic-"
var n3 = QueueName.FromTopicAndSubscriptionName("topic", "Sub"); // "Topic-Sub"

There is a QueueName property on the Event class. This property defines where to send the event message. The queue name must be the name of the topic or simple queue name.
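
For illustration, an actor that wants to route a follow-up event to a topic might set that property as in the sketch below; note that the Event constructor shape and the exact type of the property shown here are assumptions of mine, not taken from the BeeHive source:

// Purely illustrative - constructor and property shapes are assumed, not from BeeHive
var next = new Event("{ \"orderId\": 42 }")
{
    QueueName = "OrderShipped-"  // conventional topic name, so every subscription receives it
};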

IEventQueueOperator

This interface got a makeover in this release. I had not been happy with the interface as it had some inconsistencies - especially in terms of creating queues. Thanks to Adam Hathcock who reminded me, this is now done.

With QueueName ability of differentiating topics and simple queue, this value needs to be either name of the simple queue (in the example above "SimpleQ") or the conventional topic name (in the example above "Topic-").

So here is the interface(s) as it stands now:

public interface ISubscriptionOperator<T>
{
    Task<PollerResult<T>> NextAsync(QueueName name);
    Task AbandonAsync(T message);
    Task CommitAsync(T message);
    Task DeferAsync(T message, TimeSpan howLong);
}

public interface ITopicOperator<T>
{
    Task PushAsync(T message);
    Task PushBatchAsync(IEnumerable<T> messages);
}

public interface IQueueOperator<T> : ITopicOperator<T>, ISubscriptionOperator<T>
{
    Task CreateQueueAsync(QueueName name);
    Task DeleteQueueAsync(QueueName name);
    Task<bool> QueueExists(QueueName name);
}

public interface IEventQueueOperator : IQueueOperator<Event>
{
}

The main changes were made to IQueueOperator<T>, whose methods now take the QueueName, which made it simpler.

RabbitMQ Roadmap

BeeHive targets cloud frameworks. IEventQueueOperator and main data structures have been implemented for Azure. Next is AWS.

Amazon Web Services (AWS) provides Simple Queue Service (SQS) which only supports simple send-receive scenarios and not Publish-Subscribe cases. With this in mind, it is most likely that other message brokers will be used although a custom implementation of pub-sub based on Simple Notification Service (SNS) has been reported. Considering RabbitMQ is by far the most popular message broker out there (is it not?) it is sensible to pick this implementation first.

The RabbitMQ client for .NET has a very simple API and working with it is very easy. However, the connection implementation leaves a lot to be desired. EasyNetQ has a sophisticated connection implementation that covers dead connection refreshes and caters for round-robin in a High-Availability scenario. Using a full framework just for the connection is not really an option, hence I need to implement something similar.
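
To give a feel for the kind of thing that needs implementing, here is a rough sketch of my own (not BeeHive or EasyNetQ code) of a connection holder that re-creates a RabbitMQ connection when it has died; a real implementation would also need round-robin host selection and shutdown handling. It uses only ConnectionFactory, IConnection.IsOpen and CreateConnection from RabbitMQ.Client:

public class ResilientConnectionProvider
{
    private readonly ConnectionFactory _factory;
    private IConnection _connection;
    private readonly object _lock = new object();

    public ResilientConnectionProvider(string hostName)
    {
        _factory = new ConnectionFactory { HostName = hostName };
    }

    public IConnection GetConnection()
    {
        lock (_lock)
        {
            // Re-create the connection if it was never opened or has since died
            if (_connection == null || !_connection.IsOpen)
            {
                _connection = _factory.CreateConnection();
            }
            return _connection;
        }
    }
}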

So for now, I am releasing an alpha version without the HA and connection refresh features to get community feedback. So please do ping me with what you think.

Since this is a pre-release, you need to use -Pre to get it installed:

PM> Install-Package BeeHive.RabbitMQ -Pre


Darrel Miller: There is Unicode in your URL!

In our Runscope HipChat room a few weeks ago, I was asked about Unicode encoding in URLs.  After a quick sob about why I never get asked the easy questions, I decided it was time to do some investigating. 

I had explored this subject in the past whilst trying to get Unicode support working in my URI Templates library.  At that time I had got lost in the mysteries of Unicode normalization and never actually got to the bottom of the problem.  This time I was determined.

Get to the point, the Code Point

To cut a long story short, the solution for what I believe to be the common scenario is fairly straightforward.  To support Unicode in a URI you simply need to convert the Unicode "code point" into UTF-8 bytes and then percent-encode those bytes.  The percent-encoded bytes can then be embedded directly in the URL.

As an example, consider we want to embed the character that has the code point \u263A into our URI.  We can create a string that has that code point in C# like this,

var s = "Hello World \u263A";

Show me the bytes

Now that string can be converted to UTF-8 bytes like this,

var bytes = Encoding.UTF8.GetBytes(s);

and finally they can be percent encoded like this,

var encodedstring = string.Join("",bytes.Select(b => b > 127 ? 
Uri.HexEscape((char)b) : ((char)b).ToString()));

The trick here is that we only want to do the HexEscape for characters that are part of a multi-byte UTF8 encoding of a code point.  UTF-8 guarantees that all bytes that are part of a multi-byte character encoding will have the high bit set and therefore will be greater than 127. 

One caveat to be aware of is that because you are going to be including this string in a URI, you should call either Uri.EscapeUriString() or Uri.EscapeDataString() before doing the Unicode escaping, or you could end up double escaping the Unicode escapes.

A complete example

Here is a small ScriptCS example that shows how this could be used,

#r "system.net.http.dll"
using System.Net.Http;

var httpClient = new HttpClient();

var url = EncodeUnicode("http://stackoverflow.com/search?q=hello+world\u263A");
var response = httpClient.GetAsync(url).Result;

Console.WriteLine(response.StatusCode);

public string EncodeUnicode(string s) {
  var bytes = Encoding.UTF8.GetBytes(s);
  var encodedstring = string.Join("",bytes.Select(b => b > 127 ? 
           Uri.HexEscape((char)b) : ((char)b).ToString()));
  return encodedstring;
}

This produces the following request,

GET http://stackoverflow.com/search?q=hello+world%E2%98%BA HTTP/1.1
Host: stackoverflow.com

The long story

One of the reasons I was originally confused when first looking into this was that Unicode supports the ability to generate the same character in multiple different ways.  This happens because some characters can be combined into composite characters.  Technically, before percent-encoding the bytes a normalization process should occur to ensure that sorting and comparison of encoded Unicode characters works as expected.  I suspect a large number of use cases don't need this process, but it is worth being aware of.
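
As an aside (my addition, not part of the original post), .NET exposes that normalization step through String.Normalize; normalizing to Form C before encoding means the composed and decomposed spellings of the same character end up as identical percent-encoded bytes:

using System.Text;

var decomposed = "e\u0301";  // 'e' followed by a combining acute accent (two code points)
var composed = decomposed.Normalize(NormalizationForm.FormC);  // a single code point (U+00E9)

// After normalization, equivalent spellings produce the same UTF-8 bytes,
// and therefore the same percent-encoded form.
var bytes = Encoding.UTF8.GetBytes(composed);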


Dominick Baier: DotNetRocks on OpenID Connect with Brock and Me

Recorded at NDC Oslo:

http://www.dotnetrocks.com/default.aspx?ShowNum=993


Filed under: Conferences & Training, IdentityServer, OAuth, OpenID Connect, OWIN, WebAPI


Dominick Baier: NDC Oslo 2014 Slides, Samples and Videos

As always – NDC was a great conference! Here’s the list of resources relevant to my talks:

IdentityServer v3 preview: github

Web API Access Control & Authorization: slides / video

OpenID Connect: slides / video

 


Filed under: ASP.NET, Conferences & Training, IdentityServer, OAuth, OpenID Connect, WebAPI


Taiseer Joudeh: AngularJS Token Authentication using ASP.NET Web API 2, Owin, and Identity

This is the second part of AngularJS Token Authentication using ASP.NET Web API 2 and Owin middleware; you can find the first part using the link below:

You can check the demo application on (http://ngAuthenticationWeb.azurewebsites.net), play with the back-end API for learning purposes (http://ngauthenticationapi.azurewebsites.net), and check the source code on Github.

AngularJS Authentication

In this post we’ll build sample SPA using AngularJS, this application will allow the users to do the following:

  • Register in our system by providing username and password.
  • Secure certain views from being viewed by anonymous users.
  • Allow registered users to log-in and keep them logged in for 30 minutes (because we are using refresh tokens) or until they log out from the system; this should be done using tokens.

If you are new to AngularJS, you can check my other tutorial which provides step by step instructions on how to build a SPA using AngularJS. It is important to understand the fundamental aspects of AngularJS before you start working with it; in this tutorial I’ll assume that the reader has a basic understanding of how AngularJS works.

Step 1: Download Third Party Libraries

To get started we need to download all libraries needed in our application:

  • AngularJS: We’ll serve AngularJS from CDN, the version is 1.2.16.
  • Loading Bar: We’ll use the loading bar as a UI indication for every XHR request the application will make; to get this plugin we need to download it from here.
  • UI Bootstrap theme: to style our application, we need to download a free ready-made Bootstrap theme from http://bootswatch.com/; I’ve used a theme named “Yeti”.

Step 2: Organize Project Structure

You can use your favorite IDE to build the web application; the app is completely decoupled from the back-end API, so there is no dependency on any server-side technology here. In my case I’m using Visual Studio 2013, so add a new project named “AngularJSAuthentication.Web” to the solution we created in the previous post; the template for this project is “Empty” without any core dependencies checked.

After you add the project you can organize your project structure as in the image below; I prefer to keep all the AngularJS application and resource files we’ll create in a folder named “app”.

AngularJS Project Structure

Step 3: Add the Shell Page (index.html)

Now we’ll add the “Single Page” which is a container for our application; it will contain the navigation menu and the AngularJS directive for rendering the different application views (“pages”). After you add the “index.html” page to the project root we need to reference the 3rd party JavaScript and CSS files needed, as shown below:

<!DOCTYPE html>
<html data-ng-app="AngularAuthApp">
<head>
    <meta content="IE=edge, chrome=1" http-equiv="X-UA-Compatible" />
    <title>AngularJS Authentication</title>
    <link href="content/css/bootstrap.min.css" rel="stylesheet" />
    <link href="content/css/site.css" rel="stylesheet" />
<link href="content/css/loading-bar.css" rel="stylesheet" />
    <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1" />
</head>
<body>
    <div class="navbar navbar-inverse navbar-fixed-top" role="navigation" data-ng-controller="indexController">
        <div class="container">
            <div class="navbar-header">
                  <button class="btn btn-success navbar-toggle" data-ng-click="navbarExpanded = !navbarExpanded">
                        <span class="glyphicon glyphicon-chevron-down"></span>
                    </button>
                <a class="navbar-brand" href="#/">Home</a>
            </div>
            <div class="collapse navbar-collapse" data-collapse="!navbarExpanded">
                <ul class="nav navbar-nav navbar-right">
                    <li data-ng-hide="!authentication.isAuth"><a href="#">Welcome {{authentication.userName}}</a></li>
                    <li data-ng-hide="!authentication.isAuth"><a href="#/orders">My Orders</a></li>
                    <li data-ng-hide="!authentication.isAuth"><a href="" data-ng-click="logOut()">Logout</a></li>
                    <li data-ng-hide="authentication.isAuth"> <a href="#/login">Login</a></li>
                    <li data-ng-hide="authentication.isAuth"> <a href="#/signup">Sign Up</a></li>
                </ul>
            </div>
        </div>
    </div>
    <div class="jumbotron">
        <div class="container">
            <div class="page-header text-center">
                <h1>AngularJS Authentication</h1>
            </div>
            <p>This single page application is built using AngularJS, it is using OAuth bearer token authentication, ASP.NET Web API 2, OWIN middleware, and ASP.NET Identity to generate tokens and register users.</p>
        </div>
    </div>
    <div class="container">
        <div data-ng-view="">
        </div>
    </div>
    <hr />
    <div id="footer">
        <div class="container">
            <div class="row">
                <div class="col-md-6">
                    <p class="text-muted">Created by Taiseer Joudeh. Twitter: <a target="_blank" href="http://twitter.com/tjoudeh">@tjoudeh</a></p>
                </div>
                <div class="col-md-6">
                    <p class="text-muted">Taiseer Joudeh Blog: <a target="_blank" href="http://bitoftech.net">bitoftech.net</a></p>
                </div>
            </div>
        </div>
    </div>
    <!-- 3rd party libraries -->
    <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.16/angular.min.js"></script>
    <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.16/angular-route.min.js"></script>
    <script src="scripts/angular-local-storage.min.js"></script>
    <script src="scripts/loading-bar.min.js"></script>
    <!-- Load app main script -->
    <script src="app/app.js"></script>
    <!-- Load services -->
    <script src="app/services/authInterceptorService.js"></script>
    <script src="app/services/authService.js"></script>
    <script src="app/services/ordersService.js"></script>
    <!-- Load controllers -->
    <script src="app/controllers/indexController.js"></script>
    <script src="app/controllers/homeController.js"></script>
    <script src="app/controllers/loginController.js"></script>
    <script src="app/controllers/signupController.js"></script>
    <script src="app/controllers/ordersController.js"></script>
</body>
</html>

Step 4: “Booting up” our Application and Configure Routes

We’ll add a file named “app.js” in the root of the “app” folder; this file is responsible for creating the modules in our application. In our case we’ll have a single module called “AngularAuthApp”; we can consider the module as a collection of services, directives, and filters which are used in the application. Each module has a configuration block which gets applied to the application during the bootstrap process.

As well, we need to define and map the views to their controllers, so open the “app.js” file and paste the code below:

var app = angular.module('AngularAuthApp', ['ngRoute', 'LocalStorageModule', 'angular-loading-bar']);

app.config(function ($routeProvider) {

    $routeProvider.when("/home", {
        controller: "homeController",
        templateUrl: "/app/views/home.html"
    });

    $routeProvider.when("/login", {
        controller: "loginController",
        templateUrl: "/app/views/login.html"
    });

    $routeProvider.when("/signup", {
        controller: "signupController",
        templateUrl: "/app/views/signup.html"
    });

    $routeProvider.when("/orders", {
        controller: "ordersController",
        templateUrl: "/app/views/orders.html"
    });

    $routeProvider.otherwise({ redirectTo: "/home" });
});

app.run(['authService', function (authService) {
    authService.fillAuthData();
}]);

So far we’ve defined and mapped 4 views to their corresponding controllers, as shown below:

Orders View

Step 5: Add AngularJS Authentication Service (Factory)

This AngularJS service will be responsible for signing up new users, logging registered users in and out, and storing the generated token in the client’s local storage so this token can be sent with each request to access secure resources on the back-end API. The code for the authService will be as shown below:

'use strict';
app.factory('authService', ['$http', '$q', 'localStorageService', function ($http, $q, localStorageService) {

    var serviceBase = 'http://ngauthenticationapi.azurewebsites.net/';
    var authServiceFactory = {};

    var _authentication = {
        isAuth: false,
        userName : ""
    };

    var _saveRegistration = function (registration) {

        _logOut();

        return $http.post(serviceBase + 'api/account/register', registration).then(function (response) {
            return response;
        });

    };

    var _login = function (loginData) {

        var data = "grant_type=password&username=" + loginData.userName + "&password=" + loginData.password;

        var deferred = $q.defer();

        $http.post(serviceBase + 'token', data, { headers: { 'Content-Type': 'application/x-www-form-urlencoded' } }).success(function (response) {

            localStorageService.set('authorizationData', { token: response.access_token, userName: loginData.userName });

            _authentication.isAuth = true;
            _authentication.userName = loginData.userName;

            deferred.resolve(response);

        }).error(function (err, status) {
            _logOut();
            deferred.reject(err);
        });

        return deferred.promise;

    };

    var _logOut = function () {

        localStorageService.remove('authorizationData');

        _authentication.isAuth = false;
        _authentication.userName = "";

    };

    var _fillAuthData = function () {

        var authData = localStorageService.get('authorizationData');
        if (authData)
        {
            _authentication.isAuth = true;
            _authentication.userName = authData.userName;
        }

    }

    authServiceFactory.saveRegistration = _saveRegistration;
    authServiceFactory.login = _login;
    authServiceFactory.logOut = _logOut;
    authServiceFactory.fillAuthData = _fillAuthData;
    authServiceFactory.authentication = _authentication;

    return authServiceFactory;
}]);

Now, by looking at the method “_saveRegistration” you will notice that we are issuing an HTTP POST to the endpoint “http://ngauthenticationapi.azurewebsites.net/api/account/register” defined in the previous post; this method returns a promise which will be resolved in the controller.

The function “_login” is responsible for sending an HTTP POST request to the endpoint “http://ngauthenticationapi.azurewebsites.net/token”; this endpoint will validate the credentials passed and, if they are valid, it will return an “access_token”. We have to store this token in a persistence medium on the client so that, for any subsequent requests for secured resources, we can read this token value and send it in the “Authorization” header with the HTTP request.

Notice that we have configured the POST request for this endpoint to use “application/x-www-form-urlencoded” as its Content-Type and to send the data as a string, not a JSON object.

The best way to store this token is to use the AngularJS module named “angular-local-storage”, which gives access to the browser’s local storage with a cookie fallback for older browsers. I will depend on this module to store the token and the logged-in username in a key named “authorizationData”; we will use this key in different places in our app to read the token value from it.

As well, we’ll add an object named “authentication” which will store two values (isAuth and userName). This object will be used to change the layout of our index page.

Step 6: Add the Signup Controller and its View

The view for the signup is simple, so add a new file named “signup.html” under the “views” folder, open the file and paste the HTML below:

<form class="form-login" role="form">
    <h2 class="form-login-heading">Sign up</h2>
    <input type="text" class="form-control" placeholder="Username" data-ng-model="registration.userName" required autofocus>
    <input type="password" class="form-control" placeholder="Password" data-ng-model="registration.password" required>
    <input type="password" class="form-control" placeholder="Confirm Password" data-ng-model="registration.confirmPassword" required>
    <button class="btn btn-lg btn-info btn-block" type="submit" data-ng-click="signUp()">Submit</button>
    <div data-ng-hide="message == ''" data-ng-class="(savedSuccessfully) ? 'alert alert-success' : 'alert alert-danger'">
        {{message}}
    </div>
</form>

Now we need to add a controller named “signupController.js” under the “controllers” folder; this controller is simple and will contain the business logic needed to register new users and call the “saveRegistration” method we’ve created in the “authService” service. Open the file and paste the code below:

'use strict';
app.controller('signupController', ['$scope', '$location', '$timeout', 'authService', function ($scope, $location, $timeout, authService) {

    $scope.savedSuccessfully = false;
    $scope.message = "";

    $scope.registration = {
        userName: "",
        password: "",
        confirmPassword: ""
    };

    $scope.signUp = function () {

        authService.saveRegistration($scope.registration).then(function (response) {

            $scope.savedSuccessfully = true;
            $scope.message = "User has been registered successfully, you will be redicted to login page in 2 seconds.";
            startTimer();

        },
         function (response) {
             var errors = [];
             for (var key in response.data.modelState) {
                 for (var i = 0; i < response.data.modelState[key].length; i++) {
                     errors.push(response.data.modelState[key][i]);
                 }
             }
             $scope.message = "Failed to register user due to:" + errors.join(' ');
         });
    };

    var startTimer = function () {
        var timer = $timeout(function () {
            $timeout.cancel(timer);
            $location.path('/login');
        }, 2000);
    }

}]);

Step 7: Add the log-in Controller and its View

The view for the log-in is simple, so add a new file named “login.html” under the “views” folder, open the file and paste the HTML below:

<form class="form-login" role="form">
    <h2 class="form-login-heading">Login</h2>
    <input type="text" class="form-control" placeholder="Username" data-ng-model="loginData.userName" required autofocus>
    <input type="password" class="form-control" placeholder="Password" data-ng-model="loginData.password" required>
    <button class="btn btn-lg btn-info btn-block" type="submit" data-ng-click="login()">Login</button>
     <div data-ng-hide="message == ''" class="alert alert-danger">
        {{message}}
    </div>
</form>

Now we need to add a controller named “loginController.js” under the “controllers” folder; this controller will be responsible for redirecting authenticated users to the orders view. If you try to request the orders view as an anonymous user, you will be redirected to the log-in view. We’ll see in the next steps how we’ll implement the redirection of anonymous users to the log-in view once they request a secure view.

Now open the “loginController.js” file and paste the code below:

'use strict';
app.controller('loginController', ['$scope', '$location', 'authService', function ($scope, $location, authService) {

    $scope.loginData = {
        userName: "",
        password: ""
    };

    $scope.message = "";

    $scope.login = function () {

        authService.login($scope.loginData).then(function (response) {

            $location.path('/orders');

        },
         function (err) {
             $scope.message = err.error_description;
         });
    };

}]);

Step 8: Add AngularJS Orders Service (Factory)

This service will be responsible for issuing an HTTP GET request to the endpoint “http://ngauthenticationapi.azurewebsites.net/api/orders” we’ve defined in the previous post. If you recall, we added the “Authorize” attribute to indicate that this method is secured and should be called only by authenticated users; if you try to call the endpoint directly you will receive HTTP status code 401 Unauthorized.

So add a new file named “ordersService.js” under the “services” folder and paste the code below:

'use strict';
app.factory('ordersService', ['$http', function ($http) {

    var serviceBase = 'http://ngauthenticationapi.azurewebsites.net/';
    var ordersServiceFactory = {};

    var _getOrders = function () {

        return $http.get(serviceBase + 'api/orders').then(function (results) {
            return results;
        });
    };

    ordersServiceFactory.getOrders = _getOrders;

    return ordersServiceFactory;

}]);

By looking at the code above you’ll notice that we are not setting the “Authorization” header or passing the bearer token we stored in local storage earlier in this service, so we’ll always receive a 401 response! We are also not checking whether the response is rejected with status code 401 so that we can redirect the user to the log-in page.

There is nothing preventing us from reading the stored token from local storage and checking whether the response is rejected inside this service, but what if we have other services that need to pass the bearer token along with each request? We’d end up replicating this code for each service.

To solve this issue we need to find a centralized place where we add this code once, so that all other services interested in sending the bearer token can benefit from it; to do so we need to use an AngularJS interceptor.

Step 9: Add AngularJS Interceptor (Factory)

An interceptor is a regular service (factory) which allows us to capture every XHR request and manipulate it before sending it to the back-end API or after receiving the response from the API. In our case we are interested in capturing each request before sending it so we can set the bearer token; we are also interested in checking whether the response from the back-end API contains errors, which means we need to check the error code returned, and if it is 401 we redirect the user to the log-in page.

To do so add new file named “authInterceptorService.js” under “services” folder and paste the code below:

'use strict';
app.factory('authInterceptorService', ['$q', '$location', 'localStorageService', function ($q, $location, localStorageService) {

    var authInterceptorServiceFactory = {};

    var _request = function (config) {

        config.headers = config.headers || {};

        var authData = localStorageService.get('authorizationData');
        if (authData) {
            config.headers.Authorization = 'Bearer ' + authData.token;
        }

        return config;
    }

    var _responseError = function (rejection) {
        if (rejection.status === 401) {
            $location.path('/login');
        }
        return $q.reject(rejection);
    }

    authInterceptorServiceFactory.request = _request;
    authInterceptorServiceFactory.responseError = _responseError;

    return authInterceptorServiceFactory;
}]);

By looking at the code above, the method “_request” will be fired before $http sends the request to the back-end API, so this is the right place to read the token from local storage and set it into the “Authorization” header of each request. Note that I’m checking whether the local storage object is empty; if it is, the user is anonymous and there is no need to set the token on the XHR request.

The method “_responseError” will be hit after we receive a response from the back-end API, and only if a failure status is returned. So we need to check the status code; in case it is 401 we’ll redirect the user to the log-in page where they’ll be able to authenticate again.

Now we need to push this interceptor to the interceptors array, so open file “app.js” and add the below code snippet:

app.config(function ($httpProvider) {
    $httpProvider.interceptors.push('authInterceptorService');
});

By doing this there is no need to write extra code for setting tokens or checking the status code; any AngularJS service that executes XHR requests will use this interceptor. Note: this will work as long as you are using the AngularJS $http or $resource services.

Step 10: Add the Index Controller

Now we’ll add the index controller, which will be responsible for changing the layout of the home page (i.e. displaying “Welcome {Logged In Username}” and showing the “My Orders” tab); as well, we’ll add log-out functionality to it, as in the image below.

Index Bar

Taking into consideration that there is no straightforward way to log out the user when we use a token-based approach, the workaround we can do here is to remove the local storage key “authorizationData” and set some variables back to their initial state.

So add a file named “indexController.js” under the “controllers” folder and paste the code below:

'use strict';
app.controller('indexController', ['$scope', '$location', 'authService', function ($scope, $location, authService) {

    $scope.logOut = function () {
        authService.logOut();
        $location.path('/home');
    }

    $scope.authentication = authService.authentication;

}]);

Step 11: Add the Home Controller and its View

This is the last controller and view we’ll add to complete the app; it is a simple view and an empty controller used to display two boxes for log-in and signup, as in the image below:

Home View

So add a new file named “homeController.js” under the “controllers” folder and paste the code below:

'use strict';
app.controller('homeController', ['$scope', function ($scope) {
   
}]);

As well, add a new file named “home.html” under the “views” folder and paste the code below:

<div class="row">
        <div class="col-md-2">
            &nbsp;
        </div>
        <div class="col-md-4">
            <h2>Login</h2>
            <p class="text-primary">If you have Username and Password, you can use the button below to access the secured content using a token.</p>
            <p><a class="btn btn-info" href="#/login" role="button">Login &raquo;</a></p>
        </div>
        <div class="col-md-4">
            <h2>Sign Up</h2>
            <p class="text-primary">Use the button below to create Username and Password to access the secured content using a token.</p>
            <p><a class="btn btn-info" href="#/signup" role="button">Sign Up &raquo;</a></p>
        </div>
        <div class="col-md-2">
            &nbsp;
        </div>
    </div>

By now we should have a SPA which uses the token-based approach to authenticate users.

One side note before closing: the redirection of anonymous users to the log-in page is done in client-side code, so any malicious user can tamper with it. It is very important to secure all back-end APIs as we implemented in this tutorial, and not to depend on client-side code only.

That’s it for now! Hopefully these two posts will be beneficial for folks looking to use token-based authentication along with ASP.NET Web API 2 and Owin middleware.

I would like to hear your feedback and comments if there is a better way to implement this, especially redirecting users to the log-in page when they are anonymous.

You can check the demo application on (http://ngAuthenticationWeb.azurewebsites.net), play with the back-end API for learning purposes (http://ngauthenticationapi.azurewebsites.net), and check the source code on Github.

Follow me on Twitter @tjoudeh

The post AngularJS Token Authentication using ASP.NET Web API 2, Owin, and Identity appeared first on Bit of Technology.


Henrik F. Nielsen: Fresh Updates to Azure Mobile Services .NET (Link)

Posted the blog Fresh Updates to Azure Mobile Services .NET on the Azure Mobile Services Team Blog describing new features released today for Azure Mobile Services .NET. If you are building cloud-connected mobile apps then check out Microsoft Azure Mobile Services and let us know what you think!

Have fun!

Henrik


Darrel Miller: Sharing Fiddler requests using Runscope

Fiddler is an excellent tool that I have been using for many years to debug HTTP requests in local applications.  However, one of the things that Fiddler can't do easily, out of the box, is allow you to share requests with other team members.  Sometimes it is nice to be able to show someone an HTTP request/response for debugging or design-related issues. I recently built an extension to Fiddler that takes advantage of Runscope's request sharing mechanism to make sharing a request captured by Fiddler a one-click affair.

If you want to try out the tool, you can install the latest version using Chocolatey,

cinst runscope.fiddlerextension 

hamstersharing

If you don't have Chocolatey installed and you don't feel like shaving that yak today, you can download a .zip file that has the DLL here.  Just copy the DLL into your Fiddler scripts folder.  You should find the folder here:  c:\program files (x86)\Fiddler2\Scripts.  Despite the name of this folder, this plug-in is currently only compatible with Fiddler4, the version for .net 4.0. 

How do I use it

Once the extension is successfully installed, you should see a new context menu item,

FiddlerContextMenu

Connect to your Runscope Account

The first time you attempt to share a request, you will be prompted with a configuration screen to connect to your Runscope account.  If you don't have one then you can sign up for a free Runscope account, no credit card required.

ConfigureForm

Clicking the "Get Key" button will launch a web browser and take you to the Runscope authentication page.  Once successfully authenticated with the Runscope website, you will be asked to authorize the FiddlerExtension application.  This allows the extension to write requests into your Runscope buckets.  Once completed, an API key and the list of available buckets will be displayed.  Select the bucket where you wish the request to be created.  By default, Fiddler will not display the requests being made to the Runscope API.  Selecting the "use proxy" checkbox will allow you to see the API request in Fiddler.

If at a later point in time you wish to change these options, you can always get back to this screen using the Tools -> Configure Runscope menu option.

ConfigureMenu

Once the configuration is complete, the Fiddler session is passed to the Runscope API, and added to the Shared message collection in the specified bucket.  The publicly shareable URL is returned and the default web browser is launched to display the shared message.

The following URL was created from a Fiddler request created on my PC,

https://www.runscope.com/public/196bfc03-e52c-4bb2-932b-b466da7b363d/7dd3f872-644a-4f0a-ba6b-613ad7472ec5

Show me how it works

The source for this tool is available and is licensed with the Apache 2 OSS license.  I will do a follow-up post with more details of the implementation.  There are a few pieces of the project that may be interesting.  There is an Owin-based self-hosted server that handles the OAuth2 redirect request and extracts the API key.  I created a WebPack project that encapsulates the semantics of the Runscope API, and it also demonstrates how to create a Fiddler plugin. 

Image Credit: Hamsters https://flic.kr/p/dA7Vg


Ali Kheyrollahi: Cancelling an async HTTP request Task sends TCP RESET packet

Level [T4]

This blog post did not just happen. In fact, rarely, if ever, does something just happen. There is a story behind everything and this one is no different. Looking back, it feels like a nice find but as the story was unfolding, I was running around like a headless chicken. Here we have the luxury of hindsight so let's take advantage of it.

TLDR; If you are a sensible HTTP client and make your HTTP requests using cancellable async Tasks by passing a CancellationToken, you could find your IP blocked by legacy bridge devices blacklisting clients sending TCP RESET packets.

So here is how it started ...

So we were supposed to go live on Monday - some Monday. Talking of live, it was not really live - it was only to internal users, but considering the high profile of the project, it felt like D-Day. All VPs knew of the release and were waiting to see a glimpse of the project. Despite the high profile, it was not properly resourced, and I, despite being the so-called architect, pretty much single-handedly did all the API and the middleware connecting the Big Data outputs with the Single Page Application.

We could not finish going live on Monday so it moved to Tuesday. On Tuesday morning we were all ready and I set up my machine's screen like a trader's, with all performance monitors up on the screen watching the users. Since we were using the Azure cloud, elasticity was an option, although the number of internal users could hardly make a dent on the 3 worker roles. So we did go live, and I could see traffic building up and all looked fine. Until ... it did not.

I saw requests queuing up and loading the page taking longer and longer. Until it was completely frozen. And we had to take the site down. And that was not good.

Server Analysis

I brought up DebugView and was lucky to see this (actual IP and site names anonymised):

[1240] w3wp.exe Error: 0 :
[1240] <html>
[1240] <h1>Access Administratively Blocked</h1>
[1240] <br>URL : 'http://www.xyz.com/services/blahblah'
[1240] <br>Client IP address : 'xyz.xx.yy.zzz'
[1240] </html>

So we are being blocked! Something is blocking us, and this could be because we used a UI data endpoint as a Data API. Well, I knew it was not good, but as I said we had limited time and in reality that data endpoint was meant to support our live traffic.

So after a lot of to and fro with our service delivery team and some third party support, we were told that our software was recognised as malicious since it was sending way too many TCP RESET packets. Whaa?? No one ain't sending no TCP whatever packets; we are using a high level language (C#) and it is the latest HttpClient implementation. We are actually using many optimising techniques such as async calls, parallelisation, etc. to make the code as efficient as possible. We also used short timeout + retry, which is Netflix's approach to improving performance.

But what are TCP RESET packets? Basically a RESET packet is one that has the RESET flag set (which is otherwise unset) and tells the server to drop the TCP connection immediately and reclaim all the resources associated with it. There is an RFC from back in 2002 that considers RESET harmful. Wikipedia's article argues that when used as designed, it is useful, but a forged RESET can disrupt the communication between the client and server. And Microsoft's technet blog on the topic says "RESET is actually a good thing".

And in essence, I would agree with the Microsoft (and Wikipedia's) account that sending RESET packet is what a responsible client would do. Let's imagine you are browsing a site using a really bad wifi connection. The loading of the page takes too long and you frustrated by the slow connection, cancel browsing by pressing the X button. At this point, a responsible browser should tell the server it has changed its mind and is not interested in the response. This will let the server use its resources for a client that is actually waiting for the server's response.

Now going back to the problem at hand, I am not a TCP expert by any stretch - I have always used higher level constructs and never had to go down so deep in the OSI model. But my surprise was: what is different now with my code that, with a handful of calls, I was getting blocked, while the live clients work well with no problem at a significantly larger number of calls?

I had a hunch that it probably had to do with some of the patterns I have been using on the server. And to shorten the suspense, the answer came from the analysis of TCP packets when cancelling an async HTTP Task. The live code uses the traditional synchronous calls - none of the fancy patterns I used. So let's look at some sample code that cancels the task if it takes too long:

var client = new HttpClient();
var buffer = new byte[5 * 1000 * 1000];
// you might have to use a different timeout or address
var cts = new CancellationTokenSource(TimeSpan.FromMilliseconds(300));
try
{
    var result = client.GetAsync("http://www.google.com",
        cts.Token).Result;
    var s = result.Content.ReadAsStreamAsync().Result;

    var result1 = s.ReadAsync(buffer, 0, buffer.Length, cts.Token).Result;
    // ConsoleWriteLine is a small colour-writing helper used in the post, not a BCL method
    ConsoleWriteLine(ConsoleColor.Green, "Got it");
}
catch (Exception e)
{
    ConsoleWriteLine(ConsoleColor.Red, "error! " + e);
}

In this snippet, we are calling the Google server and setting a 300ms timeout (you might have to modify the timeout or the address based on your connection speed in order to see the cancellation). Here is the WireShark proof:



As you can see above, a TCP RESET packet has been sent - provided you have set the parameters in a way that the request does not complete before its timeout and gets cancelled. You can try this with a longer timeout, or use a WebClient (which is synchronous), and you will never ever see this RST packet.
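
For contrast, here is a small sketch of mine (not from the post) of the same request made synchronously with WebClient; with no cancellation token involved, the connection is torn down gracefully and no RST packet appears in the capture:

using System.Net;

using (var client = new WebClient())
{
    // Blocks until the whole response has arrived - there is nothing to cancel mid-flight,
    // so the connection is closed gracefully rather than reset.
    var content = client.DownloadString("http://www.google.com");
    Console.WriteLine("Got " + content.Length + " characters");
}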

Now the question is, should a network appliance pick on this responsible cancellation and treat it as an attack? By no means. But in my case, it did and it is very likely that it could do that with yours.

My solution came by whitelisting my IP against "TCP RESET attacks". After all, I was only trying to help the server.

Conclusion

Cancelling an HTTP async Task in the HttpClient results in sending TCP RESET which is considered malicious by some network appliances resulting in blacklisting your IP.

PS. The network appliance belonged to our infrastructure 3rd party provider whose security was managed by another third party - it was not in Azure. The real solution would have been to remove such a crazy rule, but anyhow, we developers don't always get what we want.



Henrik F. Nielsen: Real-time with ASP.NET SignalR and Azure Mobile .NET Backend (Link)

Posted the blog Real-time with ASP.NET SignalR and Azure Mobile .NET Backend on the Azure Mobile Services Team Blog explaining how to enable real-time communications between your mobile apps and Azure Mobile Services .NET backend.

Have fun!

Henrik


Taiseer Joudeh: Token Based Authentication using ASP.NET Web API 2, Owin, and Identity

Last week I was looking at the top viewed posts on my blog and I noticed that visitors are interested in the authentication part of ASP.NET Web API, CORS support, and how to authenticate users in single page applications built with AngularJS using a token-based approach.

So I decided to compile a mini tutorial of three posts which covers and connects those topics. In this tutorial we’ll build a SPA using AngularJS for the front-end, and ASP.NET Web API 2, Owin middleware, and ASP.NET Identity for the back-end.

The demo application can be accessed on (http://ngAuthenticationWeb.azurewebsites.net). The back-end API can be accessed on (http://ngAuthenticationAPI.azurewebsites.net/) and both are hosted on Microsoft Azure, for learning purposes feel free to integrate and play with the back-end API with your front-end application. The API supports CORS and accepts HTTP calls from any origin. You can check the source code for this tutorial on Github.

AngularJS Authentication

 

Token Based Authentication

As I stated before, we’ll use a token-based approach to implement authentication between the front-end application and the back-end API. As we all know, the common and old way to implement authentication is the cookie-based approach, where the cookie is sent with each request from the client to the server, and on the server it is used to identify the authenticated user.

With the evolution of front-end frameworks and the huge change in how we build web applications nowadays, the preferred approach to authenticate users is to use a signed token which is sent to the server with each request. Some of the benefits of using this approach are:

  • Scalability of Servers: The token sent to the server is self-contained and holds all the user information needed for authentication, so adding more servers to your web farm is an easy task; there is no dependency on shared session stores.
  • Loose Coupling: Your front-end application is not coupled to a specific authentication mechanism; the token is generated by the server and your API is built in a way to understand this token and do the authentication.
  • Mobile Friendly: Cookies and browsers like each other, but storing cookies on native platforms (Android, iOS, Windows Phone) is not a trivial task; having a standard way to authenticate users will simplify our lives if we decide to consume the back-end API from native applications.

What we’ll build in this tutorial?

The front-end SPA will be built using HTML5, AngularJS, and Twitter Bootstrap. The back-end server will be built using ASP.NET Web API 2 on top of Owin middleware, not directly on top of ASP.NET; the reason for doing so is that we’ll configure the server to issue OAuth bearer token authentication using Owin middleware too, so setting up everything on the same pipeline is the better approach. In addition to this we’ll use the ASP.NET Identity system, which is built on top of Owin middleware, and we’ll use it to register new users and validate their credentials before generating the tokens.

As I mentioned before, our back-end API should accept requests coming from any origin, not only our front-end, so we’ll be enabling CORS (Cross Origin Resource Sharing) in Web API as well as for the OAuth bearer token provider.

Use cases which will be covered in this application:

  • Allow users to signup (register) by providing a username and password, then store the credentials in a secure medium.
  • Prevent anonymous users from viewing secured data or secured pages (views).
  • Once the user is logged in successfully, the system should not ask for credentials or re-authentication for the next 30 minutes (because we are using refresh tokens).

So in this post we’ll cover step by step how to build the back-end API, and in the next post we’ll cover how to build and integrate the SPA with the API.

Enough theory; let’s get our hands dirty and start implementing the API!

Building the Back-End API

Step 1: Creating the Web API Project

In this tutorial I’m using Visual Studio 2013 and .NET Framework 4.5; you can follow along using Visual Studio 2012, but you need to install Web Tools 2013.1 for VS 2012 by visiting this link.

Now create an empty solution and name it “AngularJSAuthentication”, then add a new ASP.NET Web application named “AngularJSAuthentication.API”; the selected template for the project will be as in the image below. Notice that the authentication is set to “No Authentication”, taking into consideration that we’ll add this manually.

Web API Project Template

Step 2: Installing the needed NuGet Packages:

Now we need to install the NuGet packages which are needed to set up our Owin server and configure ASP.NET Web API to be hosted within it, so open the NuGet Package Manager Console and type the below:

Install-Package Microsoft.AspNet.WebApi.Owin -Version 5.1.2
Install-Package Microsoft.Owin.Host.SystemWeb -Version 2.1.0

The package “Microsoft.Owin.Host.SystemWeb” is used to enable our Owin server to run our API on IIS using the ASP.NET request pipeline, as eventually we’ll host this API on Microsoft Azure Websites, which uses IIS.

Step 3: Add Owin “Startup” Class

Right click on your project then add a new class named “Startup”. We’ll visit this class many times and modify it; for now it will contain the code below:

using Microsoft.Owin;
using Owin;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Http;

[assembly: OwinStartup(typeof(AngularJSAuthentication.API.Startup))]
namespace AngularJSAuthentication.API
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            HttpConfiguration config = new HttpConfiguration();
            WebApiConfig.Register(config);
            app.UseWebApi(config);
        }

    }
}

What we’ve implemented above is simple: this class will be fired once our server starts. Notice the “assembly” attribute which states which class to fire on start-up. The “Configuration” method accepts a parameter of type “IAppBuilder”; this parameter will be supplied by the host at run-time. This “app” parameter is an interface which will be used to compose the application for our Owin server.

The “HttpConfiguration” object is used to configure the API routes, so we’ll pass this object to the “Register” method in the “WebApiConfig” class.

Lastly, we’ll pass the “config” object to the extension method “UseWebApi”, which will be responsible for wiring up ASP.NET Web API to our Owin server pipeline.

Usually the class “WebApiConfig” comes with the template we’ve selected; if it doesn’t exist then add it under the “App_Start” folder. Below is the code inside it:

using System.Linq;
using System.Net.Http.Formatting;
using System.Web.Http;
using Newtonsoft.Json.Serialization;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Web API routes
        config.MapHttpAttributeRoutes();

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );

        // Serialize JSON responses using camelCase property names
        var jsonFormatter = config.Formatters.OfType<JsonMediaTypeFormatter>().First();
        jsonFormatter.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
    }
}

Step 4: Delete Global.asax Class

There is no need for this class or the Application_Start event now that we've configured our "Startup" class, so feel free to delete it.

Step 5: Add the ASP.NET Identity System

After we’ve configured the Web API, it is time to add the needed NuGet packages to add support for registering and validating user credentials, so open package manager console and add the below NuGet packages:

Install-Package Microsoft.AspNet.Identity.Owin -Version 2.0.1
Install-Package Microsoft.AspNet.Identity.EntityFramework -Version 2.0.1

The first package will add support for ASP.NET Identity Owin, and the second package will add support for using ASP.NET Identity with Entity Framework so we can save users to SQL Server database.

Now we need to add a database context class which will be responsible for communicating with our database, so add a new class, name it "AuthContext", and paste the code snippet below:

using Microsoft.AspNet.Identity.EntityFramework;

public class AuthContext : IdentityDbContext<IdentityUser>
{
    public AuthContext()
        : base("AuthContext") // "AuthContext" is the connection string name we'll add to Web.config
    {
    }
}

As you can see, this class inherits from the "IdentityDbContext" class; you can think of it as a special version of the traditional "DbContext" class. It will provide all of the Entity Framework code-first mapping and DbSet properties needed to manage the identity tables in SQL Server. You can read more about this class on Scott Allen's blog.
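
As a quick illustration of what the base class gives you (this snippet is not part of the tutorial code, just a sketch), the "Users" DbSet inherited from "IdentityDbContext<IdentityUser>" can be queried like any other Entity Framework set:

// requires: using System.Linq;
using (var ctx = new AuthContext())
{
    // "Users" is exposed by IdentityDbContext<IdentityUser>, no extra mapping code is needed
    var registeredUsers = ctx.Users.Count();
}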

Now we want to add a "UserModel" which contains the properties that need to be sent once we register a user. This model is a POCO class with some data annotation attributes used for validating the registration request payload. So under the "Models" folder add a new class named "UserModel" and paste the code below:

using System.ComponentModel.DataAnnotations;

public class UserModel
{
    [Required]
    [Display(Name = "User name")]
    public string UserName { get; set; }

    [Required]
    [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 6)]
    [DataType(DataType.Password)]
    [Display(Name = "Password")]
    public string Password { get; set; }

    [DataType(DataType.Password)]
    [Display(Name = "Confirm password")]
    [Compare("Password", ErrorMessage = "The password and confirmation password do not match.")]
    public string ConfirmPassword { get; set; }
}

Now we need to add a new connection string named "AuthContext" to our Web.config file, so open your web.config and add the section below:

<connectionStrings>
    <add name="AuthContext" connectionString="Data Source=.\sqlexpress;Initial Catalog=AngularJSAuth;Integrated Security=SSPI;" providerName="System.Data.SqlClient" />
  </connectionStrings>

Step 6: Add Repository class to support ASP.NET Identity System

Now we want to implement the two methods needed in our application, "RegisterUser" and "FindUser", so add a new class named "AuthRepository" and paste the code snippet below:

using System;
using System.Threading.Tasks;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;
// plus a using for the Models namespace that contains UserModel (AngularJSAuthentication.API.Models in this tutorial)

public class AuthRepository : IDisposable
    {
        private AuthContext _ctx;

        private UserManager<IdentityUser> _userManager;

        public AuthRepository()
        {
            _ctx = new AuthContext();
            _userManager = new UserManager<IdentityUser>(new UserStore<IdentityUser>(_ctx));
        }

        public async Task<IdentityResult> RegisterUser(UserModel userModel)
        {
            IdentityUser user = new IdentityUser
            {
                UserName = userModel.UserName
            };

            var result = await _userManager.CreateAsync(user, userModel.Password);

            return result;
        }

        public async Task<IdentityUser> FindUser(string userName, string password)
        {
            IdentityUser user = await _userManager.FindAsync(userName, password);

            return user;
        }

        public void Dispose()
        {
            _ctx.Dispose();
            _userManager.Dispose();

        }
    }

What we've implemented above is the following: we depend on the "UserManager", which provides the domain logic for working with user information. The "UserManager" knows when to hash a password, how and when to validate a user, and how to manage claims. You can read more about the ASP.NET Identity system.
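
The post keeps the default "UserManager" settings; as an optional, hedged example, you could tighten the password policy by assigning the built-in "PasswordValidator" in the "AuthRepository" constructor (the property and class exist in Microsoft.AspNet.Identity 2.x, but the exact rules below are just an illustration):

public AuthRepository()
{
    _ctx = new AuthContext();
    _userManager = new UserManager<IdentityUser>(new UserStore<IdentityUser>(_ctx));

    // Optional: enforce stricter password rules than the [StringLength] attribute on UserModel
    _userManager.PasswordValidator = new PasswordValidator
    {
        RequiredLength = 6,
        RequireDigit = true
    };
}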

Step 7: Add our “Account” Controller

Now it is time to add our first Web API controller, which will be used to register new users. Under the "Controllers" folder add an Empty Web API 2 Controller named "AccountController" and paste the code below:

[RoutePrefix("api/Account")]
    public class AccountController : ApiController
    {
        private AuthRepository _repo = null;

        public AccountController()
        {
            _repo = new AuthRepository();
        }

        // POST api/Account/Register
        [AllowAnonymous]
        [Route("Register")]
        public async Task<IHttpActionResult> Register(UserModel userModel)
        {
            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }

            IdentityResult result = await _repo.RegisterUser(userModel);

            IHttpActionResult errorResult = GetErrorResult(result);

            if (errorResult != null)
            {
                return errorResult;
            }

            return Ok();
        }

        protected override void Dispose(bool disposing)
        {
            if (disposing)
            {
                _repo.Dispose();
            }

            base.Dispose(disposing);
        }

        private IHttpActionResult GetErrorResult(IdentityResult result)
        {
            if (result == null)
            {
                return InternalServerError();
            }

            if (!result.Succeeded)
            {
                if (result.Errors != null)
                {
                    foreach (string error in result.Errors)
                    {
                        ModelState.AddModelError("", error);
                    }
                }

                if (ModelState.IsValid)
                {
                    // No ModelState errors are available to send, so just return an empty BadRequest.
                    return BadRequest();
                }

                return BadRequest(ModelState);
            }

            return null;
        }
    }

By looking at the "Register" method you will notice that we've configured the endpoint for this method to be "/api/account/register", so any user who wants to register in our system must issue an HTTP POST request to this URI, and the payload for the request will contain a JSON object as below:

{
  "userName": "Taiseer",
  "password": "SuperPass",
  "confirmPassword": "SuperPass"
}

Now you can run your application and issue an HTTP POST request to your local URI "http://localhost:port/api/account/register", or you can try the published API using this endpoint: http://ngauthenticationapi.azurewebsites.net/api/account/register. If all went fine you will receive HTTP status code 200, the database specified in the connection string will be created automatically, and the user will be inserted into the table "dbo.AspNetUsers".
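
If you prefer issuing the request from code instead of a REST client, a minimal sketch using HttpClient could look like the below; the port number is an assumption, so replace it with your local IIS Express port or use the published demo endpoint:

using System;
using System.Net.Http;
using System.Text;

class RegisterDemo
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            var payload = "{\"userName\":\"Taiseer\",\"password\":\"SuperPass\",\"confirmPassword\":\"SuperPass\"}";
            var content = new StringContent(payload, Encoding.UTF8, "application/json");

            // 12345 is a placeholder port for the local project
            var response = client.PostAsync("http://localhost:12345/api/account/register", content).Result;

            Console.WriteLine(response.StatusCode); // 200 OK when registration succeeds
        }
    }
}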

Note: It is very important to send this POST request over HTTPS so the sensitive information gets encrypted between the client and the server.

The "GetErrorResult" method is just a helper that inspects the "IdentityResult" returned from the registration call and maps any failure to the correct HTTP status code.

Step 8: Add Secured Orders Controller

Now we want to add another controller to serve our orders. We'll assume that this controller returns orders only to authenticated users; to keep things simple we'll return static data. So add a new controller named "OrdersController" under the "Controllers" folder and paste the code below:

[RoutePrefix("api/Orders")]
    public class OrdersController : ApiController
    {
        [Authorize]
        [Route("")]
        public IHttpActionResult Get()
        {
            return Ok(Order.CreateOrders());
        }

    }

    #region Helpers

    public class Order
    {
        public int OrderID { get; set; }
        public string CustomerName { get; set; }
        public string ShipperCity { get; set; }
        public Boolean IsShipped { get; set; }

        public static List<Order> CreateOrders()
        {
            List<Order> OrderList = new List<Order> 
            {
                new Order {OrderID = 10248, CustomerName = "Taiseer Joudeh", ShipperCity = "Amman", IsShipped = true },
                new Order {OrderID = 10249, CustomerName = "Ahmad Hasan", ShipperCity = "Dubai", IsShipped = false},
                new Order {OrderID = 10250,CustomerName = "Tamer Yaser", ShipperCity = "Jeddah", IsShipped = false },
                new Order {OrderID = 10251,CustomerName = "Lina Majed", ShipperCity = "Abu Dhabi", IsShipped = false},
                new Order {OrderID = 10252,CustomerName = "Yasmeen Rami", ShipperCity = "Kuwait", IsShipped = true}
            };

            return OrderList;
        }
    }

    #endregion

Notice how we added the "Authorize" attribute on the "Get" method, so if you try to issue an HTTP GET request to the endpoint "http://localhost:port/api/orders" you will receive HTTP status code 401 Unauthorized, because the requests we've sent so far don't contain a valid authorization header. You can check this using this endpoint: http://ngauthenticationapi.azurewebsites.net/api/orders

Step 9: Add support for OAuth Bearer Tokens Generation

Till this moment we haven't configured our API to use the OAuth authentication workflow; to do so, open the package manager console and install the following NuGet package:

Install-Package Microsoft.Owin.Security.OAuth -Version 2.1.0

After you install this package, open the "Startup" file again and call the new method named "ConfigureOAuth" as the first line inside the "Configuration" method; the implementation of this method is below:

// In addition to the usings from Step 3, the Startup class now also needs:
// using Microsoft.Owin.Security.OAuth;
// using AngularJSAuthentication.API.Providers;  (assumed namespace of the provider we'll add in Step 10)
public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            ConfigureOAuth(app);
	    //Rest of code is here;
        }

        public void ConfigureOAuth(IAppBuilder app)
        {
            OAuthAuthorizationServerOptions OAuthServerOptions = new OAuthAuthorizationServerOptions()
            {
                AllowInsecureHttp = true,
                TokenEndpointPath = new PathString("/token"),
                AccessTokenExpireTimeSpan = TimeSpan.FromDays(1),
                Provider = new SimpleAuthorizationServerProvider()
            };

            // Token Generation
            app.UseOAuthAuthorizationServer(OAuthServerOptions);
            app.UseOAuthBearerAuthentication(new OAuthBearerAuthenticationOptions());

        }
    }

Here we've created a new instance of the "OAuthAuthorizationServerOptions" class and set its options as below:

  • The path for generating tokens will be "http://localhost:port/token". We'll see how to issue an HTTP POST request to generate a token in the next steps.
  • We've specified the expiry for the token to be 24 hours, so if the user tries to use the same token for authentication after 24 hours from the issue time, his request will be rejected and HTTP status code 401 will be returned.
  • We've specified the implementation of how to validate the credentials of users asking for tokens in a custom class named "SimpleAuthorizationServerProvider".

We then pass these options to the extension method "UseOAuthAuthorizationServer", which adds the authentication middleware to the pipeline.

Step 10: Implement the “SimpleAuthorizationServerProvider” class

Add a new folder named "Providers", then add a new class named "SimpleAuthorizationServerProvider" and paste the code snippet below:

using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNet.Identity.EntityFramework;
using Microsoft.Owin.Security.OAuth;
// plus a using for the namespace that contains AuthRepository (AngularJSAuthentication.API in this tutorial)

public class SimpleAuthorizationServerProvider : OAuthAuthorizationServerProvider
    {
        public override async Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
        {
            context.Validated();
        }

        public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
        {

            context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { "*" });

            using (AuthRepository _repo = new AuthRepository())
            {
                IdentityUser user = await _repo.FindUser(context.UserName, context.Password);

                if (user == null)
                {
                    context.SetError("invalid_grant", "The user name or password is incorrect.");
                    return;
                }
            }

            var identity = new ClaimsIdentity(context.Options.AuthenticationType);
            identity.AddClaim(new Claim("sub", context.UserName));
            identity.AddClaim(new Claim("role", "user"));

            context.Validated(identity);

        }
    }

As you notice, this class inherits from the "OAuthAuthorizationServerProvider" class, and we've overridden two methods: "ValidateClientAuthentication" and "GrantResourceOwnerCredentials". The first method is responsible for validating the "Client"; in our case we have only one client, so we always report that it is validated successfully.

The second method, "GrantResourceOwnerCredentials", is responsible for validating the username and password sent to the authorization server's token endpoint, so we use the "AuthRepository" class we created earlier and call the "FindUser" method to check whether the username and password are valid.

If the credentials are valid we create a "ClaimsIdentity" and pass the authentication type to it, in our case "bearer token"; then we add two claims ("sub", "role") which will be included in the signed token. You can add different claims here, but the token size will increase accordingly.

Now generating the token happens behind the scenes when we call “context.Validated(identity)”.
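
As a side note (not shown in the post), once a request carries a valid token, the claims we added above can be read back from the current principal inside any action protected with the "Authorize" attribute, for example in the "OrdersController"; a minimal sketch:

// requires: using System.Security.Claims;
var identity = User.Identity as ClaimsIdentity;
if (identity != null)
{
    var subClaim = identity.FindFirst("sub");   // the "sub" claim added in GrantResourceOwnerCredentials
    var roleClaim = identity.FindFirst("role"); // the "role" claim added in GrantResourceOwnerCredentials
}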

To allow CORS on the token middleware provider we need to add the "Access-Control-Allow-Origin" header to the Owin context; if you forget this, generating the token will fail when you try to call it from your browser. Note that this allows CORS for the token middleware provider only, not for ASP.NET Web API, which we'll add in the next step.

Step 11: Allow CORS for ASP.NET Web API

First of all we need to install the following NuGet package, so open the package manager console and type:

Install-Package Microsoft.Owin.Cors -Version 2.1.0

Now open the "Startup" class again and add the call to "app.UseCors" (just before "app.UseWebApi") to the "Configuration" method as below:

public void Configuration(IAppBuilder app)
        {
            HttpConfiguration config = new HttpConfiguration();

            ConfigureOAuth(app);

            WebApiConfig.Register(config);
            app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll);
            app.UseWebApi(config);

        }

Step 12: Testing the Back-end API

Assuming that you registered the username "Taiseer" with password "SuperPass" in Step 7 above, we'll use the same user to generate a token. To test this out, open your favorite REST client in order to issue HTTP requests and generate a token for user "Taiseer"; I'll be using PostMan.

Now we’ll issue a POST request to the endpoint http://ngauthenticationapi.azurewebsites.net/token the request will be as the image below:

OAuth Token Request

Notice that the content-type of the payload is "x-www-form-urlencoded", so the payload body will be of the form (grant_type=password&username=Taiseer&password=SuperPass). If all is correct you'll notice that we've received a signed token in the response.

The "grant_type" parameter indicates the type of grant being presented in exchange for an access token; in our case it is "password".

Now we want to use this token to request the secured data using the endpoint http://ngauthenticationapi.azurewebsites.net/api/orders, so we'll issue a GET request to that endpoint and pass the bearer token in the Authorization header; for any secured endpoint we have to pass this bearer token along with each request to authenticate the user.

Note that we are not transferring the username/password as in the case of Basic authentication.

The GET request will be as the image below:

Token Get Secure Resource

If all is correct we'll receive HTTP status 200 along with the secured data in the response body; if you try to change any character within the signed token you will directly receive HTTP status code 401 Unauthorized.
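
For completeness, here is a minimal sketch of the same two requests (token first, then orders) issued from code; the URLs are the demo endpoints used above, the credentials are the ones registered earlier, and parsing the token out of the JSON response is intentionally left out:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;

class TokenDemo
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // 1. Request a bearer token from the /token endpoint (x-www-form-urlencoded body)
            var tokenRequest = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "grant_type", "password" },
                { "username", "Taiseer" },
                { "password", "SuperPass" }
            });
            var tokenResponse = client.PostAsync("http://ngauthenticationapi.azurewebsites.net/token", tokenRequest).Result;
            Console.WriteLine(tokenResponse.Content.ReadAsStringAsync().Result); // contains "access_token", "token_type" and "expires_in"

            // 2. Call the protected orders endpoint with the access token in the Authorization header
            //    (extract the access_token value from the JSON above, e.g. with Json.NET)
            var accessToken = "PASTE_ACCESS_TOKEN_HERE";
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
            var orders = client.GetAsync("http://ngauthenticationapi.azurewebsites.net/api/orders").Result;
            Console.WriteLine(orders.StatusCode); // 200 OK with a valid token, 401 otherwise
        }
    }
}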

Now our back-end API is ready to be consumed from any front end application or native mobile app.

Update (2014-08-11): Thanks to Attila Hajdrik for forking my repo and updating it to use MongoDb instead of Entity Framework; you can check it here.

You can check the demo application, play with the back-end API for learning purposes (http://ngauthenticationapi.azurewebsites.net), and check the source code on Github.

Follow me on Twitter @tjoudeh


The post Token Based Authentication using ASP.NET Web API 2, Owin, and Identity appeared first on Bit of Technology.


Darrel Miller: Making your ASP.NET Web API funcky with an OWIN appFunc

The OWIN specification defines a delegate called appFunc that allows any OWIN compatible host to work with any OWIN compatible application.  This post shows you how to turn an ASP.NET Web API into an AppFunc.

FunkyDancers

AppFunc is defined as:

using AppFunc = Func<IDictionary<string, object>, Task>;

In other words, a function that accepts an object that implements IDictionary<string,object> and returns a Task.  The dictionary contains all the OWIN environment values and the Task is used to signal when the HTTP application has finished processing the request.

Microsoft have built OWIN infrastructure code, aka Katana, to support OWIN applications and hosts.  Unfortunately, when creating the glue code that allows Web API to run on OWIN hosts, they hid the appFunc behind "helper" code.  Using Katana does add a whole lot of value, but I feel that being required to use a Katana-enabled host to host a Katana-enabled application kind of defeats the point of OWIN, even though OWIN is being used under the covers.

Fortunately, it is not a big task to use the Katana Web API infrastructure and expose it as a real OWIN appFunc.  I do use the Katana Web API adapter code to avoid having to re-write the code that maps the Owin environment into HttpRequestMessage and HttpResponseMessage.  You can install this by using the Microsoft.AspNet.WebApi.Owin nuget package.

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http;
using System.Web.Http.ExceptionHandling;
using System.Web.Http.Owin;
using Microsoft.Owin;

public static class WebApiAdapter
{
    public static Func<IDictionary<string, object>, Task> CreateWebApiAppFunc(
        HttpConfiguration config,
        HttpMessageHandlerOptions options = null)
    {
        var app = new HttpServer(config);

        if (options == null)
        {
            options = new HttpMessageHandlerOptions()
            {
                MessageHandler = app,
                BufferPolicySelector = new OwinBufferPolicySelector(),
                ExceptionLogger = new WebApiExceptionLogger(),
                ExceptionHandler = new WebApiExceptionHandler()
            };
        }

        var handler = new HttpMessageHandlerAdapter(null, options);
        return (env) => handler.Invoke(new OwinContext(env));
    }
}

public class WebApiExceptionLogger : IExceptionLogger
{
    public Task LogAsync(ExceptionLoggerContext context, CancellationToken cancellationToken)
    {
        return Task.FromResult<object>(null);
    }
}

public class WebApiExceptionHandler : IExceptionHandler
{
    public Task HandleAsync(ExceptionHandlerContext context, CancellationToken cancellationToken)
    {
        return Task.FromResult<object>(null);
    }
}
Currently the ExceptionLogger and ExceptionHandler are "do nothing" implementations, but these can be replaced with application-specific code.  Passing an HttpConfiguration instance to the CreateWebApiAppFunc method will return an OWIN appFunc that should work with any OWIN host.
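
As a quick illustration of what "any OWIN host" means, here is a minimal sketch that builds the appFunc and invokes it directly with a hand-rolled environment dictionary; the keys follow the OWIN 1.0 spec, and the "/api/values" path is just an assumption about what your route table exposes:

using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http;

public static class AppFuncDemo
{
    public static async Task Run()
    {
        var config = new HttpConfiguration();
        config.MapHttpAttributeRoutes();
        var appFunc = WebApiAdapter.CreateWebApiAppFunc(config);

        // Hand-built OWIN environment - a real OWIN host would populate this for you
        var environment = new Dictionary<string, object>
        {
            { "owin.RequestMethod", "GET" },
            { "owin.RequestScheme", "http" },
            { "owin.RequestPathBase", "" },
            { "owin.RequestPath", "/api/values" },
            { "owin.RequestQueryString", "" },
            { "owin.RequestProtocol", "HTTP/1.1" },
            { "owin.RequestHeaders", new Dictionary<string, string[]>(StringComparer.OrdinalIgnoreCase) { { "Host", new[] { "localhost" } } } },
            { "owin.RequestBody", Stream.Null },
            { "owin.ResponseHeaders", new Dictionary<string, string[]>(StringComparer.OrdinalIgnoreCase) },
            { "owin.ResponseBody", new MemoryStream() },
            { "owin.CallCancelled", CancellationToken.None },
            { "owin.Version", "1.0" }
        };

        await appFunc(environment);
        // The adapter has now written "owin.ResponseStatusCode" and the response body stream
    }
}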

There is an example of how you can host an AppFunc in my earlier blog post.


Filip Woj: Announcing ASP.NET Web API Recipes

It is my pleasure to announce that this summer my ASP.NET Web API book will be released. It’s entitled "ASP.NET Web API Recipes", and will be published by Apress. While the publication date is not set in stone yet (probably … Continue reading

The post Announcing ASP.NET Web API Recipes appeared first on StrathWeb.


Darrel Miller: Streamlining self-hosting with Chocolatey, Nuget, Owin and Topshelf – part 2

In the previous post in this series, we talked about how to create a Windows Service that would use an OWIN compatible host, to host an OWIN HTTP application and package that up into an easy to manage executable. This post describes an approach to deploying that executable using  simple command line tooling.

Push vs Pull Deployments

TugOfWar

I often hear people talk about xcopy deployment as if it were the epitome of deployment simplicity.  My experience is that very often I want to deploy to a machine that is in a completely different security realm, and the idea of being able to simply copy files from the source machine to the target machine is not practical.  I also find that I much prefer to pull stuff onto a machine rather than push onto it.  If you are running a big web farm, then maybe pushing does make more sense, but if you are running a big web farm, I highly doubt you are considering self-hosting with Windows Services!

Chocolatey

One of the most promising solutions for doing pull deployments is Chocolatey.   Chocolatey downloads specially configured Nuget packages from a standard Nuget feed, to install applications onto machines.  In my opinion the future of Chocolatey was recently validated when Microsoft announced that Powershell V5 will contain tooling called OneGet that will provide “in-the-box” support for Chocolatey packages.

I thought Nugets were just for libraries

The majority of Nuget usage at the moment is for consuming reusable .Net libraries.  However, the internal structure of a Nuget package is quite flexible and is more than capable of delivering a payload of binaries and Powershell scripts for installing those binaries.

The oversimplified explanation of a Chocolatey Nuget is that it is a regular nuget with a powershell script called ChocolateyInstall.ps1.  After you have installed Chocolatey, there are a number of command line tools that are available for managing packages.  For example,

cinst MyApplication –Source https://myfeeds.company.com/feed

This command instructs Chocolatey to download the MyApplication nuget package from the nuget feed specified in the "source" parameter and extract the contents of the nuget into a sub-folder of [SystemDrive]:\Chocolatey\lib.  Once that is done, Chocolatey will attempt to run the ChocolateyInstall.ps1 file if one was in the nuget.  That's the essence of the process.  For our purposes we are going to use the install script to run our application with the Topshelf "install" command to install the Windows Service.  We will also include a ChocolateyUninstall.ps1 to handle the removal of the Windows Service when the package is uninstalled.

Putting your HTTP Application in a Nuget

I tried using the shortcut approach of building a nuget package by generating the .nuspec file directly from the application project, however, when you do this, the Nuget only contains the binary of the project and assemblies that were referenced directly.  Assemblies that are referenced via Nuget references are left simply as references and not embedded in the nupkg.  This has the benefit of making the Nuget really lightweight, but it would require some fancy powershell footwork after deploying our service to copy all the binaries from the dependent nuget packages into a single folder, ensuring that we get the correct framework version of all the binaries.

KinderSurprise

The simpler solution is just to package up the binaries that are in the debug/release folder.  A sample nuspec would look something like this:

<?xml version="1.0"?>
<package >
  <metadata>
    <id>SampleService</id>
    <version>1.0.2</version>
    <title>Sample Service</title>
    <authors>Darrel Miller</authors>
    <owners>Darrel Miller</owners>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Sample Owin Http host</description>
    <copyright>Copyright 2014</copyright>
  </metadata>
  <files>
      <file src="..\..\SampleService\bin\release\*" target="tools"/>
      <file src="ChocolateyInstall.ps1" target="tools"/>
      <file src="ChocolateyUninstall.ps1" target="tools"/>
  </files>
</package>

Instead of using nuget.exe to pack this file into a nuget, you use the Chocolatey cpack command to create the nupkg file.

The install script looks like this:

$packageName = 'SampleService' 
$serviceFileName = "SampleService.exe"

try { 
  
  $installDir = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)" 
  $fileToInstall = Join-Path $installDir $serviceFileName
  . $fileToInstall install
  . $fileToInstall start

  Write-ChocolateySuccess "$packageName"
} catch {
  Write-ChocolateyFailure "$packageName" "$($_.Exception.Message)"
  throw 
}

And the uninstall script is almost identical:

$packageName = 'SampleService' 
$serviceFileName = "SampleService.exe"

try { 
  
  $installDir = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)" 
  $fileToInstall = Join-Path $installDir $serviceFileName
  . $fileToInstall uninstall

  Write-ChocolateySuccess "$packageName"
} catch {
  Write-ChocolateyFailure "$packageName" "$($_.Exception.Message)"
  throw 
}

I must confess ignorance to the incantations required to get the install folder.  I simply found other scripts that did something similar and “re-purposed”.  The source for this sample can be found here.

Now where should I put my nupkg

Once you have built your nuget package you can host it either on Chocolatey.org, on Nuget itself, or do what I did and create a feed on myget.org and upload your package there.  If you create an account on MyGet.org, all the instructions on how to upload a package are provided on your Feed page.

Installing the service is as simple as using the Chocolatey "cinst" command:

cinst SampleService -source https://myget.org/f/darrel

If you are brave enough to trust me, you can try installing it on your own machine.  If you currently don't have Chocolatey installed, there is a single command line that you can copy and paste from the Chocolatey home page that will set it up for you.  Once you have Chocolatey and the SampleService installed you can prove it is working by doing the following:

start http://localhost:1002/

If everything works as intended you should see the following,

Install

and once it is installed and you launch the service, you will see this magnificent output.

HelloWorld

Removing the service is as easy as,

cuninst SampleService

What doesn't work?

What we have seen so far gives us a solution that makes it very simple to deploy to a new machine.  My initial attempts to do an in-place update have not worked as I had hoped, which leaves us doing an uninstall-old-version / install-new-version process.  That's fine as long as there isn't configuration data that you want to bring over from the old version to the new version.  I'm sure there is a solution to this; I just haven't dug deep enough yet.

Using a self-hosted Windows Service is not likely to fit the bill for every Web API you deploy; however, it can sometimes be the ideal solution. Hopefully you will find that having this set of tricks up your sleeve comes in handy one day.

Image Credit: Kinder Surprise https://flic.kr/p/fw9eVB
Image Credit:  Tug of War https://flic.kr/p/nD2nj


Dominick Baier: IdentityServer v3 Nuget and Self-Hosting

Thanks to Damian and Maurice we now have a build script for IdSrv3 that creates a Nuget package *and* internalizes all dependencies. So in other words you only need to reference a single package (well strictly speaking two) to self host the STS (including Autofac, Web API, various Katana components etc). Pretty cool. Thanks guys!

A picture says more than 1000 words (taken from the new self host sample in the repo).

image


Filed under: IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Dominick Baier: Covert Redirect – really?

In an era where security vulnerabilities have logos, stickers and mainstream media coverage, it seems to be really easy to attract attention with simple input validation flaws. Quoting:

“Covert Redirect is an application that takes a parameter and redirects a user to the parameter value WITHOUT SUFFICIENT validation. This is often the result of a website’s overconfidence in its partners.”

Oh yes – and amongst a myriad of other scenarios this also applies to URLs of redirect based authentication/token protocols like OAuth2, OpenID Connect, WS-Federation or SAML2p. And guys like Egor Homakov have already shown a number of times that you are bound to be doomed if you give external parties too much control over your redirect URLs.

The good thing is, that every serious implementer of the above protocols also reads the specs and accompanying threat model – e.g. quoting the OpenID Connect spec (section 3.1.2.1) :

redirect_uri
REQUIRED. Redirection URI to which the response will be sent. This URI MUST exactly match one of the Redirection URI values for the Client pre-registered at the OpenID Provider, with the matching performed as described in Section 6.2.1 of [RFC3986] (Simple String Comparison)”

Or my blog post from over a year ago:

“If you don’t properly validate the redirect URI you might be sending the token or code to an unintended location. We decided to a) always require SSL if it is a URL and b) do an exact match against the registered redirect URI in our database. No sub URLs, no query strings. Nothing.”

So this type of attack is really a thing of the past – right?
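
To make that exact-match rule concrete, here is a minimal sketch of the check (illustrative only, not code from IdentityServer or AuthorizationServer):

using System;

static class RedirectUriValidator
{
    public static bool IsAllowedRedirectUri(string requestedUri, string registeredUri)
    {
        // a) always require SSL
        if (!requestedUri.StartsWith("https://", StringComparison.OrdinalIgnoreCase))
            return false;

        // b) exact match against the registered redirect URI - no sub URLs, no query strings
        return string.Equals(requestedUri, registeredUri, StringComparison.Ordinal);
    }
}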


Filed under: .NET Security, AuthorizationServer, IdentityServer, OAuth, OpenID Connect, Uncategorized, WebAPI

