Andrew Lock: Exploring Middleware as MVC Filters in ASP.NET Core 1.1

Exploring Middleware as MVC Filters in ASP.NET Core 1.1

One of the new features released in ASP.NET Core 1.1 is the ability to use middleware as an MVC Filter. In this post I'll take a look at how the feature is implemented by peering into the source code, rather than focusing on how you can use it. In the next post I'll look at how you can use the feature to allow greater code reuse.

Middleware vs Filters

The first step is to consider why you would choose to use middleware over filters, or vice versa. Both are designed to handle cross-cutting concerns of your application and both are used in a 'pipeline', so in some cases you could choose either successfully.

The main difference between them is their scope. Filters are a part of MVC, so they are scoped entirely to the MVC middleware. Middleware only has access to the HttpContext and anything added by preceding middleware. In contrast, filters have access to the wider MVC context, so can access routing data and model binding information for example.

Generally speaking, if you have a cross-cutting concern that is independent of MVC, then using middleware makes sense; if your cross-cutting concern relies on MVC concepts, or must run midway through the MVC pipeline, then filters make sense.


So why would you want to use middleware as filters then? A couple of reasons come to mind for me.

First, you have some middleware that already does what you want, but you now need the behaviour to occur midway through the MVC middleware. You could rewrite your middleware as a filter, but it would be nicer to just be able to plug it in as-is. This is especially true if you are using a piece of third-party middleware and you don't have access to the source code.

Second, you have functionality that needs to logically run as both middleware and a filter. In that case you can just have the one implementation that is used in both places.

Using the MiddlewareFilterAttribute

On the announcement post, you will find an example of how to use middleware as filters. Here I'll show a cut-down example, in which I want to run MyCustomMiddleware when a specific MVC action is called.

There are two parts to the process, the first is to create a middleware pipeline object:

public class MyPipeline  
{
    public void Configure(IApplicationBuilder applicationBuilder) 
    {
        var options = // any additional configuration

        applicationBuilder.UseMyCustomMiddleware(options);
    }
}

and the second is to use an instance of the MiddlewareFilterAttribute on an action or a controller, wherever it is needed.

[MiddlewareFilter(typeof(MyPipeline))]
public IActionResult ActionThatNeedsCustomfilter()  
{
    return View();
}

With this setup, MyCustomMiddleware will run each time the action method ActionThatNeedsCustomfilter is called.

It's worth noting that the MiddlewareFilterAttribute on the action method does not take the type of the middleware component itself (MyCustomMiddleware); it actually takes the type of a pipeline class which configures that middleware. Don't worry about this too much, as we'll come back to it again later.

For the rest of this post, I'll dip into the MVC repository and show how the feature is implemented.

The MiddlewareFilterAttribute

As we've already seen, the middleware filter feature starts with the MiddlewareFilterAttribute applied to a controller or method. This attribute implements the IFilterFactory interface which is useful for injecting services into MVC filters. The implementation of this interface just requires one method, CreateInstance(IServiceProvider provider):

public class MiddlewareFilterAttribute : Attribute, IFilterFactory, IOrderedFilter  
{
    public MiddlewareFilterAttribute(Type configurationType)
    {
        ConfigurationType = configurationType;
    }

    public Type ConfigurationType { get; }

    public IFilterMetadata CreateInstance(IServiceProvider serviceProvider)
    {
        var middlewarePipelineService = serviceProvider.GetRequiredService<MiddlewareFilterBuilder>();
        var pipeline = middlewarePipelineService.GetPipeline(ConfigurationType);

        return new MiddlewareFilter(pipeline);
    }
}

The implementation of the attribute is fairly self-explanatory. First, a MiddlewareFilterBuilder object is obtained from the dependency injection container. Next, GetPipeline is called on the builder, passing in the ConfigurationType that was supplied when creating the attribute (MyPipeline in the previous example).

GetPipeline returns a RequestDelegate, which represents a middleware pipeline that takes in an HttpContext and returns a Task:

public delegate Task RequestDelegate(HttpContext context);  

Finally, the delegate is used to create a new MiddlewareFilter, which is returned by the method. This pattern of using an IFilterFactory attribute to create an actual filter instance is very common in the MVC code base, and works around the problems of service injection into attributes, as well as ensuring each component sticks to the single responsibility principle.

Building the pipeline with the MiddlewareFilterBuilder

In the last snippet we saw the MiddlewareFilterBuilder being used to turn our MyPipeline type into an actual, runnable piece of middleware. Taking a look inside the MiddlewareFilterBuilder, you will see an interesting use case of a Lazy<> with a ConcurrentDictionary, to ensure that each pipeline Type passed in to the service is only ever created once. This was the usage I wrote about in my last post.
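In abbreviated form, the caching inside GetPipeline looks something like this (a sketch based on the MVC source; the real class has a little more going on):

private readonly ConcurrentDictionary<Type, Lazy<RequestDelegate>> _pipelinesCache
    = new ConcurrentDictionary<Type, Lazy<RequestDelegate>>();

public RequestDelegate GetPipeline(Type configurationType)
{
    // GetOrAdd may create more than one Lazy<>, but only one instance is ever stored,
    // so BuildPipeline runs at most once per configuration type
    var lazyPipeline = _pipelinesCache.GetOrAdd(
        configurationType,
        key => new Lazy<RequestDelegate>(() => BuildPipeline(key)));

    return lazyPipeline.Value;
}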

The call to GetPipeline initialises a pipeline for the provided type using the BuildPipeline method, shown below in abbreviated form:

private RequestDelegate BuildPipeline(Type middlewarePipelineProviderType)  
{
    var nestedAppBuilder = ApplicationBuilder.New();

    // Get the 'Configure' method from the user provided type.
    var configureDelegate = _configurationProvider.CreateConfigureDelegate(middlewarePipelineProviderType);
    configureDelegate(nestedAppBuilder);

    nestedAppBuilder.Run(async (httpContext) =>
    {
        // additional end-middleware, covered later
    });

    return nestedAppBuilder.Build();
}

This method creates a new IApplicationBuilder, and uses it to configure a middleware pipeline, using the custom pipeline supplied earlier (MyPipeline). It then adds an additional piece of 'end-middleware' at the end of the pipeline, which I'll come back to later, and builds the pipeline into a RequestDelegate.

Creating the pipeline from MyPipeline is performed by a MiddlewareFilterConfigurationProvider, which attempts to find an appropriate Configure method on it.

You can think of the MyPipeline class as a mini-Startup class. Just like the Startup class, you need a Configure method to add middleware to an IApplicationBuilder, and just like in Startup, you can inject additional services into the method. One of the big differences is that you can't have environment-specific Configure methods like ConfigureDevelopment here - your class must have one, and only one, configuration method called Configure.
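For example, a pipeline class can ask for additional services in its Configure method in the same way Startup does. A hypothetical sketch (the ILoggerFactory parameter and the UseMyCustomMiddleware extension are just for illustration):

public class MyPipeline
{
    // Additional services are resolved from DI and injected into Configure
    public void Configure(IApplicationBuilder applicationBuilder, ILoggerFactory loggerFactory)
    {
        var logger = loggerFactory.CreateLogger("MyPipeline");
        logger.LogInformation("Building the middleware filter pipeline");

        applicationBuilder.UseMyCustomMiddleware();
    }
}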

The MiddlewareFilter

So just to recap, you add a MiddlewareFilterAttribute to one of your action methods or controllers, passing in a pipeline to use as a filter, e.g. MyPipeline. This uses a MiddlewareFilterBuilder to create a RequestDelegate, which in turn is used to create a MiddlewareFilter. This is the object actually added to the MVC filter pipeline.

The MiddlewareFilter implements IAsyncResourceFilter, so it runs early in the filter pipeline - after AuthorizationFilters have run, but before Model Binding and Action filters. This allows you to potentially short-circuit requests completely should you need to.

The MiddlewareFilter implements the single required method OnResourceExecutionAsync. The execution is very simple. First it records the filter's ResourceExecutingContext, as well as the ResourceExecutionDelegate representing the next filter to execute, in a new MiddlewareFilterFeature. This feature is then stored against the HttpContext itself, so it can be accessed elsewhere. The middleware pipeline we created previously is then invoked using the HttpContext.

public class MiddlewareFilter : IAsyncResourceFilter  
{
    private readonly RequestDelegate _middlewarePipeline;
    public MiddlewareFilter(RequestDelegate middlewarePipeline)
    {
        _middlewarePipeline = middlewarePipeline;
    }

    public Task OnResourceExecutionAsync(ResourceExecutingContext context, ResourceExecutionDelegate next)
    {
        var httpContext = context.HttpContext;

        var feature = new MiddlewareFilterFeature()
        {
            ResourceExecutionDelegate = next,
            ResourceExecutingContext = context
        };
        httpContext.Features.Set<IMiddlewareFilterFeature>(feature);

        return _middlewarePipeline(httpContext);
    }
}

From the point of view of the middleware pipeline we created, it is as though it was called as part of the normal pipeline; it just receives an HttpContext to work with. If need be though, it can access the MVC context by accessing the MiddlewareFilterFeature.

If you have written any filters previously, something may seem a bit off with this code. Normally, you would call await next() to execute the next filter in the pipeline before returning, but here we are just returning the Task from our RequestDelegate invocation. How does the pipeline continue? To see how, we'll skip back to the 'end-middleware' I glossed over in BuildPipeline.

Using the end-middleware to continue the filter pipeline

The middleware added at the end of the BuildPipeline method is responsible for continuing the execution of the filter pipeline. An abbreviated form looks like this:

nestedAppBuilder.Run(async (httpContext) =>  
{
    var feature = httpContext.Features.Get<IMiddlewareFilterFeature>();

    var resourceExecutionDelegate = feature.ResourceExecutionDelegate;
    var resourceExecutedContext = await resourceExecutionDelegate();

    if (!resourceExecutedContext.ExceptionHandled && resourceExecutedContext.Exception != null)
    {
        throw resourceExecutedContext.Exception;
    }
});

There are two main functions of this middleware. The primary goal is ensuring the filter pipeline is continued after the MiddlewareFilter has executed. This is achieved by loading the IMiddlewareFilterFeature which was saved to the HttpContext when the filter began executing. It can then access the next filter via the ResourceExecutionDelegate and await its execution as usual.

The second goal is to behave like a middleware pipeline rather than a filter pipeline when exceptions are thrown. That is, if a later filter or action method throws an exception, and no filter handles the exception, then the end-middleware re-throws it, so that the middleware pipeline used in the filter can handle it as middleware normally would (with a try-catch).

Note that Get<IMiddlewareFilterFeature>() will be called at the end of each MiddlewareFilter's pipeline, before the filter completes. If you have multiple MiddlewareFilters in the filter pipeline, each one will set a new instance of IMiddlewareFilterFeature, overwriting the values saved earlier. I haven't dug into it, but that could potentially cause an issue if your custom pipeline contains middleware that both operates on the response being sent back through the pipeline after other middleware has executed, and also tries to load the IMiddlewareFilterFeature. In that case, it could get the IMiddlewareFilterFeature associated with a different MiddlewareFilter. It's a pretty unlikely scenario I suspect, but still, just watch out for it.

Wrapping up

That brings us to the end of this look under the covers of middleware filters. Hopefully you found it interesting; personally, I just enjoy looking at the repos as a source of inspiration, should I ever need to implement something similar in the future. Hope you enjoyed it!


Ben Foster: Using .NET Core Configuration with legacy projects

In .NET Core, configuration has been re-engineered, throwing away the System.Configuration model that relied on XML-based configuration files and introducing a number of new configuration components offering more flexibility and better extensibility.

At its lowest level, the new configuration system still provides access to key/value based settings. However, it also supports multiple configuration sources such as JSON files, and probably my favourite feature, strongly typed binding to configuration classes.

Whilst the new configuration system sits under the ASP.NET repository on GitHub, it doesn't actually have any dependency on the new ASP.NET components, meaning it can also be used in your non-.NET Core projects too.

In this post I'll cover how to use .NET Core Configuration in an ASP.NET Web API application.

Install the packages

The new .NET Core configuration components are published under Microsoft.Extensions.Configuration.* packages on NuGet. For this demo I've installed the following packages:

  • Microsoft.Extensions.Configuration
  • Microsoft.Extensions.Configuration.Json (support for JSON configuration files)
  • Microsoft.Extensions.Configuration.Binder (strongly-typed binding of configuration settings)

Initialising configuration

To initialise the configuration system we use ConfigurationBuilder. When you install additional configuration sources, the builder is extended with new methods for adding those sources. Finally, call Build() to create a configuration instance:

IConfigurationRoot configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true)
    .Build();
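In this demo the resulting IConfigurationRoot is exposed as a static Config property on the Web API Startup class, which is what the controllers below read from. A minimal sketch, assuming an OWIN self-hosted Startup (the Web API bootstrapping itself is omitted):

public class Startup
{
    // Exposed statically for simplicity so the API controllers can read configuration
    public static IConfigurationRoot Config { get; private set; }

    public void Configuration(IAppBuilder app)
    {
        Config = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", optional: true)
            .Build();

        // ... standard Web API / OWIN setup (routes, etc.) goes here
    }
}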

Accessing configuration settings

Once you have the configuration instance, settings can be accessed using their key:

var applicationName = configuration["ApplicationName"];

If your configuration settings have a hierarchical structure (likely if you're using JSON or XML files) then each level in the hierarchy will be separated with a :.

To demonstrate I've added an appsettings.json file containing a few configuration settings:

{
  "connectionStrings": {
    "MyDb": "server=localhost;database=mydb;integrated security=true"
  },
  "apiSettings": {
    "url": "http://localhost/api",
    "apiKey": "sk_1234566",
    "useCache":  true
  }
}

I've then wired up an endpoint in my controller to return the configuration, using keys to access my values:

public class ConfigurationController : ApiController
{
    public HttpResponseMessage Get()
    {
        var config = new
        {
            MyDbConnectionString = Startup.Config["ConnectionStrings:MyDb"],
            ApiSettings = new
            {
                Url = Startup.Config["ApiSettings:Url"],
                ApiKey = Startup.Config["ApiSettings:ApiKey"],
                UseCache = Startup.Config["ApiSettings:UseCache"],
            }
        };

        return Request.CreateResponse(config);
    }
}

When I hit my /configuration endpoint I get the following JSON response:

{
    "MyDbConnectionString": "server=localhost;database=mydb;integrated security=true",
    "ApiSettings": {
        "Url": "http://localhost/api",
        "ApiKey": "sk_1234566",
        "UseCache": "True"
    }
}

Strongly-typed configuration

Of course, accessing settings in this way isn't a vast improvement over using ConfigurationManager, and as you'll notice above, we're not getting the correct type for all of our settings.

Fortunately the new .NET Core configuration system supports strongly-typed binding of your configuration, using Microsoft.Extensions.Configuration.Binder.

I created the following class to bind my configuration to:

public class AppConfig
{
    public ConnectionStringsConfig ConnectionStrings { get; set; }
    public ApiSettingsConfig ApiSettings { get; set; }

    public class ConnectionStringsConfig
    {
        public string MyDb { get; set; }
    }   

    public class ApiSettingsConfig
    {
        public string Url { get; set; }
        public string ApiKey { get; set; }
        public bool UseCache { get; set; }
    }
}

To bind to this class directly, use the Get<T> extensions provided by the binder package. Here's my updated controller:

public HttpResponseMessage Get()
{
    var config = Startup.Config.Get<AppConfig>();
    return Request.CreateResponse(config);
}

The response:

{
   "ConnectionStrings":{
      "MyDb":"server=localhost;database=mydb;integrated security=true"
   },
   "ApiSettings":{
      "Url":"http://localhost/api",
      "ApiKey":"sk_1234566",
      "UseCache":true
   }
}

Now I can access my application configuration in a much nicer way:

if (config.ApiSettings.UseCache)
{

}

What about Web/App.config?

So far I've demonstrated how to use some of the new configuration features in a legacy application. But what if you still rely on traditional XML based configuration files like web.config or app.config?

In my application I still have a few settings in app.config (I'm self-hosting the API) that I require. Ideally I'd like to use the .NET Core configuration system to bind these to my AppConfig class too:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <connectionStrings>
    <add name="MyLegacyDb" connectionString="server=localhost;database=legacy" />
  </connectionStrings>
  <appSettings>
    <add key="ApplicationName" value="CoreConfigurationDemo"/>
  </appSettings>
</configuration>

There is an XML configuration source for .NET Core. However, if you try to use this for appSettings or connectionStrings elements, you'll find the generated keys are not really ideal for strongly-typed binding:

IConfigurationRoot configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true)
    .AddXmlFile("app.config")
    .Build();

If we inspect the configuration after calling Build() we get the following key/value for the MyLegacyDb connection string:

[connectionStrings:add:MyLegacyDb:connectionString, server=localhost;database=legacy]

This is due to how the XML source binds XML attributes.

Given that we still have access to the older System.Configuration system, it makes sense to use this to access our XML config files and then plug the values into the new .NET Core configuration system. We can do this by creating a custom configuration provider.

Creating a custom configuration provider

To implement a custom configuration provider you implement the IConfigurationProvider and IConfigurationSource interfaces. You can also derive from the abstract class ConfigurationProvider which will save you writing some boilerplate code.

For a more advanced implementation that requires reading file contents and supports multiple files of the same type, check out Andrew Lock's write-up on how to add a YAML configuration provider.

Since I'm relying on System.Configuration.ConfigurationManager to read app.config and do not need to support multiple files, my implementation is quite simple:

public class LegacyConfigurationProvider : ConfigurationProvider, IConfigurationSource
{
    public override void Load()
    {
        foreach (ConnectionStringSettings connectionString in ConfigurationManager.ConnectionStrings)
        {
            Data.Add($"ConnectionStrings:{connectionString.Name}", connectionString.ConnectionString);
        }

        foreach (var settingKey in ConfigurationManager.AppSettings.AllKeys)
        {
            Data.Add(settingKey, ConfigurationManager.AppSettings[settingKey]);
        }
    }

    public IConfigurationProvider Build(IConfigurationBuilder builder)
    {
        return this;
    }
}

When ConfigurationBuilder.Build() is called, the Build method of each configured source is executed, which returns an IConfigurationProvider used to get the configuration data. Since we're deriving from ConfigurationProvider, we can override Load, adding each of the connection strings and application settings from app.config.

I updated my AppConfig to include the new setting and connection string as below:

public class AppConfig
{
    public string ApplicationName { get; set; }
    public ConnectionStringsConfig ConnectionStrings { get; set; }
    public ApiSettingsConfig ApiSettings { get; set; }

    public class ConnectionStringsConfig
    {
        public string MyDb { get; set; }
        public string MyLegacyDb { get; set; }
    }   

    public class ApiSettingsConfig
    {
        public string Url { get; set; }
        public string ApiKey { get; set; }
        public bool UseCache { get; set; }
    }
}

The only change I need to make to my application is to add the configuration provider to my configuration builder:

IConfigurationRoot configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true)
    .Add(new LegacyConfigurationProvider())
    .Build();

Then, when I hit my /configuration endpoint I get my complete configuration, bound from both my JSON file and app.config:

{
   "ApplicationName":"CoreConfigurationDemo",
   "ConnectionStrings":{
      "MyDb":"server=localhost;integrated security=true",
      "MyLegacyDb":"server=localhost;database=legacy"
   },
   "ApiSettings":{
      "Url":"http://localhost/api",
      "ApiKey":"sk_1234566",
      "UseCache":true
   }
}


Damien Bowden: Extending Identity in IdentityServer4 to manage users in ASP.NET Core

This article shows how Identity can be extended and used together with IdentityServer4 to implement application-specific requirements. The application allows users to register, and registered users can access the application for 7 days. After this, the user cannot log in. Any admin can activate or deactivate a user using a custom user management API. Extra properties are added to the Identity user model to support this. Identity is persisted using EFCore and SQLite. The SPA application is implemented using Angular 2, Webpack 2 and TypeScript 2.

Code: github

Other posts in this series:

Updating Identity

Updating Identity is pretty easy. The IdentityUser class is provided by the Microsoft.AspNetCore.Identity.EntityFrameworkCore package, which is included in the project as a NuGet package, and is extended by the application's ApplicationUser class. You can add any extra required properties to this class.

using System;
using Microsoft.AspNetCore.Identity.EntityFrameworkCore;

namespace IdentityServerWithAspNetIdentity.Models
{
    public class ApplicationUser : IdentityUser
    {
        public bool IsAdmin { get; set; }
        public string DataEventRecordsRole { get; set; }
        public string SecuredFilesRole { get; set; }
        public DateTime AccountExpires { get; set; }
    }
}

Identity needs to be added to the application. This is done in the startup class in the ConfigureServices method using the AddIdentity extension. SQLite is used to persist the data. The ApplicationDbContext which uses SQLite is then used as the store for Identity.

services.AddDbContext<ApplicationDbContext>(options =>
	options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));

services.AddIdentity<ApplicationUser, IdentityRole>()
.AddEntityFrameworkStores<ApplicationDbContext>()
.AddDefaultTokenProviders();

The connection string for the SQLite database is read from the appsettings. The configuration is read using the ConfigurationBuilder in the Startup constructor.

"ConnectionStrings": {
        "DefaultConnection": "Data Source=C:\\git\\damienbod\\AspNet5IdentityServerAngularImplicitFlow\\src\\ResourceWithIdentityServerWithClient\\usersdatabase.sqlite"
    },
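A typical Startup constructor that reads this configuration looks something like the following (a sketch of the standard ASP.NET Core 1.x template; the actual project may differ slightly):

public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
        .AddEnvironmentVariables();

    Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }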
   

The Identity store is then created using the EFCore migrations.

dotnet ef migrations add testMigration

dotnet ef database update

The new Identity properties are used in three ways: when creating a new user, when creating a token for a user, and when validating the token on a resource using policies.

Using Identity creating a new user

The Identity ApplicationUser is created in the Register method in the AccountController. The new extended properties which were added to the ApplicationUser can be used as required. In this example, a new user will have access for 7 days. If users should be able to set the custom properties themselves, the RegisterViewModel model and the corresponding view need to be extended.

[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Register(RegisterViewModel model, string returnUrl = null)
{
	ViewData["ReturnUrl"] = returnUrl;
	if (ModelState.IsValid)
	{
		var dataEventsRole = "dataEventRecords.user";
		var securedFilesRole = "securedFiles.user";
		if (model.IsAdmin)
		{
			dataEventsRole = "dataEventRecords.admin";
			securedFilesRole = "securedFiles.admin";
		}

		var user = new ApplicationUser {
			UserName = model.Email,
			Email = model.Email,
			IsAdmin = model.IsAdmin,
			DataEventRecordsRole = dataEventsRole,
			SecuredFilesRole = securedFilesRole,
			AccountExpires = DateTime.UtcNow.AddDays(7.0)
		};

		var result = await _userManager.CreateAsync(user, model.Password);
		if (result.Succeeded)
		{
			await _signInManager.SignInAsync(user, isPersistent: false);
			_logger.LogInformation(3, "User created a new account with password.");
			return RedirectToLocal(returnUrl);
		}
		AddErrors(result);
	}

	return View(model);
}

Using Identity creating a token in IdentityServer4

The Identity properties need to be added to the claims so that the client SPA, or whatever client it is, can use them. In IdentityServer4, the IProfileService interface is used for this. Each custom ApplicationUser property is added as a claim as required.

using System;
using System.Linq;
using System.Security.Claims;
using System.Threading.Tasks;
using IdentityModel;
using IdentityServer4.Extensions;
using IdentityServer4.Models;
using IdentityServer4.Services;
using IdentityServerWithAspNetIdentity.Models;
using Microsoft.AspNetCore.Identity;

namespace ResourceWithIdentityServerWithClient
{
    public class IdentityWithAdditionalClaimsProfileService : IProfileService
    {
        private readonly IUserClaimsPrincipalFactory<ApplicationUser> _claimsFactory;
        private readonly UserManager<ApplicationUser> _userManager;

        public IdentityWithAdditionalClaimsProfileService(UserManager<ApplicationUser> userManager,  IUserClaimsPrincipalFactory<ApplicationUser> claimsFactory)
        {
            _userManager = userManager;
            _claimsFactory = claimsFactory;
        }

        public async Task GetProfileDataAsync(ProfileDataRequestContext context)
        {
            var sub = context.Subject.GetSubjectId();

            var user = await _userManager.FindByIdAsync(sub);
            var principal = await _claimsFactory.CreateAsync(user);

            var claims = principal.Claims.ToList();
            if (!context.AllClaimsRequested)
            {
                claims = claims.Where(claim => context.RequestedClaimTypes.Contains(claim.Type)).ToList();
            }

            claims.Add(new Claim(JwtClaimTypes.GivenName, user.UserName));

            if (user.IsAdmin)
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "admin"));
            }
            else
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "user"));
            }

            if (user.DataEventRecordsRole == "dataEventRecords.admin")
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "dataEventRecords.admin"));
                claims.Add(new Claim(JwtClaimTypes.Role, "dataEventRecords.user"));
                claims.Add(new Claim(JwtClaimTypes.Role, "dataEventRecords"));
            }
            else
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "dataEventRecords.user"));
                claims.Add(new Claim(JwtClaimTypes.Role, "dataEventRecords"));
            }

            if (user.SecuredFilesRole == "securedFiles.admin")
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "securedFiles.admin"));
                claims.Add(new Claim(JwtClaimTypes.Role, "securedFiles.user"));
                claims.Add(new Claim(JwtClaimTypes.Role, "securedFiles"));
            }
            else
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "securedFiles.user"));
                claims.Add(new Claim(JwtClaimTypes.Role, "securedFiles"));
            }

            claims.Add(new System.Security.Claims.Claim(StandardScopes.Email.Name, user.Email));
            

            context.IssuedClaims = claims;
        }

        public async Task IsActiveAsync(IsActiveContext context)
        {
            var sub = context.Subject.GetSubjectId();
            var user = await _userManager.FindByIdAsync(sub);
            context.IsActive = user != null && user.AccountExpires > DateTime.UtcNow;
        }
    }
}
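For IdentityServer4 to pick up this profile service, it needs to be registered with the dependency injection container. A minimal sketch of the registration in ConfigureServices (the rest of the IdentityServer setup is omitted):

services.AddTransient<IProfileService, IdentityWithAdditionalClaimsProfileService>();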

Using the Identity properties validating a token

The IsAdmin property is used to define whether a logged-on user has the admin role. This was added to the token using the admin claim in the IProfileService. Now this can be used by defining a policy and validating the policy in a controller. The policies are added in the Startup class in the ConfigureServices method.

services.AddAuthorization(options =>
{
	options.AddPolicy("dataEventRecordsAdmin", policyAdmin =>
	{
		policyAdmin.RequireClaim("role", "dataEventRecords.admin");
	});
	options.AddPolicy("admin", policyAdmin =>
	{
		policyAdmin.RequireClaim("role", "admin");
	});
	options.AddPolicy("dataEventRecordsUser", policyUser =>
	{
		policyUser.RequireClaim("role", "dataEventRecords.user");
	});
});

The policy can then be used, for example, in an MVC controller using the Authorize attribute. The admin policy is used in the UserManagementController.

[Authorize("admin")]
[Produces("application/json")]
[Route("api/UserManagement")]
public class UserManagementController : Controller
{
    // ...
}

Now that users can be admins and accounts expire after 7 days, the application requires a UI to manage this. This UI is implemented in the Angular 2 SPA, and it requires a user management API to get all the users and also to update them. The Identity EFCore ApplicationDbContext is used directly in the controller to simplify things, but usually this logic would be separated from the controller (I like to have no logic in the MVC controller), and if you have a lot of users, some type of search logic with a filtered result list would need to be supported.

using System;
using System.Collections.Generic;
using System.Linq;
using IdentityServerWithAspNetIdentity.Data;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using ResourceWithIdentityServerWithClient.Model;

namespace ResourceWithIdentityServerWithClient.Controllers
{
    [Authorize("admin")]
    [Produces("application/json")]
    [Route("api/UserManagement")]
    public class UserManagementController : Controller
    {
        private readonly ApplicationDbContext _context;

        public UserManagementController(ApplicationDbContext context)
        {
            _context = context;
        }

        [HttpGet]
        public IActionResult Get()
        {
            var users = _context.Users.ToList();
            var result = new List<UserDto>();

            foreach(var applicationUser in users)
            {
                var user = new UserDto
                {
                    Id = applicationUser.Id,
                    Name = applicationUser.Email,
                    IsAdmin = applicationUser.IsAdmin,
                    IsActive = applicationUser.AccountExpires > DateTime.UtcNow
                };

                result.Add(user);
            }

            return Ok(result);
        }
        
        [HttpPut("{id}")]
        public void Put(string id, [FromBody]UserDto userDto)
        {
            var user = _context.Users.First(t => t.Id == id);

            user.IsAdmin = userDto.IsAdmin;
            if(userDto.IsActive)
            {
                if(user.AccountExpires < DateTime.UtcNow)
                {
                    user.AccountExpires = DateTime.UtcNow.AddDays(7.0);
                }
            }
            else
            {
                // deactivate user
                user.AccountExpires = new DateTime();
            }

            _context.Users.Update(user);
            _context.SaveChanges();
        }   
    }
}
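The UserDto used in the controller is a simple transfer object. A minimal sketch, with the properties inferred from the controller and the Angular template below:

public class UserDto
{
    public string Id { get; set; }
    public string Name { get; set; }
    public bool IsAdmin { get; set; }
    public bool IsActive { get; set; }
}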

Angular 2 User Management Component

The Angular 2 SPA is built using Webpack 2 with TypeScript. See https://github.com/damienbod/Angular2WebpackVisualStudio for how to set up an Angular 2, Webpack 2 app with ASP.NET Core.

The Angular 2 app requires a service to access the ASP.NET Core MVC API. This is implemented in the UserManagementService, which then needs to be added to the app.module.

import { Injectable } from '@angular/core';
import { Http, Response, Headers, RequestOptions } from '@angular/http';
import 'rxjs/add/operator/map';
import { Observable } from 'rxjs/Observable';
import { Configuration } from '../app.constants';
import { SecurityService } from '../services/SecurityService';
import { User } from './models/User';

@Injectable()
export class UserManagementService {

    private actionUrl: string;
    private headers: Headers;

    constructor(private _http: Http, private _configuration: Configuration, private _securityService: SecurityService) {
        this.actionUrl = `${_configuration.Server}/api/UserManagement/`;   
    }

    private setHeaders() {

        console.log("setHeaders started");

        this.headers = new Headers();
        this.headers.append('Content-Type', 'application/json');
        this.headers.append('Accept', 'application/json');

        var token = this._securityService.GetToken();
        if (token !== "") {
            let tokenValue = 'Bearer ' + token;
            console.log("tokenValue:" + tokenValue);
            this.headers.append('Authorization', tokenValue);
        }
    }

    public GetAll = (): Observable<User[]> => {
        this.setHeaders();
        let options = new RequestOptions({ headers: this.headers, body: '' });

        return this._http.get(this.actionUrl, options).map(res => res.json());
    }

    public Update = (id: string, itemToUpdate: User): Observable<Response> => {
        this.setHeaders();
        return this._http
            .put(this.actionUrl + id, JSON.stringify(itemToUpdate), { headers: this.headers });
    }
}

The UserManagementComponent uses the service and displays all the users, and provides a way of updating each user.

import { Component, OnInit } from '@angular/core';
import { SecurityService } from '../services/SecurityService';
import { Observable } from 'rxjs/Observable';
import { Router } from '@angular/router';

import { UserManagementService } from '../user-management/UserManagementService';
import { User } from './models/User';

@Component({
    selector: 'user-management',
    templateUrl: 'user-management.component.html'
})

export class UserManagementComponent implements OnInit {

    public message: string;
    public Users: User[];

    constructor(
        private _userManagementService: UserManagementService,
        public securityService: SecurityService,
        private _router: Router) {
        this.message = "user-management";
    }
    
    ngOnInit() {
        this.getData();
    }

    private getData() {
        console.log('User Management:getData starting...');
        this._userManagementService
            .GetAll()
            .subscribe(data => this.Users = data,
            error => this.securityService.HandleError(error),
            () => console.log('User Management Get all completed'));
    }

    public Update(user: User) {
        this._userManagementService.Update(user.id, user)
            .subscribe((() => console.log("subscribed")),
            error => this.securityService.HandleError(error),
            () => console.log("update request sent!"));
    }

}

The UserManagementComponent template uses the Users data to display and update the users.

<div class="col-md-12" *ngIf="securityService.IsAuthorized">
    <div class="panel panel-default">
        <div class="panel-heading">
            <h3 class="panel-title">{{message}}</h3>
        </div>
        <div class="panel-body"  *ngIf="Users">
            <table class="table">
                <thead>
                    <tr>
                        <th>Name</th>
                        <th>IsAdmin</th>
                        <th>IsActive</th>
                        <th></th>
                    </tr>
                </thead>
                <tbody>
                    <tr style="height:20px;" *ngFor="let user of Users">
                        <td>{{user.name}}</td>
                        <td>
                            <input type="checkbox" [(ngModel)]="user.isAdmin" class="form-control" style="box-shadow:none" />
                        </td>
                        <td>
                            <input type="checkbox" [(ngModel)]="user.isActive" class="form-control" style="box-shadow:none" />
                        </td>
                        <td>
                            <button (click)="Update(user)" class="form-control">Update</button>
                        </td>
                    </tr>
                </tbody>
            </table>

        </div>
    </div>
</div>

The user-management component and the service need to be added to the module.

import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { BrowserModule } from '@angular/platform-browser';

import { AppComponent } from './app.component';
import { Configuration } from './app.constants';
import { routing } from './app.routes';
import { HttpModule, JsonpModule } from '@angular/http';

import { SecurityService } from './services/SecurityService';
import { DataEventRecordsService } from './dataeventrecords/DataEventRecordsService';
import { DataEventRecord } from './dataeventrecords/models/DataEventRecord';

import { ForbiddenComponent } from './forbidden/forbidden.component';
import { HomeComponent } from './home/home.component';
import { UnauthorizedComponent } from './unauthorized/unauthorized.component';

import { DataEventRecordsListComponent } from './dataeventrecords/dataeventrecords-list.component';
import { DataEventRecordsCreateComponent } from './dataeventrecords/dataeventrecords-create.component';
import { DataEventRecordsEditComponent } from './dataeventrecords/dataeventrecords-edit.component';

import { UserManagementComponent } from './user-management/user-management.component';


import { HasAdminRoleAuthenticationGuard } from './guards/hasAdminRoleAuthenticationGuard';
import { HasAdminRoleCanLoadGuard } from './guards/hasAdminRoleCanLoadGuard';
import { UserManagementService } from './user-management/UserManagementService';

@NgModule({
    imports: [
        BrowserModule,
        FormsModule,
        routing,
        HttpModule,
        JsonpModule
    ],
    declarations: [
        AppComponent,
        ForbiddenComponent,
        HomeComponent,
        UnauthorizedComponent,
        DataEventRecordsListComponent,
        DataEventRecordsCreateComponent,
        DataEventRecordsEditComponent,
        UserManagementComponent
    ],
    providers: [
        SecurityService,
        DataEventRecordsService,
        UserManagementService,
        Configuration,
        HasAdminRoleAuthenticationGuard,
        HasAdminRoleCanLoadGuard
    ],
    bootstrap:    [AppComponent],
})

export class AppModule {}

Now the Identity users can be managed from the Angular 2 UI.


Links

https://github.com/IdentityServer/IdentityServer4

http://docs.identityserver.io/en/dev/

https://github.com/IdentityServer/IdentityServer4.Samples

https://docs.asp.net/en/latest/security/authentication/identity.html

https://github.com/IdentityServer/IdentityServer4/issues/349

ASP.NET Core, Angular2 with Webpack and Visual Studio



Andrew Lock: Troubleshooting ASP.NET Core 1.1.0 install problems

Troubleshooting ASP.NET Core 1.1.0 install problems

I was planning on playing with the latest .NET Core 1.1.0 preview recently, but I ran into a few issues getting it working on my Mac. As I suspected, this was entirely down to my mistakes and my machine's setup, but I'm documenting it here in case anyone else runs into similar problems!

Note that as of yesterday the RTM release of 1.1.0 is out, so while this post is not strictly applicable any more, I would probably have run into the same problems! I've updated the post to reflect the latest version numbers.

TL;DR; There were two issues I ran into. First, the global.json file I used specified an older version of the tooling. Second, I had an older version of the tooling installed that was, according to SemVer, newer than the version I had just installed!

Installing ASP.NET Core 1.1.0

I began by downloading the .NET Core 1.1.0 installer for macOS from the downloads page, following the instructions from the announcement blog post. The installation was quick and went smoothly, installing side-by-side with the existing .NET Core 1.0 RTM install.


Creating a new 1.1.0 project

According to the blog post, once you've run the installer you should be able to start creating 1.1.0 applications. Running dotnet new with the .NET CLI should create a new 1.1.0 application, with a project.json that contains an updated Microsoft.NETCore.App dependency, looking something like:

"frameworks": {
  "netcoreapp1.0": {
    "dependencies": {
      "Microsoft.NETCore.App": {
        "type": "platform",
        "version": "1.1.0"
      }
    },
    "imports": "dnxcore50"
  }
}

So I created a sub folder for a test project, ran dotnet new and eagerly checked the project.json:

"frameworks": {
  "netcoreapp1.0": {
    "dependencies": {
      "Microsoft.NETCore.App": {
        "type": "platform",
        "version": "1.0.0"
      }
    },
    "imports": "dnxcore50"
  }
}

Hmmm, that doesn't look right - we still seem to be getting a 1.0.0 project instead of 1.1.0…

Check the global.json

My first thought was that the install hadn't worked correctly - it is a preview after all (it was when I originally tried it!), so it wouldn't be completely unheard of. Running dotnet --version to check the version of the CLI being run returned:

$ dotnet --version
1.0.0-preview2-003121  

So the preview 2 tooling is being used, which corresponds to the .NET Core 1.0 RTM release, definitely the wrong version.

It was then I remembered a similar issue I had when moving from RC2 to the RTM release - check the global.json! When I had created my sub folder for testing dotnet new, I had automatically copied across a global.json from a previous project. Looking inside, this was what I found:

{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.0-preview2-003121"
  }
}

Bingo! If an SDK version is specified in global.json then it will be used in preference to the latest tooling. Updating the sdk section with the appropriate value, or removing it entirely, means the latest tooling should be used, which should let me create my 1.1.0 project.
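For example, with the 1.1 RTM tooling installed, the updated global.json could look something like this (the exact version string depends on the SDK you actually have installed):

{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.0-preview2-1-003177"
  }
}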

Take two - preview fun

After removing the sdk section of the global.json, I ran dotnet new again, and checked the project.json:

"frameworks": {
  "netcoreapp1.0": {
    "dependencies": {
      "Microsoft.NETCore.App": {
        "type": "platform",
        "version": "1.0.0"
      }
    },
    "imports": "dnxcore50"
  }
}

D'oh, that's still not right! At this point, I started to think something must have gone wrong with the installation, as I couldn't think of any other explanation. Luckily, it's easy to see which versions of the SDK are installed by checking the file system. On a Mac, you can see them at:

/usr/local/share/dotnet/sdk/

Checking the folder, this is what I saw, notice anything odd?

(Screenshot: the contents of /usr/local/share/dotnet/sdk/, showing the installed SDK versions)

There's quite a few different versions of the SDK in there, including the 1.0.0 RTM version (1.0.0-preview2-003121) and also the 1.1.0 Preview 1 version (1.0.0-preview2.1-003155). However there's also a slightly odd one that stands out - 1.0.0-preview3-003213. (Note, with the 1.1 RTM there is a whole new version, 1.0.0-preview2-1-003177)

Most people installing the .NET Core SDK will not run into this issue, as they likely won't have this additional preview3 version. I only have it installed (I think) because I created a couple of pull requests to the ASP.NET Core repositories recently. The way versioning works in the ASP.NET Core repositories for development versions means that although there is a preview3 version of the tooling, it is actually older than the preview2.1 version of the tooling just released, and generates 1.0.0 projects.

When you run dotnet new, and in the absence of a global.json with an sdk section, the CLI will use the most recent version of the tooling as determined by SemVer. Consequently, it had been using the preview3 version and generating 1.0.0 projects!

The simple solution was to delete the 1.0.0-preview3-003213 folder, and re-run dotnet new:

"frameworks": {
  "netcoreapp1.0": {
    "dependencies": {
      "Microsoft.NETCore.App": {
        "type": "platform",
        "version": "1.1.0-preview1-001100-00"
      }
    },
    "imports": "dnxcore50"
  }
}

Lo and behold, a 1.1.0 project!

Summary

The final issue I ran into is not something that general users have to worry about. The only reason it was a problem for me was due to working directly with the GitHub repo, and the slightly screwy SemVer versions when using development packages.

The global.json issue is one that you might run into when upgrading projects. It's well documented that you need to update it when upgrading, but it's easy to overlook.

Anyway, the issues I experienced were entirely down to my setup and stupidity rather than the installer or documentation, so hopefully things go smoother for you. Now time to play with new features!


Andrew Lock: Making ConcurrentDictionary GetOrAdd thread safe using Lazy

Making ConcurrentDictionary GetOrAdd thread safe using Lazy

I was browsing the ASP.NET Core MVC GitHub repo the other day, checking out the new 1.1.0 Preview 1 code, when I spotted a usage of ConcurrentDictionary that I thought was interesting. This post explores the GetOrAdd function, the level of thread safety it provides, and ways to add additional threading constraints.

I was looking at the code that enables using middleware as MVC filters where they are building up a filter pipeline. This needs to be thread-safe, so they sensibly use a ConcurrentDictionary<>, but instead of a dictionary of RequestDelegate, they are using a dictionary of Lazy<RequestDelegate>. Along with the initialisation is this comment:

// 'GetOrAdd' call on the dictionary is not thread safe and we might end up creating the pipeline more
// than once. To prevent this Lazy<> is used. In the worst case multiple Lazy<> objects are created for multiple
// threads but only one of the objects succeeds in creating a pipeline.
private readonly ConcurrentDictionary<Type, Lazy<RequestDelegate>> _pipelinesCache  
    = new ConcurrentDictionary<Type, Lazy<RequestDelegate>>();

This post will explore the pattern they are using and why you might want to use it in your code.

tl;dr; To make a ConcurrentDictionary only call a delegate once when using GetOrAdd, store your values as Lazy<T>, and consume them by calling GetOrAdd(key, valueFactory).Value.

The GetOrAdd function

The ConcurrentDictionary is a dictionary that allows you to add, fetch and remove items in a thread-safe way. If you're going to be accessing a dictionary from multiple threads, then it should be your go-to class.

The vast majority of methods it exposes are thread safe, with the notable exception of one of the GetOrAdd overloads:

TValue GetOrAdd(TKey key, Func<TKey, TValue> valueFactory);  

This overload takes a key value, and checks whether the key already exists in the dictionary. If the key already exists, then the associated value is returned; if the key does not exist, the provided delegate is run, the value is stored in the dictionary, and then returned to the caller.

For example, consider the following little program.

public static void Main(string[] args)  
{
    var dictionary = new ConcurrentDictionary<string, string>();

    var value = dictionary.GetOrAdd("key", x => "The first value");
    Console.WriteLine(value);

    value = dictionary.GetOrAdd("key", x => "The second value");
    Console.WriteLine(value);
}

The first time GetOrAdd is called, the dictionary is empty, so the value factory runs and returns the string "The first value", storing it against the key. On the second call, GetOrAdd finds the saved value and uses that instead of calling the factory. The output gives:

The first value  
The first value  

GetOrAdd and thread safety

Internally, the ConcurrentDictionary uses locking to make it thread safe for most methods, but GetOrAdd does not lock while valueFactory is running. This is done to prevent unknown code from blocking all the threads, but it means that valueFactory might run more than once if it is called simultaneously from multiple threads. However, thread safety kicks in when saving the returned value to the dictionary and when returning the generated value back to the caller, so you will always get the same value back from each call.

For example, consider the program below, which uses tasks to run threads simultaneously. It works very similarly to before, but runs the GetOrAdd function on two separate threads. It also increments a counter every time the valueFactory is run.

public class Program  
{
    private static int _runCount = 0;
    private static readonly ConcurrentDictionary<string, string> _dictionary
        = new ConcurrentDictionary<string, string>();

    public static void Main(string[] args)
    {
        var task1 = Task.Run(() => PrintValue("The first value"));
        var task2 = Task.Run(() => PrintValue("The second value"));
        Task.WaitAll(task1, task2);

        PrintValue("The third value")

        Console.WriteLine($"Run count: {_runCount}");
    }

    public static void PrintValue(string valueToPrint)
    {
        var valueFound = _dictionary.GetOrAdd("key",
                    x =>
                    {
                        Interlocked.Increment(ref _runCount);
                        Thread.Sleep(100);
                        return valueToPrint;
                    });
        Console.WriteLine(valueFound);
    }
}

The PrintValue function again calls GetOrAdd on the ConcurrentDictionary, passing in a Func<> that increments the counter and returns a string. Running this program produces one of two outputs, depending on the order the threads are scheduled; either

The first value  
The first value  
The first value  
Run count: 2  

or

The second value  
The second value  
The second value  
Run count: 2  

As you can see, every call to GetOrAdd returns the same value - which value that is depends on which thread returns first. However, the delegate is run on both asynchronous calls, as shown by _runCount = 2, because the value had not been stored by the first call before the second call ran. Stepping through, the interactions could look something like this:

  1. Thread A calls GetOrAdd on the dictionary for the key "key" but does not find it, so starts to invoke the valueFactory.

  2. Thread B also calls GetOrAdd on the dictionary for the key "key". Thread A has not yet completed, so no existing value is found, and Thread B also starts to invoke the valueFactory.

  3. Thread A completes its invocation, and returns the value "The first value" back to the concurrent dictionary. The dictionary checks there is still no value for "key", and inserts the new KeyValuePair. Finally, it returns "The first value" to the caller.

  4. Thread B completes its invocation and returns the value "The second value" back to the concurrent dictionary. The dictionary sees the value for "key" stored by Thread A, so it discards the value it created and uses that one instead, returning the value back to the caller.

  5. Thread C calls GetOrAdd and finds the value already exists for "key", so returns the value, without having to invoke valueFactory

In this case, running the delegate more than once has no adverse effects - all we care about is that the same value is returned from each call to GetOrAdd. But what if the delegate has side effects such that we need to ensure it is only run once?

Ensuring the delegate only runs once with Lazy

As we've seen, there are no guarantees made by ConcurrentDictionary about the number of times the Func<> will be called. When building a middleware pipeline, however, we need to be sure that the middleware is only built once, as it could be doing some bootstrapping that is expensive or not thread safe. The solution that the ASP.NET Core team used is to use Lazy<> initialisation.

The output we are aiming for is

The first value  
The first value  
The first value  
Run count: 1  

or similarly for "The second value" - it doesn't matter which wins out, the important points are that the same value is returned every time, and that _runCount is always 1.

Looking back at our previous example, instead of using a ConcurrentDictionary<string, string>, we create a ConcurrentDictionary<string, Lazy<string>>, and we update the PrintValue() method to create a lazy object instead:

public static void PrintValueLazy(string valueToPrint)  
{
    var valueFound = _lazyDictionary.GetOrAdd("key",
                x => new Lazy<string>(
                    () =>
                        {
                            Interlocked.Increment(ref _runCount);
                            Thread.Sleep(100);
                            return valueToPrint;
                        }));
    Console.WriteLine(valueFound.Value);
}
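For completeness, the backing field used above is declared in the same way as before, but with Lazy<string> values:

private static readonly ConcurrentDictionary<string, Lazy<string>> _lazyDictionary
    = new ConcurrentDictionary<string, Lazy<string>>();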

There are only two changes we have made here. We have updated the GetOrAdd call to return a Lazy<string> rather than a string directly, and we are calling valueFound.Value to get the actual string value to write to the console. To see why this solves the problem, let's step through what happens when we run the whole program.

  1. Thread A calls GetOrAdd on the dictionary for the key "key" but does not find it, so starts to invoke the valueFactory.

  2. Thread B also calls GetOrAdd on the dictionary for the key "key". Thread A has not yet completed, so no existing value is found, and Thread B also starts to invoke the valueFactory.

  3. Thread A completes its invocation, returning an uninitialised Lazy<string> object. The delegate inside the Lazy<string> has not been run at this point, we've just created the Lazy<string> container. The dictionary checks there is still no value for "key", and so inserts the Lazy<string> against it, and finally, returns the Lazy<string> back to the caller.

  4. Thread B completes its invocation, similarly returning an uninitialised Lazy<string> object. As before, the dictionary sees the Lazy<string> object for "key" stored by Thread A, so it discards the Lazy<string> it just created and uses that one instead, returning it back to the caller.

  5. Thread A calls Lazy<string>.Value. This invokes the provided delegate in a thread safe way, such that if it is called simultaneously by two threads, it will only run the delegate once.

  6. Thread B calls Lazy<string>.Value. This is the same Lazy<string> object that Thread A just initialised (remember the dictionary ensures you always get the same value back.) If Thread A is still running the initialisation delegate, then Thread B just blocks until it finishes and it can access the result. We just get the final return string, without invoking the delegate for a second time. This is what gives us the run-once behaviour we need.

  7. Thread C calls GetOrAdd and finds the Lazy<string> object already exists for "key", so returns the value, without having to invoke valueFactory. The Lazy<string> has already been initialised, so the resulting value is returned directly.

We still get the same behaviour from the ConcurrentDictionary in that we might run the valueFactory more than once, but now we are just calling new Lazy<>() inside the factory. In the worst case, we create multiple Lazy<> objects, which get discarded by the ConcurrentDictionary when consolidating inside the GetOrAdd method.

It is the Lazy<> object which enforces that we only run our expensive delegate once. By calling Lazy<>.Value we trigger the delegate to run in a thread safe way, such that we can be sure it will only be run by one thread at a time. Other threads which call Lazy<>.Value simultaneously will be blocked until the first call completes, and then will use the same result.

Summary

When using GetOrAdd, if your valueFactory is idempotent and not expensive, then there is no need for this trick. You can be sure you will always get the same value with each call, but you need to be aware the valueFactory may run multiple times.

If you have an expensive operation that must be run only once as part of a call to GetOrAdd, then using Lazy<> is a great solution. The only caveat to be aware of is that Lazy<>.Value will block other threads trying to access the value until the first call completes. Depending on your use case, this may or may not be a problem, and is the reason GetOrAdd does not have these semantics by default.


Andrew Lock: Fixing a bug: when concatenated strings turn into numbers in JavaScript

Fixing a bug: when concatenated strings turn into numbers in JavaScript

This is a very quick post about trying to fix a JavaScript bug that plagued me for an hour this morning.

tl;dr I was tripped up by a rogue unary operator and slap-dash copy-pasting.

The setup

On an existing web page, there was some JavaScript that builds up a string to insert into the DOM:

function GetTemplate(url, html)  
{
   // other details removed
   var template = '<div class="something"><a href="'
                  + url
                  + '" target="_blank"><strong>Details: </strong><span>'
                  + html
                  + '</span></a></div>';
  return template;
}

Ignore for now whether this code is an abomination and the vulnerabilities etc - it is what it is.

The requirement was simple: insert an optional additional <span> tag before the <strong>, only if the value of a variable provided is 'truthy'. Seems pretty easy right? It should have been.

The first attempt

I set about quickly rolling out the fix and came up with code something like this:

function GetTemplate(url, html, summary) {  
   // other details removed
   var template = '<div class="something"><a href="'
                  + url
                  + '" target="_blank">';

   if(summary) {
       template += '<span class="summary">' 
           + summary 
           + '</span>';
   }

   template +=
       +'<strong>Details: </strong><span>'
       + html
       + '</span></a></div>';

  return template;
}

All looked ok to me, F5 to reload the page, and… oh dear, that doesn't look right…

Fixing a bug: when concatenated strings turn into numbers in JavaScript

Can you spot what I did wrong?

The HTML that was generated looked like this:

<div class="something"><a href="https://thewebsite.com" target="blank">  
    <span class="summary">The summary</span>NaNThis is the inner message</span></a>
</div>  

Spotted it yet?

Concatenation vs addition

Looking at the generated HTML, there appears to be a rogue "NaN" string that has found its way into the generated HTML, and there's also no sign of the <strong> tag in the output. The presence of the NaN was a pretty clear indicator that there was some conversion to numbers going on here, but I couldn't for the life of me see where!

As I'm sure you all know, in JavaScript the + symbol can be used both for numeric addition and string concatenation, depending on the variables either side. For example,

console.log('value:' + 3);           // 'value:3'  
console.log(3 + 1);                   // 4  
console.log('value:' + 3 + '+' + 1); // 'value:3+1'  
console.log('value:' + 3 + 1);       // 'value:31'  
console.log('value:' + (3 + 1));     // 'value:4'  
console.log(3 + ' is the value');    // '3 is the value'  

In these examples, when either the left or right operands of the + symbol are a string, the other operand is coerced to a string and a concatenation is performed. Otherwise the operator is treated as an addition.

The presence of the NaN in the output string indicated there must be something going on where a string was being treated as a number. But given the concatenation rules above, and the fact we weren't using parseInt() or similar anywhere, it just didn't make any sense!

The culprit

Narrowing the problem down, the issue appeared to be in the third block of string concatenation, in which the strong tag is added:

template +=  
       +'<strong>Details: </strong><span>'
       + html
       + '</span></a></div>';

If you still haven't spotted it, writing it all one line may do the trick for you:

template += +'<strong>Details: </strong><span>' + html + '</span></a></div>';  

Right at the beginning of that statement I am calling 'string' += +'string'. See the extra + that's crept in through copying and pasting errors? That's the source of all my woes - a unary operation. To quote the You Don't Know JS book by Kyle Simpson:

+c here is showing the unary operator form (operator with only one operand) of the + operator. Instead of performing mathematic addition (or string concatenation -- see below), the unary + explicitly coerces its operand (c) to a number value.

This has an interesting effect on the subsequent string in that it tries to convert it to a number.

This was the exact problem I had. The rogue + was attempting to convert the string <strong>Details: </strong><span> to a number, was failing and returning NaN. This was then coerced to a string as a result of the subsequent concatenations, and broke my HTML! Removing that + fixed everything.

Bonus

As an interesting side point to this, I was using gulp-uglify to minify the resulting JavaScript as part of the build. As part of that minification, the 'unary operator plus value' combination (+'<strong>Details: </strong><span>') was actually being stored in the minified JavaScript as an explicit NaN. Gulp had seen my error and set it in stone for me!

I'm sure there's a lesson to be learnt here about not rushing and copying and pasting, but my immediate thought was for a small gulp plugin that warns you about unexpected NaNs in your minified code! I wouldn't be surprised if that already exists…


Andrew Lock: Using dependency injection in a .Net Core console application

Using dependency injection in a .Net Core console application

One of the key features of ASP.NET Core is baked in dependency injection. There may be various disagreements on the way that is implemented, but in general encouraging a good practice by default seems like a win to me.

Whether you choose to use the built-in container or a third-party container will likely come down to whether the built-in container is powerful enough for your given project. For small projects it may be fine, but if you need convention-based registration, logging/debugging tools, or more esoteric approaches like property injection, then you'll need to look elsewhere. Luckily, third-party containers are pretty easy to integrate, and are going to be getting easier.

Why use the built-in container?

One question that's come up a few times is whether you can use the built-in provider in a .NET Core console application. The short answer is not out-of-the-box, but adding it in is pretty simple. Having said that, whether it is worth using in this case is another question.

One of the advantages of the built-in container in ASP.NET Core is that the framework libraries themselves register their dependencies with it. When you call the AddMvc() extension method in your Startup.ConfigureServices method, the framework registers a whole plethora of services with the container. If you later add a third-party container, those dependencies are passed across to be re-registered, so they are available when resolved via the third-party container.

If you are writing a console app, then you likely don't need MVC or other ASP.NET Core specific services. In that case, it may be just as easy to start right off the bat using StructureMap or AutoFac instead of the limited built-in provider.

Having said that, most common services designed for use with ASP.NET Core will have extensions for registering with the built in container via IServiceCollection, so if you are using services such as logging, or the Options pattern, then it is certainly easier to use the provided extensions, and plug a third party on top of that if required.

Adding DI to a console app

If you decide the built-in container is the right approach, then adding it to your application is very simple using the Microsoft.Extensions.DependencyInjection package. To demonstrate the approach, I'm going to create a simple application that has two services:

public interface IFooService  
{
    void DoThing(int number);
}

public interface IBarService  
{
    void DoSomeRealWork();
}

Each of these services will have a single implementation. The BarService depends on an IFooService, and the FooService uses an ILoggerFactory to log some work:

public class BarService : IBarService  
{
    private readonly IFooService _fooService;
    public BarService(IFooService fooService)
    {
        _fooService = fooService;
    }

    public void DoSomeRealWork()
    {
        for (int i = 0; i < 10; i++)
        {
            _fooService.DoThing(i);
        }
    }
}

public class FooService : IFooService  
{
    private readonly ILogger<FooService> _logger;
    public FooService(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<FooService>();
    }

    public void DoThing(int number)
    {
        _logger.LogInformation($"Doing the thing {number}");
    }
}

As you can see above, I'm using the new logging infrastructure in my app, so I will need to add the appropriate package to my project.json. I'll also add the DependencyInjection package and the Microsoft.Extensions.Logging.Console package so I can see the results of my logging:

{
  "dependencies": {
    "Microsoft.Extensions.Logging": "1.0.0",
    "Microsoft.Extensions.Logging.Console": "1.0.0",
    "Microsoft.Extensions.DependencyInjection": "1.0.0"
  }
}

Finally, I'll update my static void Main to put all the pieces together. We'll walk through it in a second.

using Microsoft.Extensions.DependencyInjection;  
using Microsoft.Extensions.Logging;

public class Program  
{
    public static void Main(string[] args)
    {
        //setup our DI
        var serviceProvider = new ServiceCollection()
            .AddLogging()
            .AddSingleton<IFooService, FooService>()
            .AddSingleton<IBarService, BarService>()
            .BuildServiceProvider();

        //configure console logging
        serviceProvider
            .GetService<ILoggerFactory>()
            .AddConsole(LogLevel.Debug);

        var logger = serviceProvider.GetService<ILoggerFactory>()
            .CreateLogger<Program>();
        logger.LogDebug("Starting application");

        //do the actual work here
        var bar = serviceProvider.GetService<IBarService>();
        bar.DoSomeRealWork();

        logger.LogDebug("All done!");

    }
}

The first thing we do is configure the dependency injection container by creating a ServiceCollection, adding our dependencies, and finally building an IServiceProvider. This process is equivalent to the ConfigureServices method in an ASP.NET Core project, and is pretty much what happens behind the scenes. You can see we are using the IServiceCollection extension method to add the logging services to our application, and then registering our own services. The serviceProvider is our container we can use to resolve services in our application.

In the next step, we need to configure the logging infrastructure with a provider, so the results are output somewhere. We first fetch an instance of ILoggerFactory from our newly constructed serviceProvider, and add a console logger.

The remainder of the program shows more dependency injection in progress. We first use the ILoggerFactory from the container to create an ILogger<T>, and then fetch an instance of IBarService. As per our registrations, the IBarService is an instance of BarService, which will have an instance of FooService injected into it.

We can then run our application and see all our beautifully resolved dependencies!

Using dependency injection in a .Net Core console application

Adding StructureMap to your console app

As described previously, the built-in container is useful for adding framework libraries using the extension methods, like we saw with AddLogging above. However it is much less fully featured than many third-party containers.

For completeness, I'll show how easy it is to update the application to use a hybrid approach, using the built in container to easily add any framework dependencies, and using StructureMap for your own code. If you want a more detailed description of adding StructureMap to an ASP.NET Core application, see the post here.

First you need to add StructureMap to your project.json dependencies:

{
  "dependencies": {
    "StructureMap.Microsoft.DependencyInjection": "1.2.0"
  }
}

Now we'll update our static void main to use StructureMap for registering our custom dependencies:

using Microsoft.Extensions.DependencyInjection;  
using StructureMap;

public static void Main(string[] args)  
{
    // add the framework services
    var services = new ServiceCollection()
        .AddLogging();

    // add StructureMap
    var container = new Container();
    container.Configure(config =>
    {
        // Register stuff in container, using the StructureMap APIs...
        config.Scan(_ =>
                    {
                        _.AssemblyContainingType(typeof(Program));
                        _.WithDefaultConventions();
                    });
        // Populate the container using the service collection
        config.Populate(services);
    });

    var serviceProvider = container.GetInstance<IServiceProvider>();

    // rest of method as before
}

At first glance this may seem more complicated than the previous version, and it is, but it is also far more powerful. In the StructureMap example, we didn't have to explicitly register our IFooService or IBarService services - they were automatically registered by convention. When your apps start to grow, this sort of convention-based registration becomes enormously powerful, especially when coupled with the error handling and debugging capabilities available to you.

In this example I showed how to use StructureMap with the adapter to work with the IServiceCollection extension methods, but there's obviously no requirement to do that. Using StructureMap as your only registration source is perfectly valid; you'll just have to manually register any services added as part of the AddPLUGIN extension methods directly.

Summary

In this post I discussed why you might want to use the built-in container for dependency injection in a .NET Core application. I showed how you could add a new ServiceCollection to your project, register and configure the logging framework, and retrieve configured instances of services from it.

Finally, I showed how you could use a third-party container in combination with the built-in container to allow you to use more powerful registration features, such as convention based registration.


Pedro Félix: Accessing the HTTP Context on ASP.NET Core

TL;DR

On ASP.NET Core, the access to the request context can be done via the new IHttpContextAccessor interface, which provides a HttpContext property with this information. The IHttpContextAccessor is obtained via dependency injection or directly from the service locator. However, it requires an explicit service collection registration, mapping the IHttpContextAccessor interface to the HttpContextAccessor concrete class, with singleton scope.

Not so short version

System.Web

On classical ASP.NET, the current HTTP context, containing both request and response information, can be accessed anywhere via the omnipresent System.Web.HttpContext.Current static property. Internally, this property uses information stored in the CallContext object representing the current call flow. This CallContext is preserved even when the same flow crosses multiple threads, so it can handle async methods.

ASP.NET Web API

On ASP.NET Web API, obtaining the current HTTP context without having to flow it explicitly on every call is typically achieved with the help of the dependency injection container.
For instance, Autofac provides the RegisterHttpRequestMessage extension method on the ContainerBuilder, which allows classes to have HttpRequestMessage constructor dependencies.
This extension method configures a delegating handler that registers the input HttpRequestMessage instance into the current lifetime scope.

ASP.NET Core

ASP.NET Core uses a different approach. The access to the current context is provided via a IHttpContextAccessor service, containing a single HttpContext property with both a getter and a setter. So, instead of directly injecting the context, the solution is based on injecting an accessor to the context.
This apparently superfluous indirection provides one benefit: the accessor can have singleton scope, meaning that it can be injected into singleton components.
Notice that injecting a per HTTP request dependency, such as the request message, directly into another component is only possible if the component has the same lifetime scope.
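
For example, a singleton component could depend on the accessor along these lines (the class below is purely illustrative):

using Microsoft.AspNetCore.Http;

public class RequestPathProvider
{
    private readonly IHttpContextAccessor _accessor;

    public RequestPathProvider(IHttpContextAccessor accessor)
    {
        _accessor = accessor;
    }

    public string GetCurrentPath()
    {
        // The accessor resolves the HttpContext for the current logical call flow,
        // so even a singleton component sees the correct per-request context
        return _accessor.HttpContext?.Request.Path.Value;
    }
}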

In the current ASP.NET Core 1.0.0 implementation, the IHttpContextAccessor service is implemented by the HttpContextAccessor concrete class and must be configured as a singleton.
 

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
}

Notice that this registration is not done by default and must be explicitly performed. If not, any IHttpContextAccessor dependency will result in an activation exception.
On the other hand, no additional configuration is needed to capture the context at the beginning of each request, because this is done automatically.

The following implementation details shed some light on this behavior:

  • Each time a new request starts to be handled, a common IHttpContextFactory reference is used to create the HttpContext. This common reference is obtained by the WebHost during startup and used for all requests.

  • The HttpContextFactory concrete implementation that is used is initialized with an optional IHttpContextAccessor implementation. When available, this accessor is assigned each created context. This means that if an accessor is registered on the services, it will automatically be assigned every created context.

  • How can the same accessor instance hold different contexts, one for each call flow? The answer lies in the HttpContextAccessor concrete implementation and its use of AsyncLocal to store the context separately for each logical call flow. It is this characteristic that allows a singleton scoped accessor to provide request scoped contexts.

To conclude:

  • Everywhere the HTTP context is needed, declare an IHttpContextAccessor dependency and use it to fetch the context.

  • Don’t forget to explicitly register the IHttpContextAccessor interface on the service collection, mapping it to the concrete HttpContextAccessor type.

  • Also, don’t forget to make this registration with singleton scope.



Andrew Lock: Adding Cache-Control headers to Static Files in ASP.NET Core

Adding Cache-Control headers to Static Files in ASP.NET Core

Thanks to the ASP.NET Core middleware pipeline, it is relatively simple to add additional HTTP headers to your application by using custom middleware. One common use case for this is to add caching headers.

Allowing clients and CDNs to cache your content can have a massive effect on your application's performance. By allowing caching, your application never sees these additional requests and never has to allocate resources to process them, so it is more available for requests that cannot be cached.

In most cases you will find that a significant proportion of the requests to your site can be cached. A typical site serves both dynamically generated content (e.g. in ASP.NET Core, the HTML generated by your Razor templates) and static files (CSS stylesheets, JS, images etc). The static files are typically fixed at the time of publish, and so are perfect candidates for caching.

In this post I'll show how you can add headers to the files served by the StaticFileMiddleware to increase your site's performance. I'll also show how you can add a version tag to your file links, to ensure you don't inadvertently serve stale data.

Note that this is not the only way to add cache headers to your site. You can also use the ResponseCacheAttribute in MVC to decorate Controllers and Actions if you are returning data which is safe to cache.

You could also consider adding caching at the reverse proxy level (e.g. in IIS or Nginx), or use a third party provider like CloudFlare.

Adding Caching to the StaticFileMiddleware

When you create a new ASP.NET Core project from the default template, you will find the StaticFileMiddleware is added early in the middleware pipeline, with a call to UseStaticFiles() in Startup.Configure():

public void Configure(IApplicationBuilder app)  
{
    // logging and exception handler removed for clarity

    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

This enables serving files from the wwwroot folder in your application. The default template contains a number of static files (site.css, bootstrap.css, banner1.svg) which are all served by the middleware when running in development mode. It is these we wish to cache.

Don't we get caching by default?

Before we get to adding caching, let's investigate the default behaviour. The first time you load your application, your browser will fetch the default page, and will download all the linked assets. Assuming everything is configured correctly, these should all return a 200 - OK response with the file data:

Adding Cache-Control headers to Static Files in ASP.NET Core

As well as the file data, by default the response header will contain ETag and Last-Modified values:

HTTP/1.1 200 OK  
Date: Sat, 15 Oct 2016 14:15:52 GMT  
Content-Type: image/svg+xml  
Last-Modified: Sat, 15 Oct 2016 13:43:34 GMT  
Accept-Ranges: bytes  
ETag: "1d226ea1f827703"  
Server: Kestrel  

The second time a resource is requested from your site, your browser will send this ETag and Last-Modified value in the header as If-None-Match and If-Modified-Since. This tells the server that it doesn't need to send the data again if the file hasn't changed. If it hasn't changed, the server will send a 304 - Not Modified response, and the browser will use the data it received previously instead.
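
Using the values from the response above, the browser's follow-up request and the server's reply look roughly like this (the request path is just an example):

GET /images/banner1.svg HTTP/1.1  
Host: localhost  
If-None-Match: "1d226ea1f827703"  
If-Modified-Since: Sat, 15 Oct 2016 13:43:34 GMT  

HTTP/1.1 304 Not Modified  
ETag: "1d226ea1f827703"  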

This level of caching comes out-of-the-box with the StaticFileMiddleware, and gives improved performance by reducing the amount of bandwidth required. However it is important to note that the client is still sending a request to your server - the response has just been optimised. This becomes particularly noticeable with high latency connections or pages with many files - the browser still has to wait for the response to come back as 304:

Adding Cache-Control headers to Static Files in ASP.NET Core

The image above uses Chrome's built in network throttling to emulate a GPRS connection with a very large latency of 500ms. You can see that the first Index page loads in 1.59s, after which the remaining static files are requested. Even though they all return 304 responses using only 250 bytes, the page doesn't actually finish loading until an additional 2.5s have passed!

Adding cache headers to static files

Rather than requiring the browser to always check if a file has changed, we now want it to assume that the file is the same, for a predetermined length of time. This is the purpose of the Cache-Control header.

In ASP.NET Core, you can easily add this header when you configure the StaticFileMiddleware:

using Microsoft.Net.Http.Headers;

app.UseStaticFiles(new StaticFileOptions  
{
    OnPrepareResponse = ctx =>
    {
        const int durationInSeconds = 60 * 60 * 24;
        ctx.Context.Response.Headers[HeaderNames.CacheControl] =
            "public,max-age=" + durationInSeconds;
    }
});

One of the overloads of UseStaticFiles takes a StaticFileOptions parameter, which contains the property OnPrepareResponse. This action can be used to specify any additional processing that should occur before a response is sent. It is passed a single parameter, a StaticFileResponseContext, which contains the current HttpContext and also an IFileInfo property representing the current file.

If set, the Action<StaticFileResponseContext> is called before each successful response, whether a 200 or 304 response, but it won't be called if the file was not found (and instead returns a 404).

In the example provided above, we are setting the Cache-Control header (using the constant values defined in Microsoft.Net.Http.Headers) to cache our files for 24 hours. You can read up on the details of the various associated cache headers here. In this case, we marked the response as public as we want intermediate caches between our server and the user to store the cached file too.
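
As an aside, because the StaticFileResponseContext also exposes the file being served, you could vary the header per file if you wanted to. A rough sketch (the durations below are arbitrary):

app.UseStaticFiles(new StaticFileOptions  
{
    OnPrepareResponse = ctx =>
    {
        // Cache CSS and JS for a day, everything else (e.g. images) for a week
        var name = ctx.File.Name;
        var durationInSeconds = name.EndsWith(".css") || name.EndsWith(".js")
            ? 60 * 60 * 24
            : 60 * 60 * 24 * 7;

        ctx.Context.Response.Headers[HeaderNames.CacheControl] =
            "public,max-age=" + durationInSeconds;
    }
});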

If we run our high-latency scenario again, we can see our results in action:

Adding Cache-Control headers to Static Files in ASP.NET Core

Our index page still takes 1.58s to load, but as you can see, all our static files are loaded from the cache, which means no requests to our server, and consequently no latency! We're all done in 1.61s instead of the 4.17s we had previously.

Once the max-age duration we specified has expired, or after the browser evicts the files from its cache, we'll be back to making requests to the server, but until then we can see a massive improvement. What's more, if we use a CDN or there are intermediate cache servers between the user's browser and our server, then they will also be able to serve the cached content, rather than the request having to make it all the way to your server.

Note: Chrome is a bit funny with respect to cache behaviour - if you reload a page using F5 or the Reload button, it will generally not use cached assets. Instead it will pull them down fresh from the server. If you are struggling to see the fruits of your labour, navigate to a different page by clicking a link - you should see the correct caching behaviour then.

Cache busting for file changes

Before we added caching we saw that we return an ETag whenever we serve a static file. This is calculated based on the properties of the file such that if the file changes, the ETag will change. For those interested, this is the snippet of code that is used in ASP.NET Core:

_length = _fileInfo.Length;

DateTimeOffset last = _fileInfo.LastModified;  
// Truncate to the second.
_lastModified = new DateTimeOffset(last.Year, last.Month, last.Day, last.Hour, last.Minute, last.Second, last.Offset).ToUniversalTime();

long etagHash = _lastModified.ToFileTime() ^ _length;  
_etag = new EntityTagHeaderValue('\"' + Convert.ToString(etagHash, 16) + '\"');  

This works great before we add caching - if the ETag hasn't changed we return a 304, otherwise we return a 200 response with the new data.

Unfortunately, once we add caching, we are no longer making a request to the server. The file could have completely changed or have been deleted entirely, but if the browser doesn't ask, the server can't tell them!

One common solution around this is to append a querystring to the url when you reference the static file in your markup. As the browser determines uniqueness of requests including the querystring, it treats https://localhost/css/site.css?v=1 as a different file to https://localhost/css/site.css?v=2. You can use this approach by updating any references to the file in your markup whenever you change the file.

While this works, it requires you to find every reference to your static file anywhere on your site whenever you change the file, so it can be a burden to manage. A simpler technique is to have the querystring be calculated based on the content of the file itself, much like an ETag. That way, when the file changes, the querystring will automatically change.

This is not a new technique - Mads Kristensen describes one method of achieving it with ASP.NET 4.X here but with ASP.NET Core we can use the link, script and image Tag Helpers to do the work for us.

It is highly likely that you are actually already using these tag helpers, as they are used in the default templates for exactly this purpose! For example, in _Layout.cshtml, you will find the following link:

<script src="~/js/site.js" asp-append-version="true"></script>  

The Tag Helper is added with the markup asp-append-version="true" and ensures that the link is rendered with a hash of the file as a querystring:

<script src="/js/site.js?v=Ynfdc1vuMNOWZfqTj4N3SPcebazoGXiIPgtfE-b2TO4"></script>  

If the file changes, the SHA256 hash will also change, and the cache will be automatically bypassed! You can add this Tag Helper to img, script and link elements, though there is obviously a degree of overhead as a hash of the file has to be calculated on first request. For files which are very unlikely to ever change (e.g. some images) it may not be worth the overhead to add the helper, but for others it will no doubt prevent quirky behaviour once you add caching!

Summary

In this post we saw the built in caching using ETags provided out of the box with the StaticFileMiddleware. I then showed how to add caching to the requests to prevent unnecessary requests to the server. Finally, I showed how to break out of the cache when the file changes, by using Tag Helpers to add a version querystring to the file request.


Damien Bowden: Angular2 search with ASP.NET Core and Elasticsearch

This article shows how a website search could be implemented using Angular 2, ASP.NET Core and Elasticsearch. Most users expect autocomplete and a flexible search like the well-known search websites. When the user enters a char in the search input field, an autocomplete request using a shingle token filter with a terms aggregation is used to suggest possible search terms. When a term is selected, a match query request is sent which uses an edge ngram indexed field to search for hits or matches. Server side paging is then implemented to iterate through the results.

Code: https://github.com/damienbod/Angular2AutoCompleteAspNetCoreElasticsearch

ASP.NET Core server side search

The Elasticsearch index and queries were built using the ideas from these 2 excellent blogs, bilyachat and qbox.io. ElasticsearchCrud is used as the dotnet core client for Elasticsearch. To set up the index, a mapping needs to be defined as well as the index itself, with the required analysis settings: filters, analyzers and tokenizers. See the Elasticsearch documentation for detailed information.

In this example, 2 custom analyzers are defined, one for the autocomplete and one for the search. The autocomplete analyzer uses a custom shingle token filter called autocompletefilter, a stopwords token filter, lowercase token filter and a stemmer token filter. The edge_ngram_search analyzer uses an edge ngram token filter and a lowercase filter.

private IndexDefinition CreateNewIndexDefinition()
{
	return new IndexDefinition
	{
		IndexSettings =
		{
			Analysis = new Analysis
			{
				Filters =
				{
					CustomFilters = new List<AnalysisFilterBase>
					{
						new StemmerTokenFilter("stemmer"),
						new ShingleTokenFilter("autocompletefilter")
						{
							MaxShingleSize = 5,
							MinShingleSize = 2
						},
						new StopTokenFilter("stopwords"),
						new EdgeNGramTokenFilter("edge_ngram_filter")
						{
							MaxGram = 20,
							MinGram = 2
						}
					}
				},
				Analyzer =
				{
					Analyzers = new List<AnalyzerBase>
					{
						new CustomAnalyzer("edge_ngram_search")
						{
							Tokenizer = DefaultTokenizers.Standard,
							Filter = new List<string> {DefaultTokenFilters.Lowercase, "edge_ngram_filter"},
							CharFilter = new List<string> {DefaultCharFilters.HtmlStrip}
						},
						new CustomAnalyzer("autocomplete")
						{
							Tokenizer = DefaultTokenizers.Standard,
							Filter = new List<string> {DefaultTokenFilters.Lowercase, "autocompletefilter", "stopwords", "stemmer"},
							CharFilter = new List<string> {DefaultCharFilters.HtmlStrip}
						},
						new CustomAnalyzer("default")
						{
							Tokenizer = DefaultTokenizers.Standard,
							Filter = new List<string> {DefaultTokenFilters.Lowercase, "stopwords", "stemmer"},
							CharFilter = new List<string> {DefaultCharFilters.HtmlStrip}
						}
						
					   
					}
				}
			}
		},
	};
}

The PersonCity class is used to add and search for documents in Elasticsearch. The default index and type for this class using ElasticsearchCrud are personcitys and personcity respectively.

public class PersonCity
{
	public long Id { get; set; }
	public string Name { get; set; }
	public string FamilyName { get; set; }
	public string Info { get; set; }
	public string CityCountry { get; set; }
	public string Metadata { get; set; }
	public string Web { get; set; }
	public string Github { get; set; }
	public string Twitter { get; set; }
	public string Mvp { get; set; }
}

A PersonCityMapping class is defined so that the required mapping from the PersonCityMappingDto mapping class can be applied to the personcitys index and the personcity type. This class overrides ElasticsearchMapping to define the index and type.

using System;
using ElasticsearchCRUD;

namespace SearchComponent
{
    public class PersonCityMapping : ElasticsearchMapping
    {
        public override string GetIndexForType(Type type)
        {
            return "personcitys";
        }

        public override string GetDocumentType(Type type)
        {
            return "personcity";
        }
    }
}

The PersonCityMapping class is then used to map the C# type PersonCityMappingDto to the default index and type of the PersonCity class.

public PersonCitySearchProvider()
{
	_elasticsearchMappingResolver.AddElasticSearchMappingForEntityType(typeof(PersonCityMappingDto), new PersonCityMapping());
	_context = new ElasticsearchContext(ConnectionString, new ElasticsearchSerializerConfiguration(_elasticsearchMappingResolver))
	{
		TraceProvider = new ConsoleTraceProvider()
	};
}

A specific mapping DTO class is used to define the mapping in Elasticsearch. This class is required if a non-default mapping is needed in Elasticsearch. The class uses the ElasticsearchString attribute to define a copy mapping: the field in Elasticsearch is copied to the autocomplete and searchfield fields when adding a new document. The searchfield and autocomplete fields use the two analyzers which were defined in the index settings. This class is only used to define the type mapping in Elasticsearch.

using ElasticsearchCRUD.ContextAddDeleteUpdate.CoreTypeAttributes;

namespace SearchComponent
{
    public class PersonCityMappingDto
    {
        public long Id { get; set; }

        [ElasticsearchString(CopyToList = new[] { "autocomplete", "searchfield" })]
        public string Name { get; set; }

        [ElasticsearchString(CopyToList = new[] { "autocomplete", "searchfield" })]
        public string FamilyName { get; set; }

        [ElasticsearchString(CopyToList = new[] { "autocomplete", "searchfield" })]
        public string Info { get; set; }

        [ElasticsearchString(CopyToList = new[] { "autocomplete", "searchfield" })]
        public string CityCountry { get; set; }

        [ElasticsearchString(CopyToList = new[] { "autocomplete", "searchfield" })]
        public string Metadata { get; set; }

        public string Web { get; set; }

        public string Github { get; set; }

        public string Twitter { get; set; }

        public string Mvp { get; set; }

        [ElasticsearchString(Analyzer = "edge_ngram_search", SearchAnalyzer = "standard", TermVector = TermVector.yes)]
        public string searchfield { get; set; }

        [ElasticsearchString(Analyzer = "autocomplete")]
        public string autocomplete { get; set; }
    }
}

The IndexCreate method on the context creates a new index and mapping in Elasticsearch.

public void CreateIndex()
{			
	_context.IndexCreate<PersonCityMappingDto>(CreateNewIndexDefinition());
}

The Elasticsearch settings can be viewed using the HTTP GET:

http://localhost:9200/_settings

{
	"personcitys": {
		"settings": {
			"index": {
				"creation_date": "1477642409728",
				"analysis": {
					"filter": {
						"stemmer": {
							"type": "stemmer"
						},
						"autocompletefilter": {
							"max_shingle_size": "5",
							"min_shingle_size": "2",
							"type": "shingle"
						},
						"stopwords": {
							"type": "stop"
						},
						"edge_ngram_filter": {
							"type": "edgeNGram",
							"min_gram": "2",
							"max_gram": "20"
						}
					},
					"analyzer": {
						"edge_ngram_search": {
							"filter": ["lowercase",
							"edge_ngram_filter"],
							"char_filter": ["html_strip"],
							"type": "custom",
							"tokenizer": "standard"
						},
						"autocomplete": {
							"filter": ["lowercase",
							"autocompletefilter",
							"stopwords",
							"stemmer"],
							"char_filter": ["html_strip"],
							"type": "custom",
							"tokenizer": "standard"
						},
						"default": {
							"filter": ["lowercase",
							"stopwords",
							"stemmer"],
							"char_filter": ["html_strip"],
							"type": "custom",
							"tokenizer": "standard"
						}
					}
				},
				"number_of_shards": "5",
				"number_of_replicas": "1",
				"uuid": "TxS9hdy7SmGPr4FSSNaPiQ",
				"version": {
					"created": "2040199"
				}
			}
		}
	}
}

The Elasticsearch mapping can be viewed using the HTTP GET:

http://localhost:9200/_mapping

{
	"personcitys": {
		"mappings": {
			"personcity": {
				"properties": {
					"autocomplete": {
						"type": "string",
						"analyzer": "autocomplete"
					},
					"citycountry": {
						"type": "string",
						"copy_to": ["autocomplete",
						"searchfield"]
					},
					"familyname": {
						"type": "string",
						"copy_to": ["autocomplete",
						"searchfield"]
					},
					"github": {
						"type": "string"
					},
					"id": {
						"type": "long"
					},
					"info": {
						"type": "string",
						"copy_to": ["autocomplete",
						"searchfield"]
					},
					"metadata": {
						"type": "string",
						"copy_to": ["autocomplete",
						"searchfield"]
					},
					"mvp": {
						"type": "string"
					},
					"name": {
						"type": "string",
						"copy_to": ["autocomplete",
						"searchfield"]
					},
					"searchfield": {
						"type": "string",
						"term_vector": "yes",
						"analyzer": "edge_ngram_search",
						"search_analyzer": "standard"
					},
					"twitter": {
						"type": "string"
					},
					"web": {
						"type": "string"
					}
				}
			}
		}
	}
}

Now documents can be added using the PersonCity class which has no Elasticsearch definitions.
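
For example, indexing a document through the same ElasticsearchContext might look something like this (a sketch only; the wrapping method is an assumption):

public void AddDocument(PersonCity personCity)
{
	// AddUpdateDocument queues the document against its id; SaveChanges sends the request
	_context.AddUpdateDocument(personCity, personCity.Id);
	_context.SaveChanges();
}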

Autocomplete search

A terms aggregation search is used for the autocomplete request. The terms aggregation uses the autocomplete field which only exists in Elasticsearch. A list of strings is returned to the user from this request.

public IEnumerable<string> AutocompleteSearch(string term)
{
	var search = new Search
	{
		Size = 0,
		Aggs = new List<IAggs>
		{
			new TermsBucketAggregation("autocomplete", "autocomplete")
			{
				Order= new OrderAgg("_count", OrderEnum.desc),
				Include = new IncludeExpression(term + ".*")
			}
		}
	};

	var items = _context.Search<PersonCity>(search);
	var aggResult = items.PayloadResult.Aggregations.GetComplexValue<TermsBucketAggregationsResult>("autocomplete");
	IEnumerable<string> results = aggResult.Buckets.Select(t =>  t.Key.ToString());
	return results;
}

The request is sent to Elasticsearch as follows:

POST http://localhost:9200/personcitys/personcity/_search HTTP/1.1
Content-Type: application/json
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
Content-Length: 124
Host: localhost:9200

{
	"size": 0,
	"aggs": {
		"autocomplete": {
			"terms": {
				"field": "autocomplete",
				"order": {
					"_count": "desc"
				},
				"include": {
					"pattern": "as.*"
				}
			}
		}
	}
}

Search Query

When an autocomplete string is selected, a search request is sent to Elasticsearch using a Match Query on the searchfield field, which returns 10 hits starting from document 0. If a paging request is sent, the from value is a multiple of 10 depending on the page.

public PersonCitySearchResult Search(string term, int from)
{
	var personCitySearchResult = new PersonCitySearchResult();
	var search = new Search
	{
		Size = 10,
		From = from,
		Query = new Query(new MatchQuery("searchfield", term))
	};

	var results = _context.Search<PersonCity>(search);

	personCitySearchResult.PersonCities = results.PayloadResult.Hits.HitsResult.Select(t => t.Source);
	personCitySearchResult.Hits = results.PayloadResult.Hits.Total;
	personCitySearchResult.Took = results.PayloadResult.Took;
	return personCitySearchResult;
}

The search query is sent as follows:

POST http://localhost:9200/personcitys/personcity/_search HTTP/1.1
Content-Type: application/json
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
Content-Length: 74
Host: localhost:9200

{
	"from": 0,
	"size": 10,
	"query": {
		"match": {
			"searchfield": {
				"query": "asp.net"
			}
		}
	}
}
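
These provider methods are exposed to the Angular client through an ASP.NET Core MVC controller; a rough sketch of how that wiring could look is shown below (the route and action names here are assumptions, not necessarily those used in the sample repository):

[Route("api/[controller]")]
public class PersonCityController : Controller
{
    private readonly PersonCitySearchProvider _searchProvider = new PersonCitySearchProvider();

    // Returns suggestion strings for the autocomplete input
    [HttpGet("autocomplete/{term}")]
    public IActionResult Autocomplete(string term)
    {
        return Ok(_searchProvider.AutocompleteSearch(term));
    }

    // Returns a page of search hits, starting at the 'from' offset
    [HttpGet("search/{term}/{from}")]
    public IActionResult Search(string term, int from)
    {
        return Ok(_searchProvider.Search(term, from));
    }
}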

Angular 2 client side search

The Angular 2 client uses an autocomplete input control and then uses an ngFor to display all the search results. Bootstrap paging is used if more than 10 results are found for the search term.

<div class="panel-group">

    <personcitysearch 
      *ngIf="IndexExists"
      (onTermSelectedEvent)="onTermSelectedEvent($event)"
      [disableAutocomplete]="!IndexExists">
    </personcitysearch>
    
    <em *ngIf="PersonCitySearchData.took > 0" style="font-size:smaller; color:lightgray;">
      <span>Hits: {{PersonCitySearchData.hits}}</span>
    </em><br /> 
    <br />

    <div  *ngFor="let personCity of PersonCitySearchData.personCities">  
        <b><span>{{personCity.name}} {{personCity.familyName}} </span></b> 
        <a *ngIf="personCity.twitter"  href="{{personCity.twitter}}">
          <img src="assets/socialTwitter.png" />
        </a>
        <a *ngIf="personCity.github" href="{{personCity.github}}">
          <img src="assets/github.png" />
        </a>
        <a *ngIf="personCity.mvp" href="{{personCity.mvp}}">
          <img src="assets/mvp.png" width="24" />
        </a><br />

        <em style="font-size:large"><a href="{{personCity.web}}">{{personCity.web}}</a></em><br />  
        <em><span>{{personCity.metadata}}</span></em><br />      
        <span>{{personCity.info}}</span><br />
        <br />
        <br />

    </div>

    <ul class="pagination" *ngIf="ShowPaging">
        <li><a (click)="PreviousPage()" >&laquo;</a></li>

        <li><a *ngFor="let page of Pages" (click)="LoadDataForPage(page)">{{page}}</a></li>

        <li><a (click)="NextPage()">&raquo;</a></li>
    </ul>
</div>

The personcitysearch Angular 2 component implements the autocomplete functionality using the ng2-completer component. When a char is entered into the input, an HTTP request is sent to the server, which in turn sends a request to the Elasticsearch server.

import { Component, Inject, EventEmitter, Input, Output, OnInit, AfterViewInit, ElementRef } from '@angular/core';
import { Http, Response } from "@angular/http";

import { Subscription } from 'rxjs/Subscription';
import { Observable } from 'rxjs/Observable';
import { Router } from  '@angular/router';

import { Configuration } from '../app.constants';
import { PersoncityautocompleteDataService } from './personcityautocompleteService';
import { PersonCity } from '../model/personCity';

import { CompleterService, CompleterItem } from 'ng2-completer';

import './personcityautocomplete.component.scss';

@Component({
    selector: 'personcityautocomplete',
  template: `
<ng2-completer [dataService]="dataService" (selected)="onPersonCitySelected($event)" [minSearchLength]="0" [disableInput]="disableAutocomplete"></ng2-completer>

`
})
    
export class PersoncityautocompleteComponent implements OnInit    {

    constructor(private completerService: CompleterService, private http: Http, private _configuration: Configuration) {

        this.dataService = new PersoncityautocompleteDataService(http, _configuration); ////completerService.local("name, info, familyName", 'name');
    }

    @Output() bindModelPersonCityChange = new EventEmitter<PersonCity>();
    @Input() bindModelPersonCity: PersonCity;
    @Input() disableAutocomplete: boolean = false;

    private searchStr: string;
    private dataService: PersoncityautocompleteDataService;

    ngOnInit() {
        console.log("ngOnInit PersoncityautocompleteComponent");
    }

    public onPersonCitySelected(selected: CompleterItem) {
        console.log(selected);
        this.bindModelPersonCityChange.emit(selected.originalObject);
    }
}

And the data service for the CompleterService which is used by the ng2-completer component:

import { Http, Response } from "@angular/http";
import { Subject } from "rxjs/Subject";

import { CompleterData, CompleterItem } from 'ng2-completer';
import { Configuration } from '../app.constants';

export class PersoncityautocompleteDataService extends Subject<CompleterItem[]> implements CompleterData {
    constructor(private http: Http, private _configuration: Configuration) {
        super();

        this.actionUrl = _configuration.Server + 'api/personcity/querystringsearch/';
    }

    private actionUrl: string;

    public search(term: string): void {
        this.http.get(this.actionUrl + term)
            .map((res: Response) => {
                // Convert the result to CompleterItem[]
                let data = res.json();
                let matches: CompleterItem[] = data.map((personcity: any) => {
                    return {
                        title: personcity.name,
                        description: personcity.familyName + ", " + personcity.cityCountry,
                        originalObject: personcity
                    }
                });
                this.next(matches);
            })
            .subscribe();
    }

    public cancel() {
        // Handle cancel
    }
}

The HomeSearchComponent implements the paging for the search results and also displays the data. The SearchDataService implements the API calls to the MVC ASP.NET Core API service. The paging CSS uses Bootstrap to display the data.

import { Observable } from 'rxjs/Observable';
import { Component, OnInit } from '@angular/core';
import { Http } from '@angular/http';

import { SearchDataService } from '../services/searchDataService';
import { PersonCity } from '../model/personCity';
import { PersonCitySearchResult } from '../model/personCitySearchResult';
import { PersoncitysearchComponent } from '../personcitysearch/personcitysearch.component';

@Component({
    selector: 'homesearchcomponent',
    templateUrl: 'homesearch.component.html',
    providers: [SearchDataService]
})

export class HomeSearchComponent implements OnInit {

    public message: string;
    public PersonCitySearchData: PersonCitySearchResult;
    public SelectedTerm: string;
    public IndexExists: boolean = false;

    constructor(private _dataService: SearchDataService, private _personcitysearchComponent: PersoncitysearchComponent) {
        this.message = "Hello from HomeSearchComponent constructor";
        this.SelectedTerm = "none";
        this.PersonCitySearchData = new PersonCitySearchResult();
    }

    public onTermSelectedEvent(term: string) {
        this.SelectedTerm = term; 
        this.findDataForSearchTerm(term, 0)
    }

    private findDataForSearchTerm(term: string, from: number) {
        console.log("findDataForSearchTerm:" + term);
        this._dataService.FindAllForTerm(term, from)
            .subscribe((data) => {
                console.log(data)
                this.PersonCitySearchData = data;
                this.configurePagingDisplay(this.PersonCitySearchData.hits);
            },
            error => console.log(error),
            () => {
                console.log('PersonCitySearch:findDataForSearchTerm completed');
            }
            );
    }

    ngOnInit() {
        this._dataService
            .IndexExists()
            .subscribe(data => this.IndexExists = data,
            error => console.log(error),
            () => console.log('Get IndexExists complete'));
    }

    public ShowPaging: boolean = false;
    public CurrentPage: number = 0;
    public TotalHits: number = 0;
    public PagesCount: number = 0;
    public Pages: number[] = [];

    public LoadDataForPage(page: number) {
        var from = page * 10;
        this.findDataForSearchTerm(this.SelectedTerm, from)
        this.CurrentPage = page;
    }

    public NextPage() {
        var page = this.CurrentPage;
        console.log("TotalHits" + this.TotalHits + "NextPage: " + ((this.CurrentPage + 1) * 10) + "CurrentPage" + this.CurrentPage );

        if (this.TotalHits > ((this.CurrentPage + 1) * 10)) {
            page = this.CurrentPage + 1;
        }

        this.LoadDataForPage(page);
    }

    public PreviousPage() {
        var page = this.CurrentPage;

        if (this.CurrentPage > 0) {
            page = this.CurrentPage - 1;
        }

        this.LoadDataForPage(page);
    }

    private configurePagingDisplay(hits: number) {
        this.PagesCount = Math.floor(hits / 10);

        this.Pages = [];
        for (let i = 0; i <= this.PagesCount; i++) {
            this.Pages.push((i));
        }
        
        this.TotalHits = hits;

        if (this.PagesCount <= 1) {
            this.ShowPaging = false;
        } else {
            this.ShowPaging = true;
        }
    }
}

Now when characters are entered into the search input, records are searched for and returned with the number of hits for the term.

searchaspnetcoreangular2_01

The paging can also be used to page through the results on the server side.

searchaspnetcoreangular2_02

The search functions like the web searches we have come to expect. If different results or searches are required, the server-side index creation and query types can be changed as needed. For example, the autocomplete suggestions could be replaced with a fuzzy search, or a query string search.

Links:

https://github.com/oferh/ng2-completer

https://github.com/damienbod/Angular2WebpackVisualStudio

https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html

https://www.elastic.co/products/elasticsearch

https://www.nuget.org/packages/ElasticsearchCRUD/

https://github.com/damienbod/ElasticsearchCRUD

http://www.bilyachat.com/2015/07/search-like-google-with-elasticsearch.html

http://stackoverflow.com/questions/29753971/elasticsearch-completion-suggest-search-with-multiple-word-inputs

http://rea.tech/implementing-autosuggest-in-elasticsearch/

https://qbox.io/blog/an-introduction-to-ngrams-in-elasticsearch

https://www.elastic.co/guide/en/elasticsearch/guide/current/_ngrams_for_partial_matching.html



Andrew Lock: Accessing services when configuring MvcOptions in ASP.NET Core

Accessing services when configuring MvcOptions in ASP.NET Core

This post is a follow on to an article by Steve Gordon I read the other day on how to HTML encode deserialized JSON content from a request body. It's an interesting post, but it got me thinking about a tangential issue - using injected services when configuring MvcOptions.

The setting - Steve's post in brief

I recommend you read Steve's post first, but the key points to this discussion are described below.

Steve wanted to ensure that HTML POSTed inside a JSON string property was automatically HTML encoded, so that potentially malicious script couldn't be stored in the database. This wouldn't necessarily be something you'd always want to do, but it worked for his use case. It ensured that a string such as

{
  "text": "<script>alert('got you!')</script>" 
}

was automatically converted to

{
  "text": "&lt;script&gt;alert(&#x27;got you!&#x27;)&lt;/script&gt;" 
}

by the time it was received in an Action method. He describes creating a custom ContractResolver and ValueProvider to override the CreateProperties method and automatically encode any string properties.

The section I am interested in is where he wires up his new resolver and provider using a small extension method UseHtmlEncodeJsonInputFormatter. This requires providing a number of services in order to correctly create the JsonInputFormatter. I have reproduced his extension method below:

public static class MvcOptionsExtensions  
{
    public static void UseHtmlEncodeJsonInputFormatter(this MvcOptions opts, ILogger<MvcOptions> logger, ObjectPoolProvider objectPoolProvider)
    {
        opts.InputFormatters.RemoveType<JsonInputFormatter>();

        var serializerSettings = new JsonSerializerSettings
        {
            ContractResolver = new HtmlEncodeContractResolver()
        };

        var jsonInputFormatter = new JsonInputFormatter(logger, serializerSettings, ArrayPool<char>.Shared, objectPoolProvider);

        opts.InputFormatters.Add(jsonInputFormatter);
    }
}

For the full details of this method, check out his post. For our discussion, all that's necessary is to appreciate that we are modifying the MvcOptions by adding a new JsonInputFormatter, and that to do so we need instances of an ILogger<T> and ObjectPoolProvider.

The need for these services is a little problematic - we will be calling this extension method when we are first configuring MVC, within the ConfigureServices method, but at that point, we don't have an easy method of accessing other configured services.

The approach Steve used was to build a service provider, and then create the required services using it, as shown below:

public void ConfigureServices(IServiceCollection services)  
{
    var sp = services.BuildServiceProvider();
    var logger = sp.GetService<ILoggerFactory>();
    var objectPoolProvider = sp.GetService<ObjectPoolProvider>();

    services
        .AddMvc(options =>
        {
            options.UseHtmlEncodeJsonInputFormatter(
                logger.CreateLogger<MvcOptions>(), 
                objectPoolProvider);
        });
}

This approach works, but it's not the cleanest, and luckily there's a handy alternative!

What does AddMvc actually do?

Before I get into the cleaned up approach, I just want to take a quick diversion into what the AddMvc method does. In particular, I'm interested in the overload that takes an Action<MvcOptions> setup action.

Taking a look at the source code, you can see that it is actually pretty simple:

public static IMvcBuilder AddMvc(this IServiceCollection services, Action<MvcOptions> setupAction)  
{
    // precondition checks removed for brevity
    var builder = services.AddMvc();
    builder.Services.Configure(setupAction);

    return builder;
}

This overload calls AddMvc() without an action, which returns an IMvcBuilder. We then call Configure with the Action<> to configure an instance of MvcOptions.

ConfigureOptions to the rescue!

When I saw the Configure call, I immediately thought of a post I wrote previously, about using ConfigureOptions to inject services when configuring IOptions implementations.

Using this technique, we can avoid having to call BuildServiceProvider inside the ConfigureServices method, and can leverage dependency injection instead by creating an instance of IConfigureOptions<MvcOptions>.

Implementing the interface is simply a case of calling our already defined extension method, from within the required Configure method:

public class ConfigureMvcOptions : IConfigureOptions<MvcOptions>  
{
    private readonly ILogger<MvcOptions> _logger;
    private readonly ObjectPoolProvider _objectPoolProvider;
    public ConfigureMvcOptions(ILogger<MvcOptions> logger, ObjectPoolProvider objectPoolProvider)
    {
        _logger = logger;
        _objectPoolProvider = objectPoolProvider;
    }

    public void Configure(MvcOptions options)
    {
        options.UseHtmlEncodeJsonInputFormatter(_logger, _objectPoolProvider);
    }
}

We can then update our configuration method to use the basic AddMvc() method and inject our new configuration class:

public void ConfigureServices(IServiceCollection services)  
{
    // Add framework services.
    services.AddMvc();
    services.AddSingleton<IConfigureOptions<MvcOptions>, ConfigureMvcOptions>();
}

With this configuration in place, we have the same behaviour as before, just with some nicer wiring in the Startup class! For a more detailed explanation of why this works, check out my previous post.

Summary

This post was a short follow-up to a post by Steve Gordon in which he showed how to create a custom JsonInputFormatter. I showed how you can use IConfigureOptions<> to use dependency injection when adding MvcOptions as part of your MVC configuration.


Andrew Lock: Resource-based authorisation in ASP.NET Core

Resource-based authorisation in ASP.NET Core

In this next post on authorisation in ASP.NET Core, we look at how you can secure a resource based on properties of the resource itself.

In a previous post, we saw how you could create a policy that protects a resource based on properties of the user trying to access it. We used Claims-based identity to verify whether they had the appropriate claim values, and granted or denied access based on these as appropriate.

In some cases, it may not be possible to decide whether access is appropriate based on the current user's claims alone. For example, we may allow users to edit documents that they created, but only access a read-only view of documents created by others. In that case, we not only need an authenticated user, we also need to know who created the document we are attempting to access.

In this post I'll show how we can use the AuthorisationService to take into account the resource we are accessing when determining if a user is authorised to access it.

Previous posts in the authentication/authorisation series:

Resource-based Authorisation

As an example, we will consider the authorisation policy from a previous post, "CanAccessVIPArea", in which we created a policy using a custom AuthorizationRequirement with multiple AuthorizationHandlers to determine if you were allowed access to the protected action.

One of the handlers we created was to satisfy the requirement that employees of the airline which owns the lounge are allowed to use the VIP area. In order to verify this, we had to provide a fixed airline name as a string to our policy when it was initially configured:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMvc();

    services.AddAuthorization(options =>
    {
        options.AddPolicy(
            "CanAccessVIPArea",
            policyBuilder => policyBuilder.AddRequirements(
                new IsVipRequirement("British Airways")));
    });
}

An obvious problem with this is that our policy only works for a single airline. We now need a separate policy for each new airline, and the 'Lounge' method must be secured by the correct policy. This may be acceptable if there are not many airlines, but it is an obvious source of potential errors.

Instead, it seems like a better solution would be to take into consideration the Lounge that is being accessed when determining whether a particular employee can access it. This is a perfect use case for resource-based authorisation.

Defining the resource

First of all, we will need to define the 'Lounge' resource that we are attempting to protect:

public class Lounge  
{
    public string AirlineName {get; set;}
    public bool IsOpen {get; set;}
    public int SeatingCapacity {get; set;}
    public int NumberofOccupants {get; set;}
}

This is a fairly self-explanatory example - the Lounge belongs to the single airline defined in AirlineName.

Authorising using IAuthorisationService

Now we have a resource, we need some way of passing it to the authorisation handlers. Previously, we decorated our Controllers and Actions with [Authorize("CanAccessVIPArea")] to declaratively authorise the Action being executed. Unfortunately, we have no way of passing a Lounge object to the AuthorizeAttribute. Instead, we will use imperative authorisation by calling the IAuthorisationService directly.

In the previous post on UI modification I showed how you can inject the IAuthorisationService into your Views, to dynamically authorise a User for the purpose of hiding inaccessible UI elements. We can use a similar technique in our controllers whenever we need to do resource-based authorisation:

public class VIPLoungeController : Controller  
{
    private readonly IAuthorizationService _authorizationService;

    public VIPLoungeController(IAuthorizationService authorizationService)
    {
        _authorizationService = authorizationService;
    }

    [HttpGet]
    public async Task<IActionResult> ViewTheFancySeatsInTheLounge(int loungeId)
    {
       // get the lounge object from somewhere
       var lounge = LoungeRepository.Find(loungeId);

       if (await _authorizationService.AuthorizeAsync(User, lounge, "CanAccessVIPArea"))
       {
           return View();
       }
       else
       {
           return new ChallengeResult();
       }
    }
}

We use dependency injection to inject an instance of the IAuthorizationService into our controller for use in our action method. Next we obtain an instance of our resource from somewhere (e.g. loaded from the database based on an id) and provide the Lounge object as a parameter to AuthorizeAsync, along with the policy we wish to apply. If the authorisation is successful, we display the View, otherwise we return a ChallengeResult. The ChallengeResult can return a 401 or 403 response, depending on the authentication state of the user, which in turn may be captured further down the pipeline and turned into a 302 redirect to the login page. For more details on authentication, check out my previous posts.

Note that we are no longer using the AuthorizeAttribute on our method; the authorisation is a part of the execution of our action, rather than occurring before it can run.

Updating the AuthorizationPolicy

Seeing as how we have switched to resource-based authorisation, we no longer need to define an Airline name on our AuthorizationRequirement, or when we configure the policy. We can simplify our requirement to being a simple marker class:

public class IsVipRequirement : IAuthorizationRequirement  { }  

and update our policy definition accordingly:

services.AddAuthorization(options =>  
    {
        options.AddPolicy(
            "CanAccessVIPArea",
            policyBuilder => policyBuilder.AddRequirements(
                new IsVipRequirement()));
    });

Resource-based Authorisation Handlers

The last things we need to update are our AuthorizationHandlers, which can now make use of the provided resource. The only handler from the previous post that needs updating is the IsAirlineEmployeeAuthorizationHandler, which we can now modify to use the AirlineName defined on our Lounge object, instead of being hardcoded to the AuthorizationRequirement at startup:

public class IsAirlineEmployeeAuthorizationHandler : AuthorizationHandler<IsVipRequirement, Lounge>  
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context, 
        IsVipRequirement requirement,
        Lounge lounge)
    {
        if (context.User.HasClaim(claim =>
            claim.Type == "EmployeeNumber" && claim.Issuer == lounge.AirlineName))
        {
            context.Succeed(requirement);
        }
        return Task.FromResult(0);
    }
}

Two things have changed here from our previous implementation. First, we are inheriting from AuthorizationHandler<IsVipRequirement, Lounge>, instead of AuthorizationHandler<IsVipRequirement>. This handles extracting the provided resource from the authorisation context. Secondly, the HandleRequirementAsync method now takes a Lounge parameter, which the base AuthorizationHandler<,> automatically provides from the context. We are then free to use the resource in our handler to authorise the employee.

Now we have access to the resource object, we could also add handlers to check whether the Lounge is currently open, and whether it has reached seating capacity, but I'll leave that as an exercise for the dedicated!
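
A sketch of one such handler, reusing the pattern above, might look like the following (the IsLoungeOpenAuthorizationHandler name and the decision to call Fail() when the lounge is closed are illustrative assumptions, not part of the original series):

public class IsLoungeOpenAuthorizationHandler : AuthorizationHandler<IsVipRequirement, Lounge>  
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context,
        IsVipRequirement requirement,
        Lounge lounge)
    {
        // An explicit Fail() means no other handler can authorise access
        // while the lounge is closed.
        if (!lounge.IsOpen)
        {
            context.Fail();
        }
        return Task.FromResult(0);
    }
}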

When to use resource-based authorisation?

Now you have seen two techniques for performing authorisation - declarative, attribute based authorisation, and imperative, IAuthorisationService based authorisation - you may be wondering which approach to use and when. I think the simple answer is really to only use resource-based authorisation when you have to.

In our case, with the attribute-based approach, we had to hard-code the name of each airline into our AuthorizationRequirement, which was not very scalable and in practice meant that we couldn't correctly protect our endpoint. In this case resource-based authorisation was pretty much required.

However, moving to the resource-based approach has some downsides. The code in our Action is more complicated and it is less obvious what it is doing. Also, when you use an AuthorizeAttribute, the authorisation code is run right at the beginning of the pipeline, before all other filters and model binding occurs. In contrast, the whole of the pipeline runs when using resource-based auth, even if it turns out the user is ultimately not authorised. This may not be a problem, but it is something to bear in mind and be aware of when choosing your approach.

In general, your application will probably need to use both techniques, it is just a matter of choosing the correct one for each instance.

Summary

In this post I updated an existing authorisation example to use resource-based authorisation. I showed how to call the IAuthorisationService to perform authorisation based on a document or resource that is being protected. Finally I updated an AuthorizationHandler to derive from the generic AuthorizationHandler<,> to access the resource at runtime.


Pedro Félix: Should I PUT or should I POST? (Darling you gotta let me know)

(yes, it doesn’t rhyme, but I couldn’t resist the association)

Selecting the proper methods (e.g. GET, POST, PUT, …) to use when designing HTTP-based APIs is typically a subject of much debate, and occasionally some bike-shedding. In this post I briefly present the rules that I normally follow when presented with this design task.

Don’t go against the HTTP specification

First and foremost, make sure the properties of the chosen methods aren’t violated on the scenario under analysis. The typical offender is using GET for an interaction that requests a state change on the server.
This is because GET is defined to have the safe property, defined as

Request methods are considered “safe” if their defined semantics are essentially read-only; i.e., the client does not request, and does not expect, any state change on the origin server as a result of applying a safe method to a target resource.

Another example is choosing PUT for requests that aren’t idempotent, such as appending an item to a collection.
The idempotent property is defined by RFC 7231 as

A request method is considered “idempotent” if the intended effect on the server of multiple identical requests with that method is the same as the effect for a single such request.

Violating these properties is harmful because there may exist system components whose correct behavior depends on them being true. An example is a crawler program that freely follows all GET links in a document, assuming that no state change will be performed by these requests, and that ends up changing the system state.

Another example is an intermediary (e.g. reverse proxy) that automatically retries any failed PUT request (e.g. timeout), assuming they are idempotent. If the PUT is appending items to a collection (append is not idempotent), and the first PUT request was successfully performed and only the response message was lost, then the retry will end up adding two replicated items to the collection.

This violation can also have security implications. For instance, most server frameworks don’t protect GET requests against CSRF (Cross-Site Request Forgery) because this method is not supposed to change state and reads are already protected by the same-origin browser policy.

Take advantage of the method properties

After addressing the correctness concerns, i.e. ensuring requests don’t violate any property of the chosen methods, we can reverse our analysis and check whether there is a method that best fits the intended functionality. Having ensured correctness, at this stage our main concern is optimization.

For instance, if a request defines the complete state for a resource and is idempotent, perhaps a PUT is a better fit than a POST. This is not because a POST would produce incorrect behavior, but because using a PUT may induce better system properties. For instance, an intermediary (e.g. a reverse proxy or framework middleware) may automatically retry failed requests, and thereby provide some fault recovery.
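
To make this concrete in ASP.NET Core terms, here is a minimal sketch (the DocumentsController, Document type and in-memory store are hypothetical, invented purely for illustration): the full-state replacement is exposed as an idempotent PUT, while appending to the collection is exposed as a POST.

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

public class Document
{
    public string Title { get; set; }
    public string Body { get; set; }
}

[Route("api/documents")]
public class DocumentsController : Controller
{
    // A simple in-memory store, purely for illustration.
    private static readonly Dictionary<int, Document> Store = new Dictionary<int, Document>();

    // PUT api/documents/42 replaces the complete state of document 42.
    // Repeating the request yields the same final state, so an intermediary
    // could safely retry it on failure.
    [HttpPut("{id}")]
    public IActionResult Replace(int id, [FromBody] Document document)
    {
        Store[id] = document;
        return NoContent();
    }

    // POST api/documents appends a new document to the collection.
    // Repeating the request creates an additional document, so it is not idempotent.
    [HttpPost]
    public IActionResult Append([FromBody] Document document)
    {
        var id = Store.Count + 1;
        Store[id] = document;
        return CreatedAtAction(nameof(Replace), new { id }, document);
    }
}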

When nothing else fits, use POST

Contrary to some HTTP myths, the POST is not solely intended to create resources. In fact, the new RFC 7231 states

The POST method requests that the target resource process the representation enclosed in the request according to the resource’s own specific semantics

The “according to the resource’s own specific semantics” effectively allows us to use POST for requests with any semantics. However the fact that it allows us doesn’t mean that we always should. Again, if another method (e.g. GET or PUT) best fits the request purpose, not choosing it may mean throwing away interesting properties, such as caching or fault recovery.

Does my API look RESTful in this method?

One thing that I always avoid is deciding based on the apparent “RESTfulness” of the method – for instance, an API doesn’t have to use PUT to be RESTful.

First and foremost we should think in terms of system properties and use HTTP accordingly. That implies:

  • Not violating its rules – what can go wrong if I choose PUT for this request?
  • Taking advantage of its benefits – what do I lose if I don’t choose PUT for this request?

Hope this helps.
Cheers.



Andrew Lock: Modifying the UI based on user authorisation in ASP.NET Core

Modifying the UI based on user authorisation in ASP.NET Core

This post is the next in the series on authentication and authorisation in ASP.NET Core. It shows how to modify the UI you present based on the authorisation level of the current user. This allows you to hide links to pages the user is not authorised to access, for example.

While important from a user-experience point of view, note that this technique does not provide security per se. You should always use it in combination with the authorisation policy techniques described previously.

Posts in the series:

The default user experience

In my previous post on authorisation in ASP.NET Core, I showed how to create custom authorisation policies and how to apply these to your MVC actions. I finished the post by decorating one of the controller actions with an AuthorizeAttribute to restrict access to only VIPs.

public class VIPLoungeController : Controller  
{
    [Authorize("CanAccessVIPArea")]
    public IActionResult ViewTheFancySeatsInTheLounge()
    {
       return View();
    }
}

This secured the endpoint, so any unauthenticated users trying to access it would be redirected to the login screen. Any user that was already logged in who didn't satisfy the authorisation requirements would receive a 403 - Forbidden response.

This satisfies the security aspect of our requirements but it does not necessarily provide a great user experience out of the box. For example, imagine we have a link in our web page which takes you to the MVC action described above. If the user is logged in, and does not have permission, they will probably be presented with a message similar to this:

Modifying the UI based on user authorisation in ASP.NET Core

or possibly even this:

Modifying the UI based on user authorisation in ASP.NET Core

If the user was not going to be allowed to access the page, they really should not have been shown the link to let them try!

The next question is how to go about conditionally hiding the link to this inaccessible page. So far we have only seen how to apply authorisation policies using attributes at the Controller or Action level (as shown above), or alternatively at a global level.

Before we see exactly how to update the UI, we'll take a slight detour to look at the key enabler service, the IAuthorisationService.

The IAuthorisationService

In my previous introduction to authorisation I described the process that occurs when you decorate your MVC Actions and Controllers with the AuthorizeAttribute. Under the hood, the MVC pipeline is injected with an AuthorizeFilter, which is responsible for authenticating the current user, and verifying whether they have access based on the applicable AuthorizationPolicy.

In order to perform the authorisation, the AuthorizeFilter retrieves an instance of IAuthorizationService from the application's dependency injection container. This service is responsible for actually evaluating the applicable AuthorizationHandler<T>s for the current policy being evaluated. It implements two related methods:

public interface IAuthorizationService  
{
    Task<bool> AuthorizeAsync(ClaimsPrincipal user, object resource, IEnumerable<IAuthorizationRequirement> requirements);

    Task<bool> AuthorizeAsync(ClaimsPrincipal user, object resource, string policyName);
}

The IAuthorizationService has just one job to do - it takes in a ClaimsPrincipal (User), an IEnumerable<IAuthorizationRequirement> (or a policy name) and an optional resource object (we'll come back to this in a later post), and determines whether all the requirements are satisfied. After checking each of the associated AuthorizationHandler<T>s, it returns a boolean indicating whether authorisation was successful.

Using the IAuthorisationService in your views

In the same way that MVC uses the IAuthorisationService to verify a user's authorisation, we can use it directly in our Views by injecting the service using dependency injection. Simply add an @inject directive at the top of your view page:

@inject IAuthorizationService AuthorizationService

The page will automatically resolve an instance of the IAuthorizationService from the DI container, and make it available as an AuthorizationService property in the view.

We can now use the service to conditionally hide parts of our UI based on the result of a call to AuthorizeAsync. We wrap the link to ViewTheFancySeatsInTheLounge in a call to the AuthorizationService using the User property and the name of the same policy we used previously in the AuthorizeAttribute:

@if (await AuthorizationService.AuthorizeAsync(User, "CanAccessVIPArea"))
{
  <li>
    <a asp-area="" 
       asp-controller="VIPLounge" asp-action="ViewTheFancySeatsInTheLounge"
    >View seats in the lounge</a>
  </li>
}

Now when we view our site as someone who is authorised to call the Action, the link will be available for us:

Modifying the UI based on user authorisation in ASP.NET Core

But when we are unauthenticated, or are not authorised to view the link, it won't be available at all:

Modifying the UI based on user authorisation in ASP.NET Core

Summary

In this post, we introduced the IAuthorisationService and how it can be used for imperative authorisation. We showed how you can inject it into a View, allowing you to hide or modify portions of the UI which the current user will not be authorised to access, providing a smoother user experience.


Damien Bowden: Angular2 autocomplete with ASP.NET Core and Elasticsearch

This article shows how autocomplete could be implemented in Angular 2 using ASP.NET Core MVC as a data service. The API uses Elasticsearch to query the data requests. ng2-completer is used to implement the Angular 2 autocomplete functionality.

Code: https://github.com/damienbod/Angular2AutoCompleteAspNetCoreElasticsearch

To use autocomplete in the Angular 2 application, the ng2-completer package needs to be added to the dependencies in the npm packages.json file.

"ng2-completer": "^0.2.2"

This project uses Webpack to build the Angular 2 application and all vendor packages are added to the vendor.ts which can then be used throughout the application. The ng2-completer package is added to the vendor.ts file which is then built using Webpack.

import '@angular/platform-browser-dynamic';
import '@angular/platform-browser';
import '@angular/core';
import '@angular/http';
import '@angular/router';

import 'ng2-completer';

import 'bootstrap/dist/js/bootstrap';

import './css/bootstrap.css';
import './css/bootstrap-theme.css';

PersonCity is used as the data model for the autocomplete. The server side of the application uses the PersonCity model to store and search for data.

export class PersonCity {
    public id: number;
    public name: string;
    public info: string;
    public familyName: string;
}

The ng2-completer autocomplete is used within the PersonCityAutocompleteSearchComponent. This component returns a PersonCity object to the consuming component. When a new search request is finished, the @Output bindModelPersonCityChange is updated. The @Output is chained to the onPersonCitySelected event from ng2-completer.

A custom CompleterService, PersoncityautocompleteDataService, is used to request the data from the server.

import { Component, Inject, EventEmitter, Input, Output, OnInit, AfterViewInit, ElementRef } from '@angular/core';
import { Http, Response } from "@angular/http";

import { Subscription } from 'rxjs/Subscription';
import { Observable } from 'rxjs/Observable';
import { Router } from  '@angular/router';

import { Configuration } from '../app.constants';
import { PersoncityautocompleteDataService } from './personcityautocompleteService';
import { PersonCity } from '../model/personCity';

import { CompleterService, CompleterItem } from 'ng2-completer';

import './personcityautocomplete.component.scss';

@Component({
    selector: 'personcityautocomplete',
  template: `
<ng2-completer [dataService]="dataService" (selected)="onPersonCitySelected($event)" [minSearchLength]="0" [disableInput]="disableAutocomplete"></ng2-completer>

`
})
    
export class PersoncityautocompleteComponent implements OnInit    {

    constructor(private completerService: CompleterService, private http: Http, private _configuration: Configuration) {

        this.dataService = new PersoncityautocompleteDataService(http, _configuration); ////completerService.local("name, info, familyName", 'name');
    }

    @Output() bindModelPersonCityChange = new EventEmitter<PersonCity>();
    @Input() bindModelPersonCity: PersonCity;
    @Input() disableAutocomplete: boolean = false;

    private searchStr: string;
    private dataService: PersoncityautocompleteDataService;

    ngOnInit() {
        console.log("ngOnInit PersoncityautocompleteComponent");
    }

    public onPersonCitySelected(selected: CompleterItem) {
        console.log(selected);
        this.bindModelPersonCityChange.emit(selected.originalObject);
    }
}


The PersoncityautocompleteDataService extends Subject<CompleterItem[]> and implements CompleterData, as described in the ng2-completer documentation. When PersonCity items are returned from the service, the results are mapped to CompleterItem items as required. This could also be done on the server, in which case the default remote service could be used. By using the custom service, it can easily be extended to add the security headers for the data service as required.

import { Http, Response } from "@angular/http";
import { Subject } from "rxjs/Subject";

import { CompleterData, CompleterItem } from 'ng2-completer';
import { Configuration } from '../app.constants';

export class PersoncityautocompleteDataService extends Subject<CompleterItem[]> implements CompleterData {
    constructor(private http: Http, private _configuration: Configuration) {
        super();

        this.actionUrl = _configuration.Server + 'api/personcity/querystringsearch/';
    }

    private actionUrl: string;

    public search(term: string): void {
        this.http.get(this.actionUrl + term)
            .map((res: Response) => {
                // Convert the result to CompleterItem[]
                let data = res.json();
                let matches: CompleterItem[] = data.map((personcity: any) => {
                    return {
                        title: personcity.name,
                        description: personcity.familyName + ", " + personcity.cityCountry,
                        originalObject: personcity
                    }
                });
                this.next(matches);
            })
            .subscribe();
    }

    public cancel() {
        // Handle cancel
    }
}

The PersonCityAutocompleteSearchComponent also implements its specific styles using the personcityautocomplete.component.scss file. The ng2-completer component comes with CSS classes which can be extended or overwritten.


.completer-input {
    width: 500px;
    display: block;
    height: 34px;
    padding: 6px 12px;
    font-size: 14px;
    line-height: 1.42857143;
    color: #555;
    background-color: #fff;
    background-image: none;
    border: 1px solid #ccc;
    border-radius: 4px;
  -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075);
          box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075);
  -webkit-transition: border-color ease-in-out .15s, -webkit-box-shadow ease-in-out .15s;
       -o-transition: border-color ease-in-out .15s, box-shadow ease-in-out .15s;
          transition: border-color ease-in-out .15s, box-shadow ease-in-out .15s;
}

.completer-dropdown {
    width: 480px !important;
}

ASP.NET Core MVC API

The PersonCityController MVC Controller implements the service which is used by the Angular 2 application. This service implements the Search action method which uses the IPersonCitySearchProvider to search for the data. Helper methods to create and add some documents to Elasticsearch are also implemented so that the search service can be tested.

using Microsoft.AspNetCore.Mvc;

namespace Angular2AutoCompleteAspNetCoreElasticsearch.Controllers
{
    [Route("api/[controller]")]
    public class PersonCityController : Controller
    {
        private readonly IPersonCitySearchProvider _personCitySearchProvider;

        public PersonCityController(IPersonCitySearchProvider personCitySearchProvider)
        {
            _personCitySearchProvider = personCitySearchProvider;
        }

        [HttpGet("search/{searchtext}")]
        public IActionResult Search(string searchtext)
        {
            return Ok(_personCitySearchProvider.QueryString(searchtext));
        }

        [HttpGet("createindex")]
        public IActionResult CreateIndex()
        {
            _personCitySearchProvider.CreateIndex();
            return Created("http://localhost:5000/api/PersonCity/createindex/", "index created");
        }

        [HttpGet("createtestdata")]
        public IActionResult CreateTestData()
        {
            _personCitySearchProvider.CreateTestData();
            return Created("http://localhost:5000/api/PersonCity/createtestdata/", "test data created");
        }

        [HttpGet("indexexists")]
        public IActionResult GetElasticsearchStatus()
        {
            return Ok(_personCitySearchProvider.GetStatus());
        }
    }
}

The ElasticsearchCRUD NuGet package is used to access Elasticsearch. The PersonCitySearchProvider implements this logic. NEST could also be used; only the PersonCitySearchProvider implementation would need to be changed to support this.

"ElasticsearchCRUD":  "2.4.1.1"

The PersonCitySearchProvider class implements the IPersonCitySearchProvider interface which is used in the MVC controller. The IPersonCitySearchProvider needs to be added to the services in the Startup class. The search uses a QueryStringQuery search with wildcards. Any other query, aggregation could be used here, depending on the search requirements.

using System.Collections.Generic;
using System.Linq;
using ElasticsearchCRUD;
using ElasticsearchCRUD.ContextAddDeleteUpdate.IndexModel.SettingsModel;
using ElasticsearchCRUD.Model.SearchModel;
using ElasticsearchCRUD.Model.SearchModel.Queries;
using ElasticsearchCRUD.Tracing;

namespace Angular2AutoCompleteAspNetCoreElasticsearch
{
    public class PersonCitySearchProvider : IPersonCitySearchProvider
    {
        private readonly IElasticsearchMappingResolver _elasticsearchMappingResolver = new ElasticsearchMappingResolver();
        private const string ConnectionString = "http://localhost:9200";
        private readonly ElasticsearchContext _context;

        public PersonCitySearchProvider()
        {
            _context = new ElasticsearchContext(ConnectionString, new ElasticsearchSerializerConfiguration(_elasticsearchMappingResolver))
            {
                TraceProvider = new ConsoleTraceProvider()
            };
        }

        public IEnumerable<PersonCity> QueryString(string term)
        {
            var results = _context.Search<PersonCity>(BuildQueryStringSearch(term));

            return results.PayloadResult.Hits.HitsResult.Select(t => t.Source);
        }

        /// <summary>
        /// TODO protect against injection!
        /// </summary>
        /// <param name="term"></param>
        /// <returns></returns>
        private Search BuildQueryStringSearch(string term)
        {
            var names = "";
            if (term != null)
            {
                names = term.Replace("+", " OR *");
            }

            var search = new Search
            {
                Query = new Query(new QueryStringQuery(names + "*"))
            };

            return search;
        }

        public bool GetStatus()
        {
            return _context.IndexExists<PersonCity>();
        }

        public void CreateIndex()
        {
            _context.IndexCreate<PersonCity>(new IndexDefinition());
        }

        public void CreateTestData()
        {
            PersonCityData.CreateTestData();

            foreach (var item in PersonCityData.Data)
            {
                _context.AddUpdateDocument(item, item.Id);
            }

            _context.SaveChanges();
        }
    }
}
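
The registration of the provider itself is not shown in the post, but a minimal ConfigureServices sketch (assuming a singleton lifetime, since the provider creates its own ElasticsearchContext) could look like this:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // Make the search provider available for injection into PersonCityController.
    services.AddSingleton<IPersonCitySearchProvider, PersonCitySearchProvider>();
}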

When the application is started, the autocomplete is deactivated as no index exists.

angular2autocompleteaspnetcoreelasticsearch_01

Once the index exists, data can be added to the Elasticsearch index.
angular2autocompleteaspnetcoreelasticsearch_02

And the autocomplete can be used.

angular2autocompleteaspnetcoreelasticsearch_03

Links:

https://github.com/oferh/ng2-completer

https://github.com/damienbod/Angular2WebpackVisualStudio

https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html

https://www.elastic.co/products/elasticsearch

https://www.nuget.org/packages/ElasticsearchCRUD/

https://github.com/damienbod/ElasticsearchCRUD



Damien Bowden: Using SASS with Webpack, Angular2 and Visual Studio

This post shows how to use SASS with Webpack and Angular 2 in Visual Studio. I had various problems trying to get this to work from Visual Studio using a Webpack build. The following is a solution which works, but not the only one.

Code: https://github.com/damienbod/Angular2WebpackVisualStudio

Install node-sass globally, so that the package is available everywhere. The latest installed node-sass will then be available on the path.

npm install node-sass -g

Add the SASS packages as required in the project npm packages.json file.

 "devDependencies": {
        "node-sass": "3.10.1",
        "sass-loader": "^3.1.2",
        "style-loader": "^0.13.0",
        ...
}

The SASS configuration can then be added to the Webpack config file(s). The SASS scss files are built as part of the Webpack build.

{
  test: /\.scss$/,
  loaders: ["style-loader", "css-loader", "sass-loader"]
},

Now a SASS file can be created and appended to any Angular 2 component.

body {
    padding-top: 50px;
}

.starter-template {
    padding: 40px 15px;
    text-align: center;
}

.navigationLinkButton:hover {
    cursor: pointer;
}

a {
    color: #03A9F4;
}

The scss file or files can be used in the Angular 2 component TypeScript file via the @Component decorator. The styles property defines an array of strings, so each scss require needs to be converted to a string, otherwise it will not work. Thanks to Jackie Gleason for this solution.

import { Component, OnInit } from '@angular/core';
import { Router } from '@angular/router';

import './app.component.scss';
import '../style/app.scss';

@Component({
    selector: 'my-app',
    templateUrl: 'app.component.html',
})

export class AppComponent {

    constructor(private router: Router) {
    }
}

If you have an error message like "binding.node. Try reinstalling `node-sass`" somewhere in your Webpack build, try the following fixes:

1: Reinstall node-sass

npm install node-sass -g

2: Edit the Visual Studio project and settings

You can use the node from your path, rather than the node installed with Visual Studio, by changing Tools > Options > Projects and Solutions > External Web Tools and moving the path option to the top. This solution is from Mads Kristensen.

m1les gave this solution on Stack Overflow.

You can then view the CSS styles created from SASS by the Webpack build in Visual Studio.

Links

http://stackoverflow.com/questions/31301582/task-runner-explorer-cant-load-tasks/31444245

https://github.com/webpack/style-loader/issues/123

Customize external web tools in Visual Studio 2015

http://sass-lang.com/

https://www.bensmithett.com/smarter-css-builds-with-webpack/

https://github.com/jtangelder/sass-loader

http://eng.localytics.com/faster-sass-builds-with-webpack/



Dominick Baier: New in IdentityServer4: Multiple allowed Grant Types

In OAuth 2 some grant type combinations are insecure, which is why for IdentityServer3 we decided to be defensive and allow only a single grant type per client.

During the last two years of implementing OAuth 2, it turned out that certain combinations of grant types actually do make sense and we adjusted IdentityServer3 to accommodate a couple of those scenarios. But there were still some common cases that either required you to create multiple client configurations for the same logical client – or configuration became a bit messy.

We fixed that in IdentityServer4 – we now allow almost all combinations of grant types for a single client – including the standard ones and extension grants that you add yourself.

We still check that the combination you choose will not result in a security problem – so we haven’t compromised security. Just made the configuration more flexible and easier to use.
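
As a rough sketch (the client values below are made up for illustration), a single IdentityServer4 client definition can now combine, for example, an interactive flow with client credentials:

var client = new Client
{
    ClientId = "mvc.hybrid",
    ClientSecrets = { new Secret("secret".Sha256()) },

    // One logical client, two grant types: hybrid for interactive logins,
    // client credentials for machine-to-machine calls.
    AllowedGrantTypes = GrantTypes.HybridAndClientCredentials,

    AllowedScopes = { "openid", "profile", "api1" }
};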

See all the details here.


Filed under: ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI


Andrew Lock: Custom authorisation policies and requirements in ASP.NET Core

Custom authorisation policies and requirements in ASP.NET Core

This post is the next in a series of posts on the authentication and authorisation infrastructure in ASP.NET Core. In the previous post we showed the basic framework for authorisation in ASP.NET Core, i.e. restricting access to parts of your application depending on the current authenticated user. We introduced the concept of Policies, to decouple your authorisation logic from the underlying roles and claims of users. Finally, we showed how to create simple policies that verify the existence of a single claim or role.

In this post we look at creating more complex policies with multiple requirements, creating a custom requirement, and applying an authorisation policy to your entire application.

Policies with multiple requirements

In the previous post, I showed how we could create a simple policy, named CanAccessVIPArea to verify whether a user is allowed to access VIP related methods. This policy tested for a single claim on the User, and authorised the user if the policy was satisfied. For completeness, this is how we configured it in our Startup class:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMvc();

    services.AddAuthorization(options =>
    {
        options.AddPolicy(
            "CanAccessVIPArea",
            policyBuilder => policyBuilder.RequireClaim("VIPNumber"));
    });
}

Imagine now that the original requirements have changed. For example, consider this policy as being applied to the VIP lounge at an airport. In the current implementation, you would be allowed to enter, only if you have a VIP number. However, we now want to ensure that employees of the airline are also allowed to use the VIP lounge, as well as the CEO of the airport.

When you first consider the problem, you might see the policyBuilder object above, notice that it provides a fluent interface, and be tempted to chain additional RequireClaim() calls to it, something like

policyBuilder => policyBuilder  
    .RequireClaim("VIPNumber")
    .RequireClaim("EmployeeNumber")
    .RequireRole("CEO"));

Unfortunately this won't produce the desired behaviour. Each of the requirements that make up the policy must be satisfied, i.e. they are combined using AND whereas we have an OR requirement. To pass the policy in this current state, you would need to have a VIPNumber, an EmployeeNumber and also be a CEO!

Creating a custom policy using a Func

There are a number of different approaches available to satisfy our business requirement, but as the policy is simple to express in this case, we will simply use a Func<AuthorizationHandlerContext, bool> provided to the PolicyBuilder.RequireAssertion method:

services.AddAuthorization(options =>  
{
    options.AddPolicy(
        "CanAccessVIPArea",
        policyBuilder => policyBuilder.RequireAssertion(
            context => context.User.HasClaim(claim => 
                           claim.Type == "VIPNumber" 
                           || claim.Type == "EmployeeNumber")
                        || context.User.IsInRole("CEO"))
        );
});

To satisfy this requirement we are returning a simple bool to indicate whether a user is authorised based on the policy. We are provided an AuthorizationHandlerContext which provides us access to the current ClaimsPrincipal via the User property. This allows us to verify the claims and role of the user.

As you can see from our logic, our "CanAccessVIPArea" policy will now authorise if any of our original business requirements are met, which provides multiple ways to authorise a user.

Creating a custom requirement

While the above approach works for the simplest requirements, it's easy to see that as the rules become more complicated, your policy code could quickly become unmanageable. Additionally, you may need access to other services via dependency injection. In these cases, it's worth considering creating custom requirements and handlers.

Before I jump into the code, a quick recap on the terminology used here:

  • We have a Resource that needs to be protected (e.g. an MVC Action) so that only some users may be authorised to access it.
  • A resource may be protected by one or more Policies (e.g. CanAccessVIPArea). All policies must be satisfied in order for access to the resource to be granted.
  • Each Policy has one or more Requirements (e.g. IsVIP, IsBookedOnToFlight). All requirements of a policy must be satisfied for the overall policy to be satisfied.
  • Each Requirement has one or more Handlers. A requirement is satisfied if any of them returns a Success result, and none of them returns an explicit Fail result.

With this in mind, we will redesign our VIP policy above to use a custom requirement, and create some handlers for it.

The Requirement

A requirement in ASP.NET Core is a simple class that implements the empty marker interface IAuthorizationRequirement. You can also use it to store any additional parameters for use later. We have extended our basic VIP requirement described previously to also provide an Airline, so that we only allow employees of the given airline to access the VIP lounge:

public class IsVipRequirement : IAuthorizationRequirement  
{
    public IsVipRequirement(string airline)
    {
        Airline = airline;
    }

    public string Airline { get; }
}

The Authorisation Handlers

The authorisation handler is where all the work of authorising a requirement takes place. To implement a handler you inherit from AuthorizationHandler<T>, and implement the HandleRequirementAsync() method. As mentioned previously, a requirement can have multiple handlers, and only one of these needs to succeed for the requirement to be satisfied.

In our business requirement, we have three handlers corresponding to the three different ways to satisfy the requirement. Each of these are presented and explained below. I will also add an additional handler which checks whether a user has been banned from the VIP lounge previously, so shouldn't be let in again!

The simplest handler is the 'CEO' handler. This simply checks if the current authenticated user is in the role "CEO". If they are, then the handler calls Succeed on the underlying requirement. A default task is returned at the end of the method as the method is asynchronous. Note that in the case that the requirement is not fulfilled, we do nothing with the context; if we cannot fulfil it with the current handler, we leave it for the next handler to deal with.

public class IsCEOAuthorizationHandler : AuthorizationHandler<IsVipRequirement>  
{
    protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, IsVipRequirement requirement)
    {
        if (context.User.IsInRole("CEO"))
        {
            context.Succeed(requirement);
        }
        return Task.FromResult(0);
    }
}

The VIP number handler is much the same; it performs a simple check that the current ClaimsPrincipal contains a claim of type "VIPNumber", and if so, satisfies the requirement.

public class HasVIPNumberAuthorizationHandler : AuthorizationHandler<IsVipRequirement>  
{
    protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, IsVipRequirement requirement)
    {
        if (context.User.HasClaim(claim => claim.Type == "VIPNumber"))
        {
            context.Succeed(requirement);
        }
        return Task.FromResult(0);
    }
}

Our next handler is the 'employee' handler. This verifies that the authenticated user has a claim of type 'EmployeeNumber', and also that this claim was issued by the given Airline. We will see shortly where the requirement object passed in comes from, but you can see that we can access its Airline property and use that within our handler:

public class IsAirlineEmployeeAuthorizationHandler : AuthorizationHandler<IsVipRequirement>  
{
    protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, IsVipRequirement requirement)
    {
        if (context.User.HasClaim(claim =>
            claim.Type == "EmployeeNumber" && claim.Issuer == requirement.Airline))
        {
            context.Succeed(requirement);
        }
        return Task.FromResult(0);
    }
}

Our final handler deals with the case that a user has been banned from being a VIP (maybe they stole too many tiny tubes of toothpaste, or had one too many Laphroaigs). Even if other requirements are met, we don't want to grant the authenticated user VIP status. So even if the user is a CEO, has a VIP Number and is an employee - if they are banned, they can't come in.

We can code this business requirement by calling the context.Fail() method as appropriate within the HandleRequirementAsync method:

public class IsBannedAuthorizationHandler : AuthorizationHandler<IsVipRequirement>  
{
    protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, IsVipRequirement requirement)
    {
        if (context.User.HasClaim(claim => claim.Type == "IsBannedFromVIP"))
        {
            context.Fail();
        }
        return Task.FromResult(0);
    }
}

Calling Fail() overrides any other Success() calls for a requirement. Note that whether a handler calls Success or Fail, all of the registered handlers will be called. This ensures that any side effects (such as logging etc) will always be executed, no matter the order in which the handlers run.

Wiring it all up

Now we have all the pieces we need, we just need to wire up our policy and handlers. We modify the configuration of our AddAuthorization call to use our IsVipRequirement, and also register our handlers with the dependency injection container. We can use singletons here as we are not injecting any dependencies.

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMvc();

    services.AddAuthorization(options =>
    {
        options.AddPolicy(
            "CanAccessVIPArea",
            policyBuilder => policyBuilder.AddRequirements(
                new IsVipRequirement("British Airways")));
    });

   services.AddSingleton<IAuthorizationHandler, IsCEOAuthorizationHandler>();
   services.AddSingleton<IAuthorizationHandler, HasVIPNumberAuthorizationHandler>();
   services.AddSingleton<IAuthorizationHandler, IsAirlineEmployeeAuthorizationHandler>();
   services.AddSingleton<IAuthorizationHandler, IsBannedAuthorizationHandler>();
}

An important thing to note here is that we are explicitly creating an instance of the IsVipRequirement to be associated with this policy. That means the "CanAccessVIPArea" policy only applies to "British Airways" employees. If we wanted similar behaviour for "American Airlines" employees, we would need to create a second Policy. It is this IsVipRequirement object which is passed to the HandleRequirementAsync method in our handlers.
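
For example, supporting a second airline with this approach would mean registering a second, near-identical policy (the policy name here is purely illustrative):

options.AddPolicy(
    "CanAccessAmericanAirlinesVIPArea",
    policyBuilder => policyBuilder.AddRequirements(
        new IsVipRequirement("American Airlines")));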

With our policy in place, we can easily apply it in multiple locations via the AuthorizeAttribute and protect our Action methods:

public class VIPLoungeController : Controller  
{
    [Authorize("CanAccessVIPArea")]
    public IActionResult ViewTheFancySeatsInTheLounge()
    {
       return View();
    }
}

Applying a global authorisation requirement

As well as applying the policy to individual Actions or Controllers, you can also apply policies globally to protect all of your MVC endpoints. A classic example of this is that you always want a user to be authenticated to browse your site. You can easily create a policy for this by using the RequireAuthenticatedUser() method on PolicyBuilder, but how do you apply the policy globally?

To do this you need to add an AuthorizeFilter to the global MVC filters as part of your call to AddMvc(), passing in the constructed Policy:

services.AddMvc(config =>  
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .Build();
    config.Filters.Add(new AuthorizeFilter(policy));
});

As shown in the previous post, the AuthorizeFilter is where the authorisation work happens in an MVC application, and is added wherever an AuthorizeAttribute is used. In this case we are ensuring an additional AuthorizeFilter is added for every request.

Note that as this happens for every Action, you will need to decorate your Login methods etc with the AllowAnonymous attribute so that you can actually authenticate and browse the rest of the site!
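
For example, a login action might opt out of the global policy like this (a minimal sketch; your AccountController will likely have more going on):

public class AccountController : Controller  
{
    [AllowAnonymous]
    public IActionResult Login(string returnUrl = null)
    {
        return View();
    }
}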

Summary

In this post I showed in more detail how authorisation policies, requirements and handlers work in ASP.NET Core. I showed how you could use a Func<> to handle simple policies, and how to create custom requirements and handlers for more complex policies. Finally, I showed how you could apply a policy globally to your whole MVC application.


Pedro Félix: Health check APIs, 500 status codes and media types

A status or health check resource (or endpoint, to use the more popular terminology) is a common way for a system to provide an aggregated representation of its operational status. This status representation typically includes a list with the individual system components or health check points and their individual status (e.g. database connectivity, memory usage threshold, deadlocked threads).

For instance, the popular Dropwizard Java framework already provides an out-of-the-box health check resource, located by default on the /healthcheck URI of the administration port, for this purpose.

The following is an example of such a representation, defined by a JSON object containing a field for each health check verification.

{
    "deadlocks":{
        "healthy":true
    },
    "database":{
        "healthy":true
    }
}

Apparently, it is also a common practice for a GET request to these resources to return a 500 status code if any of the internal components reports a problem. For instance, the Dropwizard documentation states

If all health checks report success, a 200 OK is returned. If any fail, a 500 Internal Server Error is returned with the error messages and exception stack traces (if an exception was thrown).

In my opinion, this practice goes against the HTTP status code semantics because the server was indeed capable of processing the request and producing a valid response with a correct resource state representation, that is, a correct representation of the system status. The fact that this status includes the information of an error does not change that.

So, why is this incorrect practice used so often? My conjecture is that there are two reasons for it.

  • First, an incomplete knowledge of the HTTP status code semantics that may induce the following reasoning: if the response contains an error then a 500 must be used.
  •  Second, and perhaps more important, because this practice really comes in handy when using external monitoring systems (e.g. nagios) to periodically check these statuses. Since these monitoring systems do not commonly understand the healthcheck representation, namely because each API or framework uses a different one, the easier solution is to rely solely on the status code: 200 if everything is apparently working properly, 500 if something is not ok.

Does this difference between a 200 and a 500 matter, or are we just being pedantic here? Well, I do think it really matters: by returning a 500 status code on a correctly handled request, the status resource is hiding errors in its own behaviour. For instance, let’s consider the common scenario where the status resource is implemented by a third-party provider. A failure of this provider will be indistinguishable from a failure of the system being checked, because a 500 will be returned in both cases.
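
To make the distinction concrete, here is a minimal ASP.NET Core sketch of a health check resource that follows this reasoning; the IStatusCheck abstraction is hypothetical, invented for illustration. The request is handled successfully either way, so the response is always a 200, and any component failure is expressed in the representation rather than in the status code.

using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;

public interface IStatusCheck
{
    string Name { get; }
    bool IsHealthy();
}

[Route("healthcheck")]
public class HealthCheckController : Controller
{
    private readonly IEnumerable<IStatusCheck> _checks;

    public HealthCheckController(IEnumerable<IStatusCheck> checks)
    {
        _checks = checks;
    }

    [HttpGet]
    public IActionResult Get()
    {
        // A component problem is reported in the body, not via a 500.
        var status = _checks.ToDictionary(
            check => check.Name,
            check => new { healthy = check.IsHealthy() });

        return Ok(status);
    }
}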

This example shows the consequences of the lack of effort on designing and standardizing media types. The availability of a standard media type would allow a many-to-many relation between monitoring systems and health check resources.

  • A health check resource could easily be monitored/queried by any monitoring system.
  • A monitoring system could easily inspect multiple health check resources, implemented over different technologies.

 

monitoring

Also, by a using a media-type, the monitoring result could be much richer than “ok” vs. “not ok”.

To conclude with a call-to-action, we really need to create a media type to represent health check or status outcomes, eventually based on an already existing media type:

  • E.g. building upon the “application/problem+json” media type (RFC 7807), extended to represent multiple problem statuses.
  • E.g. building upon the “application/status+json” media type proposal.

Comments are welcomed.




Dominick Baier: IdentityServer4 RC2 released

Yesterday we pushed IdentityServer4 RC2 to nuget. There are no big new features this time, but a lot of cleaning up, bug fixing and adding more tests.

We might add one or two more bigger things before RTM – but mainly we are in stabilization-mode right now.

All the docs have been updated, and the release notes give you more details on the changes.

Please go ahead and try it out – and give us feedback on the issue tracker. The more, the better.


Filed under: .NET Security, ASP.NET, IdentityModel, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: IdentityModel v2 released

IdentityModel is our protocol client library for various OpenID Connect and OAuth 2 endpoints like discovery, userinfo, token, introspection and token revocation. In addition it has some general purpose helpers like generating random numbers, base64 URL encoding, time-constant string comparison and X509 store access.

V1 is a PCL – but V2 now targets netstandard 1.3 (and classic .NET 4.5). Since we have quite a big user base for V1, we didn’t want to break anyone by making that change. This is the reason why v1 and v2 now live in separate repos and can evolve independently if needed.

See the readme for examples what IdentityModel can do – and – as always give us feedback via the issue tracker.


Filed under: .NET Security, IdentityModel, OAuth, OpenID Connect, WebAPI


Andrew Lock: Introduction to Authorisation in ASP.NET Core

Introduction to Authorisation in ASP.NET Core

This is the next in a series of posts about authentication and authorisation in ASP.NET Core. In the first post we introduced authentication in ASP.NET Core at a high level, introducing the concept of claims-based authentication. In the next two posts, we looked in greater depth at the Cookie and JWT middleware implementations to get a deeper understanding of the authentication process. Finally, we looked at using OAuth 2.0 and OpenID Connect in your ASP.NET Core applications.

In this post we'll learn about the authorisation aspect of ASP.NET Core.

Introduction to Authorisation

Just to recap, authorisation is the process of determining if a given user has the necessary attributes/permissions to access a given resource/section of code. In ASP.NET Core, the user is specified by a ClaimsPrincipal object, which may have one or more associated ClaimsIdentity, which in turn may have any number of Claims. The process of creating the ClaimsPrincipal and assigning it the correct Claims is the process of authentication. Authentication is independent and distinct from authorisation, but must occur before authorisation can take place.

In ASP.NET Core, authorisation can be granted based on a number of different factors. These may be based on the roles of the current user (as was common in previous versions of .NET), the claims of the current user, the properties of the resource being accessed, or any other property you care to think of. In this post we'll cover some of the most common approaches to authorising users in your MVC application.

Authorisation in MVC

Authorisation in MVC all centres around the AuthorizeAttribute. In its simplest form, applying it to an Action (or controller, or globally) marks that action as requiring an authenticated user. Thinking in terms of ClaimsPrincipal and ClaimsIdentity, that means that the current principal must contain a ClaimsIdentity for which IsAuthenticated=true.

This is the coarsest level of granularity - either you are authenticated, and you have access to the resource, or you aren't, and you do not.

You can use the AllowAnonymousAttribute to ignore an AuthorizeAttribute, so in the following example, only authorised users can call the Manage method, while anyone can call the Logout method:

[Authorize]
public class AccountController: Controller  
{
    public IActionResult Manage()
    {
        return View();
    }

    [AllowAnonymous]
    public IActionResult Logout()
    {
        return View();
    }
}

Under the hood

Before we go any further I'd like to take a minute to dig into what is actually happening under the covers here.

The AuthorizeAttribute applied to your actions and controllers is mostly just a marker attribute; it does not contain any behaviour. Instead, it is the AuthorizeFilter which MVC adds to its filter pipeline when it spots the AuthorizeAttribute applied to an action. This filter implements IAsyncAuthorizationFilter, so that it is called early in the MVC pipeline to verify the request is authorised:

public interface IAsyncAuthorizationFilter : IFilterMetadata  
{
    Task OnAuthorizationAsync(AuthorizationFilterContext context);
}

AuthorizeFilter.OnAuthorizationAsync is called to authorise the request, which undertakes a number of actions. The method is reproduced below with some precondition checks removed for brevity - we'll dissect it in a minute:

public virtual async Task OnAuthorizationAsync(AuthorizationFilterContext context)  
{
    var effectivePolicy = Policy;
    if (effectivePolicy == null)
    {
        effectivePolicy = await AuthorizationPolicy.CombineAsync(PolicyProvider, AuthorizeData);
    }

    if (effectivePolicy == null)
    {
        return;
    }

    // Build a ClaimsPrincipal with the Policy's required authentication types
    if (effectivePolicy.AuthenticationSchemes != null && effectivePolicy.AuthenticationSchemes.Count > 0)
    {
        ClaimsPrincipal newPrincipal = null;
        for (var i = 0; i < effectivePolicy.AuthenticationSchemes.Count; i++)
        {
            var scheme = effectivePolicy.AuthenticationSchemes[i];
            var result = await context.HttpContext.Authentication.AuthenticateAsync(scheme);
            if (result != null)
            {
                newPrincipal = SecurityHelper.MergeUserPrincipal(newPrincipal, result);
            }
        }
        // If all schemes failed authentication, provide a default identity anyways
        if (newPrincipal == null)
        {
            newPrincipal = new ClaimsPrincipal(new ClaimsIdentity());
        }
        context.HttpContext.User = newPrincipal;
    }

    // Allow Anonymous skips all authorization
    if (context.Filters.Any(item => item is IAllowAnonymousFilter))
    {
        return;
    }

    var httpContext = context.HttpContext;
    var authService = httpContext.RequestServices.GetRequiredService<IAuthorizationService>();

    // Note: Default Anonymous User is new ClaimsPrincipal(new ClaimsIdentity())
    if (!await authService.AuthorizeAsync(httpContext.User, context, effectivePolicy))
    {
        context.Result = new ChallengeResult(effectivePolicy.AuthenticationSchemes.ToArray());
    }
}

First, it calculates the applicable AuthorizationPolicy for the request. This sets the requirements that must be met for the request to be authorised. The next step is to attempt to authenticate the request by calling AuthenticateAsync(scheme) on the AuthenticationManager found at HttpContext.Authentication. This will run through the authentication process I have discussed in previous posts, and if successful, returns an authenticated ClaimsPrincipal back to the filter.

Once an authenticated principal has been obtained, the authorisation process can begin. First, the method is checked to see if it has an IAllowAnonymousFilter applied (added when an AllowAnonymousAttribute is used), and if it does, returns successfully without any further processing.

If authorisation is required, then the filter requests an instance of IAuthorizationService from the HttpContext. This service neatly encapsulates all the logic for deciding whether a ClaimsPrincipal meets the requirements of the particular AuthorizationPolicy. A call to IAuthorizationService.AuthorizeAsync() returns a boolean, indicating whether authorisation was successful.
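
As an aside, the same IAuthorizationService can be injected into your own controllers or services if you need to evaluate a policy imperatively. The following sketch is illustrative only - the controller and the policy name are hypothetical (a similar policy is configured later in this post) - but it shows the same AuthorizeAsync call the filter relies on:

public class VipLoungeController : Controller  
{
    private readonly IAuthorizationService _authorizationService;

    public VipLoungeController(IAuthorizationService authorizationService)
    {
        _authorizationService = authorizationService;
    }

    public async Task<IActionResult> Index()
    {
        // Evaluates the named policy against the current user; returns a bool in ASP.NET Core 1.x
        if (!await _authorizationService.AuthorizeAsync(User, "CanAccessVIPArea"))
        {
            return new ChallengeResult();
        }

        return View();
    }
}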

If the IAuthorizationService indicates the request was not authorised, the AuthorizeFilter sets the result to a ChallengeResult, bypassing the remainder of the MVC pipeline. When executed, this result calls ChallengeAsync on the AuthenticationManager, which in turn calls HandleUnauthorizedAsync or HandleForbiddenAsync on the underlying AuthenticationHandler, as covered previously.

The end result will be either a 403 indicating the user does not have permission, or a 401 indicating they are not logged in, which will generally be captured and converted to a redirect to the login page.
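
Where that redirect goes is determined by the authentication middleware handling the challenge. As an illustration only (assuming the cookie authentication middleware is in use, with hypothetical paths), the relevant options in ASP.NET Core 1.x look something like this:

app.UseCookieAuthentication(new CookieAuthenticationOptions  
{
    AuthenticationScheme = "Cookies",
    AutomaticAuthenticate = true,
    // AutomaticChallenge lets this middleware handle the 401/403 produced by authorisation
    AutomaticChallenge = true,
    LoginPath = new PathString("/Account/Login"),            // 401 -> redirect to the login page
    AccessDeniedPath = new PathString("/Account/Forbidden")  // 403 -> redirect to a 'forbidden' page
});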

The details of how the AuthorizeFilter works are rather tangential to this introduction, but they highlight the separation of concerns and abstractions used to facilitate easier testing, and the use of dumb marker attributes to act as hooks for other, more complex services.

Authorising based on claims

Now that detour is over and we understand more of how authorisation works in MVC, we can look at creating some specific authorisation requirements, more than just 'you logged in'.

As I discussed in the introduction to authentication, identity in ASP.NET Core is really entirely focussed around Claims. Given that fact, one of the most obvious modes of authorisation is to check that a user has a given claim. For example, there may be a section of your site which is only available to VIPs. In order to authorise requests you could create a CanAccessVIPArea policy, or more specifically an AuthorizationPolicy.

To create a new policy, we configure it as part of the service configuration in the ConfigureServices method of your Startup class, using an AuthorizationPolicyBuilder. We provide a name for the policy, "CanAccessVIPArea", and add a requirement that the user has the VIPNumber claim:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMvc();

    services.AddAuthorization(options =>
    {
        options.AddPolicy(
            "CanAccessVIPArea",
            policyBuilder => policyBuilder.RequireClaim("VIPNumber"));
    });
}

This requirement ensures only that the ClaimsPrincipal has the VIPNumber claim; it does not make any requirements on the value of the claim. If we require the claim to have specific values, we can pass those to the RequireClaim method:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMvc();

    services.AddAuthorization(options =>
    {
        options.AddPolicy(
            "CanAccessVIPArea",
            policyBuilder => policyBuilder.RequireClaim("VIPNumber", "1", "2"));
    });
}

With our policy configured, we can now apply it to our actions or controllers to protect them from the proletariat:

[Authorize(Policy = "CanAccessVIPArea")]
public class ImportantController: Controller  
{
    public IActionResult FancyMethod()
    {
        return View();
    }
}

Note that if you have multiple AuthorizeAttributes applied to an action then all of the policies must be satisfied for the request to be authorised.
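
For example, both policies in the following hypothetical controller (the second policy name is made up purely for illustration) would have to pass before any action executes:

[Authorize(Policy = "CanAccessVIPArea")]
[Authorize(Policy = "IsEmployee")]
public class VIPStaffController: Controller  
{
    public IActionResult Index()
    {
        return View();
    }
}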

Authorising based on roles

Before claims based authentication was embraced, authorisation by role was a common approach. As shown previously, ClaimsPrincipal still has an IsInRole(string role) method that you can use if needed. In particular, you can specify required roles on AuthorizeAttributes, which will then verify the user is in the correct role before authorising the user:

[Authorize(Roles = "HRManager, CEO")]
public class AccountController: Controller  
{
    public IActionResult ViewUsers()
    {
        return View();
    }
}

However, other than for simplicity in porting from ASP.NET 4.X, I wouldn't recommend using the Roles property on the AuthorizeAttribute. Instead, it is far better to use the same AuthorizationPolicy infrastructure as for Claim requirements. This provides far more flexibility than the previous approach, making it simpler to update when policies change, or they need to be dynamically loaded for example.

Configuring a role-based policy is much the same as for Claims and allows you to specify multiple roles; membership in any of these will satisfy the policy requirement.

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMvc();

    services.AddAuthorization(options =>
    {
        options.AddPolicy(
            "CanViewAllUsers",
            policy => policy.RequireRole("HRManager", "CEO"));
    });
}

We can now update the previous method to use our new policy:

[Authorize(Policy = "CanViewAllUsers")]
public class AccountController: Controller  
{
    public IActionResult ViewUsers()
    {
        return View();
    }
}

Later on, if we decide to take a claims based approach to our authorisation, we can just update the policies as appropriate, rather than having to hunt through all the Controllers in our solution to find usages of the magic role strings.

Behind the scenes, the roles of a ClaimsPrincipal are actually just claims created with a type of ClaimsIdentity.RoleClaimType. By default, this is given by ClaimTypes.Role, which is the string http://schemas.microsoft.com/ws/2008/06/identity/claims/role. When a user is authenticated, appropriate claims are added for their roles, which can be found later as required.
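
As a rough sketch of what that means in practice (the user name and roles here are hypothetical), a principal with role claims might be built up like this, and IsInRole simply looks for a matching claim:

var claims = new List<Claim>
{
    new Claim(ClaimTypes.Name, "andrew"),
    // Roles are just claims with the identity's RoleClaimType (ClaimTypes.Role by default)
    new Claim(ClaimTypes.Role, "HRManager"),
    new Claim(ClaimTypes.Role, "CEO")
};

var identity = new ClaimsIdentity(claims, "Cookies");
var principal = new ClaimsPrincipal(identity);

var isManager = principal.IsInRole("HRManager"); // true - a matching role claim exists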

It's worth bearing this in mind if you have difficulty with AuthorizeAttributes not working. Most external identity providers will use a different set of claims representing role, name etc. that do not marry up with the values used by Microsoft in the ClaimTypes class. As Dominick Baier discusses on his blog, this can lead to situations where claims are not translated and so users can appear to not be in a given role. If you run into issues where your authorisation does not appear to be working correctly, I strongly recommend you check out his post for all the details.

Generally speaking, unless you have legacy requirements, I would recommend against using roles - they are essentially just a subset of the Claims approach, and provide limited additional value.

Summary

This post provided an introduction to authorisation in ASP.NET Core MVC, using the AuthorizeAttribute. We touched on three simple ways you can authorise users - based on whether they are authenticated, by policy, and by role. We also went under the covers briefly to see how the AuthorizeFilter works when called as part of the MVC pipeline.

In the next post we will explore policies further, looking at how you can create custom policies and custom requirements.


Damien Bowden: IdentityServer4, Web API and Angular2 in a single ASP.NET Core project

This article shows how IdentityServer4 with Identity, a data Web API, and an Angular 2 SPA can be set up inside a single ASP.NET Core project. The application uses the OpenID Connect Implicit Flow with reference tokens to access the API. The Angular 2 application is built using webpack.

Code: https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow

Other posts in this series:

Step 1: Create app and add IdentityServer4

Use the Quickstart6 AspNetIdentity from IdentityServer4 to set up the application. Then edit the project.json file to add your packages as required. I added the Microsoft.AspNetCore.Authentication.JwtBearer package and also the IdentityServer4.AccessTokenValidation package. The buildOptions have to be extended to ignore the node_modules folder.

{
  "userSecretsId": "aspnet-IdentityServerWithAspNetIdentity-1e7bf5d8-6c32-4dd3-b77d-2d7d2e0f5099",

  "dependencies": {
    "IdentityServer4": "1.0.0-rc3",
    "IdentityServer4.AspNetIdentity": "1.0.0-rc3",

    "Microsoft.NETCore.App": {
      "version": "1.0.1",
      "type": "platform"
    },
    "Microsoft.AspNetCore.Authentication.Cookies": "1.0.0",
    "Microsoft.AspNetCore.Diagnostics": "1.0.0",
    "Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore": "1.0.0",
    "Microsoft.AspNetCore.Identity.EntityFrameworkCore": "1.0.0",
    "Microsoft.AspNetCore.Mvc": "1.0.1",
    "Microsoft.AspNetCore.Razor.Tools": {
      "version": "1.0.0-preview2-final",
      "type": "build"
    },
    "Microsoft.AspNetCore.Server.IISIntegration": "1.0.0",
    "Microsoft.AspNetCore.Server.Kestrel": "1.0.1",
    "Microsoft.AspNetCore.StaticFiles": "1.0.0",
    "Microsoft.EntityFrameworkCore.Sqlite": "1.0.1",
    "Microsoft.EntityFrameworkCore.Sqlite.Design": {
      "version": "1.0.1",
      "type": "build"
    },
    "Microsoft.EntityFrameworkCore.Tools": {
      "version": "1.0.0-preview2-final",
      "type": "build"
    },
    "Microsoft.Extensions.Configuration.EnvironmentVariables": "1.0.0",
    "Microsoft.Extensions.Configuration.Json": "1.0.0",
    "Microsoft.Extensions.Configuration.UserSecrets": "1.0.0",
    "Microsoft.Extensions.Logging": "1.0.0",
    "Microsoft.Extensions.Logging.Console": "1.0.0",
    "Microsoft.Extensions.Logging.Debug": "1.0.0",
    "Microsoft.Extensions.Options.ConfigurationExtensions": "1.0.0",
    "Microsoft.VisualStudio.Web.BrowserLink.Loader": "14.0.0",
    "Microsoft.VisualStudio.Web.CodeGeneration.Tools": {
      "version": "1.0.0-preview2-final",
      "type": "build"
    },
    "Microsoft.VisualStudio.Web.CodeGenerators.Mvc": {
      "version": "1.0.0-preview2-final",
      "type": "build"
    }
  },

  "tools": {
    "BundlerMinifier.Core": "2.0.238",
    "Microsoft.AspNetCore.Razor.Tools": "1.0.0-preview2-final",
    "Microsoft.AspNetCore.Server.IISIntegration.Tools": "1.0.0-preview2-final",
    "Microsoft.EntityFrameworkCore.Tools": "1.0.0-preview2-final",
    "Microsoft.Extensions.SecretManager.Tools": "1.0.0-preview2-final",
    "Microsoft.VisualStudio.Web.CodeGeneration.Tools": {
      "version": "1.0.0-preview2-final",
      "imports": [
        "portable-net45+win8"
      ]
    }
  },

  "frameworks": {
    "netcoreapp1.0": {
      "imports": [
        "dotnet5.6",
        "portable-net45+win8"
      ]
    }
  },

  "buildOptions": {
    "emitEntryPoint": true,
    "preserveCompilationContext": true
  },

  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true
    }
  },

  "publishOptions": {
    "include": [
      "wwwroot",
      "Views",
      "Areas/**/Views",
      "appsettings.json",
      "web.config"
    ]
  },

  "scripts": {
    "prepublish": [ "bower install", "dotnet bundle" ],
    "postpublish": [ "dotnet publish-iis --publish-folder %publish:OutputPath% --framework %publish:FullTargetFramework%" ]
  }
}

The IProfileService interface is implemented to add your user claims to the tokens. The IdentityWithAdditionalClaimsProfileService class implements the IProfileService interface in this example and is added to the services in the Startup class.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Claims;
using System.Threading.Tasks;
using IdentityModel;
using IdentityServer4.Extensions;
using IdentityServer4.Models;
using IdentityServer4.Services;
using IdentityServerWithAspNetIdentity.Models;
using Microsoft.AspNetCore.Identity;

namespace ResourceWithIdentityServerWithClient
{
    public class IdentityWithAdditionalClaimsProfileService : IProfileService
    {
        private readonly IUserClaimsPrincipalFactory<ApplicationUser> _claimsFactory;
        private readonly UserManager<ApplicationUser> _userManager;

        public IdentityWithAdditionalClaimsProfileService(UserManager<ApplicationUser> userManager,  IUserClaimsPrincipalFactory<ApplicationUser> claimsFactory)
        {
            _userManager = userManager;
            _claimsFactory = claimsFactory;
        }

        public async Task GetProfileDataAsync(ProfileDataRequestContext context)
        {
            var sub = context.Subject.GetSubjectId();

            var user = await _userManager.FindByIdAsync(sub);
            var principal = await _claimsFactory.CreateAsync(user);

            var claims = principal.Claims.ToList();
            if (!context.AllClaimsRequested)
            {
                claims = claims.Where(claim => context.RequestedClaimTypes.Contains(claim.Type)).ToList();
            }

            claims.Add(new Claim(JwtClaimTypes.GivenName, user.UserName));
            //new Claim(JwtClaimTypes.Role, "admin"),
            //new Claim(JwtClaimTypes.Role, "dataEventRecords.admin"),
            //new Claim(JwtClaimTypes.Role, "dataEventRecords.user"),
            //new Claim(JwtClaimTypes.Role, "dataEventRecords"),
            //new Claim(JwtClaimTypes.Role, "securedFiles.user"),
            //new Claim(JwtClaimTypes.Role, "securedFiles.admin"),
            //new Claim(JwtClaimTypes.Role, "securedFiles")

            if (user.IsAdmin)
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "admin"));
            }
            else
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "user"));
            }

            if (user.DataEventRecordsRole == "dataEventRecords.admin")
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "dataEventRecords.admin"));
                claims.Add(new Claim(JwtClaimTypes.Role, "dataEventRecords.user"));
                claims.Add(new Claim(JwtClaimTypes.Role, "dataEventRecords"));
            }
            else
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "dataEventRecords.user"));
                claims.Add(new Claim(JwtClaimTypes.Role, "dataEventRecords"));
            }

            if (user.SecuredFilesRole == "securedFiles.admin")
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "securedFiles.admin"));
                claims.Add(new Claim(JwtClaimTypes.Role, "securedFiles.user"));
                claims.Add(new Claim(JwtClaimTypes.Role, "securedFiles"));
            }
            else
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "securedFiles.user"));
                claims.Add(new Claim(JwtClaimTypes.Role, "securedFiles"));
            }

            claims.Add(new System.Security.Claims.Claim(StandardScopes.Email.Name, user.Email));
            

            context.IssuedClaims = claims;
        }

        public async Task IsActiveAsync(IsActiveContext context)
        {
            var sub = context.Subject.GetSubjectId();
            var user = await _userManager.FindByIdAsync(sub);
            context.IsActive = user != null;
        }
    }
}

Step 2: Add the Web API for the resource data

The MVC Controller DataEventRecordsController is used for CRUD API requests. This is just a dummy implementation. I would implement all resource server logic in a separate project. The Authorize attribute is used with and without policies. The policies are configured in the Startup class.

using ResourceWithIdentityServerWithClient.Model;

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using System.Collections.Generic;
using System;

namespace ResourceWithIdentityServerWithClient.Controllers
{
    [Authorize]
    [Route("api/[controller]")]
    public class DataEventRecordsController : Controller
    {
        [Authorize("dataEventRecordsUser")]
        [HttpGet]
        public IActionResult Get()
        {
            return Ok(new List<DataEventRecord> { new DataEventRecord { Id =1, Description= "Fake", Name="myname", Timestamp= DateTime.UtcNow } });
        }

        [Authorize("dataEventRecordsAdmin")]
        [HttpGet("{id}")]
        public IActionResult Get(long id)
        {
            return Ok(new DataEventRecord { Id = 1, Description = "Fake", Name = "myname", Timestamp = DateTime.UtcNow });
        }

        [Authorize("dataEventRecordsAdmin")]
        [HttpPost]
        public void Post([FromBody]DataEventRecord value)
        {
            
        }

        [Authorize("dataEventRecordsAdmin")]
        [HttpPut("{id}")]
        public void Put(long id, [FromBody]DataEventRecord value)
        {
            
        }

        [Authorize("dataEventRecordsAdmin")]
        [HttpDelete("{id}")]
        public void Delete(long id)
        {
            
        }
    }
}

Step 3: Add client Angular 2 client API

The Angular 2 client part of the application is set up using the ASP.NET Core, Angular2 with Webpack and Visual Studio article. Webpack is then used to build the client application.

Any SPA client which supports the OpenID Connect Implicit Flow can be used. IdentityServer4 (IdentityModel) also has good examples using the OIDC javascript client.

Step 4: Configure application host URL

The URL host is the same for both the client and the server. This is configured in the Config class as a static property HOST_URL and used throughout the server side of the application.

public class Config
{
    public static string HOST_URL = "https://localhost:44363";

    // ... client and scope configuration (GetClients, GetScopes) omitted here
}

The client application reads the configuration from the app.constants.ts provider.

import { Injectable } from '@angular/core';

@Injectable()
export class Configuration {
    public Server: string = "https://localhost:44363";
}

IIS Express is configured to run with HTTPS and matches these configurations. If a different port is used, you need to change these two code configurations. In a production environment, the data should be configurable per deployment.

Step 5: Deactivate the consent view

The consent view is deactivated because the client is the only client to use this data resource and always requires the same consent. To improve the user experience, the consent view is removed from the flow. This is done by setting the RequireConsent property to false in the client configuration.

public static IEnumerable<Client> GetClients()
{
	// client credentials client
	return new List<Client>
	{
		new Client
		{
			ClientName = "singleapp",
			ClientId = "singleapp",
			RequireConsent = false,
			AccessTokenType = AccessTokenType.Reference,
			//AccessTokenLifetime = 600, // 10 minutes, default 60 minutes
			AllowedGrantTypes = GrantTypes.Implicit,
			AllowAccessTokensViaBrowser = true,
			RedirectUris = new List<string>
			{
				HOST_URL

			},
			PostLogoutRedirectUris = new List<string>
			{
				HOST_URL + "/Unauthorized"
			},
			AllowedCorsOrigins = new List<string>
			{
				HOST_URL
			},
			AllowedScopes = new List<string>
			{
				"openid",
				"dataEventRecords"
			}
		}
	};
}

Step 6: Deactivate logout screens

When the Angular 2 client requests a logout, the client is logged out, reference tokens are invalidated for this application and user, and the user is redirected back to the Angular 2 application without the server account logout views. This improves the user experience.

The two existing Logout action methods are removed from the AccountController and the following is implemented instead. The controller requires the IPersistedGrantService to remove the reference tokens.
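
The Logout action below uses the _interaction and _persistedGrantService fields; a minimal sketch of how the controller might obtain them (assuming constructor injection, with field names matching the action below) is:

public class AccountController : Controller
{
    private readonly IIdentityServerInteractionService _interaction;
    private readonly IPersistedGrantService _persistedGrantService;

    public AccountController(
        IIdentityServerInteractionService interaction,
        IPersistedGrantService persistedGrantService)
    {
        _interaction = interaction;
        _persistedGrantService = persistedGrantService;
    }

    // Logout action shown below
}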

/// <summary>
/// special logout to skip logout screens
/// </summary>
/// <param name="logoutId"></param>
/// <returns></returns>
[HttpGet]
public async Task<IActionResult> Logout(string logoutId)
{
	var user = HttpContext.User.Identity.Name;
	var subjectId = HttpContext.User.Identity.GetSubjectId();

	// delete authentication cookie
	await HttpContext.Authentication.SignOutAsync();


	// set this so UI rendering sees an anonymous user
	HttpContext.User = new ClaimsPrincipal(new ClaimsIdentity());

	// get context information (client name, post logout redirect URI and iframe for federated signout)
	var logout = await _interaction.GetLogoutContextAsync(logoutId);

	var vm = new LoggedOutViewModel
	{
		PostLogoutRedirectUri = logout?.PostLogoutRedirectUri,
		ClientName = logout?.ClientId,
		SignOutIframeUrl = logout?.SignOutIFrameUrl
	};


	await _persistedGrantService.RemoveAllGrantsAsync(subjectId, "singleapp");

	return Redirect(Config.HOST_URL + "/Unauthorized");
}

Step 7: Configure Startup to use all three application parts

The Startup class configures all three application parts to run together. The Angular 2 application requires that its client routes are routed on the client and not the server. Middleware is added so that the server does not handle the client routes.

The API service needs to check and validate the reference token. Policies are added for this, and the extension method UseIdentityServerAuthentication is used to check the reference tokens for each request.

IdentityServer4 is setup to use Identity with a SQLite database.

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using IdentityServerWithAspNetIdentity.Data;
using IdentityServerWithAspNetIdentity.Models;
using IdentityServerWithAspNetIdentity.Services;
using QuickstartIdentityServer;
using IdentityServer4.Services;
using System.Security.Cryptography.X509Certificates;
using System.IO;
using System.Linq;
using Microsoft.AspNetCore.Http;
using System.Collections.Generic;
using System.IdentityModel.Tokens.Jwt;
using System;
using Microsoft.AspNetCore.Authorization;
using IdentityServer4.AccessTokenValidation;

namespace ResourceWithIdentityServerWithClient
{
    public class Startup
    {
        private readonly IHostingEnvironment _environment;

        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

            if (env.IsDevelopment())
            {
                builder.AddUserSecrets();
            }

            _environment = env;

            builder.AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        public IConfigurationRoot Configuration { get; }

        public void ConfigureServices(IServiceCollection services)
        {
            var cert = new X509Certificate2(Path.Combine(_environment.ContentRootPath, "damienbodserver.pfx"), "");

            services.AddDbContext<ApplicationDbContext>(options =>
                options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));

            services.AddIdentity<ApplicationUser, IdentityRole>()
            .AddEntityFrameworkStores<ApplicationDbContext>()
            .AddDefaultTokenProviders();

            var guestPolicy = new AuthorizationPolicyBuilder()
            .RequireAuthenticatedUser()
            .RequireClaim("scope", "dataEventRecords")
            .Build();

            services.AddAuthorization(options =>
            {
                options.AddPolicy("dataEventRecordsAdmin", policyAdmin =>
                {
                    policyAdmin.RequireClaim("role", "dataEventRecords.admin");
                });
                options.AddPolicy("dataEventRecordsUser", policyUser =>
                {
                    policyUser.RequireClaim("role", "dataEventRecords.user");
                });

            });

            services.AddMvc();

            services.AddTransient<IProfileService, IdentityWithAdditionalClaimsProfileService>();

            services.AddTransient<IEmailSender, AuthMessageSender>();
            services.AddTransient<ISmsSender, AuthMessageSender>();

            services.AddDeveloperIdentityServer()
                .SetSigningCredential(cert)
                .AddInMemoryScopes(Config.GetScopes())
                .AddInMemoryClients(Config.GetClients())
                .AddAspNetIdentity<ApplicationUser>()
                .AddProfileService<IdentityWithAdditionalClaimsProfileService>();
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole(Configuration.GetSection("Logging"));
            loggerFactory.AddDebug();

            var angularRoutes = new[] {
                "/Unauthorized",
                "/Forbidden",
                "/home",
                "/dataeventrecords/",
                "/dataeventrecords/create",
                "/dataeventrecords/edit/",
                "/dataeventrecords/list",
                };

            app.Use(async (context, next) =>
            {
                if (context.Request.Path.HasValue && null != angularRoutes.FirstOrDefault(
                    (ar) => context.Request.Path.Value.StartsWith(ar, StringComparison.OrdinalIgnoreCase)))
                {
                    context.Request.Path = new PathString("/");
                }

                await next();
            });

            app.UseDefaultFiles();

            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
                app.UseDatabaseErrorPage();
                app.UseBrowserLink();
            }
            else
            {
                app.UseExceptionHandler("/Home/Error");
            }

            app.UseIdentity();
            app.UseIdentityServer();

            app.UseStaticFiles();

            JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();

            IdentityServerAuthenticationOptions identityServerValidationOptions = new IdentityServerAuthenticationOptions
            {
                Authority = Config.HOST_URL + "/",
                ScopeName = "dataEventRecords",
                ScopeSecret = "dataEventRecordsSecret",
                AutomaticAuthenticate = true,
                SupportedTokens = SupportedTokens.Both,
                // TokenRetriever = _tokenRetriever,
                // required if you want to return a 403 and not a 401 for forbidden responses
                AutomaticChallenge = true,
            };

            app.UseIdentityServerAuthentication(identityServerValidationOptions);

            app.UseMvcWithDefaultRoute();
        }
    }
}

The application can then be run and tested. To test, right click the project and debug.

Links

https://github.com/IdentityServer/IdentityServer4

http://docs.identityserver.io/en/dev/

https://github.com/IdentityServer/IdentityServer4.Samples

https://docs.asp.net/en/latest/security/authentication/identity.html

https://github.com/IdentityServer/IdentityServer4/issues/349

ASP.NET Core, Angular2 with Webpack and Visual Studio



Andrew Lock: Injecting services into ValidationAttributes in ASP.NET Core

Injecting services into ValidationAttributes in ASP.NET Core

I was battling the other day writing a custom DataAnnotations ValidationAttribute, where I needed access to a service class to perform the validation. The documentation on creating custom attributes is excellent, covering both server side and client side validation, but it doesn't mention this, presumably relatively common, requirement. This post describes how to use dependency injection with ValidationAttributes in ASP.NET Core, and the process I took in trying to figure out how!

Injecting services into attributes in general has always been somewhat problematic as you can't use constructor injection for anything that's not a constant. This often leads to implementations requiring some sort of service locator pattern when external services are required, or a factory pattern to create the attributes.

tl;dr; ValidationAttribute.IsValid() provides a ValidationContext parameter you can use to retrieve services from the DI container by calling GetService().

Injecting services into ActionFilters

In ASP.NET Core MVC, as well as having simple 'normal' IFilter attributes that can be used to decorate your actions, there are ServiceFilter and TypeFilter attributes. These implement the IFilterFactory interface, which, as the name suggests, acts as a factory for IFilters!

These two filter types allow you to use classes with constructor dependencies as attributes. For example, we can create an IFilter implementation that has external dependencies:

public class FilterClass : ActionFilterAttribute  
{
  public FilterClass(IDependency1 dependency1, IDependency2 dependency2)
  {
    // ...use dependencies
  }
}

We can then decorate our controller actions to use FilterClass by using the ServiceFilter or TypeFilter:

public class HomeController: Controller  
{
    [TypeFilter(typeof(FilterClass))]
    [ServiceFilter(typeof(FilterClass))]
    public IActionResult Index()
    {
        return View();
    }
}

Both of these attributes will return an instance of the FilterClass to the MVC Pipeline when requested, as though the FilterClass was an attribute applied directly to the Action. The difference between them lies in how they create an instance of the FilterClass.

The ServiceFilter will attempt to resolve an instance of FilterClass directly from the IoC container, so the FilterClass and its dependencies must be registered with the IoC container.

The TypeFilter attribute also creates an instance of the FilterClass, but only its dependencies are resolved from the IoC container; the FilterClass itself does not need to be registered.
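
In practical terms, that means a registration along the following lines in ConfigureServices; this is just a sketch, and the IDependency1/IDependency2 implementations are hypothetical:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMvc();

    // Needed for [ServiceFilter(typeof(FilterClass))] - the filter itself comes from the container
    services.AddScoped<FilterClass>();

    // Needed for both [ServiceFilter] and [TypeFilter] - the filter's constructor dependencies
    services.AddScoped<IDependency1, Dependency1>();
    services.AddScoped<IDependency2, Dependency2>();
}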

For more details on using TypeFilter and ServiceFilter see the documentation or this post.

How ValidationAttributes are resolved

For my CustomValidationAttribute I needed access to an external service to perform the validation:

public class CustomValidationAttribute: ValidationAttribute  
{
  protected override ValidationResult IsValid(object value, ValidationContext validationContext)
    {
        // ... need access to external service here
    }
}

In my first attempt to inject a service I thought I would have to take a similar approach to the ServiceFilter and TypeFilter attributes. Optimistically, I created a TypeFilter, passed in my CustomValidationAttribute, applied it to the model property and crossed my fingers.

It didn't work.

The mechanism by which DataAnnotation ValidationAttributes are applied to your model is completely different to the IFilter and IFilterFactory attributes used by the MVC infrastructure to build a pipeline.

The default implementation of IModelValidatorProvider used by the Microsoft.AspNetCore.Mvc.DataAnnotations library (cunningly called DataAnnotationsModelValidatorProvider) is responsible for creating the IModelValidator instances in the method CreateValidators. The IModelValidator is responsible for performing the actual validation of a decorated property.

I thought about creating a custom IModelValidatorProvider and creating the validators myself using an ObjectFactory, similar to the way the ServiceFilter and TypeFilter work.

Inside the DataAnnotationsModelValidatorProvider.CreateValidators method is this section of code, which creates a DataAnnotationsModelValidator object from a ValidationAttribute (see here for the full code):

var attribute = validatorItem.ValidatorMetadata as ValidationAttribute;  
if (attribute == null)  
{
    continue;
}

var validator = new DataAnnotationsModelValidator(  
    _validationAttributeAdapterProvider,
    attribute,
    stringLocalizer);

As you can see, the attributes are already created at this point, and exist as ValidatorMetadata on the ModelValidatorProviderContext passed to the function. In order to be able to use a TypeFilter-like approach, we would have to hook in much further up the stack.

At this point I decided that I must be missing something, as it couldn't possibly be this difficult…

The solution

Sure enough, the final answer was simple!

When creating a custom validation attribute you need to override the IsValid method:

public class CustomValidationAttribute : ValidationAttribute  
{
    protected override ValidationResult IsValid(object value, ValidationContext validationContext)
    {
        // ... validation logic
    }
}

As you can see, you are provided a ValidationContext as part of the method call. The context object contains a number of properties related to the object currently being validated, and also this handy method:

public object GetService(Type serviceType);  

This hooks into the IoC IServiceProvider to allow retrieving services in your ValidationAttributes:

protected override ValidationResult IsValid(object value, ValidationContext validationContext)  
{
    // GetService returns an object, so cast it to the service type you need
    var service = (IExternalService)validationContext.GetService(typeof(IExternalService));

    // ... use the service to perform the validation, then return a result
    return ValidationResult.Success;
}

So in the end, nice and easy, no need for the complex re-implementations route I was eyeing up.

Happy validating!


Dominick Baier: New in IdentityServer4: Resource Owner Password Validation

Not completely new, but re-designed.

In IdentityServer3, we used the user service for both interactive as well as non-interactive authentication. In IdentityServer4, the interactive authentication is done by the UI.

OAuth 2 resource owner password validation is disabled by default – but you can add support for it by implementing and registering the IResourceOwnerPasswordValidator interface.

This gives you more flexibility than in IdentityServer3, since you get access to the raw request and you have more control over the token response via the new GrantValidationResult.


Filed under: ASP.NET, IdentityServer, OAuth, WebAPI


Andrew Lock: Localising the DisplayAttribute and avoiding magic strings in ASP.NET Core

Localising the DisplayAttribute and avoiding magic strings in ASP.NET Core

This post follows on from my previous post about localising an ASP.NET Core application. At the end of that article, we had localised our application so that the user could choose their culture, which would update the page title and the validation attributes with the appropriate translation, but not the form labels. In this post, we cover some of the problems you may run into when localising your application and approaches to deal with them.

Localising the DisplayAttribute and avoiding magic strings in ASP.NET Core

Brief Recap

Just so we're all on the same page, I'll briefly recap how localisation works in ASP.NET Core. If you would like a more detailed description, check out my previous post or the documentation.

Localisation is handled in ASP.NET Core through two main abstractions: IStringLocalizer and IStringLocalizer<T>. These allow you to retrieve the localised version of a key by passing in a string; if the key does not exist for that resource, or you are using the default culture, the key itself is returned as the resource:

public class ExampleClass  
{
    public ExampleClass(IStringLocalizer<ExampleClass> localizer)
    {
        // If the resource exists, this returns the localised string
        var localisedString1 = localizer["I exist"]; // "J'existe"

        // If the resource does not exist, the key itself is returned
        var localisedString2 = localizer["I don't exist"]; // "I don't exist"
    }
}

Resources are stored in .resx files that are named according to the class they are localising. So for example, the IStringLocalizer<ExampleClass> localiser would look for a file named (something similar to) ExampleClass.fr-FR.resx. Microsoft recommends that the resource keys/names in the .resx files are the localised values in the default culture. That way you can write your application without having to create any resource files - the supplied string will be used as the resource.

As well as arbitrary strings like this, DataAnnotations which derive from ValidationAttribute also have their ErrorMessage property localised automatically. However the DisplayAttribute and other non-ValidationAttributes are not localised.

Finally, you can localise your Views, either providing whole replacements for your View by using filenames of the form Index.fr-FR.cshtml, or by localising specific strings in your view with another abstraction, the IViewLocalizer, which acts as a view-specific wrapper around IStringLocalizer.

Some of the pitfalls

There are two significant issues I personally find with the current state of localisation:

  1. Magic strings everywhere
  2. Can't localise the DisplayAttribute

The first of these is a design decision by Microsoft, to reduce the ceremony of localising an application. Instead of having to worry about extracting all your hard coded strings out of the code and into .resx files, you can just wrap it in a call to the IStringLocalizer and worry about localising other languages down the line.

While the attempt to improve productivity is a noble goal, it comes with a risk. The problem is that the string values embedded in your code ("I exist" and "I don't exist" in the code above) are serving a dual purpose, both as a string resource for the default culture, and as a key into a resource dictionary.

Inevitably, at some point you will introduce a typo into one of your string resources - it's just a matter of time. You had better be sure whoever spots it understands the implications of changing it, however, as fixing your typo will cause every other localised language to break. The default resource which is embedded in your code can only be changed if you ensure that every other resource file changes at the same time. That coupling is incredibly fragile, and it will not necessarily be obvious to the person correcting the typo that anything has broken. It is only obvious if they explicitly change culture and notice that the string is no longer localised.

The second issue, related to the DisplayAttribute, seems like a fairly obvious omission - by its nature it contains values which are normally highly visible (used as labels for a form) and will pretty much always need to be localised. As I'll show shortly there are workarounds for this, but currently they are rather clumsy.

It may be that these issues either don't bother you or are not a big deal, but I wanted to work out how to deal with them in a way that made me more comfortable. In the next sections I show how I did that.

Removing the magic strings

Removing the magic strings is something that I tend to do in any new project. MVC typically uses strings for any sort of dictionary storage, for example Session storage, ViewData, AuthorizationPolicy names, the list goes on. I've been bitten too many times by subtle typos causing unexpected behaviour that I like to pull these strings into utility classes with names like ViewDataKeys and PolicyNames:

public static class ViewDataKeys  
{
    public const string Title = "Title";
}
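
A typical usage then looks something like this (the action and title value are purely illustrative):

public IActionResult Index()  
{
    // A typo in the key is now a compile error rather than a silent new dictionary entry
    ViewData[ViewDataKeys.Title] = "Home Page";
    return View();
}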

That way, I can use the strongly typed Title property whenever I'm accessing ViewData - I get intellisense, avoid typos, and can rename safely. This is a pretty common approach, and it can be applied just as easily with our localisation problem.

public static class ResourceKeys  
{
    public const string HomePage = "HomePage";
    public const string Required = "Required";
    public const string NotAValidEmail = "NotAValidEmail";
    public const string YourEmail = "YourEmail";
}

Simply create a static class to hold your string key names, and instead of using the resource in the default culture as the key, use the appropriate strongly typed member:

public class HomeViewModel  
{
    [Required(ErrorMessage = ResourceKeys.Required)]
    [EmailAddress(ErrorMessage = ResourceKeys.NotAValidEmail)]
    [Display(Name = "Your Email")]
    public string Email { get; set; }
}

Here you can see the ErrorMessage properties of our ValidationAttributes reference the static properties instead of the resource in the default culture.

The final step is to add a .resx file for each localised class for the default language (without a culture suffix on the file name). This is the downside to this approach that Microsoft were trying to avoid with their design, and I admit, it is a bit of a drag. But at least you can fix typos in your strings without breaking all your other languages!

How to Localise DisplayAttribute

Now we have the magic strings fixed, we just need to try and localise the DisplayAttribute. As of right now, the only way I have found to localise the display attribute is to use the legacy localisation capabilities which still reside in the DataAnnotation attributes, namely the ResourceType property.

This property is a Type, and allows you to specify a class in your solution that contains a static property corresponding to the value provided in the Name of the DisplayAttribute. This allows us to use the Visual Studio resource file designer to auto-generate a backing class with the required properties to act as hooks for the localisation.

Localising the DisplayAttribute and avoiding magic strings in ASP.NET Core

If you create a .resx file in Visual Studio without a culture suffix, it will automatically create a .designer.cs file for you. With the new localisation features of ASP.NET Core, this can typically be deleted, but in this case we need it. Generating the above resource file in Visual Studio will generate a backing class similar to the following:

public class ViewModels_HomeViewModel {

    private static global::System.Resources.ResourceManager resourceMan;
    private static global::System.Globalization.CultureInfo resourceCulture;

    // details hidden for brevity

    public static string NotAValidEmail {
        get {
            return ResourceManager.GetString("NotAValidEmail", resourceCulture);
        }
    }

    public static string Required {
        get {
            return ResourceManager.GetString("Required", resourceCulture);
        }
    }

    public static string YourEmail {
        get {
            return ResourceManager.GetString("YourEmail", resourceCulture);
        }
    }
}

We can now update our display attribute to use the generated resource, and everything will work as expected. We'll also remove the magic string from the Name attribute at this point and move the resource into our .resx file:

public class HomeViewModel  
{
    [Required(ErrorMessage = ResourceKeys.Required)]
    [EmailAddress(ErrorMessage = ResourceKeys.NotAValidEmail)]
    [Display(Name = ResourceKeys.YourEmail, ResourceType = typeof(Resources.ViewModels_HomeViewModel))]
    public string Email { get; set; }
}

If we run our application again, you can see that the display attribute is now localised to say 'Votre Email' - lovely!

Localising the DisplayAttribute and avoiding magic strings in ASP.NET Core

How to localise DisplayAttribute in the future

If that seems like a lot of work to get a localised DisplayAttribute then you're not wrong. That's especially true if you're not using Visual Studio, and so don't have the resx-auto-generation process.

Unfortunately it's a tricky problem to work around currently, in that it's just fundamentally not supported in the current version of MVC. The localisation of the ValidationAttribute.ErrorMessage happens deep in the inner workings of the MVC pipeline (in the DataAnnotationsMetadataProvider) and this is ideally where the localisation of the DisplayAttribute should be happening.

Luckily, this has already been fixed and is currently on the development branch of the ASP.NET Core repo. Theoretically that means it should appear in the 1.1.0 release when that happens, but these are very early days at the moment!

Still, I wanted to give the current implementation a test, and luckily this is pretty simple to setup, as all the ASP.NET Core packages produced as part of the normal development workflow are pushed to various public MyGet feeds. I decided to use the 'aspnetcore-dev' feed, and updated my application to pull NuGet packages from it.

Be aware that pulling packages from this feed should not be something you do in a production app. Things are likely to change and break, so stick to the release NuGet feed unless you are experimenting or you know what you're doing!

Adding a pre-release MVC package

First, add a nuget.config file to your project and configure it to point to the aspnetcore-dev feed:

<?xml version="1.0" encoding="utf-8"?>  
<configuration>  
  <packageSources>
    <add key="AspNetCore" value="https://dotnet.myget.org/F/aspnetcore-dev/api/v3/index.json" />
    <add key="NuGet" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
</configuration>  

Next, update the MVC package in your project.json to pull down the latest package, as of writing this was version 1.1.0-alpha1-22152, and run a dotnet restore.

{
  "dependencies": {
    ...
    "Microsoft.AspNetCore.Mvc": "1.1.0-alpha1-22152",
    ...
  }
}

And that's it! We can remove the ugly ResourceType property from a DisplayAttribute, delete our resource .designer.cs file and everything just works as you would expect. If you are using the magic string approach, that just works, or you can use the approach I described above with ResourceKeys.

public class HomeViewModel  
{
    [Required(ErrorMessage = ResourceKeys.Required)]
    [EmailAddress(ErrorMessage = ResourceKeys.NotAValidEmail)]
    [Display(Name = ResourceKeys.YourEmail)]
    public string Email { get; set; }
}

As already mentioned, this is early pre-release days, so it will be a while until this capability is generally available, but it's heartening to see it ready and waiting!

Loading all resources from a single file

The final slight bugbear I have with the current localisation implementation is the resource file naming. As described in the previous post, each localised class or view gets its own embedded resource file that has to match the file name. I was toying with the idea of having a single .resx file for each culture which contains all the required strings instead, with the resource key prefixed by the type name, but I couldn't see any way of doing this out of the box.

You can get close to this out of the box, by using a 'Shared resource' as the type parameter in injected IStringLocalizer<T>, so that all the resources using it will, by default, be found in a single .resx file. Unfortunately that only goes part of the way, as you are still left with the DataAnnotations and IViewLocalizer which will use the default implementations, and expect different files per class.
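
To illustrate the shared resource approach (the class and key names here are just an example), you create an empty marker class and inject a localiser typed on it, so every string it serves is looked up in a single SharedResource.{culture}.resx file:

// Marker class - it exists only to give the shared .resx file a name and namespace
public class SharedResource { }

public class HomeController : Controller  
{
    private readonly IStringLocalizer<SharedResource> _localizer;

    public HomeController(IStringLocalizer<SharedResource> localizer)
    {
        _localizer = localizer;
    }

    public IActionResult Index()
    {
        // Looked up in SharedResource.fr-FR.resx (for example) rather than a per-class file
        ViewData["Title"] = _localizer["HomePage"];
        return View();
    }
}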

As far as I can see, in order to achieve this, we need to replace the IStringLocalizer and IStringLocalizerFactory services with our own implementations that will load the strings from a single file. Given this small change, I looked at just overriding the default ResourceManagerStringLocalizerFactory implementation; however, the methods that would need changing are not virtual, which leaves us re-implementing the whole class again.

The code is a little long and tortuous, and this post is already long enough, so I won't post it here, but you can find the approach I took on GitHub. It is in a somewhat incomplete but working state, so if anyone is interested in using it then it should provide a good starting point for a proper implementation.

For my part, and given the difficulty of working with .resx files outside of Visual Studio, I have started to look at alternative storage formats. Thanks to the use of abstractions like IStringLocalizerFactory in ASP.NET Core, it is perfectly possible to load resources from other sources.

In particular, Damien has a great post with source code on GitHub on loading resources from the database using Entity Framework Core. Alternatively, Ronald Wildenberg has built a JsonLocalizer which is available on GitHub.

Summary

In this post I described a couple of the pitfalls of the current localisation framework in ASP.NET Core. I showed how magic strings could be the source of bugs and how to replace them with a static helper class.

I also showed how to localise the DisplayAttribute using the ResourceType property as required in the current 1.0.0 release of ASP.NET Core, and showed how it will work in the (hopefully near) future.

Finally I linked to an example project that stores all resources in a single file per culture, instead of a file per resource type.


Damien Bowden: Setting the NLog database connection string in the ASP.NET Core appsettings.json

This article shows how the NLog connection string for the DatabaseTarget can be configured in the appsettings.json in an ASP.NET Core project and not the XML nlog.config file. All the NLog target properties can be configured in code if required and not just in the NLog XML configuration file.

Code: https://github.com/damienbod/AspNetCoreNlog

NLog posts in this series:

  1. ASP.NET Core logging with NLog and Microsoft SQL Server
  2. ASP.NET Core logging with NLog and Elasticsearch
  3. Setting the NLog database connection string in the ASP.NET Core appsettings.json

The XML nlog.config file is the same as in the previous post, with no database connection string configured.

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      internalLogLevel="Warn"
      internalLogFile="C:\git\damienbod\AspNetCoreNlog\Logs\internal-nlog.txt">

           
  <targets>

    <target name="database" xsi:type="Database" >

<!-- THIS is not required, read from the appsettings.json

<connectionString>
        Data Source=N275\MSSQLSERVER2014;Initial Catalog=Nlogs;Integrated Security=True;
</connectionString>
-->

<!--
  Remarks:
    The appsetting layouts require the NLog.Extended assembly.
    The aspnet-* layouts require the NLog.Web assembly.
    The Application value is determined by an AppName appSetting in Web.config.
    The "NLogDb" connection string determines the database that NLog write to.
    The create dbo.Log script in the comment below must be manually executed.

  Script for creating the dbo.Log table.

  SET ANSI_NULLS ON
  SET QUOTED_IDENTIFIER ON
  CREATE TABLE [dbo].[Log] (
      [Id] [int] IDENTITY(1,1) NOT NULL,
      [Application] [nvarchar](50) NOT NULL,
      [Logged] [datetime] NOT NULL,
      [Level] [nvarchar](50) NOT NULL,
      [Message] [nvarchar](max) NOT NULL,
      [Logger] [nvarchar](250) NULL,
      [Callsite] [nvarchar](max) NULL,
      [Exception] [nvarchar](max) NULL,
    CONSTRAINT [PK_dbo.Log] PRIMARY KEY CLUSTERED ([Id] ASC)
      WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
  ) ON [PRIMARY]
-->

          <commandText>
              insert into dbo.Log (
              Application, Logged, Level, Message,
              Logger, CallSite, Exception
              ) values (
              @Application, @Logged, @Level, @Message,
              @Logger, @Callsite, @Exception
              );
          </commandText>

          <parameter name="@application" layout="AspNetCoreNlog" />
          <parameter name="@logged" layout="${date}" />
          <parameter name="@level" layout="${level}" />
          <parameter name="@message" layout="${message}" />

          <parameter name="@logger" layout="${logger}" />
          <parameter name="@callSite" layout="${callsite:filename=true}" />
          <parameter name="@exception" layout="${exception:tostring}" />
      </target>
      
  </targets>

  <rules>
    <logger name="*" minlevel="Trace" writeTo="database" />
      
  </rules>
</nlog>

The NLog DatabaseTarget connection string is configured in the appsettings.json as described in the ASP.NET Core configuration docs.

{
    "Logging": {
        "IncludeScopes": false,
        "LogLevel": {
            "Default": "Debug",
            "System": "Information",
            "Microsoft": "Information"
        }
    },
    "ElasticsearchUrl": "http://localhost:9200",
    "ConnectionStrings": {
        "NLogDb": "Data Source=N275\\MSSQLSERVER2014;Initial Catalog=Nlogs;Integrated Security=True;"
    }
}

The configuration is then read in the Startup constructor.

public Startup(IHostingEnvironment env)
{
	var builder = new ConfigurationBuilder()
		.SetBasePath(env.ContentRootPath)
		.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
		.AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
		.AddEnvironmentVariables();
	Configuration = builder.Build();
}

The NLog DatabaseTarget is then configured to use the connection string from the app settings, and all the DatabaseTarget instances in the NLog configuration are updated to use it. All target properties can be configured in this way if required.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	loggerFactory.AddNLog();

	foreach (DatabaseTarget target in LogManager.Configuration.AllTargets.Where(t => t is DatabaseTarget))
	{
		target.ConnectionString = Configuration.GetConnectionString("NLogDb");
	}
	
	LogManager.ReconfigExistingLoggers();
	
	app.UseMvc();
}

Links

https://github.com/NLog/NLog.Extensions.Logging

https://github.com/NLog

https://github.com/NLog/NLog/blob/38aef000f916bd5ffd8b80a5576afa2423192e84/examples/targets/Configuration%20API/Database/MSSQL/Example.cs

https://docs.asp.net/en/latest/fundamentals/logging.html

https://msdn.microsoft.com/en-us/magazine/mt694089.aspx

https://github.com/nlog/NLog/wiki/Database-target

https://docs.asp.net/en/latest/fundamentals/configuration.html



Andrew Lock: How to use machine-specific configuration with ASP.NET Core

How to use machine-specific configuration with ASP.NET Core

In this quick post I'll show how to easily setup machine-specific configuration in your ASP.NET Core applications. This allows you to use different settings depending on the name of the machine you are using.

The tl;dr; version is to add a json file to your project containing your computer's name, e.g. appsettings.MACHINENAME.json, and update your ConfigurationBuilder in Startup with the following line:

.AddJsonFile($"appsettings.{Environment.MachineName}.json", optional: true)

Background

Why would you want to do this? Well, it depends.

When working on an application with multiple people, you will often run into a situation where you need different configuration settings for each developer's machine. Typically, we find that file paths and sometimes connection strings need to be customised per developer.

In ASP.NET 4.x we found this somewhat of an ordeal to manage. Typically, we would create a connection string for each developer's machine, and create appsettings of the form MACHINENAME_APPSETTINGNAME. For example,

<configuration>  
  <connectionStrings>
    <add name="DAVES-MACBOOK" connectionString="Data Source=DAVES-MACBOOK;Initial Catalog=TestApp; Trusted_Connection=True;" />
    <add name="JON-PC" connectionString="Data Source=JON-PC;Initial Catalog=TestAppDb; Trusted_Connection=True;" />
  </connectionStrings>
  <appSettings>
    <add key="DAVES-MACBOOK_StoragePath" value="D:\" />
    <add key="JON-PC_StoragePath" value="C:\Dump" />
  </appSettings>
</configuration>  

So in this case, for the two developer machines named DAVES-MACBOOK and JON-PC, we have a different connection string for each machine, as well as different values for each of the StoragePath application settings.

This requires a bunch of wrapper classes around accessing appsettings which, while a good idea generally, is a bit of an annoyance and ends up polluting web.config.

The new way in ASP.NET Core

With ASP.NET Core, the updated configuration system allows for a much cleaner replacement of settings depending on the environment.

For example, in the default configuration for a web application, you can have environment specific appsettings files such as appsettings.Production.json which will override the default values in the appropriate environment:

public Startup(IHostingEnvironment env)  
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
        .AddEnvironmentVariables();
    Configuration = builder.Build();
}

Similarly, environment variables and UserSecrets can be used to override the default values. It's likely that in the majority of cases, these are perfect for the situation described above - they apply only to the single machine, and can override the default values provided.

In larger teams and projects this approach will almost certainly be the correct one - each individual machine contains the specific settings for just that machine, and the repo isn't polluted with 101 different versions of the same setting.

However, it may be desirable in some cases, particularly in smaller teams, to actually store these values in the repo. Environment variables can be overwritten, UserSecrets can be deleted, and so on. With the .NET Core configuration system this alternative approach is simple to achieve with a single additional line:

.AddJsonFile($"appsettings.{Environment.MachineName}.json", optional: true)

This uses string interpolation to insert the current machine name in the file path. The Environment class contains a number of environment-specific static properties like ProcessorCount, NewLine and luckily for us, MachineName. Using this approach, we can add a configuration file for each user with their machine-specific values e.g.

appsettings.DAVES-MACBOOK.json:

{
  "ConnectionStrings": {
    "DefaultConnection": "Data Source=DAVES-MACBOOK;Initial Catalog=TestApp; Trusted_Connection=True;"
  },
  "StoragePath": "D:\"
}

appsettings.JON-PC.json:

{
  "ConnectionStrings": {
    "DefaultConnection": "Data Source=JON-PC;Initial Catalog=TestAppDb; Trusted_Connection=True;"
  },
  "StoragePath": "C:\Dump"
}
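Once the machine-specific file is registered with the ConfigurationBuilder, the values are consumed exactly like any other configuration value. A minimal sketch, assuming the Configuration property built in the Startup constructor shown earlier:

public void ConfigureServices(IServiceCollection services)
{
    // Reads whichever value won according to registration order: appsettings.json,
    // appsettings.{EnvironmentName}.json, appsettings.{MachineName}.json,
    // then environment variables.
    var connectionString = Configuration.GetConnectionString("DefaultConnection");
    var storagePath = Configuration["StoragePath"];

    // ... register any services that need these values
}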

Finally, if you want to deploy your machine-specific json files (you quite feasibly may not want to), then be sure to update the publishOptions section of your project.json:

{
  "publishOptions": {
    "include": [
      "wwwroot",
      "Views",
      "appsettings.json",
      "appsettings.*.json",
      "web.config"
    ]
  }
}

Note that you can use a wildcard in the appsettings name to publish all of your machine-specific appsettings files, but be aware this will also publish all files of the format appsettings.Development.json etc. too.

So there you have it: no more wrappers around the built-in app configuration, per-machine settings stored in the application repo, and a nice clean interface, all courtesy of the cleanly designed .NET Core configuration system!


Darrel Miller: RPC vs REST is not in the URL

 

In Phil Sturgeon’s article Understanding REST and RPC for HTTP APIs, he makes the assertion that the following URL is not technically RESTful.

POST /trips/123/start

I have made a habit of countering these assertions of “non-restfulness” with the following question: 

Can you point to the REST constraint that is violated and the negative system effects due to the violation of that constraint?

I don’t believe any REST constraint is being violated by that URL.

Machines don’t care what your URL says

As far as I understand, clients in RESTful systems treat URLs as opaque identifiers and therefore there are absolutely no negative system effects of a verb appearing in a URL. 

Obviously, having a URL that contains a verb that contradicts the method used would be confusing to a developer, but that is a documentation issue, e.g.
 
GET /deleteThing

that safely returns a thing could be RESTful but is very misleading. But,

GET /deleteThing

that deletes an object would violate the uniform interface constraint for HTTP requests and is therefore not RESTful.  The server’s behaviour is inconsistent with the GET HTTP method.  That’s bad.  The word delete in the URL is unfortunate.

RPC is a client thing

My understanding of RPC, based on James White’s original definition in RFC 707, is that it is a way of presenting a remote invocation as if it were a local invocation.  The details of how it's called, and therefore the contents of the URL, are irrelevant.  What is relevant is that a local call, in most languages, has the distinct characteristic of having exactly two outcomes.  You either get back the type you were expecting, or you get some kind of exception defined by your programming language of choice.  This is the limitation of RPC that makes it a poor choice for building reliable distributed systems.  If only Steve Vinoski’s talk, RPC and its Offspring: Convenient, Yet Fundamentally Flawed, were still available.  He does a much better job than I of explaining the issues.

REST requires flexibility in response handling

The REST uniform interface guarantees that when you make a request to a resource, you get a self-descriptive representation that describes the outcome of the request.  This could be one of many different outcomes, but it uses standardized metadata to convey the meaning of the response. It is the highly descriptive nature of the response that allows systems to be built that are resilient to failure and resilient to change.  That’s what makes something RESTful.

The URL is a name, nothing more


There is no reason that,

POST /SendUserMessage


could not be a completely RESTful interaction.  POST is defined in RFC 7231 as being something that can send data to a processing resource.  From the opposite perspective, there is no reason why,

POST /users/501/messages


could not be wrapped in a client library that exposes the method:

Message CreateMessage(int userId)


which is an RPC method.
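To make that concrete, such a wrapper might look roughly like the following. This is a hypothetical sketch; the URL, the Message type and the async signature are placeholders, not from the article.

using System.Net.Http;
using System.Threading.Tasks;

public class Message { /* payload properties omitted */ }

public class UserMessageClient
{
    private readonly HttpClient _http = new HttpClient();

    // RPC-style method: the caller sees an ordinary local call returning a Message,
    // while underneath a perfectly RESTful POST to /users/{id}/messages is made.
    public async Task<Message> CreateMessageAsync(int userId)
    {
        var response = await _http.PostAsync(
            $"https://example.org/users/{userId}/messages",
            new StringContent(string.Empty));

        response.EnsureSuccessStatusCode();

        // Deserialisation of the response body is omitted; the point is the
        // local-call shape of the API, not the HTTP plumbing.
        return new Message();
    }
}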

Names are important to humans

We are in the unfortunate situation that the term REST has been brutally misused by our industry.  It makes learning what REST is really difficult.  However, to a large extent, Phil did a really good job of positioning the various distributed architectural approaches.  But I don’t believe the distinction he is making between RPC and REST is correct.  Hence, my post.


Andrew Lock: Adding Localisation to an ASP.NET Core application

Adding Localisation to an ASP.NET Core application

In this post I'll walk through the process of adding localisation to an ASP.NET Core application using the recommended approach with resx resource files.

Introduction to Localisation

Localisation in ASP.NET Core is broadly similar to the way it works in ASP.NET 4.x. By default you would define a number of .resx resource files in your application, one for each culture you support. You then reference resources via a key, and depending on the current culture, the appropriate value is selected from the closest matching resource file.

While the concept of a .resx file per culture remains in ASP.NET Core, the way resources are used has changed quite significantly. In the previous version, when you added a .resx file to your solution, a designer file would be created, providing static strongly typed access to your resources through calls such as Resources.MyTitleString.

In ASP.NET Core, resources are accessed through two abstractions, IStringLocalizer and IStringLocalizer<T>, which are typically injected where needed via dependency injection. These interfaces have an indexer, that allows you to access resources by a string key. If no resource exists for the key (i.e. you haven't created an appropriate .resx file containing the key), then the key itself is used as the resource.

Consider the following example:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Localization;

public class ExampleClass  
{
    private readonly IStringLocalizer<ExampleClass> _localizer;
    public ExampleClass(IStringLocalizer<ExampleClass> localizer)
    {
        _localizer = localizer;
    }

    public string GetLocalizedString()
    {
        return _localizer["My localized string"];
    }
}

In this example, calling GetLocalizedString() will cause the IStringLocalizer<T> to check the current culture, and see if we have an appropriate resource file for ExampleClass containing a resource with the name/key "My localized string". If it finds one, it returns the localised version, otherwise, it returns "My localized string".

The idea behind this approach is to allow you to design your app from the beginning to use localisation, without having to do up front work to support it by creating the default/fallback .resx file. Instead, you can just write the default values, then add the resources in later.

Personally, I'm not sold on this approach - it makes me slightly twitchy to see all those magic strings around which are essentially keys into a dictionary. Any changes to the keys may have unintended consequences, as I'll show later in the post.

Adding localisation to your application

For now, I'm going to ignore that concern, and dive in using Microsoft's recommended approach. I've started from the default ASP.NET Core Web application without authentication - you can find all the code on GitHub.

The first step is to add the localisation services in your application. As we are building an MVC application, we'll also configure View localisation and DataAnnotations localisation. The localisation packages are already referenced indirectly by the Microsoft.AspNetCore.MVC package, so you should be able to add the services and middleware directly in your Startup class:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddLocalization(opts => { opts.ResourcesPath = "Resources"; });

    services.AddMvc()
        .AddViewLocalization(
            LanguageViewLocationExpanderFormat.Suffix,
            opts => { opts.ResourcesPath = "Resources"; })
        .AddDataAnnotationsLocalization();
}

These services allow you to inject the IStringLocalizer service into your classes. They also allow you to have localised View files (so you can have Views with names like MyView.fr.cshtml) and inject the IViewLocalizer, to allow you to use localisation in your view files. Calling AddDataAnnotationsLocalization configures the Validation attributes to retrieve resources via an IStringLocalizer.

The ResourcesPath property on the options object specifies the folder of our application in which resources can be found. So if the root of our application is found at ExampleProject, we have specified that our resources will be stored in the folder ExampleProject/Resources.

Configuring these classes is all that is required to allow you to use the localisation services in your application. However you will typically also need some way to select what the current culture is for a given request.

To do this, we use the RequestLocalizationMiddleware. This middleware uses a number of different providers to try and determine the current culture. To configure it with the default providers, we need to decide which cultures we support, and which is the default culture.

Note that the configuration example in the documentation didn't work for me, though the Localization.StarterWeb project they reference did, and is reproduced below.

public void ConfigureServices(IServiceCollection services)  
{
    // ... previous configuration not shown

    services.Configure<RequestLocalizationOptions>(
        opts =>
        {
            var supportedCultures = new[]
            {
                new CultureInfo("en-GB"),
                new CultureInfo("en-US"),
                new CultureInfo("en"),
                new CultureInfo("fr-FR"),
                new CultureInfo("fr"),
            };

            opts.DefaultRequestCulture = new RequestCulture("en-GB");
            // Formatting numbers, dates, etc.
            opts.SupportedCultures = supportedCultures;
            // UI strings that we have localized.
            opts.SupportedUICultures = supportedCultures;
        });
}

public void Configure(IApplicationBuilder app)  
{
    app.UseStaticFiles();

    var options = app.ApplicationServices.GetService<IOptions<RequestLocalizationOptions>>();
    app.UseRequestLocalization(options.Value);

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

Using localisation in your classes

We now have most of the pieces in place to start adding localisation to our application. We don't yet have a way for users to select which culture they want to use, but we'll come to that shortly. For now, let's look at how we go about retrieving a localised string.

Controllers and services

Whenever you want to access a localised string in your services or controllers, you can inject an IStringLocalizer<T> and use its indexer property. For example, imagine you want to localise a string in a controller:

public class HomeController: Controller  
{
    private readonly IStringLocalizer<HomeController> _localizer;

    public HomeController(IStringLocalizer<HomeController> localizer)
    {
        _localizer = localizer;
    }

    public IActionResult Index()
    {
        ViewData["MyTitle"] = _localizer["The localised title of my app!"];
        return View(new HomeViewModel());
    }
}

Calling _localizer[] will look up the provided string based on the current culture and the type HomeController. Assuming we have configured our application as discussed previously, the HomeController resides in the ExampleProject.Controllers namespace, and we are currently using the fr culture, then the localizer will look for either of the following resource files:

  • Resources/Controllers.HomeController.fr.resx
  • Resources/Controllers/HomeController.fr.resx

If a resource exists in one of these files with the key "The localised title of my app!" then it will be used, otherwise the key itself will be used as the resource. This means you don't need to add any resource files to get started with localisation - you can just use the default language string as your key and come back to add .resx files later.

Views

There are two kinds of localisation of views. As described previously, you can localise the whole view, duplicating it and editing as appropriate, and providing a culture suffix. This is useful if the views need to differ significantly between different cultures.

You can also localise strings in a similar way to that shown for the HomeController. Instead of an IStringLocalizer<T>, you inject an IViewLocalizer into the view. This handles HTML encoding a little differently, in that it allows you to store HTML in the resource and it won't be encoded before being output. Generally you'll want to avoid that however, and only localise strings, not HTML.

The IViewLocalizer uses the name of the View file to find the associated resources, so for the HomeController's Index.cshtml view, with the fr culture, the localiser will look for:

  • Resources/Views.Home.Index.fr.resx
  • Resources/Views/Home/Index.fr.resx

The IViewLocalizer is used in a similar way to IStringLocalizer<T> - pass in the string in the default language as the key for the resource:

@using Microsoft.AspNetCore.Mvc.Localization
@model AddingLocalization.ViewModels.HomeViewModel
@inject IViewLocalizer Localizer
@{
    ViewData["Title"] = Localizer["Home Page"];
}
<h2>@ViewData["MyTitle"]</h2>  

DataAnnotations

One final common area that needs localisation is DataAnnotations. These attributes can be used to provide validation, naming and UI hints of your models to the MVC infrastructure. When used, they provide a lot of additional declarative metadata to the MVC pipeline, allowing selection of appropriate controls for editing the property etc.

Error messages for DataAnnotation validation attributes all pass through an IStringLocalizer<T> if you configure your MVC services using AddDataAnnotationsLocalization(). As before, this allows you to specify the error message for an attribute in your default language in code, and use that as the key to other resources later.

public class HomeViewModel  
{
    [Required(ErrorMessage = "Required")]
    [EmailAddress(ErrorMessage = "The Email field is not a valid e-mail address")]
    [Display(Name = "Your Email")]
    public string Email { get; set; }
}

Here you can see we have three DataAnnotation attributes, two of which are ValidationAttributes, and the DisplayAttribute, which is not. The ErrorMessage specified for each ValidationAttribute is used as a key to lookup the appropriate resource using an IStringLocalizer<HomeViewModel>. Again, the files searched for will be something like:

  • Resources/ViewModels.HomeViewModel.fr.resx
  • Resources/ViewModels/HomeViewModel.fr.resx

A key thing to be aware of is that the DisplayAttribute is not localised using the IStringLocalizer<T>. This is far from ideal, but I'll address it in my next post on localisation.

Allowing users to select a culture

With all this localisation in place, the final piece of the puzzle is to actually allow users to select their culture. The RequestLocalizationMiddleware uses an extensible provider mechanism for choosing the current culture of a request, but it comes with three providers built in:

  • QueryStringRequestCultureProvider
  • AcceptLanguageHeaderRequestCultureProvider
  • CookieRequestCultureProvider

These allow you to specify a culture in the querystring (e.g ?culture=fr-FR), via the Accept-Language header in a request, or via a cookie. Of the three approaches, using a cookie is the least intrusive, as it is sent seamlessly with every request, and does not require the user to set the Accept-Language header in their browser or add a value to the querystring on every request.
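The provider mechanism is also extensible: if none of the built-in providers suit you, a custom provider can be inserted ahead of the defaults. Below is a minimal sketch using the framework's CustomRequestCultureProvider with a delegate; the path-based rule is purely illustrative and not part of the original sample.

services.Configure<RequestLocalizationOptions>(opts =>
{
    // ... supported cultures and default culture configured as shown earlier

    // Runs before the query string, cookie and Accept-Language providers
    opts.RequestCultureProviders.Insert(0, new CustomRequestCultureProvider(async context =>
    {
        // Purely illustrative: use French for any path under /fr
        if (context.Request.Path.StartsWithSegments("/fr"))
        {
            return new ProviderCultureResult("fr-FR");
        }

        return null; // fall through to the next provider
    }));
});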

Again, the Localization.StarterWeb sample project provides a handy implementation that shows how you can add a select box to the footer of your project to allow the user to set the language. Their choice is stored in a cookie, which is handled by the CookieRequestCultureProvider for each request. The provider then sets the CurrentCulture and CurrentUICulture of the thread for the request to the user's selection.

To add the selector to your application, create a partial view _SelectLanguagePartial.cshtml in the Shared folder of your Views:

@using System.Threading.Tasks
@using Microsoft.AspNetCore.Builder
@using Microsoft.AspNetCore.Localization
@using Microsoft.AspNetCore.Mvc.Localization
@using Microsoft.Extensions.Options

@inject IViewLocalizer Localizer
@inject IOptions<RequestLocalizationOptions> LocOptions

@{
    var requestCulture = Context.Features.Get<IRequestCultureFeature>();
    var cultureItems = LocOptions.Value.SupportedUICultures
        .Select(c => new SelectListItem { Value = c.Name, Text = c.DisplayName })
        .ToList();
}

<div title="@Localizer["Request culture provider:"] @requestCulture?.Provider?.GetType().Name">  
    <form id="selectLanguage" asp-controller="Home"
          asp-action="SetLanguage" asp-route-returnUrl="@Context.Request.Path"
          method="post" class="form-horizontal" role="form">
        @Localizer["Language:"] <select name="culture"
                                        asp-for="@requestCulture.RequestCulture.UICulture.Name" asp-items="cultureItems"></select>
        <button type="submit" class="btn btn-default btn-xs">Save</button>

    </form>
</div>  

We want to display this partial on every page, so update the footer of your _Layout.cshtml to reference it:

<footer>  
    <div class="row">
        <div class="col-sm-6">
            <p>&copy; 2016 - Adding Localization</p>
        </div>
        <div class="col-sm-6 text-right">
            @await Html.PartialAsync("_SelectLanguagePartial")
        </div>
    </div>
</footer>  

Finally, we need to add the controller code to handle the user's selection. This currently maps to the SetLanguage action in the HomeController:

[HttpPost]
public IActionResult SetLanguage(string culture, string returnUrl)  
{
    Response.Cookies.Append(
        CookieRequestCultureProvider.DefaultCookieName,
        CookieRequestCultureProvider.MakeCookieValue(new RequestCulture(culture)),
        new CookieOptions { Expires = DateTimeOffset.UtcNow.AddYears(1) }
    );

    return LocalRedirect(returnUrl);
}

And that's it! If we fire up the home page of our application, you can see the culture selector in the bottom right corner. At this stage, I have not added any resource files, but if I trigger a validation error, you can see that the resource key is used for the resource itself:

Adding Localisation to an ASP.NET Core application

My development flow is not interrupted by having to go and mess with resource files; I can just develop the application using the default language and add resx files later in development. If I later add appropriate resource files for the fr culture, and a user changes their culture via the selector, I can see the effect of localisation in the validation attributes and other localised strings:

Adding Localisation to an ASP.NET Core application

As you can see, the validation attributes and page title are localised, but the label field 'Your Email' has not, as that is set in the DisplayAttribute. (Apologies to any french speakers - totally Google translate's fault if it's gibberish!)

Summary

In this post I showed how to add localisation to your ASP.NET Core application using the recommended approach of providing resources for the default language as keys, and only adding additional resources as required later.

In summary, the steps to localise your application are roughly as follows:

  1. Add the required localisation services
  2. Configure the localisation middleware and if necessary a culture provider
  3. Inject IStringLocalizer<T> into your controllers and services to localise strings
  4. Inject IViewLocalizer into your views to localise strings in views
  5. Add resource files for non-default cultures
  6. Add a mechanism for users to choose their culture

In the next post, I'll address some of the problems I've run into adding localisation to an application, namely the vulnerability of 'magic strings' to typos, and localising the DisplayAttribute.


Dominick Baier: New in IdentityServer4: Support for Extension Grants

Well – this is not completely new, but we redesigned it a bit.

Extension grants are used to add support for non-standard token issuance scenarios to the token endpoint, e.g. translating between token types, delegation, federation, custom input or output parameters.

One of the common questions we got was how to implement identity delegation – instead of repeating myself here – I wrote proper documentation on the topic, and how to use IdentityServer4 to implement it.
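As a rough idea of the shape of the extension point, an extension grant is implemented as an IExtensionGrantValidator. The sketch below loosely follows the delegation scenario; the grant type name, raw parameter and validation logic are assumptions for illustration rather than a copy of the docs.

using System.Threading.Tasks;
using IdentityServer4.Models;
using IdentityServer4.Validation;

public class DelegationGrantValidator : IExtensionGrantValidator
{
    // The grant_type value clients send to the token endpoint
    public string GrantType => "delegation";

    public Task ValidateAsync(ExtensionGrantValidationContext context)
    {
        // Custom input parameter sent by the client alongside grant_type
        var userToken = context.Request.Raw.Get("token");

        if (string.IsNullOrEmpty(userToken))
        {
            context.Result = new GrantValidationResult(TokenRequestErrors.InvalidGrant);
            return Task.CompletedTask;
        }

        // Validate the incoming token here (omitted), then issue a new token
        // for the original subject.
        context.Result = new GrantValidationResult("subject-id", "delegation");
        return Task.CompletedTask;
    }
}

The validator is then registered with the IdentityServer builder, e.g. via AddExtensionGrantValidator<DelegationGrantValidator>(); see the linked documentation for the authoritative version.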

Get the details here.


Filed under: ASP.NET, IdentityServer, OAuth, WebAPI


Damien Bowden: Full Server logout with IdentityServer4 and OpenID Connect Implicit Flow

The article shows how to fully log out from IdentityServer4 when using the OpenID Connect Implicit Flow. By design, when an access token is used to access protected data on a resource server, the token remains usable even after the client has logged out from the server, for as long as it is valid (AccessTokenLifetime), because consent has already been given. This is the normal use case.

Sometimes, it is required that once a user logs out from IdentityServer4, no client with the same user can continue to use the protected data without logging in again. Reference tokens can be used to implement this. With reference tokens, you have full control over the lifecycle.

Code: https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow

Other posts in this series:

To use reference tokens in IdentityServer4, the client can be defined with the AccessTokenType property set to AccessTokenType.Reference. When a user and the client successfully log in, a reference token as well as an id_token is returned to the client, instead of an access token and an id_token (response_type: id_token token).

public static IEnumerable<Client> GetClients()
{
	// client credentials client
	return new List<Client>
	{
		new Client
		{
			ClientName = "angular2client",
			ClientId = "angular2client",
			AccessTokenType = AccessTokenType.Reference,
			AllowedGrantTypes = GrantTypes.Implicit,
			AllowAccessTokensViaBrowser = true,
			RedirectUris = new List<string>
			{
				"https://localhost:44311"

			},
			PostLogoutRedirectUris = new List<string>
			{
				"https://localhost:44311/Unauthorized"
			},
			AllowedCorsOrigins = new List<string>
			{
				"https://localhost:44311",
				"http://localhost:44311"
			},
			AllowedScopes = new List<string>
			{
				"openid",
				"dataEventRecords",
				"securedFiles"
			}
		}
	};
}

In IdentityServer4, when a user decides to logout, the IPersistedGrantService can be used to remove reference tokens for this user and client. The RemoveAllGrantsAsync method from the IPersistedGrantService uses the Identity subject and the client id to delete all of the corresponding grants. The GetSubjectId method is an IdentityServer4 extension method for the Identity. The HttpContext.User can be used to get this. The client id must match the client from the configuration.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Rendering;
using Microsoft.Extensions.Logging;
using IdentityServerWithAspNetIdentity.Models;
using IdentityServerWithAspNetIdentity.Models.AccountViewModels;
using IdentityServerWithAspNetIdentity.Services;
using IdentityServer4.Services;
using IdentityServer4.Quickstart.UI.Models;
using Microsoft.AspNetCore.Http.Authentication;
using IdentityServer4.Extensions;

namespace IdentityServerWithAspNetIdentity.Controllers
{
    [Authorize]
    public class AccountController : Controller
    {
        private readonly UserManager<ApplicationUser> _userManager;
        private readonly SignInManager<ApplicationUser> _signInManager;
        private readonly IEmailSender _emailSender;
        private readonly ISmsSender _smsSender;
        private readonly ILogger _logger;
        private readonly IIdentityServerInteractionService _interaction;
        private readonly IPersistedGrantService _persistedGrantService;

        public AccountController(
            IIdentityServerInteractionService interaction,
            IPersistedGrantService persistedGrantService,
            UserManager<ApplicationUser> userManager,
            SignInManager<ApplicationUser> signInManager,
            IEmailSender emailSender,
            ISmsSender smsSender,
            ILoggerFactory loggerFactory)
        {
            _interaction = interaction;
            _persistedGrantService = persistedGrantService;
            _userManager = userManager;
            _signInManager = signInManager;
            _emailSender = emailSender;
            _smsSender = smsSender;
            _logger = loggerFactory.CreateLogger<AccountController>();
        }
		
        /// <summary>
        /// Handle logout page postback
        /// </summary>
        [HttpPost]
        [ValidateAntiForgeryToken]
        public async Task<IActionResult> Logout(LogoutViewModel model)
        {
            var subjectId = HttpContext.User.Identity.GetSubjectId();

            // delete authentication cookie
            await HttpContext.Authentication.SignOutAsync();

          
            // set this so UI rendering sees an anonymous user
            HttpContext.User = new ClaimsPrincipal(new ClaimsIdentity());
            
            // get context information (client name, post logout redirect URI and iframe for federated signout)
            var logout = await _interaction.GetLogoutContextAsync(model.LogoutId);
            
            var vm = new LoggedOutViewModel
            {
                PostLogoutRedirectUri = logout?.PostLogoutRedirectUri,
                ClientName = logout?.ClientId,
                SignOutIframeUrl = logout?.SignOutIFrameUrl
            };


            await _persistedGrantService.RemoveAllGrantsAsync(subjectId, "angular2client");

            return View("LoggedOut", vm);
        }

The IdentityServer4.AccessTokenValidation NuGet package is used on the resource server to validate the reference token sent from the client. The IdentityServerAuthenticationOptions options are configured as required.

"IdentityServer4.AccessTokenValidation": "1.0.1-rc1"

This package is configured in the Startup class in the Configure method.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	loggerFactory.AddConsole();
	loggerFactory.AddDebug();

	app.UseExceptionHandler("/Home/Error");
	app.UseCors("corsGlobalPolicy");
	app.UseStaticFiles();

	JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();

	IdentityServerAuthenticationOptions identityServerValidationOptions = new IdentityServerAuthenticationOptions
	{
		Authority = "https://localhost:44318/",
		ScopeName = "dataEventRecords",
		ScopeSecret = "dataEventRecordsSecret",
		AutomaticAuthenticate = true,
		SupportedTokens = SupportedTokens.Both,
		// required if you want to return a 403 and not a 401 for forbidden responses

		AutomaticChallenge = true,
	};

	app.UseIdentityServerAuthentication(identityServerValidationOptions);

	app.UseMvc(routes =>
	{
		routes.MapRoute(
			name: "default",
			template: "{controller=Home}/{action=Index}/{id?}");
	});
}

The SPA client can then be used to log in to and log out from the server. If two or more clients are logged in with the same user, once the user logs out from the server, none of them will have access to the protected data: all existing reference tokens for this user and client can no longer be used to access the protected data.

By using reference tokens, you have full control over the lifecycle of access to the protected data. Caution should be taken when using long-running access tokens.

Another strategy would be to use short-lived access tokens and make the client refresh them regularly. This reduces the time for which an access token remains usable after a logout, but the access token can still be used to access the private data until it has expired.
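In client configuration terms, that strategy simply means setting a short AccessTokenLifetime on the client. A small sketch based on the client definition shown earlier; the five-minute value is only an example:

new Client
{
    ClientId = "angular2client",
    AccessTokenType = AccessTokenType.Reference,
    // Short-lived tokens: the client has to renew regularly, so a logout takes
    // effect within at most five minutes even if grants were not removed.
    AccessTokenLifetime = 300 // seconds
    // ... remaining settings (redirect URIs, scopes, etc.) as in GetClients() above
}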

Links

http://openid.net/specs/openid-connect-core-1_0.html

http://openid.net/specs/openid-connect-implicit-1_0.html

https://github.com/IdentityServer/IdentityServer4/issues/313#issuecomment-247589782

https://github.com/IdentityServer/IdentityServer4

https://leastprivilege.com

https://github.com/IdentityServer/IdentityServer4/issues/313

https://github.com/IdentityServer/IdentityServer4/issues/310



Dominick Baier: New in IdentityServer4: Default Scopes

Another small thing people have been asking for.

The scope parameter is optional in OAuth 2 – but we made the decision that clients always have to explicitly ask for the scopes they want to access.

We relaxed this requirement a bit in IdentityServer4. At the token endpoint, scope is now optional (in other words, for client credentials, resource owner and extension grant requests). If no scope is specified, the client will automatically get a token that contains all explicitly allowed scopes (that’s a per-client setting).

This makes it easier, especially for server-to-server communication, to provision new APIs without having to change the token requests in the clients.
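In practice this means a client credentials request to the token endpoint can simply omit the scope parameter. A minimal sketch using HttpClient; the authority URL and client credentials below are placeholders:

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class TokenClientExample
{
    public static async Task<string> RequestTokenWithoutScopeAsync()
    {
        using (var client = new HttpClient())
        {
            // No 'scope' value: IdentityServer4 returns a token containing
            // all scopes explicitly allowed for this client.
            var response = await client.PostAsync(
                "https://demo.example.org/connect/token",
                new FormUrlEncodedContent(new Dictionary<string, string>
                {
                    ["grant_type"] = "client_credentials",
                    ["client_id"] = "machine.client",
                    ["client_secret"] = "secret"
                }));

            return await response.Content.ReadAsStringAsync();
        }
    }
}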

Endpoint documentation here – Client settings here.

 


Filed under: .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: Identity & Access Control for ASP.NET Core Deep Dive

Once a year Brock and I do our three day version of the Identity & Access Control workshop in London.

This year it will be all about .NET Core and ASP.NET Core – and a full day on the new IdentityModel2 & IdentityServer4.

You can find the details and sign-up here – and there is an early bird ’til the 23rd September.

Really looking forward to this, since the extra day gives us so much more time for labs and going even deeper on the mechanics and architecture of modern identity and applications.

See you there!


Filed under: .NET Security, ASP.NET, IdentityModel, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: New in IdentityServer4: Clients without Secrets

Over the next weeks I will do short blog posts about new features in IdentityServer4. The primary intention is to highlight a new feature and then defer to our docs for the details (which will also force me to write some proper docs).

Clients without secrets
Many people asked for this. The OAuth 2 token endpoint does not require authentication for so-called “public clients”. We always ignored that and mandated some sort of secret (while not treating it as a real secret for public clients).

In IdentityServer4 there is a new RequireClientSecret flag on the Client class where you can enable/disable the client secret requirement.
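A small sketch of what that looks like in a client definition; apart from the RequireClientSecret flag described above, the other settings are placeholders:

new Client
{
    ClientId = "native.public.client",
    RequireClientSecret = false, // public client: no secret needed at the token endpoint
    AllowedGrantTypes = GrantTypes.Code,
    AllowedScopes = { "openid", "api1" }
};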

You can read about client settings here, and about secrets in general here.


Filed under: IdentityServer, OAuth, OpenID Connect, WebAPI


Damien Bowden: ASP.NET Core Action Arguments Validation using an ActionFilter

This article shows how to use an ActionFilter to validate the model from an HTTP POST request in an ASP.NET Core MVC application.

Code: https://github.com/damienbod/Angular2AutoSaveCommands

Other articles in this series:

  1. Implementing UNDO, REDO in ASP.NET Core
  2. Angular 2 Auto Save, Undo and Redo
  3. ASP.NET Core Action Arguments Validation using an ActionFilter

In an ASP.NET Core MVC application, custom validation logic can be implemented in an ActionFilter. Because the ActionFilter is processed after the model binding in the action execution, the model and action parameters can be used in an ActionFilter without having to read from the Request Body, or the URL.

The model can be accessed using the context.ActionArguments dictionary. The key for the property has to match the parameter name in the MVC Controller action method. Ryan Nowak also explained in this issue that context.ActionDescriptor.Parameters can also be used to access the request payload data.

If the model is invalid, the context status code is set to 400 (bad request) and the reason is added to the context result using a ContentResult object. The request is then no longer processed but 'short-circuited', to use the terminology from the ASP.NET Core documentation.

using System;
using System.IO;
using System.Text;
using Angular2AutoSaveCommands.Models;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Extensions.Logging;

namespace Angular2AutoSaveCommands.ActionFilters
{
    public class ValidateCommandDtoFilter : ActionFilterAttribute
    {
        private readonly ILogger _logger;

        public ValidateCommandDtoFilter(ILoggerFactory loggerFactory)
        {
            _logger = loggerFactory.CreateLogger("ValidatePayloadTypeFilter");
        }

        public override void OnActionExecuting(ActionExecutingContext context)
        {
            var commandDto = context.ActionArguments["commandDto"] as CommandDto;
            if (commandDto == null)
            {
                context.HttpContext.Response.StatusCode = 400;
                context.Result = new ContentResult()
                {
                    Content = "The body is not a CommandDto type"
                };
                return;
            }

            _logger.LogDebug("validating CommandType");
            if (!CommandTypes.AllowedTypes.Contains(commandDto.CommandType))
            {
                context.HttpContext.Response.StatusCode = 400;
                context.Result = new ContentResult()
                {
                    Content = "CommandTypes not allowed"
                };
                return;
            }

            _logger.LogDebug("validating PayloadType");
            if (!PayloadTypes.AllowedTypes.Contains(commandDto.PayloadType))
            {
                context.HttpContext.Response.StatusCode = 400;
                context.Result = new ContentResult()
                {
                    Content = "PayloadType not allowed"
                };
                return;
            }

            base.OnActionExecuting(context);
        }
    }
}

The ActionFilter is added to the services in the Startup class. This is not needed if the ActionFilter is used directly in the MVC Controller.

services.AddScoped<ValidateCommandDtoFilter>();

The filter can then be used in the MVC Controller using the ServiceFilter attribute. If the commandDto model is invalid, a BadRequest response is returned without executing the business logic in the action method.

[ServiceFilter(typeof(ValidateCommandDtoFilter))]
[HttpPost]
[Route("Execute")]
public IActionResult Post([FromBody]CommandDto commandDto)
{
	_commandHandler.Execute(commandDto);
	return Ok(commandDto);
}

Links

https://docs.asp.net/en/latest/mvc/controllers/filters.html

https://github.com/aspnet/Mvc/issues/5260#issuecomment-245936046

https://github.com/aspnet/Mvc/blob/dev/src/Microsoft.AspNetCore.Mvc.Abstractions/Filters/ActionExecutingContext.cs



Damien Bowden: Angular 2 Auto Save, Undo and Redo

This article shows how to implement auto save, Undo and Redo commands in an Angular 2 SPA. The Undo and the Redo commands work for the whole application and not just for single components. The Angular 2 app uses an ASP.NET Core service implemented in the previous blog.

Code: https://github.com/damienbod/Angular2AutoSaveCommands

2016.08.19 Updated to Angular 2 release, ASP.NET Core 1.0.1

Other articles in this series:

  1. Implementing UNDO, REDO in ASP.NET Core
  2. Angular 2 Auto Save, Undo and Redo
  3. ASP.NET Core Action Arguments Validation using an ActionFilter

The CommandDto class is used for all create, update and delete HTTP requests to the server. This class is used in the different components, so the payload is always different. The CommandType defines the type of command to be executed; the possible values supported by the server are ADD, UPDATE, DELETE, UNDO and REDO. The PayloadType defines the type of object used in the Payload and is used by the server to convert the Payload object to a specific C# class. The ActualClientRoute is used for the Undo and Redo functions: when an Undo or Redo command is executed, the next client path is returned in the CommandDto response. As this is an Angular 2 application, the Angular 2 routing value is used.

export class CommandDto {
    constructor(commandType: string, 
		 payloadType: string, 
		 payload: any, 
		 actualClientRoute: string) {
		 
        this.CommandType = commandType;
        this.PayloadType = payloadType;
        this.Payload = payload;
        this.ActualClientRoute = actualClientRoute;
    }

    CommandType: string;
    PayloadType: string;
    Payload: any;
    ActualClientRoute: string;
}
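For reference, the server-side counterpart defined in the previous post of the series has roughly the following shape in C#. This is a sketch inferred from the property names above, not code copied from that post:

public class CommandDto
{
    public string CommandType { get; set; }       // ADD, UPDATE, DELETE, UNDO, REDO
    public string PayloadType { get; set; }       // e.g. HOME, used to deserialize Payload
    public object Payload { get; set; }           // the entity data for the command
    public string ActualClientRoute { get; set; } // Angular route to navigate to after undo/redo
}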

The CommandService is used to access the ASP.NET Core API implemented in the CommandController class. The service implements the Execute, Undo and Redo HTTP POST requests to the server using the CommandDto as the body. The service also implements an EventEmitter output which can be used to update child components, if an Undo command or a Redo command has been executed. When the function UndoRedoUpdate is called, the event is sent to all listeners.

import { Injectable, EventEmitter, Output } from '@angular/core';
import { Http, Response, Headers } from '@angular/http';
import 'rxjs/add/operator/map'
import { Observable } from 'rxjs/Observable';
import { Configuration } from '../app.constants';
import { CommandDto } from './CommandDto';

@Injectable()
export class CommandService {

    @Output() OnUndoRedo = new EventEmitter<string>();

    private actionUrl: string;
    private headers: Headers;

    constructor(private _http: Http, private _configuration: Configuration) {

        this.actionUrl = `${_configuration.Server}api/command/`;

        this.headers = new Headers();
        this.headers.append('Content-Type', 'application/json');
        this.headers.append('Accept', 'application/json');
    }

    public Execute = (command: CommandDto): Observable<CommandDto> => {
        let url = `${this.actionUrl}execute`;
        return this._http.post(url, command, { headers: this.headers }).map(res => res.json());
    }

    public Undo = (): Observable<CommandDto> => {
        let url = `${this.actionUrl}undo`;
        return this._http.post(url, '', { headers: this.headers }).map(res => res.json());
    }

    public Redo = (): Observable<CommandDto> => {
        let url = `${this.actionUrl}redo`;
        return this._http.post(url, '', { headers: this.headers }).map(res => res.json());
    }

    public GetAll = (): Observable<any> => {
        return this._http.get(this.actionUrl).map((response: Response) => <any>response.json());
    }
    
    public UndoRedoUpdate = (payloadType: string) => {
        this.OnUndoRedo.emit(payloadType);
    }
}

The app.component implements the Undo and the Redo user interface.

<div class="container" style="margin-top: 15px;">

    <nav class="navbar navbar-inverse">
        <div class="container-fluid">
            <div class="navbar-header">
                <a class="navbar-brand" [routerLink]="['/commands']">Commands</a>
            </div>
            <ul class="nav navbar-nav">
                <li><a [routerLink]="['/home']">Home</a></li>
                <li><a [routerLink]="['/about']">About</a></li>
                <li><a [routerLink]="['/httprequests']">HTTP API Requests</a></li>
            </ul>
            <ul class="nav navbar-nav navbar-right">
                <li><a (click)="Undo()">Undo</a></li>
                <li><a (click)="Redo()">Redo</a></li>
                <li><a href="https://twitter.com/damien_bod"><img src="assets/damienbod.jpg" height="40" style="margin-top: -10px;" /></a></li>               

            </ul>
        </div>
    </nav>

    <router-outlet></router-outlet>

    <footer>
        <p>
            <a href="https://twitter.com/damien_bod">twitter(damienbod)</a>&nbsp; <a href="https://damienbod.com/">damienbod.com</a>
            &copy; 2016
        </p>
    </footer>
</div>

The Undo method uses the _commandService to execute an Undo HTTP POST request. If successful, the UndoRedoUpdate function from the _commandService is executed, which broadcasts an update event in the client app, and then the application navigates to the route returned in the Undo commandDto response using the ActualClientRoute.

import { Component, OnInit } from '@angular/core';
import { Router } from '@angular/router';
import { CommandService } from './services/commandService';
import { CommandDto } from './services/commandDto';

@Component({
    selector: 'my-app',
    template: require('./app.component.html'),
    styles: [require('./app.component.scss'), require('../style/app.scss')]
})

export class AppComponent {

    constructor(private router: Router, private _commandService: CommandService) {
    }

    public Undo() {
        let resultCommand: CommandDto;

        this._commandService.Undo()
            .subscribe(
                data => resultCommand = data,
                error => console.log(error),
                () => {
                    this._commandService.UndoRedoUpdate(resultCommand.PayloadType);
                    this.router.navigate(['/' + resultCommand.ActualClientRoute]);
                }
            );
    }

    public Redo() {
        let resultCommand: CommandDto;

        this._commandService.Redo()
            .subscribe(
                data => resultCommand = data,
                error => console.log(error),
                () => {
                    this._commandService.UndoRedoUpdate(resultCommand.PayloadType);
                    this.router.navigate(['/' + resultCommand.ActualClientRoute]);
                }
            );
    }
}

The HomeComponent is used to implement the ADD, UPDATE and DELETE operations for the HomeData object. A simple form is used to add or update the different items, with auto save implemented on the input element using the keyup event. A list of existing HomeData items is displayed in a table, from which each item can be updated or deleted.

<div class="container">
    <div class="col-lg-12">
        <h1>Selected Item: {{model.Id}}</h1>
        <form *ngIf="active" (ngSubmit)="onSubmit()" #homeItemForm="ngForm">

            <input type="hidden" class="form-control" id="id" [(ngModel)]="model.Id" name="id" #id="ngModel">
            <input type="hidden" class="form-control" id="deleted" [(ngModel)]="model.Deleted" name="deleted" #id="ngModel">

            <div class="form-group">
                <label for="name">Name</label>
                <input type="text" class="form-control" id="name" required  (keyup)="createCommand($event)" [(ngModel)]="model.Name" name="name" #name="ngModel">
                <div [hidden]="name.valid || name.pristine" class="alert alert-danger">
                    Name is required
                </div>
            </div>

            <button type="button" class="btn btn-default" (click)="newHomeData()">New Home</button>

        </form>
    </div>
</div>

<hr />

<div>

    <table class="table">
        <thead>
            <tr>
                <th>Id</th>
                <th>Name</th>
                <th></th>
                <th></th>
            </tr>
        </thead>
        <tbody>
            <tr style="height:20px;" *ngFor="let homeItem of HomeDataItems">
                <td>{{homeItem.Id}}</td>
                <td>{{homeItem.Name}}</td>
                <td>
                    <button class="btn btn-default" (click)="Edit(homeItem)">Edit</button>
                </td>
                <td>
                    <button class="btn btn-default" (click)="Delete(homeItem)">Delete</button>
                </td>
            </tr>
        </tbody>
    </table>

</div>

The HomeDataService is used to select all the HomeData items using the ASP.NET Core service implemented in the HomeController class.

import { Injectable } from '@angular/core';
import { Http, Response, Headers } from '@angular/http';
import 'rxjs/add/operator/map'
import { Observable } from 'rxjs/Observable';
import { Configuration } from '../app.constants';

@Injectable()
export class HomeDataService {

    private actionUrl: string;
    private headers: Headers;

    constructor(private _http: Http, private _configuration: Configuration) {

        this.actionUrl = `${_configuration.Server}api/home/`;

        this.headers = new Headers();
        this.headers.append('Content-Type', 'application/json');
        this.headers.append('Accept', 'application/json');
    }

    public GetAll = (): Observable<any> => {
        return this._http.get(this.actionUrl).map((response: Response) => <any>response.json());
    }
 
}

The HomeComponent implements the different CUD operations and also the listeners for the Undo and Redo events which are relevant for its display. When a keyup is received, createCommand is executed. This function adds the data to the keyDownEvents subject. The deboucedInput Observable is used together with debounceTime, so that a command is only sent to the server (via the onSubmit function) once the user has not entered any input for more than a second.

The component also subscribes to the OnUndoRedo event sent from the _commandService. When this event is received, OnUndoRedoRecieved is called. This function refreshes the table if the undo or redo command changed data displayed in this component.

import { Component, OnInit } from '@angular/core';
import { FormControl } from '@angular/forms';
import { Http } from '@angular/http';
import { HomeData } from './HomeData';
import { CommandService } from '../services/commandService';
import { CommandDto } from '../services/commandDto';
import { HomeDataService } from '../services/homeDataService';

import { Observable } from 'rxjs/Observable';
import { Subject } from 'rxjs/Subject';

import 'rxjs/add/observable/of';
import 'rxjs/add/observable/throw';

// Observable operators
import 'rxjs/add/operator/catch';
import 'rxjs/add/operator/debounceTime';
import 'rxjs/add/operator/distinctUntilChanged';
import 'rxjs/add/operator/do';
import 'rxjs/add/operator/filter';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/switchMap';

@Component({
    selector: 'homecomponent',
    template: require('./home.component.html')
})

export class HomeComponent implements OnInit {

    public message: string;
    public model: HomeData;
    public submitted: boolean;
    public active: boolean;
    public HomeDataItems: HomeData[];

    private deboucedInput: Observable<string>;
    private keyDownEvents = new Subject<string>();

    constructor(private _commandService: CommandService, private _homeDataService: HomeDataService) {
        this.message = "Hello from Home";
        this._commandService.OnUndoRedo.subscribe(item => this.OnUndoRedoRecieved(item));
    }

    ngOnInit() {
        this.model = new HomeData(0, 'name', false);
        this.submitted = false;
        this.active = true;
        this.GetHomeDataItems();

        this.deboucedInput = this.keyDownEvents;
        this.deboucedInput
            .debounceTime(1000)       
            .distinctUntilChanged()   
            .subscribe((filter: string) => {
                this.onSubmit();
            });
    }

    public GetHomeDataItems() {
        console.log('HomeComponent starting...');
        this._homeDataService.GetAll()
            .subscribe((data) => {
                this.HomeDataItems = data;
            },
            error => console.log(error),
            () => {
                console.log('HomeDataService:GetAll completed');
            }
        );
    }

    public Edit(aboutItem: HomeData) {
        this.model.Name = aboutItem.Name;
        this.model.Id = aboutItem.Id;
    }

    // TODO remove the get All request and update the list using the return item
    public Delete(homeItem: HomeData) {
        let myCommand = new CommandDto("DELETE", "HOME", homeItem, "home");

        console.log(myCommand);
        this._commandService.Execute(myCommand)
            .subscribe(
            data => this.GetHomeDataItems(),
            error => console.log(error),
            () => {
                if (this.model.Id === homeItem.Id) {
                    this.newHomeData();
                }
            }   
            );
    }

    public createCommand(evt: any) {
        this.keyDownEvents.next(this.model.Name);
    }

    // TODO remove the get All request and update the list using the return item
    public onSubmit() {
        if (this.model.Name != "") {
            this.submitted = true;
            let myCommand = new CommandDto("ADD", "HOME", this.model, "home");

            if (this.model.Id > 0) {
                myCommand.CommandType = "UPDATE";
            }

            console.log(myCommand);
            this._commandService.Execute(myCommand)
                .subscribe(
                data => {
                    this.model.Id = data.Payload.Id;
                    this.GetHomeDataItems();
                },
                error => console.log(error),
                () => console.log('Command executed')
                );
        }       
    }

    public newHomeData() {
        this.model = new HomeData(0, 'add a new name', false);
        this.active = false;
        setTimeout(() => this.active = true, 0);
    }

    private OnUndoRedoRecieved(payloadType) {
        if (payloadType === "HOME") {
            this.GetHomeDataItems();
           // this.newHomeData();
            console.log("OnUndoRedoRecieved Home");
            console.log(payloadType);
        }       
    }
}

When the application is built (both server and client) and started, the items can be added, updated or deleted using the commands.


The executed commands can be viewed using the commands tab in the Angular 2 application.


And the commands or the data can also be viewed in the SQL database.


Links

http://blog.thoughtram.io/angular/2016/02/22/angular-2-change-detection-explained.html

https://angular.io/docs/ts/latest/guide/forms.html



Dominick Baier: IdentityServer4 RC1

Wow – we’re done! Brock and I spent the last two weeks 14h/day refactoring, polishing, testing and refining IdentityServer for ASP.NET Core…and I must say it’s the best STS we’ve written so far…

We kept the same approach as before, that IdentityServer takes care of all the hard things like protocol handling, validation, token generation, data management and security – while you only need to model your application architecture via scopes, clients and users. But at the same time we give you much more flexibility for handling custom scenarios, workflows and user interactions. We also made it easier to get started.

There are too many new features to talk about all of them in this post – but to give you an overview:

  • integration in ASP.NET Core’s pipeline, DI system, configuration, logging and authentication handling
  • complete separation of protocol handling and UI thus allowing you to easily modify the UI in any way you want
  • simplified persistence layer
  • improved key material handling enabling automatic key rotation and remote signing scenarios
  • allowing multiple grant types per client
  • revamped support for extension grants and custom protocol responses
  • seamless integration into ASP.NET Core Identity (while retaining the ability to use arbitrary other data sources for your user management)
  • support for public clients (clients that don’t need a client secret to use the token endpoint)
  • support for default scopes when requesting tokens
  • support for ASP.NET Core authentication middleware for external authentication
  • improved session management and authentication cookie handling
  • revamped and improved support for CORS
  • re-worked middleware for JWT and reference token validation
  • tons of internal cleanup

We will have separate posts detailing those changes in the coming weeks.

Where to start?
Our new website https://identityserver.io will bring you to all the relevant sites: documentation, github repo and our new website for commercial support options.

Add the IdentityServer package to your project.json:

"IdentityServer4": "1.0.0-rc1"

and start coding ;)

We also added a number of quickstart tutorials that walk you through common scenarios:

Everything is still work in progress, but we have the feeling we are really close to how we want the final code to look and feel.

Give it a try – and give us feedback on the issue tracker. Release notes can be found here.

Have fun!


Filed under: .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI


Ben Foster: Automatic post-registration sign-in with Identity Server

Identity Server is an open source framework that allows implementing Single sign-on and supports a number of modern authentication protocols such as OpenID Connect and OAuth2.

Identity Server was created by the guys at Thinktecture and has now become the Microsoft recommended approach for providing centralised authentication and access-control in ASP.NET.

A few months ago I started to investigate replacing our hand-rolled auth system with Identity Server. We had a number of services in our platform and were already making use of OAuth2 to authenticate client applications in our API. We were using a domain level authentication cookie to share authenticated sessions between 2 of our apps but as more services were introduced, each with their own set of authentication requirements, this was no longer a viable solution.

Registering Users

Identity Server does not perform user registration so the typical flow when registering users is:

  1. User registers on your web site (store user in DB)
  2. After registration user is redirected to Identity Server to sign in
  3. User is redirected back to your web site

Identity Server provides support for ASP.NET Identity and MembershipReboot, and if you're not using one of these frameworks, you can provide your own custom services.

In our system we wanted a slightly different flow, whereby our customers were not required to sign in again following registration:

  1. User registers on our marketing site
  2. User is automatically signed in to Identity Server
  3. User is redirected to our Dashboard and automatically signed in

Once the user is signed into Identity Server we can transparently sign the user into the Dashboard application by disabling the IdSrv consent screen. Here's our client configuration:

return new List<Client>
{
    new Client
    {
        ClientName = "Marketing",
        ClientId = "marketing",
        Enabled = true,
        Flow = Flows.Implicit,
        AccessTokenType = AccessTokenType.Reference,
        RedirectUris = new List<string>
        {
            "http://localhost:51962/",
        },
        AllowAccessToAllScopes = true,
        RequireConsent = false
    },
    new Client
    {
        ClientName = "Dashboard",
        ClientId = "dashboard",
        Enabled = true,
        Flow = Flows.Implicit, 
        AccessTokenType = AccessTokenType.Reference,
        RedirectUris = new List<string>
        {
            "http://localhost:49902/"
        },
        PostLogoutRedirectUris = new List<string>
        {
            "http://localhost:49902/"
        },
        AccessTokenLifetime = 36000, // 10 hours
        AllowAccessToAllScopes = true,
        RequireConsent = false
    }
};

Implementing automatic sign-in

To implement automatic sign-in we need to do the following:

  1. During registration generate a One-Time-Access-Code (OTAC) and store this against our new user along with an expiry date.
  2. Redirect the user to the Dashboard including the OTAC in the URL (if you want to sign-in to the same app you can skip this step).
  3. Authenticate the user (redirects to Identity Server) sending the OTAC in the acr_values parameter (more info).
  4. Identity Server validates the token and signs the user in transparently (no consent screen).
  5. User is redirected back to the dashboard.

Generating the OTAC

I'm using the default ASP.NET MVC template with ASP.NET Identity and have updated my Register action as below:

public async Task<ActionResult> Register(RegisterViewModel model)
{
    if (ModelState.IsValid)
    {
        var user = new ApplicationUser { UserName = model.Email, Email = model.Email };
        var result = await UserManager.CreateAsync(user, model.Password);
        if (result.Succeeded)
        {
            var otac = user.GenerateOTAC(TimeSpan.FromMinutes(1));
            UserManager.Update(user);

            // Redirect to dashboard providing OTAC
            return Redirect("http://localhost:49902/auth/login?otac=" + Url.Encode(otac));
        }

        AddErrors(result);
    }

    // If we got this far, something failed, redisplay form
    return View(model);
}

Here we create the new user and set the OTAC. The OTAC generation is handled directly inside my user class:

public class ApplicationUser : IdentityUser
{
    public string OTAC { get; set; }
    public DateTime? OTACExpires { get; set; }

    public string GenerateOTAC(TimeSpan validFor)
    {
        var otac = CryptoRandom.CreateUniqueId();
        var hashed = Crypto.Hash(otac);
        OTAC = hashed;
        OTACExpires = DateTime.UtcNow.Add(validFor);

        return otac;
    }

    // ... omitted for brevity
}

This makes use of some of the helpers from the IdentityModel package to generate a unique identifier and hash the value before it is stored. The unhashed value is returned to our controller and passed in the URL when redirecting to the dashboard.

Sending the OTAC to Identity Server

Identity Server provides the acr_values parameter to provide additional authentication information to the user service. We'll use this to send our OTAC.

After registration the user is redirected to the Dashboard login page. Here we check to see if an OTAC is provided and if so, add it to the OWIN context. This will be later retrieved before sending the authentication request to Identity Server:

public void LogIn(string otac = null)
{
    var ctx = HttpContext.GetOwinContext();

    if (!string.IsNullOrEmpty(otac))
    {
        ctx.Set("otac", otac);
    }

    var properties = new AuthenticationProperties
    {
        RedirectUri = Url.Action("index", "home", null, Request.Url.Scheme)
    };

    ctx.Authentication.Challenge(properties);
}

To set the acr_values parameter we need to hook into the RedirectToIdentityProvider notification provided by the OpenID Connect middleware. In startup.cs:

app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
{
    Authority = "http://localhost:49788/",
    ClientId = "dashboard",
    RedirectUri = "http://localhost:49902/",
    ResponseType = "id_token token",
    Scope = "openid profile email api.read api.write",
    SignInAsAuthenticationType = "Cookies",
    PostLogoutRedirectUri = "http://localhost:49902/",

    Notifications = new OpenIdConnectAuthenticationNotifications
    {
        RedirectToIdentityProvider = n =>
        {
            if (n.ProtocolMessage.RequestType == OpenIdConnectRequestType.AuthenticationRequest)
            {
                var otac = n.OwinContext.Get<string>("otac");
                if (otac != null)
                {
                    n.ProtocolMessage.AcrValues = otac;
                }
            }

            return Task.FromResult(0);
        }
    }
});

RedirectToIdentityProvider is invoked just before we redirect to Identity Server. This is where we are able to customise the request. In the above code we retrieve the OTAC from the Owin Context and set the AcrValues property.

Validating the token and signing the user in

The next step involves customising the default authentication behaviour of Identity Server. Normal authentication requests should work as before, but in the case of post-registration requests, we need to jump in before the default authentication behaviour is executed.

Identity Server defines the IUserService interface to abstract the underlying identity management system being used for users. Rather than implementing this from scratch, and since we're using ASP.NET Identity, we can instead create a class that derives from AspNetIdentityUserService<TUser, TKey>.

To change the default login behaviour we need to override PreAuthenticateAsync:

This method is called before the login page is shown. This allows the user service to determine if the user is already authenticated by some out of band mechanism (e.g. client certificates or trusted headers) and prevent the login page from being shown.

Here is my complete implementation:

public class UserService : AspNetIdentityUserService<ApplicationUser, string>
{
    public UserService(UserManager userManager) : base(userManager)
    {
    }

    public override async Task PreAuthenticateAsync(PreAuthenticationContext context)
    {
        var otac = context.SignInMessage.AcrValues.FirstOrDefault();
        if (otac != null && context.SignInMessage.ClientId == "dashboard")
        {
            var hashed = Crypto.Hash(otac);
            var user = FindUserByOTAC(hashed);

            if (user != null && user.ValidateOTAC(hashed))
            {
                var claims = await GetClaimsFromAccount(user);
                context.AuthenticateResult = new AuthenticateResult(user.Id, user.UserName, claims: claims, authenticationMethod: "oidc");

                // Revoke token
                user.RevokeOTAC();
                await userManager.UpdateAsync(user);

                return;
            }
        }


        await base.PreAuthenticateAsync(context);
    }

    protected async override Task<IEnumerable<Claim>> GetClaimsFromAccount(ApplicationUser user)
    {
        var claims = (await base.GetClaimsFromAccount(user)).ToList();

        if (!string.IsNullOrWhiteSpace(user.UserName))
        {
            claims.Add(new System.Security.Claims.Claim("name", user.UserName));
        }

        return claims;
    }

    private ApplicationUser FindUserByOTAC(string otac)
    {
        return userManager.Users.FirstOrDefault(u => u.OTAC.Equals(otac));
    }
}

In the PreAuthenticateAsync method we check to see if an OTAC is provided and whether the request came from our dashboard. We then attempt to load the user with the provided OTAC and if the code is valid, revoke it, set the AuthenticateResult and short-circuit the request.

OTAC validation and revocation are handled by our User class:

public bool ValidateOTAC(string otac)
{
    if (string.IsNullOrEmpty(otac) || string.IsNullOrEmpty(OTAC))
    {
        return false;
    }

    return OTAC.Equals(otac)
        && OTACExpires != null
        && OTACExpires > DateTime.UtcNow;
}

public void RevokeOTAC()
{
    OTAC = null;
    OTACExpires = null;
}

In order for Identity Server to use our custom user service we need to register it with the service factory. In startup.cs:

var factory = new IdentityServerServiceFactory()
    .UseInMemoryClients(Clients.Get())
    .UseInMemoryScopes(Scopes.Get());

// Wire up ASP.NET Identity 
factory.Register(new Registration<UserManager>());
factory.Register(new Registration<UserStore>());
factory.Register(new Registration<ApplicationDbContext>());

// Custom User Service
factory.UserService = new Registration<IUserService, UserService>();
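
For completeness, the factory is then passed into the IdentityServer options when the middleware is added to the OWIN pipeline. A rough sketch is shown below; the "/identity" path, the site name and the LoadCertificate helper are placeholders, not part of the original implementation:

app.Map("/identity", idsrvApp =>
{
    idsrvApp.UseIdentityServer(new IdentityServerOptions
    {
        SiteName = "Embedded IdentityServer",    // placeholder
        SigningCertificate = LoadCertificate(),  // hypothetical helper returning an X509Certificate2
        Factory = factory,
        RequireSsl = true
    });
});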

The user is transparently signed-in and redirected back to the dashboard.

Demo

To prove that everything is working as described, here's a short demo I recorded. It demonstrates the normal login flow to the dashboard, registration with consent screen disabled and registration with consent screen enabled (just so the flow is more obvious).

Thanks

Special thanks to Dominick Baier, who helped significantly with the above implementation. Sorry it took so long for the blog post!


Taiseer Joudeh: Integrate Azure AD B2C with ASP.NET MVC Web App – Part 3

This is the third part of the tutorial which will cover Using Azure AD B2C tenant with ASP.NET Web API 2 and various front-end clients.

The source code for this tutorial is available on GitHub.

The MVC Web App has been published on Azure App Services, so feel free to try it out using the Base URL (https://aadb2cmvcapp.azurewebsites.net/)

I promise you that I won’t share your information with anyone, feel free to try the experience 🙂

Integrate Azure AD B2C with ASP.NET MVC Web App

In the previous post, we have configured our Web API to rely on our Azure AD B2C IdP to secure it so only calls which contain a token issued by our IdP will be accepted by our Web API.

In this post we will build our first front-end application (an ASP.NET MVC 5 Web App) which will consume the API endpoints by sending a valid token obtained from the Azure AD B2C tenant; it will also allow anonymous users to create profiles and sign in against the Azure B2C tenant. The MVC Web app itself will be protected by the same Azure AD B2C tenant, as we will share the same tenant Id between the Web API and the MVC Web app.

So let’s start building the MVC Web App.

Step 1: Creating the MVC Web App Project

Let’s add a new ASP.NET Web Application named “AADB2C.WebClientMvc” to the solution named “WebApiAzureAcitveDirectoryB2C.sln”. The selected template for the project will be “MVC”, and do not forget to change the “Authentication Mode” to “No Authentication”; check the image below:

Azure B2C Web Mvc Template

Once the project has been created, click on its properties and set “SSL Enabled” to “True”, copy the “SSL URL” value, then right-click on the project, select “Properties”, select the “Web” tab from the left side, paste the “SSL URL” value in the “Project Url” text field and click “Save”. We need to allow the https scheme locally when we debug the application. Check the image below:

MvcWebSSLEnable

Step 2: Install the needed NuGet Packages to Configure the MVC App

We need to add a bunch of NuGet packages, so open the NuGet Package Manager Console and install the packages below:

Install-Package Microsoft.Owin.Security.OpenIdConnect -Version 3.0.1
Install-Package Microsoft.Owin.Security.Cookies -Version 3.0.1
Install-Package Microsoft.Owin.Host.SystemWeb -Version 3.0.1
Update-package Microsoft.IdentityModel.Protocol.Extensions

The package “Microsoft.Owin.Security.OpenIdConnect” contains the middleware used to protect web apps with OpenID Connect; this package contains the logic for the heavy lifting that happens when our MVC App talks to the Azure B2C tenant to request tokens and validate them.

The package “Microsoft.IdentityModel.Protocol.Extensions” contains classes which represent OpenID Connect constants and messages. Lastly, the package “Microsoft.Owin.Security.Cookies” will be used to create a cookie-based session after obtaining a valid token from our Azure AD B2C tenant. This cookie will be sent from the browser to the server with each subsequent request and gets validated by the cookie middleware.

Step 3: Configure Web App to use Azure AD B2C tenant IDs and Policies

Now we need to modify the web.config for our MVC App by adding the keys below, so open Web.config and add the following AppSettings keys:

<add key="ida:Tenant" value="BitofTechDemo.onmicrosoft.com" />
    <add key="ida:ClientId" value="bc348057-3c44-42fc-b4df-7ef14b926b78" />
    <add key="ida:AadInstance" value="https://login.microsoftonline.com/{0}/v2.0/.well-known/openid-configuration?p={1}" />
    <add key="ida:SignUpPolicyId" value="B2C_1_Signup" />
    <add key="ida:SignInPolicyId" value="B2C_1_Signin" />
    <add key="ida:UserProfilePolicyId" value="B2C_1_Editprofile" />
    <add key="ida:RedirectUri" value="https://localhost:44315/" />
    <add key="api:OrdersApiUrl" value="https://localhost:44339/" />

The usage of each setting has been outlined in the previous post; the only 2 new settings keys are: “ida:RedirectUri”, which will be used to set the OpenID Connect “redirect_uri” property. The value of this URI should be registered in the Azure AD B2C tenant (we will do this next); this redirect URI will be used by the OpenID Connect middleware to return token responses or failures after the authentication process, as well as after the sign out process. The second setting key, “api:OrdersApiUrl”, will be used as the base URI for our Web API.

Now let’s register the new Redirect URI in the Azure B2C tenant. To do so, log in to the Azure Portal and navigate to the app “Bit of Tech Demo App” we already registered in the previous post, then add the value “https://localhost:44315/” in the Reply URL settings as in the image below. Note that I already published the MVC web app to Azure App Services at the URL (https://aadb2cmvcapp.azurewebsites.net/), so I’ve included this URL too.

B2C Mvc Reply URL

Step 4: Add Owin “Startup” Class

The default MVC template comes without a “Startup” class, but we need to configure our OWIN OpenID Connect middleware at the start of our Web App, so add a new class named “Startup” and paste the code below. There is a lot of code here, so jump to the next paragraph as I will do my best to explain what we have included in this class.

public class Startup
    {
        // App config settings
        private static string clientId = ConfigurationManager.AppSettings["ida:ClientId"];
        private static string aadInstance = ConfigurationManager.AppSettings["ida:AadInstance"];
        private static string tenant = ConfigurationManager.AppSettings["ida:Tenant"];
        private static string redirectUri = ConfigurationManager.AppSettings["ida:RedirectUri"];

        // B2C policy identifiers
        public static string SignUpPolicyId = ConfigurationManager.AppSettings["ida:SignUpPolicyId"];
        public static string SignInPolicyId = ConfigurationManager.AppSettings["ida:SignInPolicyId"];
        public static string ProfilePolicyId = ConfigurationManager.AppSettings["ida:UserProfilePolicyId"];

        public void Configuration(IAppBuilder app)
        {
            ConfigureAuth(app);
        }

        public void ConfigureAuth(IAppBuilder app)
        {
            app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);

            app.UseCookieAuthentication(new CookieAuthenticationOptions() );

            // Configure OpenID Connect middleware for each policy
            app.UseOpenIdConnectAuthentication(CreateOptionsFromPolicy(SignUpPolicyId));
            app.UseOpenIdConnectAuthentication(CreateOptionsFromPolicy(ProfilePolicyId));
            app.UseOpenIdConnectAuthentication(CreateOptionsFromPolicy(SignInPolicyId));
        }

        // Used for avoiding yellow-screen-of-death
        private Task AuthenticationFailed(AuthenticationFailedNotification<OpenIdConnectMessage, OpenIdConnectAuthenticationOptions> notification)
        {
            notification.HandleResponse();
            if (notification.Exception.Message == "access_denied")
            {
                notification.Response.Redirect("/");
            }
            else
            {
                notification.Response.Redirect("/Home/Error?message=" + notification.Exception.Message);
            }

            return Task.FromResult(0);
        }

        private OpenIdConnectAuthenticationOptions CreateOptionsFromPolicy(string policy)
        {
            return new OpenIdConnectAuthenticationOptions
            {
                // For each policy, give OWIN the policy-specific metadata address, and
                // set the authentication type to the id of the policy
                MetadataAddress = String.Format(aadInstance, tenant, policy),
                AuthenticationType = policy,
              
                // These are standard OpenID Connect parameters, with values pulled from web.config
                ClientId = clientId,
                RedirectUri = redirectUri,
                PostLogoutRedirectUri = redirectUri,
                Notifications = new OpenIdConnectAuthenticationNotifications
                {
                    AuthenticationFailed = AuthenticationFailed
                },
                Scope = "openid",
                ResponseType = "id_token",

                // This piece is optional - it is used for displaying the user's name in the navigation bar.
                TokenValidationParameters = new TokenValidationParameters
                {
                    NameClaimType = "name",
                    SaveSigninToken = true // important to save the token in the bootstrap context
                }
            };
        }
    }

What we have implemented here is the following:

  • From line 4-12 we read the app settings for the keys we included in the MVC App web.config, where they represent the Azure AD B2C tenant and policy names. Note that the policy name fields are declared public as they will be referenced in another class.
  • Inside the method “ConfigureAuth” we have done the following:
    • The line app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType) configures the OWIN security pipeline and informs the OpenID Connect middleware that the default authentication type we will use is “Cookies”; this means that the claims encoded in the token we receive from the Azure AD B2C tenant will be stored in a cookie (the session for the authenticated user).
    • The line app.UseCookieAuthentication(new CookieAuthenticationOptions()) registers a cookie authentication middleware instance with default options; the authentication type here is equivalent to the one we set in the previous step, so it will be “Cookies” too.
    • The app.UseOpenIdConnectAuthentication lines configure the OWIN security pipeline to use the authentication provider (Azure AD B2C) per policy; in our case, there are 3 different policies we already defined.
  • The method CreateOptionsFromPolicy takes the policy name as an input parameter and returns an object of type “OpenIdConnectAuthenticationOptions”. This object is responsible for controlling the OpenID Connect middleware. The properties used to configure the instance of “OpenIdConnectAuthenticationOptions” are as below:
    • The MetadataAddress property accepts the address of the discovery document endpoint for our Azure AD B2C tenant per policy, so for example, the discovery endpoint for policy “B2C_1_Signup” will be “https://login.microsoftonline.com/BitofTechDemo.onmicrosoft.com/v2.0/.well-known/openid-configuration?p=B2C_1_Signup”. This discovery document will be used to get information from Azure AD B2C on how to generate authentication requests and validate incoming token responses.
    • The AuthenticationType property informs the middleware that the authentication operation used is one of the policies we already defined, so for example if you defined a fourth policy and didn’t register it with the OpenID Connect middleware, the tokens issued by that policy will be rejected.
    • The ClientId property tells Azure AD B2C which ID to use to match the requests originating from the Web App. This represents the Azure AD B2C App we registered earlier in the previous posts.
    • The RedirectUri property informs Azure AD B2C where your app wants the requested token response to be returned to; the value of this URL should be registered previously in the “ReplyURLs” values of the Azure AD B2C App we defined earlier.
    • The PostLogoutRedirectUri property informs Azure AD B2C where to redirect the browser after a sign out operation has completed successfully.
    • The Scope property is used to inform our Azure AD B2C tenant that our web app needs to use the “OpenID Connect” protocol for authentication.
    • The ResponseType property indicates what our Web App needs from the Azure AD B2C tenant after this authentication process; in our case, we only need an id_token.
    • The TokenValidationParameters property is used to store the information needed to validate the tokens; we only need to change 2 settings here, the NameClaimType and the SaveSigninToken. Setting the “NameClaimType” value to “name” will allow us to read the display name of the user by calling User.Identity.Name, and setting the “SaveSigninToken” to “true” will allow us to save the token we received from the authentication process in the claims created (inside the session cookie). This will be useful to retrieve the token from the claims when we want to call the Web API. Keep in mind that the cookie size will get larger as we are storing the token inside it.
    • Lastly, the Notifications property allows us to inject our custom code during certain phases of the authentication process. The phase we are interested in here is the AuthenticationFailed phase; in this phase we want to redirect the user to the root directory of the Web App in case he/she clicked cancel on the sign up or sign in forms, and we need to redirect to the error view if we received any other exception during the authentication process.

This was the most complicated part of configuring our Web App to use our Azure AD B2C tenant. The next steps should be simpler: we will modify some views and add some new actions to issue requests to our Web API and call the Azure AD B2C policies.

Step 5: Call the Azure B2C Policies

Now we need to configure our Web App to invoke the policies we created. To do so, add a new controller named “AccountController” and paste the code below:

public class AccountController : Controller
    {
        public void SignIn()
        {
            if (!Request.IsAuthenticated)
            {
                // To execute a policy, you simply need to trigger an OWIN challenge.
                // You can indicate which policy to use by specifying the policy id as the AuthenticationType
                HttpContext.GetOwinContext().Authentication.Challenge(
                    new AuthenticationProperties() { RedirectUri = "/" }, Startup.SignInPolicyId);
            }
        }

        public void SignUp()
        {
            if (!Request.IsAuthenticated)
            {
                HttpContext.GetOwinContext().Authentication.Challenge(
                    new AuthenticationProperties() { RedirectUri = "/" }, Startup.SignUpPolicyId);
            }
        }

        public void Profile()
        {
            if (Request.IsAuthenticated)
            {
                HttpContext.GetOwinContext().Authentication.Challenge(
                    new AuthenticationProperties() { RedirectUri = "/" }, Startup.ProfilePolicyId);
            }
        }

        public void SignOut()
        {
            // To sign out the user, you should issue an OpenIDConnect sign out request
            if (Request.IsAuthenticated)
            {
                IEnumerable<AuthenticationDescription> authTypes = HttpContext.GetOwinContext().Authentication.GetAuthenticationTypes();
                HttpContext.GetOwinContext().Authentication.SignOut(authTypes.Select(t => t.AuthenticationType).ToArray());
            }
        }
    }

What we have implemented here is simple, and it is the same for the SignIn, SignUp, and Profile actions: we call the Challenge method and specify the related policy name for each action.

The “Challenge” method in the OWIN pipeline accepts an instance of the AuthenticationProperties object, which is used to set the settings of the action we want to do (Sign in, Sign up, Edit Profile). We only set the “RedirectUri” here to the root path of our Web App, taking into consideration that this “RedirectUri” has nothing to do with the “RedirectUri” we have defined in Azure AD B2C. This can be a different URI where you want the browser to redirect the user only after a successful operation takes place.

Regarding the SignOut action, we need to sign the user out from different places: one by removing the app's local session we created using the “Cookies” authentication, and the other by informing the OpenID Connect middleware to send a sign out request message to our Azure AD B2C tenant so the user is signed out from there too. That’s why we are retrieving all the authentication types available for our Web App and then passing those different authentication types to the “SignOut” method.

Now let’s add a partial view which renders the links to call those actions, so add a new partial view named “_LoginPartial.cshtml” under the “Shared” folder and paste the code below:

@if (Request.IsAuthenticated)
{
    <text>
        <ul class="nav navbar-nav navbar-right">
            <li>
                <a id="profile-link">@User.Identity.Name</a>
                <div id="profile-options" class="nav navbar-nav navbar-right">
                    <ul class="profile-links">
                        <li class="profile-link">
                            @Html.ActionLink("Edit Profile", "Profile", "Account")
                        </li>
                    </ul>
                </div>
            </li>
            <li>
                @Html.ActionLink("Sign out", "SignOut", "Account")
            </li>
        </ul>
    </text>
}
else
{
    <ul class="nav navbar-nav navbar-right">
        <li>@Html.ActionLink("Sign up", "SignUp", "Account", routeValues: null, htmlAttributes: new { id = "signUpLink" })</li>
        <li>@Html.ActionLink("Sign in", "SignIn", "Account", routeValues: null, htmlAttributes: new { id = "loginLink" })</li>
    </ul>
}

Notice that part of the partial view will be rendered only if the user is authenticated, and notice how we are displaying the user's “Display Name” from the claim named “name” by simply calling @User.Identity.Name.

Now we need to reference this partial view in the “_Layout.cshtml” view; we just need to replace the last div in the body section with the section below:

<div class="navbar-collapse collapse">
	<ul class="nav navbar-nav">
		<li>@Html.ActionLink("Home", "Index", "Home")</li>
		<li>@Html.ActionLink("Orders List", "Index", "Orders")</li>
	</ul>
	@Html.Partial("_LoginPartial")
</div>

Step 6: Call the Web API from the MVC App

Now we want to add actions to start invoking the protected API we’ve created by passing the token obtained from the Azure AD B2C tenant in the “Authorization” header for each protected request. We will add support for creating a new order and listing all the orders related to the authenticated user. If you recall from the previous post, we will depend on the claim named “objectidentifier” to read the User ID value encoded in the token as a claim.
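
As a reminder, the User ID is extracted from this claim on the Web API side (covered in the previous post) rather than in the MVC app. A minimal sketch of reading it, assuming the standard Azure AD object identifier claim type URI, could look like this:

// Hypothetical helper on the Web API side: reads the caller's user ID from the
// "objectidentifier" claim. The claim type URI used here is an assumption.
private static string GetUserObjectId(System.Security.Claims.ClaimsPrincipal principal)
{
    return principal
        .FindFirst("http://schemas.microsoft.com/identity/claims/objectidentifier")?.Value;
}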

To do so we will add a new controller named “OrdersController” under the “Controllers” folder and add 2 action methods named “Index” and “Create”. Add the file and paste the code below:

[Authorize]
    public class OrdersController : Controller
    {
        private static string serviceUrl = ConfigurationManager.AppSettings["api:OrdersApiUrl"];

        // GET: Orders
        public async Task<ActionResult> Index()
        {
            try
            {

                var bootstrapContext = ClaimsPrincipal.Current.Identities.First().BootstrapContext as System.IdentityModel.Tokens.BootstrapContext;

                HttpClient client = new HttpClient();

                client.BaseAddress = new Uri(serviceUrl);

                client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", bootstrapContext.Token);

                HttpResponseMessage response = await client.GetAsync("api/orders");

                if (response.IsSuccessStatusCode)
                {

                    var orders = await response.Content.ReadAsAsync<List<OrderModel>>();

                    return View(orders);
                }
                else
                {
                    // If the call failed with access denied, show the user an error indicating they might need to sign-in again.
                    if (response.StatusCode == System.Net.HttpStatusCode.Unauthorized)
                    {
                        return new RedirectResult("/Error?message=Error: " + response.ReasonPhrase + " You might need to sign in again.");
                    }
                }

                return new RedirectResult("/Error?message=An Error Occurred Reading Orders List: " + response.StatusCode);
            }
            catch (Exception ex)
            {
                return new RedirectResult("/Error?message=An Error Occurred Reading Orders List: " + ex.Message);
            }
        }

        public ActionResult Create()
        {
            return View();
        }

        [HttpPost]
        public async Task<ActionResult> Create([Bind(Include = "ShipperName,ShipperCity")]OrderModel order)
        {

            try
            {
                var bootstrapContext = ClaimsPrincipal.Current.Identities.First().BootstrapContext as System.IdentityModel.Tokens.BootstrapContext;

                HttpClient client = new HttpClient();

                client.BaseAddress = new Uri(serviceUrl);

                client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", bootstrapContext.Token);

                HttpResponseMessage response = await client.PostAsJsonAsync("api/orders", order);

                if (response.IsSuccessStatusCode)
                {
                    return RedirectToAction("Index");
                }
                else
                {
                    // If the call failed with access denied, show the user an error indicating they might need to sign-in again.
                    if (response.StatusCode == System.Net.HttpStatusCode.Unauthorized)
                    {
                        return new RedirectResult("/Error?message=Error: " + response.ReasonPhrase + " You might need to sign in again.");
                    }
                }

                return new RedirectResult("/Error?message=An Error Occurred Creating Order: " + response.StatusCode);
            }
            catch (Exception ex)
            {
                return new RedirectResult("/Error?message=An Error Occurred Creating Order: " + ex.Message);
            }

        }

    }

    public class OrderModel
    {
        public string OrderID { get; set; }
        [Display(Name = "Shipper")]
        public string ShipperName { get; set; }
        [Display(Name = "Shipper City")]
        public string ShipperCity { get; set; }
        public DateTimeOffset TS { get; set; }
    }

What we have implemented here is the following:

  • We have added an [Authorize] attribute on the controller, so any unauthenticated (anonymous) request (where the session cookie doesn’t exist) to any of the actions in this controller will result in a redirect to the sign in policy we have configured.
  • Notice how we are reading the BootstrapContext from the current “ClaimsPrincipal” object; this context contains a property named “Token” which we will send in the “Authorization” header for the Web API. Note that if you forgot to set the property “SaveSigninToken” of the “TokenValidationParameters” to “true” then this will return “null”.
  • We are using HttpClient to craft the requests and call the Web API endpoints we defined earlier. There is no need to pay attention to the User ID property in the MVC App as this property is encoded in the token itself, and the Web API will take the responsibility of decoding it and storing it in Azure table storage along with the order information.

Step 7: Add views for the Orders Controller

I will not dive into details here; as you know, we need to add 2 views to support rendering the list of orders and creating a new order. For the sake of completeness I will paste the cshtml for each view, so add a new folder named “Orders” under the “Views” folder, then add 2 new views named “Index.cshtml” and “Create.cshtml” and paste the code below:

@model IEnumerable<AADB2C.WebClientMvc.Controllers.OrderModel>
@{
    ViewBag.Title = "Orders";
}
<h2>Orders</h2>
<br />
<p>
    @Html.ActionLink("Create New", "Create")
</p>

<table class="table table-bordered table-striped table-hover table-condensed" style="table-layout: auto">
    <thead>
        <tr>
            <td>Order Id</td>
            <td>Shipper</td>
            <td>Shipper City</td>
            <td>Date</td>
        </tr>
    </thead>
    @foreach (var item in Model)
    {
        <tr>
            <td>
                @Html.DisplayFor(modelItem => item.OrderID)
            </td>
            <td>
                @Html.DisplayFor(modelItem => item.ShipperName)
            </td>
            <td>
                @Html.DisplayFor(modelItem => item.ShipperCity)
            </td>
            <td>
                @Html.DisplayFor(modelItem => item.TS)
            </td>
        </tr>
    }
</table>

@model AADB2C.WebClientMvc.Controllers.OrderModel
@{
    ViewBag.Title = "New Order";
}
<h2>Create Order</h2>
@using (Html.BeginForm())
{
    <div class="form-horizontal">
        <hr />

        <div class="form-group">
            @Html.LabelFor(model => model.ShipperName, htmlAttributes: new { @class = "control-label col-md-2" })
            <div class="col-md-10">
                @Html.EditorFor(model => model.ShipperName, new { htmlAttributes = new { @class = "form-control" } })
            </div>
        </div>

        <div class="form-group">
            @Html.LabelFor(model => model.ShipperCity, htmlAttributes: new { @class = "control-label col-md-2" })
            <div class="col-md-10">
                @Html.EditorFor(model => model.ShipperCity, new { htmlAttributes = new { @class = "form-control" } })
            </div>
        </div>

        <div class="form-group">
            <div class="col-md-offset-2 col-md-10">
                <input type="submit" value="Save Order" class="btn btn-default" />
            </div>
        </div>
    </div>

    <div>
        @Html.ActionLink("Back to Orders", "Index")
    </div>
}

Step 8: Lastly, let’s test out the complete flow

To test this out, the user will click on the “Orders List” link in the top navigation menu and will be redirected to the Azure AD B2C tenant where s/he can enter the app's local credentials. If the credentials provided are valid, a successful authentication will take place, a token will be obtained and stored in the claims identity for the authenticated user, and then the orders view is displayed while the token is sent in the Authorization header to get all orders for this user. It should look something like the animated image below:

Azure AD B2C animation

That’s it for now folks, I hope you find it useful 🙂 In the next post, I will cover how to integrate MSAL with Azure AD B2C and use it in a desktop application. If you find the post useful, do not forget to share it 🙂

The Source code for this tutorial is available on GitHub.

The MVC Web App has been published on Azure App Services, so feel free to try it out using the Base URL (https://aadb2cmvcapp.azurewebsites.net/)

Follow me on Twitter @tjoudeh

Resources



Damien Bowden: Implementing UNDO, REDO in ASP.NET Core

The article shows how to implement UNDO, REDO functionality in an ASP.NET Core application using EFCore and MS SQL Server.

This is the first blog in a 3 part series. The second blog will implement the UI using Angular 2 and the third article will improve the concurrent stacks with max limits to prevent memory leaks etc.

Code: https://github.com/damienbod/Angular2AutoSaveCommands

2016.08.19 ASP.NET Core 1.0.1

Other articles in this series:

  1. Implementing UNDO, REDO in ASP.NET Core
  2. Angular 2 Auto Save, Undo and Redo
  3. ASP.NET Core Action Arguments Validation using an ActionFilter

The application was created using the ASP.NET Core Web API template. The CommandDto class is used for all commands sent from the UI. The class is used for the create, update and delete requests. The class has 4 properties. The CommandType property defines the types of commands which can be sent. The supported CommandType values are defined as constants in the CommandTypes class. The PayloadType is used to define the type for the Payload JObject. The server application can then use this, to convert the JObject to a C# object. The ActualClientRoute is required to support the UNDO and REDO logic. Once the REDO or UNDO is executed, the client needs to know where to navigate to. The values are strings and are totally controlled by the client SPA application. The server just persists these for each command.

using Newtonsoft.Json.Linq;

namespace Angular2AutoSaveCommands.Models
{
    public class CommandDto
    {
        public string CommandType { get; set; }
        public string PayloadType { get; set; }
        public JObject Payload { get; set; }
        public string ActualClientRoute { get; set;}
    }
	
    public static  class CommandTypes
    {
        public const string ADD = "ADD";
        public const string UPDATE = "UPDATE";
        public const string DELETE = "DELETE";
        public const string UNDO = "UNDO";
        public const string REDO = "REDO";
    }
	
    public static class PayloadTypes
    {
        public const string Home = "HOME";
        public const string ABOUT = "ABOUT";
        public const string NONE = "NONE";
    }
}

The CommandController is used to provide the Execute, UNDO and REDO support for the UI, or any other client which will use the service. The controller injects the ICommandHandler which implements the logic for the HTTP POST requests.

using Angular2AutoSaveCommands.Models;
using Angular2AutoSaveCommands.Providers;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json.Linq;

namespace Angular2AutoSaveCommands.Controllers
{
    [Route("api/[controller]")]
    public class CommandController : Controller
    {
        private readonly ICommandHandler _commandHandler;
        public CommandController(ICommandHandler commandHandler)
        {
            _commandHandler = commandHandler;
        }

        [ServiceFilter(typeof(ValidateCommandDtoFilter))]
        [HttpPost]
        [Route("Execute")]
        public IActionResult Post([FromBody]CommandDto value)
        {
            _commandHandler.Execute(value);
            return Ok(value);
        }

        [HttpPost]
        [Route("Undo")]
        public IActionResult Undo()
        {
            var commandDto = _commandHandler.Undo();
            return Ok(commandDto);
        }

        [HttpPost]
        [Route("Redo")]
        public IActionResult Redo()
        {
            var commandDto = _commandHandler.Redo();
            return Ok(commandDto);
        }
    }
}

The ICommandHandler has three methods, Execute, Undo and Redo. The Undo and the Redo methods return a CommandDto class. This class contains the actual data and the URL for the client routing.

using Angular2AutoSaveCommands.Models;

namespace Angular2AutoSaveCommands.Providers
{
    public interface ICommandHandler 
    {
        void Execute(CommandDto commandDto);
        CommandDto Undo();
        CommandDto Redo();
    }
}

The CommandHandler class implements the ICommandHandler interface. This class provides the two ConcurrentStack fields for the REDO and the UNDO stack. The stacks are static and so need to be thread safe. The UNDO and the REDO return a CommandDTO which contains the relevant data after the operation which has been executed.

The Execute method just calls the execution depending on the payload. This method then creates the appropriate command, adds the command to the database for the history, executes the logic and adds the command to the UNDO stack.

The undo method pops a command from the undo stack, calls the Unexecute method, adds the command to the redo stack, and saves everything to the database.

The redo method pops a command from the redo stack, calls the Execute method, adds the command to the undo stack, and saves everything to the database.

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using Angular2AutoSaveCommands.Models;
using Angular2AutoSaveCommands.Providers.Commands;
using Microsoft.Extensions.Logging;

namespace Angular2AutoSaveCommands.Providers
{
    public class CommandHandler : ICommandHandler
    {
        private readonly ICommandDataAccessProvider _commandDataAccessProvider;
        private readonly DomainModelMsSqlServerContext _context;
        private readonly ILoggerFactory _loggerFactory;
        private readonly ILogger _logger;

        // TODO remove these and used persistent stacks
        private static ConcurrentStack<ICommand> _undocommands = new ConcurrentStack<ICommand>();
        private static ConcurrentStack<ICommand> _redocommands = new ConcurrentStack<ICommand>();

        public CommandHandler(ICommandDataAccessProvider commandDataAccessProvider, DomainModelMsSqlServerContext context, ILoggerFactory loggerFactory)
        {
            _commandDataAccessProvider = commandDataAccessProvider;
            _context = context;
            _loggerFactory = loggerFactory;
            _logger = loggerFactory.CreateLogger("CommandHandler");
        }

        public void Execute(CommandDto commandDto)
        {
            if (commandDto.PayloadType == PayloadTypes.ABOUT)
            {
                ExecuteAboutDataCommand(commandDto);
                return;
            }

            if (commandDto.PayloadType == PayloadTypes.Home)
            {
                ExecuteHomeDataCommand(commandDto);
                return;
            }

            if (commandDto.PayloadType == PayloadTypes.NONE)
            {
                ExecuteNoDataCommand(commandDto);
                return;
            }
        }

        // TODO add return object for UI
        public CommandDto Undo()
        {  
            var commandDto = new CommandDto();
            commandDto.CommandType = CommandTypes.UNDO;
            commandDto.PayloadType = PayloadTypes.NONE;
            commandDto.ActualClientRoute = "NONE";

            if (_undocommands.Count > 0)
            {
                ICommand command;
                if (_undocommands.TryPop(out command))
                {
                    _redocommands.Push(command);
                    command.UnExecute(_context);
                    commandDto.Payload = command.ActualCommandDtoForNewState(CommandTypes.UNDO).Payload;
                    _commandDataAccessProvider.AddCommand(CommandEntity.CreateCommandEntity(commandDto));
                    _commandDataAccessProvider.Save();
                    return command.ActualCommandDtoForNewState(CommandTypes.UNDO);
                }   
            }

            return commandDto;
        }

        // TODO add return object for UI
        public CommandDto Redo()
        {
            var commandDto = new CommandDto();
            commandDto.CommandType = CommandTypes.REDO;
            commandDto.PayloadType = PayloadTypes.NONE;
            commandDto.ActualClientRoute = "NONE";

            if (_redocommands.Count > 0)
            {
                ICommand command;
                if(_redocommands.TryPop(out command))
                { 
                    _undocommands.Push(command);
                    command.Execute(_context);
                    commandDto.Payload = command.ActualCommandDtoForNewState(CommandTypes.REDO).Payload;
                    _commandDataAccessProvider.AddCommand(CommandEntity.CreateCommandEntity(commandDto));
                    _commandDataAccessProvider.Save();
                    return command.ActualCommandDtoForNewState(CommandTypes.REDO);
                }
            }

            return commandDto;
        }

        private void ExecuteHomeDataCommand(CommandDto commandDto)
        {
            if (commandDto.CommandType == CommandTypes.ADD)
            {
                ICommandAdd command = new AddHomeDataCommand(_loggerFactory, commandDto);
                command.Execute(_context);
                _commandDataAccessProvider.AddCommand(CommandEntity.CreateCommandEntity(commandDto));
                _commandDataAccessProvider.Save();
                command.UpdateIdforNewItems();
                _undocommands.Push(command);
            }

            if (commandDto.CommandType == CommandTypes.UPDATE)
            {
                ICommand command = new UpdateHomeDataCommand(_loggerFactory, commandDto);
                command.Execute(_context);
                _commandDataAccessProvider.AddCommand(CommandEntity.CreateCommandEntity(commandDto));
                _commandDataAccessProvider.Save();
                _undocommands.Push(command);
            }

            if (commandDto.CommandType == CommandTypes.DELETE)
            {
                ICommand command = new DeleteHomeDataCommand(_loggerFactory, commandDto);
                command.Execute(_context);
                _commandDataAccessProvider.AddCommand(CommandEntity.CreateCommandEntity(commandDto));
                _commandDataAccessProvider.Save();
                _undocommands.Push(command);
            }
        }

        private void ExecuteAboutDataCommand(CommandDto commandDto)
        {
            if(commandDto.CommandType == CommandTypes.ADD)
            {
                ICommandAdd command = new AddAboutDataCommand(_loggerFactory, commandDto);
                command.Execute(_context);
                _commandDataAccessProvider.AddCommand(CommandEntity.CreateCommandEntity(commandDto));
                _commandDataAccessProvider.Save();
                command.UpdateIdforNewItems();
                _undocommands.Push(command);
            }

            if (commandDto.CommandType == CommandTypes.UPDATE)
            {
                ICommand command = new UpdateAboutDataCommand(_loggerFactory, commandDto);
                command.Execute(_context);
                _commandDataAccessProvider.AddCommand(CommandEntity.CreateCommandEntity(commandDto));
                _commandDataAccessProvider.Save();
                _undocommands.Push(command);
            }

            if (commandDto.CommandType == CommandTypes.DELETE)
            {
                ICommand command = new DeleteAboutDataCommand(_loggerFactory, commandDto);
                command.Execute(_context);
                _commandDataAccessProvider.AddCommand(CommandEntity.CreateCommandEntity(commandDto));
                _commandDataAccessProvider.Save();
                _undocommands.Push(command);
            }
        }

        private void ExecuteNoDataCommand(CommandDto commandDto)
        {
            _commandDataAccessProvider.AddCommand(CommandEntity.CreateCommandEntity(commandDto));
            _commandDataAccessProvider.Save();
        }

    }
}

The ICommand interface contains the public methods required for the commands in this application. The DbContext is passed as a parameter to the Execute and UnExecute methods because the context from the current HTTP request is used, and not the original context from the initial Execute HTTP request.

using Angular2AutoSaveCommands.Models;

namespace Angular2AutoSaveCommands.Providers.Commands
{
    public interface ICommand
    {
        void Execute(DomainModelMsSqlServerContext context);
        void UnExecute(DomainModelMsSqlServerContext context);

        CommandDto ActualCommandDtoForNewState(string commandType);
    }
}

The UpdateAboutDataCommand class implements the ICommand interface. This command supplies the logic to update and also to undo an update in the execute and the unexecute methods. For the undo, the previous state of the entity is saved in the command.

 
using System;
using System.Linq;
using Angular2AutoSaveCommands.Models;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json.Linq;

namespace Angular2AutoSaveCommands.Providers.Commands
{
    public class UpdateAboutDataCommand : ICommand
    {
        private readonly ILogger _logger;
        private readonly CommandDto _commandDto;
        private AboutData _previousAboutData;

        public UpdateAboutDataCommand(ILoggerFactory loggerFactory, CommandDto commandDto)
        {
            _logger = loggerFactory.CreateLogger("UpdateAboutDataCommand");
            _commandDto = commandDto;
        }

        public void Execute(DomainModelMsSqlServerContext context)
        {
            _previousAboutData = new AboutData();

            var aboutData = _commandDto.Payload.ToObject<AboutData>();
            var entity = context.AboutData.First(t => t.Id == aboutData.Id);

            _previousAboutData.Description = entity.Description;
            _previousAboutData.Deleted = entity.Deleted;
            _previousAboutData.Id = entity.Id;

            entity.Description = aboutData.Description;
            entity.Deleted = aboutData.Deleted;
            _logger.LogDebug("Executed");
        }

        public void UnExecute(DomainModelMsSqlServerContext context)
        {
            var aboutData = _commandDto.Payload.ToObject<AboutData>();
            var entity = context.AboutData.First(t => t.Id == aboutData.Id);

            entity.Description = _previousAboutData.Description;
            entity.Deleted = _previousAboutData.Deleted;
            _logger.LogDebug("Unexecuted");
        }

        public CommandDto ActualCommandDtoForNewState(string commandType)
        {
            if (commandType == CommandTypes.UNDO)
            {
                var commandDto = new CommandDto();
                commandDto.ActualClientRoute = _commandDto.ActualClientRoute;
                commandDto.CommandType = _commandDto.CommandType;
                commandDto.PayloadType = _commandDto.PayloadType;
            
                commandDto.Payload = JObject.FromObject(_previousAboutData);
                return commandDto;
            }
            else
            {
                return _commandDto;
            }
        }
    }
}

The startup class adds the interface/class pairs to the built-in IoC. The MS SQL Server is defined here using the appsettings to read the database connection string. EFCore migrations are used to create the database.

using System;
using System.Linq;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Angular2AutoSaveCommands.Providers;
using Microsoft.EntityFrameworkCore;

namespace Angular2AutoSaveCommands
{
    public class Startup
    {
        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        public IConfigurationRoot Configuration { get; }

        public void ConfigureServices(IServiceCollection services)
        {
            var sqlConnectionString = Configuration.GetConnectionString("DataAccessMsSqlServerProvider");

            services.AddDbContext<DomainModelMsSqlServerContext>(options =>
                options.UseSqlServer(  sqlConnectionString )
            );

            services.AddMvc();

            services.AddScoped<ICommandDataAccessProvider, CommandDataAccessProvider>();
            services.AddScoped<ICommandHandler, CommandHandler>();
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole(Configuration.GetSection("Logging"));
            loggerFactory.AddDebug();

            var angularRoutes = new[] {
                 "/home",
                 "/about"
             };

            app.Use(async (context, next) =>
            {
                if (context.Request.Path.HasValue && null != angularRoutes.FirstOrDefault(
                    (ar) => context.Request.Path.Value.StartsWith(ar, StringComparison.OrdinalIgnoreCase)))
                {
                    context.Request.Path = new PathString("/");
                }

                await next();
            });

            app.UseDefaultFiles();

            app.UseStaticFiles();

            app.UseMvc(routes =>
            {
                routes.MapRoute(
                    name: "default",
                    template: "{controller=Home}/{action=Index}/{id?}");
            });
        }
    }
}

The application API can be tested using Fiddler. The following HTTP POST requests are sent in this order: execute(ADD), execute(UPDATE), Undo, Undo, Redo.

http://localhost:5000/api/command/execute
User-Agent: Fiddler
Host: localhost:5000
Content-Type: application/json

{
  "commandType":"ADD",
  "payloadType":"ABOUT",
  "payload":
   { 
      "Id":0,
      "Description":"add a new about item",
      "Deleted":false
    },
   "actualClientRoute":"https://damienbod.com/add"
}

http://localhost:5000/api/command/execute
User-Agent: Fiddler
Host: localhost:5000
Content-Type: application/json

{
  "commandType":"UPDATE",
  "payloadType":"ABOUT",
  "payload":
   { 
      "Id":10003,
      "Description":"update the existing about item",
      "Deleted":false
    },
   "actualClientRoute":"https://damienbod.com/update"
}

http://localhost:5000/api/command/undo
http://localhost:5000/api/command/undo
http://localhost:5000/api/command/redo

The data is sent in this order and the undo and redo work as required.
undoRedofiddler_01

The data can also be validated in the database using the CommandEntity table.

undoRedosql_02

Links:

http://www.codeproject.com/Articles/33384/Multilevel-Undo-and-Redo-Implementation-in-Cshar



Pedro Félix: On contracts and HTTP APIs

Reading the Twitter conversation started by this tweet made me put into written words some of the ideas that I have about HTTP APIs, contracts and “out-of-band” information.
Since it’s vacation time, I’ll be brief and incomplete.

  • On any interface, it is impossible to avoid having contracts (i.e. shared “out-of-band” information) between provider and consumer. On a HTTP API, the syntax and semantics of HTTP itself is an example of this shared information. If JSON is used as a base for the representation format, then its syntax and semantics rules are another example of shared “out-of-band” information.
  • However not all contracts are equal in the generality, flexibility and evolvability they allow. Having the contract include a fixed resource URI is very different from having the contract define a link relation. The former prohibits any change to the URI structure (e.g. host name, HTTP vs HTTPS, embedded information), while the latter enables it. Therefore, designing the contract is a very important task when creating HTTP APIs. And since the transfer contract is already rather well defined by HTTP, most of the design emphasis should be on the representation contract, including the hypermedia components (a short sketch after this list contrasts the two styles).
  • Also, not all contracts have the same cost to implement (e.g. having hardcoded URIs is probably simpler than having to find links on representations), so (as usual) trade-offs have to be taken into account.
  • When implementing HTTP APIs is also very important to have the contract-related areas clearly identified. For me, this typically involves being able to easily answering questions such as: – Will I be breaking the contract if
    • I change this property name on this model?
    • I add a new property to this model?
    • I change the routing rules (e.g. adding a new path segment)?
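
As an illustration of the link-relation point above, here is a minimal sketch (not from the original post; the representation shape with "links"/"rel"/"href" and the URLs are made up). A client bound to a link-relation contract discovers the orders URI at runtime instead of hardcoding it:

using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

class LinkRelationSketch
{
    static async Task Main()
    {
        var http = new HttpClient();

        // Contract style 1: a fixed resource URI baked into the client -
        // any change to the URI structure breaks consumers.
        var ordersByFixedUri = await http.GetStringAsync("https://api.example.org/v1/orders");

        // Contract style 2: only the entry point and the "orders" link relation are part
        // of the contract; the actual URI is discovered from the representation itself.
        var root = JObject.Parse(await http.GetStringAsync("https://api.example.org/"));
        var ordersHref = (string)root["links"].First(l => (string)l["rel"] == "orders")["href"];
        var ordersByLink = await http.GetStringAsync(ordersHref);

        Console.WriteLine(ordersByLink);
    }
}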

Hope this helps.
Looking forward to feedback.

 



Taiseer Joudeh: Azure Active Directory B2C Overview and Policies Management – Part 1

Prior to joining Microsoft I was heavily involved in architecting and building a large-scale HTTP API which would be consumed by a large number of mobile application consumers on multiple platforms (iOS, Android, and Windows Phone). Securing the API and architecting the authentication and authorization part of it was one of the large and challenging features which we built from scratch, as we only needed to support local database accounts (allowing users to log in using their own existing email/username and password). As well, writing proprietary code for each platform to consume the authentication and authorization endpoints, store the tokens, and refresh them silently was a bit challenging and required skilled mobile app developers to implement it securely on the different platforms. Don’t ask me why we didn’t use Xamarin for cross-platform development, it is a long story 🙂 While developing the back-end API I learned that building an identity management solution is not a trivial feature, and it is better to outsource it to a cloud service provider if this is a feasible option and you want your dev team to focus on building what matters; your business features!

Recently Microsoft announced the general availability in North America data centers of a service named “Azure Active Directory B2C” which, in my humble opinion, fills the gap of having a cloud identity and access management service targeted especially at mobile app and web developers who need to build apps for consumers; consumers who want to sign in with their existing email/usernames, create new app-specific local accounts, or use their existing social accounts (Facebook, Google, LinkedIn, Amazon, Microsoft account) to sign in to the mobile/web app.

Azure Active Directory B2C

Azure Active Directory B2C allows back-end developers to focus on the core business of their services while outsourcing identity management to Azure Active Directory B2C, including signing in, signing up, password reset, profile editing, etc. One important feature to mention here is that the service runs in the Azure cloud while your HTTP API can be hosted on-premise; there is no need to have everything in the cloud if your use case requires hosting your services on-premise. You can read more about all the features of Azure Active Directory B2C by visiting their official page.

Azure Active Directory B2C integrates seamlessly with the new unified authentication library named MSAL (Microsoft Authentication Library). This library helps developers obtain tokens from Active Directory, Azure Active Directory B2C, and MSA for accessing protected resources. The library supports different platforms, covering .NET 4.5+ (desktop apps and web apps), Windows Universal apps, Windows Store apps (Windows 8 and above), iOS (via Xamarin), Android (via Xamarin), and .NET Core. The library is still in preview, so it should not be used in production applications yet.

So during this series of posts, I will be covering different aspects of Azure Active Directory B2C as well as integrating it with MSAL (Microsoft Authentication Library) on different front-end platforms (desktop application and web application).

Azure Active Directory B2C Overview and Policies Management

The source code for this tutorial is available on GitHub.

The MVC APP has been published on Azure App Services, so feel free to try it out using the Base URL (https://aadb2cmvcapp.azurewebsites.net)

I have broken this series down into multiple posts which I’ll be publishing gradually.

What we’ll build in this tutorial?

In this post we will build a Web API 2 HTTP API which will be responsible for managing shipping orders (listing orders, adding new ones, etc.). The orders data will be stored in Azure Table Storage, while we outsource all the identity management to Azure Active Directory B2C: service users/consumers will rely on AAD B2C to sign up for new accounts using their app-specific email/password, and then log in using those app-specific accounts.

That said, we need front-end apps to manipulate orders and communicate with the HTTP API. We will build different types of apps during this series of posts, some of which will use MSAL.

So the components that all the tutorials will be built from are:

  • An Azure Active Directory B2C tenant for identity management; it will act as our IdP (Identity Provider).
  • An ASP.NET Web API 2 acting as the HTTP API service, secured by the Azure Active Directory B2C tenant.
  • Different front-end apps which will communicate with Azure Active Directory B2C to sign in users, obtain tokens, send them to the protected HTTP API, and display the results returned by the HTTP API.

So let’s get our hands dirty and start building the tutorial.

Building the Back-end Resource (Web API)

Step 1: Creating the Web API Project

In this tutorial I’m using Visual Studio 2015 and .NET Framework 4.5.2. To get started, create an empty solution and name it “WebApiAzureAcitveDirectoryB2C.sln”, then add a new empty ASP.NET web application named “AADB2C.Api”. The selected template for the project will be the “Empty” template with no core dependencies; check the image below:

VS2015 Web Api Template

Once the project has been created, open its properties and set “SSL Enabled” to “True”, then copy the “SSL URL” value. Right-click on the project, select “Properties”, select the “Web” tab on the left side, paste the “SSL URL” value into the “Project Url” text field and click “Save”. We need to use the https scheme locally when we debug the application. Check the image below:

Web Api SSL Enable

Note: If this is the first time you enable SSL locally, you might get prompted to install local IIS Express Certificate, click “Yes”.

Step 2: Install the needed NuGet Packages to bootstrap the API

This project is empty, so we need to install the NuGet packages required to set up our OWIN server and configure ASP.NET Web API 2 to be hosted within an OWIN server. Open the NuGet Package Manager Console and install the packages below:

Install-Package Microsoft.AspNet.WebApi -Version 5.2.3
Install-Package Microsoft.AspNet.WebApi.Owin -Version 5.2.3
Install-Package Microsoft.Owin.Host.SystemWeb -Version 3.0.1

Step 3: Add Owin “Startup” Class

We need to build the API components ourselves because we didn’t use a ready-made template; this way is cleaner and you understand the need for each component you install in your solution. Add a new class named “Startup” containing the code below. Please note that the method “ConfigureOAuth” is intentionally left empty, as we will visit this class many times after we create our Azure Active Directory B2C tenant. What I need to do now is build the API without any protection, and then protect it with our new Azure Active Directory B2C IdP:

public class Startup
    {

        public void Configuration(IAppBuilder app)
        {
            HttpConfiguration config = new HttpConfiguration();

            // Web API routes
            config.MapHttpAttributeRoutes();

            ConfigureOAuth(app);

            app.UseWebApi(config);

        }

        public void ConfigureOAuth(IAppBuilder app)
        {
           
        }
    }

Step 4: Add support to store data on Azure Table Storage

Note: I have decided to store the fictitious data about customers’ orders in Azure Table Storage, as this service will be published online and I need to demonstrate how to distinguish users’ data based on the signed-in user. Feel free to use whatever permanent storage you like to complete this tutorial; the implementation here is simple, so you can replace it with SQL Server, MySQL, or any other NoSQL store.

So let’s add the NuGet packages which allow us to access Azure Table Storage from a .NET client; I recommend referring to the official documentation if you need to read more about Azure Table Storage.

Install-Package WindowsAzure.Storage
Install-Package Microsoft.WindowsAzure.ConfigurationManager

Step 5: Add Web API Controller responsible for orders management

Now we want to add a controller which is responsible for orders management (adding orders, listing all orders which belong to a certain user). Add a new controller named “OrdersController” inside a folder named “Controllers” and paste in the code below:

[RoutePrefix("api/Orders")]
    public class OrdersController : ApiController
    {
        CloudTable cloudTable = null;

        public OrdersController()
        {
            // Retrieve the storage account from the connection string.
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));

            // Create the table client.
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();

            // Retrieve a reference to the table.
            cloudTable = tableClient.GetTableReference("orders");

            // Create the table if it doesn't exist.
            // Uncomment the line below if you are not sure whether the table has already been created;
            // there is no need to keep checking whether the table exists on every request.
            //cloudTable.CreateIfNotExists();
        }

        [Route("")]
        public IHttpActionResult Get()
        {
         
            //This will be read from the access token claims.
            var userId = "TaiseerJoudeh";

            TableQuery <OrderEntity> query = new TableQuery<OrderEntity>()
                .Where(TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, userId));

            var orderEntities = cloudTable.ExecuteQuery(query).Select(
                o => new OrderModel() {
                OrderID = o.RowKey,
                ShipperName = o.ShipperName,
                ShipperCity = o.ShipperCity,
                TS = o.Timestamp
                });

            return Ok(orderEntities);
        }

        [Route("")]
        public IHttpActionResult Post (OrderModel order)
        {
            //This will be read from the access token claims.
            var userId = "TaiseerJoudeh";

            OrderEntity orderEntity = new OrderEntity(userId);

            orderEntity.ShipperName = order.ShipperName;
            orderEntity.ShipperCity = order.ShipperCity;

            TableOperation insertOperation = TableOperation.Insert(orderEntity);

            // Execute the insert operation.
            cloudTable.Execute(insertOperation);

            order.OrderID = orderEntity.RowKey;

            order.TS = orderEntity.Timestamp;

            return Ok(order);
        }
    }

    #region Classes

    public class OrderModel
    {
        public string OrderID { get; set; }
        public string ShipperName { get; set; }
        public string ShipperCity { get; set; }
        public DateTimeOffset TS { get; set; }
    }

    public class OrderEntity : TableEntity
    {
        public OrderEntity(string userId)
        {
            this.PartitionKey = userId;
            this.RowKey = Guid.NewGuid().ToString("N");
        }

        public OrderEntity() { }

        public string ShipperName { get; set; }

        public string ShipperCity { get; set; }

    }

    #endregion

What we have implemented above is very straightforward: in the constructor of the controller, we read the connection string for Azure Table Storage from web.config and create a cloud table instance which references the table named “orders”. This table will hold the orders data.
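
For local testing the connection string can simply live in web.config, since CloudConfigurationManager.GetSetting falls back to the appSettings section when the app is not running as an Azure cloud service. Below is a minimal sketch: the key name matches the code above, while the account name and key are placeholders you would replace with your own storage account values.

<appSettings>
  <!-- placeholder values: replace with your own storage account name and key -->
  <add key="StorageConnectionString"
       value="DefaultEndpointsProtocol=https;AccountName=yourstorageaccount;AccountKey=yourstoragekey" />
</appSettings>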

The structure of the table (if you are thinking in a SQL context, even though Azure Table Storage is a NoSQL store) is simple and is represented by the class named “OrderEntity”: the “PartitionKey” represents the “UserId”, and the “RowKey” represents the “OrderId”. The “OrderId” will always contain an auto-generated value.

Please note the following: a) You should not store the connection string for the table storage in web.config; it is better to use Azure Key Vault as a secure way to store your keys, or you can set it from Azure App Settings if you are going to host the API on Azure. b) The “UserId” is fixed for now, but eventually it will be read from the authenticated user’s access token claims once we establish the IdP and configure our API to rely on Azure Active Directory B2C to protect it.

By taking a look at the “POST” action, you will notice that we are adding a new record to the table storage with the “UserId” fixed for now; we will revisit this and fix it. The same applies to the “GET” action, where we read the data from Azure Table Storage for a fixed user.
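
As a purely hypothetical sketch of where this is heading (not part of the tutorial code yet), once the API is protected the fixed value could be replaced by reading the user’s object identifier claim; the claim type shown here assumes the token handler maps the “oid” claim to the Microsoft objectidentifier claim type:

// requires: using System.Security.Claims;
var principal = ClaimsPrincipal.Current;

// Fall back to the fixed value until the Azure AD B2C IdP is wired up.
var userId = principal?.FindFirst("http://schemas.microsoft.com/identity/claims/objectidentifier")?.Value
             ?? "TaiseerJoudeh";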

Now the API is ready for testing; you can issue a GET or POST request (both shown below) and the data will be stored under the fixed “UserId”, which is “TaiseerJoudeh”. Note that there is no Authorization header set, as the API is still publicly available to anyone. Below is a reference for the POST request:

POST Request:

POST /api/orders HTTP/1.1
Host: localhost:44339
Content-Type: application/json
Cache-Control: no-cache
Postman-Token: 6f1164fa-8560-98fd-6566-892517f1003e

{
    "shipperName" :"Nike",
    "shipperCity": "Clinton"
}
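
For completeness, below is the matching GET request (assuming the same local SSL port as above); it returns the orders stored under the fixed “UserId”:

GET Request:

GET /api/orders HTTP/1.1
Host: localhost:44339
Cache-Control: no-cache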

Configuring the Azure Active Directory B2C Tenant

Step 6: Create an Azure Active Directory B2C tenant

Now we need to create the Azure Active Directory B2C tenant. For the time being you create it from the Azure Classic Portal, and you will then be able to manage all its settings from the new Azure Preview Portal.

  • To start the creation process, log in to the classic portal and navigate to: New > App Services > Active Directory > Directory > Custom Create, as in the image below:

Azure AD B2C Directory

  • A new popup will appear, as in the image below, asking you to fill in some information. Note that if you selected one of the following countries (United States, Canada, Costa Rica, Dominican Republic, El Salvador, Guatemala, Mexico, Panama, Puerto Rico and Trinidad and Tobago) your Azure AD B2C tenant will be a production-scale tenant, as Azure AD B2C is GA only in the countries listed (North America). This will change in the coming months and more countries will be announced as GA; you can read more about the road map of Azure AD B2C here. Do not forget to check “This is a B2C directory” 🙂

Azure AD B2C New Directory

  • After your tenant has been created, it will appear in the Active Directory extension bar, as in the image below; select the tenant, click on the “Configure” tab, then click on “Manage B2C Settings”. This will open the new Azure Preview Portal, where we will register the app and manage policies.

Azure AD B2C Manage Settings

Step 7: Register our application in the Azure AD B2C tenant

Now we need to register the application under the tenant we’ve created; this will allow us to add the sign-in, sign-up, and edit-profile features to our app. To do so, follow the steps below:

  • Select “Applications” from the “Settings” blade for the B2C tenant we’ve created, then click on the “Add” icon at the top.
  • A new blade will open asking you to fill in the following information:
    • Name: This is the application name that describes your application to consumers. In our case I have used “BitofTech Demo App”.
    • Web API/Web APP: we need to turn this on as we are protecting a Web API and a web app.
    • Allow implicit flow: we will turn this on as well, as we need to use the OpenID Connect protocol to obtain an id token.
    • Reply URL: these are the registered URLs where Azure Active Directory B2C will send the authentication response (tokens) or error responses to. The client applications calling the API can specify the Reply URL, but it must be registered in the tenant by the administrator in order to work. In our case I will set the Reply URL to the Web API URL, which is “https://localhost:44339/”; this is good for testing purposes, but in the next post I will add another URL for the web application we will build to consume the API. As you can see, you can register many Reply URLs so you can support different environments (dev, staging, production, etc.).
    • Native Client: you need to turn this on if you are building a mobile or desktop application client. For the time being there is no need to turn it on as we are building a web application (server-side app), but we will revisit this and enable it once we build a desktop app to consume the API in a coming post.
    • App key or App secret: this is used to generate a “Client Secret” for the app, which is needed to authenticate the app in the authorization code/hybrid OAuth 2.0 flows. We will need this in future posts once I describe how to obtain access tokens, id tokens and refresh tokens using raw HTTP requests. For the time being, there is no need to generate an App key.
  • Once you fill in all the information, click “Save”; the application will be created and an Application ID will be generated. Copy this value and keep it in a notepad, as we will use it later on.
  • Below is an image which shows the app after filling in the needed information:

Azure AD B2C New App

Step 8: Selecting Identity Providers

Azure Active Directory B2C offers multiple social identity providers (Microsoft, Google, Amazon, LinkedIn and Facebook) in addition to local app-specific accounts. A local account can be configured to use either a “Username” or an “Email” as the unique attribute for the account; we will use “Email”, and we will use only local accounts in this tutorial to keep things simple and straightforward.

You can change the “Identity Providers” by selecting the “Identity providers” blade. This link will be helpful if you need to configure it.

Step 9: Add custom attributes

An Azure AD B2C directory comes with a set of “built-in” attributes that represent information about the user (Email, First name, Last name, etc.). These attributes can be extended in case you need to collect extra information about the user when signing up (creating a profile) or editing it.

At the moment you can only create an attribute with the data type set to “String”; I believe this limitation will be resolved in coming releases.

To do so, select the “User attributes” blade and click on the “Add” icon; a new blade will open asking you to fill in the attribute name, data type and description. In our case, I’ve added an attribute named “Gender” to capture the gender of the user during the registration process (profile creation or sign up). Below is an image which represents this process:

B2C Custom Attribute

We will see in the next steps how we can retrieve this custom attribute value in our application. There are two ways to do so: the first is to include it in the claims encoded in the token, and the second is to use the Azure AD Graph API. We will use the first method.

In the next step, I will show you how to include this custom attribute in the sign-up policy.

Step 10: Creating different policies

The unique thing about Azure Active Directory B2C is its extensible policy framework, which gives developers an easy and reusable way to build the identity experience they want to provide for application consumers (end users). For example, to enroll a new user in your app and create an app-specific local account, you create a sign-up policy where you configure the attributes to capture from the user, the attributes (claims) you want returned after the policy executes successfully, and the identity providers consumers are allowed to use. You can also configure the look and feel of the sign-up page with simple modifications such as changing label names or the order of the fields, or by replacing the UI entirely (more about this in a future post). All this applies to the other policies used to implement identity features such as signing in and editing the profile.

In addition, using the extensible policy framework we can create multiple policies of different types in our tenant and use them in our applications as needed. Policies can be reused across applications, and they can also be exported and uploaded for easier management. This allows us to define and modify identity experiences with minimal or no changes to application code.

Now let’s create the first policy, the “Sign-up” policy, which builds the experience users go through during the sign-up process, and I’ll show you how to test it out. To do so, follow the steps below:

  • Select the “Sign-up” policies.
  • Click on the “Add” icon at the top of the blade.
  • Select a name for the policy; picking a clear name is important as we will reference it in our application. In our case I’ve used “signup”.
  • Select the “Identity providers” and select “Email signup”. In our case this is the only provider we have configured for this tenant so far.
  • Select the “Sign-up” attributes. Now we have the chance to choose the attributes we want to collect from the user during the sign-up process. I have selected 6 attributes, as in the image below.
  • Select the “Application claims”. Now we have the chance to choose the claims we want returned in the tokens sent back to our application after a successful sign-up. Remember that those claims are encoded within the token, so do not go crazy adding claims, as the token size will increase. I have selected 9 claims, as in the image below.
  • Finally, click on “Create” button.

Signup Policy Attribute

Notes:

  • The policy that is created will be named “B2C_1_signup”; all policies are prefixed with the “B2C_1_” fragment. Do not ask me why, it seems to be an implementation detail 🙂
  • You can change the attribute label names (Surname -> Last Name), change the order of the fields by dragging the attributes, and set whether a field is mandatory or not. Notice how I changed the custom attribute “Gender” to display as a drop-down list with fixed items such as “Male” and “Female”. All this can be done by selecting the “Page UI customization” section.
  • Once the policy has been created you can configure the ID token and refresh token expiration times by selecting the “Token, session & SSO config” section. I will cover this in coming posts; for now we will keep the defaults for all the policies we create, and you can read more about this here.
  • Configuring ID token and refresh token expiration times is done per policy, not per tenant. This gives you better flexibility and finer-grained control over how policies are managed, but I cannot think of a use case where you would want different expiration times for different policies, so I am not sure why this was not done per tenant. We will keep the values the same for all the policies we create unless we are testing something out.
  • Below is an image showing how to change the order of the custom attribute “Gender” among the other fields, as well as how to set the “User input type” to a drop-down list:

Azure B2C Edit Attribute

Step 11: Creating the Sign in and Edit Profile policies

I won’t bore you with the repeated details of creating the other 2 policies which we will be using during this tutorial; they follow the same approach illustrated in the previous step. Please note the following about the newly created policies:

  • The policy which will be used to sign in the user (login) will be named “Signin”, so after creation it will be named “B2C_1_Signin”.
  • The policy which will be used to edit the created profile will be named “Editprofile”, so after creation it will be named “B2C_1_Editprofile”.
  • Do not forget to configure the Gender custom attribute for the “Editprofile” policy, as we need to display the values in a drop-down list instead of a text box.
  • Select the same claims we have already selected for the signup policy (8 claims).
  • You can click the “Run now” button and test the new policies using a user that you have already created with the sign-up policy (jump to the next step first).
  • For the time being, the only way to execute these policies and test them out in this post and the coming one is the “Run now” button, until I build a web application which communicates with the Web API and the Azure Active Directory B2C tenant.

Step 12: Testing the created sign-up policy in the Azure AD B2C tenant

Azure Active Directory B2C provides us with the ability to test the policies locally without leaving the Azure portal. To do so, all you need to do is click the “Run now” button and select the preferred Reply URL (in case you registered more than one Reply URL when you registered the app); in our case we have only a single app and a single Reply URL. The Id token will be returned as a hash fragment to the selected Reply URL.

Once you click the “Run now” button a new window will open and you will be able to test the sign-up policy by filling in the needed information. Notice that you need to use a real email address so that an activation code can be sent to it and you can verify that you own this email. I believe the Azure AD team implemented email verification before account creation to avoid creating many fake emails that would never get verified. Smart decision.

Once you receive the verification email with the six-digit code, enter it in the verification code text box and click “Verify”; if all is good the “Create” button is enabled and you can finish filling in the profile. You can change the content of the email by following this link.

The password policy (complexity) used here is the same one used in Azure Active Directory; you can read more about it here.

After you fill in all the mandatory attributes, as in the image below, click “Create” and you will notice that a redirect takes place to the Reply URL with an Id token returned as a hash fragment. This Id token contains all the claims specified in the policy; you can inspect it using a JWT debugging tool such as calebb.net. If we debug the token we received after running the sign-up policy, we can see all the claims we asked for encoded in the JWT token.

Azure AD B2C Signup Test

Notes about the claims:

  • The new “Gender” custom attribute we added is returned in a claim named “extension_Gender”. It seems that all custom attributes are prefixed with “extension”; I need to validate this with the Azure AD team.
  • The globally unique user identifier is returned in the claim named “oid”; we will depend on this claim value to distinguish between registered users.
  • This token is generated based on the policy named “B2C_1_signup”, note the claim named “tfp”.

To have a better understanding of each claim meaning, please check this link.

{
  "exp": 1471954089,
  "nbf": 1471950489,
  "ver": "1.0",
  "iss": "https://login.microsoftonline.com/tfp/3d960283-c08d-4684-b378-2a69fa63966d/b2c_1_signup/v2.0/",
  "sub": "Not supported currently. Use oid claim.",
  "aud": "bc348057-3c44-42fc-b4df-7ef14b926b78",
  "nonce": "defaultNonce",
  "iat": 1471950489,
  "auth_time": 1471950489,
  "oid": "31ef9c5f-6416-48b8-828d-b6ce8db77d61",
  "emails": [
    "ahmad.hasan@gmail.com"
  ],
  "newUser": true,
  "given_name": "Ahmad",
  "family_name": "Hasan",
  "extension_Gender": "M",
  "name": "Ahmad Hasan",
  "country": "Jordan",
  "tfp": "B2C_1_signup"
}

This post turned out to be longer than anticipated, so I will continue in the next post, where I will show you how to reconfigure our Web API project to rely on our Azure AD B2C IdP and validate those tokens.

The source code for this tutorial is available on GitHub.

The MVC APP has been published on Azure App Services, so feel free to try it out using the Base URL (https://aadb2cmvcapp.azurewebsites.net)

Follow me on Twitter @tjoudeh

Resources

The post Azure Active Directory B2C Overview and Policies Management – Part 1 appeared first on Bit of Technology.


Pedro Félix: Focus on the representation semantics, leave the transfer semantics to HTTP

A couple of days ago I was reading the latest OAuth 2.0 Authorization Server Metadata document version and my eye got caught on one sentence. In section 3.2, the document states

A successful response MUST use the 200 OK HTTP status code and return a JSON object using the “application/json” content type (…)

My first reaction was thinking that this specification was being redundant: of course a 200 OK HTTP status should be returned on a successful response. However, that “MUST” in the text made me think: is a 200 really the only acceptable response status code for a successful response? In my opinion, the answer is no.

For instance, if caching and ETags are being used, the client can send a conditional GET request (see Hypertext Transfer Protocol (HTTP/1.1): Conditional Requests) using the If-None-Match header, for which a 304 (Not Modified) status code is perfectly acceptable. Another example is if the metadata location changes and the server responds with a 301 (Moved Permanently) or a 302 (Found) status code. Does that mean the request was unsuccessful? In my opinion, no. It just means that the request should be followed by a subsequent request to another location.
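
As a small illustration of the conditional GET case (the host name and ETag value below are made up, and the path is only illustrative), the exchange could look like this:

GET /.well-known/oauth-authorization-server HTTP/1.1
Host: as.example.com
If-None-Match: "33a64df5"

HTTP/1.1 304 Not Modified
ETag: "33a64df5"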

So, why does this little observation deserve a blog post?
Well, mainly because it reflects two common tendencies when designing HTTP APIs (or HTTP interfaces):

  • First, the tendency to redefine transfer semantics that are already defined by HTTP.
  • Secondly, a very simplistic view of HTTP, ignoring parts such as caching and optimistic concurrency.

The HTTP specification already defines a quite rich set of mechanisms for representation transfer, and HTTP related specifications should take advantage of that. What HTTP does not define is the semantics of the representation itself. That should be the focus of specifications such as the OAuth 2.0 Authorization Server Metadata.

When defining HTTP APIs, focus on the representation semantics. The transfer semantics are already defined by the HTTP protocol.

 



Dominick Baier: Why does my Authorize Attribute not work?

Sad title, isn’t it? The alternative would have been “The complicated relationship between claim types, ClaimsPrincipal, the JWT security token handler and the Authorize attribute role checks” – but that wasn’t very catchy.

But the reality is that many people are struggling with getting role-based authorization (e.g. [Authorize(Roles = "foo")]) to work – especially with external authentication like IdentityServer or other identity providers.

To fully understand the internals I have to start at the beginning…

IPrincipal
When .NET 1.0 shipped, it had a very rudimentary authorization API based on roles. Microsoft created the IPrincipal interface which specified a bool IsInRole(string roleName). They also created a couple of implementations for doing role-based checks against Windows groups (WindowsPrincipal) and custom data stores (GenericPrincipal).

The idea behind putting that authorization primitive into a formal interface was to create higher level functionality for doing role-based authorization. Examples of that are the PrincipalPermissionAttribute, the good old web.config Authorization section…and the [Authorize] attribute.

Moving to Claims
In .NET 4.5 the .NET team did a radical change and injected a new base class into all existing principal implementations – ClaimsPrincipal. While claims were much more powerful than just roles, they needed to maintain backwards compatibility. In other words, what was supposed to happen if someone moved a pre-4.5 application to 4.5 and called IsInRole? Which claim will represent roles?

To make the behaviour configurable they introduced the RoleClaimType (and also NameClaimType) property on ClaimsIdentity. So practically speaking, when you call IsInRole, ClaimsPrincipal checks its identities for a claim of whatever type you set on RoleClaimType with the given value. As a default value they decided on re-using a WS*/SOAP-era proprietary type they had introduced with WIF (as part of the ClaimTypes class): http://schemas.microsoft.com/ws/2008/06/identity/claims/role.

So to summarize, if you call IsInRole, by default the assumption is that your claims representing roles have the type mentioned above – otherwise the role check will not succeed.

When you are staying within the Microsoft world and their guidance, you will probably always use the ClaimTypes class which has a Role member that maps to the above claim type. This will make role checks automagically work.
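
A minimal sketch of that behaviour (assuming an identity whose roles arrive in a plain "role" claim) looks like this:

// requires: using System.Security.Claims;
var roleClaim = new Claim("role", "admin");
var legacyClaim = new Claim(ClaimTypes.Role, "admin");

// The default RoleClaimType is the legacy Microsoft role claim type,
// so a plain "role" claim is not picked up by IsInRole...
var isAdmin = new ClaimsPrincipal(new ClaimsIdentity(new[] { roleClaim }, "pwd")).IsInRole("admin");     // false

// ...whereas a claim of type ClaimTypes.Role is.
var isAdmin2 = new ClaimsPrincipal(new ClaimsIdentity(new[] { legacyClaim }, "pwd")).IsInRole("admin");  // true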

Fast forward to modern Applications and OpenID Connect
When you are working with external identity providers, the chance is quite low that they will use the Microsoft legacy claim types. They will rather use the more modern standard OpenID Connect claim types.

In that case you need to be aware of the default behaviour of ClaimsPrincipal – and either set the NameClaimType and RoleClaimType to the right values manually – or transform the external claim types to Microsoft’s claim types.

The latter approach is what Microsoft implemented (of course) in their JWT validation library. The JWT handler tries to map all kinds of external claim types to the corresponding values on the ClaimTypes class – e.g. role to http://schemas.microsoft.com/ws/2008/06/identity/claims/role.

I personally don’t like that, because I think that claim types are an explicit contract in your application, and changing them should be part of application logic and claims transformation – and not a “smart” feature of token validation. That’s why you will always see the following line in my code:

JwtSecurityTokenHandler.InboundClaimTypeMap.Clear();

..which turns the mapping off. Newer versions of the handler call it DefaultInboundClaimTypeMap.

Setting the claim types manually
The constructor of ClaimsIdentity allows setting the claim types explicitly:

var id = new ClaimsIdentity(claims, "authenticationType", "name", "role");
var p = new ClaimsPrincipal(id);

Also the token validation parameters object used by the JWT library has that feature. It bubbles up to e.g. the OpenID Connect authentication middleware like this:

var oidcOptions = new OpenIdConnectOptions
{
    AuthenticationScheme = "oidc",
    SignInScheme = "cookies",
 
    Authority = Clients.Constants.BaseAddress,
    ClientId = "mvc.implicit",
    ResponseType = "id_token",
    SaveTokens = true,
 
    TokenValidationParameters = new TokenValidationParameters
    {
        NameClaimType = "name",
        RoleClaimType = "role",
    }
};

Other JWT related libraries have the same capabilities – just have a look around.

Summary
Role checks are legacy – they only exist in the (Microsoft) claims world because of backwards compatibility with IPrincipal. There’s no need for them anymore – and you shouldn’t do role checks. If you want to check for the existence of specific claims – simply query the claims collection for what you are looking for.
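
For example (a hypothetical sketch, assuming User is a ClaimsPrincipal as it is in ASP.NET Core controllers, and with made-up claim names), instead of a role check you can simply ask for the claim you care about:

// Check for a specific claim instead of calling IsInRole.
if (User.HasClaim("role", "admin"))
{
    // do the privileged thing
}

// Or read a claim value directly.
var subscription = User.FindFirst("subscription_level")?.Value;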

If you need to bring old code that uses role checks forward, either let the JWT handler do some magic for you, or take control over the claim types yourself. You probably know by now what I would do ;)

 

…oh – and just in case you were looking for some practical advice here. The next time your [Authorize] attribute does not behave as expected – bring up the debugger, inspect your ClaimsPrincipal (e.g. Controller.User) and compare the RoleClaimType property with the claim type that holds your roles. If they are different – there’s your answer.

Screenshot 2016-08-21 14.20.28

 

 


Filed under: .NET Security, OAuth, OpenID Connect, WebAPI


Dominick Baier: Trying IdentityServer

We have a demo instance of IdentityServer3 on https://demo.identityserver.io.

I already used this for various samples (e.g. the OpenID Connect native clients) – and it makes it easy to try IdentityServer with your clients without having to deploy and configure anything yourself.

The Auth0 guys just released a nice OpenID Connect playground website that allows you to interact with arbitrary spec compliant providers. If you want to try it yourself with IdentityServer – click on the configuration link and use these settings:

Screenshot 2016-08-17 10.09.34

In essence you only need to provide the URL of the discovery document, the client ID and the secret. The rest gets configured automatically for you.

Pressing Start will bring you to our standard login page:

Screenshot 2016-08-17 11.22.56

You can either use bob / bob (or alice / alice) to log in – or use your Google account.

Logging in will bring you to the consent screen – and then back to the playground:

Screenshot 2016-08-17 11.24.24

Now you can exercise the code to token exchange as well as the validation. As a last step you can even jump directly to jwt.io for inspecting the identity token:

Screenshot 2016-08-17 11.27.05

The source code for the IdentityServer demo web site can be found here.

We also have more client types preconfigured, e.g. OpenID Connect hybrid flow and implicit flow, as well as clients using PKCE. You can see the full list here.

You can request the typical OpenID Connect scopes – as well as a scope called api. The resulting access token can then be used to call https://demo.identityserver.io/api/identity which in turn will echo back the token claims as a JSON document.
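
Calling that endpoint is just a plain HTTP request with the access token in the Authorization header (the token value below is a placeholder):

GET /api/identity HTTP/1.1
Host: demo.identityserver.io
Authorization: Bearer <access_token>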

Screenshot 2016-08-17 11.45.50

Have fun!

 


Filed under: ASP.NET, IdentityServer, OpenID Connect, OWIN, Uncategorized, WebAPI


Dominick Baier: Commercial Support Options for IdentityServer

Many customers have asked us for production support for IdentityServer. While this is something we would love to provide, Brock and I can’t do that on our own because we can’t guarantee the response times.

I am happy to announce that we have now partnered with our good friends at Rock Solid Knowledge to provide commercial support for IdentityServer!

RSK has excellent people with deep IdentityServer knowledge and Brock and I will help out as 2nd level support if needed.

Head over to https://www.identityserver.com/ and get in touch with them!


Filed under: ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: Fixing OAuth 2.0 with OpenID Connect?

I didn’t like Nat’s Fixing OAuth? post.

“For protecting a resource with low value, current RFC6749 and RFC6750 with an appropriate constraint should be good enough…For protecting a resource whose value is higher than a certain level, e.g., the write access to the Financial API, then it would be more appropriate to use a modified protocol.”

I agree that write access to a financial API is a high-value operation (and security measures will go far beyond authentication and token requests) – but most users and implementers of OAuth 2.0 based systems would surely disagree that their resources only have a low value.

Then on the other hand I agree that OAuth 2.0 (or rather RFC6749 and 6750) on its own indeed has its issues and I would advise against using it (important part “on its own”).

Instead I would recommend using OpenID Connect – all of the OAuth 2.0 problems regarding client to provider communication are already fixed in OIDC – metadata, signed protocol responses, sender authentication, nonces etc.

When we designed identity server, we always saw OpenID Connect as a “super-set” of OAuth 2.0 and always recommended against using OAuth without the OIDC parts. Some people didn’t like that – but applying sound constraints definitely helped security.

I really don’t understand why this is not the official messaging? Maybe it’s political?

Screenshot 2016-07-29 08.42.17.png

(no response)

Wrt the issues around bearer tokens – well – I really, really don’t understand why proof of possession and HTTP signing take that long and seem to be such a low priority. We successfully implemented PoP tokens in IdentityServer and customers are using them. Of course there are issues – there will always be issues. But sitting on a half-done spec for years will definitely not solve them.

So my verdict is – for interactive applications, don’t use OAuth 2.0 on its own. Just use OpenID Connect and identity tokens in addition to access tokens – you don’t need to be a financial API to have proper security.

 


Filed under: IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: .NET Core 1.0 is released, but where is IdentityServer?

In short: we are working on it.

Migrating the code from Katana to ASP.NET Core was actually mostly mechanical. But obviously new approaches and patterns have been introduced which might, or might not, align directly with how we used to do things in IdentityServer3.

We also wanted to take the time to do some re-work and re-thinking, as well as doing some breaking changes that we couldn’t easily do before.

For a roadmap – in essence we will release a beta including the new UI interaction next week. Then we will have an RC by August and an RTM before the final ASP.NET/.NET Core tooling ships later this year.

Meanwhile we encourage you to try the current bits and give us feedback. The more the better.

Stay tuned.


Filed under: ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: Update for authentication & API access for native applications and IdentityModel.OidcClient

The most relevant spec for authentication and API access for native apps has been recently updated.

If you are “that kind of person” that enjoys looking at diffs of pre-release RFCs – you would have spotted a new way of dealing with the system browser for desktop operating systems (e.g. Windows or MacOS).

Quoting section 7.3:

“More applicable to desktop operating systems, some environments allow apps to create a local HTTP listener on a random port, and receive URI redirects that way.  This is an acceptable redirect URI choice for native apps on compatible platforms.”

IOW – your application launches a local “web server”, starts the system browser with a local redirect URI and waits for the response to come back (either a code or an error). This is much easier than trying to fiddle with custom URL monikers and such on desktop operating systems.

William Denniss – one of the authors of the above spec and the corresponding reference implementations – also created a couple of samples that show the usage of that technique for Windows desktop apps.

Inspired by that, I created a sample showing how to do OpenID Connect authentication from a console application using IdentityModel.OidcClient.

In a nutshell – it works like this:

Open a local listener

// create a redirect URI using an available port on the loopback address.
string redirectUri = string.Format("http://127.0.0.1:7890/");
Console.WriteLine("redirect URI: " + redirectUri);
 
// create an HttpListener to listen for requests on that redirect URI.
var http = new HttpListener();
http.Prefixes.Add(redirectUri);
Console.WriteLine("Listening..");
http.Start();

 

Construct the start URL, open the system browser and wait for a response

var options = new OidcClientOptions(
    "https://demo.identityserver.io",
    "native.code",
    "secret",
    "openid profile api",
    redirectUri);
options.Style = OidcClientOptions.AuthenticationStyle.AuthorizationCode;
 
var client = new OidcClient(options);
var state = await client.PrepareLoginAsync();
 
Console.WriteLine($"Start URL: {state.StartUrl}");
            
// open system browser to start authentication
Process.Start(state.StartUrl);
 
// wait for the authorization response.
var context = await http.GetContextAsync();

 

Process the response and access the claims and tokens

var result = await client.ValidateResponseAsync(context.Request.Url.AbsoluteUri, state);
 
if (result.Success)
{
    Console.WriteLine("\n\nClaims:");
    foreach (var claim in result.Claims)
    {
        Console.WriteLine("{0}: {1}", claim.Type, claim.Value);
    }
 
    Console.WriteLine();
    Console.WriteLine("Access token:\n{0}", result.AccessToken);
 
    if (!string.IsNullOrWhiteSpace(result.RefreshToken))
    {
        Console.WriteLine("Refresh token:\n{0}", result.RefreshToken);
    }
}
else
{
    Console.WriteLine("\n\nError:\n{0}", result.Error);
}
 
http.Stop();

 

Sample can be found here – have fun ;)

 

 


Filed under: IdentityModel, OAuth, OpenID Connect, WebAPI

