Andrew Lock: Using CancellationTokens in ASP.NET Core MVC controllers

Using CancellationTokens in ASP.NET Core MVC controllers

In this post I'll show how you can use a CancellationToken in your ASP.NET Core action method to stop execution when a user cancels a request from their browser. This can be useful if you have long running requests that you don't want to continue using up resources when a user clicks "stop" or "refresh" in their browser.

I'm not really going to cover any of the details of async, await, Tasks or CancellationTokens in this post, I'm just going to look at how you can inject a CancellationToken into your action methods, and use that to detect when a user has cancelled a request.

Long running requests and cancellation

Have you ever been on a website where you've made a request for a page, and it just sits there, supposedly loading? Eventually you get bored and click the "Stop" button, or maybe hammer F5 to reload the page. Users expect a page to load pretty much instantly these days, and when it doesn't, a quick refresh can be very tempting.

That's all well and good for the user, but what about your poor server? If the action method the user is hitting takes a long time to run, then refreshing five times will fire off 5 requests. Now you're doing 5 times the work. That's the default behaviour in MVC - even though the user has refreshed the browser, which cancels the original request, your MVC action won't know that the value it's computing is going to be thrown away at the end of it!

In this post, we'll assume you have an MVC action that can take some time to complete, before sending a response to the user. While that action is processing, the user might cancel the request directly, or refresh the page (which effectively cancels the original request, and initiates a new one).

I'm ignoring the fact that long-running actions are generally a bad idea. If you find yourself with many long-running actions in your app, you might be better off considering a solution based on CQRS and message queues, so you can quickly return a response to the user, and process the result of the action on a background thread.

For example, consider the following MVC controller. This is a toy example that simply waits for 10s before returning a message to the user, but the Task.Delay() could be any long-running process, such as generating a large report to return to the user.

public class SlowRequestController : Controller  
{
    private readonly ILogger _logger;

    public SlowRequestController(ILogger<SlowRequestController> logger)
    {
        _logger = logger;
    }

    [HttpGet("/slowtest")]
    public async Task<string> Get()
    {
        _logger.LogInformation("Starting to do slow work");

        // slow async action, e.g. call external api
        await Task.Delay(10_000);

        var message = "Finished slow delay of 10 seconds.";

        _logger.LogInformation(message);

        return message;
    }
}

If we hit the URL /slowtest then the request will run for 10s, and eventually will return the message:

[Screenshot: the browser displaying the returned message]

If we check the logs, you can see the whole action executed as expected:

[Screenshot: log output showing the action executing to completion]

So now, what happens if the user refreshes the browser halfway through the request? The browser never receives the response from the first request, but as you can see from the logs, the action method executes to completion twice - once for the first (cancelled) request, and once for the second (refresh) request:

[Screenshot: log output showing the action executing to completion for both requests]

Whether this is correct behaviour will depend on your app. If the request modifies state, then you may not want to halt execution mid-way through a method. On the other hand, if the request has no side-effects, then you probably want to stop the (presumably expensive) action as soon as you can.

ASP.NET Core provides a mechanism for the web server (e.g. Kestrel) to signal when a request has been cancelled using a CancellationToken. This is exposed as HttpContext.RequestAborted, but you can also inject it automatically into your actions using model binding.

Using CancellationTokens in your MVC Actions

CancellationTokens are lightweight objects that are created by a CancellationTokenSource. When a CancellationTokenSource is cancelled, it notifies all the consumers of the CancellationToken. This allows one central location to notify all of the code paths in your app that cancellation was requested.

When cancelled, the IsCancellationRequested property of the cancellation token will be set to true, to indicate that the CancellationTokenSource has been cancelled. Depending on how you are using the token, you may or may not need to check this property yourself. I'll touch on this a little more in the next section, but for now, let's see how to use a CancellationToken in our action methods.
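As a minimal standalone illustration of this relationship (outside the MVC pipeline entirely), a CancellationTokenSource hands out tokens that observe its state:

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // The source is the central point of control for cancellation
        using (var cts = new CancellationTokenSource())
        {
            // Tokens handed out by the source observe its state
            CancellationToken token = cts.Token;

            Console.WriteLine(token.IsCancellationRequested); // False

            // Cancelling the source notifies every token it has handed out
            cts.Cancel();

            Console.WriteLine(token.IsCancellationRequested); // True
        }
    }
}
```

In the MVC scenario, the server owns the CancellationTokenSource for the request, and your action method only ever sees the token.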

Let's consider the previous example again. We have a long-running action method (which, for example, is generating a read-only report by calling out to a number of other APIs). As it is an expensive method, we want to stop executing the action as soon as possible if the request is cancelled by the user.

The following code shows how we can hook into the central CancellationTokenSource for the request, by injecting a CancellationToken into the action method, and passing the parameter to the Task.Delay call:

public class SlowRequestController : Controller  
{
    private readonly ILogger _logger;

    public SlowRequestController(ILogger<SlowRequestController> logger)
    {
        _logger = logger;
    }

    [HttpGet("/slowtest")]
    public async Task<string> Get(CancellationToken cancellationToken)
    {
        _logger.LogInformation("Starting to do slow work");

        // slow async action, e.g. call external api
        await Task.Delay(10_000, cancellationToken);

        var message = "Finished slow delay of 10 seconds.";

        _logger.LogInformation(message);

        return message;
    }
}

MVC will automatically bind any CancellationToken parameters in an action method to the HttpContext.RequestAborted token, using the CancellationTokenModelBinder. This model binder is registered automatically when you call services.AddMvc() (or services.AddMvcCore()) in Startup.ConfigureServices().

With this small change, we can test out our scenario again. We'll make an initial request, which starts the long-running action, and then we'll reload the page. As you can see from the logs below, the first request never completes. Instead the Task.Delay call throws a TaskCanceledException when it detects that the CancellationToken.IsCancellationRequested property is true, immediately halting execution.

[Screenshot: log output showing the first request halting without completing]

Shortly after the request is cancelled by the user refreshing the browser, the original request is aborted with a TaskCanceledException which propagates back through the MVC filter pipeline, and back up the middleware pipeline.

In this scenario, the Task.Delay() method keeps an eye on the CancellationToken for you, so you never need to manually check if the token has been cancelled yourself. Depending on your scenario, you may be able to rely on framework methods like these to check the state of the CancellationToken, or you may have to watch for cancellation requests yourself.

Checking the cancellation state

If you're calling a built in method that supports cancellation tokens, like Task.Delay() or HttpClient.SendAsync(), then you can just pass in the token, and let the inner method take care of actually cancelling (throwing) for you.
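To make that flow concrete, here's a standalone sketch (outside MVC, using Task.Delay as the cancellable operation) showing what happens when a token that gets cancelled is passed to a framework method:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        using (var cts = new CancellationTokenSource())
        {
            // Cancel the token after 100ms, as if the user had aborted the request
            cts.CancelAfter(100);

            try
            {
                // Task.Delay watches the token and throws when it is cancelled,
                // so the 10s delay never runs to completion
                await Task.Delay(10_000, cts.Token);
                Console.WriteLine("Finished the delay");
            }
            catch (TaskCanceledException)
            {
                Console.WriteLine("The operation was cancelled");
            }
        }
    }
}
```

In an action method you wouldn't catch the exception yourself; you'd let it propagate and handle it centrally, as shown later in the post.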

In other cases, you may have some synchronous work you're doing which you want to be able to cancel. For example, imagine you're building a report to calculate all of the commission due to a company's employees. You're looping over every employee, and then looping over each sale they've made.

A simple solution to be able to cancel this report generation mid-way would be to check the CancellationToken inside the for loop, and abandon ship if the user cancels the request. The following example represents this kind of situation by looping 10 times, and performing some synchronous (non-cancellable) work, represented by the call to Thread.Sleep(). At the start of each loop, we check the cancellation token and throw if cancellation has been requested. This lets us add cancellation to an otherwise long-running synchronous process.

public class SlowRequestController : Controller  
{
    private readonly ILogger _logger;

    public SlowRequestController(ILogger<SlowRequestController> logger)
    {
        _logger = logger;
    }

    [HttpGet("/slowtest")]
    public async Task<string> Get(CancellationToken cancellationToken)
    {
        _logger.LogInformation("Starting to do slow work");

        for(var i=0; i<10; i++)
        {
            cancellationToken.ThrowIfCancellationRequested();
            // slow non-cancellable work
            Thread.Sleep(1000);
        }
        var message = "Finished slow delay of 10 seconds.";

        _logger.LogInformation(message);

        return message;
    }
}

Now if you cancel the request, the call to ThrowIfCancellationRequested() will throw an OperationCanceledException, which again will propagate up the filter pipeline and up the middleware pipeline.

Tip: You don't have to use ThrowIfCancellationRequested(). You could check the value of IsCancellationRequested and exit the action gracefully. This article contains some general best practice patterns for working with cancellation tokens.
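A sketch of that graceful approach might look like the following. The ReportBuilder class and its method are hypothetical, purely to illustrate checking IsCancellationRequested and returning early instead of throwing:

```csharp
using System.Collections.Generic;
using System.Threading;

// Hypothetical example class, not part of the post's controller code
public class ReportBuilder
{
    // Builds up partial results, and stops early (without throwing)
    // if the caller cancels the token
    public List<string> BuildReport(IEnumerable<string> employees, CancellationToken cancellationToken)
    {
        var results = new List<string>();
        foreach (var employee in employees)
        {
            if (cancellationToken.IsCancellationRequested)
            {
                // Exit gracefully, returning whatever we've computed so far
                break;
            }
            results.Add($"Commission for {employee}");
        }
        return results;
    }
}
```

Whether throwing or returning early is appropriate depends on your callers: throwing makes cancellation impossible to ignore, while returning early lets you salvage partial work.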

Typically, exceptions in action methods are bad, and this exception is treated no differently. If you're using the ExceptionHandlerMiddleware or DeveloperExceptionPageMiddleware in your pipeline, these will attempt to handle the exception, and generate a user-friendly error message. Of course, the request has been cancelled, so the user will never see this message!

Rather than filling your logs with exception messages from cancelled requests, you will probably want to catch these exceptions. A good candidate for catching cancellation exceptions from your MVC actions is an ExceptionFilter.

Catching cancellations with an ExceptionFilter

ExceptionFilters are an MVC concept that can be used to handle exceptions that occur either in your action methods, or in your action filters. If you're not familiar with the filter pipeline, I recommend checking out the documentation.

You can apply ExceptionFilters at the action level, at the controller level (in which case they apply to every action in the controller), or at the global level (in which case they apply to every action in your app). Typically they're implemented as attributes, so you can decorate your action methods with them.

For this example, I'm going to create a simple ExceptionFilter and add it to the global filters. We'll handle the exception, log it, and create a simple response so that we can wind up the request as quickly as possible. The actual response (Result) we generate doesn't really matter, as it's never getting sent to the browser, so our goal is to handle the exception in as tidy a way as possible.

public class OperationCancelledExceptionFilter : ExceptionFilterAttribute  
{
    private readonly ILogger _logger;

    public OperationCancelledExceptionFilter(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<OperationCancelledExceptionFilter>();
    }
    public override void OnException(ExceptionContext context)
    {
        if(context.Exception is OperationCanceledException)
        {
            _logger.LogInformation("Request was cancelled");
            context.ExceptionHandled = true;
            context.Result = new StatusCodeResult(400);
        }
    }
}

This filter is very simple. It derives from the base ExceptionFilterAttribute for simplicity, and overrides the OnException method. This provides an ExceptionContext object with information about the exception, the action method being executed, the ModelState - all sorts of interesting stuff!

All we care about are the OperationCanceledException exceptions; if we get one, we just write a log message, mark the exception as handled, and return a 400 result. Obviously we could log more (the URL would be an obvious start), but you get the idea.

Note that we are handling OperationCanceledException. The Task.Delay method throws a TaskCanceledException when cancelled, but that derives from OperationCanceledException, so we'll catch both types with this filter.
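You can verify the inheritance relationship between the two exception types in a couple of lines:

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // TaskCanceledException derives from OperationCanceledException,
        // so a filter matching the base type catches both
        Exception ex = new TaskCanceledException();
        Console.WriteLine(ex is OperationCanceledException); // True
    }
}
```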

I'm not going to argue about whether this should be a 200/400/500 status code result. The request is cancelled and the client will never see it, so it really doesn't matter that much. I chose to go with a 400 result, but be aware that if you have any middleware in place to catch errors like this, such as the StatusCodePagesMiddleware, then it could end up catching the response and doing pointless extra work to generate a "friendly" error page. On the other hand, if you return a 200, be careful if you have middleware that might cache the response to this "successful" request!

To hook up the exception filter globally, you add it in the call to services.AddMvc() in Startup.ConfigureServices:

public class Startup  
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc(options =>
        {
            options.Filters.Add<OperationCancelledExceptionFilter>();
        });
    }
}

Now if the user refreshes their browser mid request, the request will still be cancelled, but we are back to a nice log message, instead of exceptions propagating all the way up our middleware pipeline.

[Screenshot: log output showing the "Request was cancelled" message]

Summary

Users can cancel requests to your web app at any point, by hitting the stop or reload button in their browser. Typically, your app will continue to generate a response anyway, even though Kestrel won't send it to the user. If you have a long-running action method, then you may want to detect when a request is cancelled, and stop execution.

You can do this by injecting a CancellationToken into your action method, which will be automatically bound to the HttpContext.RequestAborted token for the request. You can check this token for cancellation as usual, and pass it to any asynchronous methods that support it. If the request is cancelled, an OperationCanceledException or TaskCanceledException will be thrown.

You can easily handle these exceptions using an ExceptionFilter, applied to the action or controller directly, or alternatively applied globally. The response won't be sent to the user's browser, so this isn't essential, but you can use it to tidy up your logs, and short-circuit the pipeline in as efficient a manner as possible.

Thanks to @purekrome for requesting this post and even providing the code outline!


Darrel Miller: HTTP Pattern Index

When building HTTP based applications we are limited to a small set of HTTP methods in order to achieve the goals of our application. Once our needs go beyond simple CRUD style manipulation of resource representations, we need to be a little more creative in the way we manipulate resources in order to achieve more complex goals.

The following patterns are based on scenarios that I myself have used in production applications, or I have seen others implement. These patterns are language agnostic, domain agnostic and to my knowledge, exist within the limitations of the REST constraints.


Alias: A resource designed to provide a logical identifier but without being responsible for incurring the costs of transferring the representation bytes.
Action (coming soon): A processing resource used to convey a client's intent to invoke some kind of unsafe action on a secondary resource.
Bouncer: A resource designed to accept a request body containing complex query parameters and redirect to a new location, to enable the results of complex and expensive queries to be cached.
Builder (coming soon): A builder resource is much like a factory resource in that it is used to create another resource; however, a builder is a transient resource that enables idempotent creation and allows the client to specify values that cannot change over the lifetime of the created resource.
Bucket: A resource used to indicate the status of a "child" resource.
Discovery: A resource used to provide a client with the information it needs to be able to access other resources.
Factory: A factory resource is one that is used to create another resource.
Miniput: A resource designed to enable partial updates to another resource.
Progress: A progress resource is usually a temporary resource, created automatically by the server, that provides status on some long-running process initiated by a client.
Sandbox (coming soon): A processing resource that is paired with a regular resource to enable making "what-if" style updates and seeing what the results would have been if applied against the regular resource.
Toggle (coming soon): A resource that has two distinct states and can easily be switched between those states.
Whackamole: A type of resource that, when deleted, re-appears as a different resource.
Window (coming soon): A resource that provides access to a subset of a larger set of information through the use of parameters that filter, project and zoom information from the complete set.


Andrew Lock: The SDK 'Microsoft.Net.Sdk.Web' specified could not be found

The SDK 'Microsoft.Net.Sdk.Web' specified could not be found

This article describes why you get the error "The SDK 'Microsoft.Net.Sdk.Web' specified could not be found" when creating a new project in Visual Studio 2017 15.3, which prevents the project from loading, and how to fix it.

tl;dr: I had a rogue global.json sitting in a parent folder that was tying the SDK version to 1.X. Removing that (or adding a global.json for 2.x) fixed the problem.

Update: Shortly after publishing this post, I noticed a tweet from Patrik who was getting a similar error, but for a different situation. He had installed the VS 2017 15.3 update, and could no longer open ASP.NET Core 1.1 projects!

It turns out he'd uncovered the root of the problem, and the issue I was having - VS 2017 update 3 is incompatible with the 1.0.0 SDK:

Kudos to him for figuring it out!

2.0 all the things

As I'm sure anyone who's reading this is aware, Microsoft released the final version of .NET Standard 2.0, .NET Core 2.0, and ASP.NET Core 2.0 yesterday. These bring a huge number of changes, perhaps the most important being the massive increase in API surface brought by .NET Standard 2.0, which will make porting applications to .NET Core much easier.

As part of the release, Microsoft also released Visual Studio 2017 update 3. This also has a bunch of features, but most importantly it supports .NET Core 2.0. Before this point, if you wanted to play with the .NET Core 2.0 bits you had to install the preview version of Visual Studio.

That's no longer as scary as it once was, with VS's new lightweight installer and side-by-side installs. But I've been burned one too many times, and just didn't feel like risking having to pave my machine, so I decided to hold off on the preview version. That didn't stop me playing with the preview bits of course - thanks to OmniSharp, developing in VS Code with the CLI is almost as good, and JetBrains Rider went RTM a couple of weeks ago.

Still, I was excited to play with 2.0 on my home turf, in Visual Studio, so I:

  • Opened up the Visual Studio Installer program - This should force VS to check for updates, instead of waiting for it to notice that an update was available. It still took a little while (10 mins) for 15.3 to show up, but I clicked the update button as soon as it did

  • Installed the .NET Core 2.0 SDK from here - You have to do this step separately at the moment. Once this is installed, the .NET Core 2.0 templates will light up in Visual Studio.

With both of these installed I decided on a quick test to make sure everything was running smoothly. I'd create a basic app using new 2.0 templates.

Creating a new ASP.NET Core 2.0 web app

The File > New Project experience is pretty much the same in ASP.NET Core 2.0, but there are some additional templates available after you choose ASP.NET Core Web Application. If you switch the framework version to ASP.NET Core 2.0, you'll see some new templates appear, including SPA templates for Angular and React.js:

[Screenshot: the New ASP.NET Core Web Application dialog showing the 2.0 templates]

I left everything at the defaults - no Docker support enabled, no authentication - and selected Web Application (Model-View-Controller).

Note that the templates have been renamed a little. The Web Application template creates a new project using Razor pages, while the Web Application (Model-View-Controller) template creates a project using separate controllers.

Click OK, and wait for the template to scaffold… and …

[Screenshot: the error dialog shown when creating the project]

Oh dear. What's going on here?

The SDK 'Microsoft.Net.Sdk.Web' specified could not be found

So, there was clearly a problem creating the solution. My first thought was that it was a bug in the new VS 2017 update. A little odd, seeing as no one else on Twitter seemed to have mentioned it, but not overly surprising given it had just been released. I should expect some kinks, right?

A quick googling for the error turned up this issue, but that seemed to suggest the error was an old one that had been fixed.

I gave it a second go, but sure enough, the same error occurred. Clicking OK left me with a solution with no projects.

[Screenshot: Solution Explorer showing a solution with no projects]

The project template was created successfully on disk, so I thought, why not just add it to the solution directly: Right Click on the solution file Add > Existing Project?

[Screenshot: the same SDK error when adding the existing project]

Hmmm, so definitely something significantly wrong here...

Check your global.json

The error was complaining that the SDK was not found. Why would that happen? I had definitely installed the .NET Core 2.0 SDK, and VS could definitely see it, as it had shown me the 2.0 templates.

It was at this point I had an epiphany. A while back, when experimenting with a variety of preview builds, I had recurring issues when I would switch back and forth between preview projects and ASP.NET Core 1.0 projects.

To get round the problem, I created a sub folder in my Repos folder for preview builds, and dropped a global.json into the folder for the newer SDK, and placed the following global.json in the root of my Repos folder:

{
  "sdk": {
    "version": "1.0.0"
  }
}

Any time I created a project in the Previews folder, it would use the preview SDK, but a project created anywhere else would use the stable 1.0.0 SDK. This was the root of my problem.

I was trying to create an ASP.NET Core 2.0 project in a folder tied to the 1.0.0 SDK. That older SDK doesn't support the new 2.0 projects, so VS was borking when it tried to load the project.

The simple fix was to either delete the global.json entirely (the highest SDK version will be used in that case), or update it to 2.0.0.

In general, you can always use the latest version of the SDK to build your projects. The 2.0.0 SDK can be used to build 1.0.0 projects.
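For example, the updated global.json pinning the SDK to 2.0.0 would look like this (the version number needs to match an SDK you actually have installed):

```json
{
  "sdk": {
    "version": "2.0.0"
  }
}
```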

After updating the global.json, VS was able to add the existing project, and to create new projects with no issues.

[Screenshot: the project loading successfully in Visual Studio]

Summary

I was running into an issue where creating a new ASP.NET Core 2.0 project was giving me an error The SDK 'Microsoft.Net.Sdk.Web' specified could not be found, and leaving me unable to open the project in Visual Studio. The problem was the project was created in a folder that contained a global.json file, tying the SDK version to 1.0.0.

Deleting the global.json, or updating it to 2.0, fixed the issue. Be sure to check parent folders too - if any parent folder contains a global.json, the SDK version specified in the "closest" folder will be used.


Damien Bowden: Angular Configuration using ASP.NET Core settings

This post shows how ASP.NET Core application settings can be used to configure an Angular application. ASP.NET Core provides excellent support for different configuration per environment, and so using this for an Angular application can be very useful. Using CI, one release build can be automatically created with different configurations, instead of different release builds per deployment target.

Code: https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow/tree/master/src/AngularClient

ASP.NET Core Hosting application

The ClientAppSettings class is used to load the strongly typed settings from the appsettings.json file. The class contains the properties required for OIDC configuration in the SPA, and the required API URLs. These properties have different values per deployment, so we do not want to add them in a TypeScript file, or change them with each build.

namespace AngularClient.ViewModel
{
    public class ClientAppSettings
    {
        public string  stsServer { get; set; }
        public string redirect_url { get; set; }
        public string client_id { get; set; }
        public string response_type { get; set; }
        public string scope { get; set; }
        public string post_logout_redirect_uri { get; set; }
        public bool start_checksession { get; set; }
        public bool silent_renew { get; set; }
        public string startup_route { get; set; }
        public string forbidden_route { get; set; }
        public string unauthorized_route { get; set; }
        public bool log_console_warning_active { get; set; }
        public bool log_console_debug_active { get; set; }
        public string max_id_token_iat_offset_allowed_in_seconds { get; set; }
        public string apiServer { get; set; }
        public string apiFileServer { get; set; }
    }
}

The appsettings.json file contains the actual values which will be used for each different environment.

{
  "ClientAppSettings": {
    "stsServer": "https://localhost:44318",
    "redirect_url": "https://localhost:44311",
    "client_id": "angularclient",
    "response_type": "id_token token",
    "scope": "dataEventRecords securedFiles openid profile",
    "post_logout_redirect_uri": "https://localhost:44311",
    "start_checksession": false,
    "silent_renew": false,
    "startup_route": "/dataeventrecords",
    "forbidden_route": "/forbidden",
    "unauthorized_route": "/unauthorized",
    "log_console_warning_active": true,
    "log_console_debug_active": true,
    "max_id_token_iat_offset_allowed_in_seconds": 10,
    "apiServer": "https://localhost:44390/",
    "apiFileServer": "https://localhost:44378/"
  }
}

The ClientAppSettings class is then added to the IoC in the ASP.NET Core Startup class and the ClientAppSettings section is used to fill the instance with data.

public void ConfigureServices(IServiceCollection services)
{
  services.Configure<ClientAppSettings>(Configuration.GetSection("ClientAppSettings"));
  services.AddMvc();

An MVC controller is used to make the settings public. This class gets the strongly typed settings from the IoC container and returns them in an HTTP GET request. No application secrets should be included in this HTTP GET request!

using AngularClient.ViewModel;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Options;

namespace AngularClient.Controllers
{
    [Route("api/[controller]")]
    public class ClientAppSettingsController : Controller
    {
        private readonly ClientAppSettings _clientAppSettings;

        public ClientAppSettingsController(IOptions<ClientAppSettings> clientAppSettings)
        {
            _clientAppSettings = clientAppSettings.Value;
        }

        [HttpGet]
        public IActionResult Get()
        {
            return Ok(_clientAppSettings);
        }
    }
}

Configuring the Angular application

The Angular application needs to read the settings and use them in the client application. A configClient function is used to GET the data from the server. The APP_INITIALIZER could also be used, but as the settings are being used in the main AppModule, you would still have to wait for the HTTP GET request to complete.

configClient() {

	// console.log('window.location', window.location);
	// console.log('window.location.href', window.location.href);
	// console.log('window.location.origin', window.location.origin);

	return this.http.get(window.location.origin + window.location.pathname + '/api/ClientAppSettings').map(res => {
		this.clientConfiguration = res.json();
	});
}

In the constructor of the AppModule, the module subscribes to the configClient function. Here the configuration values are read and the properties are set as required for the SPA application.

clientConfiguration: any;

constructor(public oidcSecurityService: OidcSecurityService, private http: Http, private configuration: Configuration) {

	console.log('APP STARTING');
	this.configClient().subscribe(config => {

		let openIDImplicitFlowConfiguration = new OpenIDImplicitFlowConfiguration();
		openIDImplicitFlowConfiguration.stsServer = this.clientConfiguration.stsServer;
		openIDImplicitFlowConfiguration.redirect_url = this.clientConfiguration.redirect_url;
		openIDImplicitFlowConfiguration.client_id = this.clientConfiguration.client_id;
		openIDImplicitFlowConfiguration.response_type = this.clientConfiguration.response_type;
		openIDImplicitFlowConfiguration.scope = this.clientConfiguration.scope;
		openIDImplicitFlowConfiguration.post_logout_redirect_uri = this.clientConfiguration.post_logout_redirect_uri;
		openIDImplicitFlowConfiguration.start_checksession = this.clientConfiguration.start_checksession;
		openIDImplicitFlowConfiguration.silent_renew = this.clientConfiguration.silent_renew;
		openIDImplicitFlowConfiguration.startup_route = this.clientConfiguration.startup_route;
		openIDImplicitFlowConfiguration.forbidden_route = this.clientConfiguration.forbidden_route;
		openIDImplicitFlowConfiguration.unauthorized_route = this.clientConfiguration.unauthorized_route;
		openIDImplicitFlowConfiguration.log_console_warning_active = this.clientConfiguration.log_console_warning_active;
		openIDImplicitFlowConfiguration.log_console_debug_active = this.clientConfiguration.log_console_debug_active;
		openIDImplicitFlowConfiguration.max_id_token_iat_offset_allowed_in_seconds = this.clientConfiguration.max_id_token_iat_offset_allowed_in_seconds;

		this.oidcSecurityService.setupModule(openIDImplicitFlowConfiguration);

		configuration.FileServer = this.clientConfiguration.apiFileServer;
		configuration.Server = this.clientConfiguration.apiServer;
	});
}

The Configuration class can then be used throughout the SPA application.

import { Injectable } from '@angular/core';

@Injectable()
export class Configuration {
    public Server = 'read from app settings';
    public FileServer = 'read from app settings';
}

I am certain there is a better way to do the Angular configuration, but not much information exists for this. APP_INITIALIZER is not so well documented. Angular CLI has its own solution, but the configuration file cannot be read per environment.

Links:

https://docs.microsoft.com/en-us/aspnet/core/fundamentals/configuration

https://docs.microsoft.com/en-us/aspnet/core/fundamentals/environments

https://www.intertech.com/Blog/deploying-angular-4-apps-with-environment-specific-info/

https://stackoverflow.com/questions/43193049/app-settings-the-angular-4-way

https://damienbod.com/2015/10/11/asp-net-5-multiple-configurations-without-using-environment-variables/



Andrew Lock: Introduction to the ApiExplorer in ASP.NET Core

Introduction to the ApiExplorer in ASP.NET Core

One of the standard services added when you call AddMvc() or AddMvcCore() in an ASP.NET Core MVC application is the ApiExplorer. In this post I'll show a quick example of its capabilities, and give you a taste of the metadata you can obtain about your application.

Exposing your application's API with the ApiExplorer

The ApiExplorer contains functionality for discovering and exposing metadata about your MVC application. You can use it to provide details such as a list of controllers and actions, their URLs and allowed HTTP methods, parameters and response types.

How you choose to use these details is up to you - you could use it to auto-generate documentation, help pages, or clients for your application. The Swagger and Swashbuckle.AspNetCore frameworks use the ApiExplorer functionality to provide a fully featured documentation framework, and are well worth a look if that's what you're after.

For this article, I'll hook directly into the ApiExplorer to generate a simple help page for a basic Web API controller.

Introduction to the ApiExplorer in ASP.NET Core

Adding the ApiExplorer to your applications

The ApiExplorer functionality is part of the Microsoft.AspNetCore.Mvc.ApiExplorer package. This package is referenced by default when you include the Microsoft.AspNetCore.Mvc package in your application, so you generally won't need to add the package explicitly. If you are starting from a stripped down application, you can add the package directly to make the services available.

As Steve Gordon describes in his series on the MVC infrastructure, the call to AddMvc in Startup.ConfigureServices automatically adds the ApiExplorer services to your application by calling services.AddApiExplorer(), so you don't need to explicitly add anything else to your Startup class.

The call to AddApiExplorer, in turn, calls an internal method, AddApiExplorerServices(), which adds the actual services you will use in your application:

internal static void AddApiExplorerServices(IServiceCollection services)  
{
    services.TryAddSingleton<IApiDescriptionGroupCollectionProvider, ApiDescriptionGroupCollectionProvider>();
    services.TryAddEnumerable(
        ServiceDescriptor.Transient<IApiDescriptionProvider, DefaultApiDescriptionProvider>());
}

This adds a default implementation of IApiDescriptionGroupCollectionProvider, which exposes the API endpoints of your application. To access the list of APIs, you just need to inject the service into your controllers or services.

Listing your application's metadata

For this app, we'll just include the default ValuesController that is added to the default web API project:

[Route("api/[controller]")]
public class ValuesController : Controller  
{
    // GET api/values
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2" };
    }
}

In addition, we'll create a simple controller called DocumentationController that renders details about the Web API endpoints in your application to a Razor page, as you saw earlier.

First, inject the IApiDescriptionGroupCollectionProvider into the controller. For simplicity, we'll just return this directly as the model to the Razor view page - we'll decompose the details it provides in the Razor page.

public class DocumentationController : Controller  
{
    private readonly IApiDescriptionGroupCollectionProvider _apiExplorer;
    public DocumentationController(IApiDescriptionGroupCollectionProvider apiExplorer)
    {
        _apiExplorer = apiExplorer;
    }

    public IActionResult Index()
    {
        return View(_apiExplorer);
    }
}

The provider exposes a collection of ApiDescriptionGroups, each of which contains a collection of ApiDescriptions. You can think of an ApiDescriptionGroup as a controller, and an ApiDescription as an action method.

The ApiDescription contains a wealth of information about the action method - parameters, the URL, the type of media that can be returned - basically everything you might want to know about an API!
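You can consume this metadata from plain code as well as from a view. As a rough sketch (my own example, not from the original post), the following service flattens the groups into simple "VERB /path" strings, which you could log at startup or expose however you like:

```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc.ApiExplorer;

// Hypothetical helper, not part of the original post:
// flattens the ApiExplorer metadata into "VERB /path" strings.
public class ApiEndpointLister
{
    private readonly IApiDescriptionGroupCollectionProvider _apiExplorer;

    public ApiEndpointLister(IApiDescriptionGroupCollectionProvider apiExplorer)
    {
        _apiExplorer = apiExplorer;
    }

    public IEnumerable<string> ListEndpoints()
    {
        // Each group corresponds (roughly) to a controller...
        foreach (var group in _apiExplorer.ApiDescriptionGroups.Items)
        {
            // ...and each ApiDescription to an action method
            foreach (var api in group.Items)
            {
                yield return $"{api.HttpMethod} {api.RelativePath}";
            }
        }
    }
}
```

Register this as a service and it will pick up the same metadata the Razor page uses.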

The Razor page below lists out all the APIs that are exposed in the application. There's a slightly overwhelming amount of detail here, but it lists everything you might need to know!

@using Microsoft.AspNetCore.Mvc.ApiExplorer;
@model IApiDescriptionGroupCollectionProvider

<div id="body">  
    <section class="featured">
        <div class="content-wrapper">
            <hgroup class="title">
                <h1>ASP.NET Web API Help Page</h1>
            </hgroup>
        </div>
    </section>
    <section class="content-wrapper main-content clear-fix">
        <h3>API Groups, version @Model.ApiDescriptionGroups.Version</h3>
        @foreach (var group in Model.ApiDescriptionGroups.Items)
            {
            <h4>@group.GroupName</h4>
            <ul>
                @foreach (var api in group.Items)
                {
                    <li>
                        <h5>@api.HttpMethod @api.RelativePath</h5>
                        <blockquote>
                            @if (api.ParameterDescriptions.Count > 0)
                            {
                                <h6>Parameters</h6>
                                    <dl class="dl-horizontal">
                                        @foreach (var parameter in api.ParameterDescriptions)
                                        {
                                            <dt>Name</dt>
                                            <dd>@parameter.Name (@parameter.Source.Id)</dd>
                                            <dt>Type</dt>
                                            <dd>@parameter.Type?.FullName</dd>
                                            @if (parameter.RouteInfo != null)
                                            {
                                                <dt>Constraints</dt>
                                                <dd>@string.Join(",", parameter.RouteInfo.Constraints?.Select(c => c.GetType().Name).ToArray())</dd>
                                                <dt>DefaultValue</dt>
                                                <dd>@parameter.RouteInfo.DefaultValue</dd>
                                                <dt>Is Optional</dt>
                                                <dd>@parameter.RouteInfo.IsOptional</dd>
                                            }
                                        }
                                    </dl>
                            }
                            else
                            {
                                <i>No parameters</i>
                            }
                        </blockquote>
                        <blockquote>
                            <h6>Supported Response Types</h6>
                            <dl class="dl-horizontal">
                                @foreach (var response in api.SupportedResponseTypes)
                                {
                                    <dt>Status Code</dt>
                                        <dd>@response.StatusCode</dd>

                                        <dt>Response Type</dt>
                                        <dd>@response.Type?.FullName</dd>

                                        @foreach (var responseFormat in response.ApiResponseFormats)
                                        {
                                            <dt>Formatter</dt>
                                            <dd>@responseFormat.Formatter?.GetType().FullName</dd>
                                            <dt>Media Type</dt>
                                            <dd>@responseFormat.MediaType</dd>
                                        }
                                }
                            </dl>

                        </blockquote>
                    </li>
                }
            </ul>
        }
    </section>
</div>  

If you run the application now, you might be slightly surprised by the response:

Introduction to the ApiExplorer in ASP.NET Core

Even though we have the default ValuesController in the project, apparently there are no APIs!

Enabling documentation of your controllers

By default, controllers in ASP.NET Core are not included in the ApiExplorer. There are a whole host of attributes you can apply to customise the metadata produced by the ApiExplorer, but the critical one here is [ApiExplorerSettings].

By applying this attribute to a controller, you can control whether or not it is included in the API, as well as its name in the ApiExplorer:

[Route("api/[controller]")]
[ApiExplorerSettings(IgnoreApi = false, GroupName = nameof(ValuesController))]
public class ValuesController : Controller  
{
    // GET api/values
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2" };
    }

    // other action methods
}

After applying this attribute, and viewing the groups in IApiDescriptionGroupCollectionProvider you can see that the API is now available:

Introduction to the ApiExplorer in ASP.NET Core

ApiExplorer and conventional routing

Note, you can only apply the [ApiExplorerSettings] attribute to controllers and actions that use attribute routing. If you enable the ApiExplorer on an action that uses conventional routing, you will be greeted with an error like the following:

Introduction to the ApiExplorer in ASP.NET Core

Remember, ApiExplorer really is just for your APIs! If you stick to the convention of using attribute routing for your Web API controllers and conventional routing for your MVC controllers you'll be fine, but it's just something to be aware of.

Summary

This was just a brief introduction to the ApiExplorer functionality that exposes a variety of metadata about the Web APIs in your application. You're unlikely to use it quite like this, but it's interesting to see all the introspection options available to you.


Andrew Lock: How to format response data as XML or JSON, based on the request URL in ASP.NET Core

How to format response data as XML or JSON, based on the request URL in ASP.NET Core

I think it's safe to say that most ASP.NET Core applications that use a Web API return data as JSON. What with JavaScript in the browser, and JSON parsers everywhere you look, this makes perfect sense. Consequently, ASP.NET Core is very much geared towards JSON, but it is perfectly possible to return data in other formats (for example Damien Bowden recently added a Protobuf formatter to the WebApiContrib.Core project).

In this post, I'm going to focus on a very specific scenario. You want to be able to return data from a Web API action method in one of two different formats - JSON or XML, and you want to control which format is used by the extension of the URL. For example /api/Values.xml should format the result as XML, while /api/Values.json should format the result as JSON.

Using the FormatFilterAttribute to read the format from the URL

Out of the box, if you use the standard MVC service configuration by calling services.AddMvc(), the JSON formatters are configured for your application by default. All that you need to do is tell your action method to read the format from the URL using the FormatFilterAttribute.

You can add the [FormatFilter] attribute to your action methods, to indicate that the output format will be defined by the URL. The FormatFilter looks for a route parameter called format in the RouteData for the request, or in the querystring. If you want to use the .json approach I described earlier, you should make sure the route template for your actions includes a .{format} parameter:

public class ValuesController : Controller  
{
    // GET api/values
    [HttpGet("api/values.{format}"), FormatFilter]
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2" };
    }
}

Note: You can make the .format suffix optional using the syntax .{format?}, but you need to make sure the . follows a route parameter, e.g. api/values/{id}.{format?}. If you try to make the format optional in the example above (api/values.{format?}) you'll get a server error. A bit odd, but there you go…
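For reference, a variant with the optional suffix that does work might look like this (my own sketch, not from the original post):

```csharp
public class ValuesController : Controller
{
    // GET api/values/1, api/values/1.json or api/values/1.xml
    // The '.' must follow a route parameter ({id} here) for the optional
    // {format?} to work - api/values.{format?} errors at startup.
    [HttpGet("api/values/{id}.{format?}"), FormatFilter]
    public string Get(int id)
    {
        return $"value{id}";
    }
}
```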

With the route template updated, and the [FormatFilter] applied to the method, we can now test our JSON formatters:

How to format response data as XML or JSON, based on the request URL in ASP.NET Core

Success - we have returned JSON when requested! Let's give it a try with the xml suffix:

How to format response data as XML or JSON, based on the request URL in ASP.NET Core

Doh, no such luck. As I mentioned earlier, the JSON formatters are registered by default; if we want to return XML then we'll need to configure the XML formatters too.

Adding the XML formatters

In ASP.NET Core, everything is highly modular, so you only add the functionality you need to your application. Consequently, there's a separate NuGet package for the XML formatters that you need to add to your .csproj file - Microsoft.AspNetCore.Mvc.Formatters.Xml

<PackageReference Include="Microsoft.AspNetCore.Mvc.Formatters.Xml" Version="1.1.3" />  

Note: If you're using ASP.NET Core 2.0, this package is included by default as part of the Microsoft.AspNetCore.All metapackage.

Adding the package to your project lights up an extension method on the IMvcBuilder instance returned by the call to services.AddMvc(). The AddXmlSerializerFormatters() method adds both input and output formatters, so you can serialise objects to and from XML.

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMvc()
        .AddXmlSerializerFormatters();
}

Alternatively, if you only want to be able to format results as XML, but don't need to be able to read XML from a request body, you can just add the output formatter instead:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMvc(options =>
    {
        options.OutputFormatters.Add(new XmlSerializerOutputFormatter());
    });
}

By adding this output formatter we can now format objects as XML. However, if you test the XML URL again at this point, you'll still get the same 404 response as we did before. What gives?

Registering a type mapping for the format suffix

By registering the XML formatters, we now have the ability to format XML. However, the FormatFilter doesn't know how to handle the .xml suffix we're using in the request URL. To make this work, we need to tell the filter that the xml suffix maps to the application/xml MIME type.

You can register new type mappings by configuring the FormatterMappings options when you call AddMvc(). These define the mappings between the {format} parameter and the MIME type that the FormatFilter will use. For example:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMvc(options =>
    {
        options.FormatterMappings.SetMediaTypeMappingForFormat
            ("xml", MediaTypeHeaderValue.Parse("application/xml"));
        options.FormatterMappings.SetMediaTypeMappingForFormat
            ("config", MediaTypeHeaderValue.Parse("application/xml"));
        options.FormatterMappings.SetMediaTypeMappingForFormat
            ("js", MediaTypeHeaderValue.Parse("application/json"));
    })
        .AddXmlSerializerFormatters();
}

The FormatterMappings property contains a dictionary of all the suffix to MIME type mappings. You can add new ones using the SetMediaTypeMappingForFormat method, passing the suffix as the key and the MIME type as the value.

In the example above I've actually registered three new mappings: the xml and config suffixes both map to XML, and a new js suffix maps to JSON, just to demonstrate that JSON isn't actually anything special here!

With this last piece of configuration in place, we can now finally request XML by using the .xml or .config suffix in our URLs:

How to format response data as XML or JSON, based on the request URL in ASP.NET Core

Summary

In this post you saw how to use the FormatFilter to specify the desired output format by using a file-type suffix on your URLs. To do so, there were four steps:

  1. Add the [FormatFilter] attribute to your action method
  2. Ensure the route to the action contains a {format} route parameter (or pass it in the querystring e.g. ?format=xml)
  3. Register the output formatters you wish to support with MVC. To add both input and output XML formatters, use the AddXmlSerializerFormatters() extensions method
  4. Register a new type mapping between a format suffix and a MIME type on the MvcOptions object. For example, you could add XML using:
options.FormatterMappings.SetMediaTypeMappingForFormat(  
    "xml", MediaTypeHeaderValue.Parse("application/xml"));

If a type mapping is not configured for a suffix, then you'll get a 404 Not Found response when calling the action.


Anuraj Parameswaran: Running PHP on .NET Core with Peachpie

This post is about running PHP on .NET Core with Peachpie. Peachpie is an open source PHP Compiler to .NET. This innovative compiler allows you to run existing PHP applications with the performance, speed, security and interoperability of .NET.


Andrew Lock: Customising ASP.NET Core Identity EF Core naming conventions for PostgreSQL

Customising ASP.NET Core Identity EF Core naming conventions for PostgreSQL

ASP.NET Core Identity is an authentication and membership system that lets you easily add login functionality to your ASP.NET Core application. It is designed in a modular fashion, so you can use any "stores" for users and claims that you like, but out of the box it uses Entity Framework Core to store the entities in a database.

By default, EF Core uses naming conventions for the database entities that are typical for SQL Server. In this post I'll describe how to configure your ASP.NET Core Identity app to replace the database entity names with conventions that are more common to PostgreSQL.

I'm focusing on ASP.NET Core Identity here, where the entity table name mappings have already been defined, but there's actually nothing specific to ASP.NET Core Identity in this post. You can just as easily apply this post to EF Core in general, and use more PostgreSQL-friendly conventions for all your EF Core code. See here for the tl;dr code!

Moving to PostgreSQL as a SQL Server aficionado

ASP.NET Core Identity can use any database provider that is supported by EF Core - some of which are provided by Microsoft, others are third-party or open source components. If you use the templates that come with the .NET CLI via dotnet new, you can choose SQL Server or SQLite by default. Personally, I've been working more and more with PostgreSQL, the powerful cross-platform, open source database.

As someone who's familiar with SQL Server, one of the biggest differences that can bite you when you start working with PostgreSQL is that table and column names are case sensitive! This certainly takes some getting used to and, frankly, is a royal pain in the arse to work with if you stick to your old habits. If a table is created with uppercase characters in the table or column name, then you have to ensure you get the case right, and wrap the identifiers in double quotes, as I'll show shortly.

This is unfortunate when you come from a SQL Server world, where camel-case is the norm for table and column names. For example, imagine you have a table called AspNetUsers, and you want to retrieve the Id, Email and EmailConfirmed fields:

Customising ASP.NET Core Identity EF Core naming conventions for PostgreSQL

To query this table in PostgreSQL, you'd have to do something like:

SELECT "Id", "Email", "EmailConfirmed" FROM "AspNetUsers"  

Notice the quote marks we need? This only gets worse when you need to escape the quotes because you're calling from the command line, or defining a SQL query in a C# string, for example:

$ psql -d DatabaseWithCaseIssues -c "SELECT \"Id\", \"Email\", \"EmailConfirmed\" FROM \"AspNetUsers\" "

Clearly nobody wants to be dealing with this. Instead, the convention in PostgreSQL is to use snake_case for database objects rather than CamelCase.

snake_case > CamelCase in PostgreSQL

Snake case uses lowercase for all of the identifiers, and instead of using capitals to demarcate words, it uses an underscore, _. This is perfect for PostgreSQL, as it neatly avoids the case issue. If we could rename our entity table name to asp_net_users, and the corresponding fields to id, email and email_confirmed, then we'd neatly side-step the quoting issue:

Customising ASP.NET Core Identity EF Core naming conventions for PostgreSQL

This makes the PostgreSQL queries way simpler, especially when you would otherwise need to escape the quote marks:

$ psql -d DatabaseWithCaseIssues -c "SELECT id, email, email_confirmed FROM asp_net_users"

If you're using EF Core, then theoretically none of this would matter to you. The whole point is that you don't have to write SQL code yourself, and you can just let the underlying framework generate the necessary queries. If you use CamelCase names, then the EF Core PostgreSQL database provider will happily escape all the entity names for you.

Unfortunately, reality is a pesky beast. It's just a matter of time before you find yourself wanting to write some sort of custom query directly against the database to figure out what's going on. More often than not, if it comes to this, it's because there's an issue in production and you're trying to figure out what went wrong. The last thing you need at this stressful time is to be messing with casing issues!

Consequently, I like to ensure my database tables are easy to query, even if I'll be using EF Core or some other ORM 99% of the time.

EF Core conventions and ASP.NET Core Identity

ASP.NET Core Identity takes care of many aspects of the identity and membership system of your app for you. In particular, it creates and manages the application user, claim and role entities for you, as well as a variety of entities related to third-party logins:

Customising ASP.NET Core Identity EF Core naming conventions for PostgreSQL

If you're using the EF Core package for ASP.NET Core Identity, these entities are added to an IdentityDbContext, and configured within the OnModelCreating method. If you're interested, you can view the source online - I've shown a partial definition below that just includes the configuration for the Users property, which represents the users of your app:

public abstract class IdentityDbContext<TUser>  
{
    public DbSet<TUser> Users { get; set; }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        builder.Entity<TUser>(b =>
        {
            b.HasKey(u => u.Id);
            b.HasIndex(u => u.NormalizedUserName).HasName("UserNameIndex").IsUnique();
            b.HasIndex(u => u.NormalizedEmail).HasName("EmailIndex");
            b.ToTable("AspNetUsers");
        });
        // additional configuration
    }
}

The IdentityDbContext uses the OnModelCreating method to configure the database schema. In particular, it defines the name of the user table to be "AspNetUsers" and sets the name of a number of indexes. The column names of the entities default to their C# property values, so they would also be CamelCased.

In your application, you would typically derive your own DbContext from the IdentityDbContext<>, and inherit all of the schema associated with ASP.NET Core Identity. In the example below I've done this, and specified TUser type for the application to be ApplicationUser:

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>  
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    { }
}

With the configuration above, the database schema would use all of the default values, including the table names, and would give the database schema we saw previously. Luckily, we can override these values and replace them with our snake case values instead.

Replacing specific values with snake case

As is often the case, there are multiple ways to achieve our desired behaviour of mapping to snake case properties. The simplest conceptually is to just overwrite the values specified in IdentityDbContext.OnModelCreating() with new values. The later values will be used to generate the database schema. We simply override the OnModelCreating() method, call the base method, and then replace the values with our own:

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>  
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    { }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        base.OnModelCreating(builder);

        builder.Entity<ApplicationUser>(b =>
        {
            b.HasKey(u => u.Id);
            b.HasIndex(u => u.NormalizedUserName).HasName("user_name_index").IsUnique();
            b.HasIndex(u => u.NormalizedEmail).HasName("email_index");
            b.ToTable("asp_net_users");
        });
        // additional configuration
    }
}

Unfortunately, there's a problem with this. EF Core uses conventions to set the names for entities and properties where you don't explicitly define their schema name. In the example above, we didn't define the property names, so they will be CamelCase by default.

If we want to override these, then we need to add additional configuration for each entity property:

b.Property(u => u.EmailConfirmed).HasColumnName("email_confirmed");

Every. Single. Property.

Talk about laborious and fragile…

Clearly we need another way. Instead of trying to explicitly replace each value, we can use a different approach, which essentially creates alternative conventions based on the existing ones.

Replacing the default conventions with snake case

The ModelBuilder instance that is passed to the OnModelCreating() method contains all the details of the database schema that will be created. By default, the database object names will all be CamelCased.

By overriding the OnModelCreating method, you can loop through each table, column, foreign key and index, and replace the existing value with its snake case equivalent. The following example shows how you can do this for every entity in the EF Core model. The ToSnakeCase() extension method (shown shortly) converts a camel case string to a snake case string.

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>  
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    { }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        base.OnModelCreating(builder);

        foreach(var entity in builder.Model.GetEntityTypes())
        {
            // Replace table names
            entity.Relational().TableName = entity.Relational().TableName.ToSnakeCase();

            // Replace column names            
            foreach(var property in entity.GetProperties())
            {
                property.Relational().ColumnName = property.Name.ToSnakeCase();
            }

            foreach(var key in entity.GetKeys())
            {
                key.Relational().Name = key.Relational().Name.ToSnakeCase();
            }

            foreach(var key in entity.GetForeignKeys())
            {
                key.Relational().Name = key.Relational().Name.ToSnakeCase();
            }

            foreach(var index in entity.GetIndexes())
            {
                index.Relational().Name = index.Relational().Name.ToSnakeCase();
            }
        }
    }
}

The ToSnakeCase() method is just a simple extension method that looks for a lower case letter or number, followed by a capital letter, and inserts an underscore. There's probably a better / more efficient way to achieve this, but it does the job!

public static class StringExtensions  
{
    public static string ToSnakeCase(this string input)
    {
        if (string.IsNullOrEmpty(input)) { return input; }

        var startUnderscores = Regex.Match(input, @"^_+");
        return startUnderscores + Regex.Replace(input, @"([a-z0-9])([A-Z])", "$1_$2").ToLower();
    }
}
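As a quick sanity check, here's the same extension in a self-contained program, with a few example inputs of my own:

```csharp
using System;
using System.Text.RegularExpressions;

public static class StringExtensions
{
    // Same implementation as above: insert an underscore between a
    // lowercase letter/digit and the following capital, then lowercase
    public static string ToSnakeCase(this string input)
    {
        if (string.IsNullOrEmpty(input)) { return input; }

        var startUnderscores = Regex.Match(input, @"^_+");
        return startUnderscores + Regex.Replace(input, @"([a-z0-9])([A-Z])", "$1_$2").ToLower();
    }
}

public static class Program
{
    public static void Main()
    {
        Console.WriteLine("AspNetUsers".ToSnakeCase());     // asp_net_users
        Console.WriteLine("EmailConfirmed".ToSnakeCase());  // email_confirmed
        Console.WriteLine("UserNameIndex".ToSnakeCase());   // user_name_index
    }
}
```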

These conventions will replace all the database object names with snake case values, but there's one table that won't be modified: the migrations history table. This is defined when you call UseNpgsql() or UseSqlServer(), and by default is called __EFMigrationsHistory. You'll rarely need to query it outside of migrations, so I won't worry about it for now.
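That said, if you do want the history table to follow the same convention, the relational providers let you rename it when configuring the context. A sketch assuming the Npgsql provider (the snake case table name and connection string name here are my own choices, not from the original post):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddDbContext<ApplicationDbContext>(options =>
        options.UseNpgsql(
            Configuration.GetConnectionString("DefaultConnection"),
            // MigrationsHistoryTable renames the table EF Core uses to
            // track applied migrations (default: __EFMigrationsHistory)
            npgsql => npgsql.MigrationsHistoryTable("__ef_migrations_history")));
}
```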

With our new conventions in place, we can add the EF Core migrations for our snake case schema. If you're starting from one of the VS or dotnet new templates, delete the default migration files created by ASP.NET Core Identity:

  • 00000000000000_CreateIdentitySchema.cs
  • 00000000000000_CreateIdentitySchema.Designer.cs
  • ApplicationDbContextModelSnapshot.cs

and create a new set of migrations using:

$ dotnet ef migrations add SnakeCaseIdentitySchema

Finally, you can apply the migrations using:

$ dotnet ef database update

After the update, you can see that the database schema has been suitably updated. We have snake case table names, as well as snake case columns (you can take my word for it on the foreign keys and indexes!)

Customising ASP.NET Core Identity EF Core naming conventions for PostgreSQL

Now we have the best of both worlds - we can use EF Core for all our standard database actions, but have the option of hand-crafting SQL queries without crazy amounts of ceremony.

Note, although this article focused on ASP.NET Core Identity, it is perfectly applicable to EF Core in general.

Summary

In this post, I showed how you could modify the OnModelCreating() method so that EF Core uses snake case for database objects instead of camel case. You can look through all the entities in EF Core's model, and change the table names, column names, keys, and indexes to use snake case. For more details on the default EF Core conventions, I recommend perusing the documentation!


Andrew Lock: Building ASP.NET Core 2.0 preview 2 packages on AppVeyor

Building ASP.NET Core 2.0 preview 2 packages on AppVeyor

I was recently creating a new GitHub project and I wanted to target ASP.NET Core 2.0 preview 2. I like to use AppVeyor for the CI build and for publishing to MyGet/NuGet, as I can typically just copy and paste a single file between projects to get my standard build pipeline. Unfortunately, targeting the latest preview is easier said than done! In this post, I'll show how to update your appveyor.yml file so you can build your .NET Core preview libraries on AppVeyor.

Building .NET Core projects on AppVeyor

If you are targeting a .NET Core SDK version that AppVeyor explicitly supports, then you don't really have to do anything - a simple appveyor.yml file will handle everything for you. For example, the following is a (somewhat abridged) version I use on some of my existing projects:

version: '{build}'  
branches:  
  only:
  - master
clone_depth: 1  
nuget:  
  disable_publish_on_pr: true
build_script:  
- ps: .\Build.ps1
test: off  
artifacts:  
- path: .\artifacts\**\*.nupkg
  name: NuGet
deploy:  
- provider: NuGet
  name: production
  api_key:
    secure: xxxxxxxxxxxx
  on:
    branch: master
    appveyor_repo_tag: true

There's really nothing fancy here, most of this configuration is used to define when AppVeyor should run a build, and how to deploy the NuGet package to NuGet. There's essentially no configuration of the target environment required - the build simply calls the build.ps1 file to restore and build the project.

I've switched to using Cake for most of my projects these days, often based on a script from Muhammad Rehan Saeed. If this is your first time using AppVeyor to build your projects, I suggest you take a look at my previous post on using AppVeyor.

Unfortunately, if you try and build a .NET Core 2.0 preview 2 project with this script, you'll be out of luck. I found I got random, nondescript errors, such as this one:

Building ASP.NET Core 2.0 preview 2 packages on AppVeyor

Installing .NET Core 2.0 preview 2 in AppVeyor

Luckily, AppVeyor makes it easy to install additional dependencies before running your build script - you just add additional commands under the install node:

version: '{build}'  
pull_requests:  
  do_not_increment_build_number: true
install:  
  # Run additional commands here

The tricky part is working out exactly what to run! I couldn't find any official guidance on scripting the install, so I went hunting in some of the Microsoft GitHub repos. In particular I found the JavaScriptServices repo which manually installs .NET Core. The install node at the time of writing (for preview 1) was:

install:  
   # .NET Core SDK binaries
   - ps: $urlCurrent = "https://download.microsoft.com/download/3/7/F/37F1CA21-E5EE-4309-9714-E914703ED05A/dotnet-dev-win-x64.2.0.0-preview1-005977.exe"
   - ps: $env:DOTNET_INSTALL_DIR = "$pwd\.dotnetsdk"
   - ps: mkdir $env:DOTNET_INSTALL_DIR -Force | Out-Null
   - ps: $tempFileCurrent = [System.IO.Path]::Combine([System.IO.Path]::GetTempPath(), [System.IO.Path]::GetRandomFileName())
   - ps: (New-Object System.Net.WebClient).DownloadFile($urlCurrent, $tempFileCurrent)
   - ps: Add-Type -AssemblyName System.IO.Compression.FileSystem; [System.IO.Compression.ZipFile]::ExtractToDirectory($tempFileCurrent, $env:DOTNET_INSTALL_DIR)
   - ps: $env:Path = "$env:DOTNET_INSTALL_DIR;$env:Path"

There are a lot of commands in there. Most of them we can copy and paste, but the trickiest part is that download URL - GUIDs, really?

Luckily there's an easy way to find the URL for preview 2 - you can look at the release notes for the version of .NET Core you want to target.

The link you want is the Windows 64-bit SDK binaries. Just right-click, copy the link, and paste it into the appveyor.yml to give the final file. The full AppVeyor file from my recent CommonPasswordValidator repository is shown below:

version: '{build}'  
pull_requests:  
  do_not_increment_build_number: true
environment:  
  DOTNET_SKIP_FIRST_TIME_EXPERIENCE: true
  DOTNET_CLI_TELEMETRY_OPTOUT: true
install:  
  # Download .NET Core 2.0 Preview 2 SDK and add to PATH
  - ps: $urlCurrent = "https://download.microsoft.com/download/F/A/A/FAAE9280-F410-458E-8819-279C5A68EDCF/dotnet-sdk-2.0.0-preview2-006497-win-x64.zip"
  - ps: $env:DOTNET_INSTALL_DIR = "$pwd\.dotnetsdk"
  - ps: mkdir $env:DOTNET_INSTALL_DIR -Force | Out-Null
  - ps: $tempFileCurrent = [System.IO.Path]::GetTempFileName()
  - ps: (New-Object System.Net.WebClient).DownloadFile($urlCurrent, $tempFileCurrent)
  - ps: Add-Type -AssemblyName System.IO.Compression.FileSystem; [System.IO.Compression.ZipFile]::ExtractToDirectory($tempFileCurrent, $env:DOTNET_INSTALL_DIR)
  - ps: $env:Path = "$env:DOTNET_INSTALL_DIR;$env:Path"  
branches:  
  only:
  - master
clone_depth: 1  
nuget:  
  disable_publish_on_pr: true
build_script:  
- ps: .\Build.ps1
test: off  
artifacts:  
- path: .\artifacts\**\*.nupkg
  name: NuGet
deploy:  
- provider: NuGet
  server: https://www.myget.org/F/andrewlock-ci/api/v2/package
  api_key:
    secure: xxxxxx
  skip_symbols: true
  on:
    branch: master
- provider: NuGet
  name: production
  api_key:
    secure: xxxxxx
  on:
    branch: master
    appveyor_repo_tag: true

Now when AppVeyor runs, you can see it running the install steps before running the build script.

Using predictable download URLs

Shortly after battling with this issue, I took another look at the JavaScriptServices project, and noticed they'd switched to using nicer URLs for the SDK binaries. Instead of the horrible GUIDy URLs, you can use zip files stored on an Azure CDN. These URLs just require you to know the SDK version (including the build number).
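At the time of writing, the preview 2 SDK zip could be fetched from a URL of roughly this form (the exact host and file naming are an assumption based on the JavaScriptServices install script, and have changed between releases):

```
https://dotnetcli.azureedge.net/dotnet/Sdk/2.0.0-preview2-006497/dotnet-sdk-2.0.0-preview2-006497-win-x64.zip
```

Swapping the version segment (2.0.0-preview2-006497) for a later build number gives you the corresponding SDK zip.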

It looks like preview 2 is the first version available at this URL, but later builds are also available if you want to work with the bleeding edge.

Summary

In this post I showed how you could use the install node of an appveyor.yml file to install ASP.NET Core 2.0 preview 2 into your AppVeyor build pipeline. This lets you target preview versions of .NET Core in your build pipeline, before they're explicitly supported by AppVeyor.


Andrew Lock: Creating a validator to check for common passwords in ASP.NET Core Identity

Creating a validator to check for common passwords in ASP.NET Core Identity

In my last post, I showed how you can create a custom validator for ASP.NET Core. In this post, I introduce a package that lets you validate that a password is not one of the most common passwords users choose.

You can find the package on GitHub and on NuGet, and can install it using dotnet add package CommonPasswordValidator. Currently, it supports ASP.NET Core 2.0 preview 2.

Full disclosure, this post is 100% inspired by the codinghorror.com article by Jeff Atwood on how they validate passwords in Discourse. If you haven't read it yet, do it now!

As Jeff describes in the appropriately named article Password Rules Are Bullshit, password rules can be a real pain. Obviously in theory, password rules make sense, but reality can be a bit different. The default Identity templates require:

  • Passwords must have at least one lowercase ('a'-'z')
  • Passwords must have at least one uppercase ('A'-'Z')
  • Passwords must have at least one digit ('0'-'9')
  • Passwords must have at least one non alphanumeric character

All these rules will theoretically increase the entropy of any passwords a user enters. But you just know that's not really what happens.

All it means is that instead of entering password, they enter Password1!

And on top of that, if you're using a password manager, these password rules can get in the way. So your 40 character random password happens not to have a digit in it this time? Pretty sure it's still OK... should you really have to generate a new password?

Instead, Jeff Atwood suggests 5 pieces of advice when designing your password validation:

  1. Password rules are bullshit - These rarely achieve their goal, don't make the passwords of average users better, and penalise users using password managers.

    You can easily disable password rules in ASP.NET Core Identity by disabling the composition rules.

  2. Enforce a minimum Unicode password length - Length is an easy rule for users to grasp, and in general, a longer password will be more secure than a short one

    You can similarly set the minimum length in ASP.NET Core Identity using the options pattern, e.g. options.Password.RequiredLength = 10

  3. Check for common passwords - There are plenty of stats on the terrible password choices users make when left to their own devices, and you can create your own by checking out the password lists available online. For example, 30% of users have a password from the top 10,000 most common passwords!

    In this post I'll describe a custom validator you can add to your ASP.NET Core Identity project to prevent users using the most common passwords

  4. Check for basic entropy - Even with a length requirement, and checking for common passwords, users can make terrible password choices like 9999999999. A simple approach to tackling this is to require a minimum number of unique digits.

    In ASP.NET Core Identity 2.0, you can require a minimum number of unique characters using options.Password.RequiredUniqueChars = 6

  5. Check for special case passwords - Users shouldn't be allowed to use their username, email or other obvious values as their password.

You can create custom validators for ASP.NET Core Identity, as I showed in my previous post.
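Points 1, 2 and 4 all map directly onto the standard Identity options. As a sketch (the option names are the real PasswordOptions properties; the values are just examples), relaxing the composition rules while enforcing length and uniqueness looks like this:

```csharp
services.AddIdentity<ApplicationUser, IdentityRole>(options =>
{
    // 1. Disable the composition rules...
    options.Password.RequireLowercase = false;
    options.Password.RequireUppercase = false;
    options.Password.RequireDigit = false;
    options.Password.RequireNonAlphanumeric = false;

    // 2. ...but enforce a minimum length
    options.Password.RequiredLength = 10;

    // 4. Require a minimum number of unique characters (ASP.NET Core Identity 2.0)
    options.Password.RequiredUniqueChars = 6;
})
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders();
```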

Whether you agree 100% with these rules doesn't really matter, but I think most people will agree with at least a majority of them. Either way, preventing the most common passwords is somewhat of a no-brainer.

There's no built-in way of achieving this, but thanks to ASP.NET Core Identity's extensibility, we can create a custom validator instead.

Creating a validator to check for common passwords

ASP.NET Core Identity lets you register custom password validators. These are executed when a user registers on your site, or changes their password, and let you apply additional constraints to the password.

In my last post, I showed how to create custom validators. Creating a validator to check for common passwords is pretty simple - we load the list of forbidden passwords into a HashSet, and check that the user's password is not one of them:

public class Top100PasswordValidator<TUser> : IPasswordValidator<TUser>  
        where TUser : class
{
    static readonly HashSet<string> Passwords = PasswordLists.Top100Passwords;

    public Task<IdentityResult> ValidateAsync(UserManager<TUser> manager,
                                                TUser user,
                                                string password)
    {
        if(Passwords.Contains(password))
        {
            var result = IdentityResult.Failed(new IdentityError
            {
                Code = "CommonPassword",
                Description = "The password you chose is too common."
            });
            return Task.FromResult(result);
        }
        return Task.FromResult(IdentityResult.Success);
    }
}

This validator is pretty standard. We have a list of passwords that you are not allowed to use, stored in the static HashSet<string>. ASP.NET Core Identity will call ValidateAsync when a new user registers, passing in the new user object, and the new password.

As we don't need to access the user object itself, we can make this validator completely generic to TUser, instead of limiting it to IdentityUser<TKey> as we did in my last post.

There are plenty of different password lists we could choose from, so I chose to implement a few different variations, based on the 10 million passwords list from 2016, depending on how restrictive you want to be.

  • Block passwords in the top 100 most common
  • Block passwords in the top 500 most common
  • Block passwords in the top 1,000 most common
  • Block passwords in the top 10,000 most common
  • Block passwords in the top 100,000 most common

Each of these password lists is stored as an embedded resource in the NuGet package. In the new .csproj file format, you do this by removing it from the normal wildcard inclusion, and marking it as an EmbeddedResource:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <None Remove="PasswordLists\10_million_password_list_top_100.txt" />
  </ItemGroup>

  <ItemGroup>
    <EmbeddedResource Include="PasswordLists\10_million_password_list_top_100.txt" />
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.Identity" Version="2.0.0-preview2-final" />
  </ItemGroup>

</Project>  

With the lists embedded in the dll, we can simply load the passwords from the embedded resource into a HashSet.

Loading a list of strings from an embedded resource

You can read an embedded resource as a stream from the assembly using the GetManifestResourceStream() method on the Assembly type. I created a small helper class that loads the embedded file from the assembly, reads it line by line, and adds the password to the HashSet (using a case-insensitive string comparer).

internal static class PasswordLists  
{
    private static HashSet<string> LoadPasswordList(string resourceName)
    {
        HashSet<string> hashset;

        var assembly = typeof(PasswordLists).GetTypeInfo().Assembly;
        using (var stream = assembly.GetManifestResourceStream(resourceName))
        {
            using (var streamReader = new StreamReader(stream))
            {
                hashset = new HashSet<string>(
                    GetLines(streamReader),
                    StringComparer.OrdinalIgnoreCase);
            }
        }
        return hashset;
    }

    private static IEnumerable<string> GetLines(StreamReader reader)
    {
        while (!reader.EndOfStream)
        {
            yield return reader.ReadLine();
        }
    }
}

NOTE: When you pass in the resourceName to load, it must be properly namespaced. The name is based on the assembly's default namespace and the subfolder of the resource file.
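For example, assuming the lists live in a PasswordLists folder of a project whose root namespace is CommonPasswordValidator (the exact name depends on your project settings), the top-100 list could be exposed like this:

```csharp
// Manifest resource names take the form "<root namespace>.<folder>.<file name>"
internal static HashSet<string> Top100Passwords { get; } =
    LoadPasswordList("CommonPasswordValidator.PasswordLists.10_million_password_list_top_100.txt");
```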

Adding the custom validator to ASP.NET Core Identity

That's all there is to the validator itself. You can add it to the ASP.NET Core Identity validators collection using the AddPasswordValidator<>() method. For example:

services.AddIdentity<ApplicationUser, IdentityRole>()  
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders()
    .AddPasswordValidator<Top100PasswordValidator<ApplicationUser>>();

It's somewhat of a convention to create helper extension methods in ASP.NET Core, so we can easily add an additional extension that simplifies the above slightly:

public static class IdentityBuilderExtensions  
{        
    public static IdentityBuilder AddTop100PasswordValidator<TUser>(this IdentityBuilder builder) where TUser : class
    {
        return builder.AddPasswordValidator<Top100PasswordValidator<TUser>>();
    }
}

With this extension, you can add the validator using the following:

services.AddIdentity<ApplicationUser, IdentityRole>()  
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders()
    .AddTop100PasswordValidator<ApplicationUser>();

With the validator in place, if a user tries to use a password that's too common, they'll get a standard warning when registering on your site.

Summary

This post was based on the suggestion by Jeff Atwood that we should limit password composition rules, focus on length, and ensure users can't choose common passwords.

ASP.NET Core Identity lets you add custom validators. This post showed how you could create a validator that ensures the entered password isn't in the top 100 - 100,000 of the 10 million most common passwords.

You can view the source code for the validator on GitHub, or you can install the NuGet package using the command

dotnet add package CommonPasswordValidator  

Currently, the package targets .NET Core 2.0 preview 2. If you have any comments, suggestions, or bugs, please raise an issue or leave a comment! Thanks


Anuraj Parameswaran: Send Mail Using SendGrid In .NET Core

This post is about sending emails using the SendGrid API in .NET Core. SendGrid is a cloud-based SMTP provider that allows you to send email without having to maintain email servers. SendGrid manages all of the technical details, from scaling the infrastructure to ISP outreach and reputation monitoring to whitelist services and real time analytics.


Damien Bowden: Implementing Two-factor authentication with IdentityServer4 and Twilio

This article shows how to implement two factor authentication using Twilio and IdentityServer4 using Identity. In Microsoft’s Two-factor authentication with SMS documentation, Twilio and ASPSMS are promoted, but any SMS provider can be used.

Code: https://github.com/damienbod/AspNetCoreID4External

Setting up Twilio

Create an account and login to https://www.twilio.com/

Now create a new phone number and use the Twilio documentation to set up your account to send SMS messages. You need the Account SID, Auth Token and the Phone number which are required in the application.

The phone number can be configured here:
https://www.twilio.com/console/phone-numbers/incoming

Adding the SMS support to IdentityServer4

Add the Twilio Nuget package to the IdentityServer4 project.

<PackageReference Include="Twilio" Version="5.5.2" />

The Twilio settings should be kept secret, so these configuration properties are added to the appsettings.json file with dummy values. These can then be used for the deployments.

"TwilioSettings": {
  "Sid": "dummy",
  "Token": "dummy",
  "From": "dummy"
}

A configuration class is then created so that the settings can be added to the DI.

namespace IdentityServerWithAspNetIdentity.Services
{
    public class TwilioSettings
    {
        public string Sid { get; set; }
        public string Token { get; set; }
        public string From { get; set; }
    }
}

Now the user secrets configuration needs to be setup on your dev PC. Right click the IdentityServer4 project and add the user secrets with the proper values which you can get from your Twilio account.

{
  "MicrosoftClientId": "your_secret..",
  "MicrosoftClientSecret":  "your_secret..",
  "TwilioSettings": {
    "Sid": "your_secret..",
    "Token": "your_secret..",
    "From": "your_secret.."
  }
}

The configuration class is then added to the DI in the Startup class ConfigureServices method.

var twilioSettings = Configuration.GetSection("TwilioSettings");
services.Configure<TwilioSettings>(twilioSettings);

Now the TwilioSettings can be added to the AuthMessageSender class which is defined in the MessageServices file, if using the IdentityServer4 samples.

private readonly TwilioSettings _twilioSettings;

public AuthMessageSender(ILogger<AuthMessageSender> logger, IOptions<TwilioSettings> twilioSettings)
{
	_logger = logger;
	_twilioSettings = twilioSettings.Value;
}

This class is also added to the DI in the startup class.

services.AddTransient<ISmsSender, AuthMessageSender>();

Now the TwilioClient can be setup to send the SMS in the SendSmsAsync method.

public Task SendSmsAsync(string number, string message)
{
	// Plug in your SMS service here to send a text message.
	_logger.LogInformation("SMS: {number}, Message: {message}", number, message);
	var sid = _twilioSettings.Sid;
	var token = _twilioSettings.Token;
	var from = _twilioSettings.From;
	TwilioClient.Init(sid, token);
	MessageResource.CreateAsync(new PhoneNumber(number),
		from: new PhoneNumber(from),
		body: message);
	return Task.FromResult(0);
}

The SendCode.cshtml view can now be changed to send the SMS with the style and layout you prefer.

<form asp-controller="Account" asp-action="SendCode" asp-route-returnurl="@Model.ReturnUrl" method="post" class="form-horizontal">
    <input asp-for="RememberMe" type="hidden" />
    <input asp-for="SelectedProvider" type="hidden" value="Phone" />
    <input asp-for="ReturnUrl" type="hidden" value="@Model.ReturnUrl" />
    <div class="row">
        <div class="col-md-8">
            <button type="submit" class="btn btn-default">Send a verification code using SMS</button>
        </div>
    </div>
</form>

In the VerifyCode.cshtml, the ReturnUrl from the model property must be added to the form as a hidden item, otherwise your client will not be redirected back to the calling app.

<form asp-controller="Account" asp-action="VerifyCode" asp-route-returnurl="@ViewData["ReturnUrl"]" method="post" class="form-horizontal">
    <div asp-validation-summary="All" class="text-danger"></div>
    <input asp-for="Provider" type="hidden" />
    <input asp-for="RememberMe" type="hidden" />
    <input asp-for="ReturnUrl" type="hidden" value="@Model.ReturnUrl" />
    <h4>@ViewData["Status"]</h4>
    <hr />
    <div class="form-group">
        <label asp-for="Code" class="col-md-2 control-label"></label>
        <div class="col-md-10">
            <input asp-for="Code" class="form-control" />
            <span asp-validation-for="Code" class="text-danger"></span>
        </div>
    </div>
    <div class="form-group">
        <div class="col-md-offset-2 col-md-10">
            <div class="checkbox">
                <input asp-for="RememberBrowser" />
                <label asp-for="RememberBrowser"></label>
            </div>
        </div>
    </div>
    <div class="form-group">
        <div class="col-md-offset-2 col-md-10">
            <button type="submit" class="btn btn-default">Submit</button>
        </div>
    </div>
</form>

Testing the application

If using an existing client, you need to update the Identity data in the database. Each user requires that the TwoFactorEnabled field is set to true, and a mobile phone number needs to be set in the phone number field (or any phone which can accept SMS).
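If you'd rather not edit the database by hand, the same fields can be set through the standard UserManager APIs; a sketch (the user name and phone number are placeholders):

```csharp
var user = await _userManager.FindByNameAsync("alice@example.com");
await _userManager.SetPhoneNumberAsync(user, "+41790000000"); // any phone which can accept SMS
await _userManager.SetTwoFactorEnabledAsync(user, true);      // sets TwoFactorEnabled in the store
```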

Now login with this user:

The user is redirected to the send SMS page. Click the send SMS button. This sends an SMS to the phone number defined in the Identity for the user trying to authenticate.

You should receive an SMS. Enter the code in the verify view. If no SMS was sent, check your Twilio account logs.

After a successful code validation, the user is redirected back to the consent page for the client application. If not redirected, the return url was not set in the model.

Links:

https://docs.microsoft.com/en-us/aspnet/core/security/authentication/2fa

https://www.twilio.com/

http://docs.identityserver.io/en/release/

https://www.twilio.com/use-cases/two-factor-authentication



Andrew Lock: Creating custom password validators for ASP.NET Core Identity

Creating custom password validators for ASP.NET Core Identity

ASP.NET Core Identity is a membership system that lets you add user accounts to your ASP.NET Core applications. It provides the low-level services for creating users, verifying passwords and signing users in to your application, as well as additional features such as two-factor authentication (2FA) and account lockout after too many failed attempts to login.

When users register on an application, they typically provide an email/username and a password. ASP.NET Core Identity lets you provide validation rules for the password, to try and prevent users from using passwords that are too simple.

In this post, I'll talk about the default password validation settings and how to customise them. Finally, I'll show how you can write your own password validator for ASP.NET Core Identity.

The default settings

By default, if you don't customise anything, Identity configures a default set of validation rules for new passwords:

  • Passwords must be at least 6 characters
  • Passwords must have at least one lowercase ('a'-'z')
  • Passwords must have at least one uppercase ('A'-'Z')
  • Passwords must have at least one digit ('0'-'9')
  • Passwords must have at least one non alphanumeric character

If you want to change these values, to increase the minimum length for example, you can do so when you add Identity to the DI container in ConfigureServices. In the following example I've increased the minimum password length from 6 to 10, and disabled the other validations:

Disclaimer: I'm not saying you should do this, it's just an example!

services.AddIdentity<ApplicationUser, IdentityRole>(options =>  
{
    options.Password.RequiredLength = 10;
    options.Password.RequireLowercase = false;
    options.Password.RequireUppercase = false;
    options.Password.RequireNonAlphanumeric = false;
    options.Password.RequireDigit = false;
})
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders();

Coming in ASP.NET Core 2.0

In ASP.NET Core Identity 2.0, which uses ASP.NET Core 2.0 (available as 2.0.0-preview2 at time of writing) you get another configurable default setting:

  • Passwords must use at least n different characters

This lets you guard against the (stupidly popular) password "111111" for example. By default, this setting is disabled for compatibility reasons (you only need 1 unique character), but you can enable it in a similar way. The following example requires passwords of length 10, with at least 6 unique characters, one upper, one lower, one digit, and one special character.

services.AddIdentity<ApplicationUser, IdentityRole>(options =>  
{
    options.Password.RequiredLength = 10;
    options.Password.RequiredUniqueChars = 6;
})
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders();

When the default validators aren't good enough...

Whether having all of these rules when creating a password is a good idea is up for debate, but it's certainly nice to have the options there. Unfortunately, sometimes these rules aren't enough to really protect users from themselves.

For example, it's quite common for a sub-set of users to use their username/email as their password. This is obviously a bad idea, but unfortunately the default password rules won't necessarily catch it! When I tried using my username as my password, it met all the rules: more than 6 characters, upper and lower case, a number, even a special character (@)!

And voilà, we're logged in...

Luckily, ASP.NET Core Identity lets you write your own password validators. Let's create a validator to catch this common no-no.

Writing a custom validator for ASP.NET Core Identity

You can create a custom validator for ASP.NET Core Identity by implementing the IPasswordValidator<TUser> interface:

public interface IPasswordValidator<TUser> where TUser : class  
{
    Task<IdentityResult> ValidateAsync(UserManager<TUser> manager, TUser user, string password);
}

One thing to note about this interface is that the TUser type parameter is only limited to class - that means that if you create the most generic implementation of this interface, you won't be able to use properties of the user parameter.

That's fine if you're validating the password by looking at the password itself, checking the length and which character types are in it etc. Unfortunately, it's no good for the validator we're trying to create - we need access to the UserName property so we can check if the password matches.

We can get round this by implementing the validator and restricting the TUser type parameter to an IdentityUser. This is the default Identity user type created by the templates (which use EF Core under the hood), so it's still pretty generic, and it means we can now build our validator.

public class UsernameAsPasswordValidator<TUser> : IPasswordValidator<TUser>  
    where TUser : IdentityUser
{
    public Task<IdentityResult> ValidateAsync(UserManager<TUser> manager, TUser user, string password)
    {
        if (string.Equals(user.UserName, password, StringComparison.OrdinalIgnoreCase))
        {
            return Task.FromResult(IdentityResult.Failed(new IdentityError
            {
                Code = "UsernameAsPassword",
                Description = "You cannot use your username as your password"
            }));
        }
        return Task.FromResult(IdentityResult.Success);
    }
}

This validator checks if the UserName of the new TUser object passed in matches the password (ignoring case). If they match, then it rejects the password using the IdentityResult.Failed method, passing in an IdentityError (and wrapping in a Task<>).

The IdentityError class has both a Code and a Description - the Code property is used by the Identity system internally to localise the errors, and the Description is obviously an English description of the error which is used by default.

Note: Your errors won't be localised by default - I'll write a follow up post about this soon.

If the password and username are different, then the validator returns IdentityResult.Success, indicating it has no problems.

Note: The default templates use the email address for both the UserName and Email properties. If your user entities are configured differently - for example, the username is separate from the email - you could check the password doesn't match either property by updating the ValidateAsync method accordingly.
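As a sketch of that variation, the check in ValidateAsync could compare against both properties (IdentityUser exposes both UserName and Email):

```csharp
public Task<IdentityResult> ValidateAsync(UserManager<TUser> manager, TUser user, string password)
{
    if (string.Equals(user.UserName, password, StringComparison.OrdinalIgnoreCase)
        || string.Equals(user.Email, password, StringComparison.OrdinalIgnoreCase))
    {
        return Task.FromResult(IdentityResult.Failed(new IdentityError
        {
            Code = "UsernameOrEmailAsPassword",
            Description = "You cannot use your username or email as your password"
        }));
    }
    return Task.FromResult(IdentityResult.Success);
}
```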

Now we have a validator, we just need to make Identity aware of it. You do this with the AddPasswordValidator<> method exposed on IdentityBuilder when configuring your app:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddIdentity<ApplicationUser, IdentityRole>()
        .AddEntityFrameworkStores<ApplicationDbContext>()
        .AddDefaultTokenProviders()
        .AddPasswordValidator<UsernameAsPasswordValidator<ApplicationUser>>();

    // EF Core, MVC service config etc
}

It looks a bit long-winded because we need to pass in the TUser generic parameter. If we're just building the validator for a single app, we could always remove the parameter altogether and simplify the signature somewhat:

public class UsernameAsPasswordValidator : IPasswordValidator<ApplicationUser>  
{
    public Task<IdentityResult> ValidateAsync(UserManager<ApplicationUser> manager, ApplicationUser user, string password)
    {
        // as before
    }
}

And then our Identity configuration becomes:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddIdentity<ApplicationUser, IdentityRole>()
        .AddEntityFrameworkStores<ApplicationDbContext>()
        .AddDefaultTokenProviders()
        .AddPasswordValidator<UsernameAsPasswordValidator>();

    // EF Core, MVC service config etc
}

Now when you try and use your username as a password to register a new user you'll get a nice friendly warning to tell you to stop being stupid!

Summary

The default password validation in ASP.NET Core Identity includes a variety of password rules that you configure, such as password length, and required character types.

You can write your own password validators by implementing IPasswordValidator<TUser> and calling .AddPasswordValidator<T> when configuring Identity.

I have created a small NuGet package containing the validator from this blog post, a similar validator for validating the password does not equal the email, and one that looks for specific phrases (for example the URL or domain of your website - another popular choice for security-lacking users!).

You can find the package NetEscapades.AspNetCore.Identity.Validators on Nuget, with instructions on how to get started on GitHub. Hope you find it useful!


Damien Bowden: Adding an external Microsoft login to IdentityServer4

This article shows how to implement a Microsoft Account as an external provider in an IdentityServer4 project using ASP.NET Core Identity with a SQLite database.

Code https://github.com/damienbod/AspNetCoreID4External

Setting up the App Platform for the Microsoft Account

To setup the app, login using your Microsoft account and open the My Applications link

https://apps.dev.microsoft.com/?mkt=en-gb#/appList

Click the ‘Add an app’ button

Give the application a name and add your email. This app is called ‘microsoft_id4_damienbod’

After you click the create button, you need to generate a new password. Save this somewhere for the application configuration; it will be the client secret when configuring the application.

Now Add a new platform. Choose a Web type.

Now add the redirect URL for your application. This will be https://YOUR_URL/signin-microsoft

Add the permissions as required

Application configuration

Clone the IdentityServer4 samples and use the 6_AspNetIdentity project from the quickstarts.
Add the Microsoft.AspNetCore.Authentication.MicrosoftAccount package using NuGet, as well as the required ASP.NET Core Identity and EF Core packages, to the IdentityServer4 server project.

The application uses SQLite with Identity. This is configured in the Startup class in the ConfigureServices method.

services.AddDbContext<ApplicationDbContext>(options =>
  options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));

services.AddIdentity<ApplicationUser, IdentityRole>()
  .AddEntityFrameworkStores<ApplicationDbContext>()
  .AddDefaultTokenProviders();

Now the UseMicrosoftAccountAuthentication extension method can be used to add the Microsoft Account external provider middleware in the Configure method in the Startup class. The SignInScheme is set to “Identity.External” because the application is using ASP.NET Core Identity. The ClientId is the Id from the app ‘microsoft_id4_damienbod’ which was configured on the My Applications website. The ClientSecret is the generated password.

app.UseIdentity();
app.UseIdentityServer();

app.UseMicrosoftAccountAuthentication(new MicrosoftAccountOptions
{
	AuthenticationScheme = "Microsoft",
	DisplayName = "Microsoft",
	SignInScheme = "Identity.External",
	ClientId = _clientId,
	ClientSecret = _clientSecret
});

The application can now be tested. An Angular client using OpenID Connect sends a login request to the server. The ClientId and the ClientSecret are saved using user secrets, so that the password is not committed in the source code.
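For reference, secrets like these are typically stored with the .NET CLI's user-secrets tool; the key names below are illustrative rather than taken from the sample project:

```shell
# Run from the project directory (requires a UserSecretsId in the project file).
# Key names are examples - use whatever your configuration code actually reads.
dotnet user-secrets set "MicrosoftClientId" "your-application-id"
dotnet user-secrets set "MicrosoftClientSecret" "your-generated-password"
```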

Click the Microsoft button to login.

This redirects the user to the Microsoft Account login for the microsoft_id4_damienbod application.

After a successful login, the user is redirected to the consent page.

Click yes, and the user is redirected back to the IdentityServer4 application. If it’s a new user, a register page will be opened.

Click register and the ID4 consent page is opened.

Then the application opens.

What’s nice about the IdentityServer4 application is that it’s a simple ASP.NET Core application with standard Views and Controllers. This makes it really easy to change the flow, for example, if a user is not allowed to register or whatever.

Links

https://docs.microsoft.com/en-us/azure/app-service-mobile/app-service-mobile-how-to-configure-microsoft-authentication

http://docs.identityserver.io/en/release/topics/signin_external_providers.html



Anuraj Parameswaran: ASP.NET Core Gravatar Tag Helper

This post is about creating a tag helper in ASP.NET Core for displaying Gravatar images based on the email address. Your Gravatar is an image that follows you from site to site appearing beside your name when you do things like comment or post on a blog.
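As a rough sketch of what such a tag helper needs to do under the hood: a Gravatar image URL is built from an MD5 hash of the trimmed, lower-cased email address. This standalone example is illustrative, not the code from the post:

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public static class Gravatar
{
    // Builds a Gravatar image URL: an MD5 hash of the trimmed,
    // lower-cased email address, rendered as lowercase hex.
    public static string GetUrl(string email, int size = 80)
    {
        using (var md5 = MD5.Create())
        {
            var normalised = email.Trim().ToLowerInvariant();
            var hash = md5.ComputeHash(Encoding.UTF8.GetBytes(normalised));
            var hex = string.Concat(hash.Select(b => b.ToString("x2")));
            return $"https://www.gravatar.com/avatar/{hex}?s={size}";
        }
    }
}
```

The tag helper's job is then mostly to write this URL into the src attribute of an img element.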


Andrew Lock: Localising the DisplayAttribute in ASP.NET Core 1.1

Localising the DisplayAttribute in ASP.NET Core 1.1

This is a very quick post in response to a comment asking about the state of localisation for the DisplayAttribute in ASP.NET Core 1.1. A while ago I wrote a series of posts about localising your ASP.NET Core application, using the IStringLocalizer abstraction.

  1. Adding Localisation to an ASP.NET Core application
  2. Localising the DisplayAttribute and avoiding magic strings in ASP.NET Core
  3. Url culture provider using middleware as filters in ASP.NET Core 1.1.0
  4. Applying the RouteDataRequestCultureProvider globally with middleware as filters
  5. Using a culture constraint and catching 404s with the url culture provider
  6. Redirecting unknown cultures to the default culture when using the url culture provider

The IStringLocalizer is a new way of localising your validation messages, view templates, and arbitrary strings. If you're not familiar with localisation in ASP.NET Core, I suggest checking out my first post on localisation for the benefits and pitfalls it brings, but I'll give a quick refresher here.

Brief Recap

Localisation is handled in ASP.NET Core through two main abstractions: IStringLocalizer and IStringLocalizer<T>. These allow you to retrieve the localised version of a string by essentially using it as the key into a dictionary; if the key does not exist for that resource, or you are using the default culture, the key itself is returned as the resource:

public class ExampleClass  
{
    private readonly IStringLocalizer<ExampleClass> _localizer;

    public ExampleClass(IStringLocalizer<ExampleClass> localizer)
    {
        _localizer = localizer;

        // If the resource exists, this returns the localised string
        var localisedString1 = _localizer["I exist"]; // "J'existe"

        // If the resource does not exist, the key itself is returned
        var localisedString2 = _localizer["I don't exist"]; // "I don't exist"
    }
}

Resources are stored in .resx files that are named according to the class they are localising. So for example, the IStringLocalizer<ExampleClass> localiser would look for a file named (something similar to) ExampleClass.fr-FR.resx. Microsoft recommends that the resource keys/names in the .resx files are the localised values in the default culture. That way you can write your application without having to create any resource files - the supplied string will be used as the resource.

As well as arbitrary strings like this, DataAnnotations which derive from ValidationAttribute also have their ErrorMessage property localised automatically.

Finally, you can localise your Views, either providing whole replacements for your View by using filenames of the form Index.fr-FR.cshtml, or by localising specific strings in your view with another abstraction, the IViewLocalizer, which acts as a view-specific wrapper around IStringLocalizer.
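As a minimal illustration (not taken from the posts above), using the IViewLocalizer in a Razor view looks something like this:

```cshtml
@using Microsoft.AspNetCore.Mvc.Localization
@inject IViewLocalizer Localizer

@* "Welcome to the site" acts as both the resource key and the default value *@
<h1>@Localizer["Welcome to the site"]</h1>
```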

Localising the DisplayAttribute in ASP.NET Core 1.0

Unfortunately, in ASP.NET Core 1.0, there was one big elephant in the room… You could localise the ValidationAttributes on your view models, but you couldn't localise the DisplayAttribute which would generate the associated labels!

That's a little unfair - you could localise the DisplayAttribute, it just required jumping through some significant hoops. You had to fall back to the ResourceManager class, use Visual Studio to generate .resx designer files, and move away from the simpler localisation approach adopted everywhere else. Instead of just passing a key to the Name or ErrorMessage property, you had to remember to set the ResourceType too:

public class HomeViewModel  
{
    [Required(ErrorMessage = "The field is required")]
    [EmailAddress(ErrorMessage = "Not a valid email address")]
    [Display(Name = "Your email address", ResourceType = typeof(Resources.ViewModels_HomeViewModel))]
    public string Email { get; set; }
}

Localising the DisplayAttribute in ASP.NET Core 1.1

Luckily, that issue has all gone away now. In ASP.NET Core 1.1 you can now localise the DisplayAttribute in the same way you do your ValidationAttributes:

public class HomeViewModel  
{
    [Required(ErrorMessage = "The field is required")]
    [EmailAddress(ErrorMessage = "Not a valid email address")]
    [Display(Name = "Your email address")]
    public string Email { get; set; }
}

These values will be used by the IStringLocalizer as keys in the resx files for localisation (or as the localised value itself if a value can't be found in the .resx). No fuss, no muss.
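For this to work, the localisation services need to be registered. A typical ConfigureServices setup looks something like the following sketch (the "Resources" folder name is conventional, not required):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Tell the localisation services where to find .resx files
    services.AddLocalization(options => options.ResourcesPath = "Resources");

    services.AddMvc()
        // Localise DataAnnotations (including DisplayAttribute)
        // via the IStringLocalizer abstractions
        .AddDataAnnotationsLocalization();
}
```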

Note As an aside, I still really dislike this idea of using the English phrase as the key in the dictionary - it's too fragile for my liking, but at least you can easily fix that.

Summary

Localisation in ASP.NET Core still requires a lot of effort, but it's certainly easier than in the previous version of ASP.NET. With the update to the DisplayAttribute in ASP.NET Core 1.1, one of the annoying differences in behaviour was fixed, so that you localise it the same way you would your other DataAnnotations.

As with most of my blog posts, there's a small sample project demonstrating this on GitHub.


Dominick Baier: Authorization is hard! Slides and Video from NDC Oslo 2017

A while ago I wrote a controversial article about the problems that can arise when mixing authentication and authorization systems – especially when using identity/access tokens to transmit authorization data – you can read it here.

In the meanwhile Brock and I sat down to prototype a possible solution (or at least an improvement) to the problem and presented it to various customers and at conferences.

Also many people asked me for a more detailed version of my blog post – and finally there is now a recording of our talk from NDC – video here – and slides here. HTH!

 


Filed under: .NET Security, ASP.NET Core, IdentityServer, OAuth, OpenID Connect, WebAPI


Andrew Lock: When you use the Polly circuit-breaker, make sure you share your Policy instances!

When you use the Polly circuit-breaker, make sure you share your Policy instances!

This post is somewhat of a PSA about using the excellent open source Polly library for adding resilience to your application. Recently, I was tasked with adding a circuit-breaker implementation to some code calling an external API, and I figured Polly would be perfect, especially as we already used it in our solution!

I hadn't used Polly directly in a little while, but the excellent design makes it easy to add retry handling, timeouts, or circuit-breaking to your application. Unfortunately, my initial implementation had one particular flaw, which meant that my circuit-breaker never actually worked!

In this post I'll outline the scenario I was working with, my initial implementation, the subsequent issues, and what I should have done!

tl;dr: Policy is thread-safe, and for the circuit-breaker to work correctly, it must be shared so that you call Execute on the same Policy instance every time!

The scenario - dealing with a flakey external API

A common requirement when working with currencies is dealing with exchange rates. We have happily been using the Open Exchange Rates API to fetch a JSON list of exchange rates for a while now.

The existing implementation consists of three classes:

  • OpenExchangeRatesClient - Responsible for fetching the exchange rates from the API, and parsing the JSON into a strongly typed .NET object.
  • OpenExchangeRatesCache - We don't want to fetch exchange rates every time we need them, so this class caches the latest exchange rates for a day before calling the OpenExchangeRatesClient to get up-to-date rates.
  • FallbackExchangeRateProvider - If the call to fetch the latest rates using the OpenExchangeRatesClient fails, we fallback to a somewhat recent copy of the data, loaded from an embedded resource in the assembly.

All of these classes are registered as singletons with the IoC container, so there is only a single instance of each. This setup had been working fine for a while, but then the Open Exchange Rates API went down, just as the local cache of exchange rates expired. The series of events was:

  1. A request was made to our internal API, which called a service that required exchange rates.
  2. The service called the OpenExchangeRatesCache which realised the current rates were out of date.
  3. The cache called the OpenExchangeRatesClient to fetch the latest rates.
  4. Unfortunately the service was down, and eventually caused a timeout (after 100 seconds!)
  5. At this point the cache used the FallbackExchangeRateProvider to use the stale rates for this single request.
  6. A separate request was made to our internal API - repeat steps 2-5!

An issue in the external dependency, the exchange rate API going down, was causing our internal services to take 100s to respond to requests, which in turn was causing other requests to timeout. Effectively we had a cascading failure, even though we thought we had accounted for this by providing a fallback.

Note I realise updating cached exchange rates should probably be a background task. This would stop requests failing if there are issues updating, but the general problem is common to many scenarios, especially if you're using micro-services.

Luckily, this outage didn't happen at a peak time, so by the time we came to investigate the issue, the problem had passed, and relatively few people were affected. However, it obviously flagged up a problem, so I set about trying to ensure this wouldn't happen again if the API had issues at a later date!

Fix 1 - Reduce the timeouts

The first fix was a relatively simple one. The OpenExchangeRatesClient was using an HttpClient to call the API and fetch the exchange rate data. This was instantiated in the constructor, and reused for the lifetime of the class. As the client was used as a singleton, the HttpClient was also a singleton (so we didn't have any of these issues).

public class OpenExchangeRatesClient  
{
    private readonly HttpClient _client;

    public OpenExchangeRatesClient(string apiUrl)
    {
        _client = new HttpClient
        {
            BaseAddress = new Uri(apiUrl),
        };
    }
}

The first fix I made was to set the Timeout property on the HttpClient. In the failure scenario, it was taking 100s to get back an error response. Why 100s? Because that's the default timeout for HttpClient!

Checking our metrics of previous calls to the service, I could see that prior to the failure, virtually all calls were taking approximately 0.25s. Based on that, a 100s timeout was clearly overkill! Setting the timeout to something more modest, but still conservative, say 5s, should help prevent the scenario happening again.

public class OpenExchangeRatesClient  
{
    private readonly HttpClient _client;

    public OpenExchangeRatesClient(string apiUrl)
    {
        _client = new HttpClient
        {
            BaseAddress = new Uri(apiUrl),
            Timeout = TimeSpan.FromSeconds(5),
        };
    }
}

Fix 2 - Add a circuit breaker

The second fix was to add a circuit-breaker implementation to the API calls. The Polly documentation has a great explanation of the circuit-breaker pattern, but I'll give a brief summary here.

Circuit-breakers in brief

Circuit-breakers make sense when calling a somewhat unreliable API. They use a fail-fast approach when a method has failed several times in a row. As an example, in my scenario, there was no point repeatedly calling the API when it hadn't worked several times in a row, and was very likely to fail. All we were doing was adding additional delays to the method calls, when it's pretty likely you're going to have to use the fallback anyway.

The circuit-breaker tracks the number of times an API call has failed. Once it crosses a threshold number of failures in a row, it doesn't even try to call the API for subsequent requests. Instead, it fails immediately, as though the API had failed.

After some timeout, the circuit-breaker will let one method call through to "test" the API and see if it succeeds. If it fails, it goes back to just failing immediately. If it succeeds then the circuit is closed again, and it will go back to calling the API for every request.
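If you want visibility of these state transitions, Polly's CircuitBreakerAsync has overloads that accept onBreak and onReset callbacks; the console logging below is just a placeholder sketch:

```csharp
var circuitBreaker = Policy
    .Handle<Exception>()
    .CircuitBreakerAsync(
        exceptionsAllowedBeforeBreaking: 2,
        durationOfBreak: TimeSpan.FromMinutes(1),
        // Called when the circuit opens (moves to the "broken" state)
        onBreak: (exception, breakDelay) =>
            Console.WriteLine($"Circuit opened for {breakDelay}: {exception.Message}"),
        // Called when a trial call succeeds and the circuit closes again
        onReset: () => Console.WriteLine("Circuit closed again"));
```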

Circuit breaker state diagram taken from the Polly documentation

The circuit-breaker was a perfect fit for the failure scenario in our app, so I set about adding it to the OpenExchangeRatesClient.

Creating a circuit breaker policy

You can create a circuit-breaker Policy in Polly using the CircuitBreakerSyntax. As we're going to be making requests with the HttpClient, I used the async methods for setting up the policy and for calling the API:

var circuitBreaker = Policy  
    .Handle<Exception>()
    .CircuitBreakerAsync(
        exceptionsAllowedBeforeBreaking: 2, 
        durationOfBreak: TimeSpan.FromMinutes(1)
    );

var rates = await circuitBreaker  
    .ExecuteAsync(() => CallRatesApi());

This configuration creates a new circuit breaker policy, defines the number of consecutive exceptions to allow before marking the API as broken and opening the breaker, and the amount of time the breaker should stay open for before moving to the half-closed state.

Once you have a policy in place, circuitBreaker, you can call ExecuteAsync and pass in the method to execute. At runtime, if an exception occurs executing CallRatesApi() the circuit breaker will catch it, and keep track of how many exceptions it has raised to control the breaker's state.

Adding a fallback

When an exception occurs in the CallRatesApi() method, the breaker will catch it, but it will re-throw the exception. In my case, I wanted to catch those exceptions and use the FallbackExchangeRateProvider. I could have used a try-catch block, but I decided to stay in the Polly-spirit and use a Fallback policy.

A fallback policy is effectively a try catch block - it simply executes an alternative method if CallRatesApi() throws. You can then wrap the fallback policy around the breaker policy to combine the two. If the circuit breaker fails, the fallback will run instead:

var circuitBreaker = Policy  
    .Handle<Exception>()
    .CircuitBreakerAsync(
        exceptionsAllowedBeforeBreaking: 2, 
        durationOfBreak: TimeSpan.FromMinutes(1)
    );

var fallback = Policy  
    .Handle<Exception>()
    .FallbackAsync(() => GetFallbackRates())
    .WrapAsync(circuitBreaker);

var results = await fallback  
    .ExecuteAsync(() => CallRatesApi());

Putting it together - my failed attempt!

This all looked like it would work as far as I could see, so I set about replacing the OpenExchangeRatesClient implementation, and testing it out.

*Note* This isn't correct, don't copy it!

public class OpenExchangeRatesClient  
{
    private readonly HttpClient _client;
    public OpenExchangeRatesClient(string apiUrl)
    {
        _client = new HttpClient
        {
            BaseAddress = new Uri(apiUrl),
        };
    }

    public Task<ExchangeRates> GetLatestRates()
    {
        var circuitBreaker = Policy
            .Handle<Exception>()
            .CircuitBreakerAsync(
                exceptionsAllowedBeforeBreaking: 2,
                durationOfBreak: TimeSpan.FromMinutes(1)
            );

        var fallback = Policy
            .Handle<Exception>()
            .FallbackAsync(() => GetFallbackRates())
            .WrapAsync(circuitBreaker);


        return fallback
            .ExecuteAsync(() => CallRatesApi());
    }

    public Task<ExchangeRates> CallRatesApi()
    {
        //call the API, parse the results
    }

    public Task<ExchangeRates> GetFallbackRates()
    {
        // load the rates from the embedded file and parse them
    }
}

In theory, this is the flow I was aiming for when the API goes down:

  1. Call GetLatestRates() -> CallRatesApi() throws -> Uses Fallback
  2. Call GetLatestRates() -> CallRatesApi() throws -> Uses Fallback
  3. Call GetLatestRates() -> skips CallRatesApi() -> Uses Fallback
  4. Call GetLatestRates() -> skips CallRatesApi() -> Uses Fallback
  5. ... etc

What I actually saw was:

  1. Call GetLatestRates() -> CallRatesApi() throws -> Uses Fallback
  2. Call GetLatestRates() -> CallRatesApi() throws -> Uses Fallback
  3. Call GetLatestRates() -> CallRatesApi() throws -> Uses Fallback
  4. Call GetLatestRates() -> CallRatesApi() throws -> Uses Fallback
  5. ... etc

It was as though the circuit breaker wasn't there at all! No matter how many times the CallRatesApi() method threw, the circuit was never breaking.

Can you see what I did wrong?

Using circuit breakers properly

Every time GetLatestRates() is called, a new circuit breaker (and fallback) policy is created, and ExecuteAsync is called on that brand new instance!

The circuit breaker, by its nature, has state that must persist between calls (the number of exceptions that have previously occurred, the open/closed state of the breaker, etc.). By creating new Policy objects inside the GetLatestRates() method, I was effectively resetting the policy back to its initial state on every call, which is why the circuit never broke!

The answer is simple - make sure the Policy persists between calls to GetLatestRates() so that its state persists. The Policy is thread safe, so there are no issues to worry about there either. As the client is implemented in our app as a singleton, I simply moved the policy configuration to the class constructor, and everything proceeded to work as expected!

public class OpenExchangeRatesClient  
{
    private readonly HttpClient _client;
    private readonly Policy _policy;
    public OpenExchangeRatesClient(string apiUrl)
    {
        _client = new HttpClient
        {
            BaseAddress = new Uri(apiUrl),
        };

        var circuitBreaker = Policy
            .Handle<Exception>()
            .CircuitBreakerAsync(
                exceptionsAllowedBeforeBreaking: 2,
                durationOfBreak: TimeSpan.FromMinutes(1)
            );

        _policy = Policy
            .Handle<Exception>()
            .FallbackAsync(() => GetFallbackRates())
            .WrapAsync(circuitBreaker);
    }
    }

    public Task<ExchangeRates> GetLatestRates()
    {
        return _policy
            .ExecuteAsync(() => CallRatesApi());
    }

    public Task<ExchangeRates> CallRatesApi()
    {
        //call the API, parse the results
    }

    public Task<ExchangeRates> GetFallbackRates()
    {
        // load the rates from the embedded file and parse them
    }
}
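The other half of making this work is ensuring there really is only one client instance. In an ASP.NET Core style container this might look like the following sketch (the configuration key is illustrative):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // A single client instance means a single, shared Policy instance,
    // so the circuit breaker's state survives between requests.
    services.AddSingleton(provider =>
        new OpenExchangeRatesClient(Configuration["ExchangeRates:ApiUrl"]));
}
```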

And that's all it takes! It works brilliantly when you actually use it properly 😉

Summary

This post ended up a lot longer than I intended, as it was a bit of a post-incident brain-dump, so apologies for that! It serves as somewhat of a cautionary tale about having blinkers on when coding something. When I implemented the fix initially I was so caught up in how I was solving the problem I completely overlooked this simple, but crucial, difference in how policies can be implemented.

I'm not entirely sure if in general it's best to use shared policies, or if it's better to create and discard policies as I did originally. Obviously, the latter doesn't work for circuit breaker but what about Retry or WaitAndRetry? Also, creating a new policy each time is probably more "allocate-y", but is it faster due to not having to be thread safe?

I don't know the answer, but personally, and based on this episode, I'm inclined to go with shared policies everywhere. If you know otherwise, do let me know in the comments, thanks!


Damien Bowden: Using Protobuf Media Formatters with ASP.NET Core

This article shows how to use Protobuf with an ASP.NET Core MVC application. The API uses the WebApiContrib.Core.Formatter.Protobuf NuGet package to add support for Protobuf. This package uses the protobuf-net NuGet package from Marc Gravell, which makes it really easy to use a really fast serializer/deserializer in your APIs.

Code: https://github.com/damienbod/AspNetCoreWebApiContribProtobufSample

History

2017-08-19 Updated to ASP.NET Core 2.0, WebApiContrib.Core.Formatter.Protobuf 2.0

Setting up the ASP.NET Core MVC API

To use Protobuf with ASP.NET Core, the WebApiContrib.Core.Formatter.Protobuf NuGet package can be used in your project. You can add this using the NuGet package manager in Visual Studio.

Or you can add it directly in your project file.

<PackageReference Include="WebApiContrib.Core.Formatter.Protobuf" Version="2.0.0" />

Now the formatters can be added in the Startup file.

public void ConfigureServices(IServiceCollection services)
{
	services.AddMvc()
		.AddProtobufFormatters();
}

A model now needs to be defined. The protobuf-net attributes are used to define the model class.

using ProtoBuf;

namespace Model
{
    [ProtoContract]
    public class Table
    {
        [ProtoMember(1)]
        public string Name {get;set;}

        [ProtoMember(2)]
        public string Description { get; set; }


        [ProtoMember(3)]
        public string Dimensions { get; set; }
    }
}

The ASP.NET Core MVC API can then be used with the Table class.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Model;

namespace AspNetCoreWebApiContribProtobufSample.Controllers
{
    [Route("api/[controller]")]
    public class TablesController : Controller
    {
        // GET api/tables
        [HttpGet]
        public IActionResult Get()
        {
            List<Table> tables = new List<Table>
            {
                new Table{Name= "jim", Dimensions="190x80x90", Description="top of the range from Migro"},
                new Table{Name= "jim large", Dimensions="220x100x90", Description="top of the range from Migro"}
            };

            return Ok(tables);
        }

        // GET api/values/5
        [HttpGet("{id}")]
        public IActionResult Get(int id)
        {
            var table = new Table { Name = "jim", Dimensions = "190x80x90", Description = "top of the range from Migro" };
            return Ok(table);
        }

        // POST api/values
        [HttpPost]
        public IActionResult Post([FromBody]Table value)
        {
            var got = value;
            return Created("api/tables", got);
        }
    }
}

Creating a simple Protobuf HttpClient

An HttpClient using the same Table class with the protobuf-net definitions can be used to access the API and request the data with the “application/x-protobuf” Accept header.

static async System.Threading.Tasks.Task<Table[]> CallServerAsync()
{
	var client = new HttpClient();

	var request = new HttpRequestMessage(HttpMethod.Get, "http://localhost:31004/api/tables");
	request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/x-protobuf"));
	var result = await client.SendAsync(request);
	var tables = ProtoBuf.Serializer.Deserialize<Table[]>(await result.Content.ReadAsStreamAsync());
	return tables;
}

The data is returned in the response using Protobuf serialization.

If you want to post some data using Protobuf, you can serialize the data to Protobuf and post it to the server using the HttpClient. This example uses “application/x-protobuf”.

static async System.Threading.Tasks.Task<Table> PostStreamDataToServerAsync()
{
	HttpClient client = new HttpClient();
	client.DefaultRequestHeaders
		  .Accept
		  .Add(new MediaTypeWithQualityHeaderValue("application/x-protobuf"));

	HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Post,
		"http://localhost:31004/api/tables");

	MemoryStream stream = new MemoryStream();
	ProtoBuf.Serializer.Serialize<Table>(stream, new Table
	{
		Name = "jim",
		Dimensions = "190x80x90",
		Description = "top of the range from Migro"
	});

	request.Content = new ByteArrayContent(stream.ToArray());

	// HTTP POST with Protobuf Request Body
	var responseForPost = client.SendAsync(request).Result;

	var resultData = ProtoBuf.Serializer.Deserialize<Table>(await responseForPost.Content.ReadAsStreamAsync());
	return resultData;
}

Links:

https://www.nuget.org/packages/WebApiContrib.Core.Formatter.Protobuf/

https://github.com/mgravell/protobuf-net



Anuraj Parameswaran: ASP.NET Core No authentication handler is configured to handle the scheme Cookies

This post is about ASP.NET Core authentication, which throws an InvalidOperationException - No authentication handler is configured to handle the scheme Cookies. In ASP.NET Core 1.x version, the runtime will throw this exception when you are running ASP.NET Cookie authentication. This can be fixed by setting options.AutomaticChallenge = true in the Configure method.
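In ASP.NET Core 1.x terms, the fix described looks roughly like the following sketch (other cookie options omitted):

```csharp
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationScheme = "Cookies",
    AutomaticAuthenticate = true,
    // Without this, no handler volunteers to handle challenges for the
    // "Cookies" scheme, which is what triggers the InvalidOperationException
    AutomaticChallenge = true
});
```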


Andrew Lock: Controller activation and dependency injection in ASP.NET Core MVC

Controller activation and dependency injection in ASP.NET Core MVC

In my last post about disposing IDisposables in ASP.NET Core, Mark Rendle pointed out that MVC controllers are also disposed at the end of a request. On first glance, this may seem obvious given that scoped resources are disposed at the end of a request, but MVC controllers are actually handled in a slightly different way to most services.

In this post, I'll describe how controllers are created in ASP.NET Core MVC using the IControllerActivator, the options available out of the box, and their differences when it comes to dependency injection.

The default IControllerActivator

In ASP.NET Core, when a request is received by the MvcMiddleware, routing - either conventional or attribute routing - is used to select the controller and action method to execute. In order to actually execute the action, the MvcMiddleware must create an instance of the selected controller.

The process of creating the controller depends on a number of different provider and factory classes, culminating in an instance of the IControllerActivator. This interface contains just two methods:

public interface IControllerActivator  
{
    object Create(ControllerContext context);
    void Release(ControllerContext context, object controller);
}

As you can see, the IControllerActivator.Create method is passed a ControllerContext which defines the controller to be created. How the controller is created depends on the particular implementation.

Out of the box, ASP.NET Core uses the DefaultControllerActivator, which uses the TypeActivatorCache to create the controller. The TypeActivatorCache creates instances of objects by calling the constructor of the Type, and attempting to resolve the required constructor argument dependencies from the DI container.

This is an important point. The DefaultControllerActivator doesn't attempt to resolve the Controller instance from the DI container itself, only the Controller's dependencies.

Example of the default controller activator

To demonstrate this behaviour, I've created a simple MVC application, consisting of a single service, and a single controller. The service has a Name property, which is set in the constructor. By default, it will have the value "default".

public class TestService  
{
    public TestService(string name = "default")
    {
        Name = name;
    }

    public string Name { get; }
}

The HomeController for the app takes a dependency on the TestService, and returns the Name property:

public class HomeController : Controller  
{
    private readonly TestService _testService;
    public HomeController(TestService testService)
    {
        _testService = testService;
    }

    public string Index()
    {
        return "TestService.Name: " + _testService.Name;
    }
}

The final piece of the puzzle is the Startup file. Here I register the TestService as a scoped service in the DI container, and set up the MvcMiddleware and services:

public class Startup  
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        services.AddScoped<TestService>();
        services.AddTransient(ctx =>
            new HomeController(new TestService("Non-default value")));
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseMvcWithDefaultRoute();
    }
}

You'll also notice I've defined a factory method for creating an instance of the HomeController. This registers the HomeController type in the DI container, injecting an instance of the TestService with a custom Name property.

So what do you get if you run the app?

[Screenshot: the browser displays "TestService.Name: default"]

As you can see, the TestService.Name property has the default value, indicating the TestService instance has been sourced directly from the DI container. The factory method we registered to create the HomeController has clearly been ignored.

This makes sense when you remember that the DefaultControllerActivator is creating the controller. It doesn't request the HomeController from the DI container, it just requests its constructor dependencies.

Most of the time, using the DefaultControllerActivator will be fine, but sometimes you may want to create your controllers by using the DI container directly. This is especially true when you are using third-party containers with features such as interceptors or decorators.

Luckily, the MVC framework includes an implementation of IControllerActivator to do just this, and even provides a handy extension method to enable it.

The ServiceBasedControllerActivator

As you've seen, the DefaultControllerActivator uses the TypeActivatorCache to create controllers, but MVC includes an alternative implementation, the ServiceBasedControllerActivator, which can be used to directly obtain controllers from the DI container. The implementation itself is trivial:

public class ServiceBasedControllerActivator : IControllerActivator  
{
    public object Create(ControllerContext actionContext)
    {
        var controllerType = actionContext.ActionDescriptor.ControllerTypeInfo.AsType();

        return actionContext.HttpContext.RequestServices.GetRequiredService(controllerType);
    }

    public virtual void Release(ControllerContext context, object controller)
    {
    }
}

You can configure the DI-based activator with the AddControllersAsServices() extension method, when you add the MVC services to your application:

public class Startup  
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc()
                .AddControllersAsServices();

        services.AddScoped<TestService>();
        services.AddTransient(ctx =>
            new HomeController(new TestService("Non-default value")));
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseMvcWithDefaultRoute();
    }
}

With this in place, hitting the home page will create a controller by loading it from the DI container. As we've registered a factory method for the HomeController, our custom TestService configuration will be honoured, and the alternative Name will be used:

Controller activation and dependency injection in ASP.NET Core MVC

The AddControllersAsServices method does two things - it registers all of the Controllers in your application with the DI container (if they haven't already been registered) and replaces the IControllerActivator registration with the ServiceBasedControllerActivator:

public static IMvcBuilder AddControllersAsServices(this IMvcBuilder builder)  
{
    var feature = new ControllerFeature();
    builder.PartManager.PopulateFeature(feature);

    foreach (var controller in feature.Controllers.Select(c => c.AsType()))
    {
        builder.Services.TryAddTransient(controller, controller);
    }

    builder.Services.Replace(ServiceDescriptor.Transient<IControllerActivator, ServiceBasedControllerActivator>());

    return builder;
}

If you need to do something esoteric, you can always implement IControllerActivator yourself, but I can't think of any reason that these two implementations wouldn't satisfy all your requirements!

Summary

  • By default, the DefaultControllerActivator is configured as the IControllerActivator for ASP.NET Core MVC.
  • The DefaultControllerActivator uses the TypeActivatorCache to create controllers. This creates an instance of the controller, and loads constructor arguments from the DI container.
  • You can use an alternative activator, the ServiceBasedControllerActivator, which loads controllers directly from the DI container. You can configure this activator by using the AddControllersAsServices() extension method on the MvcBuilder instance in Startup.ConfigureServices.


Anuraj Parameswaran: How to Deploy Multiple Apps on Azure WebApps

This post is about deploying multiple applications on an Azure Web App. App Service Web Apps is a fully managed compute platform that is optimized for hosting websites and web applications. This platform-as-a-service (PaaS) offering of Microsoft Azure lets you focus on your business logic while Azure takes care of the infrastructure to run and scale your apps.


Anuraj Parameswaran: Develop and Run Azure Functions locally

This post is about developing, running and debugging azure functions locally. Trigger on events in Azure and debug C# and JavaScript functions. Azure functions is a new service offered by Microsoft. Azure Functions is an event driven, compute-on-demand experience that extends the existing Azure application platform with capabilities to implement code triggered by events occurring in Azure or third party service as well as on-premises systems.


Andrew Lock: Four ways to dispose IDisposables in ASP.NET Core

Four ways to dispose IDisposables in ASP.NET Core

One of the most commonly implemented interfaces in .NET is the IDisposable interface. Classes implement IDisposable when they contain references to unmanaged resources, such as window handles, files or sockets. The garbage collector automatically releases memory for managed (i.e. .NET) objects, but it doesn't know how to handle unmanaged resources. Implementing IDisposable provides a hook, so you can properly clean up those resources when your class is disposed.

This post goes over some of the options available to you for disposing services in ASP.NET Core applications, especially when using the built-in dependency injection container.

For the purposes of this post, I'll use the following class that implements IDisposable in the examples. I'm just writing to the console instead of doing any actual cleanup, but it'll serve our purposes for this post.

public class MyDisposable : IDisposable  
{
    public MyDisposable()
    {
        Console.WriteLine("+ {0} was created", this.GetType().Name);
    }

    public void Dispose()
    {
        Console.WriteLine("- {0} was disposed!", this.GetType().Name);
    }
}

Now let's look at our options.

The simple case - a using statement

The typically suggested approach when consuming an IDisposable in your code is a using block:

using(var myObject = new MyDisposable())  
{
    // myObject.DoSomething();
}

Using IDisposables in this way ensures they are disposed correctly, whether or not they throw an exception. You could also use a try-finally block instead if necessary:

MyDisposable myObject = null;  
try  
{
    myObject = new MyDisposable();
    // myObject.DoSomething();
}
finally  
{
    myObject?.Dispose();
}

You'll often find this pattern when working with files or streams - things that you only need transiently, and are finished with in the same scope. Unfortunately, sometimes this won't suit your situation, and you might need to dispose of the object from somewhere else. Depending on your exact situation, there are a number of other options available to you.

Note: Wherever possible, it's best practice to dispose of objects in the same scope they were created. This will help prevent memory leaks and unexpected file locks in your application, where objects go accidentally undisposed.

Disposing at the end of a request - using RegisterForDispose

When you're working in ASP.NET Core, or any web application, it's very common for your objects to be scoped to a single request. That is, anything you create to handle a request you want to dispose when the request finishes.

There are a number of ways to do this. The most common way is to leverage the DI container which I'll come to in a minute, but sometimes that's not possible, and you need to create the disposable in your own code.

If you are manually creating an instance of an IDisposable, then you can register that disposable with the HttpContext, so that when the request ends, the instance will be disposed automatically. Simply pass the instance to HttpContext.Response.RegisterForDispose:

public class HomeController : Controller  
{
    readonly MyDisposable _disposable;

    public HomeController()
    {
        _disposable = new MyDisposable();
    }

    public IActionResult Index()
    {
        // register the instance so that it is disposed when the request ends
        HttpContext.Response.RegisterForDispose(_disposable);
        Console.WriteLine("Running index...");
        return View();
    }
}

In this example, I'm creating the MyDisposable in the constructor of the HomeController, and then registering it for disposal in the action method. This is a little contrived, but it shows the mechanism at least.

If you execute this action method, you'll see the following:

$ dotnet run
Hosting environment: Development  
Content root path: C:\Users\Sock\Repos\RegisterForDispose  
Now listening on: http://localhost:5000  
Application started. Press Ctrl+C to shut down.  
+ MyDisposable was created
Running index...  
- MyDisposable was disposed!

The HttpContext takes care of disposing our object for us!

Warning: I registered the instance in the action method, instead of in the constructor, because HttpContext might be null in the constructor!

RegisterForDispose is useful when you are new-ing up services in your code. But given that Dispose is only required for classes using unmanaged resources, you'll probably find that more often than not, your IDisposable classes are encapsulated in services that are registered with the DI container.

As Mark Rendle pointed out, the Controller itself will also be disposed at the end of the request, so you can use that mechanism to dispose any objects you create.

Automatically disposing services - leveraging the built-in DI container

ASP.NET Core comes with a simple, built-in DI container that you can register your services with as either Transient, Scoped, or Singleton. You can read about it here, so I'll assume you already know how to use it to register your services.

Note, this article only discusses the built-in container - third-party containers might have other rules around automatic disposal of services.

Any service that the built-in container creates in order to fill a dependency, which also implements IDisposable, will be disposed by the container at the appropriate point. So Transient and Scoped instances will be disposed at the end of the request (or more accurately, at the end of a scope), and Singleton services will be disposed when the application is torn down and the ServiceProvider itself is disposed.

To clarify, that means the provider will dispose any service you register with it, as long as you don't provide a specific instance. For example, I'll create a number of disposable classes:

public class TransientCreatedByContainer : MyDisposable { }  
public class ScopedCreatedByFactory : MyDisposable { }  
public class SingletonCreatedByContainer : MyDisposable { }  
public class SingletonAddedManually : MyDisposable { }  

And register each of them in a different way in Startup.ConfigureServices. I am registering:

  • TransientCreatedByContainer as a transient
  • ScopedCreatedByFactory as scoped, using a lambda function as a factory
  • SingletonCreatedByContainer as a singleton
  • SingletonAddedManually as a singleton by passing in a specific instance of the object.

public void ConfigureServices(IServiceCollection services)  
{
    // other services

    // these will be disposed
    services.AddTransient<TransientCreatedByContainer>();
    services.AddScoped(ctx => new ScopedCreatedByFactory());
    services.AddSingleton<SingletonCreatedByContainer>();

    // this one won't be disposed
    services.AddSingleton(new SingletonAddedManually());
}

Finally, I'll inject an instance of each into the HomeController, so the DI container will create / inject instances as necessary:

public class HomeController : Controller  
{
    public HomeController(
        TransientCreatedByContainer transient,
        ScopedCreatedByFactory scoped,
        SingletonCreatedByContainer createdByContainer,
        SingletonAddedManually manually)
    { }

    public IActionResult Index()
    {
        return View();
    }
}

When I run the application, hit the home page, and then stop the application, I get the following output:

$ dotnet run
+ SingletonAddedManually was created
Content root path: C:\Users\Sock\Repos\RegisterForDispose  
Now listening on: http://localhost:5000  
Application started. Press Ctrl+C to shut down.  
+ TransientCreatedByContainer was created
+ ScopedCreatedByFactory was created
+ SingletonCreatedByContainer was created
- TransientCreatedByContainer was disposed!
- ScopedCreatedByFactory was disposed!
Application is shutting down...  
- SingletonCreatedByContainer was disposed!

There's a few things to note here:

  • SingletonAddedManually was created before the web host had finished being set up, so it writes to the console before logging starts
  • SingletonCreatedByContainer is disposed after we have started shutting down the server
  • SingletonAddedManually is never disposed as it was provided a specific instance!

Note that this behaviour, in which only objects created by the DI container are disposed, applies to ASP.NET Core 1.1 and above. In ASP.NET Core 1.0, all objects registered with the container are disposed.

Letting the container handle your IDisposables for you is obviously convenient, especially as you are probably registering your services with it anyway! The only obvious hole here is if you need to dispose an object that you create yourself. As I said originally, if possible, you should favour a using statement, but that's not always possible. Luckily, ASP.NET Core provides hooks into the application lifetime, so you can do some clean up when the application is shutting down.

Disposing when the application ends - hooking into IApplicationLifetime events

ASP.NET Core exposes an interface called IApplicationLifetime that can be used to execute code when an application is starting up or shutting down:

public interface IApplicationLifetime  
{
    CancellationToken ApplicationStarted { get; }
    CancellationToken ApplicationStopping { get; }
    CancellationToken ApplicationStopped { get; }
    void StopApplication();
}

You can inject this into your Startup class (or elsewhere) and register to the events you need. Extending the previous example, we can inject both the IApplicationLifetime and our singleton SingletonAddedManually instance into the Configure method of Startup.cs:

public void Configure(  
    IApplicationBuilder app, 
    IApplicationLifetime applicationLifetime,
    SingletonAddedManually toDispose)
{
    applicationLifetime.ApplicationStopping.Register(OnShutdown, toDispose);

    // configure middleware etc
}

private void OnShutdown(object toDispose)  
{
    ((IDisposable)toDispose).Dispose();
}

I've created a simple helper method that takes the state passed in (the SingletonAddedManually instance), casts it to an IDisposable, and disposes it. This helper method is registered with the CancellationToken called ApplicationStopping, which is fired when closing down the application.

If we run the application again with this additional registration, you can see that the SingletonAddedManually instance is now disposed, just after application shutdown is triggered.

$ dotnet run
+ SingletonAddedManually was created
Content root path: C:\Users\Sock\Repos\RegisterForDispose  
Now listening on: http://localhost:5000  
Application started. Press Ctrl+C to shut down.  
+ TransientCreatedByContainer was created
+ ScopedCreatedByFactory was created
+ SingletonCreatedByContainer was created
- TransientCreatedByContainer was disposed!
- ScopedCreatedByFactory was disposed!
Application is shutting down...  
- SingletonAddedManually was disposed!
- SingletonCreatedByContainer was disposed!

Summary

So there you have it, four different ways to dispose of your IDisposable objects. Wherever possible, you should either use the using statement, or let the DI container handle disposing objects for you. For cases where that's not possible, ASP.NET Core provides two mechanisms you can hook in to: RegisterForDispose and IApplicationLifetime.



Damien Bowden: Angular OIDC OAuth2 client with Google Identity Platform

This article shows how an Angular client could implement a login for a SPA application using Google Identity Platform OpenID. The Angular application uses the npm package angular-auth-oidc-client to implement the OpenID Connect Implicit Flow to connect with the google identity platform.

Code: https://github.com/damienbod/angular-auth-oidc-sample-google-openid

History

2017-07-09 Updated to version 1.1.4, new configuration

Setting up Google Identity Platform

The Google Identity Platform provides good documentation on how to set up its OpenID Connect implementation.

You need to login into google using a gmail account.
https://accounts.google.com

Now open the OpenID Connect google documentation page

https://developers.google.com/identity/protocols/OpenIDConnect

Open the credentials page provided as a link.

https://console.developers.google.com/apis/credentials

Create new credentials for your application, select OAuth Client ID in the drop down:

Select a web application and configure the parameters to match your client application URLs.

Implementing the Angular OpenID Connect client

The client application is implemented using ASP.NET Core and Angular.

The npm package angular-auth-oidc-client is used to connect to the OpenID server. The package can be added to the dependencies in the package.json file.

"dependencies": {
    ...
    "angular-auth-oidc-client": "1.1.4"
},

Now the AuthModule, OidcSecurityService and AuthConfiguration can be imported. AuthModule.forRoot() is added to the root module imports, the OidcSecurityService is added to the providers, and the AuthConfiguration is the configuration class used to set up the OpenID Connect Implicit Flow.

import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { BrowserModule } from '@angular/platform-browser';

import { AppComponent } from './app.component';
import { Configuration } from './app.constants';
import { routing } from './app.routes';
import { HttpModule, JsonpModule } from '@angular/http';
import { ForbiddenComponent } from './forbidden/forbidden.component';
import { HomeComponent } from './home/home.component';
import { UnauthorizedComponent } from './unauthorized/unauthorized.component';

import { AuthModule, OidcSecurityService, OpenIDImplicitFlowConfiguration } from 'angular-auth-oidc-client';

@NgModule({
    imports: [
        BrowserModule,
        FormsModule,
        routing,
        HttpModule,
        JsonpModule,
        AuthModule.forRoot(),
    ],
    declarations: [
        AppComponent,
        ForbiddenComponent,
        HomeComponent,
        UnauthorizedComponent
    ],
    providers: [
        OidcSecurityService,
        Configuration
    ],
    bootstrap:    [AppComponent],
})

The AuthConfiguration class is used to configure the module.

stsServer
This is the URL where the STS server is located. We use https://accounts.google.com in this example.

redirect_url
This is the redirect_url which was configured on the google client ID on the server.

client_id
The client_id must match the Client ID for Web application which was configured on the google server.

response_type
This must be ‘id_token token’ or ‘id_token’. If you want to use the user service, or access data using APIs, you must use the ‘id_token token’ configuration. This is the OpenID Connect Implicit Flow. The possible values are defined in the well known configuration URL from the OpenID Connect server.

scope
The scopes used by the client. The openid scope must be included, for example: ‘openid email profile’

post_logout_redirect_uri
The URL used after a server logout, if using the end session API. This is not supported by google OpenID.

start_checksession
Checks the session using OpenID session management. Not supported by google OpenID.

silent_renew
Renews the client tokens once the id_token expires.

startup_route
Angular route after a successful login.

forbidden_route
HTTP 403

unauthorized_route
HTTP 401

log_console_warning_active
Logs all module warnings to the console.

log_console_debug_active
Logs all module debug messages to the console.
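Most of these options end up, directly or indirectly, as query parameters on the authorization request sent to the STS. As a rough illustration (the helper below is mine, not part of angular-auth-oidc-client, and the nonce/state values are examples; the library generates these per request):

```typescript
// Simplified sketch of the authorize request URL an implicit-flow client
// produces. Parameter names come from the OpenID Connect specification;
// the helper itself is illustrative only.
function buildAuthorizeUrl(
    authorizationEndpoint: string,
    clientId: string,
    redirectUrl: string,
    responseType: string,
    scope: string,
    nonce: string,
    state: string): string {

    const params = new URLSearchParams({
        client_id: clientId,
        redirect_uri: redirectUrl,
        response_type: responseType,
        scope: scope,
        nonce: nonce,
        state: state
    });
    return `${authorizationEndpoint}?${params.toString()}`;
}

// Example values only; the client_id here is a placeholder.
const url = buildAuthorizeUrl(
    'https://accounts.google.com/o/oauth2/v2/auth',
    'your-client-id.apps.googleusercontent.com',
    'https://localhost:44386',
    'id_token token',
    'openid email profile',
    'example-nonce',
    'example-state');
```

The actual request the library sends may include further parameters, but the response_type and scope values map directly onto the configuration options described above.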


export class AppModule {
    constructor(public oidcSecurityService: OidcSecurityService) {

        let openIDImplicitFlowConfiguration = new OpenIDImplicitFlowConfiguration();
        openIDImplicitFlowConfiguration.stsServer = 'https://accounts.google.com';
        openIDImplicitFlowConfiguration.redirect_url = 'https://localhost:44386';
        openIDImplicitFlowConfiguration.client_id = '188968487735-b1hh7k87nkkh6vv84548sinju2kpr7gn.apps.googleusercontent.com';
        openIDImplicitFlowConfiguration.response_type = 'id_token token';
        openIDImplicitFlowConfiguration.scope = 'openid email profile';
        openIDImplicitFlowConfiguration.post_logout_redirect_uri = 'https://localhost:44386/Unauthorized';
        openIDImplicitFlowConfiguration.startup_route = '/home';
        openIDImplicitFlowConfiguration.forbidden_route = '/Forbidden';
        openIDImplicitFlowConfiguration.unauthorized_route = '/Unauthorized';
        openIDImplicitFlowConfiguration.log_console_warning_active = true;
        openIDImplicitFlowConfiguration.log_console_debug_active = true;
        openIDImplicitFlowConfiguration.max_id_token_iat_offset_allowed_in_seconds = 10;
        openIDImplicitFlowConfiguration.override_well_known_configuration = true;
        openIDImplicitFlowConfiguration.override_well_known_configuration_url = 'https://localhost:44386/wellknownconfiguration.json';

        // this.oidcSecurityService.setStorage(localStorage);
        this.oidcSecurityService.setupModule(openIDImplicitFlowConfiguration);
    }
}

Google OpenID does not support the .well-known/openid-configuration API as defined by OpenID Connect. Google blocks cross-origin requests to it due to a CORS security restriction, so it cannot be used from a browser application. As a workaround, the well known configuration can be configured locally when using angular-auth-oidc-client. The google OpenID configuration can be downloaded using the following URL:

https://accounts.google.com/.well-known/openid-configuration

The JSON file can then be saved locally on your server, and configured in the AuthConfiguration class using the override_well_known_configuration_url property.

this.authConfiguration.override_well_known_configuration = true;
this.authConfiguration.override_well_known_configuration_url = 'https://localhost:44386/wellknownconfiguration.json';

The following JSON is the actual google well known configuration. What’s really interesting is that the end session endpoint is not supported, which I find strange.
It’s also interesting to see that response_types_supported includes “token id_token”, which is not a valid value; per the specification this should be “id_token token”.

See: http://openid.net/specs/openid-connect-core-1_0.html

{
  "issuer": "https://accounts.google.com",
  "authorization_endpoint": "https://accounts.google.com/o/oauth2/v2/auth",
  "token_endpoint": "https://www.googleapis.com/oauth2/v4/token",
  "userinfo_endpoint": "https://www.googleapis.com/oauth2/v3/userinfo",
  "revocation_endpoint": "https://accounts.google.com/o/oauth2/revoke",
  "jwks_uri": "https://www.googleapis.com/oauth2/v3/certs",
  "response_types_supported": [
    "code",
    "token",
    "id_token",
    "code token",
    "code id_token",
    "token id_token",
    "code token id_token",
    "none"
  ],
  "subject_types_supported": [
    "public"
  ],
  "id_token_signing_alg_values_supported": [
    "RS256"
  ],
  "scopes_supported": [
    "openid",
    "email",
    "profile"
  ],
  "token_endpoint_auth_methods_supported": [
    "client_secret_post",
    "client_secret_basic"
  ],
  "claims_supported": [
    "aud",
    "email",
    "email_verified",
    "exp",
    "family_name",
    "given_name",
    "iat",
    "iss",
    "locale",
    "name",
    "picture",
    "sub"
  ],
  "code_challenge_methods_supported": [
    "plain",
    "S256"
  ]
}

The AppComponent implements the authorize and the authorizedCallback functions from the OidcSecurityService provider.

import { Component, OnInit } from '@angular/core';
import { Router } from '@angular/router';
import { Configuration } from './app.constants';
import { OidcSecurityService } from 'angular-auth-oidc-client';
import { ForbiddenComponent } from './forbidden/forbidden.component';
import { HomeComponent } from './home/home.component';
import { UnauthorizedComponent } from './unauthorized/unauthorized.component';


import './app.component.css';

@Component({
    selector: 'my-app',
    templateUrl: 'app.component.html'
})

export class AppComponent implements OnInit {

    constructor(public securityService: OidcSecurityService) {
    }

    ngOnInit() {
        if (window.location.hash) {
            this.securityService.authorizedCallback();
        }
    }

    login() {
        console.log('start login');
        this.securityService.authorize();
    }

    refreshSession() {
        console.log('start refreshSession');
        this.securityService.authorize();
    }

    logout() {
        console.log('start logoff');
        this.securityService.logoff();
    }
}
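The authorizedCallback call in ngOnInit works because the implicit flow returns its tokens in the URL fragment after the redirect. A rough sketch of what gets parsed out of the hash (the helper is illustrative and the values are placeholders; the library does this internally, and also validates the tokens, nonce and state):

```typescript
// Illustrative only: parses an implicit-flow callback fragment such as
// "#id_token=...&access_token=...&session_state=..." into a key/value map.
function parseCallbackFragment(hash: string): Record<string, string> {
    const result: Record<string, string> = {};
    const fragment = hash.startsWith('#') ? hash.substring(1) : hash;
    for (const pair of fragment.split('&')) {
        const [key, value] = pair.split('=');
        if (key) {
            result[decodeURIComponent(key)] = decodeURIComponent(value ?? '');
        }
    }
    return result;
}

// Placeholder token values for illustration.
const params = parseCallbackFragment('#id_token=abc.def.ghi&access_token=xyz&session_state=s1');
```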

Running the application

Start the application using IIS Express in Visual Studio 2017. This starts with https://localhost:44386, which is configured in the launch settings file. If you use a different URL, you need to change this in the client application, and also in the server's client credentials configuration.

Then log in with your gmail account.

And you are redirected back to the SPA.

Links:

https://www.npmjs.com/package/angular-auth-oidc-client

https://developers.google.com/identity/protocols/OpenIDConnect



Damien Bowden: angular-auth-oidc-client Release, an OpenID Implicit Flow client in Angular

I have been blogging and writing code for Angular and OpenID Connect since Nov 1, 2015. Now after all this time, I have decided to create my first npm package for Angular: angular-auth-oidc-client, which makes it easier to use the Angular Auth OpenID client. This is now available on npm.

npm package: https://www.npmjs.com/package/angular-auth-oidc-client

github code: https://github.com/damienbod/angular-auth-oidc-client

issues: https://github.com/damienbod/angular-auth-oidc-client/issues

Using the npm package: see the readme

Samples: https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow/tree/npm-lib-test/src/AngularClient

OpenID Certification

This library is certified by the OpenID Foundation. (Implicit RP)

Notes:

FabianGosebrink and Roberto Simonetti have decided to help and further develop this npm package, for which I’m very grateful. Anyone wishing to get involved, please do: create some issues and pull-requests. Help is always most welcome.

The next step is to do the OpenID Relying Parties certification.



Andrew Lock: Automatically validating anti-forgery tokens in ASP.NET Core with the AutoValidateAntiforgeryTokenAttribute

Automatically validating anti-forgery tokens in ASP.NET Core with the AutoValidateAntiforgeryTokenAttribute

This quick post is a response to a question about anti-forgery tokens I saw on Twitter.

Anti-forgery tokens are a security mechanism to defend against cross-site request forgery (CSRF) attacks. Marius Schulz shared a solution to this problem in a blog post in which he creates a simple middleware to automatically validate the tokens sent in the request. This works, but there's actually an even simpler solution I wanted to share: the built-in AutoValidateAntiforgeryTokenAttribute.

tl;dr Add the AutoValidateAntiforgeryTokenAttribute as a global MVC filter to automatically validate all appropriate action methods.

Defending against cross-site request forgery in ASP.NET Core

I won't go into CSRF attacks in detail - I recommend you check out the docs for details if this is all new to you. In essence, when you send a form to the user, you add an extra hidden field that includes one half of a cryptographic token. Additionally, a cookie is set with the other half of the token. When the form is posted to the server, the two halves of the token are verified, ensuring only valid requests can be made.
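The server-side check can be pictured like this. Note this is a deliberately simplified, language-neutral sketch in TypeScript, not ASP.NET Core's actual implementation, which binds the two token halves together cryptographically rather than comparing plain strings:

```typescript
import { timingSafeEqual } from 'crypto';

// Toy illustration of the double-submit idea: the cookie half and the
// form-field half of the token must match for the request to be accepted.
// A constant-time comparison avoids leaking information via timing.
function tokensMatch(cookieToken: string, formToken: string): boolean {
    const a = Buffer.from(cookieToken);
    const b = Buffer.from(formToken);
    // timingSafeEqual throws if lengths differ, so check the length first.
    return a.length === b.length && timingSafeEqual(a, b);
}
```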

Note, this post assumes you are using Razor and server-side rendering. You can use a similar approach to protect your API calls, but I won't go into that here.

In ASP.NET Core, the tokens are added to your forms automatically when you use the asp-* tag helpers, for example:

<form asp-controller="Manage" asp-action="ChangePassword" method="post">  
   <!-- Form details -->
</form>  

This will generate markup similar to the following. You can see the hidden input with the anti-forgery token:

<form method="post" action="/Manage/ChangePassword">  
  <!-- Form details -->
  <input name="__RequestVerificationToken" type="hidden" value="CfDJ8NrAkSldwD9CpLR...LongValueHere!" />
</form>  

Note, in ASP.NET Core 2.0, ASP.NET Core will add anti-forgery tokens to all your forms, whether you use the asp-* tag helpers or not.

Adding the form field is just one part of the requirement; you also need to actually check that the tokens are valid on the server side. You can do this by decorating your controller actions with the [ValidateAntiForgeryToken] attribute. You'll need to add it to all of your POST actions to properly protect your application:

public class ManageController  
{
  [HttpPost]
  [ValidateAntiForgeryToken]
  public IActionResult ChangePassword()
  {
    // ...
    return View();
  }
}

This works, but is a bit of a pain - you have to decorate each of your POST action methods with the attribute. If you forget, you won't get an error; the action just won't be protected.

Automatically validating all appropriate actions

MVC has the concept of "Global filters", which are applied to every action in your MVC app. Unfortunately we can't just add the [ValidateAntiForgeryToken] attribute globally. We won't receive anti-forgery tokens for certain types of requests like GET or HEAD - if we applied [ValidateAntiForgeryToken] globally, then all of those requests would throw validation errors.

Luckily, ASP.NET Core provides another attribute for just such a use, the [AutoValidateAntiForgeryToken] attribute. This works identically to the [ValidateAntiForgeryToken] attribute, except that it ignores "safe" methods like GET and HEAD.

Adding the attribute to your application is simple - just add it to the global filters collection in your Startup class, when calling AddMvc().

public class Startup  
{
  public void ConfigureServices(IServiceCollection services)
  {
    services.AddMvc(options =>
    {
        options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute());
    });
  }
}

Job done! No need to decorate all your action methods with attributes, you will just be protected automatically!


Damien Bowden: OpenID Connect Session Management using an Angular application and IdentityServer4

The article shows how OpenID Connect Session Management can be implemented in an Angular application. The OpenID Connect Session Management 1.0 specification provides a way of monitoring the user session on the server using iframes. IdentityServer4 implements the server side of the specification. Note that this only monitors the server session; it does not monitor the lifecycle of the tokens used in the browser application, and has nothing to do with the OpenID tokens used by the SPA.

Code: https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow

Code: Angular auth module

The OidcSecurityCheckSession class implements the Session Management from the specification. The init function creates an iframe and adds it to the window document in the DOM. The iframe uses the ‘authWellKnownEndpoints.check_session_iframe’ value, which is the connect/checksession API obtained from the ‘.well-known/openid-configuration’ service.

The init function also adds the event for the message, which is specified in the OpenID Connect Session Management documentation.

init() {
	this.sessionIframe = window.document.createElement('iframe');
	this.oidcSecurityCommon.logDebug(this.sessionIframe);
	this.sessionIframe.style.display = 'none';
	this.sessionIframe.src = this.authWellKnownEndpoints.check_session_iframe;

	window.document.body.appendChild(this.sessionIframe);
	this.iframeMessageEvent = this.messageHandler.bind(this);
	window.addEventListener('message', this.iframeMessageEvent, false);

	return Observable.create((observer: Observer<any>) => {
		this.sessionIframe.onload = () => {
			observer.next(this);
			observer.complete();
		}
	});
}

The pollServerSession function posts a message every 3 seconds to the iframe, which checks if the session on the server has changed. The session_state is the value returned in the HTTP callback from a successful authorization.

pollServerSession(session_state: any, clientId: any) {
	let source = Observable.timer(3000, 3000)
		.timeInterval()
		.pluck('interval')
		.take(10000);

	let subscription = source.subscribe(() => {
			this.oidcSecurityCommon.logDebug(this.sessionIframe);
			this.sessionIframe.contentWindow.postMessage(clientId + ' ' + session_state, this.authConfiguration.stsServer);
		},
		(err: any) => {
			this.oidcSecurityCommon.logError('pollServerSession error: ' + err);
		},
		() => {
			this.oidcSecurityCommon.logDebug('checksession pollServerSession completed');
		});
}

The messageHandler handles the callback from the iframe. If the server session has changed, the output onCheckSessionChanged event is triggered.

private messageHandler(e: any) {
	if (e.origin === this.authConfiguration.stsServer &&
		e.source === this.sessionIframe.contentWindow
	) {
		if (e.data === 'error') {
			this.oidcSecurityCommon.logWarning('error from checksession messageHandler');
		} else if (e.data === 'changed') {
			this.onCheckSessionChanged.emit();
		} else {
			this.oidcSecurityCommon.logDebug(e.data + ' from checksession messageHandler');
		}
	}
}

onCheckSessionChanged is a public EventEmitter output for this provider.

@Output() onCheckSessionChanged: EventEmitter<any> = new EventEmitter<any>(true);

The OidcSecurityService provider subscribes to the onCheckSessionChanged event and uses its onCheckSessionChanged function to handle this event.

this.oidcSecurityCheckSession.onCheckSessionChanged.subscribe(() => { this.onCheckSessionChanged(); });

After a successful login, and if the tokens are valid, the client application checks if the checksession should be used, and calls the init method and subscribes to it. When ready, it uses the pollServerSession function to activate the monitoring.

if (this.authConfiguration.start_checksession) {
  this.oidcSecurityCheckSession.init().subscribe(() => {
    this.oidcSecurityCheckSession.pollServerSession(
      result.session_state,
      this.authConfiguration.client_id
    );
  });
}

The onCheckSessionChanged function sets a public boolean which can be used to implement the required application logic when the server session has changed.

private onCheckSessionChanged() {
  this.oidcSecurityCommon.logDebug('onCheckSessionChanged');
  this.checkSessionChanged = true;
}

In this demo, the navigation bar allows the Angular application to refresh the session if the server session has changed.

<li>
  <a class="navigationLinkButton" *ngIf="securityService.checkSessionChanged" (click)="refreshSession()">Refresh Session</a>
</li>
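The refreshSession handler wired up in the template above is application-specific and not shown in the post; the following is a minimal sketch of what it might do, assuming the security service exposes the checkSessionChanged flag shown earlier and some method to start a new authorization (the class and the authorize method name here are hypothetical):

```typescript
// Hypothetical sketch of the navigation component's click handler.
// Assumes a security service with a public checkSessionChanged flag
// and an authorize() method that starts a new OpenID Connect request.
class NavigationComponent {
    constructor(private securityService: { checkSessionChanged: boolean; authorize(): void }) { }

    refreshSession() {
        // Clear the flag so the "Refresh Session" button disappears again,
        // then re-authorize so the client picks up the new server session.
        this.securityService.checkSessionChanged = false;
        this.securityService.authorize();
    }
}
```

In the real application this would be an Angular component with the service injected via DI; the sketch just shows the intent of the click handler.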

When the application is started, the unchanged message is returned.

Then open the server application in a tab in the same browser session, and logout.

And the client application notices that the server session has changed and can react as required.

Links:

http://openid.net/specs/openid-connect-session-1_0-ID4.html

http://docs.identityserver.io/en/release/



Anuraj Parameswaran: Connecting to Azure Cosmos DB emulator from RoboMongo

This post is about connecting to the Azure Cosmos DB emulator from RoboMongo. Azure Cosmos DB is Microsoft’s globally distributed multi-model database, a superset of Azure DocumentDB. Due to some challenges, one of our teams decided to try some NoSQL databases, and DocumentDB was one of the options. I found it quite a good option, since it supports the MongoDB protocol, so existing apps can work without much change. So I decided to explore it. First I downloaded the DocumentDB emulator, now the Azure Cosmos DB emulator. I installed and started the emulator, which opens the Data Explorer web page (https://localhost:8081/_explorer/index.html) that helps to explore the documents inside the database. Then I tried to connect to it with RoboMongo (a free MongoDB client, which can be downloaded from here), but it was not working and I was getting errors. I spent some time looking for similar issues or blog posts on how to connect from RoboMongo to the DocumentDB emulator, but I couldn’t find anything useful. After spending almost a day, I finally figured out the solution. Here are the steps.


Andrew Lock: Defining custom logging messages with LoggerMessage.Define in ASP.NET Core

Defining custom logging messages with LoggerMessage.Define in ASP.NET Core

One of the nice features introduced in ASP.NET Core is the universal logging infrastructure. In this post I take a look at one of the helper methods in the ASP.NET Core logging library, and how you can use it to efficiently log messages in your libraries.

Before we get into it, I'll give a quick overview of the ASP.NET Core logging infrastructure. Feel free to skip down to the section on helper methods if you already know the basics of how it works.

Logging overview

The logging infrastructure is exposed in the form of the ILogger<T> and ILoggerFactory interfaces, which you can inject into your services using dependency injection to log messages in a number of ways. For example, in the following ProductController, we log a message when the View action is invoked.

public class ProductController : Controller  
{
    private readonly ILogger _logger;

    public ProductController(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<ProductController>();
    }

    public IActionResult View(int id)
    {
        _logger.LogDebug("View Product called with id {Id}", id);
        return View();
    }
}

The ILogger can log messages at a number of different levels given by LogLevel:

public enum LogLevel  
{
    Trace = 0,
    Debug = 1,
    Information = 2,
    Warning = 3,
    Error = 4,
    Critical = 5,
}

The final aspect of the logging infrastructure is logging providers. These are the "sinks" that the logs are actually written to. You can plug in multiple providers, and write logs to a variety of different locations, for example the console, to a file, to Serilog etc.

One of the nice things about the logging infrastructure, and the ubiquitous use of DI in the ASP.NET Core libraries is that these same interfaces and classes are used throughout the libraries themselves, as well as in your application.

Controlling the logs produced by different categories

When creating a logger with CreateLogger<T>, the type name you pass in is used to create a category for the logs. At the application level, you can choose which LogLevels are actually output for a given category.

For example, you could specify that by default, Debug or higher level logs are written to providers, but for logs written by services in the Microsoft namespace, only logs of Warning level or above are written.

With this approach you can control the amount of logging produced by the various libraries in your application, increasing logging levels for only those areas that need them.

For example, the following screenshot shows the logs generated by default in an MVC application when we first hit the home page:

Defining custom logging messages with LoggerMessage.Define in ASP.NET Core

That's a lot of logs! And notice that most of them are coming from internal components, from classes in the Microsoft namespace. It's basically just noise in this case. We can restrict the Microsoft namespace to Warning level logs, but keep other logs at the Debug level:

Defining custom logging messages with LoggerMessage.Define in ASP.NET Core

With the default ASP.NET Core 1.X template, all you need to do is change the appsettings.json file, and set the log levels to Warning as appropriate:

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Debug",
      "System": "Warning",
      "Microsoft": "Warning"
    }
  }
}

Note, in ASP.NET Core 1.X, filtering is a little bit of an afterthought. Some logging providers, such as the Console provider, let you specify how to filter the categories. Alternatively, you can apply filters to all providers together using the WithFilter method. The logging in ASP.NET Core 2.0 will likely tidy up this approach - it is due to be updated in preview 2, so I won't dwell on it here.

Filtering considerations

It's a good idea to instrument your code with as many log messages as are useful, since you can filter out the most verbose Trace and Debug log levels. These filtering capabilities are a really useful way of cutting through the cruft, but there's one particular downside.

If you add thousands of logging statements, that will inevitably start having a performance impact on your libraries, simply by the virtue of the fact you're running more code. One solution to this is to check whether the particular log level is enabled for the current logger, before trying to write to it. For example:

public class ProductController : Controller  
{
    public IActionResult Index()
    {
        if(_logger.IsEnabled(LogLevel.Debug))
        {
            _logger.LogDebug("Calling HomeController.Index");
        }
        return View();
    }
}

That's a bit of a pain to have to do every time you write a log though, right? Luckily ASP.NET Core comes with a helper class, LoggerMessage, to make using this pattern easier.

Creating logging delegates with the LoggerMessage Helper

The static LoggerMessage class is found in the Microsoft.Extensions.Logging.Abstractions package, and contains a number of static, generic Define methods that return an Action<> which in turn can be used to create strongly-typed logging extensions. That probably all sounds a bit confusing so let's break it down from the top.

The strongly-typed logging extension methods

We'll start with the logging code we want to use in our application. In this really simple example, we're just going to log the time that the HomeController.Index action method executes:

public class HomeController : Controller  
{
    public IActionResult Index()
    {
        _logger.HomeControllerIndexExecuting(DateTimeOffset.Now);
        return View();
    }
}

The HomeControllerIndexExecuting method is a custom extension method that takes a DateTimeOffset parameter. We can define it as follows:

internal static class LoggerExtensions  
{
    private static Action<ILogger, DateTimeOffset, Exception> _homeControllerIndexExecuting;

    static LoggerExtensions()
    {
        _homeControllerIndexExecuting = LoggerMessage.Define<DateTimeOffset>(
            logLevel: LogLevel.Debug,
            eventId: 1,
            formatString: "Executing 'Index' action at '{StartTime}'");
    }

    public static void HomeControllerIndexExecuting(
        this ILogger logger, DateTimeOffset executeTime)
    {
        _homeControllerIndexExecuting(logger, executeTime, null);
    }
}

The HomeControllerIndexExecuting method is an ILogger extension method that invokes a static Action field on our static LoggerExtensions class. The _homeControllerIndexExecuting field is initialised using the ASP.NET Core LoggerMessage.Define method, by providing a logLevel, an eventId and the formatString to use to create the log.

That probably seems like a lot of effort right? Why not just call _logger.LogDebug() directly in the HomeControllerIndexExecuting extension method?

Remember, our goal was to improve performance by only logging messages for unfiltered categories, without having to explicitly write if(_logger.IsEnabled(LogLevel.Debug)). The answer lies in the LoggerMessage.Define<T> method.

The LoggerMessage.Define method

The purpose of the static LoggerMessage.Define<T> method is three-fold:

  1. Encapsulate the if statement to allow performant logging
  2. Enforce the correct strongly-typed parameters are passed when logging the message
  3. Ensure the log message contains the correct number of placeholders for parameters

This post is long enough, so I won't go into too much detail, but in summary, this gives you an idea of what the method looks like:

public static class LoggerMessage  
{
    public static Action<ILogger, T1, Exception> Define<T1>(
        LogLevel logLevel, EventId eventId, string formatString)
    {
        var formatter = CreateLogValuesFormatter(
            formatString, expectedNamedParameterCount: 1);

        return (logger, arg1, exception) =>
        {
            if (logger.IsEnabled(logLevel))
            {
                logger.Log(logLevel, eventId, new LogValues<T1>(formatter, arg1), exception, LogValues<T1>.Callback);
            }
        };
    }
}

First, this method performs a check that the provided format string, ("Executing 'Index' action at '{StartTime}'") contains the correct number of named parameters. Then, it returns an action method with the necessary number of generic parameters, including the if(logger.IsEnabled) clause. There are multiple overloads of the Define method, that take 0-6 generic parameters, depending on the number you need for your custom logging message.
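As an illustration of the pattern (not the actual .NET implementation), the same idea can be sketched in TypeScript: the format string is validated once when the delegate is created, and the returned closure bakes in the IsEnabled check so call sites never pay the formatting cost for filtered-out levels.

```typescript
// Sketch of the LoggerMessage.Define pattern, for illustration only.
type Logger = {
    isEnabled(level: string): boolean;
    log(level: string, message: string): void;
};

function defineLogMessage<T>(level: string, formatString: string) {
    // Validate the template once, up front, when the delegate is created.
    const placeholders = formatString.match(/\{[^}]+\}/g) ?? [];
    if (placeholders.length !== 1) {
        throw new Error(`expected 1 placeholder, found ${placeholders.length}`);
    }
    // The returned closure encapsulates the isEnabled check, so the
    // message is only formatted when the level is actually enabled.
    return (logger: Logger, arg: T) => {
        if (logger.isEnabled(level)) {
            logger.log(level, formatString.replace(placeholders[0], String(arg)));
        }
    };
}
```

The closure plays the role of the Action<ILogger, T, Exception> returned by the real Define method: creating it is the expensive, one-time step, and invoking it is cheap.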

If you want to see the details, including how the LogValues<> class works, check out the source code on GitHub.

Summary

If you take one thing away from the post, consider the way you log messages in your own application or libraries. Consider creating a static LoggerExtensions class, using LoggerMessage.Define to create a set of static fields, and adding strongly typed extension methods like HomeControllerIndexExecuting using the static Action<> fields.

If you want to see the logger messages in action, check out the sample app in the logging repo, or take a look at the ImageSharp library, which puts them to good effect.


Anuraj Parameswaran: Using Node Services in ASP.NET Core

This post is about running JavaScript code on the server. A huge number of useful, high-quality web-related open source packages are available as Node Package Manager (NPM) modules. NPM is the largest repository of open-source software packages in the world, and the Microsoft.AspNetCore.NodeServices package means that you can use any of them in your ASP.NET Core application.


Anuraj Parameswaran: Detecting AJAX Requests in ASP.NET Core

This post is about detecting AJAX requests in ASP.NET Core. In earlier versions of ASP.NET MVC, developers could easily determine whether a request was made via AJAX or not with the IsAjaxRequest() method, which is part of the Request object. In this post I am implementing similar functionality in ASP.NET Core.


Damien Bowden: Implementing a silent token renew in Angular for the OpenID Connect Implicit flow

This article shows how to implement a silent token renew in Angular using IdentityServer4 as the security token service server. The SPA Angular client implements the OpenID Connect Implicit Flow ‘id_token token’. When the id_token expires, the client requests new tokens from the server, so that the user does not need to authorise again.

Code: https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow

Other posts in this series:

When a user of the client app authorises for the first time, after a successful login on the STS server, the AuthorizedCallback function is called in the Angular application. If the server response and the tokens are successfully validated, as defined in the OpenID Connect specification, the silent renew is initialized, and the token validation method is called.

 public AuthorizedCallback() {
        
	...
		
	if (authResponseIsValid) {
		
		...

		if (this._configuration.silent_renew) {
			this._oidcSecuritySilentRenew.initRenew();
		}

		this.runTokenValidatation();

		this._router.navigate([this._configuration.startupRoute]);
	} else {
		this.ResetAuthorizationData();
		this._router.navigate(['/Unauthorized']);
	}

}

The OidcSecuritySilentRenew TypeScript class implements the iframe which is used for the silent token renew. This iframe is added to the parent HTML page. The renew is implemented in an iframe because we do not want the Angular application to refresh; otherwise, for example, we would lose form data.

...

@Injectable()
export class OidcSecuritySilentRenew {

    private expiresIn: number;
    private authorizationTime: number;
    private renewInSeconds = 30;

    private _sessionIframe: any;

    constructor(private _configuration: AuthConfiguration) {
    }

    public initRenew() {
        this._sessionIframe = window.document.createElement('iframe');
        console.log(this._sessionIframe);
        this._sessionIframe.style.display = 'none';

        window.document.body.appendChild(this._sessionIframe);
    }

    ...
}

The runTokenValidatation function starts an Observable timer, which the application subscribes to and which executes every 3 seconds. The id_token is validated, or more precisely, the code checks that the id_token has not expired. If the token has expired and the silent_renew configuration has been activated, the RefreshSession function will be called to get new tokens.

private runTokenValidatation() {
	let source = Observable.timer(3000, 3000)
		.timeInterval()
		.pluck('interval')
		.take(10000);

	let subscription = source.subscribe(() => {
		if (this._isAuthorized) {
			if (this.oidcSecurityValidation.IsTokenExpired(this.retrieve('authorizationDataIdToken'))) {
				console.log('IsAuthorized: isTokenExpired');

				if (this._configuration.silent_renew) {
					this.RefreshSession();
				} else {
					this.ResetAuthorizationData();
				}
			}
		}
	},
	function (err: any) {
		console.log('Error: ' + err);
	},
	function () {
		console.log('Completed');
	});
}
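The IsTokenExpired check used above is not shown in the post; the following is a minimal sketch of what such a check could look like, assuming the id_token is a standard JWT whose payload carries an exp claim in seconds since the Unix epoch (the function name and optional offset parameter here are illustrative, not the library's actual implementation):

```typescript
// Hypothetical sketch of an id_token expiry check.
// A JWT has three base64url-encoded segments: header.payload.signature.
function isTokenExpired(token: string, offsetSeconds = 0): boolean {
    const payload = token.split('.')[1];
    // Convert base64url to standard base64 before decoding.
    const base64 = payload.replace(/-/g, '+').replace(/_/g, '/');
    const claims = JSON.parse(Buffer.from(base64, 'base64').toString('utf8'));
    // 'exp' is seconds since the Unix epoch (RFC 7519); compare in ms.
    return (claims.exp - offsetSeconds) * 1000 < Date.now();
}
```

Note that this only checks the expiry claim; the signature and the other claims are validated separately when the token is first received, as defined in the OpenID Connect specification.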

The RefreshSession function creates the required nonce and state which is used for the OpenID Implicit Flow validation and starts an authentication and authorization of the client application and the user.

public RefreshSession() {
        console.log('BEGIN refresh session Authorize');

        let nonce = 'N' + Math.random() + '' + Date.now();
        let state = Date.now() + '' + Math.random();

        this.store('authStateControl', state);
        this.store('authNonce', nonce);
        console.log('RefreshSession created. adding myautostate: ' + this.retrieve('authStateControl'));

        let url = this.createAuthorizeUrl(nonce, state);

        this._oidcSecuritySilentRenew.startRenew(url);
    }
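The createAuthorizeUrl helper called above is not shown in the post; a minimal sketch might look like the following, assuming the standard OpenID Connect Implicit Flow request parameters. Adding prompt=none asks the STS to renew the session without showing any login UI inside the hidden iframe (the function signature and the scope value here are assumptions for illustration):

```typescript
// Hypothetical sketch of building an Implicit Flow authorize URL.
function createAuthorizeUrl(
    authorizationEndpoint: string,
    clientId: string,
    redirectUri: string,
    nonce: string,
    state: string): string {

    const params = [
        `client_id=${encodeURIComponent(clientId)}`,
        `redirect_uri=${encodeURIComponent(redirectUri)}`,
        // 'id_token token' requests both an id_token and an access_token.
        `response_type=${encodeURIComponent('id_token token')}`,
        `scope=${encodeURIComponent('openid profile')}`,
        `nonce=${encodeURIComponent(nonce)}`,
        `state=${encodeURIComponent(state)}`,
        // Renew silently: fail rather than prompt the user for credentials.
        'prompt=none'
    ];
    return `${authorizationEndpoint}?${params.join('&')}`;
}
```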

The startRenew function sets the iframe src to the url for the OpenID Connect flow. If successful, the id_token and the access_token are returned and the application runs without any interruption.

public startRenew(url: string) {
        this._sessionIframe.src = url;

        return new Promise((resolve) => {
            this._sessionIframe.onload = () => {
                resolve();
            }
        });
}

IdentityServer4 Implicit Flow configuration

The STS server, using IdentityServer4, implements the server side of the OpenID Implicit Flow. The AccessTokenLifetime and the IdentityTokenLifetime properties are set to 30s and 10s. After 10s the id_token will expire and the client application will request new tokens. The access_token is valid for 30s, so that any client API requests will not fail. If you set these values to the same value, then the client will have to request new tokens before the id_token expires.

new Client
{
	ClientName = "angularclient",
	ClientId = "angularclient",
	AccessTokenType = AccessTokenType.Reference,
	AccessTokenLifetime = 30,
	IdentityTokenLifetime = 10,
	AllowedGrantTypes = GrantTypes.Implicit,
	AllowAccessTokensViaBrowser = true,
	RedirectUris = new List<string>
	{
		"https://localhost:44311"

	},
	PostLogoutRedirectUris = new List<string>
	{
		"https://localhost:44311/Unauthorized"
	},
	AllowedCorsOrigins = new List<string>
	{
		"https://localhost:44311",
		"http://localhost:44311"
	},
	AllowedScopes = new List<string>
	{
		"openid",
		"dataEventRecords",
		"dataeventrecordsscope",
		"securedFiles",
		"securedfilesscope",
		"role"
	}
}

When the application is run, the user can login, and the tokens are refreshed every ten seconds as configured on the server.

Links:

http://openid.net/specs/openid-connect-implicit-1_0.html

https://github.com/IdentityServer/IdentityServer4

https://identityserver4.readthedocs.io/en/release/



Andrew Lock: Using ImageSharp to resize images in ASP.NET Core - Part 4: saving to disk

Using ImageSharp to resize images in ASP.NET Core - Part 4: saving to disk

This is the next in a series of posts on using ImageSharp to resize images in an ASP.NET Core application. In the earlier posts I showed how you could define an MVC action that takes a path to a file stored in the wwwroot folder, resizes it, and serves the resized file.

The biggest problem with this is that resizing an image is relatively expensive, taking multiple seconds to process large images. In the previous post I showed how you could use the IDistributedCache interface to cache the resized image, and use that for subsequent requests.

This works pretty well, and avoids the need to process the image multiple times, but in the implementation I showed, there were a couple of drawbacks. The main issue was the lack of caching headers and features at the HTTP level - whenever the image is requested, the MVC action will return the whole data to the browser, even though nothing has changed.

In the following image, you can see that every request returns a 200 response and the full image data. The subsequent requests are all much faster than the original because we're using data cached in the IDistributedCache, but the browser is not caching our resized image.

Using ImageSharp to resize images in ASP.NET Core - Part 4: saving to disk

In this post I show a different approach to caching the data - instead of storing the file in an IDistributedCache, we instead write the file to disk in the wwwroot folder. We then use StaticFileMiddleware to serve the file directly, without ever hitting the MVC middleware after the initial request. This lets us take advantage of the built in caching headers and etag behaviour that comes with the StaticFileMiddleware.

Note: James Jackson-South has been working hard on some extensible ImageSharp middleware to provide the functionality in these blog posts. He's even written a post blog introducing it, so check it out!

The system design

The approach I'm using in this post is shown in the following figure:

Using ImageSharp to resize images in ASP.NET Core - Part 4: saving to disk

With this design a request for resizing an image, e.g. to /resized/200/120/original.jpg, would go through a number of steps:

  1. A request arrives for /resized/200/120/original.jpg
  2. The StaticFileMiddleware looks for the original.jpg file in the folder wwwroot/resized/200/120/, but it doesn't exist, so the request passes on to the MvcMiddleware
  3. The MvcMiddleware invokes the ResizeImage middleware, and saves the resized file in the folder wwwroot/resized/200/120/.
  4. On the next request, the StaticFileMiddleware finds the resized image in the wwwroot folder, and serves it as usual, short-circuiting the middleware pipeline before the MvcMiddleware can run.
  5. All subsequent requests for the resized file are served by the StaticFileMiddleware.

Writing a resized file to the wwwroot folder

After we first resize an image using the MvcMiddleware, we need to store the resized image in the wwwroot folder. In ASP.NET Core there is an abstraction called IFileProvider which can be used to obtain information about files. The IHostingEnvironment includes two such IFileProviders:

  • ContentRootFileProvider - an IFileProvider for the Content Root, where your application files are stored, usually the project root or publish folder.
  • WebRootFileProvider - an IFileProvider for the wwwroot folder

We can use the WebRootFileProvider to open a stream to our destination file, which we will write the resized image to. The outline of the method is as follows, with preconditions and the DOS protection code removed for brevity:

public class HomeController : Controller  
{
    private readonly IFileProvider _fileProvider;
    public HomeController(IHostingEnvironment env)
    {
        _fileProvider = env.WebRootFileProvider;
    }

    [Route("/resized/{width}/{height}/{*url}")]
    public IActionResult ResizeImage(string url, int width, int height)
    {
        // Preconditions and sanitisation
        // Check the original image exists
        var originalPath = PathString.FromUriComponent("/" + url);
        var fileInfo = _fileProvider.GetFileInfo(originalPath);
        if (!fileInfo.Exists) { return NotFound(); }

        // Replace the extension on the file (we only resize to jpg currently) 
        var resizedPath = ReplaceExtension($"/resized/{width}/{height}/{url}");

        // Use the IFileProvider to get an IFileInfo
        var resizedInfo = _fileProvider.GetFileInfo(resizedPath);
        // Create the destination folder tree if it doesn't already exist
        Directory.CreateDirectory(Path.GetDirectoryName(resizedInfo.PhysicalPath));

        // resize the image and save it to the output stream
        using (var outputStream = new FileStream(resizedInfo.PhysicalPath, FileMode.CreateNew))
        using (var inputStream = fileInfo.CreateReadStream())
        using (var image = Image.Load(inputStream))
        {
            image
                .Resize(width, height)
                .SaveAsJpeg(outputStream);
        }

        return PhysicalFile(resizedInfo.PhysicalPath, "image/jpg");
    }

    private static string ReplaceExtension(string wwwRelativePath)
    {
        return Path.Combine(
            Path.GetDirectoryName(wwwRelativePath),
            Path.GetFileNameWithoutExtension(wwwRelativePath)) + ".jpg";
    }
}

The overall design of this method is pretty simple.

  1. Check the original file exists.
  2. Create the destination file path. We're replacing the file extension with jpg at the moment because we are always resizing to a jpeg.
  3. Obtain an IFileInfo for the destination file. This is relative to the wwwroot folder as we are using the WebRootFileProvider on IHostingEnvironment.
  4. Open a file stream for the destination file.
  5. Open the original image, resize it, and save it to the output file stream.

With this method, we have everything we need to cache files in the wwwroot folder. Even better, nothing else needs to change in our Startup file, or anywhere else in our program.

Trying it out

Time to take it for a spin! If we make a number of requests for the same page again, and compare it to the first image in this post, you can see that we still have the fast response times for requests after the first, as we only resize the image once. However, you can also see that some of the requests now return a 304 response, and just 208 bytes of data. The browser uses its standard HTTP caching mechanisms on the client side, rather than caching only on the server.

Using ImageSharp to resize images in ASP.NET Core - Part 4: saving to disk

This is made possible by the etag and Last-Modified headers sent automatically by the StaticFileMiddleware.

Using ImageSharp to resize images in ASP.NET Core - Part 4: saving to disk

Note, we are not actually sending any caching headers by default - I wrote a post on how to do this here, which gives you control over how much caching browsers should do.

It might seem a little odd that there are three 200 requests before we start getting 304s. This is because:

  1. The first request is handled by the ResizeImage MVC method, but we are not adding any cache-related headers like ETag etc - we are just serving the file using the PhysicalFileResult.
  2. The second request is handled by the StaticFileMiddleware. It returns the file from disk, including an ETag and a Last-Modified header.
  3. The third request is made with additional headers - If-Modified-Since and If-None-Match headers. This returns the image data with a new ETag.
  4. Subsequent requests send the new ETag in the If-None-Match header, and the server responds with 304s.

I'm not entirely sure why we need three requests for the whole data here - it seems like two would suffice, given that the third request is made with the If-Modified-Since and If-None-Match headers. Why would the ETag need to change between requests two and three? I presume this is just standard behaviour though, and something I need to look at in more detail when I have time!
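The conditional-request handshake in steps 2-4 boils down to the server comparing the client's If-None-Match header against the current ETag; a simplified sketch of that server-side decision (not the StaticFileMiddleware code itself) looks like this:

```typescript
// Simplified sketch of server-side ETag handling for a conditional GET.
function handleConditionalGet(
    ifNoneMatch: string | undefined,
    currentEtag: string,
    fileContents: string): { status: number; body?: string } {

    if (ifNoneMatch === currentEtag) {
        // The client's cached copy is still fresh: send 304, no body.
        return { status: 304 };
    }
    // Otherwise send the full content (and the ETag, in a real server).
    return { status: 200, body: fileContents };
}
```

The 208 bytes seen in the screenshot are just the 304 response headers; the image data itself never crosses the wire again.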

Summary

This post takes an alternative approach to caching compared to my last post on ImageSharp. Instead of caching the resized images in an IDistributedCache, we save them directly to the wwwroot folder. That way we can use all of the built in file response capabilities of the StaticFileMiddleware, without having to write it ourselves.

Having said that, James Jackson-South has written some middleware to take a similar approach, which handles all the caching headers for you. If this series has been of interest, I encourage you to check it out!


Dominick Baier: Techorama 2017

Again Techorama was an awesome conference – kudos to the organizers!

Seth and Channel9 recorded my talk and also did an interview – so if you couldn’t be there in person, there are some updates about IdentityServer4 and identity in general.


Filed under: .NET Security, ASP.NET Core, IdentityServer, OAuth, OpenID Connect, WebAPI


Ben Foster: Applying IP Address restrictions in AWS API Gateway

Recently I've been exploring the features of the AWS API Gateway to see if it's a viable routing solution for some of our microservices hosted in ECS.

One of these services is a new onboarding API that we wish to make available to a trusted third party. To keep the integration as simple as possible we opted for API key based authentication.

In addition to supporting API Key authentication, API Gateway also allows you to configure plans with usage policies, which met our second requirement, to provide rate limits on this API.

As an additional level of security, we decided to whitelist the IP Addresses that could hit the API. The way you configure this is not quite what I expected, since it's not a setting directly within API Gateway but is instead done using IAM policies.

Below is an example API within API Gateway. I want to apply an IP Address restriction to the webhooks resource:

The first step is to configure your resource Authorization settings to use IAM. Select the resource method (in my case, ANY) and then AWS_IAM in the Authorization select list:

Next go to IAM and create a new Policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "execute-api:Invoke"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "xxx.xx.xx.xx/32"
                }
            },
            "Resource": "arn:aws:execute-api:*:*:*"
        }
    ]
}

Note that this policy allows invocation of all resources within all APIs in API Gateway from the specified IP Address. You'll want to restrict this to a specific API or resource, using the format:

arn:aws:execute-api:region:account-id:api-id/stage/METHOD_HTTP_VERB/Resource-path
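For example, a policy scoped to just the ANY method on the webhooks resource in a single API might look like the following (the region, account id, API id and stage here are placeholder values you would replace with your own):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "execute-api:Invoke"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "xxx.xx.xx.xx/32"
                }
            },
            "Resource": "arn:aws:execute-api:eu-west-1:123456789012:abcde12345/prod/*/webhooks"
        }
    ]
}
```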

It was my assumption that I would attach this policy to my API Gateway role and hey presto, I'd have my IP restriction in place. However, the policy is instead applied to a user, who then needs to sign the request using their access keys.

This can be tested using Postman:

With this done you should now be able to test your IP address restrictions. One thing I did notice is that policy changes do not seem to take effect immediately - instead I had to disable and re-enable IAM authorization on the resource after changing my policy.

Final thoughts

AWS API Gateway is a great service, but I find it odd that it doesn't support what I would class as a standard feature of API gateways. Given that the API I was testing is only going to be used by a single client, creating an IAM user isn't the end of the world; however, I wouldn't want to do this for APIs with a large number of clients.

Finally in order to make use of usage plans you need to require an API key. This means to achieve IP restrictions and rate limiting, clients will need to send two authentication tokens which isn't an ideal integration experience.

When I first started my investigation it was based on achieving the following architecture:

Unfortunately, running API Gateway in front of ELB still requires your load balancers to be publicly accessible, which makes the security features void if a client can figure out your ELB address. It seems API Gateway is geared more towards Lambda than ELB, so it looks like we'll need to consider other options for now.


Anuraj Parameswaran: What is new in ASP.NET Core 2.0 Preview

This post is about new features of ASP.NET Core 2.0 Preview. Microsoft announced ASP.NET Core 2.0 Preview 1 at Build 2017. This post will introduce some ASP.NET Core 2.0 features.


Damien Bowden: Anti-Forgery Validation with ASP.NET Core MVC and Angular

This article shows how API requests from an Angular SPA inside an ASP.NET Core MVC application can be protected against XSRF by adding an anti-forgery cookie. This is required when using Angular with cookies to persist the auth token.

Code: https://github.com/damienbod/AspNetCoreMvcAngular

Blogs in this Series

Cross Site Request Forgery

XSRF is an attack where a hacker makes malicious requests to a web app while the user of that website is still authenticated. This can happen when a website uses cookies to persist the auth token of a trusted user. A pure SPA should not use cookies for this, as it is hard to protect against the attack. With a server-side rendered application, like ASP.NET Core MVC, anti-forgery cookies can be used to protect against it, which makes using cookies safer.

Angular automatically adds the X-XSRF-TOKEN HTTP header with the anti-forgery cookie value to each request if the XSRF-TOKEN cookie is present. ASP.NET Core needs to know that it must use this header to validate the request. This can be configured in the ConfigureServices method of the Startup class.

public void ConfigureServices(IServiceCollection services)
{
	...
	services.AddAntiforgery(options => options.HeaderName = "X-XSRF-TOKEN");
	services.AddMvc();
}

The XSRF-TOKEN cookie is added to the response of the HTTP request. It is a secure cookie, so it is only sent over HTTPS; all plain HTTP requests will fail and return a 400 response. The cookie is created and added each time a new server URL is called, but not for API calls.

app.Use(async (context, next) =>
{
	string path = context.Request.Path.Value;
	if (path != null && !path.ToLower().Contains("/api"))
	{
		// XSRF-TOKEN used by angular in the $http if provided
		var tokens = antiforgery.GetAndStoreTokens(context);
		context.Response.Cookies.Append("XSRF-TOKEN", 
		  tokens.RequestToken, new CookieOptions { 
		    HttpOnly = false, 
		    Secure = true 
		  }
		);
	}

	...

	await next();
});

The API uses the ValidateAntiForgeryToken attribute to check that the request contains the correct value for the XSRF-TOKEN cookie. If the value is incorrect, or missing, the request is rejected with a 400 response. The attribute is required whenever data is changed; HTTP GET requests should not require it.

[HttpPut]
[ValidateAntiForgeryToken]
[Route("{id:int}")]
public IActionResult Update(int id, [FromBody]Thing thing)
{
	...

	return Ok(updatedThing);
}

You can check the cookies in the Chrome browser.

Or in Firefox using Firebug (Cookies Tab).

Links:

https://docs.microsoft.com/en-us/aspnet/core/security/anti-request-forgery

http://www.fiyazhasan.me/angularjs-anti-forgery-with-asp-net-core/

http://www.dotnetcurry.com/aspnet/1343/aspnet-core-csrf-antiforgery-token

http://stackoverflow.com/questions/43312973/how-to-implement-x-xsrf-token-with-angular2-app-and-net-core-app/43313402

https://en.wikipedia.org/wiki/Cross-site_request_forgery

https://stormpath.com/blog/angular-xsrf



Damien Bowden: Secure ASP.NET Core MVC with Angular using IdentityServer4 OpenID Connect Hybrid Flow

This article shows how an ASP.NET Core MVC application using Angular in the razor views can be secured using IdentityServer4 and the OpenID Connect Hybrid Flow. The user interface uses server-side rendering for the MVC views, and the Angular app is implemented in the razor views. The required security features can be added to the application easily using ASP.NET Core, which makes it safe to use the OpenID Connect Hybrid flow; once the user is authenticated and authorised, the token is saved in a secure cookie. This is not an SPA; it is an ASP.NET Core MVC application with Angular in the razor views. If you are implementing an SPA, you should use the OpenID Connect Implicit Flow.

Code: https://github.com/damienbod/AspNetCoreMvcAngular

Blogs in this Series

IdentityServer4 configuration for OpenID Connect Hybrid Flow

IdentityServer4 is implemented using ASP.NET Core Identity with SQLite. The application implements the OpenID Connect Hybrid flow. The client is configured to allow the required scopes; for example, the ‘openid’ scope must be added, as must the RedirectUris property, which must match the callback URL used by the ASP.NET Core OpenID Connect middleware on the client.

using IdentityServer4;
using IdentityServer4.Models;
using System.Collections.Generic;

namespace QuickstartIdentityServer
{
    public class Config
    {
        public static IEnumerable<IdentityResource> GetIdentityResources()
        {
            return new List<IdentityResource>
            {
                new IdentityResources.OpenId(),
                new IdentityResources.Profile(),
                new IdentityResources.Email(),
                new IdentityResource("thingsscope",new []{ "role", "admin", "user", "thingsapi" } )
            };
        }

        public static IEnumerable<ApiResource> GetApiResources()
        {
            return new List<ApiResource>
            {
                new ApiResource("thingsscope")
                {
                    ApiSecrets =
                    {
                        new Secret("thingsscopeSecret".Sha256())
                    },
                    Scopes =
                    {
                        new Scope
                        {
                            Name = "thingsscope",
                            DisplayName = "Scope for the thingsscope ApiResource"
                        }
                    },
                    UserClaims = { "role", "admin", "user", "thingsapi" }
                }
            };
        }

        // clients want to access resources (aka scopes)
        public static IEnumerable<Client> GetClients()
        {
            // client credentials client
            return new List<Client>
            {
                new Client
                {
                    ClientName = "angularmvcmixedclient",
                    ClientId = "angularmvcmixedclient",
                    ClientSecrets = {new Secret("thingsscopeSecret".Sha256()) },
                    AllowedGrantTypes = GrantTypes.Hybrid,
                    AllowOfflineAccess = true,
                    RedirectUris = { "https://localhost:44341/signin-oidc" },
                    PostLogoutRedirectUris = { "https://localhost:44341/signout-callback-oidc" },
                    AllowedCorsOrigins = new List<string>
                    {
                        "https://localhost:44341/"
                    },
                    AllowedScopes = new List<string>
                    {
                        IdentityServerConstants.StandardScopes.OpenId,
                        IdentityServerConstants.StandardScopes.Profile,
                        IdentityServerConstants.StandardScopes.OfflineAccess,
                        "thingsscope",
                        "role"

                    }
                }
            };
        }
    }
}

MVC Angular Client Configuration

The ASP.NET Core MVC application with Angular is implemented as shown in this post: Using Angular in an ASP.NET Core View with Webpack

The cookie authentication middleware is used to store the access token in a cookie once the user is authenticated and authorised. The OpenID Connect middleware is used to redirect the user to the STS server if the user is not authenticated. The SaveTokens property is set so that the token is persisted in the secure cookie.

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
	AuthenticationScheme = "Cookies"
});

app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions
{
	AuthenticationScheme = "oidc",
	SignInScheme = "Cookies",

	Authority = "https://localhost:44348",
	RequireHttpsMetadata = true,

	ClientId = "angularmvcmixedclient",
	ClientSecret = "thingsscopeSecret",

	ResponseType = "code id_token",
	Scope = { "openid", "profile", "thingsscope" },

	GetClaimsFromUserInfoEndpoint = true,
	SaveTokens = true
});

The Authorize attribute is used to secure the MVC controller or API.

using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Authorization;

namespace AspNetCoreMvcAngular.Controllers
{
    [Authorize]
    public class HomeController : Microsoft.AspNetCore.Mvc.Controller
    {
        public IActionResult Index()
        {
            return View();
        }

        public IActionResult Error()
        {
            return View();
        }
    }
}

CSP: Content Security Policy in the HTTP Headers

Content Security Policy helps you reduce XSS risks. The really brilliant NWebSec middleware can be used to implement this as required. Thanks to André N. Klingsheim for this excellent library. The middleware adds the headers to the HTTP responses.

https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP

In this configuration, mixed content is blocked, scripts are restricted to the same origin (plus unsafe-eval), and unsafe inline styles are allowed.

app.UseCsp(opts => opts
	.BlockAllMixedContent()
	.ScriptSources(s => s.Self()).ScriptSources(s => s.UnsafeEval())
	.StyleSources(s => s.UnsafeInline())
);

Set the Referrer-Policy in the HTTP Header

This allows us to restrict the amount of information being passed on to other sites when referring to other sites.

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy

Scott Helme wrote a really good post on this:
https://scotthelme.co.uk/a-new-security-header-referrer-policy/

Again NWebSec middleware is used to implement this.

app.UseReferrerPolicy(opts => opts.NoReferrer());

Redirect Validation

You can secure the application so that only redirects to your own sites are allowed; for example, only a redirect to IdentityServer4 is allowed.

// Register this earlier if there's middleware that might redirect.
// The IdentityServer4 port needs to be added here. 
// If the IdentityServer4 runs on a different server, this configuration needs to be changed.
app.UseRedirectValidation(t => t.AllowSameHostRedirectsToHttps(44348)); 

Secure Cookies

Only secure cookies should be used to store the session information.
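With the cookie authentication middleware used in this post, the secure flag can be enforced explicitly. This is just a sketch: the CookieSecure property and the CookieSecurePolicy enum are part of the ASP.NET Core 1.x cookie authentication options, not something specific to this sample.

```csharp
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
	AuthenticationScheme = "Cookies",
	// Never send the session cookie over plain HTTP
	CookieSecure = CookieSecurePolicy.Always
});
```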

You can check this in the Chrome browser:

XFO: X-Frame-Options

The X-Frame-Options header can be used to prevent the site from being loaded inside an IFrame. This helps protect against clickjacking.

https://developer.mozilla.org/de/docs/Web/HTTP/Headers/X-Frame-Options

app.UseXfo(xfo => xfo.Deny());

Configuring HSTS: Http Strict Transport Security

This HTTP header tells the browser to enforce HTTPS for a given length of time (here, MaxAge is set to 365 days).

app.UseHsts(hsts => hsts.MaxAge(365).IncludeSubdomains());

Note that HSTS relies on TOFU (trust on first use): the very first request to the site is not yet protected.

Once you have a proper certificate and a fixed URL, you can register your site so that browsers preload the HSTS settings for your website.
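NWebSec can also emit the preload directive for this. A sketch, assuming you meet the hstspreload.org submission requirements (a max-age of at least a year plus includeSubdomains); check the NWebSec docs that the Preload option is available in your version:

```csharp
// Emits: Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
app.UseHsts(hsts => hsts.MaxAge(365).IncludeSubdomains().Preload());
```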

https://hstspreload.org/

https://www.owasp.org/index.php/HTTP_Strict_Transport_Security_Cheat_Sheet

X-Xss-Protection NWebSec

Adds a middleware to the ASP.NET Core pipeline that sets the X-Xss-Protection header (docs from NWebSec).

app.UseXXssProtection(options => options.EnabledWithBlockMode());

CORS

Only the required CORS origins should be enabled when implementing this. Disable CORS as much as possible.

Cross Site Request Forgery XSRF

See this blog:
Anti-Forgery Validation with ASP.NET Core MVC and Angular

Validating the security Headers

Once you start the application, you can check that all the security headers are added as required:

Here’s the Configure method with all the NWebsec app settings as well as the authentication middleware for the client MVC application.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory, IAntiforgery antiforgery)
{
	loggerFactory.AddConsole(Configuration.GetSection("Logging"));
	loggerFactory.AddDebug();
	loggerFactory.AddSerilog();

	//Registered before static files to always set header
	app.UseHsts(hsts => hsts.MaxAge(365).IncludeSubdomains());
	app.UseXContentTypeOptions();
	app.UseReferrerPolicy(opts => opts.NoReferrer());

	app.UseCsp(opts => opts
		.BlockAllMixedContent()
		.ScriptSources(s => s.Self()).ScriptSources(s => s.UnsafeEval())
		.StyleSources(s => s.UnsafeInline())
	);

	JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();

	if (env.IsDevelopment())
	{
		app.UseDeveloperExceptionPage();
	}
	else
	{
		app.UseExceptionHandler("/Home/Error");
	}

	app.UseCookieAuthentication(new CookieAuthenticationOptions
	{
		AuthenticationScheme = "Cookies"
	});

	app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions
	{
		AuthenticationScheme = "oidc",
		SignInScheme = "Cookies",

		Authority = "https://localhost:44348",
		RequireHttpsMetadata = true,

		ClientId = "angularmvcmixedclient",
		ClientSecret = "thingsscopeSecret",

		ResponseType = "code id_token",
		Scope = { "openid", "profile", "thingsscope" },

		GetClaimsFromUserInfoEndpoint = true,
		SaveTokens = true
	});

	var angularRoutes = new[] {
		 "/default",
		 "/about"
	 };

	app.Use(async (context, next) =>
	{
		string path = context.Request.Path.Value;
		if (path != null && !path.ToLower().Contains("/api"))
		{
			// XSRF-TOKEN used by angular in the $http if provided
			var tokens = antiforgery.GetAndStoreTokens(context);
			context.Response.Cookies.Append("XSRF-TOKEN", tokens.RequestToken, new CookieOptions { HttpOnly = false, Secure = true });
		}

		if (context.Request.Path.HasValue && null != angularRoutes.FirstOrDefault(
			(ar) => context.Request.Path.Value.StartsWith(ar, StringComparison.OrdinalIgnoreCase)))
		{
			context.Request.Path = new PathString("/");
		}

		await next();
	});

	app.UseDefaultFiles();
	app.UseStaticFiles();

	//Registered after static files, to set headers for dynamic content.
	app.UseXfo(xfo => xfo.Deny());

	// Register this earlier if there's middleware that might redirect.
	// The IdentityServer4 port needs to be added here. 
	// If the IdentityServer4 runs on a different server, this configuration needs to be changed.
	app.UseRedirectValidation(t => t.AllowSameHostRedirectsToHttps(44348)); 

	app.UseXXssProtection(options => options.EnabledWithBlockMode());

	app.UseMvc(routes =>
	{
		routes.MapRoute(
			name: "default",
			template: "{controller=Home}/{action=Index}/{id?}");
	});  
}

Links:

https://www.scottbrady91.com/OpenID-Connect/OpenID-Connect-Flows

https://docs.nwebsec.com/en/latest/index.html

https://www.nwebsec.com/

https://github.com/NWebsec/NWebsec

https://content-security-policy.com/

https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy

https://scotthelme.co.uk/a-new-security-header-referrer-policy/

https://developer.mozilla.org/de/docs/Web/HTTP/Headers/X-Frame-Options

https://www.owasp.org/index.php/HTTP_Strict_Transport_Security_Cheat_Sheet

https://gun.io/blog/tofu-web-security/

https://en.wikipedia.org/wiki/Trust_on_first_use

http://www.dotnetnoob.com/2013/07/ramping-up-aspnet-session-security.html

http://openid.net/specs/openid-connect-core-1_0.html

https://www.ssllabs.com/



Dominick Baier: Financial APIs and IdentityServer

Right now there is quite some movement in the financial sector towards APIs and “collaboration” scenarios. The OpenID Foundation started a dedicated working group on securing Financial APIs (FAPIs) and the upcoming Revised Payment Service EU Directive (PSD2 – official document, vendor-based article) will bring quite some change to how technology is used at banks as well as to banking itself.

Googling for PSD2 shows quite a lot of ads and sponsored search results, which tells me that there is money to be made (pun intended).

We have a couple of customers that asked me about FAPIs and how IdentityServer can help them in this new world. In short, the answer is that both FAPIs in the OIDF sense and PSD2 are based on tokens and are either inspired by OpenID Connect/OAuth 2 or even tightly coupled with them. So moving to these technologies is definitely the first step.

The purpose of the OIDF “Financial API Part 1: Read-only API security profile” is to select a subset of the possible OpenID Connect options for clients and providers that have suitable security for the financial sector. Let’s have a look at some of those for OIDC providers (edited):

  • shall support both public and confidential clients;
  • shall authenticate the confidential client at the Token Endpoint using one of the following methods:
    • TLS mutual authentication [TLSM];
    • JWS Client Assertion using the client_secret or a private key as specified in section 9 of [OIDC];
  • shall require a key of size 2048 bits or larger if RSA algorithms are used for the client authentication;
  • shall require a key of size 160 bits or larger if elliptic curve algorithms are used for the client authentication;
  • shall support PKCE [RFC7636]
  • shall require Redirect URIs to be pre-registered;
  • shall require the redirect_uri parameter in the authorization request;
  • shall require the value of redirect_uri to exactly match one of the pre-registered redirect URIs;
  • shall require user authentication at LoA 2 as defined in [X.1254] or more;
  • shall require explicit consent by the user to authorize the requested scope if it has not been previously authorized;
  • shall return the token response as defined in 4.1.4 of [RFC6749];
  • shall return the list of allowed scopes with the issued access token;
  • shall provide opaque non-guessable access tokens with a minimum of 128 bits as defined in section 5.1.4.2.2 of [RFC6819].
  • should provide a mechanism for the end-user to revoke access tokens and refresh tokens granted to a Client as in 16.18 of [OIDC].
  • shall support the authentication request as in Section 3.1.2.1 of [OIDC];
  • shall issue an ID Token in the token response when openid was included in the requested scope as in Section 3.1.3.3 of [OIDC] with its sub value corresponding to the authenticated user and optional acr value in ID Token.

So to summarize, these are mostly best practices for implementing OIDC and OAuth 2 – just formalized. I am sure there will also be a certification process around this at some point.

Interesting to note is the requirement for PKCE and the removal of plain client secrets in favour of mutual TLS and client JWT assertions. IdentityServer supports all of the above requirements.
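As an aside, the PKCE part is easy to illustrate. The following is a minimal stand-alone sketch (the Pkce class name is my own, not from IdentityServer) of how a client derives the S256 code_challenge from a code_verifier as defined in RFC 7636:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class Pkce
{
    // Base64url encoding without padding, as used throughout RFC 7636
    public static string Base64Url(byte[] bytes) =>
        Convert.ToBase64String(bytes).TrimEnd('=').Replace('+', '-').Replace('/', '_');

    // Creates a high-entropy code_verifier: 32 random octets -> 43 base64url characters
    public static string CreateVerifier()
    {
        var bytes = new byte[32];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(bytes);
        }
        return Base64Url(bytes);
    }

    // S256 method: code_challenge = BASE64URL(SHA256(ASCII(code_verifier)))
    public static string DeriveChallenge(string codeVerifier)
    {
        using (var sha256 = SHA256.Create())
        {
            return Base64Url(sha256.ComputeHash(Encoding.ASCII.GetBytes(codeVerifier)));
        }
    }

    public static void Main()
    {
        var verifier = CreateVerifier();
        Console.WriteLine($"code_verifier:  {verifier}");
        Console.WriteLine($"code_challenge: {DeriveChallenge(verifier)}");
    }
}
```

Only the challenge travels in the authorization request; the verifier is sent later with the token request, so an attacker who intercepts the authorization code cannot redeem it.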

In contrast, the “Read and Write Profile” (currently a working draft) steps up security significantly by demanding proof of possession tokens via token binding, requiring signed authentication requests and encrypted identity tokens, and limiting the authentication flow to hybrid only. The current list from the draft:

  • shall require the request or request_uri parameter to be passed as a JWS signed JWT as in clause 6 of OIDC;
  • shall require the response_type values code id_token or code id_token token;
  • shall return ID Token as a detached signature to the authorization response;
  • shall include state hash, s_hash, in the ID Token to protect the state value;
  • shall only issue holder of key authorization code, access token, and refresh token for write operations;
  • shall support OAUTB or MTLS as a holder of key mechanism;
  • shall support user authentication at LoA 3 or greater as defined in X.1254;
  • shall support signed and encrypted ID Tokens

Both profiles also have increased security requirements for clients – which is the subject of a future post.

In short – exciting times ahead and we are constantly improving IdentityServer to make it ready for these new scenarios. Feel free to get in touch if you are interested.


Filed under: .NET Security, ASP.NET Core, IdentityServer, OAuth, OpenID Connect, Uncategorized, WebAPI


Dominick Baier: dotnet new Templates for IdentityServer4

The dotnet CLI includes a templating engine that makes it pretty straightforward to create your own project templates (see this blog post for a good intro).

This new repo is the home for all IdentityServer4 templates to come – right now they are pretty basic, but good enough to get you started.

The repo includes three templates right now:

dotnet new is4

Creates a minimal IdentityServer4 project without a UI and just one API and one client.

dotnet new is4ui

Adds the quickstart UI to the current project (can be combined with is4)

dotnet new is4inmem

Adds a boilerplate IdentityServer with UI, test users and sample clients and resources

See the readme for installation instructions.


Filed under: .NET Security, ASP.NET Core, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: New in IdentityServer4: Events

Well – not really new – but redesigned.

IdentityServer4 has two diagnostics facilities – logging and events. While logging is more like low-level “printf” style output, events represent higher-level information about certain logical operations in IdentityServer (think Windows security event log).

Events are structured data and include event IDs, success/failure information, activity IDs, IP addresses, categories, and event-specific details. This makes it easy to query and analyze them and extract useful information for further processing.

Events work great with event stores like ELK, Seq or Splunk.

Find more details in our docs.


Filed under: ASP.NET Core, IdentityServer, OAuth, OpenID Connect, Uncategorized, WebAPI


Dominick Baier: NDC London 2017

As always – NDC was a very good conference. Brock and I did a workshop, two talks and an interview. Here are the relevant links:

Check our website for more training dates.


Filed under: .NET Security, ASP.NET, IdentityModel, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: IdentityModel.OidcClient v2 & the OpenID RP Certification

A couple of weeks ago I started re-writing (and re-designing) my OpenID Connect & OAuth 2 client library for native applications. The library follows the guidance from the OpenID Connect and OAuth 2.0 for native Applications specification.

Main features are:

  • Support for OpenID Connect authorization code and hybrid flow
  • Support for PKCE
  • NetStandard 1.4 library, which makes it compatible with x-plat .NET Core, desktop .NET, Xamarin iOS & Android (and UWP soon)
  • Configurable policy to lock down security requirements (e.g. requiring at_hash or c_hash, policies around discovery etc.)
  • Either stand-alone mode (request generation and response processing) or support for pluggable (system) browser implementations
  • Support for pluggable logging via .NET ILogger

In addition, starting with v2 – OidcClient is also now certified by the OpenID Foundation for the basic and config profile.

It also passes all conformance tests for the code id_token grant type (hybrid flow) – but since I don’t support the other hybrid flow combinations (e.g. code token or code id_token token), I couldn’t certify for the full hybrid profile.

For maximum transparency, I checked in my conformance test runner along with the source code. Feel free to try/verify yourself.

The latest version of OidcClient is the dalwhinnie release (courtesy of my whisky semver scheme). Source code is here.

I am waiting a couple more days for feedback – and then I will release the final 2.0.0 version. If you have some spare time, please give it a try (there’s a console client included and some more samples here – use the v2 branch for the time being). Thanks!


Filed under: .NET Security, IdentityModel, OAuth, OpenID Connect, WebAPI


Dominick Baier: Platforms where you can run IdentityServer4

There is some confusion about where, and on which platform/OS you can run IdentityServer4 – or more generally speaking: ASP.NET Core.

IdentityServer4 is ASP.NET Core middleware – and ASP.NET Core (despite its name) runs on the full .NET Framework 4.5.x and upwards or .NET Core.

If you are using the full .NET Framework you are tied to Windows – but have the advantage of using a platform that you (and your devs, customers, support staff etc) already know well. It is just a .NET based web app at this point.

If you are using .NET Core, you get the benefits of the new stack including side-by-side versioning and cross-platform. But there is a learning curve involved getting to know .NET Core and its tooling.


Filed under: .NET Security, ASP.NET, IdentityServer, OpenID Connect, WebAPI


Henrik F. Nielsen: ASP.NET WebHooks V1 RTM (Link)

ASP.NET WebHooks V1 RTM was announced a little while back. WebHooks provide a simple pub/sub model for wiring together Web APIs and services with your code. A WebHook can be used to get notified when a file has changed in Dropbox, a code change has been committed to GitHub, a payment has been initiated in PayPal, a card has been created in Trello, and much more. When subscribing, you provide a callback URI where you want to be notified. When an event occurs, an HTTP POST request is sent to your callback URI with information about what happened so that your Web app can act accordingly. WebHooks happen without polling and with no need to hold open a network connection while waiting for notifications.

Microsoft ASP.NET WebHooks makes it easier to both send and receive WebHooks as part of your ASP.NET application:

In addition to hosting your own WebHook server, ASP.NET WebHooks are part of Azure Functions where you can process WebHooks without hosting or managing your own server! You can even go further and host an Azure Bot Service using Microsoft Bot Framework for writing cool bots talking to your customers!

The WebHook code targets ASP.NET Web API 2 and ASP.NET MVC 5, and is available as Open Source on GitHub, and as Nuget packages. For feedback, fixes, and suggestions, you can use GitHub, StackOverflow using the tag asp.net-webhooks, or send me a tweet.

For the full announcement, please see the blog Announcing Microsoft ASP.NET WebHooks V1 RTM.

Have fun!

Henrik


Dominick Baier: Bootstrapping OpenID Connect: Discovery

OpenID Connect clients and APIs need certain configuration values to initiate the various protocol requests and to validate identity and access tokens. You can either hard-code these values (e.g. the URLs of the authorize and token endpoints, key material etc.) – or get those values dynamically using discovery.

Using discovery has advantages in case one of the needed values changes over time. This will definitely be the case for the key material you use to sign your tokens. In that scenario you want your token consumers to be able to dynamically update their configuration without having to take them down or re-deploy.

The idea is simple: every OpenID Connect provider should offer a JSON document under the /.well-known/openid-configuration URL below its base address (often also called the authority). This document has information about the issuer name, endpoint URLs, key material, and the capabilities of the provider, e.g. which scopes or response types it supports.

Try https://demo.identityserver.io/.well-known/openid-configuration as an example.

Our IdentityModel library has a little helper class that allows loading and parsing a discovery document, e.g.:

var disco = await DiscoveryClient.GetAsync("https://demo.identityserver.io");
Console.WriteLine(disco.Json);

It also provides strongly typed accessors for most elements, e.g.:

Console.WriteLine(disco.TokenEndpoint);

..or you can access the elements by name:

Console.WriteLine(disco.Json.TryGetString("introspection_endpoint"));

It also gives you access to the key material and the various properties of the JSON encoded key set – e.g. iterating over the key ids:

foreach (var key in disco.KeySet.Keys)
{
    Console.WriteLine(key.Kid);
}

Discovery and security
As you can imagine, the discovery document is a nice target for an attacker. Being able to manipulate the endpoint URLs or the key material would ultimately result in a compromise of a client or an API.

As opposed to e.g. WS-Federation/WS-Trust metadata, the discovery document is not signed. Instead OpenID Connect relies on transport security for authenticity and integrity of the configuration data.

Recently we’ve been involved in a penetration test against client libraries, and one technique the pen-testers used was compromising discovery. Based on their feedback, the following extra checks should be done when consuming a discovery document:

  • HTTPS must be used for the discovery endpoint and all protocol endpoints
  • The issuer name should match the authority specified when downloading the document (that’s actually a MUST in the discovery spec)
  • The protocol endpoints should be “beneath” the authority – and not on a different server or URL (this could be especially interesting for multi-tenant OPs)
  • A key set must be specified

Based on that feedback, we added a configurable validation policy to DiscoveryClient that defaults to the above recommendations. If for whatever reason (e.g. dev environments) you need to relax a setting, you can use the following code:

var client = new DiscoveryClient("http://dev.identityserver.internal");
client.Policy.RequireHttps = false;
 
var disco = await client.GetAsync();

Btw – you can always connect over HTTP to localhost and 127.0.0.1 (but this is also configurable).

Source code here, nuget here.


Filed under: OAuth, OpenID Connect, WebAPI


Dominick Baier: Trying IdentityServer4

We have a number of options how you can experiment or get started with IdentityServer4.

Starting point
It all starts at https://identityserver.io – from here you can find all below links as well as our next workshop dates, consulting, production support etc.

Source code
You can find all the source code in our IdentityServer organization on github. Especially IdentityServer4 itself, the samples, and the access token validation middleware.

Nuget
Here’s a list of all our nugets – here’s IdentityServer4, here’s the validation middleware.

Documentation and tutorials
Documentation can be found here. Especially useful to get started are our tutorials.

Demo Site
We have a demo site at https://demo.identityserver.io that runs the latest version of IdentityServer4. We have also pre-configured a number of client types, e.g. hybrid and authorization code (with and without PKCE) as well as implicit and client credentials flow. You can use this site to try IdentityServer with your favourite OpenID Connect client library. There is also a test API that you can call with our access tokens.

Compatibility check
Here’s a repo that contains all permutations of IdentityServer3 and 4, Katana and ASP.NET Core Web APIs and JWTs and reference tokens. We use this test harness to ensure cross version compatibility. Feel free to try it yourself.

CI builds
Our CI feed can be found here.

HTH


Filed under: .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: IdentityServer4.1.0.0

It’s done.

Release notes here.

Nuget here.

Docs here.

I am off to holidays.

See you next year.


Filed under: .NET Security, ASP.NET, OAuth, OpenID Connect, WebAPI


Dominick Baier: IdentityServer4 is now OpenID Certified

As of today, IdentityServer4 is officially certified by the OpenID Foundation. Release of 1.0 will be this Friday!

More details here.

Filed under: .NET Security, OAuth, WebAPI

