Andrew Lock: Using ImageSharp to resize images in ASP.NET Core - a comparison with CoreCompat.System.Drawing

Using ImageSharp to resize images in ASP.NET Core - a comparison with CoreCompat.System.Drawing

Currently, one of the significant missing features in .NET Core and .NET Standard is the System.Drawing APIs, which you can use, among other things, for server-side image processing in ASP.NET Core. Bertrand Le Roy gave a great rundown of the various alternatives available in January 2017, each with different pros and cons.

I was reading a post by Dmitry Sikorsky yesterday describing how to use one of these libraries, the CoreCompat.System.Drawing package, to resize an image in ASP.NET Core. This package is designed to mimic the existing System.Drawing APIs (it's a .NET Core port, of the Mono port, of System.Drawing!) so if you need a drop-in replacement for System.Drawing then it's a good place to start.

I'm going to need to start doing some image processing soon, so I wanted to take a look at how the code for working with CoreCompat.System.Drawing compares to using the ImageSharp package. ImageSharp is a brand new library that is designed from the ground up to be cross-platform by using only managed code. This means it will probably not be as performant as libraries that use OS-specific features, but on the plus side, it is completely cross-platform.

For the purposes of this comparison, I'm going to start with the code presented by Dmitry in his post and convert it to use ImageSharp.

The sample app

This post is based on the code from Dmitry's post, so it uses the same sample app. This contains a single controller, the ImageController, which you can use to crop and resize an image from a given URL.

For example, a request might look like the following:

/image?url=https://assets-cdn.github.com/images/modules/logos_page/GitHub-Mark.png&sourcex=120&sourcey=100&sourcewidth=360&sourceheight=360&destinationwidth=100&destinationheight=100

This will download the GitHub logo from https://assets-cdn.github.com/images/modules/logos_page/GitHub-Mark.png.


It will then crop it using the rectangle specified by sourcex=120&sourcey=100&sourcewidth=360&sourceheight=360, and resize the output to 100×100. Finally, it will render the result in the response as a jpeg.


This is the same functionality Dmitry described; I will just convert his code to use ImageSharp instead.

Installing ImageSharp

The first step is to add the ImageSharp package to your project. Currently, this is not quite as smooth as it will eventually be, as the package is not yet published on NuGet, only to a MyGet feed. This is a temporary situation while the code base stabilises - it will be published to NuGet at that point - but at the moment it is a bit of a barrier to adding it to your project.

Note: an ImageSharp package does exist on NuGet, but it is currently just a placeholder for when the library is eventually published there. Don't use it!

To install the package from the MyGet feed, add a NuGet.config file to your solution folder, specifying the location of the feed:

<?xml version="1.0" encoding="utf-8"?>  
<configuration>  
  <packageSources>
    <add key="ImageSharp Nightly" value="https://www.myget.org/F/imagesharp/api/v3/index.json" />
  </packageSources>
</configuration>  

You can now add the ImageSharp package to your csproj file, and run a restore. I specified the version 1.0.0-* to fetch the latest version from the feed (1.0.0-alpha7 in my case).

<PackageReference Include="ImageSharp" Version="1.0.0-*" />  

When you run dotnet restore you should see that the CLI has used the ImageSharp MyGet feed - it appears in the list of config files and feeds used:

$ dotnet restore
  Restoring packages for C:\Users\Sock\Repos\andrewlock\AspNetCoreImageResizingService\AspNetCoreImageResizingService\AspNetCoreImageResizingService.csproj...
  Installing ImageSharp 1.0.0-alpha7-00006.
  Generating MSBuild file C:\Users\Sock\Repos\andrewlock\AspNetCoreImageResizingService\AspNetCoreImageResizingService\obj\AspNetCoreImageResizingService.csproj.nuget.g.props.
  Writing lock file to disk. Path: C:\Users\Sock\Repos\andrewlock\AspNetCoreImageResizingService\AspNetCoreImageResizingService\obj\project.assets.json
  Restore completed in 2.76 sec for C:\Users\Sock\Repos\andrewlock\AspNetCoreImageResizingService\AspNetCoreImageResizingService\AspNetCoreImageResizingService.csproj.

  NuGet Config files used:
      C:\Users\Sock\Repos\andrewlock\AspNetCoreImageResizingService\NuGet.Config
      C:\Users\Sock\AppData\Roaming\NuGet\NuGet.Config
      C:\Program Files (x86)\NuGet\Config\Microsoft.VisualStudio.Offline.config

  Feeds used:
      https://www.myget.org/F/imagesharp/api/v3/index.json
      https://api.nuget.org/v3/index.json
      C:\Program Files (x86)\Microsoft SDKs\NuGetPackages\

  Installed:
      1 package(s) to C:\Users\Sock\Repos\andrewlock\AspNetCoreImageResizingService\AspNetCoreImageResizingService\AspNetCoreImageResizingService.csproj

Adding the NuGet.config file is a bit of a pain, but it's a step that will hopefully go away soon, when the package makes its way onto NuGet.org. On the plus side, you only need to add a single package to your project for this example.

In contrast, to write cross-platform code with CoreCompat.System.Drawing you have to include three different packages - the library itself and the runtime components for both Linux and OS X:

<PackageReference Include="CoreCompat.System.Drawing" Version="1.0.0-beta006" />  
<PackageReference Include="runtime.linux-x64.CoreCompat.System.Drawing" Version="1.0.0-beta009" />  
<PackageReference Include="runtime.osx.10.10-x64.CoreCompat.System.Drawing" Version="1.0.1-beta004" />  

Obviously, if you are running on only a single platform, then this probably won't be an issue for you, but it's something to take into consideration.

Loading an image from a stream

Now the library is installed, we can start converting the code. The first step in the app is to download the image provided in the URL.

Note that this code is very much sample only - downloading files sent to you in query arguments is probably not advisable, plus you should probably be using a static HttpClient, disposing correctly etc!
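
As an aside, a minimal sketch of the static HttpClient approach mentioned above (the field name is hypothetical):

// Reuse a single HttpClient for the lifetime of the application,
// instead of creating a new one per request.
private static readonly HttpClient _httpClient = new HttpClient();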

For the CoreCompat.System.Drawing library, the code doing the work reads the stream into a Bitmap, which is then assigned to the Image variable.

Image image = null;  
HttpClient httpClient = new HttpClient();  
HttpResponseMessage response = await httpClient.GetAsync(url);  
Stream inputStream = await response.Content.ReadAsStreamAsync();

using (Bitmap temp = new Bitmap(inputStream))  
{
    image = new Bitmap(temp);
}

While for ImageSharp we have the following:

Image image = null;  
HttpClient httpClient = new HttpClient();  
HttpResponseMessage response = await httpClient.GetAsync(url);  
Stream inputStream = await response.Content.ReadAsStreamAsync();

image = Image.Load(inputStream);  

Obviously the HttpClient code is identical here, but there is less faffing required to actually read an image from the response stream. The ImageSharp API is much more intuitive - I have to admit I always have to refresh my memory on how the System.Drawing Image and Bitmap classes interact! Definitely a win for ImageSharp I think.

It's worth noting that the Image classes in these two examples are completely different types, in different namespaces, so are not interoperable in general.

Cropping and Resizing an image

Once the image is in memory, the next step is to crop and resize it to create our output image. The CropImage function for the CoreCompat.System.Drawing is as follows:

private Image CropImage(Image sourceImage, int sourceX, int sourceY, int sourceWidth, int sourceHeight, int destinationWidth, int destinationHeight)  
{
  Image destinationImage = new Bitmap(destinationWidth, destinationHeight);
  Graphics g = Graphics.FromImage(destinationImage);

  g.DrawImage(
    sourceImage,
    new Rectangle(0, 0, destinationWidth, destinationHeight),
    new Rectangle(sourceX, sourceY, sourceWidth, sourceHeight),
    GraphicsUnit.Pixel
  );

  return destinationImage;
}

This code creates the destination image first, generates a Graphics object to allow manipulating the content, and then draws a region from the first image onto the second, resizing as it does so.

This does the job, but it's not exactly simple to follow - if I hadn't told you, would you have spotted that the image is being resized as well as cropped? Maybe, given we set the destinationImage size, but possibly not if you were just looking at the DrawImage function.

In contrast, the ImageSharp version of this method would look something like the following:

private Image<Rgba32> CropImage(Image sourceImage, int sourceX, int sourceY, int sourceWidth, int sourceHeight, int destinationWidth, int destinationHeight)  
{
    return sourceImage
        .Crop(new Rectangle(sourceX, sourceY, sourceWidth, sourceHeight))
        .Resize(destinationWidth, destinationHeight);
}

I think you'd agree, this is much easier to understand! Instead of using a mapping from one coordinate system to another, handling both the crop and resize in one operation, it has two well-named methods that are easy to understand.

One slight quirk in the ImageSharp version is that this method returns an Image<Rgba32> when we gave it an Image. The definition for this Image object is:

public sealed class Image : Image<Rgba32> { }  

so the Image is-an Image<Rgba32>. This isn't a big issue; I guess it would just be nice, if you were working with the Image class, to get back an Image from the manipulation functions. I still count this as a win for ImageSharp.

Saving the image to a stream

The final part of the app is to save the cropped image to the response stream and return it to the browser.

The CoreCompat.System.Drawing version of saving the image to a stream looks like the following. We first download the image, crop it, and then save it to a MemoryStream. This stream can then be used to create a FileResponse object to return to the browser (check the example source code or Dmitry's post for details).

Image sourceImage = await this.LoadImageFromUrl(url);

Image destinationImage = this.CropImage(sourceImage, sourceX, sourceY, sourceWidth, sourceHeight, destinationWidth, destinationHeight);  
Stream outputStream = new MemoryStream();

destinationImage.Save(outputStream, ImageFormat.Jpeg);  

The ImageSharp equivalent is very similar. It just involves changing the type of the destination image to be Image<Rgba32> (as mentioned in the previous section), and updating the last line, in which we save the image to a stream.

Image sourceImage = await this.LoadImageFromUrl(url);

Image<Rgba32> destinationImage = this.CropImage(sourceImage, sourceX, sourceY, sourceWidth, sourceHeight, destinationWidth, destinationHeight);  
Stream outputStream = new MemoryStream();

destinationImage.Save(outputStream, new JpegEncoder());  

Instead of using an Enum to specify the output formatting, you pass an instance of an IImageEncoder, in this case the JpegEncoder. This approach is more extensible, though it is slightly less discoverable than the System.Drawing approach.

Note, there are many different overloads to Image<T>.Save() that you can use to specify all sorts of different encoding options etc.
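
Putting the pieces together, the ImageSharp version of the controller action might look something like the following sketch. The helper names match the snippets above, but the route, the parameter binding, the stream rewind and the use of File() to return the result are my assumptions rather than the exact code from the sample project:

[HttpGet("/image")]
public async Task<IActionResult> Get(string url, int sourceX, int sourceY,
    int sourceWidth, int sourceHeight, int destinationWidth, int destinationHeight)
{
    Image sourceImage = await this.LoadImageFromUrl(url);
    Image<Rgba32> destinationImage = this.CropImage(sourceImage, sourceX, sourceY,
        sourceWidth, sourceHeight, destinationWidth, destinationHeight);

    var outputStream = new MemoryStream();
    destinationImage.Save(outputStream, new JpegEncoder());

    // Rewind the stream before handing it back as the response body
    outputStream.Seek(0, SeekOrigin.Begin);
    return File(outputStream, "image/jpeg");
}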

Wrapping up

And that's it. Everything you need to convert from CoreCompat.System.Drawing to ImageSharp. Personally, I really like how ImageSharp is shaping up - it has a nice API, is fully managed and cross-platform, and even targets .NET Standard 1.1 - no mean feat! It may not currently hit the performance of other libraries that rely on native code, but with all the improvements and progress around Span<T>, it may be able to come close to parity down the line.

If you're interested in the project, do check it out on GitHub and consider contributing - it will be great to get the project to an RTM state.

Thanks are due to James South for creating the ImageSharp project, and also to Dmitry Sikorsky for inspiring me to write this post! You can find the source code for his project on GitHub here, and the source for my version here.


Anuraj Parameswaran: Post requests from Azure Logic apps

This post is about sending POST requests to services from Azure Logic Apps. Logic Apps provide a way to simplify and implement scalable integrations and workflows in the cloud. They provide a visual designer to model and automate your process as a series of steps known as a workflow. There are many connectors across the cloud and on-premises to quickly integrate across services and protocols. A logic app begins with a trigger (like 'When an account is added to Dynamics CRM') and, after firing, can run many combinations of actions, conversions, and condition logic.


Andrew Lock: Creating a basic Web API template using dotnet new custom templates

Creating a basic Web API template using dotnet new custom templates

In my last post, I showed a simple, stripped-down version of the Web API template with the Razor dependencies removed.

As an excuse to play with the new CLI templating functionality, I decided to turn this template into a dotnet new template.

For details on this new capability, check out the announcement blog post, or the excellent series by Muhammed Rehan Saeed. In brief, the .NET CLI includes functionality to let you create your own templates using dotnet new, which can be distributed as zip files, installed from the source code project folder or as NuGet packages.

The Basic Web API template

I decided to wrap the basic web API template I created in my last post so that you can easily use it to create your own Web API projects without Razor templates.

To do so, I followed Muhammed Rehan Saeed's blog posts (and borrowed heavily from his example template!) to create a version of the basic Web API template you can install from NuGet.

This template creates a very stripped-down version of the web API project, with the Razor functionality removed. If you are looking for a more fully-featured template, I recommend checking out the ASP.NET MVC Boilerplate project.

If you have installed Visual Studio 2017, you can use the .NET CLI to install new templates and use them to create projects:

  1. Run dotnet new --install "NetEscapades.Templates::*" to install the project template
  2. Run dotnet new basicwebapi --help to see how to select the various features to include in the project
  3. Run dotnet new basicwebapi --name "MyTemplate" along with any other custom options to create a project from the template.

This will create a new basic Web API project in the current folder.

Options and feature selection

One of the great features of the .NET CLI templates is the ability to do feature selection. This lets you add or remove features from the template at the time it is generated.

I added a number of options to the template (again, heavily inspired by the ASP.NET Boilerplate project). This lets you add features that will be commonly included in a Web API project, such as CORS, DataAnnotations, and the ApiExplorer.

You can view these options by running dotnet new basicwebapi --help:

$dotnet new basicwebapi --help
Template Instantiation Commands for .NET Core CLI.

Usage: dotnet new [arguments] [options]

Arguments:  
  template  The template to instantiate.

Options:  
  -l|--list         List templates containing the specified name.
  -lang|--language  Specifies the language of the template to create
  -n|--name         The name for the output being created. If no name is specified, the name of the current directory is used.
  -o|--output       Location to place the generated output.
  -h|--help         Displays help for this command.
  -all|--show-all   Shows all templates


Basic ASP.NET Core Web API (C#)  
Author: Andrew Lock  
Options:

  -A|--ApiExplorer                 The ApiExplorer functionality allows you to expose metadata about your API endpoints. You can use it to generate documentation about your application. Enabling this option will add the ApiExplorer libraries and services to your project.
                                   bool - Optional
                                   Default: false

  -C|--Controller                  If true, this will generate an example ValuesController in your project.
                                   bool - Optional
                                   Default: false

  -D|--DataAnnotations             DataAnnotations provide declarative metadata and validations for models in ASP.NET Core.
                                   bool - Optional
                                   Default: false

  -CO|--CORS                       Browser security prevents a web page from making AJAX requests to another domain. This restriction is called the same-origin policy, and prevents a malicious site from reading sensitive data from another site. CORS is a W3C standard that allows a server to relax the same-origin policy. Using CORS, a server can explicitly allow some cross-origin requests while rejecting others.
                                   bool - Optional
                                   Default: true

  -T|--Title                       The name of the project which determines the assembly product name. If the Swagger feature is enabled, shows the title on the Swagger UI.
                                   string - Optional
                                   Default: BasicWebApi

 -De|--Description                A description of the project which determines the assembly description. If the Swagger feature is enabled, shows the description on the Swagger UI.

                                   string - Optional
                                   Default: BasicWebApi

  -Au|--Author                     The name of the author of the project which determines the assembly author, company and copyright information.

                                   string - Optional
                                   Default: Project Author

  -F|--Framework                   Decide which version of the .NET Framework to target.

                                       .NET Core         - Run cross platform (on Windows, Mac and Linux). The framework is made up of NuGet packages which can be shipped with the application so it is fully stand-alone.

                                       .NET Framework    - Gives you access to the full breadth of libraries available in .NET instead of the subset available in .NET Core but requires it to be pre-installed.

                                       Both              - Target both .NET Core and .NET Framework.

                                   Default: Both

  -I|--IncludeApplicationInsights  Whether or not to include Application Insights in the project
                                   bool - Optional
                                   Default: false

You can invoke the template with any or all of these options, for example:

$dotnet new basicwebapi --Controller false --DataAnnotations true -Au "Andrew Lock"
Content generation time: 762.9798 ms  
The template "Basic ASP.NET Core Web API" created successfully.  

Source code for the template

If you're interested to see the source for the template, you can view it on GitHub. There you will find an example of the template.json file that describes the template, as well as a full CI build using Cake and AppVeyor to automatically publish the NuGet templates.
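
For reference, here is a heavily trimmed-down sketch of what such a template.json might contain - the identity, classifications and symbol description below are illustrative values rather than copies of the real file:

{
  "$schema": "http://json.schemastore.org/template",
  "author": "Andrew Lock",
  "classifications": [ "WebAPI" ],
  "identity": "NetEscapades.Templates.BasicWebApi",
  "name": "Basic ASP.NET Core Web API",
  "shortName": "basicwebapi",
  "symbols": {
    "CORS": {
      "type": "parameter",
      "datatype": "bool",
      "defaultValue": "true",
      "description": "Adds CORS support to allow cross-origin requests."
    }
  }
}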

If you have any suggestions, bugs or comments, then do let me know on GitHub!


Andrew Lock: Removing the MVC Razor dependencies from the Web API template in ASP.NET Core

Removing the MVC Razor dependencies from the Web API template in ASP.NET Core

In this article I'll show how to add the minimal required dependencies to create an ASP.NET Core Web API project, without including the additional MVC/Razor functionality and packages.

Note: I have since created a custom dotnet new template for the code described in this post, plus a few extra features. You can read about it here, or view it on GitHub and NuGet.

MVC vs Web API

In the previous version of ASP.NET, the MVC and Web API stacks were completely separate. Even though there were many similar concepts shared between them, the actual types were distinct. This was generally a little awkward, and often resulted in confusing error messages when you accidentally referenced the wrong namespace.

In ASP.NET Core, this is no longer an issue - MVC and Web API have been unified under the auspices of ASP.NET Core MVC, in which there is fundamentally no real difference between an MVC controller and a Web API controller. All of your controllers can act both as MVC controllers, serving server-side rendered Razor templates, and as Web API controllers returning formatted (e.g. JSON or XML) data.

This unification is great and definitely reduces the mental overhead required compared with working with both stacks previously. Even if you are not using both aspects in a single application, the fact that all the types are familiar makes for a smoother experience.

Having said that, if you only need the Web API features (e.g. you're building an API without any server-side rendering requirements), then you may not want or need the additional MVC capabilities in your app. However, the default templates include them by default.

The default templates

When you create a new MVC project from a template in Visual Studio or via the command line, you can choose whether to create an empty ASP.NET Core project, a Web API project or an MVC web app project:

Removing the MVC Razor dependencies from the Web API template in ASP.NET Core

If you create an 'empty' project, then the resulting app really is super-lightweight. It has no dependencies on any MVC constructs, and just produces a very simple 'Hello World' response when run:

Removing the MVC Razor dependencies from the Web API template in ASP.NET Core

At the other end of the scale, the 'MVC web app' gives you a more 'complete' application. Depending on the authentication options you select, this could include ASP.NET Core Identity, EF Core, and SQL server integration, in addition to all the MVC configuration and Razor view templating:

Removing the MVC Razor dependencies from the Web API template in ASP.NET Core

In between these two templates is the Web API template. This includes the necessary MVC dependencies for creating a Web API, and the simplest version just includes a single example ValuesController:

Removing the MVC Razor dependencies from the Web API template in ASP.NET Core

However, while this looks stripped back, it also adds all the necessary packages for creating full MVC applications too, i.e. the server-side Razor packages. This is because it includes the same Microsoft.AspNetCore.Mvc package that the full MVC web app does, and calls AddMvc() in Startup.ConfigureServices.

As described in Steve Gordon's post on the AddMvc function, this adds a bunch of various services to the service collection. Some of these are required to allow you to use Web API, but some of them - the Razor-related services in particular - are unnecessary for a web API.

In most cases, using the Microsoft.AspNetCore.Mvc package is the easiest thing to do, but sometimes you want to trim your dependencies as much as possible, and make your APIs as lightweight as you can. In those cases you may find it useful to specifically add only the MVC packages and services you need for your app.

Adding the package dependencies

We'll start with the 'Empty' web application template, and add the packages necessary for Web API to it.

The exact packages you will need will depend on what features you need in your application. By default, the Empty ASP.NET Core template includes ApplicationInsights and the Microsoft.AspNetCore meta package, so I'll leave those in the project.

On top of those, I'll add the MVC.Core package, the JSON formatter package, and the CORS package:

  • The MVC Core package adds all the essential MVC types such as ControllerBase and RouteAttribute, as well as a host of dependencies such as Microsoft.AspNetCore.Mvc.Abstractions and Microsoft.AspNetCore.Authorization.
  • The JSON formatter package ensures we can actually render our Web API action results.
  • The CORS package adds Cross Origin Resource Sharing (CORS) support - a common requirement for web APIs that will be hosted on a different domain to the client calling them.

The final .csproj file should look something like this:

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.0.0" />
    <PackageReference Include="Microsoft.AspNetCore" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc.Core" Version="1.1.2" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc.Formatters.Json" Version="1.1.2" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc.Cors" Version="1.1.2" />
  </ItemGroup>

</Project>  

Once you've restored the packages, we can update Startup to add our Web API services.

Adding the necessary services to Startup.cs

In most cases, adding the Web API services to a project would be as simple as calling AddMvc() in your ConfigureServices method. However, that method adds a whole load of functionality that I don't currently need. By default, it would add the ApiExplorer, the Razor view engine, Razor views, tag helpers and DataAnnotations - none of which we are using at the moment. (We might well want to add the ApiExplorer and DataAnnotations back at a later date, but right now, I don't need them.)

Instead, I'm left with just the following services:

public void ConfigureServices(IServiceCollection services)  
{
    var builder = services.AddMvcCore();
    builder.AddAuthorization();
    builder.AddFormatterMappings();
    builder.AddJsonFormatters();
    builder.AddCors();
}

That's all the services we need for now - next stop, middleware.

Adding the MvcMiddleware

Adding the MvcMiddleware to the pipeline is simple. I just replace the "Hello World" run call with UseMvc(). Note that I'm using the unparameterised version of the method, which does not add any conventional routes to the application. As this is a web API, I will just be using attribute routing, so there's no need to set up conventional routes (for comparison, the parameterised version is sketched after the code below).

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)  
{
    loggerFactory.AddConsole();

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseMvc();
}
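
For contrast, if you did want a conventional default route (which we don't need for an attribute-routed API), the parameterised version would look something like this:

app.UseMvc(routes =>
{
    // Conventional route - unnecessary here, shown only for comparison
    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}");
});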

That's all the MVC configuration we need - the final step is to add a controller to show off our new API.

Adding an MVC Controller

There's one important caveat to be aware of when creating a web API in this way - you must use the ControllerBase class, not Controller. The latter is defined in the Microsoft.AspNetCore.Mvc package, which we haven't added. Luckily, it mostly contains methods related to rendering Razor, so it's not a problem for us here. The ControllerBase class includes all the various StatusCodeResult helper methods you will likely use, such as Ok used below.

[Route("api/[controller]")]
public class ValuesController : ControllerBase  
{
    // GET api/values
    [HttpGet]
    public IActionResult Get()
    {
        return Ok(new string[] { "value1", "value2" });
    }
}

And if we take it for a spin:

Removing the MVC Razor dependencies from the Web API template in ASP.NET Core

Voila! A stripped down web API controller, with minimal dependencies.

Bonus: AddWebApi extension method

As a final little piece of tidying up - our ConfigureServices call looks a bit messy now. Personally I'm a fan of the "Clean Startup.cs" approach espoused by K. Scott Allen, in which you reduce the clutter in your Startup.cs class by creating wrapper extension methods for your configuration.

We can do the same with our simplified web API project by adding an extension method called AddWebApi(). I've even created a parameterised overload that takes an Action<MvcOptions>, synonymous with the AddMvc() equivalent that you are likely already using.

using System;  
using Microsoft.AspNetCore.Mvc;  
using Microsoft.AspNetCore.Mvc.Internal;

// ReSharper disable once CheckNamespace
namespace Microsoft.Extensions.DependencyInjection  
{
    public static class WebApiServiceCollectionExtensions
    {
        /// <summary>
        /// Adds MVC services to the specified <see cref="IServiceCollection" /> for Web API.
        /// This is a slimmed down version of <see cref="MvcServiceCollectionExtensions.AddMvc"/>
        /// </summary>
        /// <param name="services">The <see cref="IServiceCollection" /> to add services to.</param>
        /// <returns>An <see cref="IMvcBuilder"/> that can be used to further configure the MVC services.</returns>
        public static IMvcBuilder AddWebApi(this IServiceCollection services)
        {
            if (services == null) throw new ArgumentNullException(nameof(services));

            var builder = services.AddMvcCore();
            builder.AddAuthorization();

            builder.AddFormatterMappings();

            // +10 order
            builder.AddJsonFormatters();

            builder.AddCors();

            return new MvcBuilder(builder.Services, builder.PartManager);
        }

        /// <summary>
        /// Adds MVC services to the specified <see cref="IServiceCollection" /> for Web API.
        /// This is a slimmed down version of <see cref="MvcServiceCollectionExtensions.AddMvc"/>
        /// </summary>
        /// <param name="services">The <see cref="IServiceCollection" /> to add services to.</param>
        /// <param name="setupAction">An <see cref="Action{MvcOptions}"/> to configure the provided <see cref="MvcOptions"/>.</param>
        /// <returns>An <see cref="IMvcBuilder"/> that can be used to further configure the MVC services.</returns>
        public static IMvcBuilder AddWebApi(this IServiceCollection services, Action<MvcOptions> setupAction)
        {
            if (services == null) throw new ArgumentNullException(nameof(services));
            if (setupAction == null) throw new ArgumentNullException(nameof(setupAction));

            var builder = services.AddWebApi();
            builder.Services.Configure(setupAction);

            return builder;
        }

    }
}

Finally, we can use this extension method to tidy up our ConfigureServices method:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddWebApi();
}

Much better!

Summary

This post showed how you can trim the Razor dependencies from your application when you know you are not going to need them. This represents pretty much the most bare-bones Web API template you might use in your application. Obviously mileage may vary, but luckily adding extra capabilities (validation or the ApiExplorer, for example) back is easy, as sketched below.
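
For example, assuming you also reference the Microsoft.AspNetCore.Mvc.ApiExplorer and Microsoft.AspNetCore.Mvc.DataAnnotations packages, those capabilities can be switched back on with a couple of extra calls on the same builder:

var builder = services.AddMvcCore();
builder.AddAuthorization();
builder.AddFormatterMappings();
builder.AddJsonFormatters();
builder.AddCors();

// Optional extras - only add these if you need them
builder.AddApiExplorer();      // exposes endpoint metadata (e.g. for documentation tools)
builder.AddDataAnnotations();  // enables [Required], [Range] etc. model validation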


Damien Bowden: ASP.NET Core IdentityServer4 Resource Owner Password Flow with custom UserRepository

This article shows how a custom user store or repository can be used in IdentityServer4. This can be used with an existing user management system which doesn't use Identity, or to request user data from a custom source. The Resource Owner Flow using refresh tokens is used to access the protected data on the resource server. The client is implemented using IdentityModel.

Code: https://github.com/damienbod/AspNetCoreIdentityServer4ResourceOwnerPassword

Setting up a custom User Repository in IdentityServer4

To create a custom user store, an extension method needs to be created which can be added to the AddIdentityServer() builder. The .AddCustomUserStore() adds everything required for the custom user management.

services.AddIdentityServer()
		.AddSigningCredential(cert)
		.AddInMemoryIdentityResources(Config.GetIdentityResources())
		.AddInMemoryApiResources(Config.GetApiResources())
		.AddInMemoryClients(Config.GetClients())
		.AddCustomUserStore();

The extension method adds the required classes to the ASP.NET Core dependency injection services. A user repository is used to access the user data, a custom profile service is added to add the required claims to the tokens, and a validator is also added to validate the user credentials.

using CustomIdentityServer4.UserServices;

namespace Microsoft.Extensions.DependencyInjection
{
    public static class CustomIdentityServerBuilderExtensions
    {
        public static IIdentityServerBuilder AddCustomUserStore(this IIdentityServerBuilder builder)
        {
            builder.Services.AddSingleton<IUserRepository, UserRepository>();
            builder.AddProfileService<CustomProfileService>();
            builder.AddResourceOwnerValidator<CustomResourceOwnerPasswordValidator>();

            return builder;
        }
    }
}

The IUserRepository interface provides everything the application requires to use the custom user store throughout the IdentityServer4 application. The different views and controllers use this interface as required, and the implementation behind it can be changed as needed.

namespace CustomIdentityServer4.UserServices
{
    public interface IUserRepository
    {
        bool ValidateCredentials(string username, string password);

        CustomUser FindBySubjectId(string subjectId);

        CustomUser FindByUsername(string username);
    }
}

The CustomUser class is the user class. This class can be changed to map the user data defined in the persistence medium.

namespace CustomIdentityServer4.UserServices
{
    public class CustomUser
    {
            public string SubjectId { get; set; }
            public string Email { get; set; }
            public string UserName { get; set; }
            public string Password { get; set; }
    }
}

The UserRepository implements the IUserRepository interface. Dummy users are added in this example for testing. If you are using a custom database, Dapper, or whatever, you could implement the data access logic in this class.

using System.Collections.Generic;
using System.Linq;
using System;

namespace CustomIdentityServer4.UserServices
{
    public class UserRepository : IUserRepository
    {
        // some dummy data. Replace this with your user persistence.
        private readonly List<CustomUser> _users = new List<CustomUser>
        {
            new CustomUser{
                SubjectId = "123",
                UserName = "damienbod",
                Password = "damienbod",
                Email = "damienbod@email.ch"
            },
            new CustomUser{
                SubjectId = "124",
                UserName = "raphael",
                Password = "raphael",
                Email = "raphael@email.ch"
            },
        };

        public bool ValidateCredentials(string username, string password)
        {
            var user = FindByUsername(username);
            if (user != null)
            {
                return user.Password.Equals(password);
            }

            return false;
        }

        public CustomUser FindBySubjectId(string subjectId)
        {
            return _users.FirstOrDefault(x => x.SubjectId == subjectId);
        }

        public CustomUser FindByUsername(string username)
        {
            return _users.FirstOrDefault(x => x.UserName.Equals(username, StringComparison.OrdinalIgnoreCase));
        }
    }
}

The CustomProfileService uses the IUserRepository to get the user data, and adds the claims for the user to the tokens, which are returned to the client, if the user/application was validated.

using System.Security.Claims;
using System.Threading.Tasks;
using IdentityServer4.Extensions;
using IdentityServer4.Models;
using IdentityServer4.Services;
using Microsoft.Extensions.Logging;
using System.Collections.Generic;

namespace CustomIdentityServer4.UserServices
{
    public class CustomProfileService : IProfileService
    {
        protected readonly ILogger Logger;


        protected readonly IUserRepository _userRepository;

        public CustomProfileService(IUserRepository userRepository, ILogger<CustomProfileService> logger)
        {
            _userRepository = userRepository;
            Logger = logger;
        }


        public async Task GetProfileDataAsync(ProfileDataRequestContext context)
        {
            var sub = context.Subject.GetSubjectId();

            Logger.LogDebug("Get profile called for subject {subject} from client {client} with claim types {claimTypes} via {caller}",
                context.Subject.GetSubjectId(),
                context.Client.ClientName ?? context.Client.ClientId,
                context.RequestedClaimTypes,
                context.Caller);

            var user = _userRepository.FindBySubjectId(context.Subject.GetSubjectId());

            var claims = new List<Claim>
            {
                new Claim("role", "dataEventRecords.admin"),
                new Claim("role", "dataEventRecords.user"),
                new Claim("username", user.UserName),
                new Claim("email", user.Email)
            };

            context.IssuedClaims = claims;
        }

        public async Task IsActiveAsync(IsActiveContext context)
        {
            var sub = context.Subject.GetSubjectId();
            var user = _userRepository.FindBySubjectId(context.Subject.GetSubjectId());
            context.IsActive = user != null;
        }
    }
}

The CustomResourceOwnerPasswordValidator implements the validation.

using IdentityServer4.Validation;
using IdentityModel;
using System.Threading.Tasks;

namespace CustomIdentityServer4.UserServices
{
    public class CustomResourceOwnerPasswordValidator : IResourceOwnerPasswordValidator
    {
        private readonly IUserRepository _userRepository;

        public CustomResourceOwnerPasswordValidator(IUserRepository userRepository)
        {
            _userRepository = userRepository;
        }

        public Task ValidateAsync(ResourceOwnerPasswordValidationContext context)
        {
            if (_userRepository.ValidateCredentials(context.UserName, context.Password))
            {
                var user = _userRepository.FindByUsername(context.UserName);
                context.Result = new GrantValidationResult(user.SubjectId, OidcConstants.AuthenticationMethods.Password);
            }

            return Task.FromResult(0);
        }
    }
}

The AccountController is configured to use the IUserRepository interface.

   public class AccountController : Controller
    {
        private readonly IIdentityServerInteractionService _interaction;
        private readonly AccountService _account;
        private readonly IUserRepository _userRepository;

        public AccountController(
            IIdentityServerInteractionService interaction,
            IClientStore clientStore,
            IHttpContextAccessor httpContextAccessor,
            IUserRepository userRepository)
        {
            _interaction = interaction;
            _account = new AccountService(interaction, httpContextAccessor, clientStore);
            _userRepository = userRepository;
        }

        /// <summary>
        /// Show login page
        /// </summary>
        [HttpGet]

Setting up a grant type ResourceOwnerPasswordAndClientCredentials to use refresh tokens

The grant type ResourceOwnerPasswordAndClientCredentials is configured in the GetClients method in the IdentityServer4 application. To use refresh tokens, you must add the IdentityServerConstants.StandardScopes.OfflineAccess to the allowed scopes. Then the other refresh token settings can be set as required.

public static IEnumerable<Client> GetClients()
{
	return new List<Client>
	{
		new Client
		{
			ClientId = "resourceownerclient",

			AllowedGrantTypes = GrantTypes.ResourceOwnerPasswordAndClientCredentials,
			AccessTokenType = AccessTokenType.Jwt,
			AccessTokenLifetime = 120, //86400,
			IdentityTokenLifetime = 120, //86400,
			UpdateAccessTokenClaimsOnRefresh = true,
			SlidingRefreshTokenLifetime = 30,
			AllowOfflineAccess = true,
			RefreshTokenExpiration = TokenExpiration.Absolute,
			RefreshTokenUsage = TokenUsage.OneTimeOnly,
			AlwaysSendClientClaims = true,
			Enabled = true,
			ClientSecrets=  new List<Secret> { new Secret("dataEventRecordsSecret".Sha256()) },
			AllowedScopes = {
				IdentityServerConstants.StandardScopes.OpenId, 
				IdentityServerConstants.StandardScopes.Profile,
				IdentityServerConstants.StandardScopes.Email,
				IdentityServerConstants.StandardScopes.OfflineAccess,
				"dataEventRecords"
			}
		}
	};
}

When the token client requests a token, the offline_access scope must be sent in the HTTP request to receive a refresh token.

private static async Task<TokenResponse> RequestTokenAsync(string user, string password)
{
	return await _tokenClient.RequestResourceOwnerPasswordAsync(
		user,
		password,
		"email openid dataEventRecords offline_access");
}
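
The _tokenClient used above is an IdentityModel TokenClient. It isn't shown in the snippet, but a sketch of how it might be created - assuming IdentityServer4's default /connect/token endpoint and the client secret from the configuration above - looks like this:

private static readonly TokenClient _tokenClient = new TokenClient(
	"https://localhost:44318/connect/token", // assumed default token endpoint
	"resourceownerclient",
	"dataEventRecordsSecret");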

Running the application

When all three applications are started, the console application gets the tokens from the IdentityServer4 application, and the required claims are returned to it in the token. Not all the claims need to be added to the access_token, only the ones which are required on the resource server. If the user info is required in the UI, a separate request can be made for it.

Here’s the token payload returned from the server to the client. You can see the extra data added in the profile service, for example the role array.

{
  "nbf": 1492161131,
  "exp": 1492161251,
  "iss": "https://localhost:44318",
  "aud": [
    "https://localhost:44318/resources",
    "dataEventRecords"
  ],
  "client_id": "resourceownerclient",
  "sub": "123",
  "auth_time": 1492161130,
  "idp": "local",
  "role": [
    "dataEventRecords.admin",
    "dataEventRecords.user"
  ],
  "username": "damienbod",
  "email": "damienbod@email.ch",
  "scope": [
    "email",
    "openid",
    "dataEventRecords",
    "offline_access"
  ],
  "amr": [
    "pwd"
  ]
}

The token is used to get the data from the resource server. The client uses the access_token and adds it to the header of the HTTP request.

HttpClient httpClient = new HttpClient();
httpClient.SetBearerToken(access_token);

var payloadFromResourceServer = await httpClient.GetAsync("https://localhost:44365/api/DataEventRecords");
if (!payloadFromResourceServer.IsSuccessStatusCode)
{
	Console.WriteLine(payloadFromResourceServer.StatusCode);
}
else
{
	var content = await payloadFromResourceServer.Content.ReadAsStringAsync();
	Console.WriteLine(JArray.Parse(content));
}

The resource server validates each request using the UseIdentityServerAuthentication middleware extension method.

JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
IdentityServerAuthenticationOptions identityServerValidationOptions = new IdentityServerAuthenticationOptions
{
	Authority = "https://localhost:44318/",
	AllowedScopes = new List<string> { "dataEventRecords" },
	ApiSecret = "dataEventRecordsSecret",
	ApiName = "dataEventRecords",
	AutomaticAuthenticate = true,
	SupportedTokens = SupportedTokens.Both,
	// TokenRetriever = _tokenRetriever,
	// required if you want to return a 403 and not a 401 for forbidden responses
	AutomaticChallenge = true,
};

app.UseIdentityServerAuthentication(identityServerValidationOptions);

Each API is protected using the Authorize attribute with policies if needed. The HttpContext can be used to get the claims sent with the token, if required. The username is sent with the access_token in the header.

[Authorize("dataEventRecordsUser")]
[HttpGet]
public IActionResult Get()
{
	var userName = HttpContext.User.FindFirst("username")?.Value;
	return Ok(_dataEventRecordRepository.GetAll());
}
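
The "dataEventRecordsUser" policy itself is registered in the resource server's ConfigureServices. The exact definition isn't shown here, but a sketch of how such a policy could be registered - assuming it simply requires the matching role claim - might look like this:

services.AddAuthorization(options =>
{
	options.AddPolicy("dataEventRecordsUser",
		policyUser => policyUser.RequireClaim("role", "dataEventRecords.user"));
});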

The client gets a refresh token and uses it to renew its tokens periodically. You could use a background task to implement this in a desktop or mobile application.

public static async Task RunRefreshAsync(TokenResponse response, int milliseconds)
{
	var refresh_token = response.RefreshToken;

	while (true)
	{
		response = await RefreshTokenAsync(refresh_token);

		// Get the resource data using the new tokens...
		await ResourceDataClient.GetDataAndDisplayInConsoleAsync(response.AccessToken);

		if (response.RefreshToken != refresh_token)
		{
			ShowResponse(response);
			refresh_token = response.RefreshToken;
		}

		Task.Delay(milliseconds).Wait();
	}
}

The application then loops forever.
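
The RefreshTokenAsync method called inside the loop isn't shown above; a minimal sketch using the same IdentityModel TokenClient (with deliberately simple error handling) might look like this:

private static async Task<TokenResponse> RefreshTokenAsync(string refreshToken)
{
	var response = await _tokenClient.RequestRefreshTokenAsync(refreshToken);

	if (response.IsError)
	{
		Console.WriteLine(response.Error);
	}

	return response;
}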

Links:

https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow

https://github.com/IdentityModel/IdentityModel2

https://github.com/IdentityServer/IdentityServer4

https://github.com/IdentityServer/IdentityServer4.Samples



Dominick Baier: dotnet new Templates for IdentityServer4

The dotnet CLI includes a templating engine that makes it pretty straightforward to create your own project templates (see this blog post for a good intro).

This new repo is the home for all IdentityServer4 templates to come – right now they are pretty basic, but good enough to get you started.

The repo includes three templates right now:

dotnet new is4

Creates a minimal IdentityServer4 project without a UI and just one API and one client.

dotnet new is4ui

Adds the quickstart UI to the current project (can be combined with is4)

dotnet new is4inmem

Adds a boilerplate IdentityServer with UI, test users and sample clients and resources

See the readme for installation instructions.





Damien Bowden: Implementing OpenID Implicit Flow using OpenIddict and Angular

This article shows how to implement the OpenID Connect Implicit Flow using OpenIddict hosted in an ASP.NET Core application, an ASP.NET Core web API and an Angular application as the client.

Code: https://github.com/damienbod/AspNetCoreOpeniddictAngularImplicitFlow

Three different projects are used to implement the application. The OpenIddict Implicit Flow Server is used to authenticate and authorise, the resource server is used to provide the API, and the Angular application implements the UI.

OpenIddict Server implementing the Implicit Flow

To use the OpenIddict NuGet packages to implement an OpenID Connect server, you need to use the MyGet server. You can add a NuGet.config file to your project to configure this, or add it to the package sources in Visual Studio 2017.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="NuGet" value="https://api.nuget.org/v3/index.json" />
    <add key="aspnet-contrib" value="https://www.myget.org/F/aspnet-contrib/api/v3/index.json" />
  </packageSources>
</configuration>

Then you can use the NuGet package manager to download the required packages. You need to select the key for the correct source in the drop-down on the right-hand side, and select the required pre-release packages.

Or you can just add them directly to the csproj file.

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
    <PreserveCompilationContext>true</PreserveCompilationContext>
    <OutputType>Exe</OutputType>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="AspNet.Security.OAuth.Validation" Version="1.0.0-rtm-0241" />
    <PackageReference Include="Microsoft.AspNetCore.Authentication.Google" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Authentication.JwtBearer" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Authentication.Twitter" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Diagnostics" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Identity.EntityFrameworkCore" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.2" />
    <PackageReference Include="Microsoft.AspNetCore.Server.IISIntegration" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Server.Kestrel" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Cors" Version="1.1.1" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Configuration.CommandLine" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Configuration.EnvironmentVariables" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Logging.Console" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.1.1" />
    <PackageReference Include="Openiddict" Version="1.0.0-beta2-0598" />
    <PackageReference Include="OpenIddict.EntityFrameworkCore" Version="1.0.0-beta2-0598" />
    <PackageReference Include="OpenIddict.Mvc" Version="1.0.0-beta2-0598" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite" Version="1.1.1" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite.Design" Version="1.1.1" />
  </ItemGroup>

  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="1.0.0" />
    <DotNetCliToolReference Include="Microsoft.DotNet.Watcher.Tools" Version="1.0.0" />
  </ItemGroup>

  <ItemGroup>
    <None Update="damienbodserver.pfx">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
  </ItemGroup>

</Project>

The OpenIddict packages are configured in the ConfigureServices and Configure methods of the Startup class. The following code configures the OpenID Connect Implicit Flow with a SQLite database using Entity Framework Core. The required endpoints are enabled, and JSON Web Tokens are used.

public void ConfigureServices(IServiceCollection services)
{
	services.AddDbContext<ApplicationDbContext>(options =>
	{
		options.UseSqlite(Configuration.GetConnectionString("DefaultConnection"));
		options.UseOpenIddict();
	});

	services.AddIdentity<ApplicationUser, IdentityRole>()
		.AddEntityFrameworkStores<ApplicationDbContext>();

	services.Configure<IdentityOptions>(options =>
	{
		options.ClaimsIdentity.UserNameClaimType = OpenIdConnectConstants.Claims.Name;
		options.ClaimsIdentity.UserIdClaimType = OpenIdConnectConstants.Claims.Subject;
		options.ClaimsIdentity.RoleClaimType = OpenIdConnectConstants.Claims.Role;
	});

	services.AddOpenIddict(options =>
	{
		options.AddEntityFrameworkCoreStores<ApplicationDbContext>();
		options.AddMvcBinders();
		options.EnableAuthorizationEndpoint("/connect/authorize")
			   .EnableLogoutEndpoint("/connect/logout")
			   .EnableIntrospectionEndpoint("/connect/introspect")
			   .EnableUserinfoEndpoint("/api/userinfo");

		options.AllowImplicitFlow();
		options.AddSigningCertificate(_cert);
		options.UseJsonWebTokens();
	});

	var policy = new Microsoft.AspNetCore.Cors.Infrastructure.CorsPolicy();

	policy.Headers.Add("*");
	policy.Methods.Add("*");
	policy.Origins.Add("*");
	policy.SupportsCredentials = true;

	services.AddCors(x => x.AddPolicy("corsGlobalPolicy", policy));

	services.AddMvc();

	services.AddTransient<IEmailSender, AuthMessageSender>();
	services.AddTransient<ISmsSender, AuthMessageSender>();
}

The Configure method sets up JWT bearer authentication so the userinfo API, or any other authorised API, can be used. The OpenIddict middleware is also added. The commented-out InitializeAsync method is used to seed the existing database with OpenIddict data. The database was created using Entity Framework Core migrations from the command line.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	loggerFactory.AddConsole(Configuration.GetSection("Logging"));
	loggerFactory.AddDebug();

	if (env.IsDevelopment())
	{
		app.UseDeveloperExceptionPage();
		app.UseDatabaseErrorPage();
	}
	else
	{
		app.UseExceptionHandler("/Home/Error");
	}

	app.UseCors("corsGlobalPolicy");

	JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
	JwtSecurityTokenHandler.DefaultOutboundClaimTypeMap.Clear();

	var jwtOptions = new JwtBearerOptions()
	{
		AutomaticAuthenticate = true,
		AutomaticChallenge = true,
		RequireHttpsMetadata = true,
		Audience = "dataEventRecords",
		ClaimsIssuer = "https://localhost:44319/",
		TokenValidationParameters = new TokenValidationParameters
		{
			NameClaimType = OpenIdConnectConstants.Claims.Name,
			RoleClaimType = OpenIdConnectConstants.Claims.Role
		}
	};

	jwtOptions.TokenValidationParameters.ValidAudience = "dataEventRecords";
	jwtOptions.TokenValidationParameters.ValidIssuer = "https://localhost:44319/";
	jwtOptions.TokenValidationParameters.IssuerSigningKey = new RsaSecurityKey(_cert.GetRSAPrivateKey().ExportParameters(false));
	app.UseJwtBearerAuthentication(jwtOptions);

	app.UseIdentity();

	app.UseOpenIddict();

	app.UseMvcWithDefaultRoute();

	// Seed the database with the sample applications.
	// Note: in a real world application, this step should be part of a setup script.
	// InitializeAsync(app.ApplicationServices, CancellationToken.None).GetAwaiter().GetResult();
}

Entity Framework Core database migrations:

> dotnet ef migrations add test
> dotnet ef database update test

The UserinfoController is used to return user data to the client. The API requires a token, which is validated using the JWT bearer token validation configured in the Startup class. Whatever claims the application requires need to be added here. This example adds some extra role claims which are used in the Angular SPA.

using System.Threading.Tasks;
using AspNet.Security.OAuth.Validation;
using AspNet.Security.OpenIdConnect.Primitives;
using OpeniddictServer.Models;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json.Linq;
using System.Collections.Generic;

namespace OpeniddictServer.Controllers
{
    [Route("api")]
    public class UserinfoController : Controller
    {
        private readonly UserManager<ApplicationUser> _userManager;

        public UserinfoController(UserManager<ApplicationUser> userManager)
        {
            _userManager = userManager;
        }

        //
        // GET: /api/userinfo
        [Authorize(ActiveAuthenticationSchemes = OAuthValidationDefaults.AuthenticationScheme)]
        [HttpGet("userinfo"), Produces("application/json")]
        public async Task<IActionResult> Userinfo()
        {
            var user = await _userManager.GetUserAsync(User);
            if (user == null)
            {
                return BadRequest(new OpenIdConnectResponse
                {
                    Error = OpenIdConnectConstants.Errors.InvalidGrant,
                    ErrorDescription = "The user profile is no longer available."
                });
            }

            var claims = new JObject();
            claims[OpenIdConnectConstants.Claims.Subject] = await _userManager.GetUserIdAsync(user);

            if (User.HasClaim(OpenIdConnectConstants.Claims.Scope, OpenIdConnectConstants.Scopes.Email))
            {
                claims[OpenIdConnectConstants.Claims.Email] = await _userManager.GetEmailAsync(user);
                claims[OpenIdConnectConstants.Claims.EmailVerified] = await _userManager.IsEmailConfirmedAsync(user);
            }

            if (User.HasClaim(OpenIdConnectConstants.Claims.Scope, OpenIdConnectConstants.Scopes.Phone))
            {
                claims[OpenIdConnectConstants.Claims.PhoneNumber] = await _userManager.GetPhoneNumberAsync(user);
                claims[OpenIdConnectConstants.Claims.PhoneNumberVerified] = await _userManager.IsPhoneNumberConfirmedAsync(user);
            }

            List<string> roles = new List<string> { "dataEventRecords", "dataEventRecords.admin", "admin", "dataEventRecords.user" };
            claims["role"] = JArray.FromObject(roles);

            return Json(claims);
        }
    }
}

The AuthorizationController implements the CreateTicketAsync method, where the claims can be added to the tokens as required. The Implicit Flow in this example requires both the id_token and the access_token, and extra claims are added to the access_token. These are the claims used by the resource server to set the policies.

private async Task<AuthenticationTicket> CreateTicketAsync(OpenIdConnectRequest request, ApplicationUser user)
{
	var identity = new ClaimsIdentity(OpenIdConnectServerDefaults.AuthenticationScheme);

	var principal = await _signInManager.CreateUserPrincipalAsync(user);
	foreach (var claim in principal.Claims)
	{
		if (claim.Type == _identityOptions.Value.ClaimsIdentity.SecurityStampClaimType)
		{
			continue;
		}

		var destinations = new List<string>
		{
			OpenIdConnectConstants.Destinations.AccessToken
		};

		if ((claim.Type == OpenIdConnectConstants.Claims.Name) ||
			(claim.Type == OpenIdConnectConstants.Claims.Email) ||
			(claim.Type == OpenIdConnectConstants.Claims.Role)  )
		{
			destinations.Add(OpenIdConnectConstants.Destinations.IdentityToken);
		}

		claim.SetDestinations(destinations);

		identity.AddClaim(claim);
	}

	// Add custom claims
	var claimdataEventRecordsAdmin = new Claim("role", "dataEventRecords.admin");
	claimdataEventRecordsAdmin.SetDestinations(OpenIdConnectConstants.Destinations.AccessToken);

	var claimAdmin = new Claim("role", "admin");
	claimAdmin.SetDestinations(OpenIdConnectConstants.Destinations.AccessToken);

	var claimUser = new Claim("role", "dataEventRecords.user");
	claimUser.SetDestinations(OpenIdConnectConstants.Destinations.AccessToken);

	identity.AddClaim(claimdataEventRecordsAdmin);
	identity.AddClaim(claimAdmin);
	identity.AddClaim(claimUser);

	// Create a new authentication ticket holding the user identity.
	var ticket = new AuthenticationTicket(new ClaimsPrincipal(identity),
	new AuthenticationProperties(),
	OpenIdConnectServerDefaults.AuthenticationScheme);

	// Set the list of scopes granted to the client application.
	ticket.SetScopes(new[]
	{
		OpenIdConnectConstants.Scopes.OpenId,
		OpenIdConnectConstants.Scopes.Email,
		OpenIdConnectConstants.Scopes.Profile,
		"role",
		"dataEventRecords"
	}.Intersect(request.GetScopes()));

	ticket.SetResources("dataEventRecords");

	return ticket;
}

If you require more examples, or different flows, refer to the excellent openiddict-samples.

Angular Implicit Flow client

The Angular application uses the AuthConfiguration class to set the options required for the OpenID Connect Implicit Flow. The ‘id_token token’ response type is defined so that an access_token is returned as well as the id_token. The jwks_url is required so that the client can get the signing keys from the server and validate the token signature. The userinfo_url and the logoutEndSession_url define the user data URL and the logout URL. These could be removed and the values obtained from the server's discovery document instead. The configuration here has to match the configuration on the server.

import { Injectable } from '@angular/core';

@Injectable()
export class AuthConfiguration {

    // The Issuer Identifier for the OpenID Provider (which is typically obtained during Discovery) MUST exactly match the value of the iss (issuer) Claim.
    public iss = 'https://localhost:44319/';

    public server = 'https://localhost:44319';

    public redirect_url = 'https://localhost:44308';

    // This is required to get the signing keys so that the signature of the JWT can be validated.
    public jwks_url = 'https://localhost:44319/.well-known/jwks';

    public userinfo_url = 'https://localhost:44319/api/userinfo';

    public logoutEndSession_url = 'https://localhost:44319/connect/logout';

    // The Client MUST validate that the aud (audience) Claim contains its client_id value registered at the Issuer identified by the iss (issuer) Claim as an audience.
    // The ID Token MUST be rejected if the ID Token does not list the Client as a valid audience, or if it contains additional audiences not trusted by the Client.
    public client_id = 'angular4client';

    public response_type = 'id_token token';

    public scope = 'dataEventRecords openid';

    public post_logout_redirect_uri = 'https://localhost:44308/Unauthorized';
}

The OidcSecurityService is used to send the login request to the server and also handle the callback which validates the tokens. This class also persists the token data to the local storage.

import { Injectable } from '@angular/core';
import { Http, Response, Headers } from '@angular/http';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/catch';
import { Observable } from 'rxjs/Rx';
import { Router } from '@angular/router';
import { AuthConfiguration } from '../auth.configuration';
import { OidcSecurityValidation } from './oidc.security.validation';
import { JwtKeys } from './jwtkeys';

@Injectable()
export class OidcSecurityService {

    public HasAdminRole: boolean;
    public HasUserAdminRole: boolean;
    public UserData: any;

    private _isAuthorized: boolean;
    private actionUrl: string;
    private headers: Headers;
    private storage: any;
    private oidcSecurityValidation: OidcSecurityValidation;

    private errorMessage: string;
    private jwtKeys: JwtKeys;

    constructor(private _http: Http, private _configuration: AuthConfiguration, private _router: Router) {

        this.actionUrl = _configuration.server + 'api/DataEventRecords/';
        this.oidcSecurityValidation = new OidcSecurityValidation();

        this.headers = new Headers();
        this.headers.append('Content-Type', 'application/json');
        this.headers.append('Accept', 'application/json');
        this.storage = sessionStorage; //localStorage;

        if (this.retrieve('_isAuthorized') !== '') {
            this.HasAdminRole = this.retrieve('HasAdminRole');
            this._isAuthorized = this.retrieve('_isAuthorized');
        }
    }

    public IsAuthorized(): boolean {
        if (this._isAuthorized) {
            if (this.oidcSecurityValidation.IsTokenExpired(this.retrieve('authorizationDataIdToken'))) {
                console.log('IsAuthorized: isTokenExpired');
                this.ResetAuthorizationData();
                return false;
            }

            return true;
        }

        return false;
    }

    public GetToken(): any {
        return this.retrieve('authorizationData');
    }

    public ResetAuthorizationData() {
        this.store('authorizationData', '');
        this.store('authorizationDataIdToken', '');

        this._isAuthorized = false;
        this.HasAdminRole = false;
        this.store('HasAdminRole', false);
        this.store('_isAuthorized', false);
    }

    public SetAuthorizationData(token: any, id_token: any) {
        if (this.retrieve('authorizationData') !== '') {
            this.store('authorizationData', '');
        }

        console.log(token);
        console.log(id_token);
        console.log('storing to storage, getting the roles');
        this.store('authorizationData', token);
        this.store('authorizationDataIdToken', id_token);
        this._isAuthorized = true;
        this.store('_isAuthorized', true);

        this.getUserData()
            .subscribe(data => this.UserData = data,
            error => this.HandleError(error),
            () => {
                for (let i = 0; i < this.UserData.role.length; i++) {
                    console.log(this.UserData.role[i]);
                    if (this.UserData.role[i] === 'dataEventRecords.admin') {
                        this.HasAdminRole = true;
                        this.store('HasAdminRole', true);
                    }
                    if (this.UserData.role[i] === 'admin') {
                        this.HasUserAdminRole = true;
                        this.store('HasUserAdminRole', true);
                    }
                }
            });
    }

    public Authorize() {
        this.ResetAuthorizationData();

        console.log('BEGIN Authorize, no auth data');

        let authorizationUrl = this._configuration.server + '/connect/authorize';
        let client_id = this._configuration.client_id;
        let redirect_uri = this._configuration.redirect_url;
        let response_type = this._configuration.response_type;
        let scope = this._configuration.scope;
        let nonce = 'N' + Math.random() + '' + Date.now();
        let state = Date.now() + '' + Math.random();

        this.store('authStateControl', state);
        this.store('authNonce', nonce);
        console.log('AuthorizedController created. adding myautostate: ' + this.retrieve('authStateControl'));

        let url =
            authorizationUrl + '?' +
            'response_type=' + encodeURI(response_type) + '&' +
            'client_id=' + encodeURI(client_id) + '&' +
            'redirect_uri=' + encodeURI(redirect_uri) + '&' +
            'scope=' + encodeURI(scope) + '&' +
            'nonce=' + encodeURI(nonce) + '&' +
            'state=' + encodeURI(state);

        window.location.href = url;
    }

    public AuthorizedCallback() {
        console.log('BEGIN AuthorizedCallback, no auth data');
        this.ResetAuthorizationData();

        let hash = window.location.hash.substr(1);

        let result: any = hash.split('&').reduce(function (result: any, item: string) {
            let parts = item.split('=');
            result[parts[0]] = parts[1];
            return result;
        }, {});

        console.log(result);
        console.log('AuthorizedCallback created, begin token validation');

        let token = '';
        let id_token = '';
        let authResponseIsValid = false;

        this.getSigningKeys()
            .subscribe(jwtKeys => {
                this.jwtKeys = jwtKeys;

                if (!result.error) {

                    // validate state
                    if (this.oidcSecurityValidation.ValidateStateFromHashCallback(result.state, this.retrieve('authStateControl'))) {
                        token = result.access_token;
                        id_token = result.id_token;
                        let decoded: any;
                        let headerDecoded;
                        decoded = this.oidcSecurityValidation.GetPayloadFromToken(id_token, false);
                        headerDecoded = this.oidcSecurityValidation.GetHeaderFromToken(id_token, false);

                        // validate jwt signature
                        if (this.oidcSecurityValidation.Validate_signature_id_token(id_token, this.jwtKeys)) {
                            // validate nonce
                            if (this.oidcSecurityValidation.Validate_id_token_nonce(decoded, this.retrieve('authNonce'))) {
                                // validate iss
                                if (this.oidcSecurityValidation.Validate_id_token_iss(decoded, this._configuration.iss)) {
                                    // validate aud
                                    if (this.oidcSecurityValidation.Validate_id_token_aud(decoded, this._configuration.client_id)) {
                                        // validate at_hash and access_token
                                        if (this.oidcSecurityValidation.Validate_id_token_at_hash(token, decoded.at_hash) || !token) {
                                            this.store('authNonce', '');
                                            this.store('authStateControl', '');

                                            authResponseIsValid = true;
                                            console.log('AuthorizedCallback state, nonce, iss, aud, signature validated, returning token');
                                        } else {
                                            console.log('AuthorizedCallback incorrect at_hash');
                                        }
                                    } else {
                                        console.log('AuthorizedCallback incorrect aud');
                                    }
                                } else {
                                    console.log('AuthorizedCallback incorrect iss');
                                }
                            } else {
                                console.log('AuthorizedCallback incorrect nonce');
                            }
                        } else {
                            console.log('AuthorizedCallback incorrect Signature id_token');
                        }
                    } else {
                        console.log('AuthorizedCallback incorrect state');
                    }
                }

                if (authResponseIsValid) {
                    this.SetAuthorizationData(token, id_token);
                    console.log(this.retrieve('authorizationData'));

                    // router navigate to DataEventRecordsList
                    this._router.navigate(['/dataeventrecords/list']);
                } else {
                    this.ResetAuthorizationData();
                    this._router.navigate(['/Unauthorized']);
                }
            });
    }

    public Logoff() {
        // /connect/endsession?id_token_hint=...&post_logout_redirect_uri=https://myapp.com
        console.log('BEGIN Authorize, no auth data');

        let authorizationEndsessionUrl = this._configuration.logoutEndSession_url;

        let id_token_hint = this.retrieve('authorizationDataIdToken');
        let post_logout_redirect_uri = this._configuration.post_logout_redirect_uri;

        let url =
            authorizationEndsessionUrl + '?' +
            'id_token_hint=' + encodeURI(id_token_hint) + '&' +
            'post_logout_redirect_uri=' + encodeURI(post_logout_redirect_uri);

        this.ResetAuthorizationData();

        window.location.href = url;
    }

    private runGetSigningKeys() {
        this.getSigningKeys()
            .subscribe(
            jwtKeys => this.jwtKeys = jwtKeys,
            error => this.errorMessage = <any>error);
    }

    private getSigningKeys(): Observable<JwtKeys> {
        return this._http.get(this._configuration.jwks_url)
            .map(this.extractData)
            .catch(this.handleError);
    }

    private extractData(res: Response) {
        let body = res.json();
        return body;
    }

    private handleError(error: Response | any) {
        // In a real world app, you might use a remote logging infrastructure
        let errMsg: string;
        if (error instanceof Response) {
            const body = error.json() || '';
            const err = body.error || JSON.stringify(body);
            errMsg = `${error.status} - ${error.statusText || ''} ${err}`;
        } else {
            errMsg = error.message ? error.message : error.toString();
        }
        console.error(errMsg);
        return Observable.throw(errMsg);
    }

    public HandleError(error: any) {
        console.log(error);
        if (error.status == 403) {
            this._router.navigate(['/Forbidden']);
        } else if (error.status == 401) {
            this.ResetAuthorizationData();
            this._router.navigate(['/Unauthorized']);
        }
    }

    private retrieve(key: string): any {
        let item = this.storage.getItem(key);

        if (item && item !== 'undefined') {
            return JSON.parse(this.storage.getItem(key));
        }

        return;
    }

    private store(key: string, value: any) {
        this.storage.setItem(key, JSON.stringify(value));
    }

    private getUserData = (): Observable<string[]> => {
        this.setHeaders();
        return this._http.get(this._configuration.userinfo_url, {
            headers: this.headers,
            body: ''
        }).map(res => res.json());
    }

    private setHeaders() {
        this.headers = new Headers();
        this.headers.append('Content-Type', 'application/json');
        this.headers.append('Accept', 'application/json');

        let token = this.GetToken();

        if (token !== '') {
            this.headers.append('Authorization', 'Bearer ' + token);
        }
    }
}

The OidcSecurityValidation class defines the functions used to validate the tokens defined in the OpenID Connect specification for the Implicit Flow.

import { Injectable } from '@angular/core';

// from jsrsasign
declare var KJUR: any;
declare var KEYUTIL: any;
declare var hextob64u: any;

// http://openid.net/specs/openid-connect-implicit-1_0.html

// id_token
//// id_token C1: The Issuer Identifier for the OpenID Provider (which is typically obtained during Discovery) MUST exactly match the value of the iss (issuer) Claim.
//// id_token C2: The Client MUST validate that the aud (audience) Claim contains its client_id value registered at the Issuer identified by the iss (issuer) Claim as an audience.The ID Token MUST be rejected if the ID Token does not list the Client as a valid audience, or if it contains additional audiences not trusted by the Client.
// id_token C3: If the ID Token contains multiple audiences, the Client SHOULD verify that an azp Claim is present.
// id_token C4: If an azp (authorized party) Claim is present, the Client SHOULD verify that its client_id is the Claim Value.
//// id_token C5: The Client MUST validate the signature of the ID Token according to JWS [JWS] using the algorithm specified in the alg Header Parameter of the JOSE Header. The Client MUST use the keys provided by the Issuer.
//// id_token C6: The alg value SHOULD be RS256. Validation of tokens using other signing algorithms is described in the OpenID Connect Core 1.0 [OpenID.Core] specification.
//// id_token C7: The current time MUST be before the time represented by the exp Claim (possibly allowing for some small leeway to account for clock skew).
// id_token C8: The iat Claim can be used to reject tokens that were issued too far away from the current time, limiting the amount of time that nonces need to be stored to prevent attacks.The acceptable range is Client specific.
//// id_token C9: The value of the nonce Claim MUST be checked to verify that it is the same value as the one that was sent in the Authentication Request.The Client SHOULD check the nonce value for replay attacks.The precise method for detecting replay attacks is Client specific.
// id_token C10: If the acr Claim was requested, the Client SHOULD check that the asserted Claim Value is appropriate.The meaning and processing of acr Claim Values is out of scope for this document.
// id_token C11: When a max_age request is made, the Client SHOULD check the auth_time Claim value and request re- authentication if it determines too much time has elapsed since the last End- User authentication.

//// Access Token Validation
//// access_token C1: Hash the octets of the ASCII representation of the access_token with the hash algorithm specified in JWA[JWA] for the alg Header Parameter of the ID Token's JOSE Header. For instance, if the alg is RS256, the hash algorithm used is SHA-256.
//// access_token C2: Take the left- most half of the hash and base64url- encode it.
//// access_token C3: The value of at_hash in the ID Token MUST match the value produced in the previous step if at_hash is present in the ID Token.

@Injectable()
export class OidcSecurityValidation {

    // id_token C7: The current time MUST be before the time represented by the exp Claim (possibly allowing for some small leeway to account for clock skew).
    public IsTokenExpired(token: string, offsetSeconds?: number): boolean {

        let decoded: any;
        decoded = this.GetPayloadFromToken(token, false);

        let tokenExpirationDate = this.getTokenExpirationDate(decoded);
        offsetSeconds = offsetSeconds || 0;

        if (tokenExpirationDate == null) {
            return false;
        }

        // Token expired?
        return !(tokenExpirationDate.valueOf() > (new Date().valueOf() + (offsetSeconds * 1000)));
    }

    // id_token C9: The value of the nonce Claim MUST be checked to verify that it is the same value as the one that was sent in the Authentication Request.The Client SHOULD check the nonce value for replay attacks.The precise method for detecting replay attacks is Client specific.
    public Validate_id_token_nonce(dataIdToken: any, local_nonce: any): boolean {
        if (dataIdToken.nonce !== local_nonce) {
            console.log('Validate_id_token_nonce failed');
            return false;
        }

        return true;
    }

    // id_token C1: The Issuer Identifier for the OpenID Provider (which is typically obtained during Discovery) MUST exactly match the value of the iss (issuer) Claim.
    public Validate_id_token_iss(dataIdToken: any, iss: any): boolean {
        if (dataIdToken.iss !== iss) {
            console.log('Validate_id_token_iss failed');
            return false;
        }

        return true;
    }

    // id_token C2: The Client MUST validate that the aud (audience) Claim contains its client_id value registered at the Issuer identified by the iss (issuer) Claim as an audience.
    // The ID Token MUST be rejected if the ID Token does not list the Client as a valid audience, or if it contains additional audiences not trusted by the Client.
    public Validate_id_token_aud(dataIdToken: any, aud: any): boolean {
        if (dataIdToken.aud !== aud) {
            console.log('Validate_id_token_aud failed');
            return false;
        }

        return true;
    }

    public ValidateStateFromHashCallback(state: any, local_state: any): boolean {
        if (state !== local_state) {
            console.log('ValidateStateFromHashCallback failed');
            return false;
        }

        return true;
    }

    public GetPayloadFromToken(token: any, encode: boolean) {
        let data = {};
        if (typeof token !== 'undefined') {
            let encoded = token.split('.')[1];
            if (encode) {
                return encoded;
            }
            data = JSON.parse(this.urlBase64Decode(encoded));
        }

        return data;
    }

    public GetHeaderFromToken(token: any, encode: boolean) {
        let data = {};
        if (typeof token !== 'undefined') {
            let encoded = token.split('.')[0];
            if (encode) {
                return encoded;
            }
            data = JSON.parse(this.urlBase64Decode(encoded));
        }

        return data;
    }

    public GetSignatureFromToken(token: any, encode: boolean) {
        let data = {};
        if (typeof token !== 'undefined') {
            let encoded = token.split('.')[2];
            if (encode) {
                return encoded;
            }
            data = JSON.parse(this.urlBase64Decode(encoded));
        }

        return data;
    }

    // id_token C5: The Client MUST validate the signature of the ID Token according to JWS [JWS] using the algorithm specified in the alg Header Parameter of the JOSE Header. The Client MUST use the keys provided by the Issuer.
    // id_token C6: The alg value SHOULD be RS256. Validation of tokens using other signing algorithms is described in the OpenID Connect Core 1.0 [OpenID.Core] specification.
    public Validate_signature_id_token(id_token: any, jwtkeys: any): boolean {

        if (!jwtkeys || !jwtkeys.keys) {
            return false;
        }

        let header_data: any = this.GetHeaderFromToken(id_token, false);
        let kid = header_data.kid;
        let alg = header_data.alg;

        if ('RS256' != alg) {
            console.log('Only RS256 supported');
            return false;
        }

        let isValid = false;

        for (let key of jwtkeys.keys) {
            if (key.kid === kid) {
                let publickey = KEYUTIL.getKey(key);
                isValid = KJUR.jws.JWS.verify(id_token, publickey, ['RS256']);
                return isValid;
            }
        }

        return isValid;
    }

    // Access Token Validation
    // access_token C1: Hash the octets of the ASCII representation of the access_token with the hash algorithm specified in JWA[JWA] for the alg Header Parameter of the ID Token's JOSE Header. For instance, if the alg is RS256, the hash algorithm used is SHA-256.
    // access_token C2: Take the left- most half of the hash and base64url- encode it.
    // access_token C3: The value of at_hash in the ID Token MUST match the value produced in the previous step if at_hash is present in the ID Token.
    public Validate_id_token_at_hash(access_token: any, at_hash: any): boolean {

        let hash = KJUR.crypto.Util.hashString(access_token, 'sha256');
        let first128bits = hash.substr(0, hash.length / 2);
        let testdata = hextob64u(first128bits);

        if (testdata === at_hash) {
            return true; // isValid;
        }

        return false;
    }

    private getTokenExpirationDate(dataIdToken: any): Date {
        if (!dataIdToken.hasOwnProperty('exp')) {
            return null;
        }

        let date = new Date(0); // The 0 here is the key, which sets the date to the epoch
        date.setUTCSeconds(dataIdToken.exp);

        return date;
    }


    private urlBase64Decode(str: string) {
        let output = str.replace(/-/g, '+').replace(/_/g, '/'); // replace all occurrences, not just the first
        switch (output.length % 4) {
            case 0:
                break;
            case 2:
                output += '==';
                break;
            case 3:
                output += '=';
                break;
            default:
                throw 'Illegal base64url string!';
        }

        return window.atob(output);
    }
}

The jsrsasign library is used to validate the token signature and is referenced in the HTML file via a script tag.

<!doctype html>
<html>
<head>
    <base href="./">
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>ASP.NET Core 1.0 Angular IdentityServer4 Client</title>
    <meta http-equiv="content-type" content="text/html; charset=utf-8" />
	
	<script src="assets/jsrsasign.min.js"></script>
</head>
<body>
    <my-app>Loading...</my-app>
</body>
</html>

Once logged into the application, the access_token is added to the header of each request and sent to the resource server or the required APIs on the OpenIddict server.

 private setHeaders() {

        console.log('setHeaders started');

        this.headers = new Headers();
        this.headers.append('Content-Type', 'application/json');
        this.headers.append('Accept', 'application/json');
        this.headers.append('Cache-Control', 'no-cache');

        let token = this._securityService.GetToken();
        if (token !== '') {
            let tokenValue = 'Bearer ' + token;
            console.log('tokenValue:' + tokenValue);
            this.headers.append('Authorization', tokenValue);
        }
    }

ASP.NET Core Resource Server API

The resource server provides an API protected by security policies, dataEventRecordsUser and dataEventRecordsAdmin.

using AspNet5SQLite.Model;
using AspNet5SQLite.Repositories;

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace AspNet5SQLite.Controllers
{
    [Authorize]
    [Route("api/[controller]")]
    public class DataEventRecordsController : Controller
    {
        private readonly IDataEventRecordRepository _dataEventRecordRepository;

        public DataEventRecordsController(IDataEventRecordRepository dataEventRecordRepository)
        {
            _dataEventRecordRepository = dataEventRecordRepository;
        }

        [Authorize("dataEventRecordsUser")]
        [HttpGet]
        public IActionResult Get()
        {
            return Ok(_dataEventRecordRepository.GetAll());
        }

        [Authorize("dataEventRecordsAdmin")]
        [HttpGet("{id}")]
        public IActionResult Get(long id)
        {
            return Ok(_dataEventRecordRepository.Get(id));
        }

        [Authorize("dataEventRecordsAdmin")]
        [HttpPost]
        public void Post([FromBody]DataEventRecord value)
        {
            _dataEventRecordRepository.Post(value);
        }

        [Authorize("dataEventRecordsAdmin")]
        [HttpPut("{id}")]
        public void Put(long id, [FromBody]DataEventRecord value)
        {
            _dataEventRecordRepository.Put(id, value);
        }

        [Authorize("dataEventRecordsAdmin")]
        [HttpDelete("{id}")]
        public void Delete(long id)
        {
            _dataEventRecordRepository.Delete(id);
        }
    }
}

The policies are implemented in the Startup class using the role claims dataEventRecords.user and dataEventRecords.admin, and the scope dataEventRecords.

var guestPolicy = new AuthorizationPolicyBuilder()
	.RequireAuthenticatedUser()
	.RequireClaim("scope", "dataEventRecords")
	.Build();

services.AddAuthorization(options =>
{
	options.AddPolicy("dataEventRecordsAdmin", policyAdmin =>
	{
		policyAdmin.RequireClaim("role", "dataEventRecords.admin");
	});
	options.AddPolicy("dataEventRecordsUser", policyUser =>
	{
		policyUser.RequireClaim("role",  "dataEventRecords.user");
	});

});

JWT Bearer authentication is used to validate the API HTTP requests.

JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
JwtSecurityTokenHandler.DefaultOutboundClaimTypeMap.Clear();
			
app.UseJwtBearerAuthentication(new JwtBearerOptions
{
	Authority = "https://localhost:44319/",
	Audience = "dataEventRecords",
	RequireHttpsMetadata = true,
	TokenValidationParameters = new TokenValidationParameters
	{
		NameClaimType = OpenIdConnectConstants.Claims.Subject,
		RoleClaimType = OpenIdConnectConstants.Claims.Role
	}
});

Running the application

When the application is started, all three applications are run using the Visual Studio 2017 multiple-project start option.

After the user clicks the login button, the user is redirected to the OpenIddict server to login.

After a successful login, the user is redirected back to the Angular application.

Links:

https://github.com/openiddict/openiddict-core

http://kevinchalet.com/2016/07/13/creating-your-own-openid-connect-server-with-asos-implementing-the-authorization-code-and-implicit-flows/

https://github.com/openiddict/openiddict-core/issues/49

https://github.com/openiddict/openiddict-samples

https://blogs.msdn.microsoft.com/webdev/2017/01/23/asp-net-core-authentication-with-identityserver4/

https://blogs.msdn.microsoft.com/webdev/2016/10/27/bearer-token-authentication-in-asp-net-core/

https://blogs.msdn.microsoft.com/webdev/2017/04/06/jwt-validation-and-authorization-in-asp-net-core/

https://jwt.io/

https://www.scottbrady91.com/OpenID-Connect/OpenID-Connect-Flows



Andrew Lock: Creating and editing solution files with the .NET CLI

Creating and editing solution files with the .NET CLI

With the release of Visual Studio 2017 and the RTM .NET Core tooling, the .NET command line has gone through a transformation. The project.json format is no more, and instead we have returned back to .csproj files. It's not your grand-daddy's .csproj however - the new .csproj is far leaner than previous MSBuild files, and massively reduces the reliance on GUIDs.

One of the biggest reasons for this is the need to make the files easily editable by hand. With .NET Core being cross platform, relying on Visual Studio to edit the files correctly with the magic GUIDs is no longer acceptable.

As well as a switch from project.json to csproj, the global.json file is no more - instead we're back to .sln files. These are primarily for when you're working with Visual Studio, and they're not strictly necessary for building .NET Core applications. In some cases though, if you're working in a cross-platform environment, you may need to edit .sln files on macOS or Linux.

Unfortunately, GUIDs in .sln files have survived the great .NET Core purge of 2017, so editing the files by hand isn't particularly fun. For example, the following .sln file contains two projects - a source code project and a test project:

Microsoft Visual Studio Solution File, Format Version 12.00  
# Visual Studio 15
VisualStudioVersion = 15.0.26124.0  
MinimumVisualStudioVersion = 15.0.26124.0  
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "src", "src", "{2C14C847-9839-4C69-A5A0-C95D64DAECF2}"  
EndProject  
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "CliApp", "src\CliApp\CliApp.csproj", "{D4DDD205-C160-4179-B8CF-B98E5066A187}"  
EndProject  
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "test", "test", "{08F7408C-CA01-4495-A30C-F16F3FCBFDF2}"  
EndProject  
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "CliAppTests", "test\CliAppTests\CliAppTests.csproj", "{4283F8CC-9575-48E5-AD4C-B628DB5D6301}"  
EndProject  
Global  
    GlobalSection(SolutionConfigurationPlatforms) = preSolution
        Debug|Any CPU = Debug|Any CPU
        Debug|x64 = Debug|x64
        Debug|x86 = Debug|x86
        Release|Any CPU = Release|Any CPU
        Release|x64 = Release|x64
        Release|x86 = Release|x86
    EndGlobalSection
    GlobalSection(SolutionProperties) = preSolution
        HideSolutionNode = FALSE
    EndGlobalSection
    GlobalSection(ProjectConfigurationPlatforms) = postSolution
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Debug|Any CPU.Build.0 = Debug|Any CPU
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Debug|x64.ActiveCfg = Debug|x64
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Debug|x64.Build.0 = Debug|x64
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Debug|x86.ActiveCfg = Debug|x86
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Debug|x86.Build.0 = Debug|x86
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Release|Any CPU.ActiveCfg = Release|Any CPU
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Release|Any CPU.Build.0 = Release|Any CPU
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Release|x64.ActiveCfg = Release|x64
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Release|x64.Build.0 = Release|x64
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Release|x86.ActiveCfg = Release|x86
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Release|x86.Build.0 = Release|x86
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Debug|Any CPU.Build.0 = Debug|Any CPU
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Debug|x64.ActiveCfg = Debug|x64
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Debug|x64.Build.0 = Debug|x64
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Debug|x86.ActiveCfg = Debug|x86
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Debug|x86.Build.0 = Debug|x86
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Release|Any CPU.ActiveCfg = Release|Any CPU
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Release|Any CPU.Build.0 = Release|Any CPU
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Release|x64.ActiveCfg = Release|x64
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Release|x64.Build.0 = Release|x64
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Release|x86.ActiveCfg = Release|x86
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Release|x86.Build.0 = Release|x86
    EndGlobalSection
    GlobalSection(NestedProjects) = preSolution
        {D4DDD205-C160-4179-B8CF-B98E5066A187} = {2C14C847-9839-4C69-A5A0-C95D64DAECF2}
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301} = {08F7408C-CA01-4495-A30C-F16F3FCBFDF2}
    EndGlobalSection
EndGlobal

A bit overwhelming right? Luckily, the .NET Core command line provides a number of commands for creating and editing these files, so you don't have to dive into them with a text editor directly.

Creating a new solution file

In this example, I'll assume you've already created a couple of projects. You can use dotnet new to achieve this, whether you're creating a command line app, web app, library or test project.

You can also create your own dotnet new templates using new experimental features that should be available in stable form for .NET Core 2.0. You can read about these features here.

You can create a new solution file in the current directory using:

dotnet new sln

You can also provide an optional name for the .sln file using --name filename; otherwise, it will have the same name as the current folder.

$ dotnet new sln --name test
Content generation time: 20.8484 ms  
The template "Solution File" created successfully.  

This will create a new .sln file in the current folder. The solution file currently doesn't have any associated projects, but defines a number of build configurations. The command above creates the following file:

Microsoft Visual Studio Solution File, Format Version 12.00  
# Visual Studio 15
VisualStudioVersion = 15.0.26124.0  
MinimumVisualStudioVersion = 15.0.26124.0  
Global  
    GlobalSection(SolutionConfigurationPlatforms) = preSolution
        Debug|Any CPU = Debug|Any CPU
        Debug|x64 = Debug|x64
        Debug|x86 = Debug|x86
        Release|Any CPU = Release|Any CPU
        Release|x64 = Release|x64
        Release|x86 = Release|x86
    EndGlobalSection
    GlobalSection(SolutionProperties) = preSolution
        HideSolutionNode = FALSE
    EndGlobalSection
EndGlobal  

Adding a project to a solution file

Once you have a solution file, you can add a project to it using the sln add command, and provide the path to the project's .csproj file. This will add the project to an existing solution file in the current folder. The path to the project can be absolute or relative, but it will be added as a relative path in the .sln file.

dotnet sln add <path-to-project.csproj>

For example, to add a project located at src/CliApp/CliApp.csproj, when you have a single solution file in your current directory, you can use the following:

$ dotnet sln add "src\CliApp\CliApp.csproj"
Project `src\CliApp\CliApp.csproj` added to the solution.  

After running this, you'll see your project has been added to the .sln file, along with a src solution folder:

Microsoft Visual Studio Solution File, Format Version 12.00  
# Visual Studio 15
VisualStudioVersion = 15.0.26124.0  
MinimumVisualStudioVersion = 15.0.26124.0  
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "src", "src", "{FFEC406A-FBFB-4737-8C32-1CF34FAF2D6F}"  
EndProject  
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "CliApp", "src\CliApp\CliApp.csproj", "{92B636D5-2C14-4445-B8C1-BBF93A03FA5D}"  
EndProject  
Global  
    GlobalSection(SolutionConfigurationPlatforms) = preSolution
        Debug|Any CPU = Debug|Any CPU
        Debug|x64 = Debug|x64
        Debug|x86 = Debug|x86
        Release|Any CPU = Release|Any CPU
        Release|x64 = Release|x64
        Release|x86 = Release|x86
    EndGlobalSection
    GlobalSection(SolutionProperties) = preSolution
        HideSolutionNode = FALSE
    EndGlobalSection
    GlobalSection(ProjectConfigurationPlatforms) = postSolution
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Debug|Any CPU.Build.0 = Debug|Any CPU
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Debug|x64.ActiveCfg = Debug|x64
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Debug|x64.Build.0 = Debug|x64
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Debug|x86.ActiveCfg = Debug|x86
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Debug|x86.Build.0 = Debug|x86
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Release|Any CPU.ActiveCfg = Release|Any CPU
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Release|Any CPU.Build.0 = Release|Any CPU
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Release|x64.ActiveCfg = Release|x64
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Release|x64.Build.0 = Release|x64
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Release|x86.ActiveCfg = Release|x86
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Release|x86.Build.0 = Release|x86
    EndGlobalSection
    GlobalSection(NestedProjects) = preSolution
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D} = {FFEC406A-FBFB-4737-8C32-1CF34FAF2D6F}
    EndGlobalSection
EndGlobal  

Adding a project to a specific solution file

If you have multiple solution files in the current directory, then trying to run the previous command will give you an error similar to the following:

$ dotnet sln add "test\CliAppTests\CliAppTests.csproj"
Found more than one solution file in C:\Users\Sock\Repos\andrewlock\example\. Please specify which one to use.  

Instead, you must specify the name of the solution you wish to amend, by placing the path to the solution after sln:

dotnet sln <path-to-solution.sln> add <path-to-project.csproj>

For example,

dotnet sln "example.sln" add "test\CliAppTests\CliAppTests.csproj"

Note, when I first ran this command, I incorrectly placed the add parameter before the solution name, using dotnet sln add <solution> <project>. Unfortunately, this currently gives you a slightly confusing error (tracked here): Unhandled Exception: Microsoft.Build.Exceptions.InvalidProjectFileException: The project file could not be loaded. Data at the root level is invalid. Line 2, position 1.

Removing a project from a solution file

Removing a project from your solution is the mirror of adding one. As before, you need to use the path to the .csproj file rather than just the name of the project folder, and if you have multiple .sln files in the current folder you need to specify which one to modify:

dotnet sln remove <path-to-project.csproj>

or

dotnet sln <path-to-solution.sln> remove <path-to-project.csproj>

This will remove the specified project, along with any associated solution folder nodes:

$ dotnet sln remove src/CliApp/CliApp.csproj
Project reference `src\CliApp\CliApp.csproj` removed.  

Listing the projects in a solution file

The final command exposed by the .NET CLI is the ability to list the projects in the solution file, instead of having to open it up and wade through the litany of GUIDs:

dotnet sln list

Note that this command not only lists the project names (the paths to the .csproj files), it also lists the solution folders in which they reside. For example, the following solution contains two projects, one inside the src folder and one inside the test folder.

Listing the projects in this solution gives the following:

$ dotnet sln list
Project reference(s)  
--------------------
test  
test\CliAppTests\CliAppTests.csproj  
src  
src\CliApp\CliApp.csproj  

Summary

With .NET Core, cross-platform development is a genuine first-class citizen. With the .NET CLI, you can now manage your .sln files without needing to use Visual Studio or mess with GUIDs in a text editor. This lets you create solution files, and add, remove, and list projects. To see all the options available to you, run dotnet sln --help.


Andrew Lock: Getting started with ASP.NET Core

Getting started with ASP.NET Core

In February 2017, the Manning Early Access Program (MEAP) started for the ASP.NET Core book I am currently writing - ASP.NET Core in Action. This post gives you a sample of what you can find in the book. If you like what you see, please take a look - for now you can even get a 37% discount with the code lockaspdotnet!

The Manning Early Access Program provides you full access to books as they are written. You get the chapters as they are produced, plus the finished eBook as soon as it’s ready, and the paper book long before it’s in bookstores. You can also interact with the author (me!) on the forums to provide feedback as the book is being written.

When to choose ASP.NET Core

I’m going to assume that you’ve a general grasp of what ASP.NET Core is and how it was designed, but the question remains – should you use it? Microsoft is heavily promoting ASP.NET Core as their web framework of choice for the foreseeable future, but switching to or learning a new web stack is a big task for any developer or company. This article describes some of the highlights of ASP.NET Core and gives advice on the type of applications to build with it, as well as the type of applications to avoid.

What type of applications can you build?

ASP.NET Core provides a generalised web framework that can be used in a wide variety of applications. It can most obviously be used for building rich, dynamic websites, whether they are e-commerce sites, content-based websites, or n-tier applications – much the same as the previous version of ASP.NET.

A small number of third-party helper libraries are available for building this sort of complex application, but many are under active development. Many developers are working to port their libraries to work with ASP.NET Core – but it’ll take time for more to become available. For example, the open-source content management system (CMS) Orchard (figure 1) is currently available as an alpha version of Orchard 2, running on ASP.NET Core and .NET Core.

Figure 1. The ASP.NET Community blogs website (https://weblogs.asp.net) is built using the Orchard CMS. Orchard 2 is available as a pre-alpha version for ASP.NET Core development.

Traditional, server-side rendered web applications are the bread and butter of ASP.NET development, both with the previous version of ASP.NET and ASP.NET Core. Additionally, single page applications (SPAs), which use a client-side framework that talks to a REST server, are easy to create with ASP.NET Core. Whether you’re using Angular, Ember, React, or some other client-side framework, it’s easy to create an ASP.NET Core application to act as the server-side API.

DEFINITION REST stands for Representational State Transfer. RESTful applications typically use lightweight and stateless HTTP calls to read, post (create/update), and delete data.

ASP.NET Core isn’t restricted to creating RESTful services. It’s also easy to create a web-service or remote procedure call (RPC)-style service for your application, depending on your requirements, as shown in figure 2. In the simplest case, your application might expose only a single endpoint, narrowing its scope to become a microservice. ASP.NET Core is perfectly designed for building simple services thanks to its cross-platform support and lightweight design.

Figure 2. ASP.NET Core can act as the server-side application for a variety of different clients. It can serve HTML pages for traditional web applications, act as a REST API for client-side SPA applications, or act as an ad-hoc RPC service for client applications.
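
To make the single-endpoint case concrete, here is a minimal sketch (not taken from the article; the PingStartup class name is purely illustrative) of an ASP.NET Core Startup class that answers every request with a single terminal middleware:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public class PingStartup
{
    public void Configure(IApplicationBuilder app)
    {
        // One terminal middleware handles every request - the whole "microservice"
        app.Run(async context =>
        {
            context.Response.ContentType = "text/plain";
            await context.Response.WriteAsync("pong");
        });
    }
}

Because nothing else is registered, the application carries no MVC or Razor overhead - it is just the web server and this one handler.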

You must consider multiple factors when choosing a platform, not all of which are technical. One example is the level of support you can expect to receive from the creators. For some organizations, this can be one of the main obstacles to adopting open-source software. Luckily, Microsoft has pledged to provide full support for each major and minor point release of the ASP.NET Core framework for three years. Furthermore, as all development takes place in the open, you can sometimes get answers to your questions from the general community, as well as from Microsoft directly.

Two primary dimensions you must consider when deciding whether to use ASP.NET Core are: whether you’re already a .NET developer (or not); and whether you’re creating a new application or looking to convert an existing one.

If you’re new to .NET development

If you’re new to .NET development, and are considering ASP.NET Core, welcome! Microsoft is pushing ASP.NET Core as an attractive option for web development beginners, but taking .NET cross-platform means it’s competing with many other frameworks on their own turf. ASP.NET Core has many selling points when compared to other cross-platform web frameworks:

  • It’s a modern but stable web framework.
  • It uses familiar design patterns and paradigms.
  • C# is a great language.
  • You can build and run on any platform.

ASP.NET Core is a re-imagining of the ASP.NET framework, built with modern software design principles on top of the new .NET Core platform. .NET Core is new in one sense, but has drawn significantly from the mature, stable, and reliable .NET Framework, which has been used for well over a decade. You can rest easy choosing ASP.NET Core and .NET Core because you’ll be getting a dependable platform, as well as a fully featured web framework.

Many of the web frameworks available today use similar, well-established design patterns, and ASP.NET Core is no different. For example, Ruby on Rails is known for its use of the Model-View-Controller (MVC) pattern; node.js is known for the way it processes requests using small discrete modules (called a pipeline); and dependency injection is found in a wide variety of frameworks. If these techniques are familiar, it’s easy to transfer them across to ASP.NET Core; if they’re new to you, you can look forward to using industry best practices!
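
For example, dependency injection works much as it does in other frameworks. The sketch below (the IGreetingService and HomeController names are purely illustrative, not from the article) defines a service and has it injected into an MVC controller via its constructor:

using Microsoft.AspNetCore.Mvc;

// A hypothetical service abstraction and implementation
public interface IGreetingService
{
    string Greet(string name);
}

public class GreetingService : IGreetingService
{
    public string Greet(string name) => "Hello, " + name + "!";
}

public class HomeController : Controller
{
    private readonly IGreetingService _greetingService;

    // The built-in container supplies the service when the controller is created
    public HomeController(IGreetingService greetingService)
    {
        _greetingService = greetingService;
    }

    public IActionResult Index() => Content(_greetingService.Greet("world"));
}

// In Startup.ConfigureServices the mapping is registered with one line:
// services.AddTransient<IGreetingService, GreetingService>();

The same built-in container is used by the framework itself, so your own services and the framework's services are resolved in the same way.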

The primary language of .NET development and ASP.NET Core is C#. This language has a huge following, and for good reason! As an object-oriented, C-based language it will feel familiar to anyone coming from C, Java, and many other languages. In addition, it has many powerful features, such as Language Integrated Query (LINQ), closures, and asynchronous programming constructs. The C# language is also designed in the open on GitHub, as is Microsoft's C# compiler, code-named Roslyn.

NOTE If you wish to learn C#, I recommend picking up C# in Depth by Jon Skeet, also published by Manning (ISBN 9781617291340).

One of the major selling points of ASP.NET Core and .NET Core is the ability to develop and run on any platform. Whether you’re using a Mac, Windows, or Linux, you can run the same ASP.NET Core apps and develop across multiple environments. As a Linux user, a wide range of distributions are supported (RHEL, Ubuntu, Debian, CentOS, Fedora and openSUSE, to name a few), and you can be confident that your operating system of choice will be a viable option. Work is underway to enable ASP.NET Core to run on the tiny Alpine distribution, for truly compact deployments to containers.

Built with containers in mind

Traditionally, web applications were deployed directly to a server, or in more recent years, to a virtual machine. Virtual machines allow operating systems to be installed in a layer of virtual hardware, abstracting away the underlying hardware. This has advantages over direct installation, like easy maintenance, deployment, and recovery. Unfortunately, they’re also heavy on file size and resource utilisation.

This is where containers come in. Containers are far more lightweight and don't have the overhead of virtual machines. They're built in a series of layers and don't require you to boot a new operating system when starting. That means they're quick to start and great for rapid provisioning. Containers and Docker are quickly becoming the go-to platform for building large, scalable systems.

Containers have never been an attractive option for ASP.NET applications, but with ASP.NET Core, .NET Core and Docker for Windows, it’s all changing. A lightweight ASP.NET Core application running on the cross-platform .NET Core framework is perfect for thin container deployments.

As well as running on each platform, one of the selling points of .NET is that you only need to write and compile your code once. Your application is compiled to Intermediate Language (IL) code, which is a platform-independent format. If a target system has the .NET Core platform installed, you can run compiled IL from any platform. That means you can, for example, develop on a MacBook or a Windows machine, and deploy the exact same files to your production Linux machines. This compile-once, run-anywhere promise has finally been realized with ASP.NET Core and .NET Core.

If you’re a .NET Framework developer creating a new application

If you’re currently a .NET developer, then the choice of whether to invest in ASP.NET Core for new applications is a question of timing. Microsoft has pledged to provide continued support for the older ASP.NET framework, but it’s clear their focus is primarily on the newer ASP.NET Core framework. In the long-term, if you wish to take advantage of new features and capabilities, it’s likely that ASP.NET Core will be the route to take.

Whether ASP.NET Core is right for you now largely depends on your requirements, and your comfort with using products that are early in their lifecycle. The main benefits over the previous ASP.NET framework are:

  • Cross-platform development and deployment
  • A focus on performance as a feature
  • A simplified hosting model
  • Faster releases
  • Open-source
  • Modular features

As a .NET developer, if you aren’t using any Windows-specific constructs, such as the Registry, then the ability to build and deploy applications cross-platform opens the door to a whole new avenue of applications. Take advantage of cheaper Linux VM hosting in the cloud; use Docker containers for repeatable continuous integration; or write .NET code on your MacBook without needing to run a Windows virtual machine. ASP.NET Core in combination with .NET Core makes all this possible.

It’s important to be aware of the limitations of cross-platform applications - not all of the .NET Framework APIs are available in .NET Core. It’s likely that most of the APIs you need will make their way to .NET Core over time, but it’s an important point to note.

The hosting model for the previous ASP.NET framework was a relatively complex one, relying on Windows Internet Information Services (IIS) to provide the web server hosting. In cross-platform environments this kind of symbiotic relationship isn't possible, and an alternative hosting model has been adopted, which separates web applications from the underlying host. This opportunity has led to the development of Kestrel, a fast cross-platform HTTP server on which ASP.NET Core can run.

Instead of the previous design, whereby IIS calls into specific points of your application, an ASP.NET Core application is a console application that self-hosts a web server and handles requests directly, as shown in figure 3. This hosting model is conceptually much simpler, and allows you to test and debug your applications from the command line, though it doesn't remove the need to run IIS (or an equivalent reverse proxy) in production.

Figure 3. The difference in hosting models between ASP.NET (top) and ASP.NET Core (bottom). With the previous version of ASP.NET, IIS is tightly coupled to the application, calling into specific exposed methods for different stages of a request. The hosting model in ASP.NET Core is simpler; IIS hands off the request to a self-hosted web server in the ASP.NET Core application and receives the response, but has no deeper knowledge of the application.
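
For reference, a self-hosting entry point along the lines of the standard ASP.NET Core 1.x project template looks something like the following sketch (Startup is assumed to be your application's startup class, and UseKestrel and UseIISIntegration come from their respective server packages):

using System.IO;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        // The application is a console app that builds and runs its own web host
        var host = new WebHostBuilder()
            .UseKestrel()                                    // the cross-platform HTTP server
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()                             // optional: run behind IIS as a reverse proxy
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}

In production you would typically still put IIS or another reverse proxy in front, as the figure above shows for IIS.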

Changing the hosting model to use a built-in HTTP web server has created another opportunity. Performance has been a sore point for ASP.NET applications in the past. It’s possible to build highly performant applications – Stack Overflow (http://stackoverflow.com) is testament to that – but the web framework itself isn’t designed with performance as a priority, and can end up being somewhat of an obstacle.

To be competitive cross-platform, the ASP.NET team recently focused on making the Kestrel HTTP server as fast as possible. TechEmpower (www.techempower.com/benchmarks) have been running benchmarks on a whole range of web frameworks from various languages for several years now. In round thirteen of the plaintext benchmarks, TechEmpower announced that ASP.NET Core with Kestrel was now the fastest mainstream fullstack web framework, and among the top ten fastest of all frameworks!

Web servers – naming things is hard

One of the difficult aspects of programming for the web these days is the confusing array of, often conflicting, terminology. For example, if you've used IIS in the past you may have described it as a web server, or possibly a web host. Conversely, if you've ever built an application using node.js, you may have also referred to that application as a web server. Alternatively, you may have called the physical machines on which your application runs a web server!

Similarly, you may have built an application for the Internet and called it a website or a web application, probably somewhat arbitrarily based on the level of dynamism it displayed.

In this article when I say “web server” in the context of ASP.NET Core, I’m referring to the HTTP server that runs as a part of your ASP.NET Core application. By default, this is the Kestrel web server, but it’s not a requirement. It’d be possible to write a replacement web server and substitute it for Kestrel if you desired.

The web server is responsible for receiving HTTP requests and generating responses. In the previous version of ASP.NET, IIS took this role, but in ASP.NET Core, Kestrel is the web server.

I’ll only use the term web application for describing ASP.NET Core applications, regardless of whether they contain only static content or are completely dynamic. Either way, they’re applications that are accessed via the web, and that name seems appropriate!

Many of the performance improvements made to Kestrel didn't come from the ASP.NET Team themselves, but from contributors to the open-source project on GitHub. Developing in the open means you should typically see fixes and features make their way to production faster than you would for the previous version of ASP.NET, which was dependent on the .NET Framework and, as such, had long release cycles.

In contrast, ASP.NET Core is completely decoupled from the underlying .NET platform. The entire web framework is implemented as modular NuGet packages, which can be versioned and updated independently of the underlying platform on which they are built.

NOTE NuGet is a package manager for .NET that enables importing libraries into your projects. It’s equivalent to Ruby Gems, npm for JavaScript, or Maven for Java.

To enable this, ASP.NET Core was designed to be highly modular, with as little coupling to other features as possible. This modularity lends itself to a pay-for-play approach to dependencies, whereby you start from a bare-bones application and only add the additional libraries you require, as opposed to the kitchen-sink approach of previous ASP.NET applications. Even MVC is an optional package! But don’t worry, this approach doesn’t mean that ASP.NET Core is lacking in features; it just means you need to opt in to those features. Some of the key infrastructure improvements include:

  • Middleware “pipeline” for defining your application’s behaviour
  • Built-in support for dependency injection
  • Combined UI (MVC) and API (Web API) infrastructure
  • Highly extensible configuration system
  • Scalable for cloud platforms by default using asynchronous programming

Each of these features was possible in the previous version of ASP.NET, but required a fair amount of additional work to set up. With ASP.NET Core, they’re all there, ready and waiting to be connected!
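
As a minimal sketch of what that opt-in looks like (assuming the standard Microsoft.AspNetCore.Builder and Microsoft.Extensions.DependencyInjection namespaces), everything is wired up in the Startup class:

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Nothing is registered by default - you opt in to the features you need.
        // MVC itself is just another NuGet package and service registration.
        services.AddMvc();
        // Your own application services are registered with the built-in DI container here.
    }

    public void Configure(IApplicationBuilder app)
    {
        // The middleware "pipeline": each component handles the request in turn.
        app.UseStaticFiles();
        app.UseMvcWithDefaultRoute();
    }
}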

Microsoft fully supports ASP.NET Core, and if you’ve a new system you wish to build, there’s no significant reason not to use it. The largest obstacle you’re likely to come across is a third-party library holding you back, either because it only supports older ASP.NET features, or it hasn’t been converted to work with .NET Core.

Converting an existing ASP.NET application to ASP.NET Core

In contrast to new applications, an existing application is presumably already providing value, and there should always be a tangible benefit to performing what may amount to a significant rewrite in converting from ASP.NET to ASP.NET Core. The advantages to adopting ASP.NET Core are much the same as for new applications; cross-platform deployment, modular features, and a focus on performance. Determining whether the benefits are sufficient depends largely on the particulars of your application, but some characteristics are clear indicators against conversion:

  • Your application uses ASP.NET Web Forms
  • Your application is built on Web Pages, WCF, SignalR, or VB.NET
  • Your application is large, with many “advanced” MVC features

If you’ve an ASP.NET Web Forms application, attempting to convert it to ASP.NET Core isn’t advisable. Web Forms is inextricably tied to System.Web.dll, which will likely never be available in ASP.NET Core. Converting an application to ASP.NET Core would effectively involve rewriting the application from scratch, not only shifting frameworks but also shifting design paradigms. A better approach would be to slowly introduce Web API concepts and try to reduce the reliance on legacy Web Forms constructs, such as ViewState. Numerous resources are available online to help you with this approach, like the www.asp.net/web-api website.

Similarly, if your application makes heavy use of Web Pages or SignalR, then now may not be the time to consider an upgrade. These features are under active development (currently under the monikers “Controller-less Razor Pages” and “SignalR 2.0”), but haven’t been released as part of the ASP.NET Core framework. Similarly, VB.NET is pegged for future support, but currently isn’t part of the framework.

Windows Communication Foundation (WCF) is also currently not supported, but it’s possible to consume WCF services by jumping through some slightly obscure hoops. Currently there’s no way to host a WCF service from an ASP.NET Core application; if you need the features WCF provides, and can’t use a more conventional REST service, then ASP.NET Core is probably best avoided.

If your application is complex and makes use of the previous MVC extensibility points or message handlers, then porting your application to ASP.NET Core could prove complex. ASP.NET Core is built with many similar features to the previous version of ASP.NET MVC, but the underlying architecture is different. Several previous features don’t have direct replacements, and will require re-thinking.

The larger the application, the greater the difficulty you’re likely to have in converting to ASP.NET Core. Microsoft suggests that porting an application from ASP.NET MVC to ASP.NET Core is at least as big a rewrite as porting from ASP.NET Web Forms to ASP.NET MVC. If that doesn’t scare you then nothing will!

When should you port an application to ASP.NET Core? As I’ve discussed, the best opportunity for getting started is on small, green-field, new projects instead of existing applications. That said, if the application in question is small, with little custom behaviour, then porting might be a viable option. Small implies reduced risk, and probably reduced complexity. If your application consists primarily of MVC or Web API controllers and associated Razor views, then moving to ASP.NET Core may be feasible.

Summary

Hopefully this article has kindled your interest in using ASP.NET Core for building your new applications. For more information, download the free first chapter of ASP.NET Core in Action and see this Slideshare presentation. Don’t forget to save 37% with code lockaspdotnet at manning.com.



Anuraj Parameswaran: Working with Azure Blob storage in ASP.NET Core

This post is about uploading and downloading images from Azure Blob storage using ASP.NET Core. First you need to create a blob storage account and then a container which you’ll use to store all the images. You can do this from Azure portal. You need to select Storage > Storage Account - Blob, file, Table, Queue > Create a Storage Account.


Andrew Lock: Adding favicons to your ASP.NET Core website with Real Favicon Generator

Adding favicons to your ASP.NET Core website with Real Favicon Generator

In this post I will show how you can add favicons to your ASP.NET Core MVC application, using the site realfavicongenerator.net.

The days of being able to add a simple favicon.ico to the root of your new web app are long gone. There are so many different browsers, platforms and devices, each of which require slightly different sizes and semantics for their favicons, that figuring out what you actually need to implement can be overwhelming.

Luckily, realfavicongenerator.net does most of the hard work for you. All you need is an initial icon to work with, and it will generate all the image files, and even the markup, for you.

I'll walk through the process of creating a favicon and configuring your ASP.NET Core MVC application to serve it. I'll assume you're starting from an existing ASP.NET Core application that uses a base layout view, _layout.cshtml, and is already configured to serve static files using the StaticFileMiddleware. I'll mostly stick to the defaults for favicons here, but the realfavicongenerator.net site does a great job of explaining all the favicon requirements and options, so feel free to play and find something that works best for you.
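
If your application isn't already serving static files, a minimal Configure method looks something like this sketch (assuming the default MVC template):

public void Configure(IApplicationBuilder app)
{
    // Serves files from the web root (wwwroot), which is where the favicon files will live
    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}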

Thanks to @pbernard, author of RealFaviconGenerator, the instructions for adding favicons to your ASP.NET Core application can now be found in the final step on http://realfavicongenerator.net itself, so they're always to hand. Enjoy!

Adding favicons to your ASP.NET Core website with Real Favicon Generator

1. Design your favicon

Before you can create all the various required permutations, you'll first need to create your base favicon. This needs to be at least 70×70, but if possible, use a larger version that's at least 260×260. The generator works by scaling your image down to the appropriate sizes, so in order to generate the largest required favicons you need to start big.

For this post I'll be using a random image from pixabay, that I've exported (from the SVG) at 300×300:

Adding favicons to your ASP.NET Core website with Real Favicon Generator

2. Import your design into RealFaviconGenerator

Now that we have an icon to work with, go to realfavicongenerator.net and upload it. Click on Select your Favicon picture at the top right of the page, upload your image, and wait for it to finish processing.

Adding favicons to your ASP.NET Core website with Real Favicon Generator

You will now be presented with a breakdown of all the decisions you need to make to support various browsers and devices.

Favicon for iOS

As explained on RealFaviconGenerator, iOS users can pin a site to their home screen, which will then use a version of your favicon as the link image. Generally favicons work well when they contain transparent regions, to give the icon some shape other than a square, but for iOS they have to be solid.

RealFaviconGenerator provides some suggestions for how to generate the appropriate icon here, allowing you to change the background colour to use for transparency, how much padding to add to your icon, whether to generate additional icons, and whether to use an alternate iOS-specific image.

In my case, I chose the basic option of generating a new icon, but using a different background colour to avoid the default black fill it would otherwise have. You can see the before and after of this small change below:

Adding favicons to your ASP.NET Core website with Real Favicon Generator

Favicon for Android

Android Chrome has a similar feature to iOS whereby you can pin a website to your home screen. Android is much more flexible with regard to icon design, so in this case transparent icons can generally be used as-is.

As before however, there are a lot of customisations you can make, such as adding a background, generating additional images, or using an Android-specific image.

The one required field at this point is the name of your application, but I'd recommend adding a theme colour too - that way the Chrome tab-bar and task-switcher colour will change to your website's theme colour too:

Adding favicons to your ASP.NET Core website with Real Favicon Generator

Windows Metro

Next in the OS list are Windows 8 and 10. As with iOS and Android, you can pin a website to your desktop. You can choose the background colour for your tile and optionally replace the image with a white silhouette, which can work well if you have a complex image outline.

Again, you can choose to generate additional lesser used images, or replace the image completely on Windows.

Adding favicons to your ASP.NET Core website with Real Favicon Generator

Safari Pinned Tab

Safari 9 adds the concept of pinned tabs, which are represented by a small monochrome SVG icon. By default, RealFaviconGenerator generates a silhouette of your image, but it can also automatically generate a monochrome image by applying a threshold to your default image:

In my case, the threshold didn't quite produce a decent outline, so I uploaded an alternative image instead which would work better with the automatic thresholding:

Adding favicons to your ASP.NET Core website with Real Favicon Generator

Generator options

Now we're pretty much done, but there's still a bunch of options you can choose to optimise your favicons.

First of all, you can choose the path in your website where you are going to place your favicons. Given the number of icons that are generated, it can be tempting to place them in a subfolder of your application, but it's actually recommended to keep them in the root of your app.

You can choose to add a version string to the generated HTML links (recommended) and set the application name. Finally, you can choose the amount of compression and the scaling algorithms used, along with a preview of the effect on the final images. Generally you'll find you can compress pretty heavily, but choose what works for you.

Generate!

Once all your settings are specified, click the 'Generate' button and let RealFaviconGenerator do its thing! You'll be given a zip file of all your icons, and a snippet of HTML to include in your website.

Note, in a recent update, RealFaviconGenerator reduced the number of favicons that are produced by default from 28 to 9. If you would like the higher-compatibility pack with additional files you can click on 'Get the old package'.

3. Add the favicons to your website.

We are going to be serving the favicons as static files, so we can add them inside the web-root folder of our ASP.NET Core application. By default, this is the wwwroot folder in your web project. Simply unzip your downloaded favicon package into this directory:

Adding favicons to your ASP.NET Core website with Real Favicon Generator

Next, we need to add the HTML snippet to the head element of our website. You can do this directly inside _Layout.cshtml if you wish, or you can take the route I favour and create a partial view to encapsulate your favicon links.

Add a new file inside the Views/Shared folder called _Favicons.cshtml and paste the generated favicon HTML snippet in:

<link rel="apple-touch-icon" sizes="180x180" href="/apple-touch-icon.png">  
<link rel="icon" type="image/png" href="/favicon-32x32.png" sizes="32x32">  
<link rel="icon" type="image/png" href="/favicon-16x16.png" sizes="16x16">  
<link rel="manifest" href="/manifest.json">  
<link rel="mask-icon" href="/safari-pinned-tab.svg" color="#5bbad5">  
<meta name="theme-color" content="#00ffff">  

All that's left is to render the partial inside the head tag of _layout.cshtml:

<!DOCTYPE html>  
<html>  
<head>  
    <!-- Other head elements --> 
    @Html.Partial("_Favicons")
</head>  
<body>  
    <!-- Other body elements --> 
</body>  
</html>  

And that's it! Now your site should have a great set of favicons, no matter which browser or device your users are on.

Adding favicons to your ASP.NET Core website with Real Favicon Generator

Summary

RealFaviconGenerator makes it very easy to generate all the favicons required to support modern browsers and devices, following all the relevant guidelines, with an optional high-compatibility pack for older browsers. By working through the steps on the site, it is trivial to generate a compatible set of icons for your website.

Adding the icons to your site is simple, requiring a few lines of HTML, and pasting the downloaded pack into your web root.

If you do use the site, consider donating to the creators, to ensure it stays as up-to-date and useful as ever!


Anuraj Parameswaran: Simple Static Websites using Azure Blob service

This post is about hosting a static website on the Azure Blob service. To establish an online presence, many small businesses set up a WordPress blog. A fully functional WordPress site is great; however, most websites are pretty static and don’t really need all the bells and whistles that come with it. In this post I am talking about an alternative solution which helps you host your static websites in Azure and leverage the CDN and bandwidth capabilities of Azure at a lower cost.


Dominick Baier: New in IdentityServer4: Events

Well – not really new – but redesigned.

IdentityServer4 has two diagnostics facilities – logging and events. While logging is more like low-level "printf"-style output, events represent higher-level information about certain logical operations in IdentityServer (think Windows security event log).

Events are structured data and include event IDs, success/failure information, activity IDs, IP addresses, categories and event-specific details. This makes it easy to query and analyze them and extract useful information that can be used for further processing.

Events work great with event stores like ELK, Seq or Splunk.


Find more details in our docs.




Andrew Lock: Retrieving the path that generated an error with the StatusCodePages Middleware

Retrieving the path that generated an error with the StatusCodePages Middleware

In my previous post, I showed how to use the re-execute features of the StatusCodePagesMiddleware to generate custom error pages for status-code errors. This allows you to easily create custom error pages for common error status codes like 404 or 500.

Retrieving the path that generated an error with the StatusCodePages Middleware

The re-executing approach using UseStatusCodePagesWithReExecute is generally a better approach than using UseStatusCodePagesWithRedirects as it generates the custom error page in the same request that caused it. This allows you to return the correct error code in response to the original request. This is more 'correct' from an HTTP/SEO/semantic point of view, but it also means the context of the original request is maintained when you generate the error.

In this quick post, I show how you can use this context to obtain the original path that triggered the error status code when the middleware pipeline is re-executed.

Setting up the status code pages middleware

I'll start by adding the StatusCodePagesMiddleware as I did in my previous post. I'm using the same UseStatusCodePagesWithReExecute as before, and providing the error status code when the pipeline is re-executed using a statusCode querystring parameter:

public void Configure(IApplicationBuilder app)  
{
    app.UseDeveloperExceptionPage();

    app.UseStatusCodePagesWithReExecute("/Home/Error", "?statusCode={0}");

    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

The corresponding action method that gets invoked is:

public class HomeController : Controller  
{
    public IActionResult Error(int? statusCode = null)
    {
        if (statusCode.HasValue)
        {
            if (statusCode == 404 || statusCode == 500)
            {
                var viewName = statusCode.ToString();
                return View(viewName);
            }
        }
        return View();
    }
}

This gives me customised error pages for 404 and 500 status codes:

Retrieving the path that generated an error with the StatusCodePages Middleware

Retrieving the original error path

This technique lets you customise the response returned when a URL generates an error status code, but on occasion you may want to know the original path that actually caused the error. From the flow diagram at the top of the page, I want to know the /Home/Problem URL when the HomeController.Error action is executing.

Luckily, the StatusCodePagesMiddleware stores a request-feature with the original path on the HttpContext. You can access it from the Features property:

public class HomeController : Controller  
{
    public IActionResult Error(int? statusCode = null)
    {
        var feature = HttpContext.Features.Get<IStatusCodeReExecuteFeature>();
        ViewData["ErrorUrl"] = feature?.OriginalPath;

        if (statusCode.HasValue)
        {
            if (statusCode == 404 || statusCode == 500)
            {
                var viewName = statusCode.ToString();
                return View(viewName);
            }
        }
        return View();
    }
}

Adding this to the Error method means you can display or log the path, depending on your needs:

Retrieving the path that generated an error with the StatusCodePages Middleware

Note that I've used the null propagator syntax ?. to retrieve the path, as the feature will only be added if the StatusCodePagesMiddleware is re-executing the pipeline. This will avoid any null reference exceptions if the action is executed without using the StatusCodePagesMiddleware, for example by directly requesting /Home/Error?statusCode=404:

Retrieving the path that generated an error with the StatusCodePages Middleware

Retrieving additional information

The StatusCodePagesMiddleware sets an IStatusCodeReExecuteFeature on the HttpContext when it re-executes the pipeline. This interface exposes two properties: the original path, which you have already seen, and the original path base:

public interface IStatusCodeReExecuteFeature  
{
    string OriginalPathBase { get; set; }
    string OriginalPath { get; set; }
}

The one property it doesn't (currently) expose is the original querystring. However, the concrete type that is actually set by the middleware is the StatusCodeReExecuteFeature. This contains an additional property, OriginalQueryString:

public class StatusCodeReExecuteFeature : IStatusCodeReExecuteFeature  
{
    public string OriginalPathBase { get; set; }
    public string OriginalPath { get; set; }
    public string OriginalQueryString { get; set; }
}

If you're willing to add some coupling to this implementation in your code, you can access these properties by safely casting the IStatusCodeReExecuteFeature to a StatusCodeReExecuteFeature. For example:

var feature = HttpContext.Features.Get<IStatusCodeReExecuteFeature>();  
var reExecuteFeature = feature as StatusCodeReExecuteFeature;  
ViewData["ErrorPathBase"] = reExecuteFeature?.OriginalPathBase;  
ViewData["ErrorQuerystring"] = reExecuteFeature?.OriginalQueryString;  

This lets you display/log the complete path that gave you the error, including the querystring:

Retrieving the path that generated an error with the StatusCodePages Middleware

Note: If you look at the dev branch in the Diagnostics GitHub repo, you'll notice that the interface actually does contain OriginalQueryString. This will be coming with .NET Core 2.0 / ASP.NET Core 2.0, as it is a breaking change. It'll make the above scenario that little bit easier, though.

Summary

The StatusCodePagesMiddleware is just one of the pieces needed to provide graceful handling of errors in your application. The re-execute approach is a great way to include custom layouts in your application, but it can obscure the origin of the error. Obviously, logging the error where it is generated provides the best context, but the IStatusCodeReExecuteFeature can be useful for easily retrieving the source of the error when generating the final response.


Damien Bowden: .NET Core, ASP.NET Core logging with NLog and PostgreSQL

This article shows how .NET Core or ASP.NET Core applications can log to a PostgreSQL database using NLog.

Code: https://github.com/damienbod/AspNetCoreNlog

Other posts in this series:

  1. ASP.NET Core logging with NLog and Microsoft SQL Server
  2. ASP.NET Core logging with NLog and Elasticsearch
  3. Settings the NLog database connection string in the ASP.NET Core appsettings.json
  4. .NET Core logging to MySQL using NLog
  5. .NET Core logging with NLog and PostgreSQL

Setting up PostgreSQL

pgAdmin can be used to set up the PostgreSQL database which is used to save the logs. A log database was created for this demo, which matches the connection string in the nlog.config file.

Using pgAdmin, open a query editor and execute the following script to create a table in the log database.

CREATE TABLE logs
( 
    Id serial primary key,
    Application character varying(100) NULL,
    Logged text,
    Level character varying(100) NULL,
    Message character varying(8000) NULL,
    Logger character varying(8000) NULL, 
    Callsite character varying(8000) NULL, 
    Exception character varying(8000) NULL
)

At present it is not possible to log a date property to PostgreSQL using NLog; only text fields are supported. A GitHub issue exists for this here. Due to this, the Logged field is defined as text, and stores the DateTime value as a string when the log is created.

.NET or ASP.NET Core Application

The required packages need to be added to the csproj file. For an ASP.NET Core application, add NLog.Web.AspNetCore and Npgsql; for a .NET Core application, add NLog and Npgsql.

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
    <AssemblyName>ConsoleNLogPostgreSQL</AssemblyName>
    <OutputType>Exe</OutputType>
    <PackageId>ConsoleNLog</PackageId>
    <PackageTargetFallback>$(PackageTargetFallback);dotnet5.6;portable-net45+win8</PackageTargetFallback>
    <GenerateAssemblyConfigurationAttribute>false</GenerateAssemblyConfigurationAttribute>
    <GenerateAssemblyCompanyAttribute>false</GenerateAssemblyCompanyAttribute>
    <GenerateAssemblyProductAttribute>false</GenerateAssemblyProductAttribute>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.Extensions.Configuration.EnvironmentVariables" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Configuration.FileExtensions" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Logging" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Logging.Console" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Options.ConfigurationExtensions" Version="1.1.1" />
    <PackageReference Include="NLog.Web.AspNetCore" Version="4.3.1" />
    <PackageReference Include="Npgsql" Version="3.2.2" />
    <PackageReference Include="System.Data.SqlClient" Version="4.3.0" />
  </ItemGroup>
</Project>

Or use the NuGet package manager in Visual Studio 2017.

The nlog.config file is then set up to log to PostgreSQL using the database target, with the dbProvider configured for Npgsql and the connectionString pointing at the required instance of PostgreSQL. The commandText must match the table created in the SQL script above. If you add, for example, extra properties from the NLog.Web.AspNetCore package to the logs, these also need to be added here.

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      internalLogLevel="Warn"
      internalLogFile="C:\git\damienbod\AspNetCoreNlog\Logs\internal-nlog.txt">
  
  <targets>
    <target xsi:type="File" name="allfile" fileName="${var:configDir}\nlog-all.log"
                layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|${message} ${exception}" />

    <target xsi:type="File" name="ownFile-web" fileName="${var:configDir}\nlog-own.log"
             layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|  ${message} ${exception}" />

    <target xsi:type="Null" name="blackhole" />

    <target name="database" xsi:type="Database"
              dbProvider="Npgsql.NpgsqlConnection, Npgsql"
              connectionString="User ID=damienbod;Password=damienbod;Host=localhost;Port=5432;Database=log;Pooling=true;"
             >

          <commandText>
              insert into logs (
              Application, Logged, Level, Message,
              Logger, CallSite, Exception
              ) values (
              @Application, @Logged, @Level, @Message,
              @Logger, @Callsite, @Exception
              );
          </commandText>

          <parameter name="@application" layout="AspNetCoreNlog" />
          <parameter name="@logged" layout="${date}" />
          <parameter name="@level" layout="${level}" />
          <parameter name="@message" layout="${message}" />

          <parameter name="@logger" layout="${logger}" />
          <parameter name="@callSite" layout="${callsite:filename=true}" />
          <parameter name="@exception" layout="${exception:tostring}" />
      </target>
      
  </targets>

  <rules>
    <!--All logs, including from Microsoft-->
    <logger name="*" minlevel="Trace" writeTo="allfile" />
      
    <logger name="*" minlevel="Trace" writeTo="database" />
      
    <!--Skip Microsoft logs and so log only own logs-->
    <logger name="Microsoft.*" minlevel="Trace" writeTo="blackhole" final="true" />
    <logger name="*" minlevel="Trace" writeTo="ownFile-web" />
  </rules>
</nlog>

When using ASP.NET Core, the NLog.Web.AspNetCore assembly can be registered as an extension in the nlog.config file to use the extra layout renderers it provides.

<extensions>
     <add assembly="NLog.Web.AspNetCore"/>
</extensions>
            

Using the log

The logger can be used directly via the LogManager, or NLog can be added to the logging configuration in the Startup class of an ASP.NET Core application.

Basic example:

LogManager.Configuration.Variables["configDir"] = "C:\\git\\damienbod\\AspNetCoreNlog\\Logs";

var logger = LogManager.GetLogger("console");
logger.Warn("console logging is great");
logger.Error(new ArgumentException("oh no"));

Startup configuration in an ASP.NET Core application:

public void ConfigureServices(IServiceCollection services)
{
	services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
	// Add framework services.
	services.AddMvc();

	services.AddScoped<LogFilter>();
}

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	loggerFactory.AddNLog();

	//add NLog.Web
	app.AddNLogWeb();

	////foreach (DatabaseTarget target in LogManager.Configuration.AllTargets.Where(t => t is DatabaseTarget))
	////{
	////	target.ConnectionString = Configuration.GetConnectionString("NLogDb");
	////}
	
	////LogManager.ReconfigExistingLoggers();

	LogManager.Configuration.Variables["connectionString"] = Configuration.GetConnectionString("NLogDb");
	LogManager.Configuration.Variables["configDir"] = "C:\\git\\damienbod\\AspNetCoreNlog\\Logs";

	app.UseMvc();
}

When the application is run, the logs are added to the database.
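
For example, any class can log through the standard ILogger<T> abstraction and NLog routes the entries to the configured targets, including the PostgreSQL database target. This is only a sketch; ValuesController is a hypothetical controller, not part of the sample code:

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

[Route("api/[controller]")]
public class ValuesController : Controller
{
    private readonly ILogger<ValuesController> _logger;

    public ValuesController(ILogger<ValuesController> logger)
    {
        _logger = logger;
    }

    [HttpGet]
    public IEnumerable<string> Get()
    {
        // Written to all matching NLog targets, including the database target
        _logger.LogInformation("getting all values");
        return new[] { "value1", "value2" };
    }
}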

Links

https://www.postgresql.org/

https://www.pgadmin.org/

https://github.com/nlog/NLog/wiki/Database-target

https://github.com/NLog/NLog.Extensions.Logging

https://github.com/NLog

https://docs.asp.net/en/latest/fundamentals/logging.html

https://msdn.microsoft.com/en-us/magazine/mt694089.aspx

https://docs.asp.net/en/latest/fundamentals/configuration.html



Andrew Lock: Re-execute the middleware pipeline with the StatusCodePages Middleware to create custom error pages

Re-execute the middleware pipeline with the StatusCodePages Middleware to create custom error pages

By default, the ASP.NET Core templates include either the ExceptionHandlerMiddleware or the DeveloperExceptionPage. Both of these catch exceptions thrown by the middleware pipeline, but they don't handle error status codes that are returned by the pipeline (without throwing an exception). For that, there is the StatusCodePagesMiddleware.

There are a number of ways to use the StatusCodePagesMiddleware but in this post I will be focusing on the version that re-executes the pipeline.

Default Status Code Pages

I'll start with the default MVC template, but I'll add a helper method for returning a 500 error:

public class HomeController : Controller  
{
    public IActionResult Problem()
    {
        return StatusCode(500);
    }  
}

To start with, I'll just add the default StatusCodePagesMiddleware implementation:

public void Configure(IApplicationBuilder app)  
{
    app.UseDeveloperExceptionPage();

    app.UseStatusCodePages();

    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

With this in place, making a request to an unknown URL gives the following response:

Re-execute the middleware pipeline with the StatusCodePages Middleware to create custom error pages

The default StatusCodePagesMiddleware implementation will return the simple text response when it detects a status code between 400 and 599. Similarly, if you make a request to /Home/Problem, invoking the helper action method, then the 500 status code text is returned.

Re-execute the middleware pipeline with the StatusCodePages Middleware to create custom error pages

Re-execute vs Redirect

In reality, it's unlikely you'll want to use status code pages with this default setting in anything but a development environment. If you want to intercept status codes in production and return custom error pages, you'll want to use one of the alternative extension methods that use redirects or pipeline re-execution to return a user-friendly page:

  • UseStatusCodePagesWithRedirects
  • UseStatusCodePagesWithReExecute

These two methods have a similar outcome, in that they allow you to generate user-friendly custom error pages when an error occurs on the server. Personally, I would suggest always using the re-execute extension method rather than redirects.

The problem with redirects for error pages is that they somewhat abuse the return codes of HTTP, even though the end result for a user is essentially the same. With the redirect method, when an error occurs the pipeline will return a 302 response to the user, with a redirect to the provided error path. This causes a second request to be made to the URL that is used to generate the custom error page, which then returns a 200 OK code for that second request:

Re-execute the middleware pipeline with the StatusCodePages Middleware to create custom error pages

Semantically this isn't really correct, as you're triggering a second response, and ultimately returning a success code when an error actually occurred. This could also cause issues for SEO. By re-executing the pipeline you keep the correct (error) status code, you just return user-friendly HTML with it.

Re-execute the middleware pipeline with the StatusCodePages Middleware to create custom error pages

You are still in the context of the initial response, but the whole pipeline after the StatusCodePagesMiddleware is executed for a second time. The content generated by this second response is combined with the original Status Code to generate the final response that gets sent to the user. This provides a workflow that is overall more semantically correct, and means you don't completely lose the context of the original request.

Adding re-execute to your pipeline

Hopefully you're swayed by the re-execute approach; luckily it's easy to add this capability to your middleware pipeline. I'll start by updating the Startup class to use the re-execute extension instead of the basic one.

public void Configure(IApplicationBuilder app)  
{
    app.UseDeveloperExceptionPage();

    app.UseStatusCodePagesWithReExecute("/Home/Error", "?statusCode={0}");

    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

Note, the order of middleware in the pipeline is important. The StatusCodePagesMiddleware should be one of the earliest middleware in the pipeline, as it can only modify the response of middleware that comes after it in the pipeline.

There are two arguments to the UseStatusCodePagesWithReExecute method. The first is the path that will be used to re-execute the request in the pipeline, and the second is the querystring to use when doing so.

Both of these paths can include a placeholder {0} which will be replaced with the status code integer (e.g. 404, 500 etc) when the pipeline is re-executed. This allows you to either execute different action methods depending on the error that occurred, or to have a single method that can handle multiple errors.

The following example takes the latter approach, using a single action method to handle all the error status codes, but with special cases for 404 and 500 errors provided in the querystring:

public class HomeController : Controller  
{
    public IActionResult Error(int? statusCode = null)
    {
        if (statusCode.HasValue)
        {
            if (statusCode == 404 || statusCode == 500)
            {
                var viewName = statusCode.ToString();
                return View(viewName);
            }
        }
        return View();
    }
}

When a 404 is generated (by an unknown path for example) the status code middleware catches it, and re-executes the pipeline using /Home/Error?statusCode=404. The Error action is invoked, and executes the 404.cshtml template:

Re-execute the middleware pipeline with the StatusCodePages Middleware to create custom error pages

Similarly, a 500 error is special cased:

Re-execute the middleware pipeline with the StatusCodePages Middleware to create custom error pages

Any other error executes the default Error.cshtml template:

Re-execute the middleware pipeline with the StatusCodePages Middleware to create custom error pages

Summary

Congratulations, you now have custom error pages in your ASP.NET Core application. This post shows how simple it is to achieve by re-executing the pipeline. I strongly recommend you use this approach instead of trying to use the redirects overload. In the next post, I'll show how you can obtain the original URL that triggered the error code during the second pipeline execution.


Anuraj Parameswaran: Working with dependencies in dotnet core

This post is about working with nuget dependencies and project references in ASP.NET Core or .NET Core. In earlier versions of dotnet core, you can add dependencies by modifying the project.json file directly and project references via global.json. This post is about how to do this better with dotnet add command.


Andrew Lock: Deconstructors for non-tuple types in C# 7.0

Deconstructors for non-tuple types in C# 7.0

As well as finally seeing the RTM of the .NET Core tooling, Visual Studio 2017 brought a whole host of new things to the table. Among these is C# 7.0, which introduces a number of new features to the language.

Many of these features are essentially syntactic sugar over things that were already possible, but were harder work or more cumbersome in earlier versions of the language. Tuples feel like one of those features that I'm going to end up using quite a lot.

Deconstructing tuples

Often you'll find that you want to return more than one value from a method. There are a number of ways you can achieve this currently (out parameters, System.Tuple, a custom class), but none of them are particularly smooth. If you really are just returning two pieces of data, without any associated behaviour, then the new tuples added in C# 7 are a great fit.
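
For comparison, the pre-C# 7 options look something like the following sketch (GetUserViaOut and GetUserViaTuple are hypothetical names):

// Out parameters: the caller has to declare the variables up front
void GetUserViaOut(out int id, out string name)
{
    id = 123;
    name = "andrewlock";
}

// System.Tuple: the caller is stuck with Item1 and Item2
Tuple<int, string> GetUserViaTuple()
{
    return Tuple.Create(123, "andrewlock");
}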

I won't go into much detail on tuples here, so I suggest you check out one of the many recent articles introducing the feature if they're new to you. I'm just going to look at one of the associated features of tuples - the ability to deconstruct them.

In the following example, the method GetUser() returns a tuple consisting of an integer and a string:

(int id, string name) GetUser()
{
    return (123, "andrewlock");
}

If I call this method from my code, I can access the id and name values by name - so much cleaner than out parameters or the Item1, Item2 of System.Tuple.

Deconstructors for non-tuple types in C# 7.0
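
In code, accessing the elements by name looks something like this:

var user = GetUser();

// The element names come from the tuple declared in the method's return type
Console.WriteLine($"The user with id {user.id} is {user.name}");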

Another feature is the ability to automatically deconstruct the tuple values into separate variables. So for example, I could do:

(var userId, var username) = GetUser();
Console.WriteLine($"The user with id {userId} is {username}");  

This creates two variables, an integer called userId and a string called username. The tuple has been automatically deconstructed into these two variables.

Deconstructing non-tuples

This feature is great, but it is actually not limited to just tuples - you can add deconstructors to all your classes!

The following example shows a User class with a deconstructor that returns the FirstName and LastName properties:

public class User  
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
    public string Email { get; set; }

    public void Deconstruct(out string firstName, out string lastName)
    {
        firstName = FirstName;
        lastName = LastName;
    }
}

With this in place I can deconstruct any User object:

var user = new User  
{
    FirstName = "Joe",
    LastName = "Bloggs",
    Email = "joe.bloggs@example.com",
    Age = 23
};

(var firstName, var lastName) = user;

Console.WriteLine($"The user's name is {firstName} {lastName}");  
// The user's name is Joe Bloggs

We are creating a User object, and then deconstructing it into the firstName and lastName variables, which are declared as part of the deconstruction (they don't have to be declared inline, you can use existing variables too).
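
For example, deconstructing into variables that already exist looks like this:

string firstName;
string lastName;

// Deconstruction assignment into existing variables - no new declarations needed
(firstName, lastName) = user;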

To create a deconstructor, create a function of the following form:

public void Deconstruct(out T1 var1, ..., out TN varN);  

The values that are produced are declared as out parameters. You can have as many parameters as you like; the caller just needs to provide the correct number of variables when calling the deconstructor. You can even have multiple overloads with different numbers of parameters:

public class User  
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
    public string Email { get; set; }

    public void Deconstruct(out string firstName, out string lastName)
    {
        firstName = FirstName;
        lastName = LastName;
    }

    public void Deconstruct(out string firstName, out string lastName, out int age)
    {
        firstName = FirstName;
        lastName = LastName;
        age = Age;
    }
}

The same user could be deconstructed in multiple ways, depending on the needs of the caller:

(var firstName1, var lastName1) = user;
(var firstName2, var lastName2, var age) = user;

Ambiguous overloads

One thing that might cross your mind is what happens if you have multiple overloads with the same number of parameters. In the following example I add an additional deconstructor that also accepts three parameters, where the third parameter is a string rather than an int:

public partial class User  
{
    // remainder of class as before

    public void Deconstruct(out string firstName, out string lastName, out string email)
    {
        firstName = FirstName;
        lastName = LastName;
        email = Email;
    }
}

This code compiles, but if you try and actually deconstruct the object you'll get some red squigglies:

Deconstructors for non-tuple types in C# 7.0

At first this seems like it's just a standard C# type inference error - there are two candidate method calls so you need to disambiguate between them by providing explicit types instead of var. However, even explicitly declaring the type won't clear this one up:

Deconstructors for non-tuple types in C# 7.0

You'll still get the following error:

The call is ambiguous between the following methods or properties: 'Program.User.Deconstruct(out string, out string, out int)' and 'Program.User.Deconstruct(out string, out string, out string)'  

So make sure not to overload multiple Deconstruct methods in a type with the same number of parameters!
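
If you do end up in this situation, one workaround (a sketch, not the only option) is to bypass the deconstruction syntax and call the method explicitly, where the typed out variables are enough for normal overload resolution:

// The out variable types select the (string, string, int) overload unambiguously
user.Deconstruct(out string firstName, out string lastName, out int age);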

Bonus: Predefined type 'System.ValueTuple`2' is not defined or imported

When you first start using tuples, you might get this confusing error:

Predefined type 'System.ValueTuple`2' is not defined or imported  

But don't panic, you just need to add the System.ValueTuple NuGet package to your project, and all will be good again:

Deconstructors for non-tuple types in C# 7.0

Summary

This was just a quick look at the deconstruction feature that came in C# 7.0. For a more detailed look, check out some of the links below:


Andrew Lock: Preventing mass assignment or over posting in ASP.NET Core

Preventing mass assignment or over posting in ASP.NET Core

Mass assignment, also known as over-posting, is an attack used on websites that involve some sort of model-binding to a request. It is used to set values on the server that a developer did not expect to be set. This is a well known attack now, and has been discussed many times before, (it was a famous attack used against GitHub some years ago), but I wanted to go over some of the ways to prevent falling victim to it in your ASP.NET Core applications.

How does it work?

Mass assignment typically occurs during model binding as part of MVC. A simple example would be where you have a form on your website in which you are editing some data. You also have some properties on your model which are not editable as part of the form, but instead are used to control the display of the form, or may not be used at all.

For example, consider this simple model:

public class UserModel  
{
    public string Name { get; set; }
    public bool IsAdmin { get; set; }
}

It has two properties, but we're only actually going to allow the user to edit the Name property - the IsAdmin property is just used to control the markup they see:

@model UserModel

<form asp-action="Vulnerable" asp-controller="Home">  
    <div class="form-group">
        <label asp-for="Name"></label>
        <input class="form-control" type="TextBox" asp-for="Name" />
    </div>
    <div class="form-group">
        @if (Model.IsAdmin)
        {
            <i>You are an admin</i>
        }
        else
        {
            <i>You are a standard user</i>
        }
    </div>
    <button class="btn btn-sm" type="submit">Submit</button>
</form>  

So the idea here is that you only render a single input tag to the markup, but you post this to a method that uses the same model as you used for rendering:

[HttpPost]
public IActionResult Vulnerable(UserModel model)  
{
    return View("Index", model);
}

This might seem OK - in the normal browser flow, a user can only edit the Name field. When they submit the form, only the Name field will be sent to the server. When model binding occurs on the model parameter, the IsAdmin field will be unset, and the Name will have the correct value:

Preventing mass assignment or over posting in ASP.NET Core

However, with a simple bit of HTML manipulation, or by using Postman/Fiddler, a malicious user can set the IsAdmin field to true. The model binder will dutifully bind the value, and you have just fallen victim to mass assignment/over posting:

Preventing mass assignment or over posting in ASP.NET Core

Defending against the attack

So how can you prevent this attack? Luckily there's a whole host of different ways, and they are generally the same as the approaches you could use in the previous version of ASP.NET. I'll run through a number of your options here.

1. Use BindAttribute on the action method

Seeing as the vulnerability is due to model binding, our first option is to use the BindAttribute:

public IActionResult Safe1([Bind(nameof(UserModel.Name))] UserModel model)  
{
    return View("Index", model);
}

The BindAttribute lets you whitelist only those properties which should be bound from the incoming request. In our case, we have specified just Name, so even if a user provides a value for IsAdmin, it will not be bound. This approach works, but is not particularly elegant, as it requires you specify all the properties that you want to bind.

2. Use [Editable] or [BindNever] on the model

Instead of applying binding directives in the action method, you could use DataAnnotations on the model instead. DataAnnotations are often used to provide additional metadata on a model for both generating appropriate markup and for validation.

For example, our UserModel might actually be already decorated with some data annotations for the Name property:

public class UserModel  
{
    [MaxLength(200)]
    [Display(Name = "Full name")]
    [Required]
    public string Name { get; set; }

    [Editable(false)]
    public bool IsAdmin { get; set; }
}

Notice that as well as the Name attributes, I have also added an EditableAttribute. This will be respected by the model binder when the post is made, so an attempt to post to IsAdmin will be ignored.

The problem with this one is that although applying the EditableAttribute to the IsAdmin property produces the correct output, it may not be semantically correct in general. What if you can edit the IsAdmin property in some cases? Things can just get a little messy sometimes.

As pointed out by Hamid in the comments, the [BindNever] attribute is a better fit here. Using [BindNever] in place of [Editable(false)] will prevent binding without additional implications.
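
As a sketch, the model would then look like this (the [BindNever] attribute lives in the Microsoft.AspNetCore.Mvc.ModelBinding namespace):

using System.ComponentModel.DataAnnotations;
using Microsoft.AspNetCore.Mvc.ModelBinding;

public class UserModel
{
    [MaxLength(200)]
    [Display(Name = "Full name")]
    [Required]
    public string Name { get; set; }

    [BindNever]  // never bound from the request, with no implications about editability
    public bool IsAdmin { get; set; }
}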

3. Use two different models

Instead of trying to retrofit safety to our models, often the better approach is conceptually a more simple one. That is to say that our binding/input model contains different data to our view/output model. Yes, they both have a Name property, but they are encapsulating different parts of the system so it could be argued they should be two different classes:

public class BindingModel  
{
    [MaxLength(200)]
    [Display(Name = "Full name")]
    [Required]
    public string Name { get; set; }
}

public class UserModel  
{
    [MaxLength(200)]
    [Display(Name = "Full name")]
    [Required]
    public string Name { get; set; }

    [Editable(false)]
    public bool IsAdmin { get; set; }
}

Here our BindingModel is the model actually provided to the action method during model binding, while the UserModel is the model used by the View during HTML generation:

public IActionResult Safe3(BindingModel bindingModel)  
{
    var model = new UserModel();

    // can be simplified using AutoMapper
    model.Name = bindingModel.Name;

    return View("Index", model);
}

Even if the IsAdmin property is posted, it will not be bound as there is no IsAdmin property on BindingModel. The obvious disadvantage to this simplistic approach is the duplication this brings, especially when it comes to the data annotations used for validation and input generation. Any time you need to, for example, update the max string length, you need to remember to do it in two different places.

This brings us on to a variant of this approach:

4. Use a base class

Where you have common properties like this, an obvious choice would be to make one of the models inherit from the other, like so:

public class BindingModel  
{
    [MaxLength(200)]
    [Display(Name = "Full name")]
    [Required]
    public string Name { get; set; }
}

public class UserModel : BindingModel  
{
    public bool IsAdmin { get; set; }
}

This approach keeps your models safe from mass assignment attacks by using different models for model binding and for View generation. But compared to the previous approach, you keep your validation logic DRY.

public IActionResult Safe4(BindingModel bindingModel)  
{
    // do something with the binding model
    // when ready to display HTML, create a new view model
    var model = new UserModel();

    // can be simplified using e.g. AutoMapper
    model.Name = bindingModel.Name;

    return View("Index", model);
}

There is also a variation of this approach which keeps your models completely separate, but allows you to avoid duplicating all your data annotation attributes by using the ModelMetadataTypeAttribute.

5. Use ModelMetadataTypeAttribute

The purpose of this attribute is to allow you to defer all the data annotations and additional metadata about your model to a different class. If you want to keep your BindingModel and UserModel hierarchically distinct, but also don't want to duplicate all the [MaxLength(200)] attributes etc., you can use this approach:

[ModelMetadataType(typeof(UserModel))]
public class BindingModel  
{
    public string Name { get; set; }
}

public class UserModel  
{
    [MaxLength(200)]
    [Display(Name = "Full name")]
    [Required]
    public string Name { get; set; }

    public bool IsAdmin { get; set; }
}

Note that only the UserModel contains any metadata attributes, and that there is no class hierarchy between the models. However the MVC model binder will use the metadata of the equivalent properties in the UserModel when binding or validating the BindingModel.

The main thing to be aware of here is that there is an implicit contract between the two models now - if you were to rename Name on the UserModel, the BindingModel would no longer have a matching contract. There wouldn't be an error, but the validation attributes would no longer be applied to BindingModel.

Summary

This was a very quick run down of some of the options available to you to prevent mass assignment. Which approach you take is up to you, though I would definitely suggest using one of the latter two-model approaches. There are other options too, such as doing explicit binding via TryUpdateModelAsync<> (sketched below), but the options I've shown represent some of the most common approaches. Whatever you do, don't just blindly bind your view models if you have properties that should not be edited by a user, or you could be in for a nasty surprise.
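
For completeness, the explicit-binding approach might look something like this sketch (Safe6 is a hypothetical action name, not one from the sample project):

[HttpPost]
public async Task<IActionResult> Safe6()
{
    var model = new UserModel();

    // Explicitly bind only the whitelisted properties from the request
    await TryUpdateModelAsync(model, "", m => m.Name);

    return View("Index", model);
}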

And whatever you do, don't bind directly to your EntityFramework models. Pretty please.


Andrew Lock: Git integration improvements in Visual Studio 2017 - git-hooks

Git integration improvements in Visual Studio 2017 - git-hooks

Visual Studio 2017 includes a whole host of improvements to its Git integration. Among other things, SSH support is built in, you can push --force-with-lease, and easily diff commits.

Some of these improvements are due to a switch in the way the Git integration works - instead of relying on libgit, VS2017 has switched to using Git.exe for all your Git interactions. That might seem like a minor change, but it actually solves a number of the issues I had in using git in my daily VS2015 work.

Git integration in Visual Studio

For those that don't know, Visual Studio has come with built-in support for Git repositories for some time. You can do a lot from within VS, including staging and committing obviously, but also merging, rebasing, managing branches (both local and remote), viewing the commit history and a whole host of other options.

Git integration improvements in Visual Studio 2017 - git-hooks

I know the concept of a git UI is an abomination for some people, but for some things I really like it. Don't get me wrong, I'm very comfortable at the command line, but the built in diff view in VS works really well for me, and sometimes it's just handy to stay in your IDE.

One of the windows I use the most is the Changes window within Team Explorer. This shows you all the files you have waiting to be committed - it's essentially a visual git status. Having that there while I'm working is great, and easily lets me flick back to a file I was editing. I find it sometimes easier to work with than Ctrl+Tab-ing through the sea of tabs I inevitably have open:

Git integration improvements in Visual Studio 2017 - git-hooks

Git integration limitations in VS 2015

Generally speaking, the Changes window has served me well, but there were a couple of niggles. One of the problems I often ran into was when you have files in your repo that are not part of the Visual Studio solution. Normally, hitting Commit All should be equivalent to running git commit -Am "My commit message", i.e. it should commit all modified files, staged or not. However, occasionally I find that it leaves out some files that are part of the repository but not part of the solution. I've seen it do this with Word files in particular.

By calling out to the underlying git.exe executable, you can be sure that the details shown in the Changes window match those you'd get from git status; a much smoother experience.

Another feature that works from the command line, but not from the Changes window in VS 2015 was client-side git-hooks.

Using git hooks in Visual Studio 2017

I dabbled with using git-hooks on the client side a while back. These run in your checked-out repository, rather than on the git server. You can use them for a whole variety of things, but a common reason is to validate and enforce the format of commit messages before the commit is actually made. These hooks aren't foolproof, and they aren't installed by default when you clone a repository, but they can be handy nonetheless.

An occasional requirement for commit messages is that they should always start with an issue number. For example, if your issue tracker, such as JIRA, produces issues prefixed with EV-, e.g. EV-123 or EV-345, you might require that all commit messages start with such an EV- issue label to ensure commits are tracked correctly.

If you create a commit-msg file inside the .git/hooks directory of your repository, then you can create a file that validates the format of your commit message before the commit is made. For example, I used this simple script to run a regex on the commit message to check it starts with an issue number:

#!C:/Program\ Files/Git/usr/bin/sh.exe

COMMIT_MESSAGE_FILE=$1  
COMMIT_MESSAGE_LINE1=$(head -n 1 $COMMIT_MESSAGE_FILE)  
ERR_MSG='Aborting commit. Your commit message is missing a JIRA Issue (''EV-1111'')'

MATCH_RESULT=$(echo $COMMIT_MESSAGE_LINE1 | grep -E '^EV-[[:digit:]]+.*')

if [[ ! -n "$MATCH_RESULT" ]]; then  
    echo "ERR_MSG" >&2
    exit 1
fi

exit 0  

You can also use PowerShell and other scripting languages if you like and have them available. The commit-msg file above is specific to my Windows machine and the location of Git.

With this file in place, when you try and make a commit, it will be rejected with the message:

Aborting commit. Your commit message is missing a JIRA Issue (''EV-1111'')  

Git integration improvements in Visual Studio 2017 - git-hooks

Good, if I forget to add an issue, the message will let me know.

This might seem like a handy feature, but the big problem I had was VS 2015's use of libgit. Unfortunately, this doesn't support git hooks, which means that all our good work was for nought. VS 2015 would just ignore the hook and commit the files anyway. doh.

Enter VS 2017. With no other changes, when I click 'Commit All' from the Changes dialog, I get the warning message, and the commit is aborted!

Git integration improvements in Visual Studio 2017 - git-hooks

Once I've fixed my commit message, I can happily commit without leaving my IDE:

Git integration improvements in Visual Studio 2017 - git-hooks

This is just one of a whole plethora of updates to VS 2017, but as someone who uses the git integration a fair amount, it's definitely a welcome one.


Damien Bowden: ASP.NET Core Error Management with elmah.io

This article shows how to use elmah.io error management with an ASP.NET Core application. The error and log data are added to elmah.io using different elmah.io NuGet packages, both directly from ASP.NET Core and also using an NLog elmah.io target.

Code: https://github.com/damienbod/AspNetCoreElmah

elmah.io is an error management system which can help you monitor, find and fix application problems fast. While structured logging is supported, the main focus of elmah.io is handling errors.

Getting started with Elmah.Io

Before you can start logging to elmah.io, you need to create an account and setup a log. Refer to the documentation here.

Logging exceptions, errors with Elmah.Io.AspNetCore and Elmah.Io.Extensions.Logging

You can send logs and exceptions to elmah.io directly from an ASP.NET Core application using the Elmah.Io.AspNetCore and Elmah.Io.Extensions.Logging NuGet packages. These packages can be added to the project using the NuGet package manager.

Or you can just add the packages directly in the csproj file.

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

  <PropertyGroup>
    <UserSecretsId>AspNetCoreElmah-c23d2237a4-eb8832a1-452ac4</UserSecretsId>
  </PropertyGroup>
  
  <ItemGroup>
    <Content Include="wwwroot\index.html" />
  </ItemGroup>
  <ItemGroup>
    <PackageReference Include="Elmah.Io.AspNetCore" Version="3.2.39-pre" />
    <PackageReference Include="Elmah.Io.Extensions.Logging" Version="3.1.22-pre" />
    <PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.0.0" />
    <PackageReference Include="Microsoft.AspNetCore" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.2" />
    <PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Configuration.UserSecrets" Version="1.1.1" />
  </ItemGroup>
  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="1.0.0" />
  </ItemGroup>

</Project>

The Elmah.Io.AspNetCore package is used to catch unhandled exceptions in the application. This is configured in the Startup class. The OnMessage method is used to set specific properties in the messages which are sent to elmah.io. Setting the Hostname and the Application properties are very useful when evaluating the logs in elmah.io.

app.UseElmahIo(
	_elmahAppKey, 
	new Guid(_elmahLogId),
	new ElmahIoSettings()
	{
		OnMessage = msg =>
		{
			msg.Version = "1.0.0";
			msg.Hostname = "dev";
			msg.Application = "AspNetCoreElmah";
		}
	});

The Elmah.Io.Extensions.Logging package is used to log messages using the built-in ILoggerFactory. You should only send warning, error and critical messages rather than logging everything to elmah.io, although that is possible. Again, the OnMessage method can be used to set the Hostname and the Application name for each log.

loggerFactory.AddElmahIo(
	_elmahAppKey, 
	new Guid(_elmahLogId), 
	new FilterLoggerSettings
	{
		{"ValuesController", LogLevel.Information}
	},
	new ElmahIoProviderOptions
	{
		OnMessage = msg =>
		{
			msg.Version = "1.0.0";
			msg.Hostname = "dev";
			msg.Application = "AspNetCoreElmah";
		}
	});
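With that in place, anything logged through the standard ILogger abstraction at a sufficient level also ends up in elmah.io. A minimal sketch of what that looks like from a controller (the controller body and message here are illustrative, not part of the sample):

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

[Route("api/[controller]")]
public class ValuesController : Controller
{
    private readonly ILogger<ValuesController> _logger;

    public ValuesController(ILogger<ValuesController> logger)
    {
        _logger = logger;
    }

    [HttpGet]
    public string[] Get()
    {
        // Sent through the normal logging abstraction; the elmah.io
        // logging provider registered above forwards it to elmah.io.
        _logger.LogWarning("Something worth looking at happened");
        return new[] { "value1", "value2" };
    }
}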

Using User Secrets for the elmah.io API-KEY and LogID

ASP.NET Core user secrets can be used to store the elmah.io API-KEY and the LogID, as you don't want to commit these to your source control. The AddUserSecrets method is used to add them to the configuration.

private string _elmahAppKey;
private string _elmahLogId;

public Startup(IHostingEnvironment env)
{
	var builder = new ConfigurationBuilder()
		.SetBasePath(env.ContentRootPath)
		.AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
		.AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
		.AddEnvironmentVariables();

	if (env.IsDevelopment())
	{
		builder.AddUserSecrets("AspNetCoreElmah-c23d2237a4-eb8832a1-452ac4");
	}

	Configuration = builder.Build();
}

The user secret properties can then be used in the ConfigureServices method.

 public void ConfigureServices(IServiceCollection services)
{
	_elmahAppKey = Configuration["ElmahAppKey"];
	_elmahLogId = Configuration["ElmahLogId"];
	// Add framework services.
	services.AddMvc();
}

A dummy exception is thrown in this example, which then sends the data to elmah.io.

[HttpGet("{id}")]
public string Get(int id)
{
	throw new System.Exception("something terrible bad here!");
	return "value";
}

Logging exceptions, errors to elmah.io using NLog

NLog, using the Elmah.Io.NLog target, can also be used in ASP.NET Core to send messages to elmah.io. This can be added using the NuGet package manager.

Or you can just add it to the csproj file.

<PackageReference Include="Elmah.Io.NLog" Version="3.1.28-pre" />
<PackageReference Include="NLog.Web.AspNetCore" Version="4.3.1" />

NLog for ASP.NET Core applications can be configured in the Startup class. You need to set the target properties with the elmah.io API-KEY and also the LogId. You could also do this in the nlog.config file.

loggerFactory.AddNLog();
app.AddNLogWeb();

LogManager.Configuration.Variables["configDir"] = "C:\\git\\damienbod\\AspNetCoreElmah\\Logs";

foreach (ElmahIoTarget target in LogManager.Configuration.AllTargets.Where(t => t is ElmahIoTarget))
{
	target.ApiKey = _elmahAppKey;
	target.LogId = _elmahLogId;
}

LogManager.ReconfigExistingLoggers();

The IHttpContextAccessor and the HttpContextAccessor also need to be registered with the default IoC container in ASP.NET Core to get the extra information from the web requests.

public void ConfigureServices(IServiceCollection services)
{
	services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();

	_elmahAppKey = Configuration["ElmahAppKey"];
	_elmahLogId = Configuration["ElmahLogId"];

	// Add framework services.
	services.AddMvc();
}

The nlog.config file can then be configured for the target with the elmah.io type. The application property is also set which is useful in elmah.io.

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      internalLogLevel="Warn"
      internalLogFile="C:\git\damienbod\AspNetCoreElmah\Logs\internal-nlog.txt">

  <extensions>
    <add assembly="NLog.Web.AspNetCore"/>
    <add assembly="Elmah.Io.NLog"/>    
  </extensions>

  
  <targets>
    <target name="elmahio" type="elmah.io" apiKey="API_KEY" logId="LOG_ID" application="AspNetCoreElmahUI"/>
    
    <target xsi:type="File" name="allfile" fileName="${var:configDir}\nlog-all.log"
                layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|TraceId=${aspnet-traceidentifier}| url: ${aspnet-request-url} | action: ${aspnet-mvc-action} |${message} ${exception}" />

    <target xsi:type="File" name="ownFile-web" fileName="${var:configDir}\nlog-own.log"
             layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|TraceId=${aspnet-traceidentifier}| url: ${aspnet-request-url} | action: ${aspnet-mvc-action} | ${message} ${exception}" />

    <target xsi:type="Null" name="blackhole" />

  </targets>

  <rules>
    <logger name="*" minlevel="Warn" writeTo="elmahio" />
    <!--All logs, including from Microsoft-->
    <logger name="*" minlevel="Trace" writeTo="allfile" />

    <!--Skip Microsoft logs and so log only own logs-->
    <logger name="Microsoft.*" minlevel="Trace" writeTo="blackhole" final="true" />
    <logger name="*" minlevel="Trace" writeTo="ownFile-web" />
  </rules>
</nlog>


The About method calls the AspNetCoreElmah application's API method which throws the dummy exception, so exceptions are sent from both applications.

public async Task<IActionResult> About()
{
	_logger.LogInformation("HomeController About called");
	// throws exception
	HttpClient _client = new HttpClient();
	var response = await _client.GetAsync("http://localhost:37209/api/values/1");
	response.EnsureSuccessStatusCode();
	var responseString = System.Text.Encoding.UTF8.GetString(
		await response.Content.ReadAsByteArrayAsync()
	);
	ViewData["Message"] = "Your application description page.";

	return View();
}

Now both applications can be started, and the errors can be viewed in the elmah.io dashboard.

When you open the dashboard in elmah.io and access your logs, you can view the exceptions.

Here’s the log sent from the AspNetCoreElmah application.

Here’s the log sent from the AspNetCoreElmahUI application using NLog with Elmah.Io.


Links

https://elmah.io/

https://github.com/elmahio/elmah.io.nlog

http://nlog-project.org/



Andrew Lock: What is the NETStandard.Library metapackage?

What is the NETStandard.Library metapackage?

In my last post, I took a quick look at the Microsoft.AspNetCore meta package. One of the libraries referenced by the package, is the NETStandard.Library NuGet package. In this post I take a quick look at this package and what it contains.

If you're reading this post, you have hopefully already heard of .NET Standard. This acts as an interface to .NET platforms, and aims to define a unified set of APIs that those platforms must implement. It is the spiritual successor to PCLs, and allows you to target .NET Framework, .NET Core, and other .NET platforms with the same library code base.

The NETStandard.Library metapackage references a set of NuGet packages that define the .NET Standard library. Like the Microsoft.AspNetCore package from my last post, the package does not contain dlls itself, but rather references a number of other packages, hence the name metapackage. Depending on the target platform of your project, different packages will be added to the project, in line with the appropriate version of .NET Standard the platform implements.

For example, the .NET Standard 1.3 dependencies for the NETStandard.Library package includes the System.Security.Cryptography.X509Certificates package, but this does not appear in the 1.0, 1.1 or 1.2 target platforms. You can also see this on the nuget.org web page for the package.

It's worth noting that the NETStandard.Library package will typically be referenced by projects, though not by libraries. It's also worth noting that the version number of the package does not correspond to the version of .NET Standard, it is just the package version.

So even if your project is targeting .NET Standard version 1.3 (or multi-targeting), you can still use the latest NETStandard.Library package version (1.6.1 at the time of writing). The package itself is versioned primarily because it also contains various tooling support such as the list of .NET Standard versions.

It's also worth bearing in mind that the NETStandard.Library is essentially only an API definition, it does not contain the actual implementation itself - that comes from the underlying platform that implements the standard, such as the .NET Framework or .NET Core.

If you download one of the packages referenced in the NETStandard.Library package, System.Collections for example, and open up the NuGet package as before, you'll see there's a lot more to it than the Microsoft.AspNetCore metapackage. In particular, there's a lib folder and a ref folder:

What is the NETStandard.Library metapackage?

In a typical NuGet package, lib is where the actual dlls for the package would live. However, if we do a search for all the files in the lib folder, you can see that there aren't actually any dlls, just a whole load of empty placeholder files called _._ :

What is the NETStandard.Library metapackage?

So if there aren't any dlls in here, where are they? Taking a look through the ref folder you find a similar thing - mostly _._ placeholders. However, that's not entirely the case. The netstandard1.0 and netstandard1.3 folders do contain a dll (and a load of xml metadata files):

What is the NETStandard.Library metapackage?

But look at the size of that System.Collections.dll - only 42kb! Remember, the NETStandard.Library only includes reference assemblies, not the actual implementations. The implementation comes from the final platform you target; for example .NET Framework 4.6.1, .NET Core or Mono etc. The reference dlls just define the various APIs that these platforms must expose for a given version of .NET Standard.

You can see this for yourself by decompiling the contained dll using something like ILSpy. If you do that, you can see what looks like the source code for System.Collections, but without any method bodies, showing that this really is just a reference assembly:

What is the NETStandard.Library metapackage?
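To give a flavour of what that looks like, here is a hand-written sketch of the kind of output ILSpy shows (illustrative only, not the actual decompiled source):

// The full public API surface is declared, but there are no method bodies -
// the real implementation comes from the platform you eventually run on.
namespace System.Collections.Generic
{
    public class Stack<T> : IEnumerable<T>, System.Collections.ICollection, IReadOnlyCollection<T>
    {
        public Stack();
        public Stack(int capacity);
        public int Count { get; }
        public void Push(T item);
        public T Pop();
        public T Peek();
        public void Clear();
    }
}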

These placeholder assemblies are a key part of the .NET Standard infrastructure. They provide concrete APIs against which you can compile your projects, without tying you to a specific implementation (i.e. .NET Framework or .NET Core).

Final thoughts

If this all seems confusing and convoluted, that's because it is! It doesn't help that every time you think you've got your head around it, things have moved on, or are being changed or improved…

Having said that, most of this is more detail than you'll need. Generally, it's enough to understand the broad concept of .NET Standard, and the fact that it allows you to share code between multiple platforms.

There's a whole host of bits I haven't gone into, such as type forwarding, so if you want to get further into the details, and really try to understand what's going on, I suggest checking out the links below. In particular, I highly recommend the video series by Immo Landwerth on the subject.

Of course, when .NET Standard 2.0 is out, all this will change again, so brace yourself!


Anuraj Parameswaran: .editorconfig support in Visual Studio 2017

This post is about .editorconfig support in Visual Studio 2017. EditorConfig helps developers define and maintain consistent coding styles between different editors and IDEs. As part of productivity improvements in Visual Studio, Microsoft introduced support for .editorconfig file in Visual Studio 2017.


Anuraj Parameswaran: Live Unit Testing in Visual Studio 2017

This post is about Live Unit Testing in Visual Studio 2017. With VS2017, Microsoft released Live Unit Testing. Live Unit Testing automatically runs the impacted unit tests in the background as you edit code, and visualizes the results and code coverage, live in the editor.


Anuraj Parameswaran: Create a dotnet new project template in dotnet core

This post is about creating project template for the dotnet new command. As part of the new dotnet command, now you can create Empty Web app, API app, MS Test and Solution file as part of dotnet new command. This post is about creating a Web API template with Swagger support.


Damien Bowden: Testing an ASP.NET Core MVC Protobuf API using HTTPClient and xUnit

The article shows how to test an ASP.NET Core MVC API using xUnit and an HttpClient client, using Protobuf for the content formatters.

Code: https://github.com/damienbod/AspNetMvc6ProtobufFormatters

Posts in this series:

The test project tests the ASP.NET Core API produced here. xUnit is used as a test framework. The xUnit dependencies can be added to the test project using NuGet in Visual Studio 2017 as well as the Microsoft.AspNetCore.TestHost package. Microsoft provide nice docs about Integration testing ASP.NET Core.

When the NuGet packages have been added, you can view these in the csproj file, or install and update directly in this file. A reference to the project containing the API is also added to the test project.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
    <AssemblyName>AspNetCoreProtobuf.IntegrationTests</AssemblyName>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.0.0" />
    <PackageReference Include="xunit.runner.console" Version="2.2.0" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.2.0" />
    <PackageReference Include="xunit" Version="2.2.0" />
    <PackageReference Include="Microsoft.AspNetCore.TestHost" Version="1.1.1" />
    <PackageReference Include="protobuf-net" Version="2.1.0" />
    <PackageReference Include="xunit.runners" Version="2.0.0" />
  </ItemGroup>

  <ItemGroup>
    <ProjectReference Include="..\AspNetCoreProtobuf\AspNetCoreProtobuf.csproj" />
  </ItemGroup>

  <ItemGroup>
    <Service Include="{82a7f48d-3b50-4b1e-b82e-3ada8210c358}" />
  </ItemGroup>

</Project>

The TestServer is used to test the ASP.NET Core API. This is setup for all the API tests.

private readonly TestServer _server;
private readonly HttpClient _client;

public ProtobufApiTests()
{
	_server = new TestServer(
		new WebHostBuilder()
		.UseKestrel()
		.UseStartup<Startup>());
	_client = _server.CreateClient();
}

HTTP GET request test

The GetProtobufDataAndCheckProtobufContentTypeMediaType test sends a HTTP GET to the test server, and requests the content as application/x-protobuf. The result is deserialized using protobuf and the header and the expected result is checked.

[Fact]
public async Task GetProtobufDataAndCheckProtobufContentTypeMediaType()
{
	// Act
	_client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/x-protobuf"));
	var response = await _client.GetAsync("/api/values/1");
	response.EnsureSuccessStatusCode();

	var result = ProtoBuf.Serializer.Deserialize<ProtobufModelDto>(await response.Content.ReadAsStreamAsync());

	// Assert
	Assert.Equal("application/x-protobuf", response.Content.Headers.ContentType.MediaType );
	Assert.Equal("My first MVC 6 Protobuf service", result.StringValue);
}
		

HTTP POST request test

The PostProtobufData test method sends a HTTP POST request to the test server with a protobuf serialized content. The status code of the request is validated.

[Fact]
public void PostProtobufData()
{
	// Accept header: request the response body as Protobuf
	_client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/x-protobuf"));
	
	MemoryStream stream = new MemoryStream();
	ProtoBuf.Serializer.Serialize<ProtobufModelDto>(stream, new ProtobufModelDto
	{
		Id = 2,
		Name= "lovely data",
		StringValue = "amazing this ah"
	
	});

	// Reset the stream position before sending, otherwise the request body would be empty
	stream.Position = 0;
	HttpContent data = new StreamContent(stream);

	// HTTP POST with Protobuf Request Body
	var responseForPost = _client.PostAsync("api/Values", data).Result;

	Assert.True(responseForPost.IsSuccessStatusCode);
}

The tests can be executed or debugged in Visual Studio using the Test Explorer.

The tests can also be run with dotnet test from the command line.

C:\git\damienbod\AspNetCoreProtobufFormatters\src\AspNetCoreProtobuf.IntegrationTests>dotnet test
Build started, please wait...
Build completed.

Test run for C:\git\damienbod\AspNetCoreProtobufFormatters\src\AspNetCoreProtobuf.IntegrationTests\bin\Debug\netcoreapp1.1\AspNetCoreProtobuf.IntegrationTests.dll(.NETCoreApp,Version=v1.1)
Microsoft (R) Testausführungs-Befehlszeilentool Version 15.0.0.0
Copyright (c) Microsoft Corporation. Alle Rechte vorbehalten.

Die Testausführung wird gestartet, bitte warten...
[xUnit.net 00:00:00.5821132]   Discovering: AspNetCoreProtobuf.IntegrationTests
[xUnit.net 00:00:00.6841246]   Discovered:  AspNetCoreProtobuf.IntegrationTests
[xUnit.net 00:00:00.7273897]   Starting:    AspNetCoreProtobuf.IntegrationTests
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
      Request starting HTTP/1.1 GET http://
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[1]
      Executing action method AspNetCoreProtobuf.Controllers.ValuesController.Post (AspNetCoreProtobuf) with arguments (AspNetCoreProtobuf.Model.ProtobufModelDto) - ModelState is Valid
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[2]
      Executed action AspNetCoreProtobuf.Controllers.ValuesController.Post (AspNetCoreProtobuf) in 137.2264ms
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
      Request finished in 346.8796ms 200
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
      Request starting HTTP/1.1 GET http://
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[1]
      Executing action method AspNetCoreProtobuf.Controllers.ValuesController.Get (AspNetCoreProtobuf) with arguments (1) - ModelState is Valid
info: Microsoft.AspNetCore.Mvc.Internal.ObjectResultExecutor[1]
      Executing ObjectResult, writing value Microsoft.AspNetCore.Mvc.ControllerContext.
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[2]
      Executed action AspNetCoreProtobuf.Controllers.ValuesController.Get (AspNetCoreProtobuf) in 39.0599ms
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
      Request finished in 43.2983ms 200 application/x-protobuf
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
      Request starting HTTP/1.1 GET http://
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[1]
      Executing action method AspNetCoreProtobuf.Controllers.ValuesController.Get (AspNetCoreProtobuf) with arguments (1) - ModelState is Valid
info: Microsoft.AspNetCore.Mvc.Internal.ObjectResultExecutor[1]
      Executing ObjectResult, writing value Microsoft.AspNetCore.Mvc.ControllerContext.
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[2]
      Executed action AspNetCoreProtobuf.Controllers.ValuesController.Get (AspNetCoreProtobuf) in 1.4974ms
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
      Request finished in 3.6715ms 200 application/x-protobuf
[xUnit.net 00:00:01.5669956]   Finished:    AspNetCoreProtobuf.IntegrationTests

Tests gesamt: 3. Bestanden: 3. Fehler: 0. Übersprungen: 0.
Der Testlauf war erfolgreich.
Testausführungszeit: 2.7499 Sekunden

appveyor CI

The project can then be connected to any build server. AppVeyor is an easy one to set up and works well with GitHub projects. Create an account and select the GitHub repository to build. Add an appveyor.yml file to the root of your project and configure it as required. Docs can be found here:
https://www.appveyor.com/docs/build-configuration/

image: Visual Studio 2017
init:
  - git config --global core.autocrlf true
install:
  - ECHO %APPVEYOR_BUILD_WORKER_IMAGE%
  - dotnet --version
  - dotnet restore
build_script:
- dotnet build
before_build:
- appveyor-retry dotnet restore -v Minimal
test_script:
- cd src/AspNetCoreProtobuf.IntegrationTests
- dotnet test

The appveyor badges can then be used in your project md file.

|                           | Build                                                                                                                                                             |       
| ------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| .NET Core                 | [![Build status](https://ci.appveyor.com/api/projects/status/ihtrq4u81rtsty9k?svg=true)](https://ci.appveyor.com/project/damienbod/aspnetmvc6protobufformatters)  |

This would then be displayed in github as follows:

Links

https://developers.google.com/protocol-buffers/docs/csharptutorial

http://www.stackoverflow.com/questions/7774155/deserialize-long-string-with-protobuf-for-c-sharp-doesnt-work-properly-for-me

https://xunit.github.io/

https://www.appveyor.com/docs/build-configuration/

https://www.nuget.org/packages/protobuf-net/

https://github.com/mgravell/protobuf-net

http://teelahti.fi/using-google-proto3-with-aspnet-mvc/

https://github.com/damienpontifex/ProtobufFormatter/tree/master/src/ProtobufFormatter

http://www.strathweb.com/2014/11/formatters-asp-net-mvc-6/

http://blogs.msdn.com/b/webdev/archive/2014/11/24/content-negotiation-in-mvc-5-or-how-can-i-just-write-json.aspx

https://github.com/WebApiContrib/WebApiContrib.Formatting.ProtoBuf

https://damienbod.wordpress.com/2014/01/11/using-protobuf-net-media-formatter-with-web-api-2/

https://docs.microsoft.com/en-us/aspnet/core/testing/integration-testing



Andrew Lock: What is the Microsoft.AspNetCore metapackage?

What is the Microsoft.AspNetCore metapackage?

One of the packages added to many ASP.NET Core templates is Microsoft.AspNetCore. This post takes a quick look at that package and what it contains.

The Microsoft.AspNetCore package is often included as one of the standard project dependencies when starting a new ASP.NET Core project. It provides many of the packages necessary to stand up a basic ASP.NET Core application.

However this package does not contain any actual code or dlls itself. Instead, it simply contains a series of dependencies on other packages. By adding the package to your project you bring in all the packages it depends on, along with their dlls. This is called a metapackage.

You can see this for yourself by downloading the package and taking a look inside. Nupkg files are essentially just zip files, so you can download them and open them up. Just change the file extension to zip and open in Windows Explorer, or open them with your archive browser of choice:

What is the Microsoft.AspNetCore  metapackage?

As you can see, there's really not a lot of files inside. The main one you can see is the Microsoft.AspNetCore.nuspec. This contains the metadata details for the package, including all the package dependencies (you can also see the dependencies listed on nuget.org).

Specifically, the packages it lists are:

  • Microsoft.AspNetCore.Diagnostics
  • Microsoft.AspNetCore.Hosting
  • Microsoft.AspNetCore.Routing
  • Microsoft.AspNetCore.Server.IISIntegration
  • Microsoft.AspNetCore.Server.Kestrel
  • Microsoft.Extensions.Configuration.EnvironmentVariables
  • Microsoft.Extensions.Configuration.FileExtensions
  • Microsoft.Extensions.Configuration.Json
  • Microsoft.Extensions.Logging
  • Microsoft.Extensions.Logging.Console
  • Microsoft.Extensions.Options.ConfigurationExtensions
  • NETStandard.Library

Which versions of these packages you will receive depends on which version of the Microsoft.AspNetCore package you install. If you are working on the 'LTS' release version of ASP.NET Core, you will (currently) need the 1.0.4 version of the package. If on the 'Current' release version, you will want version 1.1.1.

These dependencies provide the initial basic libraries for setting up a basic ASP.NET Core server that uses the Kestrel web server and includes IIS Integration.

In terms of the application itself, with this package alone you can load application settings and environment variables into configuration, use the IOptions interface, and configure logging to the console.
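To make that concrete, below is a minimal sketch of an application that uses only what the metapackage brings in - Kestrel hosting with IIS integration, JSON and environment variable configuration, and console logging. The file name and the "Greeting" key are just illustrative:

using System.IO;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;

public class Program
{
    public static void Main(string[] args)
    {
        // Kestrel and IIS integration come from the metapackage's
        // Microsoft.AspNetCore.Server.* dependencies.
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}

public class Startup
{
    public IConfigurationRoot Configuration { get; }

    public Startup(IHostingEnvironment env)
    {
        // JSON file and environment variable providers come from the
        // Microsoft.Extensions.Configuration.* dependencies.
        Configuration = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: true)
            .AddEnvironmentVariables()
            .Build();
    }

    public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
    {
        // Console logging comes from Microsoft.Extensions.Logging.Console.
        loggerFactory.AddConsole();

        // "Greeting" is an illustrative configuration key.
        app.Run(async context =>
            await context.Response.WriteAsync(Configuration["Greeting"] ?? "Hello from the metapackage"));
    }
}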

For middleware, only the Microsoft.AspNetCore.Diagnostics package is included, which would allow adding middleware such as the ExceptionHandlerMiddleware, the DeveloperExceptionPageMiddleware and the StatusCodePagesMiddleware.

As you can see, the meta package is generally not sufficient by itself to build a complete application. You would typically use at least the Microsoft.AspNetCore.Mvc or Microsoft.AspNetCore.MvcCore package to add MVC capabilities to your application, and would often need a variety of other packages.

The metapackage is a trade-off: trying to find a useful collection of packages that are applicable to a wide range of applications, without bringing in a whole load of dependencies to projects that don't need them. It mostly just serves to reduce the number of explicit dependencies you need to add to your .csproj file. Obviously, as the metapackage takes dependencies on other packages it doesn't reduce the actual dependencies of your project, just how many of them are listed in the project file.

One of the dependencies of the Microsoft.AspNetCore package is the NETStandard.Library package, which is itself a metapackage. As that package is a bit complex, I'll discuss it in more detail in a follow up post.


Anuraj Parameswaran: What is new in Visual Studio 2017 for web developers?

This post is about new features of Visual Studio 2017 for Web Developers. The new features inclues ASP.NET Core tooling, CSProj support, Migration option from project.json to csproj, client side debugging improvements etc.


Anuraj Parameswaran: Create an offline installer for Visual Studio 2017

This post is about building an offline installer for Visual Studio 2017. On March 7th 2017, Microsoft introduced Visual Studio 2017. Unlike earlier versions of Visual Studio, Microsoft don’t offer an ISO image. This post will help you to install Visual Studio when you’re offline.


Andrew Lock: Supporting both LTS and Current releases for ASP.NET Core

Supporting both LTS and Current releases for ASP.NET Core

Some time ago, I wrote a post on how to use custom middleware to set various security headers in an ASP.NET Core application. This formed the basis for a small package on GitHub and NuGet that does just that, it adds standard headers to your responses like X-Frame-Options and X-XSS-Protection.

I recently updated the package to include the Referrer-Policy header, after seeing Scott Helme's great post on it. When I was doing so, I was reminded of a Pull Request made some time ago to the repo, that I had completely forgotten about (oops 😳):

Supporting both LTS and Current releases for ASP.NET Core

As you can see, this PR was upgrading the packages used in the package to the 'Current' Release of ASP.NET Core at the time. Discovering this again got me thinking about the new versioning approach to .NET Core, and how to support both versions of the framework as a library author.

The two tracks of .NET Core

.NET Core (and hence, ASP.NET Core) currently has two different release cadences. On the one hand, there is the Long Term Support (LTS) branch, which has a slow release cycle, and will only see bug fixes over its lifetime, no extra features. Only when a new (major) version of .NET Core ships will you see new features. The plus sides to using LTS are that it will be the most stable, and is supported by Microsoft for three years.

On the other hand, there is the Current branch, which is updated at a much faster cadence. This branch does see features added with subsequent releases, but you have to make sure you keep up with the releases to remain supported. Each release is only supported for 3 months once the next version is released, so you have to be sure to update your apps in a timely fashion.

You can think of the LTS branch as a sub-set of the Current branch, though this is not strictly true as patch releases are made to fix bugs in the LTS branch. So for the (hypothetical) Current branch releases:

  • 1.0.0 - First LTS release
  • 1.1.0
  • 1.1.1
  • 1.2.0
  • 2.0.0 - Second LTS release
  • 2.1.0

only the major versions will be LTS releases.

Package versioning

One of the complexities introduced by adopting the more modular approach to development taken in .NET Core, where everything is delivered as individual packages, is the fact that the individual libraries that go into a .NET Core release don't necessarily have the same package version as the release version.

I looked at this in a post about a patch release to the LTS branch (version 1.0.3). The upshot is that the actual packages that go into a release can have a variety of different versions. For example, in the 1.0.3 release, the following packages were all current:

"Microsoft.ApplicationInsights.AspNetCore" : "1.0.2",
"Microsoft.AspNet.Identity.AspNetCoreCompat" : "0.1.1",
"Microsoft.AspNet.WebApi.Client" : "5.2.2",
"Microsoft.AspNetCore.Antiforgery" : "1.0.2",
"Microsoft.Extensions.SecretManager.Tools" : "1.0.0-preview4-final",
"Microsoft.Extensions.WebEncoders" : "1.0.1",
"Microsoft.IdentityModel.Protocols.OpenIdConnect" : "2.0.0",

It's clear that versioning is a complex beast...

Picking package versions for a library

With this in mind, I was faced with deciding whether to upgrade the package versions of the various ASP.NET Core packages that the security headers library depends on. Specifically, these were originally:

"Microsoft.Extensions.Options": "1.0.0",
"Microsoft.Extensions.DependencyInjection.Abstractions": "1.0.0",
"Microsoft.AspNetCore.Http.Abstractions": "1.0.0"

The library itself uses some of the ASP.NET Core abstractions around dependency injection and IOptions, hence the dependencies on these libraries. However, the versions of the packages it was using were all 1.0.0. These all correspond to the first release on the LTS branch. The question was whether to upgrade these packages to a newer LTS version, to upgrade them to the latest Current branch package versions, or to just leave them as they were.

To be clear, the library itself does not depend on anything that is specific to any particular package version; it is using the types defined in the first LTS release and nothing from later releases.

The previous pull request I mentioned was to update the packages to match those on the Current release branch. My hesitation with doing so is that this could cause problems for users who are currently sticking to the LTS release branch, as I'll explain shortly.

NuGet dependency resolution

The problem all stems from the way NuGet resolves dependencies for packages, where different versions of a package are referenced by others. This is a complex problem, and there are some great docs covering it on the website which are well worth a read, but I'll try and explain the basic problem here.

Imagine you have two packages that provide you some middleware, say my SecurityHeadersMiddleware package, and the HttpCacheHeaders package (check it out on GitHub!). Both of these packages depend on the Microsoft.AspNetCore.Http.Abstractions package. Just considering these packages and your application, the dependency chain looks something like the following:

Supporting both LTS and Current releases for ASP.NET Core

Now, if both of the middleware packages depend on the same LTS version of Microsoft.AspNetCore.Http.Abstractions then there is no problem, NuGet knows which version to restore and everything is great. In reality though, the chances of that are relatively slim.

So what happens if I have updated the SecurityHeadersMiddleware package to depend on the Current release branch, say version 1.1.0? This is where the different NuGet rules kick in. (Honestly, check out the docs!)

Package dependencies are normally specified as a minimum version, so I would be saying I need at least version 1.1.0. NuGet tries to satisfy all the requirements, so if one package requires at least 1.0.0 and another requires at least 1.1.0, then it knows it can use the 1.1.0 package to satisfy all the requirements.

However, it's not quite as simple as that. NuGet uses a rule whereby the first package to specify a version for a package wins. So the package 'closest' to your application will 'win' when it comes to picking which version of a package is installed.

For example, in the image below, even though the HTTP Cache Headers package specifies a higher minimum version of Microsoft.AspNetCore.Http.Abstractions than the SecurityHeadersMiddleware, the lower version of 1.0.0 will be chosen, as it is further to the left in the graph.

Supporting both LTS and Current releases for ASP.NET Core

This behaviour can obviously cause problems, as it means packages could end up using an older version of a package than they specify as a dependency! Obviously it can also end up using a newer version of a package than it might expect. This theoretically should not be a problem, but in some cases it can cause issues.

Handling dependency graphs is tricky stuff…

Implications for users

So all this leads me back to my initial question - should I upgrade the package versions of the NetEscapades.AspNetCore.SecurityHeaders package? What implications would that have for people's code?

An important point to be aware of when using the Microsoft ASP.NET Core packages is that you must use all your packages on the same version - all LTS or all Current.

If I upgraded the package to use Current branch packages, and you used it in a project on the LTS branch, then the NuGet graph resolution rules could mean that you ended up using the Current release versions of the packages I referenced. That is not supported and could result in weird bugs.

For that reason, I decided to stay on the LTS packages. Now, having said that, if you use the package in a Current release project, it could technically be possible for this to result in a downgrade of the packages I reference. Also not good…

Luckily, if you get a downgrade, then you will be warned with a warning/error when you do a dotnet restore. You can easily fix this by adding an explicit reference to the offending package in your project. For example, if you had a warning about a downgrade from 1.1.0 to 1.0.0 with the Microsoft.AspNetCore.Http.Abstractions package, you could update your dependencies to include it explicitly:

{
    "NetEscapades.AspNetCore.SecurityHeaders" : "0.3.0", //<- depends on v1.0.0 ... 
    "Microsoft.AspNetCore.Http.Abstractions"  : "1.1.0"  //<- but v1.1.0 will win 
}

The explicit reference puts the dependent package further left on the dependency graph, and so that will be preferentially selected - version 1.1.0 will be installed even though NetEscapades.AspNetCore.SecurityHeaders depends on version 1.0.0.

Creating multiple versions

So does this approach make sense? For simple packages like my middleware, I think so. I don't need any of the features from later releases and it seems the easiest approach to manage.

Another obvious alternative would be to keep two concurrent versions of the package, one for the LTS branch, and another for the Current branch. After all, that's what happens with the actual packages that make up ASP.NET Core itself. I could have a version 1.0.0 for the LTS stream, and a version 1.1.0 for the Current stream.

The problem with that in my eyes, is that you are combining two separate streams - which are logically distinct - into a single version stream. It's not obvious that they are distinct, and things like the GUI for NuGet package manager in Visual Studio would not know they are distinct, so would always be prompting you to upgrade the LTS version packages.

Another alternative which fixes this might be to have two separate packages, say NetEscapades.AspNetCore.SecurityHeaders.LTS and NetEscapades.AspNetCore.SecurityHeaders.Current. That would play nicer in terms of keeping the streams separate, but just adds such an overhead to managing and releasing the project, that it doesn't seem worth the hassle.

Conclusion

So to summarise, I think I'm going to stick with targeting the LTS version of packages in any libraries on GitHub, but I'd be interested to hear what other people think. Different maintainers seem to be taking different tacks, so I'm not sure there's an obvious best practice yet. If there is, and I've just missed it, do let me know!


Damien Bowden: .NET Core logging to MySQL using NLog

This article shows how to log to MySQL in a .NET Core application using NLog.

Code: https://github.com/damienbod/AspNetCoreNlog

NLog posts in this series:

  1. ASP.NET Core logging with NLog and Microsoft SQL Server
  2. ASP.NET Core logging with NLog and Elasticsearch
  3. Settings the NLog database connection string in the ASP.NET Core appsettings.json
  4. ASP.NET Core, logging to MySQL using NLog
  5. .NET Core logging with NLog and PostgreSQL

Set up the MySQL database

MySQL Workbench can be used to add the schema ‘nlog’ which will be used for logging to the MySQL database. The user ‘damienbod’ is also required, which must match the defined user in the connection string. If you configure the MySQL database differently, then you need to change the connection string in the nlog.config file.

nlogmysql_02

You also need to create a log table. The following script can be used. If you decide to use NLog.Web in an ASP.NET Core application and add some extra properties or fields to the logs, then this script needs to be extended, as does the database target in the nlog.config.

CREATE TABLE `log` (
  `Id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `Application` varchar(50) DEFAULT NULL,
  `Logged` datetime DEFAULT NULL,
  `Level` varchar(50) DEFAULT NULL,
  `Message` varchar(512) DEFAULT NULL,
  `Logger` varchar(250) DEFAULT NULL,
  `Callsite` varchar(512) DEFAULT NULL,
  `Exception` varchar(512) DEFAULT NULL,
  PRIMARY KEY (`Id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;

Add NLog and the MySQL provider to the project.

The MySql.Data pre-release NuGet package can be used to log to MySQL. Add this to your project.

nlogmysql

The NLog.Web.AspNetCore package also needs to be added, or just NLog if you do not require any web extensions.

nlog.config

The database target needs to be configured to log to MySQL. The database provider is set to use the MySql.Data package which was downloaded using NuGet. If you're using a different MySQL provider, this needs to be changed. The connection string is also set here, which matches what was configured previously in the MySQL database using Workbench. If you read the connection string from the app settings, an NLog variable can be used here.

  <target name="database" xsi:type="Database"
              dbProvider="MySql.Data.MySqlClient.MySqlConnection, MySql.Data"
              connectionString="server=localhost;Database=nlog;user id=damienbod;password=1234"
             >

          <commandText>
              insert into nlog.log (
              Application, Logged, Level, Message,
              Logger, CallSite, Exception
              ) values (
              @Application, @Logged, @Level, @Message,
              @Logger, @Callsite, @Exception
              );
          </commandText>

          <parameter name="@application" layout="AspNetCoreNlog" />
          <parameter name="@logged" layout="${date}" />
          <parameter name="@level" layout="${level}" />
          <parameter name="@message" layout="${message}" />

          <parameter name="@logger" layout="${logger}" />
          <parameter name="@callSite" layout="${callsite:filename=true}" />
          <parameter name="@exception" layout="${exception:tostring}" />
      </target>

NLog can then be used in the application.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using NLog;
using NLog.Targets;

namespace ConsoleNLog
{
    public class Program
    {
        public static void Main(string[] args)
        {

            LogManager.Configuration.Variables["configDir"] = "C:\\git\\damienbod\\AspNetCoreNlog\\Logs";

            var logger = LogManager.GetLogger("console");
            logger.Warn("console logging is great");

            Console.WriteLine("log sent");
            Console.ReadKey();
        }
    }
}
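The console sample above configures NLog directly; in an ASP.NET Core application you would wire up the NLog.Web.AspNetCore extensions in the Startup class instead. A minimal sketch of the Configure method, assuming the same nlog.config with the MySQL database target:

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Logging;
using NLog;
using NLog.Extensions.Logging;
using NLog.Web;

public class Startup
{
    public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
    {
        // Forward the ASP.NET Core ILogger messages to NLog,
        // and so on to the MySQL database target.
        loggerFactory.AddNLog();
        app.AddNLogWeb();

        // The configDir variable is used by the file targets in the nlog.config.
        LogManager.Configuration.Variables["configDir"] = "C:\\git\\damienbod\\AspNetCoreNlog\\Logs";
    }
}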

Full nlog.config file:
https://github.com/damienbod/AspNetCoreNlog/blob/master/src/ConsoleNLogMySQL/nlog.config

Links

https://github.com/nlog/NLog/wiki/Database-target

https://github.com/NLog/NLog.Extensions.Logging

https://github.com/NLog

https://github.com/NLog/NLog/blob/38aef000f916bd5ffd8b80a5576afa2423192e84/examples/targets/Configuration%20API/Database/MSSQL/Example.cs

https://docs.asp.net/en/latest/fundamentals/logging.html

https://msdn.microsoft.com/en-us/magazine/mt694089.aspx

https://docs.asp.net/en/latest/fundamentals/configuration.html



Andrew Lock: Using routing DataTokens in ASP.NET Core

Using routing DataTokens in ASP.NET Core

ASP.NET Core uses routing to map incoming URLs to controllers and action methods, and also to generate URLs when you provide route parameters.

One of the lesser known features of the routing infrastructure is data tokens. These are additional values that can be associated with a particular route, but don't affect the process of URL matching or generation at all.

This post takes a brief look at data tokens and how to use them in your applications for providing supplementary information about a route, but generally speaking I recommend avoiding them if possible.

How to add data tokens to a route

Data tokens are specified when you define your global convention-based routes, in the call to UseMvc. For example, the following route adds a data token called Name to the default route:

app.UseMvc(routes =>  
{
    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}",
        defaults: null,
        constraints: null,
        dataTokens: new { Name = "default_route" });
});

This route just adds the standard default conventional route to the route collection but it also specifies the Name data token. Note that due to the available overloads, you have to explicitly provide values for defaults and constraints.

This route is functionally identical to the MapRoute version without dataTokens; the data tokens do not modify the way URLs are routed at all.

Accessing the data tokens from an action method

Whenever a route is used to map an incoming URL to an action method, the data tokens associated with the route are set. These can be accessed from the RouteData.DataTokens property on the Controller base class. This exposes the values as a RouteValueDictionary so you can access them by name. For example, you could retrieve and display the above data token as follows:

public class ProductController : Controller  
{
    public string Index()
    {
        var nameTokenValue = (string)RouteData.DataTokens["Name"];
        return nameTokenValue;
    }
}

As you can see, the data token needs to be cast to the appropriate Type it was defined as, in this case string.

This behaviour is different to that of the route parameter values. Route values are stored as strings, so the values need to be convertible to a string. Data tokens don't have this restriction, so you can store the values as any type you like and just cast when retrieving it.

Using data tokens to identify the selected route

So what can data tokens actually be used for? Well, fundamentally they are designed to help you associate state data with a specific route. The values aren't dynamic, so they don't change depending on the URL; instead, they are fixed for a given route.

This means you can use data tokens to determine which route was selected during routing. This may be useful if you have multiple routes that map to the same action method, and you need to know which route was selected.

Consider the following couple of routes. They are for two different URLs, but they match to the same action method, HomeController.Index:

app.UseMvc(routes =>  
{
    routes.MapRoute(
        name: "otherRoute",
        template: "fancy-other-route",
        defaults: new { controller = "Home", action = "Index" },
        constraints: null,
        dataTokens: new { routeOrigin = new RouteOrigin { Name = "fancy route" } });

    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}",
        defaults: null,
        constraints: null,
        dataTokens: new { routeOrigin = new RouteOrigin { Name = "default route" } });
});

Both routes set a data token of type RouteOrigin which is just a simple class, to demonstrate that data tokens can be complex types:

public class RouteOrigin  
{
    public string Name { get; set; }
}

So, if we make a request to the app at the URLs /, /Home, or /Home/Index, a data token is set with a Name of "default route". If we make a request to /fancy-other-route, then the same action method will be executed, but the data token will have the value "fancy route". To easily visualise these values, I created the HomeController as follows:

public class HomeController : Controller  
{
    public string Index()
    {
        var origin = (RouteOrigin)RouteData.DataTokens["routeOrigin"];
        return $"This is the Home controller.\nThe route data is '{origin.Name}'";
    }
}

If we hit the app at the two different paths, you can easily see the different data token values:

Using routing DataTokens in ASP.NET Core

This works for our global convention-based routes, but what if you are using attribute-based routing? How do we use data tokens then?

How to use data tokens with RouteAttributes?

The short answer is, you can't! You can use constraints and defaults when you define your routes using RouteAttributes by including them inline as part of the route template. But you can't define data tokens inline, so you can't use them with attribute routing.

The good news is that it really shouldn't be a problem. Attribute routing is often used when you are designing an API for consumption by various clients. It's good practice to have a well defined URL space when designing your APIs; that's one of the reasons attribute routing is suggested over conventional routing in this case.

A "well defined" URL space could mean a lot of things, but one of those would probably be not having multiple different URLs all executing the same action. If there's only one route that can be used to execute an action, then data tokens use their value. For example, the following API defines a route attribute for invoking the Get action.

public class InstrumentController  
{
    [HttpGet("/instruments")]
    public IList<string> Get()
    {
        return new List<string> { "Guitar", "Bass", "Drums" };
    }
}

Associating a data token with the route wouldn't give you any more information when this method is invoked. We know which route it came from, as there is only one possibility - the HttpGet Route attribute!

Note: If an action has a route attribute, it cannot be routed using conventional routing. That's how we know it's not routed from anywhere else.

When should I use data tokens?

I confess, I'm struggling with this section. Data tokens create a direct coupling between the routes and the action methods being executed. It seems like if your action methods are written in such a way as to depend on this route, you have bigger problems.

Also, the coupling is pretty insidious, as the data tokens are a hidden dependency that you have to know how to access. A more explicit approach might be to just set the values of appropriate route parameters.

For example, we could achieve virtually the same behaviour using explicit route parameters instead of data tokens. We could rewrite the routes as the following:

app.UseMvc(routes =>  
{
    routes.MapRoute(
        name: "otherRoute",
        template: "fancy-route-with-param",
        defaults: new
        {
            controller = "Home",
            action = "Other",
            routeOrigin = "fancy route"
        });

    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}",
        defaults: new { routeOrigin = "default" });
});

Here we are providing a route value for each route for the routeOrigin parameter. This will be explicitly bound to our action method if we define it like the following:

public class HomeController : Controller  
{
    public string Other(string routeOrigin)
    {
        return $"This is the Other action.\nThe route param is '{routeOrigin}'";
    }
}

We now have an explicit dependency on the routeOrigin parameter which is automatically populated for us:

Using routing DataTokens in ASP.NET Core

Now, I know this behaviour is not the same as when we used dataTokens. In this case, the routeOrigin parameter is actually bound using the normal model binding mechanism, and you can only use values that can be converted to/from strings. But personally, as I say I don't really see a need for data tokens. Either using the route value approach seems preferable, or alternatively straight dependency injection, depending on your requirements.

Do let me know in the comments if there's a use case I've missed here, as currently I can't really see it!


Damien Bowden: Implementing an Audit Trail using ASP.NET Core and Elasticsearch with NEST

This article shows how an audit trail can be implemented in ASP.NET Core which saves the audit documents to Elasticsearch using NEST.

Code: https://github.com/damienbod/AspNetCoreElasticsearchNestAuditTrail

Should I just use a logger?

Depends. If you just need to save requests, responses and application events, then a logger would be a better solution for this use case. I would use NLog as it provides everything you need, or could need, when working with ASP.NET Core.

If you only need to save business events/data of the application in the audit trail, then this solution could fit.

Using the Audit Trail

The audit trail is implemented so that it can be used easily. In the Startup class of the ASP.NET Core application, it is added to the application in the ConfigureServices method. The class library provides an extension method, AddAuditTrail, which can be configured as required. It takes two parameters: a bool parameter which defines whether a new index is created per day or per month to save the audit trail documents, and a second int parameter which defines how many of the previous indices are included in the alias used to select the audit trail items. If this is 0, all indices are included in the search.

Because the audit trail documents are grouped into different indices per day or per month, the amount of documents can be controlled in each index. Usually the application user requires only the last n days, or the last two months, of the audit trails, and so the search does not need to search through all the audit trail documents since the application began. This makes it possible to optimize the data as required, or even remove or archive old, unused audit trail indices.

public void ConfigureServices(IServiceCollection services)
{
	var indexPerMonth = false;
	var amountOfPreviousIndicesUsedInAlias = 3;
	services.AddAuditTrail<CustomAuditTrailLog>(options => 
		options.UseSettings(indexPerMonth, amountOfPreviousIndicesUsedInAlias)
	);

	services.AddMvc();
}

The AddAuditTrail extension method requires a model definition which will be used to save or retrieve the documents in Elasticsearch. The model must implement the IAuditTrailLog interface. This interface just forces you to implement the property Timestamp which is required for the audit logs.

The model can then be designed and defined as required. NEST attributes can be used for each of the properties in the model. Use the Keyword attribute if the text field should not be analyzed. If you must use enums, then save the string value and NOT the integer value to the persistence layer. If integer values are saved for the enums, then they cannot be used without knowing what each integer value represents, making the data dependent on the code.

using AuditTrail.Model;
using Nest;
using System;

namespace AspNetCoreElasticsearchNestAuditTrail
{
    public class CustomAuditTrailLog : IAuditTrailLog
    {
        public CustomAuditTrailLog()
        {
            Timestamp = DateTime.UtcNow;
        }

        public DateTime Timestamp { get; set; }

        [Keyword]
        public string Action { get; set; }

        public string Log { get; set; }

        public string Origin { get; set; }

        public string User { get; set; }

        public string Extra { get; set; }
    }
}
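For reference, the IAuditTrailLog interface mentioned above is tiny. A sketch of it, assuming it defines nothing more than the required Timestamp property described earlier:

using System;

namespace AuditTrail.Model
{
    public interface IAuditTrailLog
    {
        // The only member the audit trail requires from your model class.
        DateTime Timestamp { get; set; }
    }
}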

The audit trail can then be used anywhere in the application. The IAuditTrailProvider can be added in the constructor of the class and an audit document can be created using the AddLog method.

private readonly IAuditTrailProvider<CustomAuditTrailLog> _auditTrailProvider;

public HomeController(IAuditTrailProvider<CustomAuditTrailLog> auditTrailProvider)
{
	_auditTrailProvider = auditTrailProvider;
}

public IActionResult Index()
{
	var auditTrailLog = new CustomAuditTrailLog()
	{
		User = User.ToString(),
		Origin = "HomeController:Index",
		Action = "Home GET",
		Log = "home page called doing something important enough to be added to the audit log.",
		Extra = "yep"
	};

	_auditTrailProvider.AddLog(auditTrailLog);
	return View();
}
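
The IAuditTrailProvider<T> interface is not listed in the post; judging by the methods used above and by the AuditTrailProvider<T> implementation further down, it looks roughly like this:

public interface IAuditTrailProvider<T> where T : class
{
	// adds a single audit document to the current index
	void AddLog(T auditTrailLog);

	// returns the total number of documents matching the filter
	long Count(string filter = "*");

	// simple query string search with optional paging
	IEnumerable<T> QueryAuditLogs(string filter = "*", AuditTrailPaging auditTrailPaging = null);
}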

The audit trail documents can be viewed using QueryAuditLogs, which supports paging and uses a simple query string search that accepts wildcards. The AuditTrailSearch method returns an MVC view with the audit trail items in the model.

public IActionResult AuditTrailSearch(string searchString, int skip, int amount)
{

	var auditTrailViewModel = new AuditTrailViewModel
	{
		Filter = searchString,
		Skip = skip,
		Size = amount
	};

	if (skip > 0 || amount > 0)
	{
		var paging = new AuditTrailPaging
		{
			Size = amount,
			Skip = skip
		};

		auditTrailViewModel.AuditTrailLogs = _auditTrailProvider.QueryAuditLogs(searchString, paging).ToList();
		
		return View(auditTrailViewModel);
	}

	auditTrailViewModel.AuditTrailLogs = _auditTrailProvider.QueryAuditLogs(searchString).ToList();
	return View(auditTrailViewModel);
}

How is the Audit Trail implemented?

The AuditTrailExtensions class implements the extension methods used to initialize the audit trail. It accepts the options and registers the interfaces and classes with the IoC container used by ASP.NET Core.

Generics are used so that any model class can be used to save the audit trail data, as this typically changes from project to project. The type T must implement the IAuditTrailLog interface.

using System;
using Microsoft.Extensions.DependencyInjection.Extensions;
using Microsoft.Extensions.Localization;
using AuditTrail;
using AuditTrail.Model;

namespace Microsoft.Extensions.DependencyInjection
{
    public static class AuditTrailExtensions
    {
        public static IServiceCollection AddAuditTrail<T>(this IServiceCollection services) where T : class, IAuditTrailLog
        {
            if (services == null)
            {
                throw new ArgumentNullException(nameof(services));
            }

            return AddAuditTrail<T>(services, setupAction: null);
        }

        public static IServiceCollection AddAuditTrail<T>(
            this IServiceCollection services,
            Action<AuditTrailOptions> setupAction) where T : class, IAuditTrailLog
        {
            if (services == null)
            {
                throw new ArgumentNullException(nameof(services));
            }

            services.TryAdd(new ServiceDescriptor(
                typeof(IAuditTrailProvider<T>),
                typeof(AuditTrailProvider<T>),
                ServiceLifetime.Transient));

            if (setupAction != null)
            {
                services.Configure(setupAction);
            }
            return services;
        }
    }
}

When a new audit trail log is added, it uses the index defined in the _indexName field.

public void AddLog(T auditTrailLog)
{
	var index = new IndexName()
	{
		Name = _indexName
	};

	var indexRequest = new IndexRequest<T>(auditTrailLog, index);

	var response = _elasticClient.Index(indexRequest);
	if (!response.IsValid)
	{
		throw new ElasticsearchClientException("Add auditlog disaster!");
	}
}

The _indexName field is defined using the date pattern, either days or months depending on your options.

private const string _alias = "auditlog";
private string _indexName = $"{_alias}-{DateTime.UtcNow.ToString("yyyy-MM-dd")}";

index definition per month:

if(_options.Value.IndexPerMonth)
{
	_indexName = $"{_alias}-{DateTime.UtcNow.ToString("yyyy-MM")}";
}

When querying the audit trail logs, a simple query string query is used to find and select the audit trail documents required for the view. This query type is used so that wildcards can be applied. The method accepts a query filter and paging options. If you search without any filter, all documents defined in the alias (the included indices) are returned. Because a simple query string query is used, the filter also accepts operators like AND and OR.

public IEnumerable<T> QueryAuditLogs(string filter = "*", AuditTrailPaging auditTrailPaging = null)
{
	var from = 0;
	var size = 10;
	EnsureAlias();
	if(auditTrailPaging != null)
	{
		from = auditTrailPaging.Skip;
		size = auditTrailPaging.Size;
		if(size > 1000)
		{
			// max limit 1000 items
			size = 1000;
		}
	}
	var searchRequest = new SearchRequest<T>(Indices.Parse(_alias))
	{
		Size = size,
		From = from,
		Query = new QueryContainer(
			new SimpleQueryStringQuery
			{
				Query = filter
			}
		),
		Sort = new List<ISort>
			{
				new SortField { Field = TimestampField, Order = SortOrder.Descending }
			}
	};

	var searchResponse = _elasticClient.Search<T>(searchRequest);

	return searchResponse.Documents;
}

The alias is also updated in the search query, if required. Depending on your configuration, the alias uses all the audit trail indices, or just the last n days or n months. This check uses a static field. If the alias needs to be updated, the new alias is created, which also deletes the old one.

private void EnsureAlias()
{
	if (_options.Value.IndexPerMonth)
	{
		if (aliasUpdated.Date < DateTime.UtcNow.AddMonths(-1).Date)
		{
			aliasUpdated = DateTime.UtcNow;
			CreateAlias();
		}
	}
	else
	{
		if (aliasUpdated.Date < DateTime.UtcNow.AddDays(-1).Date)
		{
			aliasUpdated = DateTime.UtcNow;
			CreateAlias();
		}
	}           
}

Here’s how the alias is created for all indices of the audit trail.

private void CreateAliasForAllIndices()
{
	var response = _elasticClient.AliasExists(new AliasExistsRequest(new Names(new List<string> { _alias })));
	if (!response.IsValid)
	{
		throw response.OriginalException;
	}

	if (response.Exists)
	{
		_elasticClient.DeleteAlias(new DeleteAliasRequest(Indices.Parse($"{_alias}-*"), _alias));
	}

	var responseCreateIndex = _elasticClient.PutAlias(new PutAliasRequest(Indices.Parse($"{_alias}-*"), _alias));
	if (!responseCreateIndex.IsValid)
	{
		throw response.OriginalException;
	}
}

Here is the full AuditTrailProvider class which implements the audit trail:

using AuditTrail.Model;
using Elasticsearch.Net;
using Microsoft.Extensions.Options;
using Nest;
using Newtonsoft.Json.Converters;
using System;
using System.Collections.Generic;
using System.Linq;

namespace AuditTrail
{
    public class AuditTrailProvider<T> : IAuditTrailProvider<T> where T : class
    {
        private const string _alias = "auditlog";
        private string _indexName = $"{_alias}-{DateTime.UtcNow.ToString("yyyy-MM-dd")}";
        private static Field TimestampField = new Field("timestamp");
        private readonly IOptions<AuditTrailOptions> _options;

        private ElasticClient _elasticClient { get; }

        public AuditTrailProvider(
           IOptions<AuditTrailOptions> auditTrailOptions)
        {
            _options = auditTrailOptions ?? throw new ArgumentNullException(nameof(auditTrailOptions));

            if(_options.Value.IndexPerMonth)
            {
                _indexName = $"{_alias}-{DateTime.UtcNow.ToString("yyyy-MM")}";
            }

            var pool = new StaticConnectionPool(new List<Uri> { new Uri("http://localhost:9200") });
            var connectionSettings = new ConnectionSettings(
                pool,
                new HttpConnection(),
                new SerializerFactory((jsonSettings, nestSettings) => jsonSettings.Converters.Add(new StringEnumConverter())))
              .DisableDirectStreaming();

            _elasticClient = new ElasticClient(connectionSettings);
        }

        public void AddLog(T auditTrailLog)
        {
            var index = new IndexName()
            {
                Name = _indexName
            };

            var indexRequest = new IndexRequest<T>(auditTrailLog, index);

            var response = _elasticClient.Index(indexRequest);
            if (!response.IsValid)
            {
                throw new ElasticsearchClientException("Add auditlog disaster!");
            }
        }

        public long Count(string filter = "*")
        {
            EnsureAlias();
            var searchRequest = new SearchRequest<T>(Indices.Parse(_alias))
            {
                Size = 0,
                Query = new QueryContainer(
                    new SimpleQueryStringQuery
                    {
                        Query = filter
                    }
                ),
                Sort = new List<ISort>
                    {
                        new SortField { Field = TimestampField, Order = SortOrder.Descending }
                    }
            };

            var searchResponse = _elasticClient.Search<AuditTrailLog>(searchRequest);

            return searchResponse.Total;
        }

        public IEnumerable<T> QueryAuditLogs(string filter = "*", AuditTrailPaging auditTrailPaging = null)
        {
            var from = 0;
            var size = 10;
            EnsureAlias();
            if(auditTrailPaging != null)
            {
                from = auditTrailPaging.Skip;
                size = auditTrailPaging.Size;
                if(size > 1000)
                {
                    // max limit 1000 items
                    size = 1000;
                }
            }
            var searchRequest = new SearchRequest<T>(Indices.Parse(_alias))
            {
                Size = size,
                From = from,
                Query = new QueryContainer(
                    new SimpleQueryStringQuery
                    {
                        Query = filter
                    }
                ),
                Sort = new List<ISort>
                    {
                        new SortField { Field = TimestampField, Order = SortOrder.Descending }
                    }
            };

            var searchResponse = _elasticClient.Search<T>(searchRequest);

            return searchResponse.Documents;
        }

        private void CreateAliasForAllIndices()
        {
            var response = _elasticClient.AliasExists(new AliasExistsRequest(new Names(new List<string> { _alias })));
            if (!response.IsValid)
            {
                throw response.OriginalException;
            }

            if (response.Exists)
            {
                _elasticClient.DeleteAlias(new DeleteAliasRequest(Indices.Parse($"{_alias}-*"), _alias));
            }

            var responseCreateIndex = _elasticClient.PutAlias(new PutAliasRequest(Indices.Parse($"{_alias}-*"), _alias));
            if (!responseCreateIndex.IsValid)
            {
                throw response.OriginalException;
            }
        }

        private void CreateAlias()
        {
            if (_options.Value.AmountOfPreviousIndicesUsedInAlias > 0)
            {
                CreateAliasForLastNIndices(_options.Value.AmountOfPreviousIndicesUsedInAlias);
            }
            else
            {
                CreateAliasForAllIndices();
            }
        }

        private void CreateAliasForLastNIndices(int amount)
        {
            var responseCatIndices = _elasticClient.CatIndices(new CatIndicesRequest(Indices.Parse($"{_alias}-*")));
            var records = responseCatIndices.Records.ToList();
            List<string> indicesToAddToAlias = new List<string>();
            for(int i = amount;i>0;i--)
            {
                if (_options.Value.IndexPerMonth)
                {
                    var indexName = $"{_alias}-{DateTime.UtcNow.AddMonths(-i + 1).ToString("yyyy-MM")}";
                    if(records.Exists(t => t.Index == indexName))
                    {
                        indicesToAddToAlias.Add(indexName);
                    }
                }
                else
                {
                    var indexName = $"{_alias}-{DateTime.UtcNow.AddDays(-i + 1).ToString("yyyy-MM-dd")}";                   
                    if (records.Exists(t => t.Index == indexName))
                    {
                        indicesToAddToAlias.Add(indexName);
                    }
                }
            }

            var response = _elasticClient.AliasExists(new AliasExistsRequest(new Names(new List<string> { _alias })));
            if (!response.IsValid)
            {
                throw response.OriginalException;
            }

            if (response.Exists)
            {
                _elasticClient.DeleteAlias(new DeleteAliasRequest(Indices.Parse($"{_alias}-*"), _alias));
            }

            Indices multipleIndicesFromStringArray = indicesToAddToAlias.ToArray();
            var responseCreateIndex = _elasticClient.PutAlias(new PutAliasRequest(multipleIndicesFromStringArray, _alias));
            if (!responseCreateIndex.IsValid)
            {
                throw responseCreateIndex.OriginalException;
            }
        }

        private static DateTime aliasUpdated = DateTime.UtcNow.AddYears(-50);

        private void EnsureAlias()
        {
            if (_options.Value.IndexPerMonth)
            {
                if (aliasUpdated.Date < DateTime.UtcNow.AddMonths(-1).Date)
                {
                    aliasUpdated = DateTime.UtcNow;
                    CreateAlias();
                }
            }
            else
            {
                if (aliasUpdated.Date < DateTime.UtcNow.AddDays(-1).Date)
                {
                    aliasUpdated = DateTime.UtcNow;
                    CreateAlias();
                }
            }           
        }
    }
}

Testing the audit log

The created audit trails can be checked using the following HTTP GET requests:

Counts all the audit trail entries in the alias.
http://localhost:9200/auditlog/_count

Shows all the audit trail indices. If you add up the document counts of the indices used in the alias, the total must match the count from the alias.
http://localhost:9200/_cat/indices/auditlog*

You can also start the application and display the audit trail logs in the Audit Trail logs MVC view.

01_audittrailview

This view is just a quick test; in a proper implementation you would need to localize the timestamp display and add proper paging to the view.

Notes, improvements

If lots of audit trail documents are written at once, a bulk insert could be used to add the documents in batches, as most loggers do. You should also define a strategy for how old audit trail indices are cleaned up or archived. The creation of the alias could also be optimized, depending on your audit trail data and on how you clean up old audit trail indices.
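
To sketch the bulk insert idea (this is not part of the sample code), a batch variant of AddLog could use NEST's IndexMany helper, which writes all documents to the index in a single bulk request:

// hypothetical bulk variant of AddLog for the AuditTrailProvider
public void AddLogs(IEnumerable<T> auditTrailLogs)
{
	// send all documents to the current index in one bulk request
	var response = _elasticClient.IndexMany(auditTrailLogs, _indexName);
	if (!response.IsValid)
	{
		throw new ElasticsearchClientException("Bulk add auditlog disaster!");
	}
}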

Links:

https://www.elastic.co/guide/en/elasticsearch/reference/5.2/indices-aliases.html

https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-simple-query-string-query.html

https://docs.microsoft.com/en-us/aspnet/core/

https://www.elastic.co/products/elasticsearch

https://github.com/elastic/elasticsearch-net

https://www.nuget.org/packages/NLog.Web.AspNetCore/



Dominick Baier: NDC London 2017

As always – NDC was a very good conference. Brock and I did a workshop, two talks and an interview. Here are the relevant links:

Check our website for more training dates.


Filed under: .NET Security, ASP.NET, IdentityModel, IdentityServer, OAuth, OpenID Connect, WebAPI


Anuraj Parameswaran: Aspect oriented programming with ASP.NET Core

This post is about implementing simple AOP (Aspect Oriented Programming) with ASP.NET Core. AOP is a programming paradigm that aims to increase modularity by allowing the separation of cross-cutting concerns. It does so by adding additional behavior to existing code (an advice) without modifying the code itself. An example of a cross-cutting concern is logging, which is frequently used in distributed applications to aid debugging by tracing method calls. AOP helps you to implement logging without affecting your actual code.
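
To illustrate the idea (this code is not from the post), the cross-cutting logging concern can be kept in a decorator around a hypothetical IOrderService, so the original implementation stays untouched:

using System;

public interface IOrderService
{
    void PlaceOrder(int orderId);
}

public class OrderService : IOrderService
{
    public void PlaceOrder(int orderId)
    {
        // business logic only - no logging concerns here
        Console.WriteLine($"Order {orderId} placed.");
    }
}

// the logging "advice" lives in the decorator instead of the service itself
public class LoggingOrderService : IOrderService
{
    private readonly IOrderService _inner;

    public LoggingOrderService(IOrderService inner)
    {
        _inner = inner;
    }

    public void PlaceOrder(int orderId)
    {
        Console.WriteLine($"Calling PlaceOrder({orderId})");
        _inner.PlaceOrder(orderId);
        Console.WriteLine($"Finished PlaceOrder({orderId})");
    }
}

In ASP.NET Core, the decorated service could then be registered in ConfigureServices, for example with services.AddScoped<IOrderService>(_ => new LoggingOrderService(new OrderService())).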


Damien Bowden: Hot Module Replacement with Angular and Webpack

This article shows how HMR, or Hot Module Replacement, can be used together with Angular and Webpack.

Code: VS2017 angular 4.x | VS2017 angular 2.x

Blogs in this series:

2017.03.18: Updated to angular 4.0.0

See here for full history:
https://github.com/damienbod/AngularWebpackVisualStudio/blob/master/CHANGELOG.md

package.json npm file

The webpack-dev-server from Kees Kluskens is added to the devDependencies in the npm package.json file. The webpack-dev-server package implements and supports the HMR feature.

"devDependencies": {
  ...
  "webpack": "^2.2.1",
  "webpack-dev-server": "2.2.1"
},

In the scripts section of the package.json, the start command is configured to start the dotnet server and also the webpack-dev-server with the --hot and --inline parameters.

See the webpack-dev-server documentation for more information about the possible parameters.

The dotnet server is only required because this demo application uses a Web API service implemented in ASP.NET Core.

"start": "concurrently \"webpack-dev-server --hot --inline --port 8080\" \"dotnet run\" "

webpack dev configuration

The devServer section is added to the module.exports in webpack.dev.js. This configures the webpack-dev-server as required. The webpack-dev-server can be configured here or via the command line options, so you as a developer can decide which works better for you.

devServer: {
	historyApiFallback: true,
	contentBase: path.join(__dirname, '/wwwroot/'),
	watchOptions: {
		aggregateTimeout: 300,
		poll: 1000
	}
},

The output in the module.exports also needs to be configured correctly for the webpack-dev-server to work. If the './' path is used in the path option of the output section, the webpack-dev-server will not start.

output: {
	path: __dirname +  '/wwwroot/',
	filename: 'dist/[name].bundle.js',
	chunkFilename: 'dist/[id].chunk.js',
	publicPath: '/'
},

The module variable should be declared, and the module.hot check needs to be added to main.ts.

// Entry point for JiT compilation.
declare var System: any;

import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app/app.module';

// Enables Hot Module Replacement.
declare var module: any;
if (module.hot) {
    module.hot.accept();
}

platformBrowserDynamic().bootstrapModule(AppModule);

Running the application

Build the application using the webpack dev build. This can be done in the command line. Before building, you need to install all the npm packages using npm install.

$ npm run build-dev

The npm script build-dev is defined in the package.json file and uses the webpack-dev script which does a development build.

"build-dev": "npm run webpack-dev",
"webpack-dev": "set NODE_ENV=development && webpack",

Now the server can be started using the start script.

$ npm start

hmr_angular_01

The application is now running on localhost with port 8080 as defined.

http://localhost:8080/home

If, for example, the color is changed in app.scss, the bundles will be reloaded in the browser without a full refresh.
hmr_angular2_03

Links

https://webpack.js.org/concepts/hot-module-replacement/

https://webpack.js.org/configuration/dev-server/#devserver

https://github.com/webpack/webpack-dev-server

https://www.sitepoint.com/beginners-guide-to-webpack-2-and-module-bundling/




Dominick Baier: IdentityModel.OidcClient v2 & the OpenID RP Certification

A couple of weeks ago I started re-writing (and re-designing) my OpenID Connect & OAuth 2 client library for native applications. The library follows the guidance from the OpenID Connect and OAuth 2.0 for native Applications specification.

Main features are:

  • Support for OpenID Connect authorization code and hybrid flow
  • Support for PKCE
  • NetStandard 1.4 library, which makes it compatible with x-plat .NET Core, desktop .NET, Xamarin iOS & Android (and UWP soon)
  • Configurable policy to lock down security requirements (e.g. requiring at_hash or c_hash, policies around discovery etc.)
  • either stand-alone mode (request generation and response processing) or support for pluggable (system) browser implementations
  • support for pluggable logging via .NET ILogger

In addition, starting with v2 – OidcClient is also now certified by the OpenID Foundation for the basic and config profile.

oid-l-certification-mark-l-cmyk-150dpi-90mm

It also passes all conformance tests for the code id_token grant type (hybrid flow) – but since I don’t support the other hybrid flow combinations (e.g. code token or code id_token token), I couldn’t certify for the full hybrid profile.

For maximum transparency, I checked in my conformance test runner along with the source code. Feel free to try/verify yourself.

The latest version of OidcClient is the dalwhinnie release (courtesy of my whisky semver scheme). Source code is here.

I am waiting a couple more days for feedback – and then I will release the final 2.0.0 version. If you have some spare time, please give it a try (there’s a console client included and some more samples here <use the v2 branch for the time being>). Thanks!
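
To give a rough idea of the stand-alone mode (request generation and response processing), usage looks something like the following sketch. The authority, client id and redirect URI are placeholder values, and the exact option names may differ slightly from the final 2.0.0 API:

using System;
using System.Threading.Tasks;
using IdentityModel.OidcClient;

public static class OidcClientSketch
{
    public static async Task RunAsync()
    {
        var options = new OidcClientOptions
        {
            Authority = "https://demo.identityserver.io", // placeholder
            ClientId = "native.client",                   // placeholder
            RedirectUri = "http://127.0.0.1:7890/",       // placeholder
            Scope = "openid profile api"
        };

        var client = new OidcClient(options);

        // generate the authorize request (state, nonce, PKCE etc.)
        var state = await client.PrepareLoginAsync();

        // open state.StartUrl in a browser of your choice, capture the response URL,
        // then hand it back for validation and token processing
        var responseUrl = Console.ReadLine();
        var result = await client.ProcessResponseAsync(responseUrl, state);

        if (!result.IsError)
        {
            Console.WriteLine(result.AccessToken);
        }
    }
}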


Filed under: .NET Security, IdentityModel, OAuth, OpenID Connect, WebAPI


Damien Bowden: Docker compose with ASP.NET Core, EF Core and the PostgreSQL image

This article shows how an ASP.NET Core application with a PostgreSQL database can be set up using docker as the deployment container for both the web and database parts of the application. docker-compose is used to connect the 2 containers, and the application is built using Visual Studio 2017.

Code: https://github.com/damienbod/AspNetCorePostgreSQLDocker

Setting up the PostgreSQL docker container from the command line

2017.02.03: Updated to VS2017 RC3 msbuild3

The PostgreSQL docker image can be started or set up from the command line simply by defining the required environment parameters and the port which can be used to connect to PostgreSQL. A named volume called pgdata is also defined in the following command. The container is called postgres-server.

$ docker run -d -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=damienbod \
 --name postgres-server -p 5432:5432 -v pgdata:/var/lib/postgresql/data \
 --restart=always postgres

You can check all your local volumes with the following docker command:

$ docker volume ls

The docker containers can be viewed by running the docker ps -a:

$ docker ps -a

Then you can check the docker container for the postgres-server by using the logs command and the id of the container. Only the first few characters of the container id are required for docker to find the container.

$ docker logs <docker_id>

If you would like to view the docker container configuration and its properties, the inspect command can be used:

$ docker inspect <docker_id>

When developing docker applications, you will regularly need to clean up images, containers and volumes. Here are some quick commands that are used regularly.

If you need to find the dangling volumes:

$ docker volume ls -qf dangling=true

A volume can be removed using the volume id:

$ docker volume rm <volume id>

Clean up container and volume (dangerous as you might not want to remove the data):

$ docker rm -fv <docker id>

Configure the database using pgAdmin

Open pgAdmin to configure a new user in PostgreSQL, which will be used for the application.

EF7_PostgreSQL_01

Right-click your user and click Properties to set the password.

EF7_PostgreSQL_02

Now a PostgreSQL database using docker is ready to be used. This is not the only way to do this; a better way would be to use a Dockerfile and docker-compose.

Creating the PostgreSQL docker image using a Dockerfile

Usually you do not want to set everything up by hand. You can do everything described above using a Dockerfile and docker-compose. The PostgreSQL docker image for this project is created using a Dockerfile and docker-compose. The Dockerfile uses the latest official postgres docker image and adds the required database scripts to the docker-entrypoint-initdb.d folder inside the container. When PostgreSQL initializes, it executes these scripts.

FROM postgres:latest
EXPOSE 5432
COPY dbscripts/10-init.sql /docker-entrypoint-initdb.d/10-init.sql
COPY dbscripts/20-damienbod.sql /docker-entrypoint-initdb.d/20-database.sql

The docker-compose file defines the image, ports and a named volume for this image. The POSTGRES_PASSWORD environment variable is required.

version: '2'

services:
  damienbodpostgres:
     image: damienbodpostgres
     restart: always
     build:
       context: .
       dockerfile: Dockerfile
     ports:
       - 5432:5432
     environment:
         POSTGRES_PASSWORD: damienbod
     volumes:
       - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:

Now switch to the directory where the docker-compose file is and build.

$ docker-compose build

If you want to deploy, you could create a new docker tag for the postgres image. Use your docker hub name if you have one.

$ docker ps -a
$ docker tag damienbodpostgres damienbod/postgres-server

You can check your images and should see it in your list.

$ docker images

Creating the ASP.NET Core application

An ASP.NET Core application was created in VS2017. The EF Core and the PostgreSQL nuget packages were added as required. The Docker support was also added using the Visual Studio tooling.

<Project ToolsVersion="15.0" Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.1</TargetFramework>
    <PreserveCompilationContext>true</PreserveCompilationContext>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Routing" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Server.IISIntegration" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Server.Kestrel" Version="1.1.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore" Version="1.1.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Design" Version="1.1.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Relational" Version="1.1.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="1.0.0-msbuild3-final" />
    <PackageReference Include="Npgsql.EntityFrameworkCore.PostgreSQL" Version="1.1.0" />
    <PackageReference Include="Npgsql.EntityFrameworkCore.PostgreSQL.Design" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.EnvironmentVariables" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.FileExtensions" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Logging" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Logging.Console" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Options.ConfigurationExtensions" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="1.1.0" />
  </ItemGroup>
  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="1.0.0-msbuild3-final" />
    <DotNetCliToolReference Include="Microsoft.Extensions.SecretManager.Tools" Version="1.0.0-msbuild3-final" />
    <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="1.0.0-msbuild3-final" />
  </ItemGroup>
</Project>

The EF Core context is set up to access the 2 tables defined in PostgreSQL.

using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;

namespace AspNetCorePostgreSQLDocker
{
    // >dotnet ef migration add testMigration in AspNet5MultipleProject
    public class DomainModelPostgreSqlContext : DbContext
    {
        public DomainModelPostgreSqlContext(DbContextOptions<DomainModelPostgreSqlContext> options) :base(options)
        {
        }
        
        public DbSet<DataEventRecord> DataEventRecords { get; set; }

        public DbSet<SourceInfo> SourceInfos { get; set; }

        protected override void OnModelCreating(ModelBuilder builder)
        {
            builder.Entity<DataEventRecord>().HasKey(m => m.DataEventRecordId);
            builder.Entity<SourceInfo>().HasKey(m => m.SourceInfoId);

            // shadow properties
            builder.Entity<DataEventRecord>().Property<DateTime>("UpdatedTimestamp");
            builder.Entity<SourceInfo>().Property<DateTime>("UpdatedTimestamp");

            base.OnModelCreating(builder);
        }

        public override int SaveChanges()
        {
            ChangeTracker.DetectChanges();

            updateUpdatedProperty<SourceInfo>();
            updateUpdatedProperty<DataEventRecord>();

            return base.SaveChanges();
        }

        private void updateUpdatedProperty<T>() where T : class
        {
            var modifiedSourceInfo =
                ChangeTracker.Entries<T>()
                    .Where(e => e.State == EntityState.Added || e.State == EntityState.Modified);

            foreach (var entry in modifiedSourceInfo)
            {
                entry.Property("UpdatedTimestamp").CurrentValue = DateTime.UtcNow;
            }
        }
    }
}

The database used here was created by the Dockerfile scripts, which are executed when the docker container initializes. This could also be done with EF Core migrations.

$ dotnet ef migrations add postgres-scripts

$ dotnet ef database update

The connection string used in the application must use the service name defined for the database in the docker-compose file as its host. When debugging locally using IIS without docker, you would have to supply a way of switching the connection string hosts. The host postgresserver is defined in this demo, and so is used in the connection string.

 "DataAccessPostgreSqlProvider": "User ID=damienbod;Password=damienbod;Host=postgresserver;Port=5432;Database=damienbod;Pooling=true;"

Now the application can be built. You need to check that it can be published to the release bin folder, which is used by the docker-compose.

Setup the docker-compose

The docker-compose file for the application defines the web tier, the database server and the network settings for docker. The postgresserver service is built using the damienbodpostgres image. It exposes the standard PostgreSQL port, as defined before. The aspnetcorepostgresqldocker web application runs on port 5001 and depends on postgresserver. This is the ASP.NET Core application built in Visual Studio 2017.

version: '2'

services:
  postgresserver:
     image: damienbodpostgres
     restart: always
     ports:
       - 5432:5432
     environment:
         POSTGRES_PASSWORD: damienbod
     volumes:
       - pgdata:/var/lib/postgresql/data
     networks:
       - mynetwork

  aspnetcorepostgresqldocker:
     image: aspnetcorepostgresqldocker
     ports:
       - 5001:80
     build:
       context: ./src/AspNetCorePostgreSQLDocker
       dockerfile: Dockerfile
     links:
       - postgresserver
     depends_on:
       - "postgresserver"
     networks:
       - mynetwork

volumes:
  pgdata:

networks:
  mynetwork:
     driver: bridge

Now the application can be started, deployed or tested. The following command will start the application in detached mode.

$ docker-compose up -d

Once the application is started, you can test it using:

http://localhost:5001/index.html

01_postgresqldocker

You can add some data using Postman
02_postgresqldocker

POST http://localhost:5001/api/dataeventrecords
{
  "DataEventRecordId":3,
  "Name":"Funny data",
  "Description":"yes",
  "Timestamp":"2015-12-27T08:31:35Z",
   "SourceInfo":
  { 
    "SourceInfoId":0,
    "Name":"Beauty",
    "Description":"second Source",
    "Timestamp":"2015-12-23T08:31:35+01:00",
    "DataEventRecords":[]
  },
 "SourceInfoId":0 
}

And the data can be viewed using

http://localhost:5001/api/dataeventrecords

03_postgresqldocker

Or you can view the data using pgAdmin

04_postgresqldocker
Links

https://hub.docker.com/_/postgres/

https://www.andreagrandi.it/2015/02/21/how-to-create-a-docker-image-for-postgresql-and-persist-data/

https://docs.docker.com/engine/examples/postgresql_service/

http://stackoverflow.com/questions/25540711/docker-postgres-pgadmin-local-connection

http://www.postgresql.org

http://www.pgadmin.org/

https://github.com/npgsql/npgsql

https://docs.docker.com/engine/tutorials/dockervolumes/



Damien Bowden: Creating an ASP.NET Core Docker application and deploying to Azure

This blog is a simple step-through which creates an ASP.NET Core Docker image using Visual Studio 2017, deploys it to Docker Hub and then deploys the image to Azure.

Thanks to Malte Lantin for his fantastic posts on MSDN. See the links at the end of this post.

Code: https://github.com/damienbod/AspNetCoreDockerAzureDemo

2017.02.03: Updated to VS2017 RC3 msbuild3

Step 1: Create a Docker project in Visual Studio 2017 using ASP.NET Core

In the example, an ASP.NET Core Visual Studio 2017 project using msbuild is used as the demo application. Then the Docker support is added to the project using Visual Studio 2017.

Right click the project, Add/Docker Project Support
firstazuredocker_01

Update the docker files to use the ASP.NET Core image and the correct docker version as required. More information can be found here:

http://www.jeffreyfritz.com/2017/01/docker-compose-api-too-old-for-windows/

https://damienbod.com/2016/12/24/creating-an-asp-net-core-1-1-vs2017-docker-application/

Now the application will be built in a layer on top of the microsoft/aspnetcore image.

Dockerfile:

FROM microsoft/aspnetcore:1.0.3
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-bin/Release/PublishOutput} .
ENTRYPOINT ["dotnet", "AngularClient.dll"]

docker-compose.yml

version: '2'

services:
  angularclient:
    image: angularclient
    build:
      context: .
      dockerfile: Dockerfile

Once the project is built and ready, it can be deployed to docker hub. Do a release build of the projects.

Step 2: Build a docker image and deploy to docker hub

Before you can deploy the docker image to docker hub, you need to have, or create a docker hub account.

Then open up the console and create a docker tag for your application. Replace damienbod with your docker hub user name. The docker image angularclient, created from Visual Studio 2017, will be tagged to damienbod/aspnetcorethingsclient.

docker tag angularclient damienbod/aspnetcorethingsclient

Now login to docker hub in the command line:

docker login

Once logged in, the image can be pushed to docker hub. Again replace damienbod with your docker hub name.

docker push damienbod/aspnetcorethingsclient

Once deployed, you can view this on docker hub.

https://hub.docker.com/u/damienbod/

firstazuredocker_02

For more information on docker images and containers:

https://docs.docker.com/engine/getstarted/step_four/

Step 3: Deploy to Azure

Log in to https://portal.azure.com/, click the New button and search for Web App On Linux. We want to deploy a docker container to this, using our docker image.

Select the + New and search for Web App On Linux.
firstazuredocker_03

Then select it. Do not click Create until the docker container has been configured.

firstazuredocker_04

Now configure the docker container. Add the newly created docker hub image to the text field.

firstazuredocker_05

Click Create and the application will be deployed on Azure. Now the application can be used.

firstazuredocker_07

And the application runs as required.

http://thingsclient.azurewebsites.net/home

firstazuredocker_06

Notes:

The Visual Studio 2017 docker tooling is still rough and has problems when using newer versions of docker, or the msbuild 1.1 versions etc, but it is still in RC and will improve before the release. Next steps are now to use CI to automatically complete all these steps, add security, and use docker compose for multiple container deployments.

Links

https://blogs.msdn.microsoft.com/malte_lantin/2017/01/12/create-you-first-asp-net-core-app-and-host-it-in-a-linux-docker-container-on-microsoft-azure-part-13/

https://blogs.msdn.microsoft.com/malte_lantin/2017/01/13/create-you-first-asp-net-core-app-and-host-it-in-a-linux-docker-container-on-microsoft-azure-part-23/

https://blogs.msdn.microsoft.com/malte_lantin/2017/01/13/create-you-first-asp-net-core-app-and-host-it-in-a-linux-docker-container-on-microsoft-azure-part-33/

https://hub.docker.com/

Orchestrating multi service asp.net core application using docker-compose

Debugging Asp.Net core apps running in Docker Containers using VS 2017

https://docs.docker.com/engine/getstarted/step_four/

https://stefanprodan.com/2016/aspnetcore-cd-pipeline-docker-hub/



Dominick Baier: Platforms where you can run IdentityServer4

There is some confusion about where, and on which platform/OS you can run IdentityServer4 – or more generally speaking: ASP.NET Core.

IdentityServer4 is ASP.NET Core middleware – and ASP.NET Core (despite its name) runs on the full .NET Framework 4.5.x and upwards or .NET Core.

If you are using the full .NET Framework you are tied to Windows – but have the advantage of using a platform that you (and your devs, customers, support staff etc) already know well. It is just a .NET based web app at this point.

If you are using .NET Core, you get the benefits of the new stack including side-by-side versioning and cross-platform. But there is a learning curve involved getting to know .NET Core and its tooling.


Filed under: .NET Security, ASP.NET, IdentityServer, OpenID Connect, WebAPI


Henrik F. Nielsen: ASP.NET WebHooks V1 RTM (Link)

ASP.NET WebHooks V1 RTM was announced a little while back. WebHooks provide a simple pub/sub model for wiring together Web APIs and services with your code. A WebHook can be used to get notified when a file has changed in Dropbox, a code change has been committed to GitHub, a payment has been initiated in PayPal, a card has been created in Trello, and much more. When subscribing, you provide a callback URI where you want to be notified. When an event occurs, an HTTP POST request is sent to your callback URI with information about what happened so that your Web app can act accordingly. WebHooks happen without polling and with no need to hold open a network connection while waiting for notifications.

Microsoft ASP.NET WebHooks makes it easier to both send and receive WebHooks as part of your ASP.NET application.

In addition to hosting your own WebHook server, ASP.NET WebHooks are part of Azure Functions where you can process WebHooks without hosting or managing your own server! You can even go further and host an Azure Bot Service using Microsoft Bot Framework for writing cool bots talking to your customers!

The WebHook code targets ASP.NET Web API 2 and ASP.NET MVC 5, and is available as Open Source on GitHub, and as Nuget packages. For feedback, fixes, and suggestions, you can use GitHub, StackOverflow using the tag asp.net-webhooks, or send me a tweet.

For the full announcement, please see the blog Announcing Microsoft ASP.NET WebHooks V1 RTM.

Have fun!

Henrik


Dominick Baier: Bootstrapping OpenID Connect: Discovery

OpenID Connect clients and APIs need certain configuration values to initiate the various protocol requests and to validate identity and access tokens. You can either hard-code these values (e.g. the URL to the authorize and token endpoint, key material etc..) – or get those values dynamically using discovery.

Using discovery has advantages in case one of the needed values changes over time. This will be definitely the case for the key material you use to sign your tokens. In that scenario you want your token consumers to be able to dynamically update their configuration without having to take them down or re-deploy.

The idea is simple: every OpenID Connect provider should offer a JSON document under the /.well-known/openid-configuration URL below its base address (often also called the authority). This document has information about the issuer name, endpoint URLs, key material and capabilities of the provider, e.g. which scopes or response types it supports.

Try https://demo.identityserver.io/.well-known/openid-configuration as an example.

Our IdentityModel library has a little helper class that allows loading and parsing a discovery document, e.g.:

var disco = await DiscoveryClient.GetAsync("https://demo.identityserver.io");
Console.WriteLine(disco.Json);

It also provides strongly typed accessors for most elements, e.g.:

Console.WriteLine(disco.TokenEndpoint);

..or you can access the elements by name:

Console.WriteLine(disco.Json.TryGetString("introspection_endpoint"));

It also gives you access to the key material and the various properties of the JSON encoded key set – e.g. iterating over the key ids:

foreach (var key in disco.KeySet.Keys)
{
    Console.WriteLine(key.Kid);
}

Discovery and security
As you can imagine, the discovery document is a nice target for an attacker. Being able to manipulate the endpoint URLs or the key material would ultimately result in a compromise of a client or an API.

As opposed to e.g. WS-Federation/WS-Trust metadata, the discovery document is not signed. Instead OpenID Connect relies on transport security for authenticity and integrity of the configuration data.

Recently we’ve been involved in a penetration test against client libraries, and one technique the pen-testers used was compromising discovery. Based on their feedback, the following extra checks should be done when consuming a discovery document:

  • HTTPS must be used for the discovery endpoint and all protocol endpoints
  • The issuer name should match the authority specified when downloading the document (that’s actually a MUST in the discovery spec)
  • The protocol endpoints should be “beneath” the authority – and not on a different server or URL (this could be especially interesting for multi-tenant OPs)
  • A key set must be specified

Based on that feedback, we added a configurable validation policy to DiscoveryClient that defaults to the above recommendations. If for whatever reason (e.g. dev environments) you need to relax a setting, you can use the following code:

var client = new DiscoveryClient("http://dev.identityserver.internal");
client.Policy.RequireHttps = false;
 
var disco = await client.GetAsync();

Btw – you can always connect over HTTP to localhost and 127.0.0.1 (but this is also configurable).

Source code here, nuget here.


Filed under: OAuth, OpenID Connect, WebAPI


Dominick Baier: Trying IdentityServer4

We have a number of options for how you can experiment or get started with IdentityServer4.

Starting point
It all starts at https://identityserver.io – from here you can find all below links as well as our next workshop dates, consulting, production support etc.

Source code
You can find all the source code in our IdentityServer organization on github. Especially IdentityServer4 itself, the samples, and the access token validation middleware.

Nuget
Here’s a list of all our nugets – here’s IdentityServer4, here’s the validation middleware.

Documentation and tutorials
Documentation can be found here. Especially useful to get started are our tutorials.

Demo Site
We have a demo site at https://demo.identityserver.io that runs the latest version of IdentityServer4. We have also pre-configured a number of client types, e.g. hybrid and authorization code (with and without PKCE) as well as implicit and client credentials flow. You can use this site to try IdentityServer with your favourite OpenID Connect client library. There is also a test API that you can call with our access tokens.

Compatibility check
Here’s a repo that contains all permutations of IdentityServer3 and 4, Katana and ASP.NET Core Web APIs and JWTs and reference tokens. We use this test harness to ensure cross version compatibility. Feel free to try it yourself.

CI builds
Our CI feed can be found here.

HTH


Filed under: .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: IdentityServer4.1.0.0

It’s done.

Release notes here.

Nuget here.

Docs here.

I am off to holidays.

See you next year.


Filed under: .NET Security, ASP.NET, OAuth, OpenID Connect, WebAPI


Dominick Baier: IdentityServer4 is now OpenID Certified

As of today – IdentityServer4 is official certified by the OpenID Foundation. Release of 1.0 will be this Friday!

More details here.

oid-l-certification-mark-l-cmyk-150dpi-90mm


Filed under: .NET Security, OAuth, WebAPI


Dominick Baier: Identity vs Permissions

We often see people misusing IdentityServer as an authorization/permission management system. This is troublesome – here’s why.

IdentityServer (hence the name) is really good at providing a stable identity for your users across all applications in your system. And with identity I mean immutable identity (at least for the lifetime of the session) – typical examples would be a user id (aka the subject id), a name, department, email address, customer id etc…

IdentityServer is not so well suited for letting clients or APIs know what this user is allowed to do – e.g. create a customer record, delete a table, read a certain document etc…

And this is not inherently a weakness of IdentityServer – but IdentityServer is a token service, and it’s a fact that claims and especially tokens are not a particularly good medium for transporting such information. Here are a couple of reasons:

  • Claims are supposed to model the identity of a user, not permissions
  • Claims are typically simple strings – you often want something more sophisticated to model authorization information or permissions
  • Permissions of a user are often different depending on which client or API is being used – putting them all into a single identity or access token is confusing and leads to problems. The same permission might even have a different meaning depending on who is consuming it
  • Permissions can change over the life time of a session, but the only way to get a new token is to make a roundtrip to the token service. This often requires some UI interaction which is not preferable
  • Permissions and business logic often overlap – where do you want to draw the line?
  • The only party that knows exactly about the authorization requirements of the current operation is the actual code where it happens – the token service can only provide coarse grained information
  • You want to keep your tokens small. Browser URL length restrictions and bandwidth are often limiting factors
  • And last but not least – it is easy to add a claim to a token. It is very hard to remove one. You never know if somebody already took a hard dependency on it. Every single claim you add to a token should be scrutinized.

In other words – keep permissions and authorization data out of your tokens. Add the authorization information to your context once you get closer to the resource that actually needs the information. And even then, it is tempting to model permissions using claims (the Microsoft services and frameworks kind of push you into that direction) – keep in mind that a simple string is a very limiting data structure. Modern programming languages have much better constructs than that.

What about roles?
That’s a very common question. Roles are a bit of a grey area between identity and authorization. My rule of thumb is that if a role is a fundamental part of the user identity that is of interest to every part of your system – and role membership does not change, or changes only infrequently – it is a candidate for a claim in a token. Examples could be Customer vs Employee – or Patient vs Doctor vs Nurse.

Every other usage of roles – especially where the role membership would be different based on the client or API being used – is pure authorization data and should be avoided. If you realize that the number of roles of a user is high – or growing – avoid putting them into the token.

Conclusion
Design for a clean separation of identity and permissions (which is just a re-iteration of authentication vs authorization). Acquire authorization data as close as possible to the code that needs it – only there you can make an informed decision what you really need.

I also often get the question if we have a similar flexible solution to authorization as we have with IdentityServer for authentication – and the answer is – right now – no. But I have the feeling that 2017 will be our year to finally tackle the authorization problem. Stay tuned!


Filed under: .NET Security, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: Optimizing Identity Tokens for size

Generally speaking, you want to keep your (identity) tokens small. They often need to be transferred via length constrained transport mechanisms – especially the browser URL which might have limitations (e.g. 2 KB in IE). You also need to somehow store the identity token for the length of a session if you want to use the post logout redirect feature at logout time.

Therefore the OpenID Connect specification suggests the following (in section 5.4):

The Claims requested by the profile, email, address, and phone scope values are returned from the UserInfo Endpoint, as described in Section 5.3.2, when a response_type value is used that results in an Access Token being issued. However, when no Access Token is issued (which is the case for the response_type value id_token), the resulting Claims are returned in the ID Token.

IOW – if only an identity token is requested, put all claims into the token. If however an access token is requested as well (e.g. via id_token token or code id_token), it is OK to remove the claims from the identity token and rather let the client use the userinfo endpoint to retrieve them.

That’s how we always handled identity token generation in IdentityServer by default. You could then override our default behaviour by setting the AlwaysIncludeInIdToken flag on the ScopeClaim class.

When we did the configuration re-design in IdentityServer4, we asked ourselves if this override feature is still required. Times have changed a bit and the popular client libraries out there (e.g. the ASP.NET Core OpenID Connect middleware or Brock’s JS client) automatically use the userinfo endpoint anyways as part of the authentication process.

So we removed it.

Shortly after that, several people brought to our attention that they were actually relying on that feature and are now missing their claims in the identity token without a way to change configuration. Sorry about that.

Post RC5, we brought this feature back – it is now a client setting, and not a claims setting anymore. It will be included in RTM next week and documented in our docs.

I hope this post explains our motivation, and some background, why this behaviour existed in the first place.


Filed under: .NET Security, IdentityServer, OpenID Connect, WebAPI


Dominick Baier: IdentityServer4 and ASP.NET Core 1.1

aka RC5 – last RC – promised!

The update from ASP.NET Core 1.0 (aka LTS – long term support) to ASP.NET Core 1.1 (aka Current) didn’t go so well (at least IMHO).

There were a couple of breaking changes both on the APIs as well as in behaviour. Especially around challenge/response based authentication middleware and EF Core.

Long story short – it was not possible for us to make IdentityServer support both versions. That’s why we decided to move to 1.1, which includes a bunch of bug fixes, and will also most probably be the version that ships with the new Visual Studio.

To be more specific – we build against ASP.NET Core 1.1 and the 1.0.0-preview2-003131 SDK.

Here’s a guide that describes how to update your host to 1.1. Our docs and samples have been updated.


Filed under: ASP.NET, OAuth, OpenID Connect, WebAPI


Ben Foster: Bare metal APIs with ASP.NET Core MVC

ASP.NET Core MVC now provides a true "one asp.net" framework that can be used for building both APIs and websites. But what if you only want to build an API?

Most of the ASP.NET Core MVC tutorials I've seen advise using the Microsoft.AspNetCore.Mvc package. While this does indeed give you what you need to build APIs, it also gives you a lot more:

  • Microsoft.AspNetCore.Mvc.ApiExplorer
  • Microsoft.AspNetCore.Mvc.Cors
  • Microsoft.AspNetCore.Mvc.DataAnnotations
  • Microsoft.AspNetCore.Mvc.Formatters.Json
  • Microsoft.AspNetCore.Mvc.Localization
  • Microsoft.AspNetCore.Mvc.Razor
  • Microsoft.AspNetCore.Mvc.TagHelpers
  • Microsoft.AspNetCore.Mvc.ViewFeatures
  • Microsoft.Extensions.Caching.Memory
  • Microsoft.Extensions.DependencyInjection
  • NETStandard.Library

A few of these packages are still needed if you're building APIs but many are specific to building full websites.

After installing the above package we typically register MVC in Startup.ConfigureServices like so:

services.AddMvc();

This code is responsible for wiring up the necessary MVC services with the application container. Let's look at what this actually does:

public static IMvcBuilder AddMvc(this IServiceCollection services)
{
    var builder = services.AddMvcCore();

    builder.AddApiExplorer();
    builder.AddAuthorization();

    AddDefaultFrameworkParts(builder.PartManager);

    // Order added affects options setup order

    // Default framework order
    builder.AddFormatterMappings();
    builder.AddViews();
    builder.AddRazorViewEngine();
    builder.AddCacheTagHelper();

    // +1 order
    builder.AddDataAnnotations(); // +1 order

    // +10 order
    builder.AddJsonFormatters();

    builder.AddCors();

    return new MvcBuilder(builder.Services, builder.PartManager);
}

Again, most of the service registrations refer to components used for rendering web pages.

Bare Metal APIs

It turns out that the ASP.NET team anticipated that developers may only want to build APIs and nothing else, so they gave us the ability to do just that.

First of all, rather than installing Microsoft.AspNetCore.Mvc, only install Microsoft.AspNetCore.Mvc.Core. This will give you the bare MVC middleware (routing, controllers, HTTP results) and not a lot else.

In order to process JSON requests and return JSON responses we also need the Microsoft.AspNetCore.Mvc.Formatters.Json package.

Then, to add both the core MVC middleware and JSON formatter, add the following code to ConfigureServices:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvcCore()
        .AddJsonFormatters();
}

The final thing to do is to change your controllers to derive from ControllerBase instead of Controller. This provides a base class for MVC controllers without any View support.
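
A minimal API controller then looks something like this (the ValuesController here is just an illustrative example):

using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class ValuesController : ControllerBase
{
    // GET api/values
    [HttpGet]
    public IActionResult Get()
    {
        return Ok(new[] { "value1", "value2" });
    }
}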

Looking at the final list of packages in project.json, you can see we really don't need that much after all, especially given most of these are related to configuration and logging:

"Microsoft.AspNetCore.Mvc.Core": "1.1.0",
"Microsoft.AspNetCore.Mvc.Formatters.Json": "1.1.0",
"Microsoft.AspNetCore.Server.IISIntegration": "1.1.0",
"Microsoft.AspNetCore.Server.Kestrel": "1.1.0",
"Microsoft.Extensions.Configuration.EnvironmentVariables": "1.1.0",
"Microsoft.Extensions.Configuration.FileExtensions": "1.1.0",
"Microsoft.Extensions.Configuration.Json": "1.1.0",
"Microsoft.Extensions.Configuration.CommandLine": "1.1.0",
"Microsoft.Extensions.Logging": "1.1.0",
"Microsoft.Extensions.Logging.Console": "1.1.0",
"Microsoft.Extensions.Logging.Debug": "1.1.0"

You can find the complete code on GitHub.


Dominick Baier: New in IdentityServer4: Resource-based Configuration

For RC4 we decided to re-design our configuration object model for resources (formerly known as scopes).

I know, I know – we are not supposed to make fundamental breaking changes once reaching the RC status – but hey – we kind of had our “DNX” moment, and realized that we either change this now – or never.

Why did we do that?
We spent the last couple of years explaining OpenID Connect and OAuth 2.0 based architectures to hundreds of students in training classes, attendees at conferences, fellow developers, and customers from all types of industries.

While most concepts are pretty clear and make total sense – scopes were the most confusing part for most people. The abstract nature of a scope as well as the fact that the term scope has a somewhat different meaning in OpenID Connect and OAuth 2.0, made this concept really hard to grasp.

Maybe it’s also partly our fault, that we stayed very close to the spec-speak with our object model and abstraction level, that we forced that concept onto every user of IdentityServer.

Long story short – every time I needed to explain scopes, I said something like "a scope is a resource a client wants to access", and "there are two types of scopes: identity-related and API-related".

This got us thinking if it would make more sense to introduce the notion of resources in IdentityServer, and get rid of scopes.

What did we do?
Before RC4 – our configuration object model had three main parts: users, clients, and scopes (and there were two types of scopes – identity and resource – and some overlapping settings between them).

Starting with RC4 – the configuration model does not have scope anymore as a top-level concept, but rather identity resources and API resources.

terminology

We think this is a more natural way (and language) to model a typical token-based system.

From our new docs:

User
A user is a human that is using a registered client to access resources.

Client
A client is a piece of software that requests tokens from IdentityServer – either for authenticating a user (requesting an identity token) or for accessing a resource (requesting an access token). A client must first be registered with IdentityServer before it can request tokens.

Resources
Resources are something you want to protect with IdentityServer – either identity data of your users (like user id, name, email..), or APIs.

Enough talk, show me the code!
Pre-RC4, you would have used a scope store to return a flat list of scopes. Now the new resource store deals with two different resource types: IdentityResource and ApiResource.

Let’s start with identity – standard scopes used to be defined like this:

public static IEnumerable<Scope> GetScopes()
{
    return new List<Scope>
    {
        StandardScopes.OpenId,
        StandardScopes.Profile
    };
}

..and now:

public static IEnumerable<IdentityResource> GetIdentityResources()
{
    return new List<IdentityResource>
    {
        new IdentityResources.OpenId(),
        new IdentityResources.Profile()
    };
}

Not very different. Now let’s define a custom identity resource with associated claims:

var customerProfile = new IdentityResource(
    name:        "profile.customer",
    displayName: "Customer profile",
    claimTypes:  new[] { "name", "status", "location" });

This is all that’s needed for 90% of all identity resources you will ever define. If you need to tweak details, you can set various properties on the IdentityResource class.
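
As an illustration only – a sketch using object initializer syntax, with property names as they appear in the current IdentityResource class (they may differ slightly between pre-release builds):

var customerProfile = new IdentityResource
{
    Name = "profile.customer",
    DisplayName = "Customer profile",

    // emphasize on the consent screen and make the scope mandatory
    Emphasize = true,
    Required = true,

    // claims associated with this identity resource
    UserClaims = { "name", "status", "location" },

    // hide it from the discovery document if you don't want to advertise it
    ShowInDiscoveryDocument = false
};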

Let’s have a look at the API resources. You used to define a resource-scope like this:

public static IEnumerable<Scope> GetScopes()
{
    return new List<Scope>
    {
        new Scope
        {
            Name = "api1",
            DisplayName = "My API #1",
 
            Type = ScopeType.Resource
        }
    };
}

..and the new way:

public static IEnumerable<ApiResource> GetApis()
{
    return new[]
    {
        new ApiResource("api1", "My API #1")
    };
}

Again – for the simple case there is not a huge difference. The ApiResource object model starts to become more powerful when you have advanced requirements like APIs with multiple scopes (and maybe different claims based on the scope) and support for introspection, e.g.:

public static IEnumerable<ApiResource> GetApis()
{
    return new[]
    {
        new ApiResource
        {
            Name = "calendar",
 
            // secret for introspection endpoint
            ApiSecrets =
            {
                new Secret("secret".Sha256())
            },
 
            // claims to include in access token
            UserClaims =
            {
                JwtClaimTypes.Name,
                JwtClaimTypes.Email
            },
 
            // API has multiple scopes
            Scopes =
            {
                new Scope
                {
                    Name = "calendar.read_only",
                    DisplayName = "Read only access to the calendar"
                },
                new Scope
                {
                    Name = "calendar.full_access",
                    DisplayName = "Full access to the calendar",
                    Emphasize = true,
 
                    // include additional claim for that scope
                    UserClaims =
                    {
                        "status"
                    }
                }
            }
        }
    };
}

In other words – we reversed the configuration approach: you now model APIs (which might have scopes) – not scopes (that happen to represent an API).
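
Registration follows the same split – a rough sketch assuming the in-memory configuration helpers (AddInMemoryIdentityResources / AddInMemoryApiResources); GetClients() here is just a placeholder for your existing client configuration:

public void ConfigureServices(IServiceCollection services)
{
    services.AddIdentityServer()
        // identity resources and API resources are now registered separately
        .AddInMemoryIdentityResources(GetIdentityResources())
        .AddInMemoryApiResources(GetApis())
        // clients are configured as before (GetClients is a placeholder)
        .AddInMemoryClients(GetClients());
}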

We like the new model much better, as it more closely reflects how you architect a token-based system. We hope you like it too – and sorry for moving the cheese ;)

As always – give us feedback on the issue tracker. RTM is very close.


Filed under: .NET Security, ASP.NET, OAuth, Uncategorized, WebAPI

