Andrew Lock: How to use multiple hosting environments on the same machine in ASP.NET Core

How to use multiple hosting environments on the same machine in ASP.NET Core

This short post is in response to a comment I received on a post I wrote a while ago, about how to set the hosting environment in ASP.NET Core. It's a question I've heard a couple of times, so I thought I'd write it up here.

The question by Denis Zavershinskiy is as follows:

Do you know if there is a way to overwrite environment variable name? For example, I want my CoolProject to take environment name not from ASPNETCORE_ENVIRONMENT but from COOL_PROJ_ENV. Is it possible?

The answer to that question is a little nuanced. If you already have an app deployed, and want to switch the environment for it without changing other apps on that machine, then you can't do it with Environment variables. Those obviously affect the whole environment!

tl;dr; Create a custom configuration object in your Program.cs file, load the environment variable using a custom key, and call UseEnvironment on the WebHostBuilder.

However, if this is a capability you think you will need, you can use a similar approach to the one I use in that post to set the environment using command line arguments.

This approach involves building a new IConfiguration object, and passing that in to the WebHostBuilder on application startup. This lets you load configuration from any source, just as you would in your normal startup method, and pass that configuration to the WebHostBuilder using UseConfiguration. The WebHostBuilder will look for a key named "Environment" in this configuration, and use that as the environment.

For example, if you use the following configuration:

var config = new ConfigurationBuilder()  
    .AddCommandLine(args)
    .Build();

var host = new WebHostBuilder()  
    .UseConfiguration(config)
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseKestrel()
    .UseIISIntegration()
    .UseStartup<Startup>()
    .Build();

You can pass any setting value from the command line with this setup, including the "environment":

> dotnet run --environment "MyCustomEnv"

Project TestApp (.NETCoreApp,Version=v1.0) was previously compiled. Skipping compilation.

Hosting environment: MyCustomEnv  
Content root path: C:\Projects\Repos\MyCoolProj\src\MyCoolProj  
Now listening on: http://localhost:5000  
Application started. Press Ctrl+C to shut down.  

This is fine if you can use command line arguments like this, but what if you want to use environment variables? Again, the problem is that they're shared between all apps on a machine.

However, you can use a similar approach, coupled with the UseEnvironment extension method, to set a different environment for each app on the machine. This will override the ASPNETCORE_ENVIRONMENT value, if it exists, with the value you provide.

public class Program  
{
    public static void Main(string[] args)
    {
        const string EnvironmentKey = "MYCOOLPROJECT_ENVIRONMENT";

        var config = new ConfigurationBuilder()
            .AddEnvironmentVariables()
            .Build();

        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseEnvironment(config[EnvironmentKey])
            .UseIISIntegration()
            .UseStartup<Startup>()
            .UseApplicationInsights()
            .Build();

        host.Run();
    }
}

To test this out, I added the MYCOOLPROJECT_ENVIRONMENT key with a value of Staging to the launchSettings.json file VS uses when running the app:

{
  "profiles": {
    "EnvironmentTest": {
      "commandName": "Project",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        "MYCOOLPROJECT_ENVIRONMENT": "Staging"
      },
      "applicationUrl": "http://localhost:56172"
    }
  }
}

Running the app using F5 shows that we have correctly picked up the Staging value using our custom environment variable:

Hosting environment: Staging  
Content root path: C:\Users\Sock\Repos\MyCoolProj\src\MyCoolProj  
Now listening on: http://localhost:56172  
Application started. Press Ctrl+C to shut down.  

With this approach you can effectively have a per-app environment variable that you can use to configure the environment for an app individually.

Summary

On shared hosting, you may be in a situation where you want to use a different IHostingEnvironment for each of the apps on the same machine. You can achieve this with the approach outlined in this post: build an IConfiguration object from a custom environment variable, and pass the value to the WebHostBuilder.UseEnvironment extension method.


Andrew Lock: Using Razor Pages to simplify basic actions in ASP.NET Core 2.0 preview 1

Using Razor Pages to simplify basic actions in ASP.NET Core 2.0 preview 1

One of the brand new features added to ASP.NET Core in version 2.0 preview 1 is Razor Pages. This harks back to the (terribly named) ASP.NET Web Pages framework, which was a simpler, page-based, alternative framework to MVC. It was a completely different stack, and was sort of .NET's answer to PHP.

I know, that might make you shudder, but I've actually had a fair amount of success with it. It allowed our designers who had HTML knowledge but no C# to be productive, without having to deal with full MVC. Web developers might turn their nose up at that, but there's really no reason for a designer to go there, and would you want them to? A developer could just jump on for any dynamic bits of code that were beyond the designer, but otherwise you could leave them to it.

This post dips a toe into the justification of Razor Pages and when you might want to use it in your own ASP.NET Core applications.

ASP.NET Core 2.0 includes a new razor template. This shows the default MVC template completely converted to Razor Pages. Mike Brind has a great rundown of this template on his blog. In this post I'm looking at a hybrid approach - only converting the simplest pages to Razor Pages.

What are Razor Pages?

The new Razor Pages treads a line somewhere between ASP.NET Web Pages and full MVC. It's still a "page based" model, in that all the code for a single page lives in one file, and that file is used to describe the URL structure of the app.

This makes it very easy to reason about the behaviour of simple apps, and removes some of the ceremony of creating new pages. Instead of having to create a controller, create an action, create a model, and create a view, you can simply create the Razor Page instead.

Now granted, for cases where you're doing anything much more than displaying a View, MVC may well be the correct thing to use. And the great thing about Razor Pages is that it's built directly on top of the MVC stack. Essentially all of the primitives like model binding and validation are available to you in Razor Pages. Or even better, you could use Razor Pages for most of the app, and fallback to full MVC for the complex actions.

Putting it to the test - the MVC template

As an example, I thought I'd convert the default MVC template to use Razor pages for the simple actions. This is remarkably simple to do and highlights the advantages of using Razor Pages if you don't have much logic in your app.

The MVC template

The default MVC template is pretty simple - it consists of a HomeController with four actions:

  • Index()
  • About()
  • Contact()
  • Error()

The first three of these are really simple actions that optionally set some ViewData and return a ViewResult. These are prime candidates for converting to Razor Pages.

public class HomeController : Controller  
{
    public IActionResult Index()
    {
        return View();
    }

    public IActionResult About()
    {
        ViewData["Message"] = "Your application description page.";

        return View();
    }

    public IActionResult Contact()
    {
        ViewData["Message"] = "Your contact page.";

        return View();
    }

    public IActionResult Error()
    {
        return View(new ErrorViewModel 
        { 
            RequestId = Activity.Current?.Id ?? HttpContext.TraceIdentifier 
        });
    }
}

Each of the simple methods renders a .cshtml file in the Views folder of the project. These are perfect for us to convert to Razor Pages - they are basically just HTML files with some dynamic regions. For example, the About.cshtml file looks like this:

@{
    ViewData["Title"] = "About";
}
<h2>@ViewData["Title"].</h2>  
<h3>@ViewData["Message"]</h3>

<p>Use this area to provide additional information.</p>  

So let's convert these to Razor Pages.

The Error action is only slightly more complicated, and could easily be converted to a Razor Page too (as it is in the dotnet new razor template). To highlight the ability to mix the two however, I'm going to leave that as an MVC action.

The Razor Pages version

Converting these pages is super-simple, I'll take the About page as an example.

To turn the page into a Razor Page instead of a view, you simply need to add the @page directive to the top of the page, and move the file from Views/Home/About.cshtml to Pages/Home/About.cshtml.

Razor Pages are stored in the Pages folder by default.

Finally, we can move the small piece of logic, setting the message, from the action method to the page itself. We could store this in the ViewData["Message"], but there's not a lot of need in this case as it's not used in parent Layouts or anything, so we'll just write it in the markup.

@page

@{
    ViewData["Title"] = "About";
}
<h2>@ViewData["Title"].</h2>  
<h3>Your application description page.</h3>

<p>Use this area to provide additional information.</p>  

With that, we can delete the About action method, and give it a test!


Hmmm, that's not quite right - we appear to have lost the layout! The Views folder contains a _ViewStart.cshtml that defines the default Layout for all the views in its subfolders (unless overridden by a nested _ViewStart.cshtml or by the .cshtml file itself). Razor Pages uses the exact same concept of Layouts and partial views, so we can add a _ViewStart.cshtml to the Pages folder too:

@{
    Layout = "_Layout";
}

Note that we don't need to create separate layout files; we can still reference the ones in the Views/Shared folder - Views/Shared is searched by Razor Pages (as well as the Pages/Home folder in this case, and the Pages/Shared folder). With the Layout statement in place, we get our original About page back!


And that's pretty much that! We can easily convert the other pages in the same way, and we can still access them at the same URL paths: /Home/About, /Home/Contact and /Home/Index.

Limitations

I actually bent the truth slightly there - we can't quite use all the same URLs. With the full-MVC template and the default MVC routing template, {controller=Home}/{action=Index}/{id?}, the following URLs all route to the same MVC HomeController.Index action:

  • /Home/Index
  • /Home
  • /

Those first two work perfectly well with Razor Pages, in the same way as with MVC, thanks to Index.cshtml being considered the default document for the folder. The last one is a problem however - you can't get this result with Razor Pages alone. And that's not really surprising - a page-based model implies a single URL - which is essentially what we have.

As far as I can tell, if you want a single action to cover all these URLs you'll need to fall back to MVC. But then, at least you can, and in those cases, maybe you should! Alternatively, you could create two pages, one for / and one for /Home, and use partial views to avoid duplication of markup between them.

So where does that leave us? Well if we just look at the number of files, you can see that we actually have more now than before. The difference is that you can add a new URL handler/Page by just creating a single .cshtml file in the appropriate Pages folder, no model or controller files needed!


Whether this works for you will depend on the sort of application you're building. A simple, mostly-static app with some dynamic sections might be well suited to building a Razor Pages app, while a more fully-featured, complicated app most definitely would not!

The real winner here is the ability to combine Razor Pages with full MVC - you can start out building your application with Razor Pages, and if you find some complex pages you have a choice. You can either add the functionality using Razor Pages alone (as in Mike Brind's follow up post on handlers), or you can drop back to full MVC, whichever works best for you.

Summary

This post just scratches the surface of what you can do with Razor Pages - you can do far more complicated, MVC-like things if you like, but personally I feel like if you start doing that, you should just switch to proper MVC actions and encapsulate your logic properly. If you're interested, install the .NET Core 2.0 preview 1, and check out the docs!


Andrew Lock: Exploring Program.cs, Startup.cs and CreateDefaultBuilder in ASP.NET Core 2 preview 1

Exploring Program.cs, Startup.cs and CreateDefaultBuilder in ASP.NET Core 2 preview 1

One of the goals in ASP.NET Core 2.0 has been to clean up the basic templates, simplify the basic use-cases, and make it easier to get started with new projects.

This is evident in the new Program and Startup classes, which, on the face of it, are much simpler than their ASP.NET Core 1.0 counterparts. In this post, I'll take a look at the new WebHost.CreateDefaultBuilder() method, and see how it bootstraps your application.

Program and Startup responsibilities in ASP.NET Core 1.X

In ASP.NET Core 1.X, the Program class is used to setup the IWebHost. The default template for a web app looks something like this:

public class Program  
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}

This relatively compact file does a number of things:

  • Configures a web server (Kestrel)
  • Sets the content root (the directory containing the appsettings.json file etc.)
  • Sets up IIS integration
  • Defines the Startup class to use
  • Builds and runs the IWebHost

The Startup class varies considerably depending on the application you are building. The MVC template shown below is a fairly typical starter template:

public class Startup  
{
    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables();
        Configuration = builder.Build();
    }

    public IConfigurationRoot Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Add framework services.
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        loggerFactory.AddConsole(Configuration.GetSection("Logging"));
        loggerFactory.AddDebug();

        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
            app.UseBrowserLink();
        }
        else
        {
            app.UseExceptionHandler("/Home/Error");
        }

        app.UseStaticFiles();

        app.UseMvc(routes =>
        {
            routes.MapRoute(
                name: "default",
                template: "{controller=Home}/{action=Index}/{id?}");
        });
    }
}

This is a far more substantial file that has 4 main responsibilities:

  • Setup configuration in the Startup constructor
  • Setup dependency injection in ConfigureServices
  • Setup Logging in Configure
  • Setup the middleware pipeline in Configure

This all works pretty well, but there are a number of points that the ASP.NET team considered to be less than ideal.

First, setting up configuration is relatively verbose, but also pretty standard; it generally doesn't need to vary much either between applications, or as the application evolves.

Secondly, logging is set up in the Configure method of Startup, after configuration and DI have been configured. This has two drawbacks. First, it makes logging feel a little like a second-class citizen - Configure is generally used to set up the middleware pipeline, so having the logging config in there doesn't make a huge amount of sense. Second, it means you can't easily log the bootstrapping of the application itself. There are ways to do it, but it's not obvious.

In ASP.NET Core 2.0 preview 1, these two points have been addressed by modifying the IWebHost and by creating a helper method for setting up your apps.

Program and Startup responsibilities in ASP.NET Core 2.0 preview 1

In ASP.NET Core 2.0 preview 1, the responsibilities of the IWebHost have changed somewhat. As well as having the same responsibilities as before, the IWebHost has gained two more:

  • Setup configuration
  • Setup Logging

In addition, ASP.NET Core 2.0 introduces a helper method, CreateDefaultBuilder, that encapsulates most of the common code found in Program.cs, as well as taking care of configuration and logging!

public class Program  
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}

As you can see, there's no mention of Kestrel, IIS integration, configuration etc - that's all handled by the CreateDefaultBuilder method as you'll see in a sec.

Moving the configuration and logging code into this method also simplifies the Startup file:

public class Startup  
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }
        else
        {
            app.UseExceptionHandler("/Home/Error");
        }

        app.UseStaticFiles();

        app.UseMvc(routes =>
        {
            routes.MapRoute(
                name: "default",
                template: "{controller=Home}/{action=Index}/{id?}");
        });
    }
}

This class is pretty much identical to the 1.0 class with the logging and most of the configuration code removed. Notice too that the IConfiguration object is injected into the constructor and stored in a property on the class, instead of the configuration being built in the constructor itself.

This is new to ASP.NET Core 2.0 - the IConfiguration object is registered with DI by default. In 1.X you had to register the IConfigurationRoot yourself if you needed it to be available in DI.
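
For comparison, here's a minimal sketch of what you had to do yourself in 1.X if you wanted the configuration available through DI (assuming a Configuration property built in the Startup constructor, as in the earlier template):

public void ConfigureServices(IServiceCollection services)
{
    // 1.X only: expose the configuration root to the DI container yourself
    services.AddSingleton<IConfigurationRoot>(Configuration);

    services.AddMvc();
}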

My initial reaction to CreateDefaultBuilder was that it was just obfuscating the setup, and felt a bit like a step backwards, but in hindsight, that was more just a "who moved my cheese" reaction. There's nothing magical about CreateDefaultBuilder; it just hides a certain amount of standard, ceremonial code that would often go unchanged anyway.

The WebHost.CreateDefaultBuilder helper method

In order to properly understand the static CreateDefaultBuilder helper method, I decided to take a peek at the source code on GitHub! You'll be pleased to know, if you're used to ASP.NET Core 1.X, most of this will look remarkably familiar.

public static IWebHostBuilder CreateDefaultBuilder(string[] args)  
{
    var builder = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .ConfigureAppConfiguration((hostingContext, config) => { /* setup config */  })
        .ConfigureLogging((hostingContext, logging) =>  { /* setup logging */  })
        .UseIISIntegration()
        .UseDefaultServiceProvider((context, options) =>  { /* setup the DI container to use */  })
        .ConfigureServices(services => 
        {
            services.AddTransient<IConfigureOptions<KestrelServerOptions>, KestrelServerOptionsSetup>();
        });

    return builder;
}

There are a few new methods in there that I've elided for now, which I'll explore in follow-up posts. You can see that this method is largely doing the same work that Program did in ASP.NET Core 1.0 - it sets up Kestrel, defines the ContentRoot, and sets up IIS integration, just like before. Additionally, it does a number of other things:

  • ConfigureAppConfiguration - this contains the configuration code that used to live in the Startup constructor
  • ConfigureLogging - sets up the logging that used to live in Startup.Configure
  • UseDefaultServiceProvider - I'll go into this in a later post, but this sets up the built-in DI container, and lets you customise its behaviour (see the short sketch after this list)
  • ConfigureServices - adds additional services needed by components added to the IWebHost. In particular, it configures the Kestrel server options, which lets you easily define your web host setup as part of your normal config.
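
As a taster, here's a minimal sketch of the sort of thing you can do with UseDefaultServiceProvider - enabling scope validation in Development is a common example (the rest of the builder chain is elided for brevity):

var host = new WebHostBuilder()
    .UseKestrel()
    .UseDefaultServiceProvider((context, options) =>
    {
        // catch scoped services being resolved from the root container while developing
        options.ValidateScopes = context.HostingEnvironment.IsDevelopment();
    })
    .UseStartup<Startup>()
    .Build();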

I'll take a closer look at configuration and logging in this post, and dive into the other methods in a later post.

Setting up app configuration in ConfigureAppConfiguration

The ConfigureAppConfiguration method takes a lambda with two parameters - a WebHostBuilderContext called hostingContext, and an IConfigurationBuilder instance, config:

ConfigureAppConfiguration((hostingContext, config) =>  
{
    var env = hostingContext.HostingEnvironment;

    config.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true, reloadOnChange: true);

    if (env.IsDevelopment())
    {
        var appAssembly = Assembly.Load(new AssemblyName(env.ApplicationName));
        if (appAssembly != null)
        {
            config.AddUserSecrets(appAssembly, optional: true);
        }
    }

    config.AddEnvironmentVariables();

    if (args != null)
    {
        config.AddCommandLine(args);
    }
});

As you can see, the hostingContext parameter exposes the IHostingEnvironment (whether we're running in "Development" or "Production") as a property, HostingEnvironment. Apart from that, the bulk of the code should be pretty familiar if you've used ASP.NET Core 1.X.

The one exception to this is setting up User Secrets, which is done a little differently in ASP.NET Core 2.0. This uses an assembly reference to load the user secrets, though you can still use the generic config.AddUserSecrets<T> version in your own config.

In ASP.NET Core 2.0, the UserSecretsId is stored in an assembly attribute, hence the need for the Assembly code above. You can still define the id to use in your csproj file - it will be embedded in an assembly level attribute at compile time.
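
In other words, something like the following (the UserSecretsId value is just an illustrative placeholder; the attribute lives in Microsoft.Extensions.Configuration.UserSecrets):

// Generated at compile time from the <UserSecretsId> element in the csproj
[assembly: UserSecretsId("aspnet-MyCoolProj-12345678-ABCD")]

// In your own config code, the generic overload resolves the id from that attribute
var config = new ConfigurationBuilder()
    .AddUserSecrets<Startup>()
    .Build();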

This is all pretty standard stuff. It loads configuration from the following providers, in the following order:

  • appsettings.json (optional)
  • appsettings.{env.EnvironmentName}.json (optional)
  • User Secrets
  • Environment Variables
  • Command line arguments

The main difference between this method and the approach in ASP.NET Core 1.X is the location - config is now part of the WebHost itself, instead of sliding in through the backdoor so-to-speak by using the Startup constructor. Also, the initial creation and final call to Build() on the IConfigurationBuilder instance happens in the web host itself, instead of being handled by you.
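
The ordering matters because providers added later override values from earlier providers for the same key. A tiny, self-contained illustration (using in-memory providers to stand in for the real ones):

var config = new ConfigurationBuilder()
    .AddInMemoryCollection(new Dictionary<string, string> { ["MySetting"] = "from appsettings.json" })
    .AddInMemoryCollection(new Dictionary<string, string> { ["MySetting"] = "from environment variables" })
    .Build();

Console.WriteLine(config["MySetting"]); // "from environment variables" - the last provider wins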

Setting up logging in ConfigureLogging

The ConfigureLogging method also takes a lambda with two parameters - a WebHostBuilderContext called hostingContext, just like the configuration method, and a LoggerFactory instance, logging:

ConfigureLogging((hostingContext, logging) =>  
{
    logging.UseConfiguration(hostingContext.Configuration.GetSection("Logging"));
    logging.AddConsole();
    logging.AddDebug();
});

The logging infrastructure has changed a little in ASP.NET Core 2.0, but broadly speaking, this code echoes what you would find in the Configure method of an ASP.NET Core 1.0 app, setting up the Console and Debug log providers. You can use the UseConfiguration method to setup the log levels to use by accessing the already-defined IConfiguration, exposed on hostingContext.Configuration.

Customising your WebHostBuilder

Hopefully this dive into the WebHost.CreateDefaultBuilder helper helps show why the ASP.NET team decided to introduce it. There's a fair amount of ceremony in getting an app up and running, and this makes it far simpler.

But what if this isn't the setup you want? Well, then you don't have to use it! There's nothing special about the helper - you could copy and paste its code into your own app, customise it, and you're good to go.

That's not quite true - the KestrelServerOptionsSetup class referenced in ConfigureServices is currently internal, so you would have to remove this. I'll dive into what this does in a later post.

Summary

This post looked at some of the differences between Program.cs and Startup.cs in moving from ASP.NET Core 1.X to 2.0 preview 1. In particular, I took a slightly deeper look into the new WebHost.CreateDefaultBuilder method which aims to simplify the initial bootstrapping of your app. If you're not keen on the choices it makes for you, or you need to customise them, you can still do this, exactly as you did before. The choice is yours!


Andrew Lock: The .NET Core 2.0 Preview 1, version numbers and global.json

The .NET Core 2.0 Preview 1, version numbers and global.json

So as I'm sure most people reading this are aware, Microsoft announced ASP.NET Core and .NET Core 2.0 Preview 1 at Microsoft Build 2017 this week. I've been pretty busy with a variety of things, so I wasn't really planning on checking out most of the pieces for now. I've been keeping an eye on the community standups so I kind of knew what to expect. But after watching the video of Scott Hanselman and Dan Roth, and reading Steve Gordon's blog post, I couldn't resist!

If you haven't already, I really recommend you check out the various links above first - this post isn't really meant to serve as an introduction to ASP.NET Core; instead it focuses on installing the preview on your system while you continue to work on production ASP.NET Core 1.0/1.1 sites. In particular, it looks at the different version numbers associated with .NET Core.

Shortly after writing this, I realised Scott Hanselman had just written a very similar post, check it out!

Installing the .NET Core 2.0 Preview 1 SDK

The first step is obviously installing the .NET Core 2.0 Preview 1 SDK from https://www.microsoft.com/net/core/preview#windowscmd. This is pretty painless, no matrix of options, just a "Download now" button. And there's just one version number!


One interesting point is that .NET Core now also includes ASP.NET Core. That should mean smaller packages when deploying your applications, which is nice! I'll be exploring this at some point soon.

It's also worth noting that if you want to create ASP.NET Core 2.0 applications in Visual Studio, then you'll need to install the preview version of Visual Studio 2017. This should install side-by-side with the stable version, but I decided to just stick to the SDK for now while I'm just playing with it.

.NET Core version numbers

I pointed out just now that there's finally only one version number for the various .NET Core parts - 2.0 preview 1 - but that's not entirely true.

There are two different aspects to a .NET Core install: the version number of the SDK/Tools/CLI; and the version number of the .NET Core runtime (or .NET Core Shared Framework Host).

If you've just installed 2.0 preview 1, then when you run dotnet --info you should see something like the following:

$ dotnet --info
.NET Command Line Tools (2.0.0-preview1-005977)

Product Information:  
 Version:            2.0.0-preview1-005977
 Commit SHA-1 hash:  414cab8a0b

Runtime Environment:  
 OS Name:     Windows
 OS Version:  10.0.14393
 OS Platform: Windows
 RID:         win10-x64
 Base Path:   C:\Program Files\dotnet\sdk\2.0.0-preview1-005977\

Microsoft .NET Core Shared Framework Host

  Version  : 2.0.0-preview1-002111-00
  Build    : 1ff021936263d492539399688f46fd3827169983

This gives you a whole bunch of different values, but there are two different versions listed here:

  • 2.0.0-preview1-005977 - the CLI version
  • 2.0.0-preview1-002111-00 - the runtime version

But these version numbers are slightly misleading. I also have version 1.0 of the .NET Core tools installed on the machine, as well as versions 1.1.1 and 1.0.4 of the .NET Core runtime.

Understanding multiple .NET Core runtime versions

One of the selling points of .NET Core is being able to install multiple versions of the .NET Core runtime side-by-side, without affecting one another. This is in contrast to the way the .NET Framework works as a central install - you can't install versions 4.5, 4.6 and 4.7 of the .NET Framework side-by-side; 4.7 will replace the previous versions.

You can see which versions of the .NET Core runtime are installed by browsing to C:\Program Files\dotnet\shared\Microsoft.NETCore.App. As you can see, I have 3 versions available on my machine:


So the question is, how do you know which runtime will be used when you run an application?

Well, you specify it in your app's .csproj file!

For example, in a .NET Core 1.1 project, you would set the <TargetFramework> (or <TargetFrameworks> if you're targeting more than one) to netcoreapp1.1:

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

</Project>

This will use the .NET Core version 1.1.1 that I have installed when it runs.

If you had set <TargetFramework> to netcoreapp1.0 then the version 1.0.4 that I have installed would be used.

Which brings us to the version 2.0 preview 1 .csproj:

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
    <UserSecretsId>aspnet-v2test-32450BD7-D635-411A-A507-53B20874D210</UserSecretsId>
  </PropertyGroup>

  <ItemGroup>
    <Folder Include="wwwroot\" />
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0-preview1-final" />
  </ItemGroup>

</Project>  

As before, the csproj file specifies that the <TargetFramework> should be netcoreapp2.0, so the newest version of the 2.0 runtime on my machine will be used - 2.0.0-preview1-002111-00.

Understanding the SDK versions

Hopefully that clears up the .NET Core runtime versions, but there's still the issue of the SDK/CLI version. What is that used for?

If you navigate to C:\Program Files\dotnet\sdk, you can see the SDK versions installed on your system. I've got two versions installed on my machine: 1.0.0 and 2.0.0-preview1-005977.


It's a bit of an over-simplification, but you can think of the SDK/CLI as providing all of the "build" related commands, dotnet new, dotnet build, dotnet publish etc.

Generally speaking, any SDK version higher than the one used to create a project can be used to dotnet build and dotnet publish it. So you can use the 2.0 SDK to build projects created with the 1.0 SDK.

So in that case, you can just use the newer SDK all the time, right? Well, mostly. If you're still building project.json-based projects then you need to stick with the older, project.json-based tooling (e.g. the preview 2 SDK).

The current version of the SDK also becomes apparent when you call dotnet new - if you are using the 2.0 Preview 1 SDK, you will get a 2.0-based application, if you're using the 1.0 SDK you'll get a 1.1 based application!

The question is how do you control which version of the SDK is used?

Choosing an SDK version with global.json

The global.json file has a very simple schema that simply defines which version of the SDK to use:

{
  "sdk": {
    "version": "1.0.0"
  }
}

Back in the day, the global.json was also used to define the source code folders for a "solution", but that functionality was removed with the 1.0.0 SDK.

When you run dotnet new or dotnet build, the dotnet host looks in the current folder, and all parent folders up to the drive's root, for a global.json file. If it can't find one, it will just use the newest version of the SDK - in my case 2.0.0-preview1-005977.
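
Roughly speaking, the probing works something like the following sketch (this is just an illustration of the behaviour, not the actual dotnet host code):

// Walk up from the current directory looking for a global.json;
// if none is found, the newest installed SDK is used.
static string FindGlobalJson(string startDirectory)
{
    var dir = new DirectoryInfo(startDirectory);
    while (dir != null)
    {
        var candidate = Path.Combine(dir.FullName, "global.json");
        if (File.Exists(candidate))
        {
            return candidate;
        }
        dir = dir.Parent;
    }
    return null;
}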

If a global.json exists (and the SDK version it references exists!) then that version will be used for all SDK commands - which is basically all dotnet commands other than dotnet run.

Personally, I've put the above global.json in my Repos folder, so any existing projects will continue to use the 1.0.0 SDK, as will any new projects I create. I've then created a subfolder called netcore20 and added the following global.json. I can then use this folder whenever I'm playing with the ASP.NET Core 2.0 preview bits without risking any issues!

{
  "sdk": {
    "version": "2.0.0-preview1-005977"
  }
}

Summary

Versioning has been an issue throughout the recent history of .NET Core. Aligning all the versions going forward will definitely simplify things and hopefully cause less confusion, but it's still a good idea to try and understand the difference between runtime and SDK versions. I hope this post has helped clear some of that up!


Anuraj Parameswaran: What is new in ASP.NET Core 2.0 Preview

This post is about new features of ASP.NET Core 2.0 Preview. Microsoft announced ASP.NET Core 2.0 Preview 1 at Build 2017. This post will introduce some ASP.NET Core 2.0 features.


Damien Bowden: Anti-Forgery Validation with ASP.NET Core MVC and Angular

This article shows how API requests from an Angular SPA inside an ASP.NET Core MVC application can be protected against XSRF by adding an anti-forgery cookie. This is required, if using Angular, when using cookies to persist the auth token.

Code: https://github.com/damienbod/AspNetCoreMvcAngular

Blogs in this Series

Cross Site Request Forgery

XSRF is an attack where a hacker makes malicious requests to a web app, when the user of the website is already authenticated. This can happen when a website uses cookies to persist the token of a trusted, authenticated user. A pure SPA should not use cookies for this, as it is hard to protect against these attacks. With a server side rendered application, like ASP.NET Core MVC, anti-forgery cookies can be used to protect against this, which makes it safer to use cookies.

Angular automatically adds the X-XSRF-TOKEN HTTP header with the anti-forgery cookie value for each request if the XSRF-TOKEN cookie is present. ASP.NET Core needs to know that it must use this header to validate the request. This can be configured in the ConfigureServices method in the Startup class.

public void ConfigureServices(IServiceCollection services)
{
	...
	services.AddAntiforgery(options => options.HeaderName = "X-XSRF-TOKEN");
	services.AddMvc();
}

The XSRF-TOKEN cookie is added to the response of the HTTP request. The cookie is a secure cookie so this is only sent with HTTPS and not HTTP. All HTTP (Not HTTPS) requests will fail and return a 400 response. The cookie is created and added each time a new server url is called, but not for an API call.

app.Use(async (context, next) =>
{
	string path = context.Request.Path.Value;
	if (path != null && !path.ToLower().Contains("/api"))
	{
		// XSRF-TOKEN used by angular in the $http if provided
		var tokens = antiforgery.GetAndStoreTokens(context);
		context.Response.Cookies.Append("XSRF-TOKEN", 
		  tokens.RequestToken, new CookieOptions { 
		    HttpOnly = false, 
		    Secure = true 
		  }
		);
	}

	...

	await next();
});

The API uses the ValidateAntiForgeryToken attribute to check if the request contains the correct value for the XSRF-TOKEN cookie. If this is incorrect, or not sent, the request is rejected with a 400 response. The attribute is required when data is changed. HTTP GET requests should not require this attribute.

[HttpPut]
[ValidateAntiForgeryToken]
[Route("{id:int}")]
public IActionResult Update(int id, [FromBody]Thing thing)
{
	...

	return Ok(updatedThing);
}

You can check the cookies in the Chrome browser.

Or in Firefox using Firebug (Cookies Tab).

Links:

https://docs.microsoft.com/en-us/aspnet/core/security/anti-request-forgery

http://www.fiyazhasan.me/angularjs-anti-forgery-with-asp-net-core/

http://www.dotnetcurry.com/aspnet/1343/aspnet-core-csrf-antiforgery-token

http://stackoverflow.com/questions/43312973/how-to-implement-x-xsrf-token-with-angular2-app-and-net-core-app/43313402

https://en.wikipedia.org/wiki/Cross-site_request_forgery

https://stormpath.com/blog/angular-xsrf



Andrew Lock: Using ImageSharp to resize images in ASP.NET Core - Part 3: caching

Using ImageSharp to resize images in ASP.NET Core - Part 3: caching

In my previous post I updated my comparison between ImageSharp and CoreCompat.System.Drawing. Instead of loading an arbitrary file from a URL using the HttpClient, the path to a file in the wwwroot folder was provided as part of the URL path.

This approach falls more in line with use cases you've likely run into many times - an image is uploaded at a particular size and you need to dynamically resize it to arbitrary dimensions. I've been using ImageSharp to do this - a new image library that runs on netstandard1.1 and is written entirely in managed code, so is fully cross-platform.

The code from the previous post fulfils this role, allowing you to arbitrarily resize images in your website. The biggest problem with the code as-is is how long it takes to process an image - large images could take multiple seconds to be loaded, processed, and served, as you can see below.


This post shows how to add IDistributedCache to the implementation to quickly improve the response time for serving resized images.

IDistributedCache and MemoryDistributedCache

The most obvious option here is to cache the resized image after it's first processed, and just serve the resized image from cache from subsequent requests. ASP.NET Core includes a general caching infrastructure in the form of the IDistributedCache interface. It's used by various parts of the framework, whenever caching is needed.

The IDistributedCache interface provides a number of methods related to saving and loading byte arrays by a string key, the pertinent ones of which are shown below:

 public interface IDistributedCache
{
    Task<byte[]> GetAsync(string key);
    Task SetAsync(string key, byte[] value, DistributedCacheEntryOptions options);
}

In addition, there are extension methods that provide simplified versions of the Set and SetAsync methods, without the DistributedCacheEntryOptions, for example:

public static void Set(this IDistributedCache cache, string key, byte[] value)  
{
    cache.Set(key, value, new DistributedCacheEntryOptions());
}

There are a number of different implementations of IDistributedCache that you can use to store data in Redis, SqlServer and other stores, but by default, ASP.NET Core registers an in-memory cache, MemoryDistributedCache, which uses the IMemoryCache under the hood. This essentially caches data in a dictionary in-process. Normally, you would want to replace this with an actually distributed cache, but for our purposes, it should do the job nicely.
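
If you wanted to make that registration explicit (or you're in a context where it isn't already registered), a minimal sketch in ConfigureServices looks like this:

public void ConfigureServices(IServiceCollection services)
{
    // Registers MemoryDistributedCache as the IDistributedCache implementation
    services.AddDistributedMemoryCache();

    services.AddMvc();
}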

Loading data from an IDistributedCache

The first step to adding caching, is to decide on a key to use. For our case, that's quite simple - we can combine the requested path with the requested image dimensions. We can then try and load the image from the cache. If we get a hit, we can use the cached byte[] data to create a FileResult directly, instead of having to load and resize the image again:

public class HomeController : Controller  
{
    private readonly IFileProvider _fileProvider;
    private readonly IDistributedCache _cache;

    public HomeController(IHostingEnvironment env, IDistributedCache cache)
    {
        _fileProvider = env.WebRootFileProvider;
        _cache = cache;
    }

    [Route("/image/{width}/{height}/{*url}")]
    public async Task<IActionResult> ResizeImage(string url, int width, int height)
    {
        if (width < 0 || height < 0) { return BadRequest(); }

        var key = $"/{width}/{height}/{url}";
        var data = await _cache.GetAsync(key);
        if (data == null)
        {
           // resize image and cache it
        }

        return File(data, "image/jpg");
    }
}

All that remains is to add the set-cache code. This code is very similar to the previous post, the only difference being that we need to create a byte[] of data to cache, instead of passing a Stream to the FileResult, as we did in the previous post.

Saving resized images in an IDistributedCache

Saving data to the IDistributedCache is very simple - you simply provide the string key and the data as byte[]. We'll reuse most of the code from the previous post here - checking the requested image exists, reading it into memory, resizing it and saving it to an output stream. The only difference is that we call ToArray() on the MemoryStream to get a byte[] we can store in cache.

var imagePath = PathString.FromUriComponent("/" + url);  
var fileInfo = _fileProvider.GetFileInfo(imagePath);  
if (!fileInfo.Exists) { return NotFound(); }

using (var outputStream = new MemoryStream())  
{
    using (var inputStream = fileInfo.CreateReadStream())
    using (var image = Image.Load(inputStream))
    {
        image
            .Resize(width, height)
            .SaveAsJpeg(outputStream);
    }

    data = outputStream.ToArray();
}
await _cache.SetAsync(key, data);  

And that's it, we're done - let's take it for a spin.

Testing it out

The first time we request an image, the cache is empty, so we still have to check the image exists, load it up, resize it, and store it in the cache. This is the same process as we had before, so the first request for a resized image is always going to be slow:


If we reload the page however, you can see that our subsequent requests are much better - we're down from 2+ seconds to 10ms in some cases!


This is clearly a vast improvement, and suddenly makes the approach of resizing on-the-fly a viable option. If we want, we can add some logging to our method to confirm that we are in fact pulling the data from the cache:


Protecting against DOS attacks

While we have a working, cached version of our resizing action, there is still one particular aspect we haven't covered that was raised by Bertrand Le Roy in the last post. With the current implementation, you can resize to arbitrary dimensions, which opens the app up to Denial of Service (DOS) attacks.

A malicious user could use significant server resources by requesting multiple different sizes for a resized image, e.g.

  • 640×480 - /640/480/images/clouds.jpg
  • 640×481 - /640/481/images/clouds.jpg
  • 640×482 - /640/482/images/clouds.jpg
  • 641×480 - /641/480/images/clouds.jpg
  • 642×480 - /642/480/images/clouds.jpg
  • etc.

With the current design, each of those requests would trigger an expensive resize operation, as well as caching the result in the IDistributedCache. Hopefully it's clear that could end up being a problem - your server ends up using a significant amount of CPU resizing the images, and a large amount of memory caching every slight variation.

There are a number of ways you could get round this, all centred around limiting the number of "acceptable" image sizes, for example:

  • Only allow n specific, fixed sizes, e.g. 640×480, 960×720 etc.
  • Only allow n specific dimensions e.g 640, 720 and 960, but allow any combination of these e.g. 640×640, 640×720 etc.
  • Only allow you to specify the dimension in one direction, e.g. height=640 or width=640, and automatically scale the other dimension to keep the correct aspect ratio

Limiting the number of supported sizes like this means you also need to decide what to do if an unsupported size is requested. The easiest solution is to just return a 404 NotFound, but that's not necessarily the most user-friendly.

An alternative approach is to always return the smallest supported size that is larger than the requested size. For example, if we only support 640×480, 960×720, then:

  • if 640×480 is requested, return 640×480
  • if 480×320 is requested, return 640×480
  • if 720×540 is requested, return 960×720
  • if 1280×960 is requested, return ?

We still have a question of what to return for the last point, which requests a size larger than our largest supported size, but you would probably just return the biggest size you can here.

Exactly which approach you choose is obviously up to you. As an example, I've updated the ResizeImage action method to ensure that either width or height is always 0, to preserve the image's aspect ratio. The SanitizeSize method is shown afterwards:

[Route("/image/{width}/{height}/{*url}")]
public async Task<IActionResult> ResizeImage(string url, int width, int height)  
{
    if (width < 0 || height < 0) { return BadRequest(); }
    if (width == 0 && height == 0) { return BadRequest(); }

    if(height == 0)
    {
        width = SanitizeSize(width);
    }
    else
    {
        width = 0;
        height = SanitizeSize(height);
    }

    // remainder of method
}

For the SanitizeSize method, I've chosen to have 3 fixed sizes, where the smallest size larger than the requested size is used, or the largest size (1280) if you request larger than this.

private static int[] SupportedSizes = { 480, 960, 1280};

private int SanitizeSize(int value)  
{
    if (value >= 1280) { return 1280; }
    return SupportedSizes.First(size => size >= value);
}

With this in place, you can only request 6 different sizes for each image - 480, 960, 1280 width or 480, 960, 1280 height. The other dimension will have whatever value preserves the aspect ratio.

This provides you with simple protection from DOS attacks. It does, however, raise the question of whether it is worth doing this work in a web request at all. If you only have fixed supported sizes, then resizing the images at compile time and saving as files might make more sense. That way you avoid all the overhead of resizing images at runtime. Anyway, I digress, this covers you from DOS attacks anyway!

Working with other cache implementations

In this example, I simply used one of the most basic caching options available. However, as this code depends on the general IDistributedCache interface, we can easily extend it. If we wanted to, we could replace the MemoryDistributedCache implementation with a RedisCache, which would allow a whole cluster of web servers to only resize an image once, instead of having to resize it once per-server.
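
As a sketch (assuming the Microsoft.Extensions.Caching.Redis package and a Redis instance reachable at localhost:6379 - adjust for your own setup), the swap is just a different registration in ConfigureServices:

public void ConfigureServices(IServiceCollection services)
{
    services.AddDistributedRedisCache(options =>
    {
        options.Configuration = "localhost:6379"; // assumed connection string
        options.InstanceName = "imagecache:";     // optional key prefix
    });

    services.AddMvc();
}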

Adding caching to your code really can be as simple as this example, especially when you have immutable data as I do. I don't need to worry about images getting stale - I'm assuming you're not going to be adding images to your wwwroot folder when the server is in production - so caching is pretty simple. Obviously, as soon you have to worry about cache invalidation, things get way more complicated.
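
If staleness ever did become a concern, you could pass DistributedCacheEntryOptions when storing the data - a minimal sketch (the expiry value is arbitrary):

// Evict entries that haven't been requested for an hour
await _cache.SetAsync(key, data, new DistributedCacheEntryOptions
{
    SlidingExpiration = TimeSpan.FromHours(1)
});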

An alternative approach

This caching approach works, but there's one thing that slightly bugs me. Even though we're not resizing the image on every request, we're still serving the whole data every time. Notice how in the last screenshot every response is identical - a 200 response and 51.1KB of data?

Response caching is a standard approach to getting around this issue - instead of returning the whole data, the server sends some cache headers to the browser with the original data. On subsequent requests, the server can check the headers sent back by the browser, and if nothing's changed, can send a 304 response telling the browser to just use the data it has cached.
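
Just to illustrate the idea (this is a hand-rolled sketch, not the approach I'll cover in the next post), you could derive an ETag from the cached bytes and short-circuit when the browser already has them:

// Hypothetical sketch: hash the cached data into an ETag and honour If-None-Match
string etag;
using (var md5 = System.Security.Cryptography.MD5.Create())
{
    etag = "\"" + Convert.ToBase64String(md5.ComputeHash(data)) + "\"";
}

if (Request.Headers["If-None-Match"] == etag)
{
    return StatusCode(StatusCodes.Status304NotModified); // browser already has this version
}

Response.Headers["ETag"] = etag;
return File(data, "image/jpg");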

Now, we could add this functionality to our existing method, but in my next post I'll look at an alternative approach to caching which lets us achieve this without having to write the code ourselves.

Summary

Adding caching to a method is a very simple way to speed up long-running requests. In this case, we used the IDistributedCache to avoid having to resize an image every time it is requested. Instead, the first time a resized image is requested, we store the byte[] data in the cache with a unique key. Subsequent requests for the resized image can just reload the cached data.


Anuraj Parameswaran: Implementing localization in ASP.NET Core

This post is about implementing localization in ASP.NET Core. Localization is the process of adapting a globalized app, which you have already processed for localizability, to a particular culture/locale. Localization is one of the best practices in web application development.


Anuraj Parameswaran: Hardware assisted virtualization and data execution protection must be enabled in the BIOS

This post is about fixing the error "Hardware assisted virtualization and data execution protection must be enabled in the BIOS", which is displayed by Docker when running on Windows 10. Today while running Docker, it threw an error like this.


Damien Bowden: Secure ASP.NET Core MVC with Angular using IdentityServer4 OpenID Connect Hybrid Flow

This article shows how an ASP.NET Core MVC application using Angular in the razor views can be secured using IdentityServer4 and the OpenID Connect Hybrid Flow. The user interface uses server side rendering for the MVC views and the Angular app is then implemented in the razor view. The required security features can be added to the application easily using ASP.NET Core, which makes it safe to use the OpenID Connect Hybrid flow, which once authenticated and authorised, saves the token in a secure cookie. This is not an SPA application, it is an ASP.NET Core MVC application with Angular in the razor view. If you are implementing an SPA application, you should use the OpenID Connect Implicit Flow.

Code: https://github.com/damienbod/AspNetCoreMvcAngular

Blogs in this Series

IdentityServer4 configuration for OpenID Connect Hybrid Flow

IdentityServer4 is implemented using ASP.NET Core Identity with SQLite. The application implements the OpenID Connect Hybrid flow. The client is configured to allow the required scopes - for example, the 'openid' scope must be added - and the RedirectUris property must match the redirect URL configured on the client by the ASP.NET Core OpenID Connect middleware.

using IdentityServer4;
using IdentityServer4.Models;
using System.Collections.Generic;

namespace QuickstartIdentityServer
{
    public class Config
    {
        public static IEnumerable<IdentityResource> GetIdentityResources()
        {
            return new List<IdentityResource>
            {
                new IdentityResources.OpenId(),
                new IdentityResources.Profile(),
                new IdentityResources.Email(),
                new IdentityResource("thingsscope",new []{ "role", "admin", "user", "thingsapi" } )
            };
        }

        public static IEnumerable<ApiResource> GetApiResources()
        {
            return new List<ApiResource>
            {
                new ApiResource("thingsscope")
                {
                    ApiSecrets =
                    {
                        new Secret("thingsscopeSecret".Sha256())
                    },
                    Scopes =
                    {
                        new Scope
                        {
                            Name = "thingsscope",
                            DisplayName = "Scope for the thingsscope ApiResource"
                        }
                    },
                    UserClaims = { "role", "admin", "user", "thingsapi" }
                }
            };
        }

        // clients want to access resources (aka scopes)
        public static IEnumerable<Client> GetClients()
        {
            // client credentials client
            return new List<Client>
            {
                new Client
                {
                    ClientName = "angularmvcmixedclient",
                    ClientId = "angularmvcmixedclient",
                    ClientSecrets = {new Secret("thingsscopeSecret".Sha256()) },
                    AllowedGrantTypes = GrantTypes.Hybrid,
                    AllowOfflineAccess = true,
                    RedirectUris = { "https://localhost:44341/signin-oidc" },
                    PostLogoutRedirectUris = { "https://localhost:44341/signout-callback-oidc" },
                    AllowedCorsOrigins = new List<string>
                    {
                        "https://localhost:44341/"
                    },
                    AllowedScopes = new List<string>
                    {
                        IdentityServerConstants.StandardScopes.OpenId,
                        IdentityServerConstants.StandardScopes.Profile,
                        IdentityServerConstants.StandardScopes.OfflineAccess,
                        "thingsscope",
                        "role"

                    }
                }
            };
        }
    }
}

MVC Angular Client Configuration

The ASP.NET Core MVC application with Angular is implemented as shown in this post: Using Angular in an ASP.NET Core View with Webpack

The cookie authentication middleware is used to store the access token in a cookie, once authorised and authenticated. The OpenIdConnectAuthentication middleware is used to redirect the user to the STS server, if the user is not authenticated. The SaveTokens property is set, so that the token is persisted in the secure cookie.

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
	AuthenticationScheme = "Cookies"
});

app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions
{
	AuthenticationScheme = "oidc",
	SignInScheme = "Cookies",

	Authority = "https://localhost:44348",
	RequireHttpsMetadata = true,

	ClientId = "angularmvcmixedclient",
	ClientSecret = "thingsscopeSecret",

	ResponseType = "code id_token",
	Scope = { "openid", "profile", "thingsscope" },

	GetClaimsFromUserInfoEndpoint = true,
	SaveTokens = true
});

The Authorize attribute is used to secure the MVC controller or API.

using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Authorization;

namespace AspNetCoreMvcAngular.Controllers
{
    [Authorize]
    public class HomeController : Microsoft.AspNetCore.Mvc.Controller
    {
        public IActionResult Index()
        {
            return View();
        }

        public IActionResult Error()
        {
            return View();
        }
    }
}

CSP: Content Security Policy in the HTTP Headers

Content Security Policy helps you reduce XSS risks. The really brilliant NWebSec middleware can be used to implement this as required. Thanks to André N. Klingsheim for this excellent library. The middleware adds the headers to the HTTP responses.

https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP

In this configuration, mixed content is not allowed and unsafe inline styles are allowed.

app.UseCsp(opts => opts
	.BlockAllMixedContent()
	.ScriptSources(s => s.Self()).ScriptSources(s => s.UnsafeEval())
	.StyleSources(s => s.UnsafeInline())
);

Set the Referrer-Policy in the HTTP Header

This allows us to restrict the amount of information being passed on to other sites when referring to other sites.

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy

Scott Helme wrote a really good post on this:
https://scotthelme.co.uk/a-new-security-header-referrer-policy/

Again NWebSec middleware is used to implement this.

           
app.UseReferrerPolicy(opts => opts.NoReferrer());

Redirect Validation

You can secure the application so that only redirects to your own sites are allowed. For example, only a redirect to IdentityServer4 is allowed.

// Register this earlier if there's middleware that might redirect.
// The IdentityServer4 port needs to be added here. 
// If the IdentityServer4 runs on a different server, this configuration needs to be changed.
app.UseRedirectValidation(t => t.AllowSameHostRedirectsToHttps(44348)); 

Secure Cookies

Only secure cookies should be used to store the session information.

You can verify this in the Chrome browser developer tools.
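As a minimal sketch (assuming the same ASP.NET Core 1.x cookie middleware used above; these option values are illustrative, not taken from the post's source), the authentication cookie can be forced to be secure and HTTP-only:

using Microsoft.AspNetCore.Http;

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
	AuthenticationScheme = "Cookies",
	CookieHttpOnly = true,                   // not readable from JavaScript
	CookieSecure = CookieSecurePolicy.Always // only ever sent over HTTPS
});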

XFO: X-Frame-Options

The X-Frame-Options header can be used to prevent the application from being loaded in an iframe. This helps protect against clickjacking.

https://developer.mozilla.org/de/docs/Web/HTTP/Headers/X-Frame-Options

app.UseXfo(xfo => xfo.Deny());

Configuring HSTS: Http Strict Transport Security

This HTTP header tells the browser to force HTTPS for a configured length of time.

app.UseHsts(hsts => hsts.MaxAge(365).IncludeSubdomains());

TOFU (Trust on first use) or first time loading.

Once you have a proper certificate and a fixed URL, you can have browsers preload the HSTS settings for your website.

https://hstspreload.org/

https://www.owasp.org/index.php/HTTP_Strict_Transport_Security_Cheat_Sheet
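As a sketch, and assuming the NWebSec HSTS options used here expose a Preload() method (check the NWebSec documentation for your version), the preload directive can be added to the same registration:

// Only add Preload() once the site meets the preload requirements and you intend
// to submit it to hstspreload.org - removal from the preload lists is slow.
app.UseHsts(hsts => hsts.MaxAge(365).IncludeSubdomains().Preload());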

X-Xss-Protection NWebSec

Adds middleware to the ASP.NET Core pipeline that sets the X-Xss-Protection header (docs from NWebSec).

app.UseXXssProtection(options => options.EnabledWithBlockMode());

CORS

Only the required CORS origins should be allowed when implementing this. Disable CORS as much as possible.
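As a rough sketch (the policy name and origin below are placeholders, not values from the post), a restrictive CORS setup in ASP.NET Core allows only known origins instead of calling AllowAnyOrigin():

public void ConfigureServices(IServiceCollection services)
{
	services.AddCors(options =>
	{
		options.AddPolicy("AllowKnownOrigins", policy =>
			policy.WithOrigins("https://localhost:44341") // only the known client origin
				.AllowAnyHeader()
				.AllowAnyMethod());
	});

	services.AddMvc();
}

The policy is then applied with app.UseCors("AllowKnownOrigins"), or per controller using the [EnableCors("AllowKnownOrigins")] attribute.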

Cross Site Request Forgery XSRF

See this blog:
Anti-Forgery Validation with ASP.NET Core MVC and Angular

Validating the security Headers

Once you start the application, you can check in the browser developer tools that all of the security headers are added as required.

Here’s the Configure method with all the NWebsec app settings as well as the authentication middleware for the client MVC application.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory, IAntiforgery antiforgery)
{
	loggerFactory.AddConsole(Configuration.GetSection("Logging"));
	loggerFactory.AddDebug();
	loggerFactory.AddSerilog();

	//Registered before static files to always set header
	app.UseHsts(hsts => hsts.MaxAge(365).IncludeSubdomains());
	app.UseXContentTypeOptions();
	app.UseReferrerPolicy(opts => opts.NoReferrer());

	app.UseCsp(opts => opts
		.BlockAllMixedContent()
		.ScriptSources(s => s.Self()).ScriptSources(s => s.UnsafeEval())
		.StyleSources(s => s.UnsafeInline())
	);

	JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();

	if (env.IsDevelopment())
	{
		app.UseDeveloperExceptionPage();
	}
	else
	{
		app.UseExceptionHandler("/Home/Error");
	}

	app.UseCookieAuthentication(new CookieAuthenticationOptions
	{
		AuthenticationScheme = "Cookies"
	});

	app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions
	{
		AuthenticationScheme = "oidc",
		SignInScheme = "Cookies",

		Authority = "https://localhost:44348",
		RequireHttpsMetadata = true,

		ClientId = "angularmvcmixedclient",
		ClientSecret = "thingsscopeSecret",

		ResponseType = "code id_token",
		Scope = { "openid", "profile", "thingsscope" },

		GetClaimsFromUserInfoEndpoint = true,
		SaveTokens = true
	});

	var angularRoutes = new[] {
		 "/default",
		 "/about"
	 };

	app.Use(async (context, next) =>
	{
		string path = context.Request.Path.Value;
		if (path != null && !path.ToLower().Contains("/api"))
		{
			// XSRF-TOKEN used by angular in the $http if provided
			  var tokens = antiforgery.GetAndStoreTokens(context);
			context.Response.Cookies.Append("XSRF-TOKEN", tokens.RequestToken, new CookieOptions { HttpOnly = false, Secure = true });
		}

		if (context.Request.Path.HasValue && null != angularRoutes.FirstOrDefault(
			(ar) => context.Request.Path.Value.StartsWith(ar, StringComparison.OrdinalIgnoreCase)))
		{
			context.Request.Path = new PathString("/");
		}

		await next();
	});

	app.UseDefaultFiles();
	app.UseStaticFiles();

	if (env.IsDevelopment())
	{
		app.UseDeveloperExceptionPage();
	}
	else
	{
		app.UseExceptionHandler("/Home/Error");
	}

	app.UseStaticFiles();

	//Registered after static files, to set headers for dynamic content.
	app.UseXfo(xfo => xfo.Deny());

	// Register this earlier if there's middleware that might redirect.
	// The IdentityServer4 port needs to be added here. 
	// If the IdentityServer4 runs on a different server, this configuration needs to be changed.
	app.UseRedirectValidation(t => t.AllowSameHostRedirectsToHttps(44348)); 

	app.UseXXssProtection(options => options.EnabledWithBlockMode());

	app.UseMvc(routes =>
	{
		routes.MapRoute(
			name: "default",
			template: "{controller=Home}/{action=Index}/{id?}");
	});  
}

Links:

https://www.scottbrady91.com/OpenID-Connect/OpenID-Connect-Flows

https://docs.nwebsec.com/en/latest/index.html

https://www.nwebsec.com/

https://github.com/NWebsec/NWebsec

https://content-security-policy.com/

https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy

https://scotthelme.co.uk/a-new-security-header-referrer-policy/

https://developer.mozilla.org/de/docs/Web/HTTP/Headers/X-Frame-Options

https://www.owasp.org/index.php/HTTP_Strict_Transport_Security_Cheat_Sheet

https://gun.io/blog/tofu-web-security/

https://en.wikipedia.org/wiki/Trust_on_first_use

http://www.dotnetnoob.com/2013/07/ramping-up-aspnet-session-security.html

http://openid.net/specs/openid-connect-core-1_0.html

https://www.ssllabs.com/



Dominick Baier: Financial APIs and IdentityServer

Right now there is quite some movement in the financial sector towards APIs and “collaboration” scenarios. The OpenID Foundation started a dedicated working group on securing Financial APIs (FAPIs) and the upcoming Revised Payment Service EU Directive (PSD2 – official document, vendor-based article) will bring quite some change to how technology is used at banks as well as to banking itself.

Googling for PSD2 shows quite a lot of ads and sponsored search results, which tells me that there is money to be made (pun intended).

We have a couple of customers that asked me about FAPIs and how IdentityServer can help them in this new world. In short, the answer is that both FAPIs in the OIDF sense and PSD2 are based on tokens and are either inspired by OpenID Connect/OAuth 2 or even tightly coupled with them. So moving to these technologies is definitely the first step.

The purpose of the OIDF “Financial API Part 1: Read-only API security profile” is to select a subset of the possible OpenID Connect options for clients and providers that have suitable security for the financial sector. Let’s have a look at some of those for OIDC providers (edited):

  • shall support both public and confidential clients;
  • shall authenticate the confidential client at the Token Endpoint using one of the following methods:
    • TLS mutual authentication [TLSM];
    • JWS Client Assertion using the client_secret or a private key as specified in section 9 of [OIDC];
  • shall require a key of size 2048 bits or larger if RSA algorithms are used for the client authentication;
  • shall require a key of size 160 bits or larger if elliptic curve algorithms are used for the client authentication;
  • shall support PKCE [RFC7636]
  • shall require Redirect URIs to be pre-registered;
  • shall require the redirect_uri parameter in the authorization request;
  • shall require the value of redirect_uri to exactly match one of the pre-registered redirect URIs;
  • shall require user authentication at LoA 2 as defined in [X.1254] or more;
  • shall require explicit consent by the user to authorize the requested scope if it has not been previously authorized;
  • shall return the token response as defined in 4.1.4 of [RFC6749];
  • shall return the list of allowed scopes with the issued access token;
  • shall provide opaque non-guessable access tokens with a minimum of 128 bits as defined in section 5.1.4.2.2 of [RFC6819].
  • should provide a mechanism for the end-user to revoke access tokens and refresh tokens granted to a Client as in 16.18 of [OIDC].
  • shall support the authentication request as in Section 3.1.2.1 of [OIDC];
  • shall issue an ID Token in the token response when openid was included in the requested scope as in Section 3.1.3.3 of [OIDC] with its sub value corresponding to the authenticated user and optional acr value in ID Token.

So to summarize, these are mostly best practices for implementing OIDC and OAuth 2 – just formalized. I am sure there will also be a certification process around that at some point.

Interesting to note is the requirement for PKCE and the removal of plain client secrets in favour of mutual TLS and client JWT assertions. IdentityServer supports all of the above requirements.
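For illustration only (the client id, redirect URI and scopes below are made up and do not come from the post or the profile text), a client definition in IdentityServer4 that enforces PKCE and exact pre-registered redirect URIs could look something like this:

new Client
{
    ClientId = "fapi-read-client",                 // hypothetical client
    AllowedGrantTypes = GrantTypes.Code,           // authorization code flow
    RequirePkce = true,                            // PKCE is mandatory
    RequireClientSecret = false,                   // no plain text client secret for this public client
    RedirectUris = { "https://client.example.com/callback" }, // pre-registered, matched exactly
    AllowedScopes = { "openid", "profile" }
};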

In contrast, the “Read and Write Profile” (currently a working draft) steps up security significantly by demanding proof of possession tokens via token binding, requiring signed authentication requests and encrypted identity tokens, and limiting the authentication flow to hybrid only. The current list from the draft:

  • shall require the request or request_uri parameter to be passed as a JWS signed JWT as in clause 6 of OIDC;
  • shall require the response_type values code id_token or code id_token token;
  • shall return ID Token as a detached signature to the authorization response;
  • shall include state hash, s_hash, in the ID Token to protect the state value;
  • shall only issue holder of key authorization code, access token, and refresh token for write operations;
  • shall support OAUTB or MTLS as a holder of key mechanism;
  • shall support user authentication at LoA 3 or greater as defined in X.1254;
  • shall support signed and encrypted ID Tokens

Both profiles also have increased security requirements for clients – which is the subject of a future post.

In short – exciting times ahead and we are constantly improving IdentityServer to make it ready for these new scenarios. Feel free to get in touch if you are interested.


Filed under: .NET Security, ASP.NET Core, IdentityServer, OAuth, OpenID Connect, Uncategorized, WebAPI


Damien Bowden: Using Angular in an ASP.NET Core View with Webpack

This article shows how Angular can be run inside an ASP.NET Core MVC view using Webpack to build the Angular application. By using Webpack, the Angular application can be built using the AOT and Angular lazy loading features and also profit from the advantages of using a server-side rendered view. If you prefer to separate the SPA and the server into 2 applications, use the Angular CLI or a similar template.

Code: https://github.com/damienbod/AspNetCoreMvcAngular

Blogs in this Series

The application was created using the ASP.NET Core Web Application (.NET Core) template in Visual Studio 2017. A package.json npm file was added to the project. The file contains the frontend build scripts as well as the npm packages required to build the application using Webpack and also the Angular packages.

{
  "name": "angular-webpack-visualstudio",
  "version": "1.0.0",
  "description": "An Angular VS template",
  "author": "",
  "license": "ISC",
    "repository": {
    "type": "git",
    "url": "https://github.com/damienbod/Angular2WebpackVisualStudio.git"
  },
  "scripts": {
    "ngc": "ngc -p ./tsconfig-aot.json",
    "webpack-dev": "set NODE_ENV=development && webpack",
    "webpack-production": "set NODE_ENV=production && webpack",
    "build-dev": "npm run webpack-dev",
    "build-production": "npm run ngc && npm run webpack-production",
    "watch-webpack-dev": "set NODE_ENV=development && webpack --watch --color",
    "watch-webpack-production": "npm run build-production --watch --color",
    "publish-for-iis": "npm run build-production && dotnet publish -c Release",
    "test": "karma start"
  },
  "dependencies": {
    "@angular/common": "4.1.0",
    "@angular/compiler": "4.1.0",
    "@angular/compiler-cli": "4.1.0",
    "@angular/platform-server": "4.1.0",
    "@angular/core": "4.1.0",
    "@angular/forms": "4.1.0",
    "@angular/http": "4.1.0",
    "@angular/platform-browser": "4.1.0",
    "@angular/platform-browser-dynamic": "4.1.0",
    "@angular/router": "4.1.0",
    "@angular/upgrade": "4.1.0",
    "@angular/animations": "4.1.0",
    "angular-in-memory-web-api": "0.3.1",
    "core-js": "2.4.1",
    "reflect-metadata": "0.1.10",
    "rxjs": "5.3.0",
    "zone.js": "0.8.8",
    "bootstrap": "^3.3.7",
    "ie-shim": "~0.1.0"
  },
  "devDependencies": {
    "@types/node": "7.0.13",
    "@types/jasmine": "2.5.47",
    "angular2-template-loader": "0.6.2",
    "angular-router-loader": "^0.6.0",
    "awesome-typescript-loader": "3.1.2",
    "clean-webpack-plugin": "^0.1.16",
    "concurrently": "^3.4.0",
    "copy-webpack-plugin": "^4.0.1",
    "css-loader": "^0.28.0",
    "file-loader": "^0.11.1",
    "html-webpack-plugin": "^2.28.0",
    "jquery": "^3.2.1",
    "json-loader": "^0.5.4",
    "node-sass": "^4.5.2",
    "raw-loader": "^0.5.1",
    "rimraf": "^2.6.1",
    "sass-loader": "^6.0.3",
    "source-map-loader": "^0.2.1",
    "style-loader": "^0.16.1",
    "ts-helpers": "^1.1.2",
    "tslint": "^5.1.0",
    "tslint-loader": "^3.5.2",
    "typescript": "2.3.2",
    "url-loader": "^0.5.8",
    "webpack": "^2.4.1",
    "webpack-dev-server": "2.4.2",
    "jasmine-core": "2.5.2",
    "karma": "1.6.0",
    "karma-chrome-launcher": "2.0.0",
    "karma-jasmine": "1.1.0",
    "karma-sourcemap-loader": "0.3.7",
    "karma-spec-reporter": "0.0.31",
    "karma-webpack": "2.0.3"
  },
  "-vs-binding": {
    "ProjectOpened": [
      "watch-webpack-dev"
    ]
  }
}

The Angular application is added to the angularApp folder. This frontend app implements a default module and also a second about module, which is lazy loaded when required (when the About button is clicked). See Angular Lazy Loading with Webpack 2 for further details.

The _Layout.cshtml MVC view is also added here as a template. This is used by the Webpack build to produce the final layout in the MVC application's Views folder.

The webpack.prod.js uses all the Angular project files and builds them into pre-compiled AOT bundles, and also a separate bundle for the about module which is lazy loaded. Webpack adds the built bundles to the _Layout.cshtml template and copies this to the Views/Shared/_Layout.cshtml file.

var path = require('path');

var webpack = require('webpack');

var HtmlWebpackPlugin = require('html-webpack-plugin');
var CopyWebpackPlugin = require('copy-webpack-plugin');
var CleanWebpackPlugin = require('clean-webpack-plugin');
var helpers = require('./webpack.helpers');

console.log('@@@@@@@@@ USING PRODUCTION @@@@@@@@@@@@@@@');

module.exports = {

    entry: {
        'vendor': './angularApp/vendor.ts',
        'polyfills': './angularApp/polyfills.ts',
        'app': './angularApp/main-aot.ts' // AoT compilation
    },

    output: {
        path: __dirname + '/wwwroot/',
        filename: 'dist/[name].[hash].bundle.js',
        chunkFilename: 'dist/[id].[hash].chunk.js',
        publicPath: ''
    },

    resolve: {
        extensions: ['.ts', '.js', '.json', '.css', '.scss', '.html']
    },

    devServer: {
        historyApiFallback: true,
        stats: 'minimal',
        outputPath: path.join(__dirname, 'wwwroot/')
    },

    module: {
        rules: [
            {
                test: /\.ts$/,
                loaders: [
                    'awesome-typescript-loader',
                    'angular-router-loader?aot=true&genDir=aot/'
                ]
            },
            {
                test: /\.(png|jpg|gif|woff|woff2|ttf|svg|eot)$/,
                loader: 'file-loader?name=assets/[name]-[hash:6].[ext]'
            },
            {
                test: /favicon.ico$/,
                loader: 'file-loader?name=/[name].[ext]'
            },
            {
                test: /\.css$/,
                loader: 'style-loader!css-loader'
            },
            {
                test: /\.scss$/,
                exclude: /node_modules/,
                loaders: ['style-loader', 'css-loader', 'sass-loader']
            },
            {
                test: /\.html$/,
                loader: 'raw-loader'
            }
        ],
        exprContextCritical: false
    },

    plugins: [
        new CleanWebpackPlugin(
            [
                './wwwroot/dist',
                './wwwroot/assets'
            ]
        ),
        new webpack.NoEmitOnErrorsPlugin(),
        new webpack.optimize.UglifyJsPlugin({
            compress: {
                warnings: false
            },
            output: {
                comments: false
            },
            sourceMap: false
        }),
        new webpack.optimize.CommonsChunkPlugin(
            {
                name: ['vendor', 'polyfills']
            }),

        new HtmlWebpackPlugin({
            filename: '../Views/Shared/_Layout.cshtml',
            inject: 'body',
            template: 'angularApp/_Layout.cshtml'
        }),

        new CopyWebpackPlugin([
            { from: './angularApp/images/*.*', to: 'assets/', flatten: true }
        ])
    ]
};

The Startup.cs is configured to load the configuration and middleware for the application, using client or server routing as required.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using AspNetCoreMvcAngular.Repositories.Things;
using Microsoft.AspNetCore.Http;

namespace AspNetCoreMvcAngular
{
    public class Startup
    {
        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        public IConfigurationRoot Configuration { get; }

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddCors(options =>
            {
                options.AddPolicy("AllowAllOrigins",
                    builder =>
                    {
                        builder
                            .AllowAnyOrigin()
                            .AllowAnyHeader()
                            .AllowAnyMethod();
                    });
            });

            services.AddSingleton<IThingsRepository, ThingsRepository>();

            services.AddMvc();
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole(Configuration.GetSection("Logging"));
            loggerFactory.AddDebug();

            var angularRoutes = new[] {
                 "/default",
                 "/about"
             };

            app.Use(async (context, next) =>
            {
                if (context.Request.Path.HasValue && null != angularRoutes.FirstOrDefault(
                    (ar) => context.Request.Path.Value.StartsWith(ar, StringComparison.OrdinalIgnoreCase)))
                {
                    context.Request.Path = new PathString("/");
                }

                await next();
            });

            app.UseCors("AllowAllOrigins");

            app.UseDefaultFiles();
            app.UseStaticFiles();

            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
                app.UseBrowserLink();
            }
            else
            {
                app.UseExceptionHandler("/Home/Error");
            }

            app.UseStaticFiles();

            app.UseMvc(routes =>
            {
                routes.MapRoute(
                    name: "default",
                    template: "{controller=Home}/{action=Index}/{id?}");
            });
        }
    }
}

The application can be built and run using the command line. The client application needs to be built before you can deploy or run!

> npm install
> npm run build-production
> dotnet restore
> dotnet run

You can also build inside Visual Studio 2017 using the Task Runner Explorer. If building inside Visual Studio 2017, you need to configure the NodeJS path correctly to use the right version.

Now you have the best of both worlds in the UI.

Note:
You could also use Microsoft ASP.NET Core JavaScript Services, which supports server-side pre-rendering but not client-side lazy loading. If you're using Microsoft ASP.NET Core JavaScript Services, configure the application to use AOT builds for the Angular template.

Links:

Angular Templates, Seeds, Starter Kits

https://github.com/damienbod/AngularWebpackVisualStudio

https://damienbod.com/2016/06/12/asp-net-core-angular2-with-webpack-and-visual-studio/

https://github.com/aspnet/JavaScriptServices



Andrew Lock: Using ImageSharp to resize images in ASP.NET Core - Part 2

Using ImageSharp to resize images in ASP.NET Core - Part 2

In my last post, I showed a way to crop and resize an image downloaded from a URL, using the ImageSharp library. This was in response to a post in which Dmitry Sikorsky used the CoreCompat.System.Drawing library to achieve the same thing. The purpose of that post was simply to compare the two libraries from an API perspective, and to demonstrate ImageSharp in the application.

The code in that post was not meant to be used in production, and contained a number of issues such as not disposing objects correctly, creating a fresh HttpClient for every request, and happily downloading files from any old URL!

In this post I'll show a few tweaks to make the code from the last post a little more production worthy. In particular, rather than downloading a file from any URL provided in the querystring, we'll load the file from the web root folder on disk, if the file exists.

Add the NuGet.config file

As mentioned in my previous post, ImageSharp is currently only published on MyGet, not NuGet, so you'll need to add a NuGet.config file to ensure you can restore the ImageSharp library.

<?xml version="1.0" encoding="utf-8"?>  
<configuration>  
  <packageSources>
    <add key="ImageSharp Nightly" value="https://www.myget.org/F/imagesharp/api/v3/index.json" />
  </packageSources>
</configuration>  

Once you've added the config file, you can add the ImageSharp library to the project.

Using a catch-all route parameter

The first step I wanted to take was to move the URL of the target image from a querystring parameter into part of the URL path. As a part of this, we'll switch from allowing absolute URLs to relative paths.

Previously, you would pass the URL to resize something like the following:

/image?url=https://assets-cdn.github.com/images/modules/logos_page/GitHub-Mark.png&width=200&height=100

Instead, our updated route will look something like the following, where the path to the image to resize is images/clouds.jpg:

/resize/200/100/images/clouds.jpg

All that's required is to introduce a [Route] attribute with appropriate parameters for the dimensions and a catch-all parameter, for example:

[Route("/resize/{width}/{height}/{*url}")]
public IActionResult ResizeImage(string url, int width, int height)  
{
    /* Method implementation */
}

This gives us URLs that are easier to read and reason about (plus it will give us another benefit, as you'll see in the next post).

Validating the requested file exists

Before we try and load the file from disk, we first need to make sure that a valid file has been requested. To do so, we'll use the FileInfo class and the IFileProvider interface.

public class HomeController : Controller  
{
    private readonly IFileProvider _fileProvider;

    [Route("/image/{width}/{height}/{*url}")]
    public IActionResult ResizeImage(string url,  int width, int height)
    {
        if (width < 0 || height < 0) 
        { 
            return BadRequest(); 
        }

        var imagePath = PathString.FromUriComponent("/" + url);
        var fileInfo = _fileProvider.GetFileInfo(imagePath);
        if (!fileInfo.Exists) { return NotFound(); }

        /* Load image, resize and return */
    }
}

First, we perform some simple parameter validation to make sure the requested dimensions aren't less than zero, and if that fails, we return a 400 result.

I'm going to treat the value 0 as a special case for now - if you pass zero in either width or height then we'll ignore that value, and use the original image's dimension.

Assuming the width and height are valid, we try and get the FileInfo using the injected IFileProvider, and if it deems the file doesn't exist, we return a 404.

So the first question is, where does the implementation of IFileProvider come from?

WebRootFileProvider vs ContentRootFileProvider

The IHostingEnvironment exposes two IFileProviders:

  • ContentRootFileProvider
  • WebRootFileProvider

These file providers allow serving files from the ContentRootPath and WebRootPath respectively. By default, the ContentRootPath points to the root of the project folder, while WebRootPath points to the wwwroot folder.


For this example, we only want to serve files from the wwwroot folder - serving files from anywhere else would be a security risk - so we use the WebRootFileProvider property, by accessing it from an IHostingEnvironment injected into the constructor:

public HomeController(IHostingEnvironment env)  
{
    _fileProvider = env.WebRootFileProvider;
}

Resizing the image

Once we have validated that the file exists, we can continue with the rest of the action method. This part is very similar to the previous post, just tweaked a little using suggestions from James South. We use the FileInfo object to obtain a Stream for the file we want to resize, and load it into memory.

Once we have loaded the image, we can resize it. For this example, we'll just use the values provided in the URL, and we'll always save the image as a jpeg, so we can use the SaveAsJpeg extension method:

[Route("/image/{width}/{height}/{*url}")]
public IActionResult ResizeImage(string url, int width, int height)  
{
    if (width < 0 || height < 0 ) { return BadRequest(); }

    var imagePath = PathString.FromUriComponent("/" + url);
    var fileInfo = _fileProvider.GetFileInfo(imagePath);
    if (!fileInfo.Exists) { return NotFound(); }

    var outputStream = new MemoryStream();
    using (var inputStream = fileInfo.CreateReadStream())
    using (var image = Image.Load(inputStream))
    {
        image
            .Resize(width, height)
            .SaveAsJpeg(outputStream);
    }

    outputStream.Seek(0, SeekOrigin.Begin);

    return File(outputStream, "image/jpeg");
}

Note, if you pass 0 for either width or height, by default ImageSharp will preserve the original aspect ratio when resizing.

With this revised action method, we have an action closer to something we'd actually use in practice.

There are still some aspects that we would likely want to improve before we use this in production. In particular, we would likely want some sort of caching of the final output, so we are not doing an expensive resize operation with every request. I'll look at fixing this in a follow up post.

Summary

This post showed a revised version of the "crop and resize" action method from my previous post. In this post, I stopped loading the image with HttpClient and instead required that it already be located in the web app in the wwwroot folder. The file was loaded using the IHostingEnvironment.WebRootFileProvider property, and finally resized in a more fluent way, and ensuring we dispose the underlying arrays.


Andrew Lock: Using ImageSharp to resize images in ASP.NET Core - a comparison with CoreCompat.System.Drawing

Using ImageSharp to resize images in ASP.NET Core - a comparison with CoreCompat.System.Drawing

Currently, one of the significant missing features in .NET Core and .NET Standard is the System.Drawing APIs that you can use, among other things, for server-side image processing in ASP.NET Core. Bertrand Le Roy gave a great run down of the various alternatives available in Jan 2017, each with different pros and cons.

I was reading a post by Dmitry Sikorsky yesterday describing how to use one of these libraries, the CoreCompat.System.Drawing package, to resize an image in ASP.NET Core. This package is designed to mimic the existing System.Drawing APIs (it's a .NET Core port, of the Mono port, of System.Drawing!) so if you need a drop in replacement for System.Drawing then it's a good place to start.

I'm going to need to start doing some image processing soon, so I wanted to take a look at how the code for working with CoreCompat.System.Drawing would compare to using the ImageSharp package. This is a brand new library that is designed from the ground up to be cross-platform by using only managed-code. This means it will probably not be as performant as libraries that use OS-specific features, but on the plus side, it is completely cross platform.

For the purposes of this comparison, I'm going to start with the code presented by Dmitry in his post and convert it to use ImageSharp.

The sample app

This post is based on the code from Dmitry's post, so it uses the same sample app. This contains a single controller, the ImageController, which you can use to crop and resize an image from a given URL.

For example, a request might look like the following:

/image?url=https://assets-cdn.github.com/images/modules/logos_page/GitHub-Mark.png&sourcex=120&sourcey=100&sourcewidth=360&sourceheight=360&destinationwidth=100&destinationheight=100

This will download the GitHub logo from https://assets-cdn.github.com/images/modules/logos_page/GitHub-Mark.png.


It will then crop it using the rectangle specified by sourcex=120&sourcey=100&sourcewidth=360&sourceheight=360, and resize the output to 100×100. Finally, it will render the result in the response as a jpeg.


This is the same functionality Dmitry described; I will just convert his code to use ImageSharp instead.

Installing ImageSharp

The first step is to add the ImageSharp package to your project. Currently, this is not quite as smooth as it will be, as it is not yet published on NuGet, but only to a MyGet feed. This is only a temporary situation while the code-base stabilises - it will be published to NuGet at that point - but at the moment it is a bit of a barrier to adding it to your project.

Note, ImageSharp actually is published on NuGet, but that package is currently just a placeholder for when the package is eventually published. Don't use it!

To install the package from the MyGet feed, add a NuGet.config file to your solution folder, specifying the location of the feed:

<?xml version="1.0" encoding="utf-8"?>  
<configuration>  
  <packageSources>
    <add key="ImageSharp Nightly" value="https://www.myget.org/F/imagesharp/api/v3/index.json" />
  </packageSources>
</configuration>  

You can now add the ImageSharp package to your csproj file, and run a restore. I specified the version 1.0.0-* to fetch the latest version from the feed (1.0.0-alpha7 in my case).

<PackageReference Include="ImageSharp" Version="1.0.0-*" />  

When you run dotnet restore you should see that the CLI has used the ImageSharp MyGet feed, where it lists the config files used:

$ dotnet restore
  Restoring packages for C:\Users\Sock\Repos\andrewlock\AspNetCoreImageResizingService\AspNetCoreImageResizingService\AspNetCoreImageResizingService.csproj...
  Installing ImageSharp 1.0.0-alpha7-00006.
  Generating MSBuild file C:\Users\Sock\Repos\andrewlock\AspNetCoreImageResizingService\AspNetCoreImageResizingService\obj\AspNetCoreImageResizingService.csproj.nuget.g.props.
  Writing lock file to disk. Path: C:\Users\Sock\Repos\andrewlock\AspNetCoreImageResizingService\AspNetCoreImageResizingService\obj\project.assets.json
  Restore completed in 2.76 sec for C:\Users\Sock\Repos\andrewlock\AspNetCoreImageResizingService\AspNetCoreImageResizingService\AspNetCoreImageResizingService.csproj.

  NuGet Config files used:
      C:\Users\Sock\Repos\andrewlock\AspNetCoreImageResizingService\NuGet.Config
      C:\Users\Sock\AppData\Roaming\NuGet\NuGet.Config
      C:\Program Files (x86)\NuGet\Config\Microsoft.VisualStudio.Offline.config

  Feeds used:
      https://www.myget.org/F/imagesharp/api/v3/index.json
      https://api.nuget.org/v3/index.json
      C:\Program Files (x86)\Microsoft SDKs\NuGetPackages\

  Installed:
      1 package(s) to C:\Users\Sock\Repos\andrewlock\AspNetCoreImageResizingService\AspNetCoreImageResizingService\AspNetCoreImageResizingService.csproj

Adding the NuGet.config file is a bit of a pain, but it's a step that will hopefully go away soon, when the package makes its way onto NuGet.org. On the plus side, you only need to add a single package to your project for this example.

In contrast, to add the CoreCompat.System.Drawing packages you have to include three different packages when writing cross-platform code - the library itself and the run time components for both Linux and OS X:

<PackageReference Include="CoreCompat.System.Drawing" Version="1.0.0-beta006" />  
<PackageReference Include="runtime.linux-x64.CoreCompat.System.Drawing" Version="1.0.0-beta009" />  
<PackageReference Include="runtime.osx.10.10-x64.CoreCompat.System.Drawing" Version="1.0.1-beta004" />  

Obviously, if you are running on only a single platform, then this probably won't be an issue for you, but it's something to take into consideration.

Loading an image from a stream

Now that the library is installed, we can start converting the code. The first step in the app is to download the image provided in the URL.

Note that this code is very much sample only - downloading files sent to you in query arguments is probably not advisable, plus you should probably be using a static HttpClient, disposing correctly etc!
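For example, the download could be wrapped in a small helper that reuses a single HttpClient (a sketch of the general pattern only; the class and method names here are made up, not from either post):

using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

public static class ImageDownloader
{
    // Reuse one HttpClient for the application lifetime instead of creating one per request.
    private static readonly HttpClient Client = new HttpClient();

    public static async Task<Stream> GetImageStreamAsync(string url)
    {
        var response = await Client.GetAsync(url);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStreamAsync();
    }
}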

For the CoreCompat.System.Drawing library, the code doing the work reads the stream into a temporary Bitmap, which is then copied into the Image variable.

Image image = null;  
HttpClient httpClient = new HttpClient();  
HttpResponseMessage response = await httpClient.GetAsync(url);  
Stream inputStream = await response.Content.ReadAsStreamAsync();

using (Bitmap temp = new Bitmap(inputStream))  
{
    image = new Bitmap(temp);
}

While for ImageSharp we have the following:

Image image = null;  
HttpClient httpClient = new HttpClient();  
HttpResponseMessage response = await httpClient.GetAsync(url);  
Stream inputStream = await response.Content.ReadAsStreamAsync();

image = Image.Load(inputStream);  

Obviously the HttpClient code is identical here, but there is less faffing required to actually read an image from the response stream. The ImageSharp API is much more intuitive - I have to admit I always have to refresh my memory on how the System.Drawing Image and Bitmap classes interact! Definitely a win to ImageSharp I think.

It's worth noting that the Image classes in these two examples are completely different types, in different namespaces, so are not interoperable in general.

Cropping and Resizing an image

Once the image is in memory, the next step is to crop and resize it to create our output image. The CropImage function for the CoreCompat.System.Drawing is as follows:

private Image CropImage(Image sourceImage, int sourceX, int sourceY, int sourceWidth, int sourceHeight, int destinationWidth, int destinationHeight)  
{
  Image destinationImage = new Bitmap(destinationWidth, destinationHeight);
  Graphics g = Graphics.FromImage(destinationImage);

  g.DrawImage(
    sourceImage,
    new Rectangle(0, 0, destinationWidth, destinationHeight),
    new Rectangle(sourceX, sourceY, sourceWidth, sourceHeight),
    GraphicsUnit.Pixel
  );

  return destinationImage;
}

This code creates the destination image first, generates a Graphics object to allow manipulating the content, and then draws a region from the first image onto the second, resizing as it does so.

This does the job, but it's not exactly simple to follow - if I hadn't told you, would you have spotted that the image is being resized as well as cropped? Maybe, given we set the destinationImage size, but possibly not if you were just looking at the DrawImage function.

In contrast, the ImageSharp version of this method would look something like the following:

private Image<Rgba32> CropImage(Image sourceImage, int sourceX, int sourceY, int sourceWidth, int sourceHeight, int destinationWidth, int destinationHeight)  
{
    return sourceImage
        .Crop(new Rectangle(sourceX, sourceY, sourceWidth, sourceHeight))
        .Resize(destinationWidth, destinationHeight);
}

I think you'd agree, this is much easier to understand! Instead of using a mapping from one coordinate system to another, handling both the crop and resize in one operation, it has two well-named methods that are easy to understand.

One slight quirk in the ImageSharp version is that this method returns an Image<Rgba32> when we gave it an Image. The definition for this Image object is:

public sealed class Image : Image<Rgba32> { }  

so the Image is-an Image<Rgba32>. This isn't a big issue; I guess it would just be nice if you were working with the Image class to get back an Image from the manipulation functions. I still count this as a win for ImageSharp.

Saving the image to a stream

The final part of the app is to save the cropped image to the response stream and return it to the browser.

The CoreCompat.System.Drawing version of saving the image to a stream looks like the following. We first download the image, crop it and then save it to a MemoryStream. This stream can then be used to return a file result to the browser (check the example source code or Dmitry's post for details).

Image sourceImage = await this.LoadImageFromUrl(url);

Image destinationImage = this.CropImage(sourceImage, sourceX, sourceY, sourceWidth, sourceHeight, destinationWidth, destinationHeight);  
Stream outputStream = new MemoryStream();

destinationImage.Save(outputStream, ImageFormat.Jpeg);  

The ImageSharp equivalent is very similar. It just involves changing the type of the destination image to be Image<Rgba32> (as mentioned in the previous section), and updating the last line, in which we save the image to a stream.

Image sourceImage = await this.LoadImageFromUrl(url);

Image<Rgba32> destinationImage = this.CropImage(sourceImage, sourceX, sourceY, sourceWidth, sourceHeight, destinationWidth, destinationHeight);  
Stream outputStream = new MemoryStream();

destinationImage.Save(outputStream, new JpegEncoder());  

Instead of using an Enum to specify the output formatting, you pass an instance of an IImageEncoder, in this case the JpegEncoder. This approach is more extensible, though it is slightly less discoverable than the System.Drawing approach.

Note, there are many different overloads to Image<T>.Save() that you can use to specify all sorts of different encoding options etc.

Wrapping up

And that's it. Everything you need to convert from CoreCompat.System.Drawing to ImageSharp. Personally, I really like how ImageSharp is shaping up - it has a nice API, is fully managed cross-platform and even targets .NET Standard 1.1 - no mean feat! It may not currently hit the performance of other libraries that rely on native code, but with all the improvements and progress around Span<T>, it may be able to come close to parity down the line.

If you're interested in the project, do check it out on GitHub and consider contributing - it will be great to get the project to an RTM state.

Thanks are due to James South for creating the ImageSharp project, and also to Dmitry Sikorsky for inspiring me to write this post! You can find the source code for his project on GitHub here, and the source for my version here.


Anuraj Parameswaran: Post requests from Azure Logic apps

This post is about sending a POST request to services from Azure Logic Apps. Logic Apps provide a way to simplify and implement scalable integrations and workflows in the cloud. They provide a visual designer to model and automate your process as a series of steps known as a workflow. There are many connectors across the cloud and on-premises to quickly integrate across services and protocols. A logic app begins with a trigger (like ‘When an account is added to Dynamics CRM’) and, after firing, can run many combinations of actions, conversions, and condition logic.


Andrew Lock: Creating a basic Web API template using dotnet new custom templates

Creating a basic Web API template using dotnet new custom templates

In my last post, I showed a simple, stripped-down version of the Web API template with the Razor dependencies removed.

As an excuse to play with the new CLI templating functionality, I decided to turn this template into a dotnet new template.

For details on this new capability, check out the announcement blog post, or the excellent series by Muhammed Rehan Saeed. In brief, the .NET CLI includes functionality to let you create your own templates using dotnet new, which can be distributed as zip files, installed from the source code project folder or as NuGet packages.

The Basic Web API template

I decided to wrap the basic web API template I created in my last post so that you can easily use it to create your own Web API projects without Razor templates.

To do so, I followed Muhammed Rehan Saeed's blog posts (and borrowed heavily from his example template!) to create a version of the basic Web API template you can install from NuGet.

This template creates a very stripped-down version of the web API project, with the Razor functionality removed. If you are looking for a more fully-featured template, I recommend checking out the ASP.NET MVC Boilerplate project.

If you have installed Visual Studio 2017, you can use the .NET CLI to install new templates and use them to create projects:

  1. Run dotnet new --install "NetEscapades.Templates::*" to install the project template
  2. Run dotnet new basicwebapi --help to see how to select the various features to include in the project
  3. Run dotnet new basicwebapi --name "MyTemplate" along with any other custom options to create a project from the template.

This will create a new basic Web API project in the current folder.

Options and feature selection

One of the great features in the .NET CLI templates is the ability to do feature selection. This lets you add or remove features from the template at the time it is generated.

I added a number of options to the template (again, heavily inspired by the ASP.NET Boilerplate project). This lets you add features that will be commonly included in a Web API project, such as CORS, DataAnnotations, and the ApiExplorer.

You can view these options by running dotnet new basicwebapi --help:

$dotnet new basicwebapi --help
Template Instantiation Commands for .NET Core CLI.

Usage: dotnet new [arguments] [options]

Arguments:  
  template  The template to instantiate.

Options:  
  -l|--list         List templates containing the specified name.
  -lang|--language  Specifies the language of the template to create
  -n|--name         The name for the output being created. If no name is specified, the name of the current directory is used.
  -o|--output       Location to place the generated output.
  -h|--help         Displays help for this command.
  -all|--show-all   Shows all templates


Basic ASP.NET Core Web API (C#)  
Author: Andrew Lock  
Options:

  -A|--ApiExplorer                 The ApiExplorer functionality allows you to expose metadata about your API endpoints. You can use it to generate documentation about your application. Enabling this option will add the ApiExplorer libraries and services to your project.
                                   bool - Optional
                                   Default: false

  -C|--Controller                  If true, this will generate an example ValuesController in your project.
                                   bool - Optional
                                   Default: false

  -D|--DataAnnotations             DataAnnotations provide declarative metadata and validations for models in ASP.NET Core.
                                   bool - Optional
                                   Default: false

  -CO|--CORS                       Browser security prevents a web page from making AJAX requests to another domain. This restriction is called the same-origin policy, and prevents a malicious site from reading sensitive data from another site. CORS is a W3C standard that allows a server to relax the same-origin policy. Using CORS, a server can explicitly allow some cross-origin requests while rejecting others.
                                   bool - Optional
                                   Default: true

  -T|--Title                       The name of the project which determines the assembly product name. If the Swagger feature is enabled, shows the title on the Swagger UI.
                                   string - Optional
                                   Default: BasicWebApi

 -De|--Description                A description of the project which determines the assembly description. If the Swagger feature is enabled, shows the description on the Swagger UI.

                                   string - Optional
                                   Default: BasicWebApi

  -Au|--Author                     The name of the author of the project which determines the assembly author, company and copyright information.

                                   string - Optional
                                   Default: Project Author

  -F|--Framework                   Decide which version of the .NET Framework to target.

                                       .NET Core         - Run cross platform (on Windows, Mac and Linux). The framework is made up of NuGet packages which can be shipped with the application so it is fully stand-alone.

                                       .NET Framework    - Gives you access to the full breadth of libraries available in .NET instead of the subset available in .NET Core but requires it to be pre-installed.

                                       Both              - Target both .NET Core and .NET Framework.

                                   Default: Both

  -I|--IncludeApplicationInsights  Whether or not to include Application Insights in the project
                                   bool - Optional
                                   Default: false

You can invoke the template with any or all of these options, for example:

$dotnet new basicwebapi --Controller false --DataAnnotations true -Au "Andrew Lock"
Content generation time: 762.9798 ms  
The template "Basic ASP.NET Core Web API" created successfully.  

Source code for the template

If you're interested to see the source for the template, you can view it on GitHub. There you will find an example of the template.json file that describes the template, as well as a full CI build using Cake and AppVeyor to automatically publish the NuGet templates.

If you have any suggestions, bugs or comments, then do let me know on the GitHub!


Andrew Lock: Removing the MVC Razor dependencies from the Web API template in ASP.NET Core

Removing the MVC Razor dependencies from the Web API template in ASP.NET Core

In this article I'll show how to add the minimal required dependencies to create an ASP.NET Core Web API project, without including the additional MVC/Razor functionality and packages.

Note: I have since created a custom dotnet new template for the code described in this post, plus a few extra features. You can read about it here, or view it on GitHub and NuGet.

MVC vs Web API

In the previous version of ASP.NET, the MVC and Web API stacks were completely separate. Even though there were many similar concepts shared between them, the actual types were distinct. This was generally a little awkward, and often resulted in confusing error messages when you accidentally referenced the wrong namespace.

In ASP.NET Core, this is no longer an issue - MVC and Web API have been unified under the auspices of ASP.NET Core MVC, in which there is fundamentally no real difference between an MVC controller and a Web API controller. All of your controllers can act both as MVC controllers, serving server-side rendered Razor templates, and as Web API controllers returning formatted (e.g. JSON or XML) data.

This unification is great and definitely reduces the mental overhead required when working with both previously. Even if you are not using both aspects in a single application, the fact the types are all familiar is just a smoother experience.

Having said that, if you only need to use the Web API features (e.g. you're building an API client without any server-side rendering requirements), then you may not want/need the additional MVC capabilities in your app. Currently, the default templates include these by default.

The default templates

When you create a new MVC project from a template in Visual Studio or via the command line, you can choose whether to create an empty ASP.NET Core project, a Web API project or an MVC web app project:


If you create an 'empty' project, then the resulting app really is super-lightweight. It has no dependencies on any MVC constructs, and just produces a very simple 'Hello World' response when run:


At the other end of the scale, the 'MVC web app' gives you a more 'complete' application. Depending on the authentication options you select, this could include ASP.NET Core Identity, EF Core, and SQL server integration, in addition to all the MVC configuration and Razor view templating:


In between these two templates is the Web API template. This includes the necessary MVC dependencies for creating a Web API, and the simplest version just includes a single example ValuesController:


However, while this looks stripped back, it also adds all the necessary packages for creating full MVC applications too, i.e. the server-side Razor packages. This is because it includes the same Microsoft.AspNetCore.Mvc package that the full MVC web app does, and calls AddMvc() in Startup.ConfigureServices.

As described in Steve Gordon's post on the AddMvc function, this adds a bunch of various services to the service collection. Some of these are required to allow you to use Web API, but some of them - the Razor-related services in particular - are unnecessary for a web API.

In most cases, using the Microsoft.AspNetCore.Mvc package is the easiest thing to do, but sometimes you want to trim your dependencies as much as possible, and make your APIs as lightweight as you can. In those cases you may find it useful to specifically add only the MVC packages and services you need for your app.

Adding the package dependencies

We'll start with the 'Empty' web application template, and add the packages necessary for Web API to it.

The exact packages you will need will depend on what features you need in your application. By default, the Empty ASP.NET Core template includes ApplicationInsights and the Microsoft.AspNetCore meta package, so I'll leave those in the project.

On top of those, I'll add the MVC.Core package, the JSON formatter package, and the CORS package:

  • The MVC Core package adds all the essential MVC types such as ControllerBase and RouteAttribute, as well as a host of dependencies such as Microsoft.AspNetCore.Mvc.Abstractions and Microsoft.AspNetCore.Authorization.
  • The JSON formatter package ensures we can actually render our Web API action results
  • The CORS package adds Cross Origin Resource Sharing (CORS) support - a common requirement for web APIs that will be hosted on a different domain to the client calling them.

The final .csproj file should look something like this:

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.0.0" />
    <PackageReference Include="Microsoft.AspNetCore" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc.Core" Version="1.1.2" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc.Formatters.Json" Version="1.1.2" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc.Cors" Version="1.1.2" />
  </ItemGroup>

</Project>  

Once you've restored the packages, we can update Startup to add our Web API services.

Adding the necessary services to Startup.cs

In most cases, adding the Web API services to a project would be as simple as calling AddMvc() in your ConfigureServices method. However, that method adds a whole load of functionality that I don't currently need. By default, it would add the ApiExplorer, the Razor view engine, Razor views, tag helpers and DataAnnotations - none of which we are using at the moment (we might well want to add the ApiExplorer and DataAnnotations back at a later date, but right now, I don't need them).

Instead, I'm left with just the following services:

public void ConfigureServices(IServiceCollection services)  
{
    var builder = services.AddMvcCore();
    builder.AddAuthorization();
    builder.AddFormatterMappings();
    builder.AddJsonFormatters();
    builder.AddCors();
}

That's all the services we need for now - next stop, middleware.

Adding the MvcMiddleware

Adding the MvcMiddleware to the pipeline is simple. I just replace the "Hello World" run call with UseMvc(). Note that I'm using the unparameterised version of the method, which does not add any conventional routes to the application. As this is a web API, I will just be using attribute routing, so there's no need to setup the conventional routes.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)  
{
    loggerFactory.AddConsole();

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseMvc();
}

That's all the MVC configuration we need - the final step is to add a controller to show off our new API.

Adding an MVC Controller

There's one important caveat to be aware of when creating a web API in this way - you must use the ControllerBase class, not Controller. The latter is defined in the Microsoft.AspNetCore.Mvc package, which we haven't added. Luckily, it mostly contains methods related to rendering Razor, so it's not a problem for us here. The ControllerBase class includes all the various StatusCodeResult helper methods you will likely use, such as Ok used below.

[Route("api/[controller]")]
public class ValuesController : ControllerBase  
{
    // GET api/values
    [HttpGet]
    public IActionResult Get()
    {
        return Ok(new string[] { "value1", "value2" });
    }
}

And if we take it for a spin:


Voila! A stripped down web API controller, with minimal dependencies.

Bonus: AddWebApi extension method

As a final little piece of tidying up - our ConfigureServices call looks a bit messy now. Personally I'm a fan of the "Clean Startup.cs" approach espoused by K. Scott Allen, in which you reduce the clutter in your Startup.cs class by creating wrapper extension methods for your configuration.

We can do the same with our simplified web API project by adding an extension method called AddWebApi(). I've even created a parameterised overload that takes an Action<MvcOptions>, synonymous with the AddMvc() equivalent that you are likely already using.

using System;  
using Microsoft.AspNetCore.Mvc;  
using Microsoft.AspNetCore.Mvc.Internal;

// ReSharper disable once CheckNamespace
namespace Microsoft.Extensions.DependencyInjection  
{
    public static class WebApiServiceCollectionExtensions
    {
        /// <summary>
        /// Adds MVC services to the specified <see cref="IServiceCollection" /> for Web API.
        /// This is a slimmed down version of <see cref="MvcServiceCollectionExtensions.AddMvc"/>
        /// </summary>
        /// <param name="services">The <see cref="IServiceCollection" /> to add services to.</param>
        /// <returns>An <see cref="IMvcBuilder"/> that can be used to further configure the MVC services.</returns>
        public static IMvcBuilder AddWebApi(this IServiceCollection services)
        {
            if (services == null) throw new ArgumentNullException(nameof(services));

            var builder = services.AddMvcCore();
            builder.AddAuthorization();

            builder.AddFormatterMappings();

            // +10 order
            builder.AddJsonFormatters();

            builder.AddCors();

            return new MvcBuilder(builder.Services, builder.PartManager);
        }

        /// <summary>
        /// Adds MVC services to the specified <see cref="IServiceCollection" /> for Web API.
        /// This is a slimmed down version of <see cref="MvcServiceCollectionExtensions.AddMvc"/>
        /// </summary>
        /// <param name="services">The <see cref="IServiceCollection" /> to add services to.</param>
        /// <param name="setupAction">An <see cref="Action{MvcOptions}"/> to configure the provided <see cref="MvcOptions"/>.</param>
        /// <returns>An <see cref="IMvcBuilder"/> that can be used to further configure the MVC services.</returns>
        public static IMvcBuilder AddWebApi(this IServiceCollection services, Action<MvcOptions> setupAction)
        {
            if (services == null) throw new ArgumentNullException(nameof(services));
            if (setupAction == null) throw new ArgumentNullException(nameof(setupAction));

            var builder = services.AddWebApi();
            builder.Services.Configure(setupAction);

            return builder;
        }

    }
}

Finally, we can use this extension method to tidy up our ConfigureServices method:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddWebApi();
}

Much better!
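
If you also need to configure MvcOptions, the parameterised overload keeps the Startup.cs just as clean. For example (using the built-in RequireHttpsAttribute purely as an illustration):

public void ConfigureServices(IServiceCollection services)  
{
    services.AddWebApi(options =>
    {
        // any MvcOptions configuration works here, e.g. adding a global filter
        options.Filters.Add(new RequireHttpsAttribute());
    });
}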

Summary

This post showed how you could trim the Razor dependencies from your application, when you know you are not going to need them. This represents pretty much the most bare-bones web API template you might use in your application. Obviously mileage may vary, but luckily adding extra capabilities (validation, ApiExplorer for example) is easy!


Damien Bowden: ASP.NET Core IdentityServer4 Resource Owner Password Flow with custom UserRepository

This article shows how a custom user store or repository can be used in IdentityServer4. This can be used for an existing user management system which doesn't use ASP.NET Core Identity, or to request user data from a custom source. The Resource Owner Flow using refresh tokens is used to access the protected data on the resource server. The client is implemented using IdentityModel.

Code: https://github.com/damienbod/AspNetCoreIdentityServer4ResourceOwnerPassword

Setting up a custom User Repository in IdentityServer4

To create a custom user store, an extension method needs to be created which can be added to the AddIdentityServer() builder. The .AddCustomUserStore() adds everything required for the custom user management.

services.AddIdentityServer()
		.AddSigningCredential(cert)
		.AddInMemoryIdentityResources(Config.GetIdentityResources())
		.AddInMemoryApiResources(Config.GetApiResources())
		.AddInMemoryClients(Config.GetClients())
		.AddCustomUserStore();

The extension method adds the required classes to the ASP.NET Core dependency injection services. A user repository is used to access the user data, a custom profile service is added to add the required claims to the tokens, and a validator is also added to validate the user credentials.

using CustomIdentityServer4.UserServices;

namespace Microsoft.Extensions.DependencyInjection
{
    public static class CustomIdentityServerBuilderExtensions
    {
        public static IIdentityServerBuilder AddCustomUserStore(this IIdentityServerBuilder builder)
        {
            builder.Services.AddSingleton<IUserRepository, UserRepository>();
            builder.AddProfileService<CustomProfileService>();
            builder.AddResourceOwnerValidator<CustomResourceOwnerPasswordValidator>();

            return builder;
        }
    }
}

The IUserRepository interface adds everything required by the application to use the custom user store throughout the IdentityServer4 application. The different views and controllers use this interface as required, and the implementation behind it can be swapped out as needed.

namespace CustomIdentityServer4.UserServices
{
    public interface IUserRepository
    {
        bool ValidateCredentials(string username, string password);

        CustomUser FindBySubjectId(string subjectId);

        CustomUser FindByUsername(string username);
    }
}

The CustomUser class is the user class. This class can be changed to map the user data defined in the persistence medium.

namespace CustomIdentityServer4.UserServices
{
    public class CustomUser
    {
            public string SubjectId { get; set; }
            public string Email { get; set; }
            public string UserName { get; set; }
            public string Password { get; set; }
    }
}

The UserRepository implements the IUserRepository interface. Dummy users are added in this example for testing. If you're using a custom database, Dapper, or something else, you would implement the data access logic in this class.

using System.Collections.Generic;
using System.Linq;
using System;

namespace CustomIdentityServer4.UserServices
{
    public class UserRepository : IUserRepository
    {
        // Some dummy data. Replace this with your user persistence.
        private readonly List<CustomUser> _users = new List<CustomUser>
        {
            new CustomUser{
                SubjectId = "123",
                UserName = "damienbod",
                Password = "damienbod",
                Email = "damienbod@email.ch"
            },
            new CustomUser{
                SubjectId = "124",
                UserName = "raphael",
                Password = "raphael",
                Email = "raphael@email.ch"
            },
        };

        public bool ValidateCredentials(string username, string password)
        {
            var user = FindByUsername(username);
            if (user != null)
            {
                return user.Password.Equals(password);
            }

            return false;
        }

        public CustomUser FindBySubjectId(string subjectId)
        {
            return _users.FirstOrDefault(x => x.SubjectId == subjectId);
        }

        public CustomUser FindByUsername(string username)
        {
            return _users.FirstOrDefault(x => x.UserName.Equals(username, StringComparison.OrdinalIgnoreCase));
        }
    }
}
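
As a rough sketch of what a real implementation might look like - assuming a Users table, Dapper, and a SQL connection string injected from configuration (none of which are part of this sample) - FindByUsername could be implemented like this:

public CustomUser FindByUsername(string username)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        // Dapper maps the columns of the Users table onto the CustomUser properties
        return connection.QueryFirstOrDefault<CustomUser>(
            "SELECT SubjectId, UserName, Password, Email FROM Users WHERE UserName = @username",
            new { username });
    }
}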

The CustomProfileService uses the IUserRepository to get the user data, and adds the claims for the user to the tokens, which are returned to the client, if the user/application was validated.

using System.Security.Claims;
using System.Threading.Tasks;
using IdentityServer4.Extensions;
using IdentityServer4.Models;
using IdentityServer4.Services;
using Microsoft.Extensions.Logging;
using System.Collections.Generic;

namespace CustomIdentityServer4.UserServices
{
    public class CustomProfileService : IProfileService
    {
        protected readonly ILogger Logger;


        protected readonly IUserRepository _userRepository;

        public CustomProfileService(IUserRepository userRepository, ILogger<CustomProfileService> logger)
        {
            _userRepository = userRepository;
            Logger = logger;
        }


        public async Task GetProfileDataAsync(ProfileDataRequestContext context)
        {
            var sub = context.Subject.GetSubjectId();

            Logger.LogDebug("Get profile called for subject {subject} from client {client} with claim types {claimTypes} via {caller}",
                context.Subject.GetSubjectId(),
                context.Client.ClientName ?? context.Client.ClientId,
                context.RequestedClaimTypes,
                context.Caller);

            var user = _userRepository.FindBySubjectId(context.Subject.GetSubjectId());

            var claims = new List<Claim>
            {
                new Claim("role", "dataEventRecords.admin"),
                new Claim("role", "dataEventRecords.user"),
                new Claim("username", user.UserName),
                new Claim("email", user.Email)
            };

            context.IssuedClaims = claims;
        }

        public async Task IsActiveAsync(IsActiveContext context)
        {
            var sub = context.Subject.GetSubjectId();
            var user = _userRepository.FindBySubjectId(context.Subject.GetSubjectId());
            context.IsActive = user != null;
        }
    }
}

The CustomResourceOwnerPasswordValidator implements the validation.

using IdentityServer4.Validation;
using IdentityModel;
using System.Threading.Tasks;

namespace CustomIdentityServer4.UserServices
{
    public class CustomResourceOwnerPasswordValidator : IResourceOwnerPasswordValidator
    {
        private readonly IUserRepository _userRepository;

        public CustomResourceOwnerPasswordValidator(IUserRepository userRepository)
        {
            _userRepository = userRepository;
        }

        public Task ValidateAsync(ResourceOwnerPasswordValidationContext context)
        {
            if (_userRepository.ValidateCredentials(context.UserName, context.Password))
            {
                var user = _userRepository.FindByUsername(context.UserName);
                context.Result = new GrantValidationResult(user.SubjectId, OidcConstants.AuthenticationMethods.Password);
            }

            return Task.FromResult(0);
        }
    }
}
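
Note that the validator only sets a result when the credentials are valid. If you want the client to receive an explicit error in the token response, a small variation (a sketch - TokenRequestErrors lives in IdentityServer4.Models) could set an error result as well:

public Task ValidateAsync(ResourceOwnerPasswordValidationContext context)
{
    if (_userRepository.ValidateCredentials(context.UserName, context.Password))
    {
        var user = _userRepository.FindByUsername(context.UserName);
        context.Result = new GrantValidationResult(user.SubjectId, OidcConstants.AuthenticationMethods.Password);
    }
    else
    {
        // returned to the client as error / error_description in the token response
        context.Result = new GrantValidationResult(TokenRequestErrors.InvalidGrant, "invalid username or password");
    }

    return Task.FromResult(0);
}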

The AccountController is configured to use the IUserRepository interface.

   public class AccountController : Controller
    {
        private readonly IIdentityServerInteractionService _interaction;
        private readonly AccountService _account;
        private readonly IUserRepository _userRepository;

        public AccountController(
            IIdentityServerInteractionService interaction,
            IClientStore clientStore,
            IHttpContextAccessor httpContextAccessor,
            IUserRepository userRepository)
        {
            _interaction = interaction;
            _account = new AccountService(interaction, httpContextAccessor, clientStore);
            _userRepository = userRepository;
        }

        /// <summary>
        /// Show login page
        /// </summary>
        [HttpGet]

Setting up a grant type ResourceOwnerPasswordAndClientCredentials to use refresh tokens

The grant type ResourceOwnerPasswordAndClientCredentials is configured in the GetClients method in the IdentityServer4 application. To use refresh tokens, you must add the IdentityServerConstants.StandardScopes.OfflineAccess to the allowed scopes. Then the other refresh token settings can be set as required.

public static IEnumerable<Client> GetClients()
{
	return new List<Client>
	{
		new Client
		{
			ClientId = "resourceownerclient",

			AllowedGrantTypes = GrantTypes.ResourceOwnerPasswordAndClientCredentials,
			AccessTokenType = AccessTokenType.Jwt,
			AccessTokenLifetime = 120, //86400,
			IdentityTokenLifetime = 120, //86400,
			UpdateAccessTokenClaimsOnRefresh = true,
			SlidingRefreshTokenLifetime = 30,
			AllowOfflineAccess = true,
			RefreshTokenExpiration = TokenExpiration.Absolute,
			RefreshTokenUsage = TokenUsage.OneTimeOnly,
			AlwaysSendClientClaims = true,
			Enabled = true,
			ClientSecrets=  new List<Secret> { new Secret("dataEventRecordsSecret".Sha256()) },
			AllowedScopes = {
				IdentityServerConstants.StandardScopes.OpenId, 
				IdentityServerConstants.StandardScopes.Profile,
				IdentityServerConstants.StandardScopes.Email,
				IdentityServerConstants.StandardScopes.OfflineAccess,
				"dataEventRecords"
			}
		}
	};
}
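
The Config.GetApiResources method is not shown in this post. A minimal sketch that would line up with the ApiName, ApiSecret and claims used later on the resource server might look like this:

public static IEnumerable<ApiResource> GetApiResources()
{
    return new List<ApiResource>
    {
        new ApiResource("dataEventRecords", "Data Event Records API")
        {
            // matches the ApiName / ApiSecret used by the resource server for introspection
            ApiSecrets = { new Secret("dataEventRecordsSecret".Sha256()) },
            // user claims from the profile service that should be included for this API
            UserClaims = { "role", "username", "email" }
        }
    };
}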

When the token client requests a token, the offline_access scope must be sent in the HTTP request to receive a refresh token.

private static async Task<TokenResponse> RequestTokenAsync(string user, string password)
{
	return await _tokenClient.RequestResourceOwnerPasswordAsync(
		user,
		password,
		"email openid dataEventRecords offline_access");
}
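
The _tokenClient used above is not shown in this excerpt. A minimal sketch using IdentityModel's DiscoveryClient and TokenClient - the endpoint and secret here are assumptions based on the client configuration above - could be:

var disco = await DiscoveryClient.GetAsync("https://localhost:44318");
_tokenClient = new TokenClient(disco.TokenEndpoint, "resourceownerclient", "dataEventRecordsSecret");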

Running the application

When all three applications are started, the console application gets the tokens from the IdentityServer4 application and the required claims are returned to the console application in the token. Not all the claims need to be added to the access_token, only the ones which are required on the resource server. If the user info is required in the UI, a separate request can be made for this info.

Here’s the token payload returned from the server to the client in the token. You can see the extra data added in the profile service, for example the role array.

{
  "nbf": 1492161131,
  "exp": 1492161251,
  "iss": "https://localhost:44318",
  "aud": [
    "https://localhost:44318/resources",
    "dataEventRecords"
  ],
  "client_id": "resourceownerclient",
  "sub": "123",
  "auth_time": 1492161130,
  "idp": "local",
  "role": [
    "dataEventRecords.admin",
    "dataEventRecords.user"
  ],
  "username": "damienbod",
  "email": "damienbod@email.ch",
  "scope": [
    "email",
    "openid",
    "dataEventRecords",
    "offline_access"
  ],
  "amr": [
    "pwd"
  ]
}

The token is used to get the data from the resource server. The client uses the access_token and adds it to the header of the HTTP request.

HttpClient httpClient = new HttpClient();
httpClient.SetBearerToken(access_token);

var payloadFromResourceServer = await httpClient.GetAsync("https://localhost:44365/api/DataEventRecords");
if (!payloadFromResourceServer.IsSuccessStatusCode)
{
	Console.WriteLine(payloadFromResourceServer.StatusCode);
}
else
{
	var content = await payloadFromResourceServer.Content.ReadAsStringAsync();
	Console.WriteLine(JArray.Parse(content));
}

The resource server validates each request using the UseIdentityServerAuthentication middleware extension method.

JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
IdentityServerAuthenticationOptions identityServerValidationOptions = new IdentityServerAuthenticationOptions
{
	Authority = "https://localhost:44318/",
	AllowedScopes = new List<string> { "dataEventRecords" },
	ApiSecret = "dataEventRecordsSecret",
	ApiName = "dataEventRecords",
	AutomaticAuthenticate = true,
	SupportedTokens = SupportedTokens.Both,
	// TokenRetriever = _tokenRetriever,
	// required if you want to return a 403 and not a 401 for forbidden responses
	AutomaticChallenge = true,
};

app.UseIdentityServerAuthentication(identityServerValidationOptions);

Each API is protected using the Authorize attribute with policies if needed. The HttpContext can be used to get the claims sent with the token, if required. The username is sent with the access_token in the header.

[Authorize("dataEventRecordsUser")]
[HttpGet]
public IActionResult Get()
{
	var userName = HttpContext.User.FindFirst("username")?.Value;
	return Ok(_dataEventRecordRepository.GetAll());
}
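
The dataEventRecordsUser policy referenced in the Authorize attribute is not shown in this excerpt. A minimal registration, assuming the role claims added by the profile service, would look something like this:

services.AddAuthorization(options =>
{
    options.AddPolicy("dataEventRecordsUser", policy =>
        policy.RequireClaim("role", "dataEventRecords.user"));
});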

The client gets a refresh token and uses it to request new tokens periodically. You could use a background task to implement this in a desktop or mobile application.

public static async Task RunRefreshAsync(TokenResponse response, int milliseconds)
{
	var refresh_token = response.RefreshToken;

	while (true)
	{
		response = await RefreshTokenAsync(refresh_token);

		// Get the resource data using the new tokens...
		await ResourceDataClient.GetDataAndDisplayInConsoleAsync(response.AccessToken);

		if (response.RefreshToken != refresh_token)
		{
			ShowResponse(response);
			refresh_token = response.RefreshToken;
		}

		await Task.Delay(milliseconds);
	}
}
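
The RefreshTokenAsync helper called in the loop is not shown above. A minimal sketch using the same IdentityModel TokenClient could be:

private static async Task<TokenResponse> RefreshTokenAsync(string refreshToken)
{
    // exchanges the refresh token for a new access token / refresh token pair
    return await _tokenClient.RequestRefreshTokenAsync(refreshToken);
}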

The application then loops forever.

Links:

https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow

https://github.com/IdentityModel/IdentityModel2

https://github.com/IdentityServer/IdentityServer4

https://github.com/IdentityServer/IdentityServer4.Samples



Dominick Baier: dotnet new Templates for IdentityServer4

The dotnet CLI includes a templating engine that makes it pretty straightforward to create your own project templates (see this blog post for a good intro).

This new repo is the home for all IdentityServer4 templates to come – right now they are pretty basic, but good enough to get you started.

The repo includes three templates right now:

dotnet new is4

Creates a minimal IdentityServer4 project without a UI and just one API and one client.

dotnet new is4ui

Adds the quickstart UI to the current project (can be combined with is4)

dotnet new is4inmem

Adds a boilerplate IdentityServer with UI, test users and sample clients and resources

See the readme for installation instructions.
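
Installation is just a matter of installing the template package with the dotnet CLI. The exact package name is in the readme, but it looks something like this:

> dotnet new -i identityserver4.templates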

is4 new


Filed under: .NET Security, ASP.NET Core, IdentityServer, OAuth, OpenID Connect, WebAPI


Damien Bowden: Implementing OpenID Implicit Flow using OpenIddict and Angular

This article shows how to implement the OpenID Connect Implicit Flow using OpenIddict hosted in an ASP.NET Core application, an ASP.NET Core web API and an Angular application as the client.

Code: https://github.com/damienbod/AspNetCoreOpeniddictAngularImplicitFlow

Three different projects are used to implement the application. The OpenIddict Implicit Flow Server is used to authenticate and authorise, the resource server is used to provide the API, and the Angular application implements the UI.

OpenIddict Server implementing the Implicit Flow

To use the OpenIddict NuGet packages to implement an OpenID Connect server, you need to use the aspnet-contrib MyGet feed. You can add a NuGet.config file to your project to configure this, or add it to the package sources in Visual Studio 2017.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="NuGet" value="https://api.nuget.org/v3/index.json" />
    <add key="aspnet-contrib" value="https://www.myget.org/F/aspnet-contrib/api/v3/index.json" />
  </packageSources>
</configuration>

Then you can use the NuGet package manager to download the required packages. You need to select the correct package source in the drop-down on the right-hand side, and include the required pre-release packages.

Or you can just add them directly to the csproj file.

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
    <PreserveCompilationContext>true</PreserveCompilationContext>
    <OutputType>Exe</OutputType>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="AspNet.Security.OAuth.Validation" Version="1.0.0-rtm-0241" />
    <PackageReference Include="Microsoft.AspNetCore.Authentication.Google" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Authentication.JwtBearer" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Authentication.Twitter" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Diagnostics" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Identity.EntityFrameworkCore" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.2" />
    <PackageReference Include="Microsoft.AspNetCore.Server.IISIntegration" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Server.Kestrel" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Cors" Version="1.1.1" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Configuration.CommandLine" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Configuration.EnvironmentVariables" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Logging.Console" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.1.1" />
    <PackageReference Include="Openiddict" Version="1.0.0-beta2-0598" />
    <PackageReference Include="OpenIddict.EntityFrameworkCore" Version="1.0.0-beta2-0598" />
    <PackageReference Include="OpenIddict.Mvc" Version="1.0.0-beta2-0598" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite" Version="1.1.1" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite.Design" Version="1.1.1" />
  </ItemGroup>

  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="1.0.0" />
    <DotNetCliToolReference Include="Microsoft.DotNet.Watcher.Tools" Version="1.0.0" />
  </ItemGroup>

  <ItemGroup>
    <None Update="damienbodserver.pfx">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
  </ItemGroup>

</Project>

The OpenIddict packages are configured in the ConfigureServices and the Configure methods in the Startup class. The following code configures the OpenID Connect Implicit Flow with a SQLite database using Entity Framework Core. The required endpoints are enabled, and JSON Web Tokens (JWTs) are used.

public void ConfigureServices(IServiceCollection services)
{
	services.AddDbContext<ApplicationDbContext>(options =>
	{
		options.UseSqlite(Configuration.GetConnectionString("DefaultConnection"));
		options.UseOpenIddict();
	});

	services.AddIdentity<ApplicationUser, IdentityRole>()
		.AddEntityFrameworkStores<ApplicationDbContext>();

	services.Configure<IdentityOptions>(options =>
	{
		options.ClaimsIdentity.UserNameClaimType = OpenIdConnectConstants.Claims.Name;
		options.ClaimsIdentity.UserIdClaimType = OpenIdConnectConstants.Claims.Subject;
		options.ClaimsIdentity.RoleClaimType = OpenIdConnectConstants.Claims.Role;
	});

	services.AddOpenIddict(options =>
	{
		options.AddEntityFrameworkCoreStores<ApplicationDbContext>();
		options.AddMvcBinders();
		options.EnableAuthorizationEndpoint("/connect/authorize")
			   .EnableLogoutEndpoint("/connect/logout")
			   .EnableIntrospectionEndpoint("/connect/introspect")
			   .EnableUserinfoEndpoint("/api/userinfo");

		options.AllowImplicitFlow();
		options.AddSigningCertificate(_cert);
		options.UseJsonWebTokens();
	});

	var policy = new Microsoft.AspNetCore.Cors.Infrastructure.CorsPolicy();

	policy.Headers.Add("*");
	policy.Methods.Add("*");
	policy.Origins.Add("*");
	policy.SupportsCredentials = true;

	services.AddCors(x => x.AddPolicy("corsGlobalPolicy", policy));

	services.AddMvc();

	services.AddTransient<IEmailSender, AuthMessageSender>();
	services.AddTransient<ISmsSender, AuthMessageSender>();
}

The Configure method defines JwtBearerAuthentication so the userinfo API, or any other authorised API, can be used. The OpenIddict middleware is also added. The commented out method InitializeAsync is used to add OpenIddict data to the existing database. The database was created using Entity Framework Core migrations from the command line.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	loggerFactory.AddConsole(Configuration.GetSection("Logging"));
	loggerFactory.AddDebug();

	if (env.IsDevelopment())
	{
		app.UseDeveloperExceptionPage();
		app.UseDatabaseErrorPage();
	}
	else
	{
		app.UseExceptionHandler("/Home/Error");
	}

	app.UseCors("corsGlobalPolicy");

	JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
	JwtSecurityTokenHandler.DefaultOutboundClaimTypeMap.Clear();

	var jwtOptions = new JwtBearerOptions()
	{
		AutomaticAuthenticate = true,
		AutomaticChallenge = true,
		RequireHttpsMetadata = true,
		Audience = "dataEventRecords",
		ClaimsIssuer = "https://localhost:44319/",
		TokenValidationParameters = new TokenValidationParameters
		{
			NameClaimType = OpenIdConnectConstants.Claims.Name,
			RoleClaimType = OpenIdConnectConstants.Claims.Role
		}
	};

	jwtOptions.TokenValidationParameters.ValidAudience = "dataEventRecords";
	jwtOptions.TokenValidationParameters.ValidIssuer = "https://localhost:44319/";
	jwtOptions.TokenValidationParameters.IssuerSigningKey = new RsaSecurityKey(_cert.GetRSAPrivateKey().ExportParameters(false));
	app.UseJwtBearerAuthentication(jwtOptions);

	app.UseIdentity();

	app.UseOpenIddict();

	app.UseMvcWithDefaultRoute();

	// Seed the database with the sample applications.
	// Note: in a real world application, this step should be part of a setup script.
	// InitializeAsync(app.ApplicationServices, CancellationToken.None).GetAwaiter().GetResult();
}

Entity Framework Core database migrations:

> dotnet ef migrations add test
> dotnet ef database update test

The UserinfoController controller is used to return user data to the client. The API requires a token which is validated using the JWT Bearer token validation, configured in the Startup class.
The claims required by the application need to be added here. This example adds some extra role claims which are used in the Angular SPA.

using System.Threading.Tasks;
using AspNet.Security.OAuth.Validation;
using AspNet.Security.OpenIdConnect.Primitives;
using OpeniddictServer.Models;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json.Linq;
using System.Collections.Generic;

namespace OpeniddictServer.Controllers
{
    [Route("api")]
    public class UserinfoController : Controller
    {
        private readonly UserManager<ApplicationUser> _userManager;

        public UserinfoController(UserManager<ApplicationUser> userManager)
        {
            _userManager = userManager;
        }

        //
        // GET: /api/userinfo
        [Authorize(ActiveAuthenticationSchemes = OAuthValidationDefaults.AuthenticationScheme)]
        [HttpGet("userinfo"), Produces("application/json")]
        public async Task<IActionResult> Userinfo()
        {
            var user = await _userManager.GetUserAsync(User);
            if (user == null)
            {
                return BadRequest(new OpenIdConnectResponse
                {
                    Error = OpenIdConnectConstants.Errors.InvalidGrant,
                    ErrorDescription = "The user profile is no longer available."
                });
            }

            var claims = new JObject();
            claims[OpenIdConnectConstants.Claims.Subject] = await _userManager.GetUserIdAsync(user);

            if (User.HasClaim(OpenIdConnectConstants.Claims.Scope, OpenIdConnectConstants.Scopes.Email))
            {
                claims[OpenIdConnectConstants.Claims.Email] = await _userManager.GetEmailAsync(user);
                claims[OpenIdConnectConstants.Claims.EmailVerified] = await _userManager.IsEmailConfirmedAsync(user);
            }

            if (User.HasClaim(OpenIdConnectConstants.Claims.Scope, OpenIdConnectConstants.Scopes.Phone))
            {
                claims[OpenIdConnectConstants.Claims.PhoneNumber] = await _userManager.GetPhoneNumberAsync(user);
                claims[OpenIdConnectConstants.Claims.PhoneNumberVerified] = await _userManager.IsPhoneNumberConfirmedAsync(user);
            }

            List<string> roles = new List<string> { "dataEventRecords", "dataEventRecords.admin", "admin", "dataEventRecords.user" };
            claims["role"] = JArray.FromObject(roles);

            return Json(claims);
        }
    }
}

The AuthorizationController implements the CreateTicketAsync method where the claims can be added to the tokens as required. The Implicit Flow in this example requires both the id_token and the access_token, and extra claims are added to the access_token. These are the claims used by the resource server to set the policies.

private async Task<AuthenticationTicket> CreateTicketAsync(OpenIdConnectRequest request, ApplicationUser user)
{
	var identity = new ClaimsIdentity(OpenIdConnectServerDefaults.AuthenticationScheme);

	var principal = await _signInManager.CreateUserPrincipalAsync(user);
	foreach (var claim in principal.Claims)
	{
		if (claim.Type == _identityOptions.Value.ClaimsIdentity.SecurityStampClaimType)
		{
			continue;
		}

		var destinations = new List<string>
		{
			OpenIdConnectConstants.Destinations.AccessToken
		};

		if ((claim.Type == OpenIdConnectConstants.Claims.Name) ||
			(claim.Type == OpenIdConnectConstants.Claims.Email) ||
			(claim.Type == OpenIdConnectConstants.Claims.Role)  )
		{
			destinations.Add(OpenIdConnectConstants.Destinations.IdentityToken);
		}

		claim.SetDestinations(destinations);

		identity.AddClaim(claim);
	}

	// Add custom claims
	var claimdataEventRecordsAdmin = new Claim("role", "dataEventRecords.admin");
	claimdataEventRecordsAdmin.SetDestinations(OpenIdConnectConstants.Destinations.AccessToken);

	var claimAdmin = new Claim("role", "admin");
	claimAdmin.SetDestinations(OpenIdConnectConstants.Destinations.AccessToken);

	var claimUser = new Claim("role", "dataEventRecords.user");
	claimUser.SetDestinations(OpenIdConnectConstants.Destinations.AccessToken);

	identity.AddClaim(claimdataEventRecordsAdmin);
	identity.AddClaim(claimAdmin);
	identity.AddClaim(claimUser);

	// Create a new authentication ticket holding the user identity.
	var ticket = new AuthenticationTicket(new ClaimsPrincipal(identity),
	new AuthenticationProperties(),
	OpenIdConnectServerDefaults.AuthenticationScheme);

	// Set the list of scopes granted to the client application.
	ticket.SetScopes(new[]
	{
		OpenIdConnectConstants.Scopes.OpenId,
		OpenIdConnectConstants.Scopes.Email,
		OpenIdConnectConstants.Scopes.Profile,
		"role",
		"dataEventRecords"
	}.Intersect(request.GetScopes()));

	ticket.SetResources("dataEventRecords");

	return ticket;
}

If you require more examples, or different flows, refer to the excellent openiddict-samples.

Angular Implicit Flow client

The Angular application uses the AuthConfiguration class to set the options required for the OpenID Connect Implicit Flow. The 'id_token token' is defined as the response type so that an access_token is returned as well as the id_token. The jwks_url is required so that the client can get the signing keys from the server to validate the token. The userinfo_url and the logoutEndSession_url are used to define the user data url and the logout url. These could be removed and obtained from the discovery (well-known configuration) endpoint instead. The configuration here has to match the configuration on the server.

import { Injectable } from '@angular/core';

@Injectable()
export class AuthConfiguration {

    // The Issuer Identifier for the OpenID Provider (which is typically obtained during Discovery) MUST exactly match the value of the iss (issuer) Claim.
    public iss = 'https://localhost:44319/';

    public server = 'https://localhost:44319';

    public redirect_url = 'https://localhost:44308';

    // This is required to get the signing keys so that the signature of the JWT can be validated.
    public jwks_url = 'https://localhost:44319/.well-known/jwks';

    public userinfo_url = 'https://localhost:44319/api/userinfo';

    public logoutEndSession_url = 'https://localhost:44319/connect/logout';

    // The Client MUST validate that the aud (audience) Claim contains its client_id value registered at the Issuer identified by the iss (issuer) Claim as an audience.
    // The ID Token MUST be rejected if the ID Token does not list the Client as a valid audience, or if it contains additional audiences not trusted by the Client.
    public client_id = 'angular4client';

    public response_type = 'id_token token';

    public scope = 'dataEventRecords openid';

    public post_logout_redirect_uri = 'https://localhost:44308/Unauthorized';
}

The OidcSecurityService is used to send the login request to the server and also handle the callback which validates the tokens. This class also persists the token data to browser storage (session storage in this sample).

import { Injectable } from '@angular/core';
import { Http, Response, Headers } from '@angular/http';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/catch';
import { Observable } from 'rxjs/Rx';
import { Router } from '@angular/router';
import { AuthConfiguration } from '../auth.configuration';
import { OidcSecurityValidation } from './oidc.security.validation';
import { JwtKeys } from './jwtkeys';

@Injectable()
export class OidcSecurityService {

    public HasAdminRole: boolean;
    public HasUserAdminRole: boolean;
    public UserData: any;

    private _isAuthorized: boolean;
    private actionUrl: string;
    private headers: Headers;
    private storage: any;
    private oidcSecurityValidation: OidcSecurityValidation;

    private errorMessage: string;
    private jwtKeys: JwtKeys;

    constructor(private _http: Http, private _configuration: AuthConfiguration, private _router: Router) {

        this.actionUrl = _configuration.server + 'api/DataEventRecords/';
        this.oidcSecurityValidation = new OidcSecurityValidation();

        this.headers = new Headers();
        this.headers.append('Content-Type', 'application/json');
        this.headers.append('Accept', 'application/json');
        this.storage = sessionStorage; //localStorage;

        if (this.retrieve('_isAuthorized') !== '') {
            this.HasAdminRole = this.retrieve('HasAdminRole');
            this._isAuthorized = this.retrieve('_isAuthorized');
        }
    }

    public IsAuthorized(): boolean {
        if (this._isAuthorized) {
            if (this.oidcSecurityValidation.IsTokenExpired(this.retrieve('authorizationDataIdToken'))) {
                console.log('IsAuthorized: isTokenExpired');
                this.ResetAuthorizationData();
                return false;
            }

            return true;
        }

        return false;
    }

    public GetToken(): any {
        return this.retrieve('authorizationData');
    }

    public ResetAuthorizationData() {
        this.store('authorizationData', '');
        this.store('authorizationDataIdToken', '');

        this._isAuthorized = false;
        this.HasAdminRole = false;
        this.store('HasAdminRole', false);
        this.store('_isAuthorized', false);
    }

    public SetAuthorizationData(token: any, id_token: any) {
        if (this.retrieve('authorizationData') !== '') {
            this.store('authorizationData', '');
        }

        console.log(token);
        console.log(id_token);
        console.log('storing to storage, getting the roles');
        this.store('authorizationData', token);
        this.store('authorizationDataIdToken', id_token);
        this._isAuthorized = true;
        this.store('_isAuthorized', true);

        this.getUserData()
            .subscribe(data => this.UserData = data,
            error => this.HandleError(error),
            () => {
                for (let i = 0; i < this.UserData.role.length; i++) {
                    console.log(this.UserData.role[i]);
                    if (this.UserData.role[i] === 'dataEventRecords.admin') {
                        this.HasAdminRole = true;
                        this.store('HasAdminRole', true);
                    }
                    if (this.UserData.role[i] === 'admin') {
                        this.HasUserAdminRole = true;
                        this.store('HasUserAdminRole', true);
                    }
                }
            });
    }

    public Authorize() {
        this.ResetAuthorizationData();

        console.log('BEGIN Authorize, no auth data');

        let authorizationUrl = this._configuration.server + '/connect/authorize';
        let client_id = this._configuration.client_id;
        let redirect_uri = this._configuration.redirect_url;
        let response_type = this._configuration.response_type;
        let scope = this._configuration.scope;
        let nonce = 'N' + Math.random() + '' + Date.now();
        let state = Date.now() + '' + Math.random();

        this.store('authStateControl', state);
        this.store('authNonce', nonce);
        console.log('AuthorizedController created. adding myautostate: ' + this.retrieve('authStateControl'));

        let url =
            authorizationUrl + '?' +
            'response_type=' + encodeURI(response_type) + '&' +
            'client_id=' + encodeURI(client_id) + '&' +
            'redirect_uri=' + encodeURI(redirect_uri) + '&' +
            'scope=' + encodeURI(scope) + '&' +
            'nonce=' + encodeURI(nonce) + '&' +
            'state=' + encodeURI(state);

        window.location.href = url;
    }

    public AuthorizedCallback() {
        console.log('BEGIN AuthorizedCallback, no auth data');
        this.ResetAuthorizationData();

        let hash = window.location.hash.substr(1);

        let result: any = hash.split('&').reduce(function (result: any, item: string) {
            let parts = item.split('=');
            result[parts[0]] = parts[1];
            return result;
        }, {});

        console.log(result);
        console.log('AuthorizedCallback created, begin token validation');

        let token = '';
        let id_token = '';
        let authResponseIsValid = false;

        this.getSigningKeys()
            .subscribe(jwtKeys => {
                this.jwtKeys = jwtKeys;

                if (!result.error) {

                    // validate state
                    if (this.oidcSecurityValidation.ValidateStateFromHashCallback(result.state, this.retrieve('authStateControl'))) {
                        token = result.access_token;
                        id_token = result.id_token;
                        let decoded: any;
                        let headerDecoded;
                        decoded = this.oidcSecurityValidation.GetPayloadFromToken(id_token, false);
                        headerDecoded = this.oidcSecurityValidation.GetHeaderFromToken(id_token, false);

                        // validate jwt signature
                        if (this.oidcSecurityValidation.Validate_signature_id_token(id_token, this.jwtKeys)) {
                            // validate nonce
                            if (this.oidcSecurityValidation.Validate_id_token_nonce(decoded, this.retrieve('authNonce'))) {
                                // validate iss
                                if (this.oidcSecurityValidation.Validate_id_token_iss(decoded, this._configuration.iss)) {
                                    // validate aud
                                    if (this.oidcSecurityValidation.Validate_id_token_aud(decoded, this._configuration.client_id)) {
                                        // valiadate at_hash and access_token
                                        if (this.oidcSecurityValidation.Validate_id_token_at_hash(token, decoded.at_hash) || !token) {
                                            this.store('authNonce', '');
                                            this.store('authStateControl', '');

                                            authResponseIsValid = true;
                                            console.log('AuthorizedCallback state, nonce, iss, aud, signature validated, returning token');
                                        } else {
                                            console.log('AuthorizedCallback incorrect at_hash');
                                        }
                                    } else {
                                        console.log('AuthorizedCallback incorrect aud');
                                    }
                                } else {
                                    console.log('AuthorizedCallback incorrect iss');
                                }
                            } else {
                                console.log('AuthorizedCallback incorrect nonce');
                            }
                        } else {
                            console.log('AuthorizedCallback incorrect Signature id_token');
                        }
                    } else {
                        console.log('AuthorizedCallback incorrect state');
                    }
                }

                if (authResponseIsValid) {
                    this.SetAuthorizationData(token, id_token);
                    console.log(this.retrieve('authorizationData'));

                    // router navigate to DataEventRecordsList
                    this._router.navigate(['/dataeventrecords/list']);
                } else {
                    this.ResetAuthorizationData();
                    this._router.navigate(['/Unauthorized']);
                }
            });
    }

    public Logoff() {
        // /connect/endsession?id_token_hint=...&post_logout_redirect_uri=https://myapp.com
        console.log('BEGIN Authorize, no auth data');

        let authorizationEndsessionUrl = this._configuration.logoutEndSession_url;

        let id_token_hint = this.retrieve('authorizationDataIdToken');
        let post_logout_redirect_uri = this._configuration.post_logout_redirect_uri;

        let url =
            authorizationEndsessionUrl + '?' +
            'id_token_hint=' + encodeURI(id_token_hint) + '&' +
            'post_logout_redirect_uri=' + encodeURI(post_logout_redirect_uri);

        this.ResetAuthorizationData();

        window.location.href = url;
    }

    private runGetSigningKeys() {
        this.getSigningKeys()
            .subscribe(
            jwtKeys => this.jwtKeys = jwtKeys,
            error => this.errorMessage = <any>error);
    }

    private getSigningKeys(): Observable<JwtKeys> {
        return this._http.get(this._configuration.jwks_url)
            .map(this.extractData)
            .catch(this.handleError);
    }

    private extractData(res: Response) {
        let body = res.json();
        return body;
    }

    private handleError(error: Response | any) {
        // In a real world app, you might use a remote logging infrastructure
        let errMsg: string;
        if (error instanceof Response) {
            const body = error.json() || '';
            const err = body.error || JSON.stringify(body);
            errMsg = `${error.status} - ${error.statusText || ''} ${err}`;
        } else {
            errMsg = error.message ? error.message : error.toString();
        }
        console.error(errMsg);
        return Observable.throw(errMsg);
    }

    public HandleError(error: any) {
        console.log(error);
        if (error.status == 403) {
            this._router.navigate(['/Forbidden']);
        } else if (error.status == 401) {
            this.ResetAuthorizationData();
            this._router.navigate(['/Unauthorized']);
        }
    }

    private retrieve(key: string): any {
        let item = this.storage.getItem(key);

        if (item && item !== 'undefined') {
            return JSON.parse(this.storage.getItem(key));
        }

        return;
    }

    private store(key: string, value: any) {
        this.storage.setItem(key, JSON.stringify(value));
    }

    private getUserData = (): Observable<string[]> => {
        this.setHeaders();
        return this._http.get(this._configuration.userinfo_url, {
            headers: this.headers,
            body: ''
        }).map(res => res.json());
    }

    private setHeaders() {
        this.headers = new Headers();
        this.headers.append('Content-Type', 'application/json');
        this.headers.append('Accept', 'application/json');

        let token = this.GetToken();

        if (token !== '') {
            this.headers.append('Authorization', 'Bearer ' + token);
        }
    }
}

The OidcSecurityValidation class defines the functions used to validate the tokens defined in the OpenID Connect specification for the Implicit Flow.

import { Injectable } from '@angular/core';

// from jsrsasign
declare var KJUR: any;
declare var KEYUTIL: any;
declare var hextob64u: any;

// http://openid.net/specs/openid-connect-implicit-1_0.html

// id_token
//// id_token C1: The Issuer Identifier for the OpenID Provider (which is typically obtained during Discovery) MUST exactly match the value of the iss (issuer) Claim.
//// id_token C2: The Client MUST validate that the aud (audience) Claim contains its client_id value registered at the Issuer identified by the iss (issuer) Claim as an audience.The ID Token MUST be rejected if the ID Token does not list the Client as a valid audience, or if it contains additional audiences not trusted by the Client.
// id_token C3: If the ID Token contains multiple audiences, the Client SHOULD verify that an azp Claim is present.
// id_token C4: If an azp (authorized party) Claim is present, the Client SHOULD verify that its client_id is the Claim Value.
//// id_token C5: The Client MUST validate the signature of the ID Token according to JWS [JWS] using the algorithm specified in the alg Header Parameter of the JOSE Header. The Client MUST use the keys provided by the Issuer.
//// id_token C6: The alg value SHOULD be RS256. Validation of tokens using other signing algorithms is described in the OpenID Connect Core 1.0 [OpenID.Core] specification.
//// id_token C7: The current time MUST be before the time represented by the exp Claim (possibly allowing for some small leeway to account for clock skew).
// id_token C8: The iat Claim can be used to reject tokens that were issued too far away from the current time, limiting the amount of time that nonces need to be stored to prevent attacks.The acceptable range is Client specific.
//// id_token C9: The value of the nonce Claim MUST be checked to verify that it is the same value as the one that was sent in the Authentication Request.The Client SHOULD check the nonce value for replay attacks.The precise method for detecting replay attacks is Client specific.
// id_token C10: If the acr Claim was requested, the Client SHOULD check that the asserted Claim Value is appropriate.The meaning and processing of acr Claim Values is out of scope for this document.
// id_token C11: When a max_age request is made, the Client SHOULD check the auth_time Claim value and request re- authentication if it determines too much time has elapsed since the last End- User authentication.

//// Access Token Validation
//// access_token C1: Hash the octets of the ASCII representation of the access_token with the hash algorithm specified in JWA[JWA] for the alg Header Parameter of the ID Token's JOSE Header. For instance, if the alg is RS256, the hash algorithm used is SHA-256.
//// access_token C2: Take the left- most half of the hash and base64url- encode it.
//// access_token C3: The value of at_hash in the ID Token MUST match the value produced in the previous step if at_hash is present in the ID Token.

@Injectable()
export class OidcSecurityValidation {

    // id_token C7: The current time MUST be before the time represented by the exp Claim (possibly allowing for some small leeway to account for clock skew).
    public IsTokenExpired(token: string, offsetSeconds?: number): boolean {

        let decoded: any;
        decoded = this.GetPayloadFromToken(token, false);

        let tokenExpirationDate = this.getTokenExpirationDate(decoded);
        offsetSeconds = offsetSeconds || 0;

        if (tokenExpirationDate == null) {
            return false;
        }

        // Token expired?
        return !(tokenExpirationDate.valueOf() > (new Date().valueOf() + (offsetSeconds * 1000)));
    }

    // id_token C9: The value of the nonce Claim MUST be checked to verify that it is the same value as the one that was sent in the Authentication Request.The Client SHOULD check the nonce value for replay attacks.The precise method for detecting replay attacks is Client specific.
    public Validate_id_token_nonce(dataIdToken: any, local_nonce: any): boolean {
        if (dataIdToken.nonce !== local_nonce) {
            console.log('Validate_id_token_nonce failed');
            return false;
        }

        return true;
    }

    // id_token C1: The Issuer Identifier for the OpenID Provider (which is typically obtained during Discovery) MUST exactly match the value of the iss (issuer) Claim.
    public Validate_id_token_iss(dataIdToken: any, client_id: any): boolean {
        if (dataIdToken.iss !== client_id) {
            console.log('Validate_id_token_iss failed');
            return false;
        }

        return true;
    }

    // id_token C2: The Client MUST validate that the aud (audience) Claim contains its client_id value registered at the Issuer identified by the iss (issuer) Claim as an audience.
    // The ID Token MUST be rejected if the ID Token does not list the Client as a valid audience, or if it contains additional audiences not trusted by the Client.
    public Validate_id_token_aud(dataIdToken: any, aud: any): boolean {
        if (dataIdToken.aud !== aud) {
            console.log('Validate_id_token_aud failed');
            return false;
        }

        return true;
    }

    public ValidateStateFromHashCallback(state: any, local_state: any): boolean {
        if (state !== local_state) {
            console.log('ValidateStateFromHashCallback failed');
            return false;
        }

        return true;
    }

    public GetPayloadFromToken(token: any, encode: boolean) {
        let data = {};
        if (typeof token !== 'undefined') {
            let encoded = token.split('.')[1];
            if (encode) {
                return encoded;
            }
            data = JSON.parse(this.urlBase64Decode(encoded));
        }

        return data;
    }

    public GetHeaderFromToken(token: any, encode: boolean) {
        let data = {};
        if (typeof token !== 'undefined') {
            let encoded = token.split('.')[0];
            if (encode) {
                return encoded;
            }
            data = JSON.parse(this.urlBase64Decode(encoded));
        }

        return data;
    }

    public GetSignatureFromToken(token: any, encode: boolean) {
        let data = {};
        if (typeof token !== 'undefined') {
            let encoded = token.split('.')[2];
            if (encode) {
                return encoded;
            }
            data = JSON.parse(this.urlBase64Decode(encoded));
        }

        return data;
    }

    // id_token C5: The Client MUST validate the signature of the ID Token according to JWS [JWS] using the algorithm specified in the alg Header Parameter of the JOSE Header. The Client MUST use the keys provided by the Issuer.
    // id_token C6: The alg value SHOULD be RS256. Validation of tokens using other signing algorithms is described in the OpenID Connect Core 1.0 [OpenID.Core] specification.
    public Validate_signature_id_token(id_token: any, jwtkeys: any): boolean {

        if (!jwtkeys || !jwtkeys.keys) {
            return false;
        }

        let header_data = this.GetHeaderFromToken(id_token, false);
        let kid = header_data.kid;
        let alg = header_data.alg;

        if ('RS256' != alg) {
            console.log('Only RS256 supported');
            return false;
        }

        let isValid = false;

        for (let key of jwtkeys.keys) {
            if (key.kid === kid) {
                let publickey = KEYUTIL.getKey(key);
                isValid = KJUR.jws.JWS.verify(id_token, publickey, ['RS256']);
                return isValid;
            }
        }

        return isValid;
    }

    // Access Token Validation
    // access_token C1: Hash the octets of the ASCII representation of the access_token with the hash algorithm specified in JWA[JWA] for the alg Header Parameter of the ID Token's JOSE Header. For instance, if the alg is RS256, the hash algorithm used is SHA-256.
    // access_token C2: Take the left- most half of the hash and base64url- encode it.
    // access_token C3: The value of at_hash in the ID Token MUST match the value produced in the previous step if at_hash is present in the ID Token.
    public Validate_id_token_at_hash(access_token: any, at_hash: any): boolean {

        let hash = KJUR.crypto.Util.hashString(access_token, 'sha256');
        let first128bits = hash.substr(0, hash.length / 2);
        let testdata = hextob64u(first128bits);

        if (testdata === at_hash) {
            return true; // isValid;
        }

        return false;
    }

    private getTokenExpirationDate(dataIdToken: any): Date {
        if (!dataIdToken.hasOwnProperty('exp')) {
            return null;
        }

        let date = new Date(0); // The 0 here is the key, which sets the date to the epoch
        date.setUTCSeconds(dataIdToken.exp);

        return date;
    }


    private urlBase64Decode(str: string) {
        let output = str.replace(/-/g, '+').replace(/_/g, '/');
        switch (output.length % 4) {
            case 0:
                break;
            case 2:
                output += '==';
                break;
            case 3:
                output += '=';
                break;
            default:
                throw 'Illegal base64url string!';
        }

        return window.atob(output);
    }
}

The jsrsasign library is used to validate the token signature and is added to the HTML file as a script reference.

<!doctype html>
<html>
<head>
    <base href="./">
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>ASP.NET Core 1.0 Angular IdentityServer4 Client</title>
    <meta http-equiv="content-type" content="text/html; charset=utf-8" />
	
	<script src="assets/jsrsasign.min.js"></script>
</head>
<body>
    <my-app>Loading...</my-app>
</body>
</html>

Once logged into the application, the access_token is added to the header of each request and sent to the resource server or the required APIs on the OpenIddict server.

 private setHeaders() {

        console.log('setHeaders started');

        this.headers = new Headers();
        this.headers.append('Content-Type', 'application/json');
        this.headers.append('Accept', 'application/json');
        this.headers.append('Cache-Control', 'no-cache');

        let token = this._securityService.GetToken();
        if (token !== '') {
            let tokenValue = 'Bearer ' + token;
            console.log('tokenValue:' + tokenValue);
            this.headers.append('Authorization', tokenValue);
        }
    }

ASP.NET Core Resource Server API

The resource server provides an API protected by security policies, dataEventRecordsUser and dataEventRecordsAdmin.

using AspNet5SQLite.Model;
using AspNet5SQLite.Repositories;

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace AspNet5SQLite.Controllers
{
    [Authorize]
    [Route("api/[controller]")]
    public class DataEventRecordsController : Controller
    {
        private readonly IDataEventRecordRepository _dataEventRecordRepository;

        public DataEventRecordsController(IDataEventRecordRepository dataEventRecordRepository)
        {
            _dataEventRecordRepository = dataEventRecordRepository;
        }

        [Authorize("dataEventRecordsUser")]
        [HttpGet]
        public IActionResult Get()
        {
            return Ok(_dataEventRecordRepository.GetAll());
        }

        [Authorize("dataEventRecordsAdmin")]
        [HttpGet("{id}")]
        public IActionResult Get(long id)
        {
            return Ok(_dataEventRecordRepository.Get(id));
        }

        [Authorize("dataEventRecordsAdmin")]
        [HttpPost]
        public void Post([FromBody]DataEventRecord value)
        {
            _dataEventRecordRepository.Post(value);
        }

        [Authorize("dataEventRecordsAdmin")]
        [HttpPut("{id}")]
        public void Put(long id, [FromBody]DataEventRecord value)
        {
            _dataEventRecordRepository.Put(id, value);
        }

        [Authorize("dataEventRecordsAdmin")]
        [HttpDelete("{id}")]
        public void Delete(long id)
        {
            _dataEventRecordRepository.Delete(id);
        }
    }
}

The policies are implemented in the Startup class using the role claims dataEventRecords.user and dataEventRecords.admin, and the scope dataEventRecords.

var guestPolicy = new AuthorizationPolicyBuilder()
	.RequireAuthenticatedUser()
	.RequireClaim("scope", "dataEventRecords")
	.Build();

services.AddAuthorization(options =>
{
	options.AddPolicy("dataEventRecordsAdmin", policyAdmin =>
	{
		policyAdmin.RequireClaim("role", "dataEventRecords.admin");
	});
	options.AddPolicy("dataEventRecordsUser", policyUser =>
	{
		policyUser.RequireClaim("role",  "dataEventRecords.user");
	});

});

Jwt Bearer Authentication is used to validate the API HTTP requests.

JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
JwtSecurityTokenHandler.DefaultOutboundClaimTypeMap.Clear();
			
app.UseJwtBearerAuthentication(new JwtBearerOptions
{
	Authority = "https://localhost:44319/",
	Audience = "dataEventRecords",
	RequireHttpsMetadata = true,
	TokenValidationParameters = new TokenValidationParameters
	{
		NameClaimType = OpenIdConnectConstants.Claims.Subject,
		RoleClaimType = OpenIdConnectConstants.Claims.Role
	}
});

Running the application

When the application is started, all 3 applications are run, using the Visual Studio 2017 multiple project start option.

After the user clicks the login button, the user is redirected to the OpenIddict server to login.

After a successful login, the user is redirected back to the Angular application.

Links:

https://github.com/openiddict/openiddict-core

http://kevinchalet.com/2016/07/13/creating-your-own-openid-connect-server-with-asos-implementing-the-authorization-code-and-implicit-flows/

https://github.com/openiddict/openiddict-core/issues/49

https://github.com/openiddict/openiddict-samples

https://blogs.msdn.microsoft.com/webdev/2017/01/23/asp-net-core-authentication-with-identityserver4/

https://blogs.msdn.microsoft.com/webdev/2016/10/27/bearer-token-authentication-in-asp-net-core/

https://blogs.msdn.microsoft.com/webdev/2017/04/06/jwt-validation-and-authorization-in-asp-net-core/

https://jwt.io/

https://www.scottbrady91.com/OpenID-Connect/OpenID-Connect-Flows



Andrew Lock: Creating and editing solution files with the .NET CLI

Creating and editing solution files with the .NET CLI

With the release of Visual Studio 2017 and the RTM .NET Core tooling, the .NET command line has gone through a transformation. The project.json format is no more, and instead we have returned back to .csproj files. It's not your grand-daddy's .csproj however - the new .csproj is far leaner than previous MSBuild files, and massively reduces the reliance on GUIDs.

One of the biggest reasons for this is the need to make the files easily editable by hand. With .NET Core being cross platform, relying on Visual Studio to edit the files correctly with the magic GUIDs is no longer acceptable.

As well as the switch from project.json to .csproj, the global.json file is no more - instead we're back to .sln files. These are primarily for when you're working with Visual Studio, and they're not strictly necessary for building .NET Core applications. In some cases though, if you're working in a cross-platform environment, you may need to edit .sln files on macOS/Linux.

Unfortunately, GUIDs in .sln files have survived the great .NET Core purge of 2017, so editing the files by hand isn't particularly fun. For example, the following .sln file contains two projects - a source code project and a test project:

Microsoft Visual Studio Solution File, Format Version 12.00  
# Visual Studio 15
VisualStudioVersion = 15.0.26124.0  
MinimumVisualStudioVersion = 15.0.26124.0  
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "src", "src", "{2C14C847-9839-4C69-A5A0-C95D64DAECF2}"  
EndProject  
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "CliApp", "src\CliApp\CliApp.csproj", "{D4DDD205-C160-4179-B8CF-B98E5066A187}"  
EndProject  
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "test", "test", "{08F7408C-CA01-4495-A30C-F16F3FCBFDF2}"  
EndProject  
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "CliAppTests", "test\CliAppTests\CliAppTests.csproj", "{4283F8CC-9575-48E5-AD4C-B628DB5D6301}"  
EndProject  
Global  
    GlobalSection(SolutionConfigurationPlatforms) = preSolution
        Debug|Any CPU = Debug|Any CPU
        Debug|x64 = Debug|x64
        Debug|x86 = Debug|x86
        Release|Any CPU = Release|Any CPU
        Release|x64 = Release|x64
        Release|x86 = Release|x86
    EndGlobalSection
    GlobalSection(SolutionProperties) = preSolution
        HideSolutionNode = FALSE
    EndGlobalSection
    GlobalSection(ProjectConfigurationPlatforms) = postSolution
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Debug|Any CPU.Build.0 = Debug|Any CPU
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Debug|x64.ActiveCfg = Debug|x64
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Debug|x64.Build.0 = Debug|x64
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Debug|x86.ActiveCfg = Debug|x86
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Debug|x86.Build.0 = Debug|x86
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Release|Any CPU.ActiveCfg = Release|Any CPU
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Release|Any CPU.Build.0 = Release|Any CPU
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Release|x64.ActiveCfg = Release|x64
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Release|x64.Build.0 = Release|x64
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Release|x86.ActiveCfg = Release|x86
        {D4DDD205-C160-4179-B8CF-B98E5066A187}.Release|x86.Build.0 = Release|x86
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Debug|Any CPU.Build.0 = Debug|Any CPU
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Debug|x64.ActiveCfg = Debug|x64
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Debug|x64.Build.0 = Debug|x64
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Debug|x86.ActiveCfg = Debug|x86
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Debug|x86.Build.0 = Debug|x86
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Release|Any CPU.ActiveCfg = Release|Any CPU
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Release|Any CPU.Build.0 = Release|Any CPU
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Release|x64.ActiveCfg = Release|x64
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Release|x64.Build.0 = Release|x64
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Release|x86.ActiveCfg = Release|x86
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301}.Release|x86.Build.0 = Release|x86
    EndGlobalSection
    GlobalSection(NestedProjects) = preSolution
        {D4DDD205-C160-4179-B8CF-B98E5066A187} = {2C14C847-9839-4C69-A5A0-C95D64DAECF2}
        {4283F8CC-9575-48E5-AD4C-B628DB5D6301} = {08F7408C-CA01-4495-A30C-F16F3FCBFDF2}
    EndGlobalSection
EndGlobal

A bit overwhelming, right? Luckily, the .NET Core command line provides a number of commands for creating and editing these files, so you don't have to dive into them with a text editor directly.

Creating a new solution file

In this example, I'll assume you've already created a couple of projects. You can use dotnet new to achieve this, whether you're creating a command line app, web app, library or test project.

You can also create your own dotnet new templates using new experimental features that should be available in stable form in .NET Core 2.0. You can read about these features here.

You can create a new solution file in the current directory using:

dotnet new sln

You can also provide an optional name for the .sln file using --name filename; otherwise, it will have the same name as the current folder.

$ dotnet new sln --name test
Content generation time: 20.8484 ms  
The template "Solution File" created successfully.  

This will create a new .sln file in the current folder. The solution file currently doesn't have any associated projects, but defines a number of build configurations. The command above creates the following file:

Microsoft Visual Studio Solution File, Format Version 12.00  
# Visual Studio 15
VisualStudioVersion = 15.0.26124.0  
MinimumVisualStudioVersion = 15.0.26124.0  
Global  
    GlobalSection(SolutionConfigurationPlatforms) = preSolution
        Debug|Any CPU = Debug|Any CPU
        Debug|x64 = Debug|x64
        Debug|x86 = Debug|x86
        Release|Any CPU = Release|Any CPU
        Release|x64 = Release|x64
        Release|x86 = Release|x86
    EndGlobalSection
    GlobalSection(SolutionProperties) = preSolution
        HideSolutionNode = FALSE
    EndGlobalSection
EndGlobal  

Adding a project to a solution file

Once you have a solution file, you can add a project to it using the sln add command, and provide the path to the project's .csproj file. This will add the project to an existing solution file in the current folder. The path to the project can be absolute or relative, but it will be added as a relative path in the .sln file.

dotnet sln add <path-to-project.csproj>

For example, to add a project located at src/CliApp/CliApp.csproj, when you have a single solution file in your current directory, you can use the following:

$ dotnet sln add "src\CliApp\CliApp.csproj"
Project `src\CliApp\CliApp.csproj` added to the solution.  

After running this, you'll see your project has been added to the .sln file, along with a src solution folder:

Microsoft Visual Studio Solution File, Format Version 12.00  
# Visual Studio 15
VisualStudioVersion = 15.0.26124.0  
MinimumVisualStudioVersion = 15.0.26124.0  
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "src", "src", "{FFEC406A-FBFB-4737-8C32-1CF34FAF2D6F}"  
EndProject  
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "CliApp", "src\CliApp\CliApp.csproj", "{92B636D5-2C14-4445-B8C1-BBF93A03FA5D}"  
EndProject  
Global  
    GlobalSection(SolutionConfigurationPlatforms) = preSolution
        Debug|Any CPU = Debug|Any CPU
        Debug|x64 = Debug|x64
        Debug|x86 = Debug|x86
        Release|Any CPU = Release|Any CPU
        Release|x64 = Release|x64
        Release|x86 = Release|x86
    EndGlobalSection
    GlobalSection(SolutionProperties) = preSolution
        HideSolutionNode = FALSE
    EndGlobalSection
    GlobalSection(ProjectConfigurationPlatforms) = postSolution
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Debug|Any CPU.Build.0 = Debug|Any CPU
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Debug|x64.ActiveCfg = Debug|x64
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Debug|x64.Build.0 = Debug|x64
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Debug|x86.ActiveCfg = Debug|x86
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Debug|x86.Build.0 = Debug|x86
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Release|Any CPU.ActiveCfg = Release|Any CPU
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Release|Any CPU.Build.0 = Release|Any CPU
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Release|x64.ActiveCfg = Release|x64
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Release|x64.Build.0 = Release|x64
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Release|x86.ActiveCfg = Release|x86
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D}.Release|x86.Build.0 = Release|x86
    EndGlobalSection
    GlobalSection(NestedProjects) = preSolution
        {92B636D5-2C14-4445-B8C1-BBF93A03FA5D} = {FFEC406A-FBFB-4737-8C32-1CF34FAF2D6F}
    EndGlobalSection
EndGlobal  

Adding a project to a specific solution file

If you have multiple solution files in the current directory, then trying to run the previous command will give you an error similar to the following:

$ dotnet sln add "test\CliAppTests\CliAppTests.csproj"
Found more than one solution file in C:\Users\Sock\Repos\andrewlock\example\. Please specify which one to use.  

Instead, you must specify the name of the solution you wish to amend, by placing the path to the solution after sln:

dotnet sln <path-to-solution.sln> add <path-to-project.csproj>

For example,

dotnet sln "example.sln" add "test\CliAppTests\CliAppTests.csproj"

Note, when I first ran this command, I incorrectly placed the add parameter before the solution name, using dotnet sln add <solution> <project>. Unfortunately, this currently gives you a slightly confusing error (tracked here): Unhandled Exception: Microsoft.Build.Exceptions.InvalidProjectFileException: The project file could not be loaded. Data at the root level is invalid. Line 2, position 1.

Removing a project from a solution file

Removing a project from your solution is the mirror of adding one - just be aware that, as before, you need to use the path to the .csproj file rather than just the name of the project folder, and if you have multiple .sln files in the current folder then you need to specify which one to modify:

dotnet sln remove <path-to-project.csproj>

or

dotnet sln <path-to-solution.sln> remove <path-to-project.csproj>

This will remove the specified project, along with any associated sub-folder nodes:

$ dotnet sln remove src/CliApp/CliApp.csproj
Project reference `src\CliApp\CliApp.csproj` removed.  

Listing the projects in a solution file

The final command exposed by the .NET CLI is the ability to list the projects in the solution file, instead of having to open it up and wade through the litany of GUIDs:

dotnet sln list

Note that this command not only lists the project names (the paths to the .csproj files), it also lists the sub folders in which they reside. For example, the following solution contains two projects, one inside the src folder, one inside the test folder:

Creating and editing solution files with the .NET CLI

Listing the projects in this solution gives the following:

$ dotnet sln list
Project reference(s)  
--------------------
test  
test\CliAppTests\CliAppTests.csproj  
src  
src\CliApp\CliApp.csproj  

Summary

With .NET Core, cross-platform development is a genuine first-class citizen. With the .NET CLI, you can now manage your .sln files without needing to use Visual Studio or to mess with GUIDs in a text editor. This lets you create, add, remove and list projects. To see all the options available to you, run dotnet sln --help.


Andrew Lock: Getting started with ASP.NET Core

Getting started with ASP.NET Core

In February 2017, the Manning Early Access Program (MEAP) started for the ASP.NET Core book I am currently writing - ASP.NET Core in Action. This post gives you a sample of what you can find in the book. If you like what you see, please take a look - for now you can even get a 37% discount with the code lockaspdotnet!

The Manning Early Access Program provides you with full access to books as they are written. You get the chapters as they are produced, plus the finished eBook as soon as it’s ready, and the paper book long before it's in bookstores. You can also interact with the author (me!) on the forums to provide feedback as the book is being written.

When to choose ASP.NET Core

I’m going to assume that you’ve a general grasp of what ASP.NET Core is and how it was designed, but the question remains – should you use it? Microsoft is heavily promoting ASP.NET Core as their web framework of choice for the foreseeable future, but switching to or learning a new web stack is a big task for any developer or company. This article describes some of the highlights of ASP.NET Core and gives advice on the type of applications to build with it, as well as the type of applications to avoid.

What type of applications can you build?

ASP.NET Core provides a generalised web framework that can be used in a wide variety of applications. It can most obviously be used for building rich, dynamic websites, whether they are e-commerce sites, content-based websites, or n-tier applications – much the same as the previous version of ASP.NET.

A small number of third-party helper libraries are available for building this sort of complex application, but many are under active development. Many developers are working to port their libraries to work with ASP.NET Core – but it’ll take time for more to become available. For example, the open-source content management system (CMS) Orchard (figure 1) is currently available as an alpha version of Orchard 2, running on ASP.NET Core and .NET Core.

Getting started with ASP.NET Core Figure 1. The ASP.NET Community blogs website (https://weblogs.asp.net) is built using the Orchard CMS. Orchard 2 is available as a pre-alpha version for ASP.NET Core development

Traditional, server-side rendered web applications are the bread and butter of ASP.NET development, both with the previous version of ASP.NET and ASP.NET Core. Additionally, single page applications (SPAs), which use a client-side framework that talks to a REST server, are easy to create with ASP.NET Core. Whether you’re using Angular, Ember, React, or some other client-side framework, it’s easy to create an ASP.NET Core application to act as the server-side API.

DEFINITION REST stands for Representational State Transfer. RESTful applications typically use lightweight and stateless HTTP calls to read, post (create/update), and delete data.

ASP.NET Core isn’t restricted to creating RESTful services. It’s also easy to create a web-service or remote procedure call (RPC)-style service for your application, depending on your requirements, as shown in figure 2. In the simplest case, your application might expose only a single endpoint, narrowing its scope to become a microservice. ASP.NET Core is perfectly designed for building simple services thanks to its cross-platform support and lightweight design.

Getting started with ASP.NET Core Figure 2. ASP.NET Core can act as the server side application for a variety of different clients. It can serve HTML pages for traditional web applications, act as a REST API for client-side SPA applications, or act as an ad-hoc RPC service for client applications.

You must consider multiple factors when choosing a platform, not all of which are technical. One example is the level of support you can expect to receive from the creators. For some organizations, this can be one of the main obstacles to adopting open-source software. Luckily, Microsoft has pledged to provide full support for each major and minor point release of the ASP.NET Core framework for three years. Furthermore, as all development takes place in the open, you can sometimes get answers to your questions from the general community, as well as from Microsoft directly.

There are two primary dimensions to consider when deciding whether to use ASP.NET Core: whether you’re already a .NET developer, and whether you’re creating a new application or looking to convert an existing one.

If you’re new to .NET development

If you’re new to .NET development, and are considering ASP.NET Core, welcome! Microsoft is pushing ASP.NET Core as an attractive option for web development beginners, but taking .NET cross-platform means it’s competing with many other frameworks on their own turf. ASP.NET Core has many selling points when compared to other cross-platform web frameworks:

  • It’s a modern but stable web framework.
  • It uses familiar design patterns and paradigms.
  • C# is a great language.
  • You can build and run on any platform.

ASP.NET Core is a re-imagining of the ASP.NET framework, built with modern software design principles on top of the new .NET Core platform. .NET Core is new in one sense, but has drawn significantly from the mature, stable, and reliable .NET Framework, which has been used for well over a decade. You can rest easy choosing ASP.NET Core and .NET Core because you’ll be getting a dependable platform, as well as a fully featured web framework.

Many of the web frameworks available today use similar, well-established design patterns, and ASP.NET Core is no different. For example, Ruby on Rails is known for its use of the Model-View-Controller (MVC) pattern; node.js is known for the way it processes requests using small discrete modules (called a pipeline); and dependency injection is found in a wide variety of frameworks. If these techniques are familiar, it’s easy to transfer them across to ASP.NET Core; if they’re new to you, you can look forward to using industry best practices!

The primary language of .NET development and ASP.NET Core is C#. This language has a huge following, and for good reason! As an object-oriented C-based language it provides a sense of familiarity to developers coming from C, Java, and many other languages. In addition, it has many powerful features, such as Language Integrated Query (LINQ), closures, and asynchronous programming constructs. The C# language is also designed in the open on GitHub, as is Microsoft’s C# compiler, code-named Roslyn¹.

NOTE If you wish to learn C#, I recommend picking up C# in Depth by Jon Skeet, also published by Manning (ISBN 9781617291340).

One of the major selling points of ASP.NET Core and .NET Core is the ability to develop and run on any platform. Whether you’re using a Mac, Windows, or Linux, you can run the same ASP.NET Core apps and develop across multiple environments. As a Linux user, a wide range of distributions are supported (RHEL, Ubuntu, Debian, CentOS, Fedora and openSUSE, to name a few), and you can be confident that your operating system of choice will be a viable option. Work is underway to enable ASP.NET Core to run on the tiny Alpine distribution, for truly compact deployments to containers.

Built with containers in mind

Traditionally, web applications were deployed directly to a server, or in more recent years, to a virtual machine. Virtual machines allow operating systems to be installed in a layer of virtual hardware, abstracting away the underlying hardware. This has advantages over direct installation, like easy maintenance, deployment, and recovery. Unfortunately, they’re also heavy on file size and resource utilisation.

This is where containers come in. Containers are far more lightweight and don’t have the overhead of virtual machines. They’re built in a series of layers and don’t require you to boot a new operating system when starting. That means they’re quick to start and great for quick provisioning. Containers and Docker are quickly becoming the go-to platform for building large, scalable systems.

Containers have never been an attractive option for ASP.NET applications, but with ASP.NET Core, .NET Core and Docker for Windows, it’s all changing. A lightweight ASP.NET Core application running on the cross-platform .NET Core framework is perfect for thin container deployments.

As well as running on each platform, one of the selling points of .NET is the ability to only need to write and compile once. Your application is compiled to Intermediate Language (IL) code, which is a platform independent format. If a target system has the .NET Core platform installed, you can run compiled IL from any platform. That means you can, for example, develop on a MacBook or a Windows machine, and deploy the exact same files to your production Linux machines. This compile-once run-anywhere promise has finally been realized with ASP.NET Core and .NET Core.

If you’re a .NET Framework developer creating a new application

If you’re currently a .NET developer, then the choice of whether to invest in ASP.NET Core for new applications is a question of timing. Microsoft has pledged to provide continued support for the older ASP.NET framework, but it’s clear their focus is primarily on the newer ASP.NET Core framework. In the long-term, if you wish to take advantage of new features and capabilities, it’s likely that ASP.NET Core will be the route to take.

Whether ASP.NET Core is right for you now largely depends on your requirements, and your comfort with using products that are early in their lifecycle. The main benefits over the previous ASP.NET framework are:

  • Cross-platform development and deployment
  • A focus on performance as a feature
  • A simplified hosting model
  • Faster releases
  • Open-source
  • Modular features

As a .NET developer, if you aren’t using any Windows-specific constructs, such as the Registry, then the ability to build and deploy applications cross-platform opens the door to a whole new avenue of applications. Take advantage of cheaper Linux VM hosting in the cloud; use Docker containers for repeatable continuous integration; or write .NET code on your MacBook without needing to run a Windows virtual machine. ASP.NET Core in combination with .NET Core makes all this possible.

It’s important to be aware of the limitations of cross-platform applications - not all of the .NET Framework APIs are available in .NET Core. It’s likely that most of the APIs you need will make their way to .NET Core over time, but it’s an important point to note.

The hosting model for the previous ASP.NET framework was a relatively complex one, relying on Windows Internet Information Services (IIS) to provide the web server hosting. In cross-platform environments this kind of symbiotic relationship isn’t possible, and an alternative hosting model has been adopted, which separates web applications from the underlying host. This opportunity has led to the development of Kestrel, a fast cross-platform HTTP server on which ASP.NET Core can run.

Instead of the previous design, whereby IIS calls into specific points of your application, ASP.NET Core applications are a form of console application, which self-hosts a web server and handles requests directly, as shown in figure 3. This hosting model is conceptually much simpler, and allows you to test and debug your applications from the command line, though it doesn’t remove the need to run IIS (or equivalent) in production.

Getting started with ASP.NET Core Figure 3. The difference in hosting models between ASP.NET (top) and ASP.NET Core (bottom). With the previous version of ASP.NET, IIS is tightly coupled to the application, calling into specific exposed methods for different stages of a request. The hosting model in ASP.NET Core is simpler; IIS hands off the request to a self-hosted web server in the ASP.NET Core application and receives the response, but has no deeper knowledge of the application.
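To make the self-hosting model concrete, here is a minimal sketch (my own illustration, not an excerpt from the book) of a console application that starts the Kestrel web server and answers every request itself:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;

public class Program
{
    public static void Main(string[] args)
    {
        // A plain console app: it builds and runs its own web server (Kestrel)
        // rather than being loaded into IIS
        var host = new WebHostBuilder()
            .UseKestrel()
            .Configure(app => app.Run(ctx => ctx.Response.WriteAsync("Hello from Kestrel")))
            .Build();

        host.Run();
    }
}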

Changing the hosting model to use a built-in HTTP web server has created another opportunity. Performance has been a sore point for ASP.NET applications in the past. It’s possible to build highly performant applications – Stack Overflow (http://stackoverflow.com) is testament to that – but the web framework itself isn’t designed with performance as a priority, and can end up being somewhat of an obstacle.

To be competitive cross-platform, the ASP.NET team recently focused on making the Kestrel HTTP server as fast as possible. TechEmpower (www.techempower.com/benchmarks) have been running benchmarks on a whole range of web frameworks from various languages for several years now. In round thirteen of the plaintext benchmarks, TechEmpower announced that ASP.NET Core with Kestrel was now the fastest mainstream fullstack web framework, and among the top ten fastest of all frameworks!

Web servers – naming things is hard

One of the difficult aspects of programming for the web these days is the confusing array of, often conflicting, terminology. For example, if you’ve used IIS in the past you may have described it as a web server, or possibly a web host. Conversely, if you’ve ever built an application using node.js, you may have also referred to that application as a web server. Alternatively, you may have called the physical machines on which your application runs a web server!

Similarly, you may have built an application for the Internet and called it a website or a web application, probably somewhat arbitrarily based on the level of dynamism it displayed.

In this article when I say “web server” in the context of ASP.NET Core, I’m referring to the HTTP server that runs as a part of your ASP.NET Core application. By default, this is the Kestrel web server, but it’s not a requirement. It’d be possible to write a replacement web server and substitute it for Kestrel if you desired.

The web server is responsible for receiving HTTP requests and generating responses. In the previous version of ASP.NET, IIS took this role, but in ASP.NET Core, Kestrel is the web server.

I’ll only use the term web application for describing ASP.NET Core applications, regardless of whether they contain only static content or are completely dynamic. Either way, they’re applications that are accessed via the web, and that name seems appropriate!

Many of the performance improvements made to Kestrel didn’t come from the ASP.NET team themselves, but from contributors to the open-source project on GitHub. Developing in the open means you should typically see fixes and features make their way to production faster than you would for the previous version of ASP.NET, which was dependent on the .NET Framework, and, as such, had long release cycles.

In contrast ASP.NET Core is completely decoupled from the underlying .NET platform. The entire web framework is implemented as modular NuGet packages, which can be versioned and updated independently (from the underlying platform on which they are built).

NOTE NuGet is a package manager for .NET that enables importing libraries into your projects. It’s equivalent to Ruby Gems, npm for JavaScript, or Maven for Java.

To enable this, ASP.NET Core was designed to be highly modular, with as little coupling to other features as possible. This modularity lends itself to a pay-for-play approach to dependencies, whereby you start from a bare-bones application and only add the additional libraries you require, as opposed to the kitchen-sink approach of previous ASP.NET applications. Even MVC is an optional package! But don’t worry, this approach doesn’t mean that ASP.NET Core is lacking in features; it just means you need to opt in to those features. Some of the key infrastructure improvements include:

  • Middleware “pipeline” for defining your application’s behaviour
  • Built-in support for dependency injection
  • Combined UI (MVC) and API (Web API) infrastructure
  • Highly extensible configuration system
  • Scalable for cloud platforms by default using asynchronous programming

Each of these features was possible in the previous version of ASP.NET, but required a fair amount of additional work to set up. With ASP.NET Core, they’re all there, ready and waiting to be connected!
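As a rough sketch of this pay-for-play approach (my own illustration; IGreetingService and GreetingService are hypothetical application types, not anything from the book), a minimal Startup class opts in to just the features it needs:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // MVC is itself an optional package - opt in to it explicitly
        services.AddMvc();

        // Built-in dependency injection; IGreetingService is a hypothetical app service
        services.AddTransient<IGreetingService, GreetingService>();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Each piece of middleware in the pipeline is a separate, optional package
        app.UseStaticFiles();
        app.UseMvcWithDefaultRoute();
    }
}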

Microsoft fully supports ASP.NET Core, and if you’ve a new system you wish to build, there’s no significant reason not to. The largest obstacle you’re likely to come across is a third-party library holding you back, either because they only support older ASP.NET features, or they haven’t converted to work with .NET Core.

Converting an existing ASP.NET application to ASP.NET Core

In contrast to new applications, an existing application is presumably already providing value, and there should always be a tangible benefit to performing what may amount to a significant rewrite when converting from ASP.NET to ASP.NET Core. The advantages of adopting ASP.NET Core are much the same as for new applications: cross-platform deployment, modular features, and a focus on performance. Determining whether the benefits are sufficient depends largely on the particulars of your application, but some characteristics are clear indicators against conversion:

  • Your application uses ASP.NET Web Forms
  • Your application is built on Web Pages, WCF, SignalR, or VB.NET
  • Your application is large, with many “advanced” MVC features

If you’ve an ASP.NET Web Forms application, attempting to convert it to ASP.NET Core isn’t advisable. Web Forms is inextricably tied to System.Web.dll, which will likely never be available in ASP.NET Core. Converting an application to ASP.NET Core would effectively involve rewriting the application from scratch, not only shifting frameworks but also shifting design paradigms. A better approach would be to slowly introduce Web API concepts and try to reduce the reliance on legacy Web Forms constructs, such as ViewData. Numerous resources are online to help you with this approach, like the www.asp.net/web-api website.

Similarly, if your application makes heavy use of Web Pages or SignalR, then now may not be the time to consider an upgrade. These features are under active development (currently under the monikers “Controller-less Razor Pages” and “SignalR 2.0”), but haven’t been released as part of the ASP.NET Core framework. Similarly, VB.NET is pegged for future support, but currently isn’t part of the framework.

Windows Communication Foundation (WCF) is also currently not supported, but it’s possible to consume WCF services by jumping through some slightly obscure hoops. Currently there’s no way to host a WCF service from an ASP.NET Core application; if you need the features WCF provides, and can’t use a more conventional REST service, then ASP.NET Core is probably best avoided.

If your application is complex and makes use of the previous MVC extensibility points or message handlers, then porting your application to ASP.NET Core could prove complex. ASP.NET Core is built with many similar features to the previous version of ASP.NET MVC, but the underlying architecture is different. Several previous features don’t have direct replacements, and will require re-thinking.

The larger the application, the greater the difficulty you’re likely to have in converting to ASP.NET Core. Microsoft suggests that porting an application from ASP.NET MVC to ASP.NET Core is at least as big a rewrite as porting from ASP.NET Web Forms to ASP.NET MVC. If that doesn’t scare you then nothing will!

When should you port an application to ASP.NET Core? As I’ve discussed, the best opportunity for getting started is on small, green-field, new projects instead of existing applications. That said, if the application in question is small, with little custom behaviour, then porting might be a viable option. Small implies reduced risk, and probably reduced complexity. If your application consists primarily of MVC or Web API controllers and associated Razor views, then moving to ASP.NET Core may be feasible.

Summary

Hopefully this article has kindled your interest in using ASP.NET Core for building your new applications. For more information, download the free first chapter of ASP.NET Core in Action and see this Slideshare presentation. Don’t forget to save 37% with code lockaspdotnet at manning.com.

  1. C# language and .NET Compiler Platform GitHub source code repository (https://github.com/dotnet/roslyn)


Anuraj Parameswaran: Working with Azure Blob storage in ASP.NET Core

This post is about uploading and downloading images from Azure Blob storage using ASP.NET Core. First you need to create a blob storage account and then a container which you’ll use to store all the images. You can do this from Azure portal. You need to select Storage > Storage Account - Blob, file, Table, Queue > Create a Storage Account.


Andrew Lock: Adding favicons to your ASP.NET Core website with Real Favicon Generator

Adding favicons to your ASP.NET Core website with Real Favicon Generator

In this post I will show how you can add favicons to your ASP.NET Core MVC application, using the site realfavicongenerator.net.

The days of being able to add a simple favicon.ico to the root of your new web app are long gone. There are so many different browsers, platforms and devices, each of which require slightly different sizes and semantics for their favicons, that figuring out what you actually need to implement can be overwhelming.

Luckily, realfavicongenerator.net does most of the hard work for you. All you need is an initial icon to work with, and it will generate all the image files, and even the markup, for you.

I'll walk through the process of creating a favicon and configuring your ASP.NET Core MVC application to serve it. I'll assume you're starting from an existing ASP.NET Core application that uses a base layout view, _Layout.cshtml, and is already configured to serve static files using the StaticFileMiddleware. I'll mostly stick to the defaults for favicons here, but the realfavicongenerator.net site does a great job of explaining all the favicon requirements and options, so feel free to play and find something that works best for you.

Thanks to @pbernard, author of RealFavIconGenerator, the instructions to add favicons to your ASP.NET Core application can now be found in the final step on http://realfavicongenerator.net itself, so they're always to hand. Enjoy!

Adding favicons to your ASP.NET Core website with Real Favicon Generator

1. Design your favicon

Before you can create all the various required permutations, you'll first need to create your base favicon. This needs to be at least 70×70, but if possible, use a larger version that's at least 260×260. The generator works by scaling your image down to the appropriate sizes, so in order to generate the largest required favicons you need to start big.

For this post I'll be using a random image from pixabay, that I've exported (from the SVG) at 300×300:

Adding favicons to your ASP.NET Core website with Real Favicon Generator

2. Import your design into RealFaviconGenerator

Now that we have an icon to work with, go to realfavicongenerator.net and upload your icon. Click on Select your Favicon picture at the top right of the page, upload your image, and wait for it to finish processing.

Adding favicons to your ASP.NET Core website with Real Favicon Generator

You will now be presented with a breakdown of all the decisions you need to make to support various browsers and devices.

Favicon for iOS

As explained on RealFaviconGenerator, iOS users can pin a site to their home screen, which will then use a version of your favicon as the link image. Generally favicons work well when they contain transparent regions, to give the icon some shape other than a square, but for iOS they have to be solid.

RealFaviconGenerator provides some suggestions on how to generate the appropriate icon here, allowing you to change the background colour to use for transparency, how much padding to add to your icon, whether to generate additional icons, and whether to use an alternate iOS-specific image.

In my case, I chose the basic option of generating a new icon, but using a different background colour to avoid the default black fill it would otherwise have. You can see the before and after of this small change below:

Adding favicons to your ASP.NET Core website with Real Favicon Generator

Favicon for Android

Android Chrome has a similar feature to iOS whereby you can pin a website to your home screen. Android is much more flexible with regard to icon design, so in this case transparent icons can generally be used as-is.

As before however, there are a lot of customisations you can make such as adding a background, generating additional images, or using an android-specific image.

The one required field at this point is the name of your application, but I'd recommend adding a theme colour too - that way the Chrome tab-bar and task-switcher colours will match your website's theme:

Adding favicons to your ASP.NET Core website with Real Favicon Generator

Windows Metro

Next in the OS list are Windows 8 and 10. As with iOS and Android, you can pin a website to your desktop. You can choose the background colour for your tile and optionally replace the image with a white silhouette, which can work well if you have a complex image outline.

Again, you can choose to generate additional lesser used images, or replace the image completely on Windows.

Adding favicons to your ASP.NET Core website with Real Favicon Generator

Safari Pinned Tab

Safari 9 adds the concept of pinned tabs, which are represented by a small monochrome SVG icon. By default, RealFaviconGenerator generates a silhouette of your image, but it can also automatically generate a monochrome image by applying a threshold to your default image:

In my case, the threshold didn't quite produce a decent outline, so I uploaded an alternative image instead which would work better with the automatic thresholding:

Adding favicons to your ASP.NET Core website with Real Favicon Generator

Generator options

Now we're pretty much done, but there's still a bunch of options you can choose to optimise your favicons.

First of all, you can choose the path in your website where you are going to place your favicons. Given the number of icons that are generated, it can be tempting to place them in a subfolder of your application, but it's actually recommended to keep them in the root of your app.

You can choose to add a version string to the generated HTML links (recommended) and set the application name. Finally, you can choose the amount of compression and the scaling algorithms used, along with a preview of the effect on the final images. Generally you'll find you can compress pretty heavily, but choose what works for you.

Generate!

Once all your settings are specified, click the 'Generate' button and let RealFaviconGenerator do its thing! You'll be given a zip file of all your icons, and a snippet of HTML to include in your website.

Note, in a recent update, RealFaviconGenerator reduced the number of favicons that are produced by default from 28 to 9. If you would like the higher-compatibility pack with additional files you can click on 'Get the old package'.

3. Add the favicons to your website

We are going to be serving the favicons as static files, so we can add them inside the web-root folder of our ASP.NET Core application. By default, this is the wwwroot folder in your web project. Simply unzip your downloaded favicon package into this directory:

Adding favicons to your ASP.NET Core website with Real Favicon Generator

Next, we need to add the HTML snippet to the head element of our website. You can do this directly inside _Layout.cshtml if you wish, or you can take the route I favour and create a partial view to encapsulate your favicon links.

Add a new file inside the Views/Shared folder called _Favicons.cshtml and paste the generated favicon HTML snippet in:

<link rel="apple-touch-icon" sizes="180x180" href="/apple-touch-icon.png">  
<link rel="icon" type="image/png" href="/favicon-32x32.png" sizes="32x32">  
<link rel="icon" type="image/png" href="/favicon-16x16.png" sizes="16x16">  
<link rel="manifest" href="/manifest.json">  
<link rel="mask-icon" href="/safari-pinned-tab.svg" color="#5bbad5">  
<meta name="theme-color" content="#00ffff">  

All that's left is to render the partial inside the head tag of _Layout.cshtml:

<!DOCTYPE html>  
<html>  
<head>  
    <!-- Other head elements --> 
    @Html.Partial("_Favicons")
</head>  
<body>  
    <!-- Other body elements --> 
</body>  
</html>  

And that's it! Now your site should have a great set of favicons, no matter which browser or device your users are on.

Adding favicons to your ASP.NET Core website with Real Favicon Generator

Summary

RealFaviconGenerator makes it very easy to generate all the favicons required to support modern browsers and devices, with optional high-compatibility support for older browsers. By following the guidelines on the site, it is trivial to generate a compatible set of icons for your website.

Adding the icons to your site is simple, requiring a few lines of html, and pasting the downloaded pack into your webroot.

If you do use the site, consider donating to the creators, to ensure it stays as up-to-date and useful as ever!


Anuraj Parameswaran: Simple Static Websites using Azure Blob service

This post is about hosting a static website on the Azure Blob service. To enable an online presence, many small businesses will set up a WordPress blog. A fully functional WordPress site is great, however most websites are pretty static and don’t really need all the bells and whistles that come with it. In this post I am talking about an alternative solution which helps you host your static websites in Azure and leverage the CDN and bandwidth capabilities of Azure at less cost.


Dominick Baier: New in IdentityServer4: Events

Well – not really new – but redesigned.

IdentityServer4 has two diagnostics facilities – logging and events. While logging is more like low level “printf” style – events represent higher level information about certain logical operations in IdentityServer (think Windows security event log).

Events are structured data and include event IDs, success/failure information, activity IDs, IP addresses, categories and event-specific details. This makes it easy to query and analyze them and extract useful information that can be used for further processing.

Events work great with event stores like ELK, Seq or Splunk.
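As a sketch of how you could process these events yourself, a custom sink can forward them to the standard logging infrastructure. This is based on my reading of the IEventSink abstraction, so treat the exact types and registration as assumptions and check the docs:

public class LoggerEventSink : IEventSink
{
    private readonly ILogger<LoggerEventSink> _logger;

    public LoggerEventSink(ILogger<LoggerEventSink> logger)
    {
        _logger = logger;
    }

    public Task PersistAsync(Event evt)
    {
        // evt carries the structured data: id, category, name, activity id, details...
        _logger.LogInformation("IdentityServer event {Id} {Category}/{Name}", evt.Id, evt.Category, evt.Name);
        return Task.CompletedTask;
    }
}

// Assumed registration - replaces the default sink via DI
// services.AddTransient<IEventSink, LoggerEventSink>();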


Find more details in our docs.


Filed under: ASP.NET Core, IdentityServer, OAuth, OpenID Connect, Uncategorized, WebAPI


Andrew Lock: Retrieving the path that generated an error with the StatusCodePages Middleware

Retrieving the path that generated an error with the StatusCodePages Middleware

In my previous post, I showed how to use the re-execute features of the StatusCodePagesMiddleware to generate custom error pages for status-code errors. This allows you to easily create custom error pages for common error status codes like 404 or 500.

Retrieving the path that generated an error with the StatusCodePages Middleware

The re-executing approach using UseStatusCodePagesWithReExecute is generally a better approach than using UseStatusCodePagesWithRedirects as it generates the custom error page in the same request that caused it. This allows you to return the correct error code in response to the original request. This is more 'correct' from an HTTP/SEO/semantic point of view, but it also means the context of the original request is maintained when you generate the error.

In this quick post, I show how you can use this context to obtain the original path that triggered the error status code when the middleware pipeline is re-executed.

Setting up the status code pages middleware

I'll start by adding the StatusCodePagesMiddleware as I did in my previous post. I'm using the same UseStatusCodePagesWithReExecute as before, and providing the error status code when the pipeline is re-executed using a statusCode querystring parameter:

public void Configure(IApplicationBuilder app)  
{
    app.UseDeveloperExceptionPage();

    app.UseStatusCodePagesWithReExecute("/Home/Error", "?statusCode={0}");

    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

The corresponding action method that gets invoked is:

public class HomeController : Controller  
{
    public IActionResult Error(int? statusCode = null)
    {
        if (statusCode.HasValue)
        {
            if (statusCode == 404 || statusCode == 500)
            {
                var viewName = statusCode.ToString();
                return View(viewName);
            }
        }
        return View();
    }
}

This gives me customised error pages for 404 and 500 status codes:

Retrieving the path that generated an error with the StatusCodePages Middleware

Retrieving the original error path

This technique lets you customise the response returned when a URL generates an error status code, but on occasion you may want to know the original path that actually caused the error. From the flow diagram at the top of the page, I want to know the /Home/Problem URL when the HomeController.Error action is executing.

Luckily, the StatusCodePagesMiddleware stores a request-feature with the original path on the HttpContext. You can access it from the Features property:

public class HomeController : Controller  
{
    public IActionResult Error(int? statusCode = null)
    {
        var feature = HttpContext.Features.Get<IStatusCodeReExecuteFeature>();
        ViewData["ErrorUrl"] = feature?.OriginalPath;

        if (statusCode.HasValue)
        {
            if (statusCode == 404 || statusCode == 500)
            {
                var viewName = statusCode.ToString();
                return View(viewName);
            }
        }
        return View();
    }
}

Adding this to the Error method means you can display or log the path, depending on your needs:

Retrieving the path that generated an error with the StatusCodePages Middleware

Note that I've used the null propagator syntax ?. to retrieve the path, as the feature will only be added if the StatusCodePagesMiddleware is re-executing the pipeline. This will avoid any null reference exceptions if the action is executed without using the StatusCodePagesMiddleware, for example by directly requesting /Home/Error?statusCode=404:

Retrieving the path that generated an error with the StatusCodePages Middleware

Retrieving additional information

The StatusCodePagesMiddleware sets an IStatusCodeReExecuteFeature on the HttpContext when it re-executes the pipeline. This interface exposes two properties: the original path, which you have already seen, and the original path base:

public interface IStatusCodeReExecuteFeature  
{
    string OriginalPathBase { get; set; }
    string OriginalPath { get; set; }
}

The one property it doesn't (currently) expose is the original querystring. However, the concrete type that is actually set by the middleware is the StatusCodeReExecuteFeature. This contains an additional property, OriginalQueryString:

public class StatusCodeReExecuteFeature : IStatusCodeReExecuteFeature  
{
    public string OriginalPathBase { get; set; }
    public string OriginalPath { get; set; }
    public string OriginalQueryString { get; set; }
}

If you're willing to add some coupling to this implementation in your code, you can access these properties by safely casting the IStatusCodeReExecuteFeature to a StatusCodeReExecuteFeature. For example:

var feature = HttpContext.Features.Get<IStatusCodeReExecuteFeature>();  
var reExecuteFeature = feature as StatusCodeReExecuteFeature;  
ViewData["ErrorPathBase"] = reExecuteFeature?.OriginalPathBase;  
ViewData["ErrorQuerystring"] = reExecuteFeature?.OriginalQueryString;  

This lets you display/log the complete path that gave you the error, including the querystring:

Retrieving the path that generated an error with the StatusCodePages Middleware

Note: If you look at the dev branch in the Diagnostics GitHub repo, you'll notice that the interface actually does contain OriginalQueryString. This will be coming with .NET Core 2.0 / ASP.NET Core 2.0, as it is a breaking change. It'll make the above scenario that little bit easier, though.

Summary

The StatusCodePagesMiddleware is just one of the pieces needed to provide graceful handling of errors in your application. The re-execute approach is a great way to include custom layouts in your application, but it can obscure the origin of the error. Obviously, logging the error where it is generated provides the best context, but the IStatusCodeReExecuteFeature can be useful for easily retrieving the source of the error when generating the final response.


Damien Bowden: .NET Core, ASP.NET Core logging with NLog and PostgreSQL

This article shows how .NET Core or ASP.NET Core applications can log to a PostgreSQL database using NLog.

Code: https://github.com/damienbod/AspNetCoreNlog

Other posts in this series:

  1. ASP.NET Core logging with NLog and Microsoft SQL Server
  2. ASP.NET Core logging with NLog and Elasticsearch
  3. Settings the NLog database connection string in the ASP.NET Core appsettings.json
  4. .NET Core logging to MySQL using NLog
  5. .NET Core logging with NLog and PostgreSQL

Setting up PostgreSQL

pgAdmin can be used to setup the PostgreSQL database which is used to save the logs. A log database was created for this demo, which matches the connection string in the nlog.config file.

Using pgAdmin, open a query edit view and execute the following script to create a table in the log database.

CREATE TABLE logs
( 
    Id serial primary key,
    Application character varying(100) NULL,
    Logged text,
    Level character varying(100) NULL,
    Message character varying(8000) NULL,
    Logger character varying(8000) NULL, 
    Callsite character varying(8000) NULL, 
    Exception character varying(8000) NULL
)

At present it is not possible to log a date property to PostgreSQL using NLog, only text fields are supported. A GitHub issue exists for this here. Due to this, the Logged field is defined as text, and uses the DateTime value when the log is created.

.NET or ASP.NET Core Application

The required packages need to be added to the csproj file. For an ASP.NET Core application, add NLog.Web.AspNetCore and Npgsql; for a .NET Core application, add NLog and Npgsql.

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
    <AssemblyName>ConsoleNLogPostgreSQL</AssemblyName>
    <OutputType>Exe</OutputType>
    <PackageId>ConsoleNLog</PackageId>
    <PackageTargetFallback>$(PackageTargetFallback);dotnet5.6;portable-net45+win8</PackageTargetFallback>
    <GenerateAssemblyConfigurationAttribute>false</GenerateAssemblyConfigurationAttribute>
    <GenerateAssemblyCompanyAttribute>false</GenerateAssemblyCompanyAttribute>
    <GenerateAssemblyProductAttribute>false</GenerateAssemblyProductAttribute>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.Extensions.Configuration.EnvironmentVariables" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Configuration.FileExtensions" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Logging" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Logging.Console" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Options.ConfigurationExtensions" Version="1.1.1" />
    <PackageReference Include="NLog.Web.AspNetCore" Version="4.3.1" />
    <PackageReference Include="Npgsql" Version="3.2.2" />
    <PackageReference Include="System.Data.SqlClient" Version="4.3.0" />
  </ItemGroup>
</Project>

Or use the NuGet package manager in Visual Studio 2017.

The nlog.config file is then set up to log to PostgreSQL using the database target, with the dbProvider configured for Npgsql and the connectionString for the required instance of PostgreSQL. The commandText must match the table created by the SQL script above. If you add extra properties to the logs, for example from the NLog.Web.AspNetCore package, these also need to be added here.

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      internalLogLevel="Warn"
      internalLogFile="C:\git\damienbod\AspNetCoreNlog\Logs\internal-nlog.txt">
  
  <targets>
    <target xsi:type="File" name="allfile" fileName="${var:configDir}\nlog-all.log"
                layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|${message} ${exception}" />

    <target xsi:type="File" name="ownFile-web" fileName="${var:configDir}\nlog-own.log"
             layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|  ${message} ${exception}" />

    <target xsi:type="Null" name="blackhole" />

    <target name="database" xsi:type="Database"
              dbProvider="Npgsql.NpgsqlConnection, Npgsql"
              connectionString="User ID=damienbod;Password=damienbod;Host=localhost;Port=5432;Database=log;Pooling=true;"
             >

          <commandText>
              insert into logs (
              Application, Logged, Level, Message,
              Logger, CallSite, Exception
              ) values (
              @Application, @Logged, @Level, @Message,
              @Logger, @Callsite, @Exception
              );
          </commandText>

          <parameter name="@application" layout="AspNetCoreNlog" />
          <parameter name="@logged" layout="${date}" />
          <parameter name="@level" layout="${level}" />
          <parameter name="@message" layout="${message}" />

          <parameter name="@logger" layout="${logger}" />
          <parameter name="@callSite" layout="${callsite:filename=true}" />
          <parameter name="@exception" layout="${exception:tostring}" />
      </target>
      
  </targets>

  <rules>
    <!--All logs, including from Microsoft-->
    <logger name="*" minlevel="Trace" writeTo="allfile" />
      
    <logger name="*" minlevel="Trace" writeTo="database" />
      
    <!--Skip Microsoft logs and so log only own logs-->
    <logger name="Microsoft.*" minlevel="Trace" writeTo="blackhole" final="true" />
    <logger name="*" minlevel="Trace" writeTo="ownFile-web" />
  </rules>
</nlog>

When using ASP.NET Core, the NLog.Web.AspNetCore assembly can be added as an extension in the nlog.config file to use the extra properties it provides.

<extensions>
     <add assembly="NLog.Web.AspNetCore"/>
</extensions>
            

Using the log

The logger can be used via the LogManager, or NLog can be added to the logging configuration in the Startup class of an ASP.NET Core application.

Basic example:

LogManager.Configuration.Variables["configDir"] = "C:\\git\\damienbod\\AspNetCoreNlog\\Logs";

var logger = LogManager.GetLogger("console");
logger.Warn("console logging is great");
logger.Error(new ArgumentException("oh no"));

Startup configuration in an ASP.NET Core application:

public void ConfigureServices(IServiceCollection services)
{
	services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
	// Add framework services.
	services.AddMvc();

	services.AddScoped<LogFilter>();
}

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	loggerFactory.AddNLog();

	//add NLog.Web
	app.AddNLogWeb();

	////foreach (DatabaseTarget target in LogManager.Configuration.AllTargets.Where(t => t is DatabaseTarget))
	////{
	////	target.ConnectionString = Configuration.GetConnectionString("NLogDb");
	////}
	
	////LogManager.ReconfigExistingLoggers();

	LogManager.Configuration.Variables["connectionString"] = Configuration.GetConnectionString("NLogDb");
	LogManager.Configuration.Variables["configDir"] = "C:\\git\\damienbod\\AspNetCoreNlog\\Logs";

	app.UseMvc();
}

When the application is run, the logs are added to the database.

Links

https://www.postgresql.org/

https://www.pgadmin.org/

https://github.com/nlog/NLog/wiki/Database-target

https://github.com/NLog/NLog.Extensions.Logging

https://github.com/NLog

https://docs.asp.net/en/latest/fundamentals/logging.html

https://msdn.microsoft.com/en-us/magazine/mt694089.aspx

https://docs.asp.net/en/latest/fundamentals/configuration.html



Andrew Lock: Re-execute the middleware pipeline with the StatusCodePages Middleware to create custom error pages

Re-execute the middleware pipeline with the StatusCodePages Middleware to create custom error pages

By default, the ASP.NET Core templates include either the ExceptionHandlerMiddleware or the DeveloperExceptionPage. Both of these catch exceptions thrown by the middleware pipeline, but they don't handle error status codes that are returned by the pipeline (without throwing an exception). For that, there is the StatusCodePagesMiddleware.

There are a number of ways to use the StatusCodePagesMiddleware but in this post I will be focusing on the version that re-executes the pipeline.

Default Status Code Pages

I'll start with the default MVC template, but I'll add a helper method for returning a 500 error:

public class HomeController : Controller  
{
    public IActionResult Problem()
    {
        return StatusCode(500);
    }  
}

To start with, I'll just add the default StatusCodePagesMiddleware implementation:

public void Configure(IApplicationBuilder app)  
{
    app.UseDeveloperExceptionPage();

    app.UseStatusCodePages();

    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

With this in place, making a request to an unknown URL gives the following response:

(Screenshot: the plain text 404 response returned for the unknown URL)

The default StatusCodePagesMiddleware implementation will return the simple text response when it detects a status code between 400 and 599. Similarly, if you make a request to /Home/Problem, invoking the helper action method, then the 500 status code text is returned.

(Screenshot: the plain text 500 response returned for /Home/Problem)

Re-execute vs Redirect

In reality, it's unlikely you'll want to use status code pages with this default setting in anything but a development environment. If you want to intercept status codes in production and return custom error pages, you'll want to use one of the alternative extension methods that use redirects or pipeline re-execution to return a user-friendly page:

  • UseStatusCodePagesWithRedirects
  • UseStatusCodePagesWithReExecute

These two methods have a similar outcome, in that they allow you to generate user-friendly custom error pages when an error occurs on the server. Personally, I would suggest always using the re-execute extension method rather than redirects.

The problem with redirects for error pages is that they somewhat abuse the return codes of HTTP, even though the end result for a user is essentially the same. With the redirect method, when an error occurs the pipeline will return a 302 response to the user, with a redirect to a provided error path. This causes a second request to be made to the URL that is used to generate the custom error page, which then returns a 200 OK code for that second request:

(Diagram: the redirect approach, returning a 302 followed by a 200 for the error page)

Semantically this isn't really correct, as you're triggering a second response, and ultimately returning a success code when an error actually occurred. This could also cause issues for SEO. By re-executing the pipeline you keep the correct (error) status code, you just return user-friendly HTML with it.

(Diagram: the re-execute approach, keeping the original error status code)

You are still in the context of the initial response, but the whole pipeline after the StatusCodePagesMiddleware is executed for a second time. The content generated by this second response is combined with the original Status Code to generate the final response that gets sent to the user. This provides a workflow that is overall more semantically correct, and means you don't completely lose the context of the original request.

Adding re-execute to your pipeline

Hopefully you're swayed by the re-execute approach; luckily it's easy to add this capability to your middleware pipeline. I'll start by updating the Startup class to use the re-execute extension instead of the basic one.

public void Configure(IApplicationBuilder app)  
{
    app.UseDeveloperExceptionPage();

    app.UseStatusCodePagesWithReExecute("/Home/Error", "?statusCode={0}");

    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

Note, the order of middleware in the pipeline is important. The StatusCodePagesMiddleware should be one of the earliest middleware in the pipeline, as it can only modify the response of middleware that comes after it in the pipeline.

There are two arguments to the UseStatusCodePagesWithReExecute method: the first is the path that will be used to re-execute the request in the pipeline, and the second is the querystring format that will be appended to it.

Both of these paths can include a placeholder {0} which will be replaced with the status code integer (e.g. 404, 500 etc) when the pipeline is re-executed. This allows you to either execute different action methods depending on the error that occurred, or to have a single method that can handle multiple errors.
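
As a minimal sketch of the first approach (the path format and the Error404/Error500 action names are hypothetical, not from the post):

app.UseStatusCodePagesWithReExecute("/Home/Error{0}");
// re-executes /Home/Error404, /Home/Error500 etc., so each status code can be
// handled by its own action method (e.g. Error404(), Error500())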

The following example takes the latter approach, using a single action method to handle all the error status codes, but with special cases for 404 and 500 errors provided in the querystring:

public class HomeController : Controller  
{
    public IActionResult Error(int? statusCode = null)
    {
        if (statusCode.HasValue)
        {
            if (statusCode == 404 || statusCode == 500)
            {
                var viewName = statusCode.ToString();
                return View(viewName);
            }
        }
        return View();
    }
}

When a 404 is generated (by an unknown path for example) the status code middleware catches it, and re-executes the pipeline using /Home/Error?statusCode=404. The Error action is invoked, and executes the 404.cshtml template:

(Screenshot: the custom 404 error page rendered by 404.cshtml)

Similarly, a 500 error is special cased:

(Screenshot: the custom 500 error page rendered by 500.cshtml)

Any other error executes the default Error.cshtml template:

(Screenshot: the default custom error page rendered by Error.cshtml)

Summary

Congratulations, you now have custom error pages in your ASP.NET Core application. This post shows how simple it is to achieve by re-executing the pipeline. I strongly recommend you use this approach instead of trying to use the redirects overload. In the next post, I'll show how you can obtain the original URL that triggered the error code during the second pipeline execution.


Anuraj Parameswaran: Working with dependencies in dotnet core

This post is about working with nuget dependencies and project references in ASP.NET Core or .NET Core. In earlier versions of dotnet core, you can add dependencies by modifying the project.json file directly and project references via global.json. This post is about how to do this better with dotnet add command.
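
For example (the package name and project path here are placeholders for illustration):

dotnet add package Newtonsoft.Json
dotnet add reference ../MyLibrary/MyLibrary.csproj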


Andrew Lock: Deconstructors for non-tuple types in C# 7.0

Deconstructors for non-tuple types in C# 7.0

As well as finally seeing the RTM of the .NET Core tooling, Visual Studio 2017 brought a whole host of new things to the table. Among these is C# 7.0, which introduces a number of new features to the language.

Many of these features are essentially syntactic sugar over things that were already possible, but were harder work or more cumbersome in earlier versions of the language. Tuples feel like one of those features that I'm going to end up using quite a lot.

Deconstructing tuples

Often you'll find that you want to return more than one value from a method. There's a number of ways you can achieve this currently (out parameters, System.Tuple, custom class) but none of them are particularly smooth. If you really are just returning two pieces of data, without any associated behaviour, then the new tuples added in C# 7 are a great fit.

I won't go into much detail on tuples here, so I suggest you check out one of the many recent articles introducing the feature if they're new to you. I'm just going to look at one of the associated features of tuples - the ability to deconstruct them.

In the following example, the method GetUser() returns a tuple consisting of an integer and a string:

(int id, string name) GetUser()
{
    return (123, "andrewlock");
}

If I call this method from my code, I can access the id and name values by name - so much cleaner than out parameters or the Item1, Item2 of System.Tuple.

(Screenshot: IntelliSense showing the id and name members of the returned tuple)
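
In code, that looks something like this (a minimal sketch using the GetUser method above):

var user = GetUser();
Console.WriteLine($"The user with id {user.id} is {user.name}");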

Another feature is the ability to automatically deconstruct the tuple values into separate variables. So for example, I could do:

(var userId, var username) = GetUser();
Console.WriteLine($"The user with id {userId} is {username}");  

This creates two variables, an integer called userId and a string called username. The tuple has been automatically deconstructed into these two variables.

Deconstructing non-tuples

This feature is great, but it is actually not limited to just tuples - you can add deconstructors to all your classes!

The following example shows a User class with a deconstructor that returns the FirstName and LastName properties:

public class User  
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
    public string Email { get; set; }

    public void Deconstruct(out string firstName, out string lastName)
    {
        firstName = FirstName;
        lastName = LastName;
    }
}

With this in place I can deconstruct any User object:

var user = new User  
{
    FirstName = "Joe",
    LastName = "Bloggs",
    Email = "joe.bloggs@example.com",
    Age = 23
};

(var firstName, var lastName) = user;

Console.WriteLine($"The user's name is {firstName} {lastName}");  
// The user's name is Joe Bloggs

We are creating a User object, and then deconstructing it into the firstName and lastName variables, which are declared as part of the deconstruction (they don't have to be declared inline, you can use existing variables too).
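
For example, deconstructing into variables that already exist (a small sketch using the same User object):

string firstName, lastName;
(firstName, lastName) = user;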

To create a deconstructor, create a function of the following form:

public void Deconstruct(out T1 var1, ..., out TN varN);  

The values that are produced are declared as out parameters. You can have as many arguments as you like; the caller just needs to provide the correct number of variables when calling the deconstructor. You can even have multiple overloads with different numbers of parameters:

public class User  
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
    public string Email { get; set; }

    public void Deconstruct(out string firstName, out string lastName)
    {
        firstName = FirstName;
        lastName = LastName;
    }

    public void Deconstruct(out string firstName, out string lastName, out int age)
    {
        firstName = FirstName;
        lastName = LastName;
        age = Age;
    }
}

The same user could be deconstructed in multiple ways, depending on the needs of the caller:

(var firstName1, var lastName1) = user;
(var firstName2, var lastName2, var age) = user;

Ambiguous overloads

One thing that might cross your mind is what happens if you have multiple overloads with the same number of parameters. In the following example I add an additional deconstructor that also accepts three parameters, where the third parameter is a string rather than an int:

public partial class User  
{
    // remainder of class as before

    public void Deconstruct(out string firstName, out string lastName, out string email)
    {
        firstName = FirstName;
        lastName = LastName;
        email = Email;
    }
}

This code compiles, but if you try and actually deconstruct the object you'll get some red squigglies:

(Screenshot: the ambiguous call error shown in the editor)

At first this seems like it's just a standard C# type inference error - there are two candidate method calls so you need to disambiguate between them by providing explicit types instead of var. However, even explicitly declaring the type won't clear this one up:

(Screenshot: the ambiguity error remains even with explicit types)

You'll still get the following error:

The call is ambiguous between the following methods or properties: 'Program.User.Deconstruct(out string, out string, out int)' and 'Program.User.Deconstruct(out string, out string, out string)'  

So make sure not to define multiple Deconstruct overloads in a type with the same number of parameters!

Bonus: Predefined type 'System.ValueTuple`2' is not defined or imported

When you first start using tuples, you might get this confusing error:

Predefined type 'System.ValueTuple`2' is not defined or imported  

But don't panic, you just need to add the System.ValueTuple NuGet package to your project, and all will be good again:

(Screenshot: adding the System.ValueTuple NuGet package)
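
For example, from the command line (or via the NuGet package manager UI):

dotnet add package System.ValueTuple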

Summary

This was just a quick look at the deconstruction feature that came in C# 7.0. For a more detailed look, check out some of the links below:


Damien Bowden: ASP.NET Core Error Management with elmah.io

This article shows how to use elmah.io error management with an ASP.NET Core application. The error and log data is added to elmah.io using different elmah.io NuGet packages, directly from ASP.NET Core and also using an NLog elmah.io target.

Code: https://github.com/damienbod/AspNetCoreElmah

elmah.io is an error management system which can help you monitor, find and fix application problems fast. While structured logging is supported, the main focus of elmah.io is handling errors.

Getting started with Elmah.Io

Before you can start logging to elmah.io, you need to create an account and set up a log. Refer to the documentation here.

Logging exceptions, errors with Elmah.Io.AspNetCore and Elmah.Io.Extensions.Logging

You can add logs and exceptions to elmah.io directly from an ASP.NET Core application using the Elmah.Io.AspNetCore and the Elmah.Io.Extensions.Logging NuGet packages. These packages can be added to the project using the NuGet package manager.

Or you can just add the packages directly in the csproj file.

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

  <PropertyGroup>
    <UserSecretsId>AspNetCoreElmah-c23d2237a4-eb8832a1-452ac4</UserSecretsId>
  </PropertyGroup>
  
  <ItemGroup>
    <Content Include="wwwroot\index.html" />
  </ItemGroup>
  <ItemGroup>
    <PackageReference Include="Elmah.Io.AspNetCore" Version="3.2.39-pre" />
    <PackageReference Include="Elmah.Io.Extensions.Logging" Version="3.1.22-pre" />
    <PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.0.0" />
    <PackageReference Include="Microsoft.AspNetCore" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.2" />
    <PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Configuration.UserSecrets" Version="1.1.1" />
  </ItemGroup>
  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="1.0.0" />
  </ItemGroup>

</Project>

The Elmah.Io.AspNetCore package is used to catch unhandled exceptions in the application. This is configured in the Startup class. The OnMessage method is used to set specific properties in the messages which are sent to elmah.io. Setting the Hostname and the Application properties is very useful when evaluating the logs in elmah.io.

app.UseElmahIo(
	_elmahAppKey, 
	new Guid(_elmahLogId),
	new ElmahIoSettings()
	{
		OnMessage = msg =>
		{
			msg.Version = "1.0.0";
			msg.Hostname = "dev";
			msg.Application = "AspNetCoreElmah";
		}
	});

The Elmah.Io.Extensions.Logging package is used to log messages using the built-in ILoggerFactory. You should only send warning, error, and critical messages rather than logging everything to elmah.io, although that is possible. Again the OnMessage method can be used to set the Hostname and the Application name for each log.

loggerFactory.AddElmahIo(
	_elmahAppKey, 
	new Guid(_elmahLogId), 
	new FilterLoggerSettings
	{
		{"ValuesController", LogLevel.Information}
	},
	new ElmahIoProviderOptions
	{
		OnMessage = msg =>
		{
			msg.Version = "1.0.0";
			msg.Hostname = "dev";
			msg.Application = "AspNetCoreElmah";
		}
	});

Using User Secrets for the elmah.io API-KEY and LogID

ASP.NET Core user secrets can be used to store the elmah.io API-KEY and the LogID, as you don't want to commit these to your source control. The AddUserSecrets method is used to load them.

private string _elmahAppKey;
private string _elmahLogId;

public Startup(IHostingEnvironment env)
{
	var builder = new ConfigurationBuilder()
		.SetBasePath(env.ContentRootPath)
		.AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
		.AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
		.AddEnvironmentVariables();

	if (env.IsDevelopment())
	{
		builder.AddUserSecrets("AspNetCoreElmah-c23d2237a4-eb8832a1-452ac4");
	}

	Configuration = builder.Build();
}

The user secret properties can then be used in the ConfigureServices method.

 public void ConfigureServices(IServiceCollection services)
{
	_elmahAppKey = Configuration["ElmahAppKey"];
	_elmahLogId = Configuration["ElmahLogId"];
	// Add framework services.
	services.AddMvc();
}
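
The secret values themselves can be stored from the command line using the Secret Manager tool (assuming the user-secrets CLI tool is referenced in the project; the key names match the ones read in ConfigureServices):

dotnet user-secrets set "ElmahAppKey" "YOUR_API_KEY"
dotnet user-secrets set "ElmahLogId" "YOUR_LOG_ID"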

A dummy exception is thrown in this example, which then sends the data to elmah.io.

[HttpGet("{id}")]
public string Get(int id)
{
	throw new System.Exception("something terrible bad here!");
	return "value";
}

Logging exceptions, errors to elmah.io using NLog

NLog, using the Elmah.Io.NLog target, can also be used in ASP.NET Core to send messages to elmah.io. The target can be added using the NuGet package manager.

Or you can just add it to the csproj file.

<PackageReference Include="Elmah.Io.NLog" Version="3.1.28-pre" />
<PackageReference Include="NLog.Web.AspNetCore" Version="4.3.1" />

NLog for ASP.NET Core applications can be configured in the Startup class. You need to set the target properties with the elmah.io API-KEY and also the LogId. You could also do this in the nlog.config file.

loggerFactory.AddNLog();
app.AddNLogWeb();

LogManager.Configuration.Variables["configDir"] = "C:\\git\\damienbod\\AspNetCoreElmah\\Logs";

foreach (ElmahIoTarget target in LogManager.Configuration.AllTargets.Where(t => t is ElmahIoTarget))
{
	target.ApiKey = _elmahAppKey;
	target.LogId = _elmahLogId;
}

LogManager.ReconfigExistingLoggers();

The IHttpContextAccessor and the HttpContextAccessor also need to be registered to the default IoC in ASP.NET Core to get the extra information from the web requests.

public void ConfigureServices(IServiceCollection services)
{
	services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();

	_elmahAppKey = Configuration["ElmahAppKey"];
	_elmahLogId = Configuration["ElmahLogId"];

	// Add framework services.
	services.AddMvc();
}

The nlog.config file can then be configured for the target with the elmah.io type. The application property is also set which is useful in elmah.io.

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      internalLogLevel="Warn"
      internalLogFile="C:\git\damienbod\AspNetCoreElmah\Logs\internal-nlog.txt">

  <extensions>
    <add assembly="NLog.Web.AspNetCore"/>
    <add assembly="Elmah.Io.NLog"/>    
  </extensions>

  
  <targets>
    <target name="elmahio" type="elmah.io" apiKey="API_KEY" logId="LOG_ID" application="AspNetCoreElmahUI"/>
    
    <target xsi:type="File" name="allfile" fileName="${var:configDir}\nlog-all.log"
                layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|TraceId=${aspnet-traceidentifier}| url: ${aspnet-request-url} | action: ${aspnet-mvc-action} |${message} ${exception}" />

    <target xsi:type="File" name="ownFile-web" fileName="${var:configDir}\nlog-own.log"
             layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|TraceId=${aspnet-traceidentifier}| url: ${aspnet-request-url} | action: ${aspnet-mvc-action} | ${message} ${exception}" />

    <target xsi:type="Null" name="blackhole" />

  </targets>

  <rules>
    <logger name="*" minlevel="Warn" writeTo="elmahio" />
    <!--All logs, including from Microsoft-->
    <logger name="*" minlevel="Trace" writeTo="allfile" />

    <!--Skip Microsoft logs and so log only own logs-->
    <logger name="Microsoft.*" minlevel="Trace" writeTo="blackhole" final="true" />
    <logger name="*" minlevel="Trace" writeTo="ownFile-web" />
  </rules>
</nlog>


The About method calls the AspNetCoreElmah application's API method which throws the dummy exception, so exceptions are sent from both applications.

public async Task<IActionResult> About()
{
	_logger.LogInformation("HomeController About called");
	// throws exception
	HttpClient _client = new HttpClient();
	var response = await _client.GetAsync("http://localhost:37209/api/values/1");
	response.EnsureSuccessStatusCode();
	var responseString = System.Text.Encoding.UTF8.GetString(
		await response.Content.ReadAsByteArrayAsync()
	);
	ViewData["Message"] = "Your application description page.";

	return View();
}

Now both applications can be started, and the errors can be viewed in the elmah.io dashboard.

When you open the dashboard in elmah.io and access your logs, you can view the exceptions.

Here’s the log sent from the AspNetCoreElmah application.

Here’s the log sent from the AspNetCoreElmahUI application using NLog with Elmah.Io.


Links

https://elmah.io/

https://github.com/elmahio/elmah.io.nlog

http://nlog-project.org/



Anuraj Parameswaran: .editorconfig support in Visual Studio 2017

This post is about .editorconfig support in Visual Studio 2017. EditorConfig helps developers define and maintain consistent coding styles between different editors and IDEs. As part of productivity improvements in Visual Studio, Microsoft introduced support for .editorconfig file in Visual Studio 2017.
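
A minimal example of what such a file might contain (the settings shown here are just illustrative, not taken from the post):

root = true

[*.cs]
indent_style = space
indent_size = 4
charset = utf-8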


Anuraj Parameswaran: Live Unit Testing in Visual Studio 2017

This post is about Live Unit Testing in Visual Studio 2017. With VS2017, Microsoft released Live Unit Testing. Live Unit Testing automatically runs the impacted unit tests in the background as you edit code, and visualizes the results and code coverage, live in the editor.


Anuraj Parameswaran: Create a dotnet new project template in dotnet core

This post is about creating project template for the dotnet new command. As part of the new dotnet command, now you can create Empty Web app, API app, MS Test and Solution file as part of dotnet new command. This post is about creating a Web API template with Swagger support.


Damien Bowden: Testing an ASP.NET Core MVC Protobuf API using HTTPClient and xUnit

This article shows how to test an ASP.NET Core MVC API using xUnit and an HttpClient, with Protobuf used for the content formatters.

Code: https://github.com/damienbod/AspNetMvc6ProtobufFormatters

Posts in this series:

The test project tests the ASP.NET Core API produced here. xUnit is used as the test framework. The xUnit dependencies, as well as the Microsoft.AspNetCore.TestHost package, can be added to the test project using NuGet in Visual Studio 2017. Microsoft provides nice docs about Integration testing ASP.NET Core.

When the NuGet packages have been added, you can view them in the csproj file, or install and update them directly in this file. A reference to the project containing the API is also added to the test project.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
    <AssemblyName>AspNetCoreProtobuf.IntegrationTests</AssemblyName>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.0.0" />
    <PackageReference Include="xunit.runner.console" Version="2.2.0" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.2.0" />
    <PackageReference Include="xunit" Version="2.2.0" />
    <PackageReference Include="Microsoft.AspNetCore.TestHost" Version="1.1.1" />
    <PackageReference Include="protobuf-net" Version="2.1.0" />
    <PackageReference Include="xunit.runners" Version="2.0.0" />
  </ItemGroup>

  <ItemGroup>
    <ProjectReference Include="..\AspNetCoreProtobuf\AspNetCoreProtobuf.csproj" />
  </ItemGroup>

  <ItemGroup>
    <Service Include="{82a7f48d-3b50-4b1e-b82e-3ada8210c358}" />
  </ItemGroup>

</Project>

The TestServer is used to test the ASP.NET Core API. This is set up for all the API tests.

private readonly TestServer _server;
private readonly HttpClient _client;

public ProtobufApiTests()
{
	_server = new TestServer(
		new WebHostBuilder()
		.UseKestrel()
		.UseStartup<Startup>());
	_client = _server.CreateClient();
}

HTTP GET request test

The GetProtobufDataAndCheckProtobufContentTypeMediaType test sends an HTTP GET request to the test server and requests the content as application/x-protobuf. The result is deserialized using protobuf, and the content type header and the expected result are checked.

[Fact]
public async Task GetProtobufDataAndCheckProtobufContentTypeMediaType()
{
	// Act
	_client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/x-protobuf"));
	var response = await _client.GetAsync("/api/values/1");
	response.EnsureSuccessStatusCode();

	var result = ProtoBuf.Serializer.Deserialize<ProtobufModelDto>(await response.Content.ReadAsStreamAsync());

	// Assert
	Assert.Equal("application/x-protobuf", response.Content.Headers.ContentType.MediaType );
	Assert.Equal("My first MVC 6 Protobuf service", result.StringValue);
}
		

HTTP POST request test

The PostProtobufData test method sends an HTTP POST request to the test server with protobuf serialized content. The status code of the response is validated.

[Fact]
public void PostProtobufData()
{
	// request protobuf content in the response
	_client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/x-protobuf"));

	MemoryStream stream = new MemoryStream();
	ProtoBuf.Serializer.Serialize<ProtobufModelDto>(stream, new ProtobufModelDto
	{
		Id = 2,
		Name = "lovely data",
		StringValue = "amazing this ah"
	});

	// reset the stream position so the serialized payload is sent from the start
	stream.Position = 0;
	HttpContent data = new StreamContent(stream);

	// HTTP POST with Protobuf Request Body
	var responseForPost = _client.PostAsync("api/Values", data).Result;

	Assert.True(responseForPost.IsSuccessStatusCode);
}

The tests can be executed or debugged in Visual Studio using the Test Explorer.

The tests can also be run with dotnet test from the command line.

C:\git\damienbod\AspNetCoreProtobufFormatters\src\AspNetCoreProtobuf.IntegrationTests>dotnet test
Build started, please wait...
Build completed.

Test run for C:\git\damienbod\AspNetCoreProtobufFormatters\src\AspNetCoreProtobuf.IntegrationTests\bin\Debug\netcoreapp1.1\AspNetCoreProtobuf.IntegrationTests.dll(.NETCoreApp,Version=v1.1)
Microsoft (R) Test Execution Command Line Tool Version 15.0.0.0
Copyright (c) Microsoft Corporation. All rights reserved.

Starting test execution, please wait...
[xUnit.net 00:00:00.5821132]   Discovering: AspNetCoreProtobuf.IntegrationTests
[xUnit.net 00:00:00.6841246]   Discovered:  AspNetCoreProtobuf.IntegrationTests
[xUnit.net 00:00:00.7273897]   Starting:    AspNetCoreProtobuf.IntegrationTests
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
      Request starting HTTP/1.1 GET http://
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[1]
      Executing action method AspNetCoreProtobuf.Controllers.ValuesController.Post (AspNetCoreProtobuf) with arguments (AspNetCoreProtobuf.Model.ProtobufModelDto) - ModelState is Valid
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[2]
      Executed action AspNetCoreProtobuf.Controllers.ValuesController.Post (AspNetCoreProtobuf) in 137.2264ms
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
      Request finished in 346.8796ms 200
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
      Request starting HTTP/1.1 GET http://
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[1]
      Executing action method AspNetCoreProtobuf.Controllers.ValuesController.Get (AspNetCoreProtobuf) with arguments (1) - ModelState is Valid
info: Microsoft.AspNetCore.Mvc.Internal.ObjectResultExecutor[1]
      Executing ObjectResult, writing value Microsoft.AspNetCore.Mvc.ControllerContext.
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[2]
      Executed action AspNetCoreProtobuf.Controllers.ValuesController.Get (AspNetCoreProtobuf) in 39.0599ms
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
      Request finished in 43.2983ms 200 application/x-protobuf
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
      Request starting HTTP/1.1 GET http://
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[1]
      Executing action method AspNetCoreProtobuf.Controllers.ValuesController.Get (AspNetCoreProtobuf) with arguments (1) - ModelState is Valid
info: Microsoft.AspNetCore.Mvc.Internal.ObjectResultExecutor[1]
      Executing ObjectResult, writing value Microsoft.AspNetCore.Mvc.ControllerContext.
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[2]
      Executed action AspNetCoreProtobuf.Controllers.ValuesController.Get (AspNetCoreProtobuf) in 1.4974ms
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
      Request finished in 3.6715ms 200 application/x-protobuf
[xUnit.net 00:00:01.5669956]   Finished:    AspNetCoreProtobuf.IntegrationTests

Total tests: 3. Passed: 3. Failed: 0. Skipped: 0.
Test Run Successful.
Test execution time: 2.7499 Seconds

appveyor CI

The project can then be connected to any build server. AppVeyor is an easy one to set up and works well with GitHub projects. Create an account and select the GitHub repository to build. Add an appveyor.yml file to the root of your project and configure it as required. Docs can be found here:
https://www.appveyor.com/docs/build-configuration/

image: Visual Studio 2017
init:
  - git config --global core.autocrlf true
install:
  - ECHO %APPVEYOR_BUILD_WORKER_IMAGE%
  - dotnet --version
  - dotnet restore
build_script:
- dotnet build
before_build:
- appveyor-retry dotnet restore -v Minimal
test_script:
- cd src/AspNetCoreProtobuf.IntegrationTests
- dotnet test

The appveyor badges can then be used in your project md file.

|                           | Build                                                                                                                                                             |       
| ------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| .NET Core                 | [![Build status](https://ci.appveyor.com/api/projects/status/ihtrq4u81rtsty9k?svg=true)](https://ci.appveyor.com/project/damienbod/aspnetmvc6protobufformatters)  |

This would then be displayed in github as follows:

Links

https://developers.google.com/protocol-buffers/docs/csharptutorial

http://www.stackoverflow.com/questions/7774155/deserialize-long-string-with-protobuf-for-c-sharp-doesnt-work-properly-for-me

https://xunit.github.io/

https://www.appveyor.com/docs/build-configuration/

https://www.nuget.org/packages/protobuf-net/

https://github.com/mgravell/protobuf-net

http://teelahti.fi/using-google-proto3-with-aspnet-mvc/

https://github.com/damienpontifex/ProtobufFormatter/tree/master/src/ProtobufFormatter

http://www.strathweb.com/2014/11/formatters-asp-net-mvc-6/

http://blogs.msdn.com/b/webdev/archive/2014/11/24/content-negotiation-in-mvc-5-or-how-can-i-just-write-json.aspx

https://github.com/WebApiContrib/WebApiContrib.Formatting.ProtoBuf

https://damienbod.wordpress.com/2014/01/11/using-protobuf-net-media-formatter-with-web-api-2/

https://docs.microsoft.com/en-us/aspnet/core/testing/integration-testing



Damien Bowden: .NET Core logging to MySQL using NLog

This article shows how to log to MySQL in a .NET Core application using NLog.

Code: https://github.com/damienbod/AspNetCoreNlog

NLog posts in this series:

  1. ASP.NET Core logging with NLog and Microsoft SQL Server
  2. ASP.NET Core logging with NLog and Elasticsearch
  3. Settings the NLog database connection string in the ASP.NET Core appsettings.json
  4. ASP.NET Core, logging to MySQL using NLog
  5. .NET Core logging with NLog and PostgreSQL

Set up the MySQL database

MySQL Workbench can be used to add the schema ‘nlog’ which will be used for logging to the MySQL database. The user ‘damienbod’ is also required, which must match the defined user in the connection string. If you configure the MySQL database differently, then you need to change the connection string in the nlog.config file.

(Screenshot: MySQL Workbench showing the nlog schema)

You also need to create a log table; the following script can be used. If you decide to use NLog.Web in an ASP.NET Core application and add some extra properties or fields to the logs, then this script, as well as the database target in the nlog.config, needs to be extended.

CREATE TABLE `log` (
  `Id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `Application` varchar(50) DEFAULT NULL,
  `Logged` datetime DEFAULT NULL,
  `Level` varchar(50) DEFAULT NULL,
  `Message` varchar(512) DEFAULT NULL,
  `Logger` varchar(250) DEFAULT NULL,
  `Callsite` varchar(512) DEFAULT NULL,
  `Exception` varchar(512) DEFAULT NULL,
  PRIMARY KEY (`Id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;

Add NLog and the MySQL provider to the project.

The MySql.Data pre-release NuGet package can be used to log to MySQL. Add this to your project.

(Screenshot: adding the MySql.Data NuGet package)

The NLog.Web.AspNetCore package also needs to be added, or just NLog if you do not require any web extensions.

nlog.config

The database target needs to be configured to log to MySQL. The database provider is set to use the MySql.Data package which was downloaded using NuGet. If you're using a different MySQL provider, this needs to be changed. The connection string is also set here, which matches what was configured previously in the MySQL database using Workbench. If you read the connection string from the app settings, an NLog variable can be used here.

  <target name="database" xsi:type="Database"
              dbProvider="MySql.Data.MySqlClient.MySqlConnection, MySql.Data"
              connectionString="server=localhost;Database=nlog;user id=damienbod;password=1234"
             >

          <commandText>
              insert into nlog.log (
              Application, Logged, Level, Message,
              Logger, CallSite, Exception
              ) values (
              @Application, @Logged, @Level, @Message,
              @Logger, @Callsite, @Exception
              );
          </commandText>

          <parameter name="@application" layout="AspNetCoreNlog" />
          <parameter name="@logged" layout="${date}" />
          <parameter name="@level" layout="${level}" />
          <parameter name="@message" layout="${message}" />

          <parameter name="@logger" layout="${logger}" />
          <parameter name="@callSite" layout="${callsite:filename=true}" />
          <parameter name="@exception" layout="${exception:tostring}" />
      </target>

NLog can then be used in the application.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using NLog;
using NLog.Targets;

namespace ConsoleNLog
{
    public class Program
    {
        public static void Main(string[] args)
        {

            LogManager.Configuration.Variables["configDir"] = "C:\\git\\damienbod\\AspNetCoreNlog\\Logs";

            var logger = LogManager.GetLogger("console");
            logger.Warn("console logging is great");

            Console.WriteLine("log sent");
            Console.ReadKey();
        }
    }
}

Full nlog.config file:
https://github.com/damienbod/AspNetCoreNlog/blob/master/src/ConsoleNLogMySQL/nlog.config

Links

https://github.com/nlog/NLog/wiki/Database-target

https://github.com/NLog/NLog.Extensions.Logging

https://github.com/NLog

https://github.com/NLog/NLog/blob/38aef000f916bd5ffd8b80a5576afa2423192e84/examples/targets/Configuration%20API/Database/MSSQL/Example.cs

https://docs.asp.net/en/latest/fundamentals/logging.html

https://msdn.microsoft.com/en-us/magazine/mt694089.aspx

https://docs.asp.net/en/latest/fundamentals/configuration.html



Damien Bowden: Implementing an Audit Trail using ASP.NET Core and Elasticsearch with NEST

This article shows how an audit trail can be implemented in ASP.NET Core which saves the audit documents to Elasticsearch using NEST.

Code: https://github.com/damienbod/AspNetCoreElasticsearchNestAuditTrail

Should I just use a logger?

Depends. If you just need to save requests, responses and application events, then a logger would be a better solution for this use case. I would use NLog as it provides everything you need, or could need, when working with ASP.NET Core.

If you only need to save business events/data of the application in the audit trail, then this solution could fit.

Using the Audit Trail

The audit trail is implemented so that it can be used easily. In the Startup class of the ASP.NET Core application, it is added to the application in the ConfigureServices method. The class library provides an extension method, AddAuditTrail, which can be configured as required. It takes two parameters: a bool which defines whether a new index is created per day or per month to save the audit trail documents, and an int which defines how many of the previous indices are included in the alias used to select the audit trail items. If this is 0, all indices are included in the search.

Because the audit trail documents are grouped into different indices per day or per month, the amount of documents in each index can be controlled. Usually the application user requires only the last n days or the last two months of the audit trails, so the search does not need to go through all audit trail documents since the application began. This makes it possible to optimize the data as required, or even to remove or archive old, unused audit trail indices.

public void ConfigureServices(IServiceCollection services)
{
	var indexPerMonth = false;
	var amountOfPreviousIndicesUsedInAlias = 3;
	services.AddAuditTrail<CustomAuditTrailLog>(options => 
		options.UseSettings(indexPerMonth, amountOfPreviousIndicesUsedInAlias)
	);

	services.AddMvc();
}

The AddAuditTrail extension method requires a model definition which will be used to save or retrieve the documents in Elasticsearch. The model must implement the IAuditTrailLog interface. This interface just forces you to implement the property Timestamp which is required for the audit logs.
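
Based on that description, the interface is presumably something like the following minimal sketch (only the Timestamp member is mentioned in the text):

public interface IAuditTrailLog
{
    DateTime Timestamp { get; set; }
}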

The model can then be designed and defined as required. NEST attributes can be used on each of the properties in the model. Use the Keyword attribute if a text field should not be analyzed. If you must use enums, then save the string value and NOT the integer value to the persistence layer. If integer values are saved for enums, they cannot be interpreted without knowing what each integer value represents, which makes the data dependent on the code.

using AuditTrail.Model;
using Nest;
using System;

namespace AspNetCoreElasticsearchNestAuditTrail
{
    public class CustomAuditTrailLog : IAuditTrailLog
    {
        public CustomAuditTrailLog()
        {
            Timestamp = DateTime.UtcNow;
        }

        public DateTime Timestamp { get; set; }

        [Keyword]
        public string Action { get; set; }

        public string Log { get; set; }

        public string Origin { get; set; }

        public string User { get; set; }

        public string Extra { get; set; }
    }
}

The audit trail can then be used anywhere in the application. The IAuditTrailProvider can be added in the constructor of the class and an audit document can be created using the AddLog method.

private readonly IAuditTrailProvider<CustomAuditTrailLog> _auditTrailProvider;

public HomeController(IAuditTrailProvider<CustomAuditTrailLog> auditTrailProvider)
{
	_auditTrailProvider = auditTrailProvider;
}

public IActionResult Index()
{
	var auditTrailLog = new CustomAuditTrailLog()
	{
		User = User.ToString(),
		Origin = "HomeController:Index",
		Action = "Home GET",
		Log = "home page called doing something important enough to be added to the audit log.",
		Extra = "yep"
	};

	_auditTrailProvider.AddLog(auditTrailLog);
	return View();
}

The audit trail documents can be viewed using QueryAuditLogs which supports paging and uses a simple query search which accepts wildcards. The AuditTrailSearch method returns a MVC view with the audit trail items in the model.

public IActionResult AuditTrailSearch(string searchString, int skip, int amount)
{

	var auditTrailViewModel = new AuditTrailViewModel
	{
		Filter = searchString,
		Skip = skip,
		Size = amount
	};

	if (skip > 0 || amount > 0)
	{
		var paging = new AuditTrailPaging
		{
			Size = amount,
			Skip = skip
		};

		auditTrailViewModel.AuditTrailLogs = _auditTrailProvider.QueryAuditLogs(searchString, paging).ToList();
		
		return View(auditTrailViewModel);
	}

	auditTrailViewModel.AuditTrailLogs = _auditTrailProvider.QueryAuditLogs(searchString).ToList();
	return View(auditTrailViewModel);
}

How is the Audit Trail implemented?

The AuditTrailExtensions class implements the extension methods used to initialize the audit trail implementation. This class accepts the options and registers the interfaces and classes with the IoC container used by ASP.NET Core.

Generics are used so that any model class can be used to save the audit trail data, as this always changes from project to project. The type T must implement the interface IAuditTrailLog.

using System;
using Microsoft.Extensions.DependencyInjection.Extensions;
using Microsoft.Extensions.Localization;
using AuditTrail;
using AuditTrail.Model;

namespace Microsoft.Extensions.DependencyInjection
{
    public static class AuditTrailExtensions
    {
        public static IServiceCollection AddAuditTrail<T>(this IServiceCollection services) where T : class, IAuditTrailLog
        {
            if (services == null)
            {
                throw new ArgumentNullException(nameof(services));
            }

            return AddAuditTrail<T>(services, setupAction: null);
        }

        public static IServiceCollection AddAuditTrail<T>(
            this IServiceCollection services,
            Action<AuditTrailOptions> setupAction) where T : class, IAuditTrailLog
        {
            if (services == null)
            {
                throw new ArgumentNullException(nameof(services));
            }

            services.TryAdd(new ServiceDescriptor(
                typeof(IAuditTrailProvider<T>),
                typeof(AuditTrailProvider<T>),
                ServiceLifetime.Transient));

            if (setupAction != null)
            {
                services.Configure(setupAction);
            }
            return services;
        }
    }
}

When a new audit trail log is added, it uses the index defined in the _indexName field.

public void AddLog(T auditTrailLog)
{
	var index = new IndexName()
	{
		Name = _indexName
	};

	var indexRequest = new IndexRequest<T>(auditTrailLog, index);

	var response = _elasticClient.Index(indexRequest);
	if (!response.IsValid)
	{
		throw new ElasticsearchClientException("Add auditlog disaster!");
	}
}

The _indexName field is defined using the date pattern, either days or months depending on your options.

private const string _alias = "auditlog";
private string _indexName = $"{_alias}-{DateTime.UtcNow.ToString("yyyy-MM-dd")}";

index definition per month:

if(_options.Value.IndexPerMonth)
{
	_indexName = $"{_alias}-{DateTime.UtcNow.ToString("yyyy-MM")}";
}

When querying the audit trail logs, a simple query string search is used to find and select the audit trail documents required for the view, which means wildcards can be used. The method accepts a query filter and paging options. If you search without any filter, all documents defined in the alias (the included indices) are returned. Because a simple query string is used, the filter can also contain operators like AND and OR.

public IEnumerable<T> QueryAuditLogs(string filter = "*", AuditTrailPaging auditTrailPaging = null)
{
	var from = 0;
	var size = 10;
	EnsureAlias();
	if(auditTrailPaging != null)
	{
		from = auditTrailPaging.Skip;
		size = auditTrailPaging.Size;
		if(size > 1000)
		{
			// max limit 1000 items
			size = 1000;
		}
	}
	var searchRequest = new SearchRequest<T>(Indices.Parse(_alias))
	{
		Size = size,
		From = from,
		Query = new QueryContainer(
			new SimpleQueryStringQuery
			{
				Query = filter
			}
		),
		Sort = new List<ISort>
			{
				new SortField { Field = TimestampField, Order = SortOrder.Descending }
			}
	};

	var searchResponse = _elasticClient.Search<T>(searchRequest);

	return searchResponse.Documents;
}

The alias is also updated in the search query, if required. Depending on your configuration, the alias uses all the audit trail indices, or just the last n days or n months. This check uses a static field. If the alias needs to be updated, the new alias is created, which also deletes the old one.

private void EnsureAlias()
{
	if (_options.Value.IndexPerMonth)
	{
		if (aliasUpdated.Date < DateTime.UtcNow.AddMonths(-1).Date)
		{
			aliasUpdated = DateTime.UtcNow;
			CreateAlias();
		}
	}
	else
	{
		if (aliasUpdated.Date < DateTime.UtcNow.AddDays(-1).Date)
		{
			aliasUpdated = DateTime.UtcNow;
			CreateAlias();
		}
	}           
}

Here’s how the alias is created for all indices of the audit trail.

private void CreateAliasForAllIndices()
{
	var response = _elasticClient.AliasExists(new AliasExistsRequest(new Names(new List<string> { _alias })));
	if (!response.IsValid)
	{
		throw response.OriginalException;
	}

	if (response.Exists)
	{
		_elasticClient.DeleteAlias(new DeleteAliasRequest(Indices.Parse($"{_alias}-*"), _alias));
	}

	var responseCreateIndex = _elasticClient.PutAlias(new PutAliasRequest(Indices.Parse($"{_alias}-*"), _alias));
	if (!responseCreateIndex.IsValid)
	{
		throw response.OriginalException;
	}
}

The full AuditTrailProvider class which implements the audit trail.

using AuditTrail.Model;
using Elasticsearch.Net;
using Microsoft.Extensions.Options;
using Nest;
using Newtonsoft.Json.Converters;
using System;
using System.Collections.Generic;
using System.Linq;

namespace AuditTrail
{
    public class AuditTrailProvider<T> : IAuditTrailProvider<T> where T : class
    {
        private const string _alias = "auditlog";
        private string _indexName = $"{_alias}-{DateTime.UtcNow.ToString("yyyy-MM-dd")}";
        private static Field TimestampField = new Field("timestamp");
        private readonly IOptions<AuditTrailOptions> _options;

        private ElasticClient _elasticClient { get; }

        public AuditTrailProvider(
           IOptions<AuditTrailOptions> auditTrailOptions)
        {
            _options = auditTrailOptions ?? throw new ArgumentNullException(nameof(auditTrailOptions));

            if(_options.Value.IndexPerMonth)
            {
                _indexName = $"{_alias}-{DateTime.UtcNow.ToString("yyyy-MM")}";
            }

            var pool = new StaticConnectionPool(new List<Uri> { new Uri("http://localhost:9200") });
            var connectionSettings = new ConnectionSettings(
                pool,
                new HttpConnection(),
                new SerializerFactory((jsonSettings, nestSettings) => jsonSettings.Converters.Add(new StringEnumConverter())))
              .DisableDirectStreaming();

            _elasticClient = new ElasticClient(connectionSettings);
        }

        public void AddLog(T auditTrailLog)
        {
            var index = new IndexName()
            {
                Name = _indexName
            };

            var indexRequest = new IndexRequest<T>(auditTrailLog, index);

            var response = _elasticClient.Index(indexRequest);
            if (!response.IsValid)
            {
                throw new ElasticsearchClientException("Add auditlog disaster!");
            }
        }

        public long Count(string filter = "*")
        {
            EnsureAlias();
            var searchRequest = new SearchRequest<T>(Indices.Parse(_alias))
            {
                Size = 0,
                Query = new QueryContainer(
                    new SimpleQueryStringQuery
                    {
                        Query = filter
                    }
                ),
                Sort = new List<ISort>
                    {
                        new SortField { Field = TimestampField, Order = SortOrder.Descending }
                    }
            };

            var searchResponse = _elasticClient.Search<AuditTrailLog>(searchRequest);

            return searchResponse.Total;
        }

        public IEnumerable<T> QueryAuditLogs(string filter = "*", AuditTrailPaging auditTrailPaging = null)
        {
            var from = 0;
            var size = 10;
            EnsureAlias();
            if(auditTrailPaging != null)
            {
                from = auditTrailPaging.Skip;
                size = auditTrailPaging.Size;
                if(size > 1000)
                {
                    // max limit 1000 items
                    size = 1000;
                }
            }
            var searchRequest = new SearchRequest<T>(Indices.Parse(_alias))
            {
                Size = size,
                From = from,
                Query = new QueryContainer(
                    new SimpleQueryStringQuery
                    {
                        Query = filter
                    }
                ),
                Sort = new List<ISort>
                    {
                        new SortField { Field = TimestampField, Order = SortOrder.Descending }
                    }
            };

            var searchResponse = _elasticClient.Search<T>(searchRequest);

            return searchResponse.Documents;
        }

        private void CreateAliasForAllIndices()
        {
            var response = _elasticClient.AliasExists(new AliasExistsRequest(new Names(new List<string> { _alias })));
            if (!response.IsValid)
            {
                throw response.OriginalException;
            }

            if (response.Exists)
            {
                _elasticClient.DeleteAlias(new DeleteAliasRequest(Indices.Parse($"{_alias}-*"), _alias));
            }

            var responseCreateIndex = _elasticClient.PutAlias(new PutAliasRequest(Indices.Parse($"{_alias}-*"), _alias));
            if (!responseCreateIndex.IsValid)
            {
                throw response.OriginalException;
            }
        }

        private void CreateAlias()
        {
            if (_options.Value.AmountOfPreviousIndicesUsedInAlias > 0)
            {
                CreateAliasForLastNIndices(_options.Value.AmountOfPreviousIndicesUsedInAlias);
            }
            else
            {
                CreateAliasForAllIndices();
            }
        }

        private void CreateAliasForLastNIndices(int amount)
        {
            var responseCatIndices = _elasticClient.CatIndices(new CatIndicesRequest(Indices.Parse($"{_alias}-*")));
            var records = responseCatIndices.Records.ToList();
            List<string> indicesToAddToAlias = new List<string>();
            for(int i = amount;i>0;i--)
            {
                if (_options.Value.IndexPerMonth)
                {
                    var indexName = $"{_alias}-{DateTime.UtcNow.AddMonths(-i + 1).ToString("yyyy-MM")}";
                    if(records.Exists(t => t.Index == indexName))
                    {
                        indicesToAddToAlias.Add(indexName);
                    }
                }
                else
                {
                    var indexName = $"{_alias}-{DateTime.UtcNow.AddDays(-i + 1).ToString("yyyy-MM-dd")}";                   
                    if (records.Exists(t => t.Index == indexName))
                    {
                        indicesToAddToAlias.Add(indexName);
                    }
                }
            }

            var response = _elasticClient.AliasExists(new AliasExistsRequest(new Names(new List<string> { _alias })));
            if (!response.IsValid)
            {
                throw response.OriginalException;
            }

            if (response.Exists)
            {
                _elasticClient.DeleteAlias(new DeleteAliasRequest(Indices.Parse($"{_alias}-*"), _alias));
            }

            Indices multipleIndicesFromStringArray = indicesToAddToAlias.ToArray();
            var responseCreateIndex = _elasticClient.PutAlias(new PutAliasRequest(multipleIndicesFromStringArray, _alias));
            if (!responseCreateIndex.IsValid)
            {
                throw responseCreateIndex.OriginalException;
            }
        }

        private static DateTime aliasUpdated = DateTime.UtcNow.AddYears(-50);

        private void EnsureAlias()
        {
            if (_options.Value.IndexPerMonth)
            {
                if (aliasUpdated.Date < DateTime.UtcNow.AddMonths(-1).Date)
                {
                    aliasUpdated = DateTime.UtcNow;
                    CreateAlias();
                }
            }
            else
            {
                if (aliasUpdated.Date < DateTime.UtcNow.AddDays(-1).Date)
                {
                    aliasUpdated = DateTime.UtcNow;
                    CreateAlias();
                }
            }           
        }
    }
}

Testing the audit log

The created audit trails can be checked using the following HTTP GET requests:

Counts all the audit trail entries in the alias.
http://localhost:9200/auditlog/_count

Shows all the audit trail indices. You can count all the documents from the indices used in the alias and it must match the count from the alias.
http://localhost:9200/_cat/indices/auditlog*

You can also start the application and the AuditTrail logs can be displayed in the Audit Trail logs MVC view.

(Screenshot: the Audit Trail logs MVC view)

This view is just a quick test; to implement it properly, you would have to localize the timestamp display and add proper paging to the view.

Notes, improvements

If lots of audit trail documents are written at once, a bulk insert could be used to add the documents in batches, as most loggers do. You should also define a strategy for how old audit trail indices are cleaned up or archived. The creation of the alias could also be optimized, depending on your audit trail data and how you clean up old audit trail indices.
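
If batching were added, a bulk variant of AddLog might look something like the following sketch, assuming the NEST IndexMany helper (this method is not part of the sample code):

public void AddLogs(IEnumerable<T> auditTrailLogs)
{
	// sends the whole batch to the current index in a single bulk request
	var response = _elasticClient.IndexMany(auditTrailLogs, _indexName);
	if (!response.IsValid)
	{
		throw new ElasticsearchClientException("Bulk add auditlog disaster!");
	}
}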

Links:

https://www.elastic.co/guide/en/elasticsearch/reference/5.2/indices-aliases.html

https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-simple-query-string-query.html

https://docs.microsoft.com/en-us/aspnet/core/

https://www.elastic.co/products/elasticsearch

https://github.com/elastic/elasticsearch-net

https://www.nuget.org/packages/NLog.Web.AspNetCore/



Dominick Baier: NDC London 2017

As always – NDC was a very good conference. Brock and I did a workshop, two talks and an interview. Here are the relevant links:

Check our website for more training dates.


Filed under: .NET Security, ASP.NET, IdentityModel, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: IdentityModel.OidcClient v2 & the OpenID RP Certification

A couple of weeks ago I started re-writing (and re-designing) my OpenID Connect & OAuth 2 client library for native applications. The library follows the guidance from the OpenID Connect and OAuth 2.0 for Native Applications specification.

Main features are:

  • Support for OpenID Connect authorization code and hybrid flow
  • Support for PKCE
  • NetStandard 1.4 library, which makes it compatible with x-plat .NET Core, desktop .NET, Xamarin iOS & Android (and UWP soon)
  • Configurable policy to lock down security requirements (e.g. requiring at_hash or c_hash, policies around discovery etc.)
  • Either stand-alone mode (request generation and response processing – see the sketch below) or support for pluggable (system) browser implementations
  • Support for pluggable logging via .NET ILogger
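
To give an idea of what the stand-alone mode looks like, here is a rough sketch based on the v2 API surface. Names may still change before the final 2.0.0 release, and the client id, redirect URI and browser helper are made up for the example:

// Sketch of stand-alone mode: prepare the authorize request, hand the start URL
// to a browser of your choice, then process the response when it comes back.
var options = new OidcClientOptions
{
    Authority = "https://demo.identityserver.io",
    ClientId = "native.code",          // made-up client id
    RedirectUri = "myapp://callback",  // made-up redirect URI
    Scope = "openid profile"
};

var client = new OidcClient(options);

// Creates the authorize request (including PKCE) and returns state to keep around.
var state = await client.PrepareLoginAsync();
OpenSystemBrowser(state.StartUrl);     // made-up helper that launches the system browser

// 'response' is the callback URL (or form post body) returned by the browser.
var response = "myapp://callback?code=...&state=...";   // illustrative only
var result = await client.ProcessResponseAsync(response, state);

if (!result.IsError)
{
    Console.WriteLine(result.AccessToken);
}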

In addition, starting with v2 – OidcClient is also now certified by the OpenID Foundation for the basic and config profile.

[Image: OpenID Certified mark]

It also passes all conformance tests for the code id_token grant type (hybrid flow) – but since I don’t support the other hybrid flow combinations (e.g. code token or code id_token token), I couldn’t certify for the full hybrid profile.

For maximum transparency, I checked in my conformance test runner along with the source code. Feel free to try/verify yourself.

The latest version of OidcClient is the dalwhinnie release (courtesy of my whisky semver scheme). Source code is here.

I am waiting a couple more days for feedback – and then I will release the final 2.0.0 version. If you have some spare time, please give it a try (there’s a console client included and some more samples here <use the v2 branch for the time being>). Thanks!


Filed under: .NET Security, IdentityModel, OAuth, OpenID Connect, WebAPI


Dominick Baier: Platforms where you can run IdentityServer4

There is some confusion about where, and on which platform/OS you can run IdentityServer4 – or more generally speaking: ASP.NET Core.

IdentityServer4 is ASP.NET Core middleware – and ASP.NET Core (despite its name) runs on the full .NET Framework 4.5.x and upwards or .NET Core.

If you are using the full .NET Framework you are tied to Windows – but have the advantage of using a platform that you (and your devs, customers, support staff etc) already know well. It is just a .NET based web app at this point.

If you are using .NET Core, you get the benefits of the new stack including side-by-side versioning and cross-platform. But there is a learning curve involved getting to know .NET Core and its tooling.


Filed under: .NET Security, ASP.NET, IdentityServer, OpenID Connect, WebAPI


Henrik F. Nielsen: ASP.NET WebHooks V1 RTM (Link)

ASP.NET WebHooks V1 RTM was announced a little while back. WebHooks provide a simple pub/sub model for wiring together Web APIs and services with your code. A WebHook can be used to get notified when a file has changed in Dropbox, a code change has been committed to GitHub, a payment has been initiated in PayPal, a card has been created in Trello, and much more. When subscribing, you provide a callback URI where you want to be notified. When an event occurs, an HTTP POST request is sent to your callback URI with information about what happened so that your Web app can act accordingly. WebHooks happen without polling and with no need to hold open a network connection while waiting for notifications.

Microsoft ASP.NET WebHooks makes it easier to both send and receive WebHooks as part of your ASP.NET application.
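
To give a rough idea of the receiving side, a custom handler class derives from WebHookHandler and gets called when a matching WebHook arrives. This is only a sketch; the receiver name and payload handling are illustrative, and the receiver-specific configuration is covered in the official samples and docs:

// Sketch of a receiver-side handler (Microsoft.AspNet.WebHooks on ASP.NET Web API 2).
public class MyWebHookHandler : WebHookHandler
{
    public MyWebHookHandler()
    {
        // Optionally restrict this handler to a specific receiver (illustrative name).
        Receiver = "github";
    }

    public override Task ExecuteAsync(string receiver, WebHookHandlerContext context)
    {
        // The payload shape depends on the sender; here it is read as raw JSON
        // (JObject from Newtonsoft.Json.Linq).
        var data = context.GetDataOrDefault<JObject>();

        // ... act on the notification ...

        return Task.FromResult(true);
    }
}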

In addition to hosting your own WebHook server, ASP.NET WebHooks are part of Azure Functions where you can process WebHooks without hosting or managing your own server! You can even go further and host an Azure Bot Service using Microsoft Bot Framework for writing cool bots talking to your customers!

The WebHook code targets ASP.NET Web API 2 and ASP.NET MVC 5, and is available as Open Source on GitHub, and as Nuget packages. For feedback, fixes, and suggestions, you can use GitHub, StackOverflow using the tag asp.net-webhooks, or send me a tweet.

For the full announcement, please see the blog Announcing Microsoft ASP.NET WebHooks V1 RTM.

Have fun!

Henrik


Dominick Baier: Bootstrapping OpenID Connect: Discovery

OpenID Connect clients and APIs need certain configuration values to initiate the various protocol requests and to validate identity and access tokens. You can either hard-code these values (e.g. the URL to the authorize and token endpoint, key material etc..) – or get those values dynamically using discovery.

Using discovery has advantages in case one of the needed values changes over time. This will definitely be the case for the key material you use to sign your tokens. In that scenario you want your token consumers to be able to dynamically update their configuration without having to take them down or re-deploy.

The idea is simple: every OpenID Connect provider should offer a JSON document under the /.well-known/openid-configuration URL below its base address (often also called the authority). This document has information about the issuer name, endpoint URLs, key material and capabilities of the provider, e.g. which scopes or response types it supports.

Try https://demo.identityserver.io/.well-known/openid-configuration as an example.

Our IdentityModel library has a little helper class that allows loading and parsing a discovery document, e.g.:

var disco = await DiscoveryClient.GetAsync("https://demo.identityserver.io");
Console.WriteLine(disco.Json);

It also provides strongly typed accessors for most elements, e.g.:

Console.WriteLine(disco.TokenEndpoint);

..or you can access the elements by name:

Console.WriteLine(disco.Json.TryGetString("introspection_endpoint"));

It also gives you access to the key material and the various properties of the JSON encoded key set – e.g. iterating over the key ids:

foreach (var key in disco.KeySet.Keys)
{
    Console.WriteLine(key.Kid);
}

Discovery and security
As you can imagine, the discovery document is a nice target for an attacker. Being able to manipulate the endpoint URLs or the key material would ultimately result in a compromise of a client or an API.

As opposed to e.g. WS-Federation/WS-Trust metadata, the discovery document is not signed. Instead OpenID Connect relies on transport security for authenticity and integrity of the configuration data.

Recently we’ve been involved in a penetration test against client libraries, and one technique the pen-testers used was compromising discovery. Based on their feedback, the following extra checks should be done when consuming a discovery document:

  • HTTPS must be used for the discovery endpoint and all protocol endpoints
  • The issuer name should match the authority specified when downloading the document (that’s actually a MUST in the discovery spec)
  • The protocol endpoints should be “beneath” the authority – and not on a different server or URL (this could be especially interesting for multi-tenant OPs)
  • A key set must be specified

Based on that feedback, we added a configurable validation policy to DiscoveryClient that defaults to the above recommendations. If for whatever reason (e.g. dev environments) you need to relax a setting, you can use the following code:

var client = new DiscoveryClient("http://dev.identityserver.internal");
client.Policy.RequireHttps = false;
 
var disco = await client.GetAsync();

Btw – you can always connect over HTTP to localhost and 127.0.0.1 (but this is also configurable).

Source code here, nuget here.


Filed under: OAuth, OpenID Connect, WebAPI


Dominick Baier: Trying IdentityServer4

We have a number of options how you can experiment or get started with IdentityServer4.

Starting point
It all starts at https://identityserver.io – from there you can find all the links below as well as our next workshop dates, consulting, production support etc.

Source code
You can find all the source code in our IdentityServer organization on github. Especially IdentityServer4 itself, the samples, and the access token validation middleware.

Nuget
Here’s a list of all our nugets – here’s IdentityServer4, here’s the validation middleware.

Documentation and tutorials
Documentation can be found here. Especially useful to get started are our tutorials.

Demo Site
We have a demo site at https://demo.identityserver.io that runs the latest version of IdentityServer4. We have also pre-configured a number of client types, e.g. hybrid and authorization code (with and without PKCE) as well as implicit and client credentials flow. You can use this site to try IdentityServer with your favourite OpenID Connect client library. There is also a test API that you can call with our access tokens.
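
For example, a client credentials request against the demo instance with IdentityModel's TokenClient looks roughly like this. The client id, secret and scope below are placeholders; use the values published on the demo site:

// Sketch: discover the endpoints, then request an access token (placeholder credentials).
var disco = await DiscoveryClient.GetAsync("https://demo.identityserver.io");

var tokenClient = new TokenClient(disco.TokenEndpoint, "my.client", "secret");
var tokenResponse = await tokenClient.RequestClientCredentialsAsync("api");

if (!tokenResponse.IsError)
{
    Console.WriteLine(tokenResponse.AccessToken);
}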

Compatibility check
Here’s a repo that contains all permutations of IdentityServer3 and 4, Katana and ASP.NET Core Web APIs and JWTs and reference tokens. We use this test harness to ensure cross version compatibility. Feel free to try it yourself.

CI builds
Our CI feed can be found here.

HTH


Filed under: .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: IdentityServer4 1.0.0

It’s done.

Release notes here.

Nuget here.

Docs here.

I am off to holidays.

See you next year.


Filed under: .NET Security, ASP.NET, OAuth, OpenID Connect, WebAPI


Dominick Baier: IdentityServer4 is now OpenID Certified

As of today – IdentityServer4 is officially certified by the OpenID Foundation. Release of 1.0 will be this Friday!

More details here.

[Image: OpenID Certified mark]


Filed under: .NET Security, OAuth, WebAPI


Dominick Baier: Identity vs Permissions

We often see people misusing IdentityServer as an authorization/permission management system. This is troublesome – here’s why.

IdentityServer (hence the name) is really good at providing a stable identity for your users across all applications in your system. And with identity I mean immutable identity (at least for the lifetime of the session) – typical examples would be a user id (aka the subject id), a name, department, email address, customer id etc…

IdentityServer is not so well suited for letting clients or APIs know what this user is allowed to do – e.g. create a customer record, delete a table, read a certain document etc…

And this is not inherently a weakness of IdentityServer – but IdentityServer is a token service, and it’s a fact that claims and especially tokens are not a particularly good medium for transporting such information. Here are a couple of reasons:

  • Claims are supposed to model the identity of a user, not permissions
  • Claims are typically simple strings – you often want something more sophisticated to model authorization information or permissions
  • Permissions of a user are often different depending on which client or API they are using – putting them all into a single identity or access token is confusing and leads to problems. The same permission might even have a different meaning depending on who is consuming it
  • Permissions can change over the lifetime of a session, but the only way to get a new token is to make a roundtrip to the token service. This often requires some UI interaction, which is not desirable
  • Permissions and business logic often overlap – where do you want to draw the line?
  • The only party that knows exactly about the authorization requirements of the current operation is the actual code where it happens – the token service can only provide coarse grained information
  • You want to keep your tokens small. Browser URL length restrictions and bandwidth are often limiting factors
  • And last but not least – it is easy to add a claim to a token. It is very hard to remove one. You never know if somebody already took a hard dependency on it. Every single claim you add to a token should be scrutinized.

In other words – keep permissions and authorization data out of your tokens. Add the authorization information to your context once you get closer to the resource that actually needs the information. And even then, it is tempting to model permissions using claims (the Microsoft services and frameworks kind of push you into that direction) – keep in mind that a simple string is a very limiting data structure. Modern programming languages have much better constructs than that.
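
For ASP.NET Core based APIs, one way to do this is the policy-based authorization model, where a handler loads the permission data at the point where the resource is actually accessed. The following is only a sketch; the IPermissionStore service and the CanDeleteTable policy are made up for illustration:

// Sketch: resolve permissions close to the resource instead of putting them into the token.
public class TableDeleteRequirement : IAuthorizationRequirement { }

public class TableDeleteHandler : AuthorizationHandler<TableDeleteRequirement>
{
    private readonly IPermissionStore _permissions;   // made-up permission store

    public TableDeleteHandler(IPermissionStore permissions)
    {
        _permissions = permissions;
    }

    protected override async Task HandleRequirementAsync(
        AuthorizationHandlerContext context, TableDeleteRequirement requirement)
    {
        // Use the stable identity (the subject id) from the token to look up the permission.
        var subjectId = context.User.FindFirst("sub")?.Value;

        if (subjectId != null && await _permissions.CanDeleteTableAsync(subjectId))
        {
            context.Succeed(requirement);
        }
    }
}

// In Startup.ConfigureServices:
// services.AddSingleton<IAuthorizationHandler, TableDeleteHandler>();
// services.AddAuthorization(options =>
//     options.AddPolicy("CanDeleteTable", p => p.Requirements.Add(new TableDeleteRequirement())));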

What about roles?
That’s a very common question. Roles are a bit of a grey area between identity and authorization. My rule of thumb is that if a role is a fundamental part of the user identity that is of interest to every part of your system – and role membership does not change, or changes only infrequently – it is a candidate for a claim in a token. Examples could be Customer vs Employee – or Patient vs Doctor vs Nurse.

Every other usage of roles – especially where role membership differs based on the client or API being used – is pure authorization data and should be avoided. If you realize that the number of roles of a user is high – or growing – avoid putting them into the token.

Conclusion
Design for a clean separation of identity and permissions (which is just a re-iteration of authentication vs authorization). Acquire authorization data as close as possible to the code that needs it – only there you can make an informed decision what you really need.

I also often get the question whether we have a similarly flexible solution for authorization as we have with IdentityServer for authentication – and the answer is – right now – no. But I have the feeling that 2017 will be our year to finally tackle the authorization problem. Stay tuned!


Filed under: .NET Security, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: Optimizing Identity Tokens for size

Generally speaking, you want to keep your (identity) tokens small. They often need to be transferred via length constrained transport mechanisms – especially the browser URL which might have limitations (e.g. 2 KB in IE). You also need to somehow store the identity token for the length of a session if you want to use the post logout redirect feature at logout time.

Therefore the OpenID Connect specification suggests the following (in section 5.4):

The Claims requested by the profile, email, address, and phone scope values are returned from the UserInfo Endpoint, as described in Section 5.3.2, when a response_type value is used that results in an Access Token being issued. However, when no Access Token is issued (which is the case for the response_type value id_token), the resulting Claims are returned in the ID Token.

IOW – if only an identity token is requested, put all claims into the token. If however an access token is requested as well (e.g. via id_token token or code id_token), it is OK to remove the claims from the identity token and rather let the client use the userinfo endpoint to retrieve them.

That’s how we always handled identity token generation in IdentityServer by default. You could then override our default behaviour by setting the AlwaysIncludeInIdToken flag on the ScopeClaim class.

When we did the configuration re-design in IdentityServer4, we asked ourselves if this override feature is still required. Times have changed a bit and the popular client libraries out there (e.g. the ASP.NET Core OpenID Connect middleware or Brock’s JS client) automatically use the userinfo endpoint anyways as part of the authentication process.

So we removed it.

Shortly after that, several people brought to our attention that they were actually relying on that feature and are now missing their claims in the identity token without a way to change configuration. Sorry about that.

Post RC5, we brought this feature back – it is now a client setting, and not a claims setting anymore. It will be included in RTM next week and documented in our docs.
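
Based on the current client configuration model, the setting will look roughly like this (the property name is assumed from the RC builds, so check the docs once RTM is out):

// Sketch: per-client opt-in to always include the user claims in the identity token.
var client = new Client
{
    ClientId = "mvc.client",                    // illustrative client id
    AllowedGrantTypes = GrantTypes.Hybrid,
    AllowedScopes = { "openid", "profile" },

    // Put the requested identity claims directly into the id_token instead of
    // requiring a round-trip to the userinfo endpoint.
    AlwaysIncludeUserClaimsInIdToken = true
};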

I hope this post explains our motivation, and some background, why this behaviour existed in the first place.


Filed under: .NET Security, IdentityServer, OpenID Connect, WebAPI


Dominick Baier: IdentityServer4 and ASP.NET Core 1.1

aka RC5 – last RC – promised!

The update from ASP.NET Core 1.0 (aka LTS – long term support) to ASP.NET Core 1.1 (aka Current) didn’t go so well (at least IMHO).

There were a couple of breaking changes both on the APIs as well as in behaviour. Especially around challenge/response based authentication middleware and EF Core.

Long story short – it was not possible for us to make IdentityServer support both versions. That’s why we decided to move to 1.1, which includes a bunch of bug fixes, and will also most probably be the version that ships with the new Visual Studio.

To be more specific – we build against ASP.NET Core 1.1 and the 1.0.0-preview2-003131 SDK.

Here’s a guide that describes how to update your host to 1.1. Our docs and samples have been updated.


Filed under: ASP.NET, OAuth, OpenID Connect, WebAPI


Ben Foster: Bare metal APIs with ASP.NET Core MVC

ASP.NET Core MVC now provides a true "one asp.net" framework that can be used for building both APIs and websites. But what if you only want to build an API?

Most of the ASP.NET Core MVC tutorials I've seen advise using the Microsoft.AspNetCore.Mvc package. While this does indeed give you what you need to build APIs, it also gives you a lot more:

  • Microsoft.AspNetCore.Mvc.ApiExplorer
  • Microsoft.AspNetCore.Mvc.Cors
  • Microsoft.AspNetCore.Mvc.DataAnnotations
  • Microsoft.AspNetCore.Mvc.Formatters.Json
  • Microsoft.AspNetCore.Mvc.Localization
  • Microsoft.AspNetCore.Mvc.Razor
  • Microsoft.AspNetCore.Mvc.TagHelpers
  • Microsoft.AspNetCore.Mvc.ViewFeatures
  • Microsoft.Extensions.Caching.Memory
  • Microsoft.Extensions.DependencyInjection
  • NETStandard.Library

A few of these packages are still needed if you're building APIs but many are specific to building full websites.

After installing the above package we typically register MVC in Startup.ConfigureServices like so:

services.AddMvc();

This code is responsible for wiring up the necessary MVC services with the application container. Let's look at what this actually does:

public static IMvcBuilder AddMvc(this IServiceCollection services)
{
    var builder = services.AddMvcCore();

    builder.AddApiExplorer();
    builder.AddAuthorization();

    AddDefaultFrameworkParts(builder.PartManager);

    // Order added affects options setup order

    // Default framework order
    builder.AddFormatterMappings();
    builder.AddViews();
    builder.AddRazorViewEngine();
    builder.AddCacheTagHelper();

    // +1 order
    builder.AddDataAnnotations(); // +1 order

    // +10 order
    builder.AddJsonFormatters();

    builder.AddCors();

    return new MvcBuilder(builder.Services, builder.PartManager);
}

Again most of the service registration refers to the components used for rendering web pages.

Bare Metal APIs

It turns out that the ASP.NET team anticipated that developers may only want to build APIs and nothing else, so they gave us the ability to do just that.

First of all, rather than installing Microsoft.AspNetCore.Mvc, only install Microsoft.AspNetCore.Mvc.Core. This will give you the bare MVC middleware (routing, controllers, HTTP results) and not a lot else.

In order to process JSON requests and return JSON responses we also need the Microsoft.AspNetCore.Mvc.Formatters.Json package.

Then, to add both the core MVC middleware and JSON formatter, add the following code to ConfigureServices:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvcCore()
        .AddJsonFormatters();
}

The final thing to do is to change your controllers to derive from ControllerBase instead of Controller. This provides a base class for MVC controllers without any View support.
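
A minimal controller then looks something like this (the route and data are illustrative):

// A bare API controller - no View support, just routing, model binding and JSON results.
[Route("api/[controller]")]
public class ValuesController : ControllerBase
{
    [HttpGet]
    public IActionResult Get()
    {
        return Ok(new[] { "value1", "value2" });
    }

    [HttpGet("{id}")]
    public IActionResult Get(int id)
    {
        return Ok(new { id });
    }
}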

Looking at the final list of packages in project.json, you can see we really don't need that much after all, especially given most of these are related to configuration and logging:

"Microsoft.AspNetCore.Mvc.Core": "1.1.0",
"Microsoft.AspNetCore.Mvc.Formatters.Json": "1.1.0",
"Microsoft.AspNetCore.Server.IISIntegration": "1.1.0",
"Microsoft.AspNetCore.Server.Kestrel": "1.1.0",
"Microsoft.Extensions.Configuration.EnvironmentVariables": "1.1.0",
"Microsoft.Extensions.Configuration.FileExtensions": "1.1.0",
"Microsoft.Extensions.Configuration.Json": "1.1.0",
"Microsoft.Extensions.Configuration.CommandLine": "1.1.0",
"Microsoft.Extensions.Logging": "1.1.0",
"Microsoft.Extensions.Logging.Console": "1.1.0",
"Microsoft.Extensions.Logging.Debug": "1.1.0"

You can find the complete code on GitHub.

