Andrew Lock: Customising ASP.NET Core Identity EF Core naming conventions for PostgreSQL

Customising ASP.NET Core Identity EF Core naming conventions for PostgreSQL

ASP.NET Core Identity is an authentication and membership system that lets you easily add login functionality to your ASP.NET Core application. It is designed in a modular fashion, so you can use any "stores" for users and claims that you like, but out of the box it uses Entity Framework Core to store the entities in a database.

By default, EF Core uses naming conventions for the database entities that are typical for SQL Server. In this post I'll describe how to configure your ASP.NET Core Identity app to replace the database entity names with conventions that are more common to PostgreSQL.

I'm focusing on ASP.NET Core Identity here, where the entity table name mappings have already been defined, but there's actually nothing specific to ASP.NET Core Identity in this post. You can just as easily apply this post to EF Core in general, and use more PostgreSQL-friendly conventions for all your EF Core code. See here for the tl;dr code!

Moving to PostgreSQL as a SQL Server aficionado

ASP.NET Core Identity can use any database provider that is supported by EF Core - some of which are provided by Microsoft, others are third-party or open source components. If you use the templates that come with the .NET CLI via dotnet new, you can choose SQL Server or SQLite by default. Personally, I've been working more and more with PostgreSQL, the powerful cross-platform, open source database.

As someone who's familiar with SQL Server, one of the biggest differences that can bite you when you start working with PostgreSQL is that table and column names are case sensitive! (Strictly speaking, unquoted identifiers are folded to lowercase, so anything created with uppercase characters must be quoted from then on.) This certainly takes some getting used to, and, frankly, is a royal pain in the arse to work with if you stick to your old habits. If a table is created with uppercase characters in the table or column name, then you have to ensure you get the case right and wrap the identifiers in double quotes, as I'll show shortly.

This is unfortunate when you come from a SQL Server world, where camel-case is the norm for table and column names. For example, imagine you have a table called AspNetUsers, and you want to retrieve the Id, Email and EmailConfirmed fields:

[Image: the AspNetUsers table with its CamelCase table and column names]

To query this table in PostgreSQL, you'd have to do something like:

SELECT "Id", "Email", "EmailConfirmed" FROM "AspNetUsers"  

Notice the quote marks we need? This only gets worse when you need to escape the quotes because you're calling from the command line, or defining a SQL query in a C# string, for example:

$ psql -d DatabaseWithCaseIssues -c "SELECT \"Id\", \"Email\", \"EmailConfirmed\" FROM \"AspNetUsers\" "
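The same escaping is needed when the query lives in a C# string:

var sql = "SELECT \"Id\", \"Email\", \"EmailConfirmed\" FROM \"AspNetUsers\"";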

Clearly nobody wants to be dealing with this. Instead, the convention in PostgreSQL is to use snake_case rather than CamelCase for database objects.

snake_case > CamelCase in PostgreSQL

Snake case uses lowercase for all of the identifiers, and instead of using capitals to demarcate words, it uses an underscore, _. This is perfect for PostgreSQL, as it neatly avoids the case issue. If we could rename our entity table names to asp_net_users, and the corresponding fields to id, email and email_confirmed, then we'd completely side-step the quoting issue:

[Image: the same table renamed to asp_net_users, with snake_case column names]

This makes the PostgreSQL queries way simpler, especially when you would otherwise need to escape the quote marks:

$ psql -d DatabaseWithCaseIssues -c "SELECT id, email, email_confirmed FROM asp_net_users"

If you're using EF Core, then in theory none of this would matter to you. The whole point is that you don't have to write SQL code yourself, and you can just let the underlying framework generate the necessary queries. If you use CamelCase names, then the EF Core PostgreSQL database provider will happily escape all the entity names for you.

Unfortunately, reality is a pesky beast. It's just a matter of time before you find yourself wanting to write some sort of custom query directly against the database to figure out what's going on. More often than not, if it comes to this, it's because there's an issue in production and you're trying to figure out what went wrong. The last thing you need at this stressful time is to be messing with casing issues!

Consequently, I like to ensure my database tables are easy to query, even if I'll be using EF Core or some other ORM 99% of the time.

EF Core conventions and ASP.NET Core Identity

ASP.NET Core Identity takes care of many aspects of the identity and membership system of your app for you. In particular, it creates and manages the application user, claim and role entities for you, as well as a variety of entities related to third-party logins:

[Image: the default ASP.NET Core Identity entities and their relationships]

If you're using the EF Core package for ASP.NET Core Identity, these entities are added to an IdentityDbContext, and configured within the OnModelCreating method. If you're interested, you can view the source online - I've shown a partial definition below that just includes the configuration for the Users property, which represents the users of your app:

public abstract class IdentityDbContext<TUser>  
{
    public DbSet<TUser> Users { get; set; }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        builder.Entity<TUser>(b =>
        {
            b.HasKey(u => u.Id);
            b.HasIndex(u => u.NormalizedUserName).HasName("UserNameIndex").IsUnique();
            b.HasIndex(u => u.NormalizedEmail).HasName("EmailIndex");
            b.ToTable("AspNetUsers");
        });
        // additional configuration
    }
}

The IdentityDbContext uses the OnModelCreating method to configure the database schema. In particular, it defines the name of the user table to be "AspNetUsers" and sets the name of a number of indexes. The column names of the entities default to their C# property values, so they would also be CamelCased.

In your application, you would typically derive your own DbContext from the IdentityDbContext<>, and inherit all of the schema associated with ASP.NET Core Identity. In the example below I've done this, and specified the TUser type for the application to be ApplicationUser:

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>  
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    { }
}

With the configuration above, the database schema would use all of the default values, including the table names, and would give the database schema we saw previously. Luckily, we can override these values and replace them with our snake case values instead.

Replacing specific values with snake case

As is often the case, there are multiple ways to achieve our desired behaviour of mapping to snake case properties. The simplest conceptually is to just overwrite the values specified in IdentityDbContext.OnModelCreating() with new values. The later values will be used to generate the database schema. We simply override the OnModelCreating() method, call the base method, and then replace the values with our own:

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>  
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    { }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        base.OnModelCreating(builder);

        builder.Entity<ApplicationUser>(b =>
        {
            b.HasKey(u => u.Id);
            b.HasIndex(u => u.NormalizedUserName).HasName("user_name_index").IsUnique();
            b.HasIndex(u => u.NormalizedEmail).HasName("email_index");
            b.ToTable("asp_net_users");
        });
        // additional configuration
    }
}

Unfortunately, there's a problem with this. EF Core uses conventions to set the names for entities and properties where you don't explicitly define their schema name. In the example above, we didn't define the property names, so they will be CamelCase by default.

If we want to override these, then we need to add additional configuration for each entity property:

b.Property(u => u.EmailConfirmed).HasColumnName("email_confirmed");  

Every. Single. Property.

Talk about laborious and fragile…

Clearly we need another way. Instead of trying to explicitly replace each value, we can use a different approach, which essentially creates alternative conventions based on the existing ones.

Replacing the default conventions with snake case

The ModelBuilder instance that is passed to the OnModelCreating() method contains all the details of the database schema that will be created. By default, the database object names will all be CamelCased.

By overriding the OnModelCreating method, you can loop through each table, column, foreign key and index, and replace the existing value with its snake case equivalent. The following example shows how you can do this for every entity in the EF Core model. The ToSnakeCase() extension method (shown shortly) converts a camel case string to a snake case string.

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>  
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    { }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        base.OnModelCreating(builder);

        foreach(var entity in builder.Model.GetEntityTypes())
        {
            // Replace table names
            entity.Relational().TableName = entity.Relational().TableName.ToSnakeCase();

            // Replace column names            
            foreach(var property in entity.GetProperties())
            {
                property.Relational().ColumnName = property.Name.ToSnakeCase();
            }

            foreach(var key in entity.GetKeys())
            {
                key.Relational().Name = key.Relational().Name.ToSnakeCase();
            }

            foreach(var key in entity.GetForeignKeys())
            {
                key.Relational().Name = key.Relational().Name.ToSnakeCase();
            }

            foreach(var index in entity.GetIndexes())
            {
                index.Relational().Name = index.Relational().Name.ToSnakeCase();
            }
        }
    }
}

The ToSnakeCase() method is just a simple extension method that looks for a lower case letter or number, followed by a capital letter, and inserts an underscore. There's probably a better / more efficient way to achieve this, but it does the job!

using System.Text.RegularExpressions;

public static class StringExtensions  
{
    public static string ToSnakeCase(this string input)
    {
        if (string.IsNullOrEmpty(input)) { return input; }

        var startUnderscores = Regex.Match(input, @"^_+");
        return startUnderscores + Regex.Replace(input, @"([a-z0-9])([A-Z])", "$1_$2").ToLower();
    }
}
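As a quick sanity check, here's what ToSnakeCase() produces for some of the default Identity names:

"AspNetUsers".ToSnakeCase();        // "asp_net_users"
"NormalizedUserName".ToSnakeCase(); // "normalized_user_name"
"Id".ToSnakeCase();                 // "id"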

These conventions will replace all the database object names with snake case values, but there's one table that won't be modified, the actual migrations table. This is defined when you call UseNpgsql() or UseSqlServer(), and by default is called __EFMigrationsHistory. You'll rarely need to query it outside of migrations, so I won't worry about it for now.
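If you do want the migrations history table to follow the same convention, the relational providers let you override its name when you configure the provider. A minimal sketch, where the snake case name is my own choice:

services.AddDbContext<ApplicationDbContext>(options =>
    options.UseNpgsql(
        Configuration.GetConnectionString("DefaultConnection"),
        x => x.MigrationsHistoryTable("__ef_migrations_history")));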

With our new conventions in place, we can add the EF Core migrations for our snake case schema. If you're starting from one of the VS or dotnet new templates, delete the default migration files created by ASP.NET Core Identity:

  • 00000000000000_CreateIdentitySchema.cs
  • 00000000000000_CreateIdentitySchema.Designer.cs
  • ApplicationDbContextModelSnapshot.cs

and create a new set of migrations using:

$ dotnet ef migrations add SnakeCaseIdentitySchema

Finally, you can apply the migrations using:

$ dotnet ef database update

After the update, you can see that the database schema has been suitably updated. We have snake case table names, as well as snake case columns (you can take my word for it on the foreign keys and indexes!):

[Image: the migrated database schema, showing snake_case table and column names]

Now we have the best of both worlds - we can use EF Core for all our standard database interactions, but we have the option of hand-crafting SQL queries without crazy amounts of ceremony.

Note, although this article focused on ASP.NET Core Identity, it is perfectly applicable to EF Core in general.

Summary

In this post, I showed how you could modify the OnModelCreating() method so that EF Core uses snake case for database objects instead of camel case. You can look through all the entities in EF Core's model, and change the table names, column names, keys, and indexes to use snake case. For more details on the default EF Core conventions, I recommend perusing the documentation!


Andrew Lock: Building ASP.NET Core 2.0 preview 2 packages on AppVeyor

Building ASP.NET Core 2.0 preview 2 packages on AppVeyor

I was recently creating a new GitHub project and I wanted to target ASP.NET Core 2.0 preview 2. I like to use AppVeyor for the CI build and for publishing to MyGet/NuGet, as I can typically just copy and paste a single file between projects to get my standard build pipeline. Unfortunately, targeting the latest preview is easier said than done! In this post, I'll show how to update your appveyor.yml file so you can build your .NET Core preview libraries on AppVeyor.

Building .NET Core projects on AppVeyor

If you are targeting a .NET Core SDK version that AppVeyor explicitly supports, then you don't really have to do anything - a simple appveyor.yml file will handle everything for you. For example, the following is a (somewhat abridged) version I use on some of my existing projects:

version: '{build}'  
branches:  
  only:
  - master
clone_depth: 1  
nuget:  
  disable_publish_on_pr: true
build_script:  
- ps: .\Build.ps1
test: off  
artifacts:  
- path: .\artifacts\**\*.nupkg
  name: NuGet
deploy:  
- provider: NuGet
  name: production
  api_key:
    secure: xxxxxxxxxxxx
  on:
    branch: master
    appveyor_repo_tag: true

There's really nothing fancy here; most of this configuration defines when AppVeyor should run a build, and how to deploy the NuGet package to NuGet. There's essentially no configuration of the target environment required - the build simply calls the Build.ps1 file to restore and build the project.
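If you haven't seen this pattern before, the build script is typically just a thin wrapper around the .NET CLI. A hypothetical, minimal Build.ps1 might look something like this (an illustration, not the actual script from my projects):

# Build.ps1 (illustrative): restore, build and pack, placing packages in .\artifacts
dotnet restore
dotnet build --configuration Release
dotnet pack --configuration Release --output .\artifacts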

I've switched to using Cake for most of my projects these days, often based on a script from Muhammad Rehan Saeed. If this is your first time using AppVeyor to build your projects, I suggest you take a look at my previous post on using AppVeyor.

Unfortunately, if you try and build a .NET Core 2.0 preview 2 project with this script, you'll be out of luck. I found I got random, nondescript errors, such as this one:

[Image: AppVeyor build log showing a nondescript build error]

Installing .NET Core 2.0 preview 2 in AppVeyor

Luckily, AppVeyor makes it easy to install additional dependencies before running your build script - you just add additional commands under the install node:

version: '{build}'  
pull_requests:  
  do_not_increment_build_number: true
install:  
  # Run additional commands here

The tricky part is working out exactly what to run! I couldn't find any official guidance on scripting the install, so I went hunting in some of the Microsoft GitHub repos. In particular, I found the JavaScriptServices repo, which manually installs .NET Core. The install node at the time of writing (for preview 1) was:

install:  
   # .NET Core SDK binaries
   - ps: $urlCurrent = "https://download.microsoft.com/download/3/7/F/37F1CA21-E5EE-4309-9714-E914703ED05A/dotnet-dev-win-x64.2.0.0-preview1-005977.exe"
   - ps: $env:DOTNET_INSTALL_DIR = "$pwd\.dotnetsdk"
   - ps: mkdir $env:DOTNET_INSTALL_DIR -Force | Out-Null
   - ps: $tempFileCurrent = [System.IO.Path]::Combine([System.IO.Path]::GetTempPath(), [System.IO.Path]::GetRandomFileName())
   - ps: (New-Object System.Net.WebClient).DownloadFile($urlCurrent, $tempFileCurrent)
   - ps: Add-Type -AssemblyName System.IO.Compression.FileSystem; [System.IO.Compression.ZipFile]::ExtractToDirectory($tempFileCurrent, $env:DOTNET_INSTALL_DIR)
   - ps: $env:Path = "$env:DOTNET_INSTALL_DIR;$env:Path"

There are a lot of commands in there. Most of it we can copy and paste, but the trickiest part is that download URL - GUIDs, really?

Luckily there's an easy way to find the URL for preview 2 - you can look at the release notes for the version of .NET Core you want to target.

[Image: the .NET Core release notes page, with links to the SDK binaries]

The link you want is the Windows 64-bit SDK binaries. Just right-click, copy the link, and paste it into the appveyor.yml to give the final file. The full AppVeyor file from my recent CommonPasswordValidator repository is shown below:

version: '{build}'  
pull_requests:  
  do_not_increment_build_number: true
environment:  
  DOTNET_SKIP_FIRST_TIME_EXPERIENCE: true
  DOTNET_CLI_TELEMETRY_OPTOUT: true
install:  
  # Download .NET Core 2.0 Preview 2 SDK and add to PATH
  - ps: $urlCurrent = "https://download.microsoft.com/download/F/A/A/FAAE9280-F410-458E-8819-279C5A68EDCF/dotnet-sdk-2.0.0-preview2-006497-win-x64.zip"
  - ps: $env:DOTNET_INSTALL_DIR = "$pwd\.dotnetsdk"
  - ps: mkdir $env:DOTNET_INSTALL_DIR -Force | Out-Null
  - ps: $tempFileCurrent = [System.IO.Path]::GetTempFileName()
  - ps: (New-Object System.Net.WebClient).DownloadFile($urlCurrent, $tempFileCurrent)
  - ps: Add-Type -AssemblyName System.IO.Compression.FileSystem; [System.IO.Compression.ZipFile]::ExtractToDirectory($tempFileCurrent, $env:DOTNET_INSTALL_DIR)
  - ps: $env:Path = "$env:DOTNET_INSTALL_DIR;$env:Path"  
branches:  
  only:
  - master
clone_depth: 1  
nuget:  
  disable_publish_on_pr: true
build_script:  
- ps: .\Build.ps1
test: off  
artifacts:  
- path: .\artifacts\**\*.nupkg
  name: NuGet
deploy:  
- provider: NuGet
  server: https://www.myget.org/F/andrewlock-ci/api/v2/package
  api_key:
    secure: xxxxxx
  skip_symbols: true
  on:
    branch: master
- provider: NuGet
  name: production
  api_key:
    secure: xxxxxx
  on:
    branch: master
    appveyor_repo_tag: true

Now when AppVeyor runs, you can see it running the install steps before running the build script:

[Image: AppVeyor build log showing the install steps running before the build script]

Using predictable download URLs

Shortly after battling with this issue, I took another look at the JavaScriptServices project, and noticed they'd switched to using nicer URLs for the SDK binaries. Instead of the horrible GUIDy URLs, you can use zip files stored on an Azure CDN. These URLs just require that you know the SDK version (including the build number).

It looks like preview 2 was the first version made available at these URLs, but later builds are also available if you want to work with the bleeding edge.

Summary

In this post I showed how you could use the install node of an appveyor.yml file to install ASP.NET Core 2.0 preview 2 into your AppVeyor build pipeline. This lets you target preview versions of .NET Core in your build pipeline, before they're explicitly supported by AppVeyor.


Andrew Lock: Creating a validator to check for common passwords in ASP.NET Core Identity

Creating a validator to check for common passwords in ASP.NET Core Identity

In my last post, I showed how you can create a custom validator for ASP.NET Core. In this post, I introduce a package that lets you validate that a password is not one of the most common passwords users choose.

You can find the package on GitHub and on NuGet, and can install it using dotnet add package CommonPasswordValidator. Currently, it supports ASP.NET Core 2.0 preview 2.

Full disclosure: this post is 100% inspired by Jeff Atwood's codinghorror.com article on how they validate passwords in Discourse. If you haven't read it yet, do it now!

As Jeff describes in the appropriately named article Password Rules Are Bullshit, password rules can be a real pain. Obviously in theory, password rules make sense, but reality can be a bit different. The default Identity templates require:

  • Passwords must have at least one lowercase ('a'-'z')
  • Passwords must have at least one uppercase ('A'-'Z')
  • Passwords must have at least one digit ('0'-'9')
  • Passwords must have at least one non alphanumeric character

All these rules will theoretically increase the entropy of any passwords a user enters. But you just know that's not really what happens.

All it means is that instead of entering password, they enter Password1!

And on top of that, if you're using a password manager, these password rules can get in the way. So your 40-character random password happens not to have a digit this time? Pretty sure it's still OK... should you really have to generate a new password?

Instead, Jeff Atwood suggests 5 pieces of advice for designing your password validation:

  1. Password rules are bullshit - These rarely achieve their goal, don't make the passwords of average users better, and penalise users using password managers.

    You can easily turn off these password rules in ASP.NET Core Identity by disabling the composition rules.

  2. Enforce a minimum Unicode password length - Length is an easy rule for users to grasp, and in general, a longer password will be more secure than a short one.

    You can similarly set the minimum length in ASP.NET Core Identity using the options pattern, e.g. options.Password.RequiredLength = 10

  3. Check for common passwords - There are plenty of stats on the terrible password choices users make when left to their own devices, and you can create your own by checking out the password lists available online. For example, 30% of users have a password from the top 10,000 most common passwords!

    In this post I'll describe a custom validator you can add to your ASP.NET Core Identity project to prevent users using the most common passwords.

  4. Check for basic entropy - Even with a length requirement, and checking for common passwords, users can make terrible password choices like 9999999999. A simple approach to tackling this is to require a minimum number of unique characters.

    In ASP.NET Core Identity 2.0, you can require a minimum number of unique characters using options.Password.RequiredUniqueChars = 6

  5. Check for special case passwords - Users shouldn't be allowed to use their username, email or other obvious values as their password.

You can create custom validators for ASP.NET Core Identity to handle this, as I showed in my previous post. A combined configuration covering points 1, 2 and 4 is sketched below.
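Taken together, points 1, 2 and 4 map directly onto the Identity options mentioned above. A combined configuration might look something like this (the specific numbers are illustrative):

services.AddIdentity<ApplicationUser, IdentityRole>(options =>
{
    // 1. Disable the composition rules
    options.Password.RequireLowercase = false;
    options.Password.RequireUppercase = false;
    options.Password.RequireDigit = false;
    options.Password.RequireNonAlphanumeric = false;

    // 2. Enforce a minimum length instead
    options.Password.RequiredLength = 10;

    // 4. A basic entropy check: require several unique characters
    options.Password.RequiredUniqueChars = 6;
})
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders();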

Whether you agree 100% with these rules doesn't really matter, but I think most people will agree with at least a majority of them. Either way, preventing the most common passwords is somewhat of a no-brainer.

There's no built-in way of achieving this, but thanks to ASP.NET Core Identity's extensibility, we can create a custom validator instead.

Creating a validator to check for common passwords

ASP.NET Core Identity lets you register custom password validators. These are executed when a user registers on your site, or changes their password, and let you apply additional constraints to the password.

In my last post, I showed how to create custom validators. Creating a validator to check for common passwords is pretty simple - we load the list of forbidden passwords into a HashSet, and check that the user's password is not one of them:

public class Top100PasswordValidator<TUser> : IPasswordValidator<TUser>  
        where TUser : class
{
    private static readonly HashSet<string> Passwords = PasswordLists.Top100Passwords;

    public Task<IdentityResult> ValidateAsync(UserManager<TUser> manager,
                                                TUser user,
                                                string password)
    {
        if(Passwords.Contains(password))
        {
            var result = IdentityResult.Failed(new IdentityError
            {
                Code = "CommonPassword",
                Description = "The password you chose is too common."
            });
            return Task.FromResult(result);
        }
        return Task.FromResult(IdentityResult.Success);
    }
}

This validator is pretty standard. We have a list of passwords that you are not allowed to use, stored in the static HashSet<string>. ASP.NET Core Identity will call ValidateAsync when a new user registers, passing in the new user object, and the new password.

As we don't need to access the user object itself, we can make this validator completely generic to TUser, instead of limiting it to IdentityUser<TKey> as we did in my last post.

There are plenty of different password lists we could choose from, so I chose to implement a few different variations based on the 10 million passwords list from 2016, depending on how restrictive you want to be:

  • Block passwords in the top 100 most common
  • Block passwords in the top 500 most common
  • Block passwords in the top 1,000 most common
  • Block passwords in the top 10,000 most common
  • Block passwords in the top 100,000 most common

Each of these password lists is stored as an embedded resource in the NuGet package. In the new .csproj file format, you do this by removing the file from the normal wildcard inclusion and marking it as an EmbeddedResource:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <None Remove="PasswordLists\10_million_password_list_top_100.txt" />
  </ItemGroup>

  <ItemGroup>
    <EmbeddedResource Include="PasswordLists\10_million_password_list_top_100.txt" />
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.Identity" Version="2.0.0-preview2-final" />
  </ItemGroup>

</Project>  

With the lists embedded in the dll, we can simply load the passwords from the embedded resource into a HashSet.

Loading a list of strings from an embedded resource

You can read an embedded resource as a stream from the assembly using the GetManifestResourceStream() method on the Assembly type. I created a small helper class that loads the embedded file from the assembly, reads it line by line, and adds each password to the HashSet (using a case-insensitive string comparer).

using System;
using System.Collections.Generic;
using System.IO;
using System.Reflection;

internal static class PasswordLists  
{
    private static HashSet<string> LoadPasswordList(string resourceName)
    {
        HashSet<string> hashset;

        var assembly = typeof(PasswordLists).GetTypeInfo().Assembly;
        using (var stream = assembly.GetManifestResourceStream(resourceName))
        {
            using (var streamReader = new StreamReader(stream))
            {
                hashset = new HashSet<string>(
                    GetLines(streamReader),
                    StringComparer.OrdinalIgnoreCase);
            }
        }
        return hashset;
    }

    private static IEnumerable<string> GetLines(StreamReader reader)
    {
        while (!reader.EndOfStream)
        {
            yield return reader.ReadLine();
        }
    }
}

NOTE: When you pass in the resourceName to load, it must be properly namespaced. The name is based on the default namespace of the assembly and the subfolder containing the resource file.
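For example, assuming the assembly's default namespace is CommonPasswordValidator and the list lives in the PasswordLists folder, the top-100 list could be exposed something like this (the property name here is illustrative):

internal static HashSet<string> Top100Passwords { get; } =
    LoadPasswordList("CommonPasswordValidator.PasswordLists.10_million_password_list_top_100.txt");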

Adding the custom validator to ASP.NET Core Identity

That's all there is to the validator itself. You can add it to the ASP.NET Core Identity validators collection using the AddPasswordValidator<>() method. For example:

services.AddIdentity<ApplicationUser, IdentityRole>()  
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders()
    .AddPasswordValidator<Top100PasswordValidator<ApplicationUser>>();

It's somewhat of a convention to create helper extension methods in ASP.NET Core, so we can easily add an additional extension that simplifies the above slightly:

public static class IdentityBuilderExtensions  
{        
    public static IdentityBuilder AddTop100PasswordValidator<TUser>(this IdentityBuilder builder) where TUser : class
    {
        return builder.AddPasswordValidator<Top100PasswordValidator<TUser>>();
    }
}

With this extension, you can add the validator using the following:

services.AddIdentity<ApplicationUser, IdentityRole>()  
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders()
    .AddTop100PasswordValidator<ApplicationUser>();

With the validator in place, if a user tries to use a password that's too common, they'll get a standard warning when registering on your site:

[Image: registration form showing the 'The password you chose is too common' validation error]

Summary

This post was based on the suggestion by Jeff Atwood that we should limit password composition rules, focus on length, and ensure users can't choose common passwords.

ASP.NET Core Identity lets you add custom validators. This post showed how you could create a validator that ensures the entered password isn't in the top 100 - 100,000 of the 10 million most common passwords.

You can view the source code for the validator on GitHub, or you can install the NuGet package using the command:

dotnet add package CommonPasswordValidator  

Currently, the package targets .NET Core 2.0 preview 2. If you have any comments, suggestions, or bugs, please raise an issue or leave a comment! Thanks


Anuraj Parameswaran: Send Mail Using SendGrid In .NET Core

This post is about sending emails using the SendGrid API in .NET Core. SendGrid is a cloud-based SMTP provider that allows you to send email without having to maintain email servers. SendGrid manages all of the technical details, from scaling the infrastructure to ISP outreach and reputation monitoring to whitelist services and real time analytics.


Damien Bowden: Implementing Two-factor authentication with IdentityServer4 and Twilio

This article shows how to implement two-factor authentication with IdentityServer4 and ASP.NET Core Identity, using Twilio. In Microsoft's Two-factor authentication with SMS documentation, Twilio and ASPSMS are promoted, but any SMS provider can be used.

Code: https://github.com/damienbod/AspNetCoreID4External

Setting up Twilio

Create an account and login to https://www.twilio.com/

Now create a new phone number and use the Twilio documentation to set up your account to send SMS messages. You need the Account SID, the Auth Token and the phone number, all of which are required in the application.

The phone number can be configured here:
https://www.twilio.com/console/phone-numbers/incoming

Adding the SMS support to IdentityServer4

Add the Twilio NuGet package to the IdentityServer4 project.

<PackageReference Include="Twilio" Version="5.5.2" />

The Twilio settings should be kept secret, so these configuration properties are added to the appsettings.json file with dummy values. The real values can then be supplied for the deployments.

"TwilioSettings": {
  "Sid": "dummy",
  "Token": "dummy",
  "From": "dummy"
}

A configuration class is then created so that the settings can be added to the DI.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace IdentityServerWithAspNetIdentity.Services
{
    public class TwilioSettings
    {
        public string Sid { get; set; }
        public string Token { get; set; }
        public string From { get; set; }
    }
}

Now the user secrets configuration needs to be set up on your dev PC. Right-click the IdentityServer4 project and add the user secrets with the proper values, which you can get from your Twilio account.

{
  "MicrosoftClientId": "your_secret..",
  "MircosoftClientSecret":  "your_secret..",
  "TwilioSettings": {
    "Sid": "your_secret..",
    "Token": "your_secret..",
    "From": "your_secret..",
  }
}
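Alternatively, the same secrets can be set from the command line using the user secrets tool, run from the project directory (assuming the user secrets tooling is referenced by the project; the values are placeholders):

dotnet user-secrets set "TwilioSettings:Sid" "your_secret.."
dotnet user-secrets set "TwilioSettings:Token" "your_secret.."
dotnet user-secrets set "TwilioSettings:From" "your_secret.."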

The configuration class is then added to the DI in the Startup class ConfigureServices method.

var twilioSettings = Configuration.GetSection("TwilioSettings");
services.Configure<TwilioSettings>(twilioSettings);

Now the TwilioSettings can be injected into the AuthMessageSender class, which is defined in the MessageServices file if you're using the IdentityServer4 samples.

private readonly TwilioSettings _twilioSettings;

public AuthMessageSender(ILogger<AuthMessageSender> logger, IOptions<TwilioSettings> twilioSettings)
{
	_logger = logger;
	_twilioSettings = twilioSettings.Value;
}

This class is also added to the DI in the Startup class.

services.AddTransient<ISmsSender, AuthMessageSender>();

Now the TwilioClient can be set up to send the SMS in the SendSmsAsync method.

public Task SendSmsAsync(string number, string message)
{
	// Plug in your SMS service here to send a text message.
	_logger.LogInformation("SMS: {number}, Message: {message}", number, message);
	var sid = _twilioSettings.Sid;
	var token = _twilioSettings.Token;
	var from = _twilioSettings.From;
	TwilioClient.Init(sid, token);
	MessageResource.CreateAsync(new PhoneNumber(number),
		from: new PhoneNumber(from),
		body: message);
	return Task.FromResult(0);
}
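Note that CreateAsync is fired off without being awaited, so any Twilio failure won't surface to the caller. If you'd rather propagate errors, a variation on the method above (my adjustment, not part of the original sample) could await the call:

public async Task SendSmsAsync(string number, string message)
{
	_logger.LogInformation("SMS: {number}, Message: {message}", number, message);
	TwilioClient.Init(_twilioSettings.Sid, _twilioSettings.Token);
	// Awaiting means exceptions from Twilio propagate to the caller
	await MessageResource.CreateAsync(new PhoneNumber(number),
		from: new PhoneNumber(_twilioSettings.From),
		body: message);
}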

The SendCode.cshtml view can now be changed to send the SMS with the style and layout you prefer.

<form asp-controller="Account" asp-action="SendCode" asp-route-returnurl="@Model.ReturnUrl" method="post" class="form-horizontal">
    <input asp-for="RememberMe" type="hidden" />
    <input asp-for="SelectedProvider" type="hidden" value="Phone" />
    <input asp-for="ReturnUrl" type="hidden" value="@Model.ReturnUrl" />
    <div class="row">
        <div class="col-md-8">
            <button type="submit" class="btn btn-default">Send a verification code using SMS</button>
        </div>
    </div>
</form>

In the VerifyCode.cshtml, the ReturnUrl from the model must be added to the form as a hidden input, otherwise your client will not be redirected back to the calling app.

<form asp-controller="Account" asp-action="VerifyCode" asp-route-returnurl="@ViewData["ReturnUrl"]" method="post" class="form-horizontal">
    <div asp-validation-summary="All" class="text-danger"></div>
    <input asp-for="Provider" type="hidden" />
    <input asp-for="RememberMe" type="hidden" />
    <input asp-for="ReturnUrl" type="hidden" value="@Model.ReturnUrl" />
    <h4>@ViewData["Status"]</h4>
    <hr />
    <div class="form-group">
        <label asp-for="Code" class="col-md-2 control-label"></label>
        <div class="col-md-10">
            <input asp-for="Code" class="form-control" />
            <span asp-validation-for="Code" class="text-danger"></span>
        </div>
    </div>
    <div class="form-group">
        <div class="col-md-offset-2 col-md-10">
            <div class="checkbox">
                <input asp-for="RememberBrowser" />
                <label asp-for="RememberBrowser"></label>
            </div>
        </div>
    </div>
    <div class="form-group">
        <div class="col-md-offset-2 col-md-10">
            <button type="submit" class="btn btn-default">Submit</button>
        </div>
    </div>
</form>

Testing the application

If using an existing client, you need to update the Identity data in the database. Each user requires that the TwoFactorEnabled field is set to true, and a mobile phone number needs to be set in the phone number field (or any phone which can accept SMS).

Now login with this user:

The user is redirected to the send SMS page. Click the send SMS button. This sends an SMS to the phone number defined in the Identity for the user trying to authenticate.

You should receive an SMS. Enter the code in the verify view. If no SMS was sent, check your Twilio account logs.

After a successful code validation, the user is redirected back to the consent page for the client application. If not redirected, the return url was not set in the model.

Links:

https://docs.microsoft.com/en-us/aspnet/core/security/authentication/2fa

https://www.twilio.com/

http://docs.identityserver.io/en/release/

https://www.twilio.com/use-cases/two-factor-authentication



Andrew Lock: Creating custom password validators for ASP.NET Core Identity

Creating custom password validators for ASP.NET Core Identity

ASP.NET Core Identity is a membership system that lets you add user accounts to your ASP.NET Core applications. It provides the low-level services for creating users, verifying passwords and signing users in to your application, as well as additional features such as two-factor authentication (2FA) and account lockout after too many failed attempts to login.

When users register on an application, they typically provide an email/username and a password. ASP.NET Core Identity lets you provide validation rules for the password, to try and prevent users from using passwords that are too simple.

In this post, I'll talk about the default password validation settings and how to customise them. Finally, I'll show how you can write your own password validator for ASP.NET Core Identity.

The default settings

By default, if you don't customise anything, Identity configures a default set of validation rules for new passwords:

  • Passwords must be at least 6 characters
  • Passwords must have at least one lowercase ('a'-'z')
  • Passwords must have at least one uppercase ('A'-'Z')
  • Passwords must have at least one digit ('0'-'9')
  • Passwords must have at least one non alphanumeric character

If you want to change these values, to increase the minimum length for example, you can do so when you add Identity to the DI container in ConfigureServices. In the following example I've increased the minimum password length from 6 to 10, and disabled the other validations:

Disclaimer: I'm not saying you should do this, it's just an example!

services.AddIdentity<ApplicationUser, IdentityRole>(options =>  
{
    options.Password.RequiredLength = 10;
    options.Password.RequireLowercase = false;
    options.Password.RequireUppercase = false;
    options.Password.RequireNonAlphanumeric = false;
    options.Password.RequireDigit = false;
})
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders();

Coming in ASP.NET Core 2.0

In ASP.NET Core Identity 2.0, which uses ASP.NET Core 2.0 (available as 2.0.0-preview2 at time of writing) you get another configurable default setting:

  • Passwords must use at least n different characters

This lets you guard against the (stupidly popular) password "111111" for example. By default, this setting is disabled for compatibility reasons (you only need 1 unique character), but you can enable it in a similar way. The following example requires passwords of length 10, with at least 6 unique characters, one upper, one lower, one digit, and one special character.

services.AddIdentity<ApplicationUser, IdentityRole>(options =>  
{
    options.Password.RequiredLength = 10;
    options.Password.RequiredUniqueChars = 6;
})
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders();

When the default validators aren't good enough…

Whether having all of these rules when creating a password is a good idea is up for debate, but it's certainly nice to have the options there. Unfortunately, sometimes these rules aren't enough to really protect users from themselves.

For example, it's quite common for a sub-set of users to use their username/email as their password. This is obviously a bad idea, but unfortunately the default password rules won't necessarily catch it! For example, in the following example I've used my username as my password:

[Image: registration form with the username entered as the password]

and it meets all the rules: more than 6 characters, upper and lower, number, even a special character @!

And voilà, we're logged in...

[Image: the application home page after successfully logging in]

Luckily, ASP.NET Core Identity lets you write your own password validators. Let's create a validator to catch this common no-no.

Writing a custom validator for ASP.NET Core Identity

You can create a custom validator for ASP.NET Core Identity by implementing the IPasswordValidator<TUser> interface:

public interface IPasswordValidator<TUser> where TUser : class  
{
    Task<IdentityResult> ValidateAsync(UserManager<TUser> manager, TUser user, string password);
}

One thing to note about this interface is that the TUser type parameter is limited only to class - that means that if you create the most generic implementation of this interface, you won't be able to use properties of the user parameter.

That's fine if you're validating the password by looking at the password itself, checking the length and which character types are in it etc. Unfortunately, it's no good for the validator we're trying to create - we need access to the UserName property so we can check if the password matches.

We can get round this by implementing the validator and restricting the TUser type parameter to an IdentityUser. This is the default Identity user type created by the templates (which use EF Core under the hood), so it's still pretty generic, and it means we can now build our validator.

public class UsernameAsPasswordValidator<TUser> : IPasswordValidator<TUser>  
    where TUser : IdentityUser
{
    public Task<IdentityResult> ValidateAsync(UserManager<TUser> manager, TUser user, string password)
    {
        if (string.Equals(user.UserName, password, StringComparison.OrdinalIgnoreCase))
        {
            return Task.FromResult(IdentityResult.Failed(new IdentityError
            {
                Code = "UsernameAsPassword",
                Description = "You cannot use your username as your password"
            }));
        }
        return Task.FromResult(IdentityResult.Success);
    }
}

This validator checks if the UserName of the new TUser object passed in matches the password (ignoring case). If they match, then it rejects the password using the IdentityResult.Failed method, passing in an IdentityError (and wrapping in a Task<>).

The IdentityError class has both a Code and a Description - the Code property is used by the Identity system internally to localise the errors, and the Description is obviously an English description of the error which is used by default.

Note: Your errors won't be localised by default - I'll write a follow up post about this soon.

If the password and username are different, then the validator returns IdentityResult.Success, indicating it has no problems.

Note: The default templates use the email address for both the UserName and Email properties. If your user entities are configured differently - if the username is separate from the email, for example - you could check that the password doesn't match either property by updating the ValidateAsync method accordingly.
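As a sketch of that change (my variation, assuming TUser is still constrained to IdentityUser), the comparison in ValidateAsync could cover both properties:

var isForbidden =
    string.Equals(user.UserName, password, StringComparison.OrdinalIgnoreCase)
    || string.Equals(user.Email, password, StringComparison.OrdinalIgnoreCase);

if (isForbidden)
{
    // return IdentityResult.Failed(...) with an appropriate error, as before
}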

Now we have a validator, we just need to make Identity aware of it. You do this with the AddPasswordValidator<> method exposed on IdentityBuilder when configuring your app:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddIdentity<ApplicationUser, IdentityRole>()
        .AddEntityFrameworkStores<ApplicationDbContext>()
        .AddDefaultTokenProviders()
        .AddPasswordValidator<UsernameAsPasswordValidator<ApplicationUser>>();

    // EF Core, MVC service config etc
}

It looks a bit long-winded because we need to pass in the TUser generic parameter. If we're just building the validator for a single app, we could always remove the parameter altogether and simplify the signature somewhat:

public class UsernameAsPasswordValidator : IPasswordValidator<ApplicationUser>  
{
    public Task<IdentityResult> ValidateAsync(UserManager<ApplicationUser> manager, ApplicationUser user, string password)
    {
        // as before
    }
}

And then our Identity configuration becomes:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddIdentity<ApplicationUser, IdentityRole>()
        .AddEntityFrameworkStores<ApplicationDbContext>()
        .AddDefaultTokenProviders()
        .AddPasswordValidator<UsernameAsPasswordValidator>();

    // EF Core, MVC service config etc
}

Now when you try and use your username as a password to register a new user, you'll get a nice friendly warning to tell you to stop being stupid!

[Image: registration form showing the 'You cannot use your username as your password' validation error]

Summary

The default password validation in ASP.NET Core Identity includes a variety of password rules that you configure, such as password length, and required character types.

You can write your own password validators by implementing IPasswordValidator<TUser> and calling .AddPasswordValidator<T> when configuring Identity.

I have created a small NuGet package containing the validator from this blog post, a similar validator for validating the password does not equal the email, and one that looks for specific phrases (for example the URL or domain of your website - another popular choice for security-lacking users!).

You can find the package NetEscapades.AspNetCore.Identity.Validators on NuGet, with instructions on how to get started on GitHub. Hope you find it useful!


Damien Bowden: Adding an external Microsoft login to IdentityServer4

This article shows how to implement a Microsoft Account as an external provider in an IdentityServer4 project using ASP.NET Core Identity with a SQLite database.

Code https://github.com/damienbod/AspNetCoreID4External

Setting up the App Platform for the Microsoft Account

To setup the app, login using your Microsoft account and open the My Applications link

https://apps.dev.microsoft.com/?mkt=en-gb#/appList

Click the ‘Add an app’ button

Give the application a name and add your email. This app is called ‘microsoft_id4_damienbod’

After you click the create button, you need to generate a new password. Save this somewhere for the application configuration; this will be the client secret when configuring the application.

Now add a new platform. Choose the Web type.

Now add the redirect URL for your application. This will be https://YOUR_URL/signin-microsoft

Add the permissions as required

Application configuration

Clone the IdentityServer4 samples and use the 6_AspNetIdentity project from the quickstarts.
Add the Microsoft.AspNetCore.Authentication.MicrosoftAccount package using NuGet, as well as the ASP.NET Core Identity and EF Core packages required, to the IdentityServer4 server project.

The application uses SQLite with Identity. This is configured in the Startup class in the ConfigureServices method.

services.AddDbContext<ApplicationDbContext>(options =>
  options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));

services.AddIdentity<ApplicationUser, IdentityRole>()
  .AddEntityFrameworkStores<ApplicationDbContext>()
  .AddDefaultTokenProviders();

Now the UseMicrosoftAccountAuthentication extension method can be used to add the Microsoft Account external provider middleware in the Configure method of the Startup class. The SignInScheme is set to "Identity.External" because the application is using ASP.NET Core Identity. The ClientId is the Id from the app 'microsoft_id4_damienbod' which was configured on the My Applications website. The ClientSecret is the generated password.

app.UseIdentity();
app.UseIdentityServer();

app.UseMicrosoftAccountAuthentication(new MicrosoftAccountOptions
{
	AuthenticationScheme = "Microsoft",
	DisplayName = "Microsoft",
	SignInScheme = "Identity.External",
	ClientId = _clientId,
	ClientSecret = _clientSecret
});

The application can now be tested. An Angular client using OpenID Connect sends a login request to the server. The ClientId and ClientSecret are saved using user secrets, so that the password is not committed to the source code.

Click the Microsoft button to login.

This redirects the user to the Microsoft Account login for the microsoft_id4_damienbod application.

After a successful login, the user is redirected to the consent page.

Click yes, and the user is redirected back to the IdentityServer4 application. If it’s a new user, a register page will be opened.

Click register and the ID4 consent page is opened.

Then the application opens.

What’s nice about the IdentityServer4 application is that it’s a simple ASP.NET Core application with standard Views and Controllers. This makes it really easy to change the flow, for example, if a user is not allowed to register or whatever.

Links

https://docs.microsoft.com/en-us/azure/app-service-mobile/app-service-mobile-how-to-configure-microsoft-authentication

http://docs.identityserver.io/en/release/topics/signin_external_providers.html



Anuraj Parameswaran: ASP.NET Core Gravatar Tag Helper

This post is about creating a tag helper in ASP.NET Core for displaying Gravatar images based on the email address. Your Gravatar is an image that follows you from site to site, appearing beside your name when you do things like comment or post on a blog.


Andrew Lock: Localising the DisplayAttribute in ASP.NET Core 1.1

Localising the DisplayAttribute in ASP.NET Core 1.1

This is a very quick post in response to a comment asking about the state of localisation for the DisplayAttribute in ASP.NET Core 1.1. A while ago I wrote a series of posts about localising your ASP.NET Core application, using the IStringLocalizer abstraction.

  1. Adding Localisation to an ASP.NET Core application
  2. Localising the DisplayAttribute and avoiding magic strings in ASP.NET Core
  3. Url culture provider using middleware as filters in ASP.NET Core 1.1.0
  4. Applying the RouteDataRequest CultureProvider globally with middleware as filters
  5. Using a culture constraint and catching 404s with the url culture provider
  6. Redirecting unknown cultures to the default culture when using the url culture provider

The IStringLocalizer is a new way of localising your validation messages, view templates, and arbitrary strings. If you're not familiar with localisation in ASP.NET Core, I suggest checking out my first post on localisation for the benefits and pitfalls it brings, but I'll give a quick refresher here.

Brief Recap

Localisation is handled in ASP.NET Core through two main abstractions IStringLocalizer and IStringLocalizer<T>. These allow you to retrieve the localised version of a string by essentially using it as the key into a dictionary; if the key does not exist for that resource, or you are using the default culture, the key itself is returned as the resource:

public class ExampleClass  
{
    public ExampleClass(IStringLocalizer<ExampleClass> localizer)
    {
        // If the resource exists, this returns the localised string
        var localisedString1 = localizer["I exist"]; // "J'existe"

        // If the resource does not exist, the key itself is returned
        var localisedString2 = localizer["I don't exist"]; // "I don't exist"
    }
}

Resources are stored in .resx files that are named according to the class they are localising. So for example, the IStringLocalizer<ExampleClass> localiser would look for a file named (something similar to) ExampleClass.fr-FR.resx. Microsoft recommends that the resource keys/names in the .resx files are the localised values in the default culture. That way you can write your application without having to create any resource files - the supplied string will be used as the resource.

As well as arbitrary strings like this, DataAnnotations which derive from ValidationAttribute also have their ErrorMessage property localised automatically.

Finally, you can localise your Views, either providing whole replacements for your View by using filenames of the form Index.fr-FR.cshtml, or by localising specific strings in your view with another abstraction, the IViewLocalizer, which acts as a view-specific wrapper around IStringLocalizer.
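For example, using IViewLocalizer inside a view looks something like this (a minimal sketch, assuming the Microsoft.AspNetCore.Mvc.Localization namespace is imported, e.g. in _ViewImports.cshtml):

@inject IViewLocalizer Localizer
<h1>@Localizer["Welcome to the site"]</h1>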

Localising the DisplayAttribute in ASP.NET Core 1.0

Unfortunately, in ASP.NET Core 1.0, there was one big elephant in the room… You could localise the ValidationAttributes on your view models, but you couldn't localise the DisplayAttribute, which generates the associated labels!


That's a little unfair - you could localise the DisplayAttribute; it just required jumping through some significant hoops. You had to fall back to the ResourceManager class, use Visual Studio to generate .resx designer files, and move away from the simpler localisation approach adopted everywhere else. Instead of just passing a key to the Name or ErrorMessage property, you had to remember to set the ResourceType too:

public class HomeViewModel  
{
    [Required(ErrorMessage = "The field is required")]
    [EmailAddress(ErrorMessage = "Not a valid email address")]
    [Display(Name = "Your email address", ResourceType = typeof(Resources.ViewModels_HomeViewModel))]
    public string Email { get; set; }
}

Localising the DisplayAttribute in ASP.NET Core 1.1

Luckily, that issue has all gone away now. In ASP.NET Core 1.1 you can now localise the DisplayAttribute in the same way you do your ValidationAttributes:

public class HomeViewModel  
{
    [Required(ErrorMessage = "The field is required")]
    [EmailAddress(ErrorMessage = "Not a valid email address")]
    [Display(Name = "Your email address")]
    public string Email { get; set; }
}

These values will be used by the IStringLocalizer as keys in the resx files for localisation (or as the localised value itself if a value can't be found in the .resx). No fuss, no muss.

Note: As an aside, I still really dislike this idea of using the English phrase as the key in the dictionary - it's too fragile for my liking, but at least you can easily fix that.

Summary

Localisation in ASP.NET Core still requires a lot of effort, but it's certainly easier than in the previous version of ASP.NET. With the update to the DisplayAttribute in ASP.NET Core 1.1, one of the annoying differences in behaviour was fixed, so that you localise it the same way you would your other DataAnnotations.

As with most of my blog posts, there's a small sample project demonstrating this on GitHub.


Dominick Baier: Authorization is hard! Slides and Video from NDC Oslo 2017

A while ago I wrote a controversial article about the problems that can arise when mixing authentication and authorization systems – especially when using identity/access tokens to transmit authorization data – you can read it here.

In the meanwhile Brock and I sat down to prototype a possible solution (or at least an improvement) to the problem and presented it to various customers and at conferences.

Also many people asked me for a more detailed version of my blog post – and finally there is now a recording of our talk from NDC – video here – and slides here. HTH!



Filed under: .NET Security, ASP.NET Core, IdentityServer, OAuth, OpenID Connect, WebAPI


Andrew Lock: When you use the Polly circuit-breaker, make sure you share your Policy instances!

When you use the Polly circuit-breaker, make sure you share your Policy instances!

This post is somewhat of a PSA about using the excellent open source Polly library for adding resilience to your application. Recently, I was tasked with adding a circuit-breaker implementation to some code calling an external API, and I figured Polly would be perfect, especially as we already used it in our solution!

I hadn't used Polly directly in a little while, but the excellent design makes it easy to add retry handling, timeouts, or circuit-breaking to your application. Unfortunately, my initial implementation had one particular flaw, which meant that my circuit-breaker never actually worked!

In this post I'll outline the scenario I was working with, my initial implementation, the subsequent issues, and what I should have done!

tl;dr: Policy instances are thread-safe, and for the circuit-breaker to work correctly, the policy must be shared so that you call Execute on the same Policy instance every time!

The scenario - dealing with a flakey external API

A common requirement when working with currencies is dealing with exchange rates. We have happily been using the Open Exchange Rates API to fetch a JSON list of exchange rates for a while now.

The existing implementation consists of three classes:

  • OpenExchangeRatesClient - Responsible for fetching the exchange rates from the API, and parsing the JSON into a strongly typed .NET object.
  • OpenExchangeRatesCache - We don't want to fetch exchange rates every time we need them, so this class caches the latest exchange rates for a day before calling the OpenExchangeRatesClient to get up-to-date rates.
  • FallbackExchangeRateProvider - If the call to fetch the latest rates using the OpenExchangeRatesClient fails, we fallback to a somewhat recent copy of the data, loaded from an embedded resource in the assembly.

All of these classes are registered as singletons with the IoC container, so there is only a single instance of each. This setup had been working fine for a while, but then there was an issue where the Open Exchange Rates API went down just as the local cache of exchange rates expired. The series of events was:

  1. A request was made to our internal API, which called a service that required exchange rates.
  2. The service called the OpenExchangeRatesCache which realised the current rates were out of date.
  3. The cache called the OpenExchangeRatesClient to fetch the latest rates.
  4. Unfortunately the service was down, and eventually caused a timeout (after 100 seconds!)
  5. At this point the cache used the FallbackExchangeRateProvider to use the stale rates for this single request.
  6. A separate request was made to our internal API - repeat steps 2-5!

An issue in the external dependency, the exchange rate API going down, was causing our internal services to take 100s to respond to requests, which in turn was causing other requests to timeout. Effectively we had a cascading failure, even though we thought we had accounted for this by providing a fallback.

Note I realise updating cached exchange rates should probably be a background task. This would stop requests failing if there are issues updating, but the general problem is common to many scenarios, especially if you're using micro-services.

Luckily, this outage didn't happen at a peak time, so by the time we came to investigate the issue, the problem had passed, and relatively few people were affected. However, it obviously flagged up a problem, so I set about trying to ensure this wouldn't happen again if the API had issues at a later date!

Fix 1 - Reduce the timeouts

The first fix was a relatively simple one. The OpenExchangeRatesClient was using an HttpClient to call the API and fetch the exchange rate data. This was instantiated in the constructor, and reused for the lifetime of the class. As the client was used as a singleton, the HttpClient was also a singleton (so we didn't have any of these issues).

public class OpenExchangeRatesClient  
{
    private readonly HttpClient _client;
    public OpenExchangeRatesClient(string apiUrl)
    {
        _client = new HttpClient
        {
            BaseAddress = new Uri(apiUrl),
        };
    }
}

The first fix I made was to set the Timeout property on the HttpClient. In the failure scenario, it was taking 100s to get back an error response. Why 100s? Because that's the default timeout for HttpClient!

Checking our metrics of previous calls to the service, I could see that prior to the failure, virtually all calls were taking approximately 0.25s. Based on that, a 100s timeout was clearly overkill! Setting the timeout to something more modest, but still conservative, say 5s, should help prevent the scenario happening again.

public class OpenExchangeRatesClient  
{
    private readonly HttpClient _client;
    public OpenExchangeRatesClient(string apiUrl)
    {
        _client = new HttpClient
        {
            BaseAddress = new Uri(apiUrl),
            Timeout = TimeSpan.FromSeconds(5),
        };
    }
}

Fix 2 - Add a circuit breaker

The second fix was to add a circuit-breaker implementation to the API calls. The Polly documentation has a great explanation of the circuit-breaker pattern, but I'll give a brief summary here.

Circuit-breakers in brief

Circuit-breakers make sense when calling a somewhat unreliable API. They use a fail-fast approach when a method has failed several times in a row. As an example, in my scenario, there was no point repeatedly calling the API when it had already failed several times in a row and was very likely to fail again. All we were doing was adding additional delays to the method calls, when it was pretty likely we were going to have to use the fallback anyway.

The circuit-breaker tracks the number of times an API call has failed. Once it crosses a threshold number of failures in a row, it doesn't even try to call the API for subsequent requests. Instead, it fails immediately, as though the API had failed.

After some timeout, the circuit-breaker will let one method call through to "test" the API and see if it succeeds. If it fails, it goes back to just failing immediately. If it succeeds then the circuit is closed again, and it will go back to calling the API for every request.


Circuit breaker state diagram taken from the Polly documentation
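
To make the open state concrete, here's a minimal sketch (my own illustration, using the circuitBreaker policy and CallRatesApi() method that appear below): once the breaker is open, Polly fails fast by throwing a BrokenCircuitException, without ever invoking your delegate.

// BrokenCircuitException lives in the Polly.CircuitBreaker namespace
try
{
    // while the circuit is closed, this calls the API as normal
    var rates = await circuitBreaker.ExecuteAsync(() => CallRatesApi());
}
catch (BrokenCircuitException)
{
    // the circuit is open - CallRatesApi() was not invoked at all
}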

The circuit-breaker was a perfect fit for the failure scenario in our app, so I set about adding it to the OpenExchangeRatesClient.

Creating a circuit breaker policy

You can create a circuit-breaker Policy in Polly using the CircuitBreakerSyntax. As we're going to be making requests with the HttpClient, I used the async methods for setting up the policy and for calling the API:

var circuitBreaker = Policy  
    .Handle<Exception>()
    .CircuitBreakerAsync(
        exceptionsAllowedBeforeBreaking: 2, 
        durationOfBreak: TimeSpan.FromMinutes(1)
    );

var rates = await circuitBreaker  
    .ExecuteAsync(() => CallRatesApi());

This configuration creates a new circuit breaker policy, defines the number of consecutive exceptions to allow before marking the API as broken and opening the breaker, and sets the amount of time the breaker should stay open before moving to the half-open state.

Once you have a policy in place, circuitBreaker, you can call ExecuteAsync and pass in the method to execute. At runtime, if an exception occurs while executing CallRatesApi(), the circuit breaker will catch it, and keep track of how many exceptions have been thrown in a row to control the breaker's state.

Adding a fallback

When an exception occurs in the CallRatesApi() method, the breaker will catch it, but it will re-throw the exception. In my case, I wanted to catch those exceptions and use the FallbackExchangeRateProvider. I could have used a try-catch block, but I decided to stay in the Polly-spirit and use a Fallback policy.

A fallback policy is effectively a try-catch block - it simply executes an alternative method if CallRatesApi() throws. You can then wrap the fallback policy around the breaker policy to combine the two. If the circuit breaker fails, the fallback will run instead:

var circuitBreaker = Policy  
    .Handle<Exception>()
    .CircuitBreakerAsync(
        exceptionsAllowedBeforeBreaking: 2, 
        durationOfBreak: TimeSpan.FromMinutes(1)
    );

var fallback = Policy  
    .Handle<Exception>()
    .FallbackAsync(() => GetFallbackRates())
    .WrapAsync(circuitBreaker);


var results = await fallback  
    .ExecuteAsync(() => CallRatesApi());

Putting it together - my failed attempt!

As best I could see, this all looked like it would work, so I set about replacing the OpenExchangeRatesClient implementation and testing it out.

Note This isn't correct, don't copy it!

public class OpenExchangeRatesClient  
{
    private readonly HttpClient _client;
    public OpenExchangeRatesClient(string apiUrl)
    {
        _client = new HttpClient
        {
            BaseAddress = new Uri(apiUrl),
        };
    }

    public Task<ExchangeRates> GetLatestRates()
    {
        var circuitBreaker = Policy
            .Handle<Exception>()
            .CircuitBreakerAsync(
                exceptionsAllowedBeforeBreaking: 2,
                durationOfBreak: TimeSpan.FromMinutes(1)
            );

        var fallback = Policy
            .Handle<Exception>()
            .FallbackAsync(() => GetFallbackRates())
            .WrapAsync(circuitBreaker);


        return fallback
            .ExecuteAsync(() => CallRatesApi());
    }

    public Task<ExchangeRates> CallRatesApi()
    {
        //call the API, parse the results
    }

    public Task<ExchangeRates> GetFallbackRates()
    {
        // load the rates from the embedded file and parse them
    }
}

In theory, this is the flow I was aiming for when the API goes down:

  1. Call GetLatestRates() -> CallRatesApi() throws -> Uses Fallback
  2. Call GetLatestRates() -> CallRatesApi() throws -> Uses Fallback
  3. Call GetLatestRates() -> skips CallRatesApi() -> Uses Fallback
  4. Call GetLatestRates() -> skips CallRatesApi() -> Uses Fallback
  5. ... etc

What I actually saw was:

  1. Call GetLatestRates() -> CallRatesApi() throws -> Uses Fallback
  2. Call GetLatestRates() -> CallRatesApi() throws -> Uses Fallback
  3. Call GetLatestRates() -> CallRatesApi() throws -> Uses Fallback
  4. Call GetLatestRates() -> CallRatesApi() throws -> Uses Fallback
  5. ... etc

It was as though the circuit breaker wasn't there at all! No matter how many times the CallRatesApi() method threw, the circuit was never breaking.

Can you see what I did wrong?

Using circuit breakers properly

Every time GetLatestRates() is called, I'm creating a new circuit breaker (and fallback) policy, and then calling ExecuteAsync on that brand new instance!

The circuit breaker, by its nature, has state that must be persisted between calls (the number of exceptions that have previously happened, the open/closed state of the breaker etc.). By creating new Policy objects inside the GetLatestRates() method, I was effectively resetting the policy back to its initial state on every call, which is why the breaker never opened!

The answer is simple - make sure the Policy persists between calls to GetLatestRates(), so that its state persists too. The Policy is thread safe, so there are no issues to worry about there either. As the client is implemented in our app as a singleton, I simply moved the policy configuration to the class constructor, and everything proceeded to work as expected!

public class OpenExchangeRatesClient  
{
    private readonly HttpClient _client;
    private readonly Policy _policy;
    public OpenExchangeRatesClient(string apiUrl)
    {
        _client = new HttpClient
        {
            BaseAddress = new Uri(apiUrl),
        };

        var circuitBreaker = Policy
            .Handle<Exception>()
            .CircuitBreakerAsync(
                exceptionsAllowedBeforeBreaking: 2,
                durationOfBreak: TimeSpan.FromMinutes(1)
            );

        _policy = Policy
            .Handle<Exception>()
            .FallbackAsync(() => GetFallbackRates())
            .WrapAsync(circuitBreaker);
    }

    public Task<ExchangeRates> GetLatestRates()
    {
        return _policy
            .ExecuteAsync(() => CallRatesApi());
    }

    public Task<ExchangeRates> CallRatesApi()
    {
        //call the API, parse the results
    }

    public Task<ExchangeRates> GetFallbackRates()
    {
        // load the rates from the embedded file and parse them
    }
}

And that's all it takes! It works brilliantly when you actually use it properly 😉

Summary

This post ended up a lot longer than I intended, as it was a bit of a post-incident brain-dump, so apologies for that! It serves as somewhat of a cautionary tale about having blinkers on when coding. When I implemented the initial fix, I was so caught up in how I was solving the problem that I completely overlooked this simple, but crucial, difference in how policies can be used.

I'm not entirely sure whether, in general, it's best to use shared policies, or whether it's better to create and discard policies as I did originally. Obviously the latter doesn't work for the circuit-breaker, but what about Retry or WaitAndRetry? Also, creating a new policy each time is probably more "allocate-y", but is it faster due to not having to be thread safe?

I don't know the answer, but personally, and based on this episode, I'm inclined to go with shared policies everywhere. If you know otherwise, do let me know in the comments, thanks!


Damien Bowden: Using Protobuf Media Formatters with ASP.NET Core

This article shows how to use Protobuf with an ASP.NET Core MVC application. The API uses the WebApiContrib.Core.Formatter.Protobuf NuGet package to add support for Protobuf. This package uses the protobuf-net NuGet package from Marc Gravell, which makes it really easy to use a really fast serializer/deserializer in your APIs.

Code: https://github.com/damienbod/AspNetCoreWebApiContribProtobufSample

Setting up the ASP.NET Core MVC API

To use Protobuf with ASP.NET Core, the WebApiContrib.Core.Formatter.Protobuf NuGet package can be used in your project. You can add this using the NuGet package manager in Visual Studio.

Or you can add it directly in your project file.

<PackageReference Include="WebApiContrib.Core.Formatter.Protobuf" Version="1.0.0" />

Now the formatters can be added in the Startup file.

public void ConfigureServices(IServiceCollection services)
{
	services.AddMvc()
		.AddProtobufFormatters();
}

A model now needs to be defined. The protobuf-net attributes are used to define the model class.

using ProtoBuf;

namespace Model
{
    [ProtoContract]
    public class Table
    {
        [ProtoMember(1)]
        public string Name {get;set;}

        [ProtoMember(2)]
        public string Description { get; set; }


        [ProtoMember(3)]
        public string Dimensions { get; set; }
    }
}

The ASP.NET Core MVC API can then be used with the Table class.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Model;

namespace AspNetCoreWebApiContribProtobufSample.Controllers
{
    [Route("api/[controller]")]
    public class TablesController : Controller
    {
        // GET api/tables
        [HttpGet]
        public IActionResult Get()
        {
            List<Table> tables = new List<Table>
            {
                new Table{Name= "jim", Dimensions="190x80x90", Description="top of the range from Migro"},
                new Table{Name= "jim large", Dimensions="220x100x90", Description="top of the range from Migro"}
            };

            return Ok(tables);
        }

        // GET api/values/5
        [HttpGet("{id}")]
        public IActionResult Get(int id)
        {
            var table = new Table { Name = "jim", Dimensions = "190x80x90", Description = "top of the range from Migro" };
            return Ok(table);
        }

        // POST api/values
        [HttpPost]
        public IActionResult Post([FromBody]Table value)
        {
            var got = value;
            return Created("api/tables", got);
        }
    }
}

Creating a simple Protobuf HttpClient

An HttpClient using the same Table class with the protobuf-net definitions can be used to access the API and request the data with the “application/x-protobuf” Accept header.

static async System.Threading.Tasks.Task<Table[]> CallServerAsync()
{
	var client = new HttpClient();

	var request = new HttpRequestMessage(HttpMethod.Get, "http://localhost:31004/api/tables");
	request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/x-protobuf"));
	var result = await client.SendAsync(request);
	var tables = ProtoBuf.Serializer.Deserialize<Table[]>(await result.Content.ReadAsStreamAsync());
	return tables;
}

The data is returned in the response using Protobuf serialization.

If you want to post some data using Protobuf, you can serialize the data to Protobuf and post it to the server using the HttpClient. This example uses “application/x-protobuf”.

static async System.Threading.Tasks.Task<Table> PostStreamDataToServerAsync()
{
	HttpClient client = new HttpClient();
	client.DefaultRequestHeaders
		  .Accept
		  .Add(new MediaTypeWithQualityHeaderValue("application/x-protobuf"));

	HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Post,
		"http://localhost:31004/api/tables");

	MemoryStream stream = new MemoryStream();
	ProtoBuf.Serializer.Serialize<Table>(stream, new Table
	{
		Name = "jim",
		Dimensions = "190x80x90",
		Description = "top of the range from Migro"
	});

	request.Content = new ByteArrayContent(stream.ToArray());

	// HTTP POST with Protobuf Request Body
	var responseForPost = await client.SendAsync(request);

	var resultData = ProtoBuf.Serializer.Deserialize<Table>(await responseForPost.Content.ReadAsStreamAsync());
	return resultData;
}

Links:

https://www.nuget.org/packages/WebApiContrib.Core.Formatter.Protobuf/

https://github.com/mgravell/protobuf-net



Anuraj Parameswaran: ASP.NET Core No authentication handler is configured to handle the scheme Cookies

This post is about ASP.NET Core authentication, which throws an InvalidOperationException - No authentication handler is configured to handle the scheme Cookies. In ASP.NET Core 1.x, the runtime throws this exception when you are using ASP.NET Core cookie authentication. This can be fixed by setting options.AutomaticChallenge = true in the Configure method.
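
As a sketch of what that looks like (my own illustration for ASP.NET Core 1.x - the option names are real, but the surrounding configuration is assumed), the cookie middleware would be configured in Startup.Configure something like this:

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationScheme = "Cookies",
    AutomaticAuthenticate = true,
    // per the post, setting this to true fixes the exception
    AutomaticChallenge = true
});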


Andrew Lock: Controller activation and dependency injection in ASP.NET Core MVC

Controller activation and dependency injection in ASP.NET Core MVC

In my last post about disposing IDisposables in ASP.NET Core, Mark Rendle pointed out that MVC controllers are also disposed at the end of a request. At first glance, this may seem obvious given that scoped resources are disposed at the end of a request, but MVC controllers are actually handled in a slightly different way to most services.

In this post, I'll describe how controllers are created in ASP.NET Core MVC using the IControllerActivator, the options available out of the box, and their differences when it comes to dependency injection.

The default IControllerActivator

In ASP.NET Core, when a request is received by the MvcMiddleware, routing - either conventional or attribute routing - is used to select the controller and action method to execute. In order to actually execute the action, the MvcMiddleware must create an instance of the selected controller.

The process of creating the controller depends on a number of different provider and factory classes, culminating in an instance of the IControllerActivator. This interface defines just two methods:

public interface IControllerActivator  
{
    object Create(ControllerContext context);
    void Release(ControllerContext context, object controller);
}

As you can see, the IControllerActivator.Create method is passed a ControllerContext which defines the controller to be created. How the controller is created depends on the particular implementation.

Out of the box, ASP.NET Core uses the DefaultControllerActivator, which uses the TypeActivatorCache to create the controller. The TypeActivatorCache creates instances of objects by calling the constructor of the Type, and attempting to resolve the required constructor argument dependencies from the DI container.

This is an important point. The DefaultControllerActivator doesn't attempt to resolve the Controller instance from the DI container itself, only the Controller's dependencies.
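
Conceptually (this is a rough sketch for illustration, not the actual framework code), the default activator does something equivalent to the following, using ActivatorUtilities to construct the type while resolving its constructor arguments from the request's DI container:

public object Create(ControllerContext context)
{
    var controllerType = context.ActionDescriptor.ControllerTypeInfo.AsType();

    // the controller type itself is never resolved from the container -
    // only its constructor dependencies are
    return ActivatorUtilities.CreateInstance(
        context.HttpContext.RequestServices, controllerType);
}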

Example of the default controller activator

To demonstrate this behaviour, I've created a simple MVC application, consisting of a single service and a single controller. The service has a Name property that is set in the constructor. By default, it will have the value "default".

public class TestService  
{
    public TestService(string name = "default")
    {
        Name = name;
    }

    public string Name { get; }
}

The HomeController for the app takes a dependency on the TestService, and returns the Name property:

public class HomeController : Controller  
{
    private readonly TestService _testService;
    public HomeController(TestService testService)
    {
        _testService = testService;
    }

    public string Index()
    {
        return "TestService.Name: " + _testService.Name;
    }
}

The final piece of the puzzle is the Startup file. Here I register the TestService as a scoped service in the DI container, and set up the MvcMiddleware and services:

public class Startup  
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        services.AddScoped<TestService>();
        services.AddTransient(ctx =>
            new HomeController(new TestService("Non-default value")));
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseMvcWithDefaultRoute();
    }
}

You'll also notice I've defined a factory method for creating an instance of the HomeController. This registers the HomeController type in the DI container, injecting an instance of the TestService with a custom Name property.

So what do you get if you run the app?

(Screenshot: the browser displays "TestService.Name: default")

As you can see, the TestService.Name property has the default value, indicating the TestService instance has been sourced directly from the DI container. The factory method we registered to create the HomeController has clearly been ignored.

This makes sense when you remember that the DefaultControllerActivator is creating the controller. It doesn't request the HomeController from the DI container, it just requests its constructor dependencies.

Most of the time, using the DefaultControllerActivator will be fine, but sometimes you may want to create your controllers by using the DI container directly. This is especially true when you are using third-party containers with features such as interceptors or decorators.

Luckily, the MVC framework includes an implementation of IControllerActivator to do just this, and even provides a handy extension method to enable it.

The ServiceBasedControllerActivator

As you've seen, the DefaultControllerActivator uses the TypeActivatorCache to create controllers, but MVC includes an alternative implementation, the ServiceBasedControllerActivator, which can be used to directly obtain controllers from the DI container. The implementation itself is trivial:

public class ServiceBasedControllerActivator : IControllerActivator  
{
    public object Create(ControllerContext actionContext)
    {
        var controllerType = actionContext.ActionDescriptor.ControllerTypeInfo.AsType();

        return actionContext.HttpContext.RequestServices.GetRequiredService(controllerType);
    }

    public virtual void Release(ControllerContext context, object controller)
    {
    }
}

You can configure the DI-based activator with the AddControllersAsServices() extension method, when you add the MVC services to your application:

public class Startup  
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc()
                .AddControllersAsServices();

        services.AddScoped<TestService>();
        services.AddTransient(ctx =>
            new HomeController(new TestService("Non-default value")));
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseMvcWithDefaultRoute();
    }
}

With this in place, hitting the home page will create a controller by loading it from the DI container. As we've registered a factory method for the HomeController, our custom TestService configuration will be honoured, and the alternative Name will be used:

(Screenshot: the browser displays "TestService.Name: Non-default value")

The AddControllersAsServices method does two things - it registers all of the Controllers in your application with the DI container (if they haven't already been registered) and replaces the IControllerActivator registration with the ServiceBasedControllerActivator:

public static IMvcBuilder AddControllersAsServices(this IMvcBuilder builder)  
{
    var feature = new ControllerFeature();
    builder.PartManager.PopulateFeature(feature);

    foreach (var controller in feature.Controllers.Select(c => c.AsType()))
    {
        builder.Services.TryAddTransient(controller, controller);
    }

    builder.Services.Replace(ServiceDescriptor.Transient<IControllerActivator, ServiceBasedControllerActivator>());

    return builder;
}

If you need to do something esoteric, you can always implement IControllerActivator yourself, but I can't think of any reason that these two implementations wouldn't satisfy all your requirements!
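
For what it's worth, a hypothetical skeleton might look something like this (illustration only - the interface is real, but this naive implementation only supports controllers with a parameterless constructor):

public class CustomControllerActivator : IControllerActivator
{
    public object Create(ControllerContext context)
    {
        var type = context.ActionDescriptor.ControllerTypeInfo.AsType();

        // naive: no constructor dependencies are resolved
        return Activator.CreateInstance(type);
    }

    public void Release(ControllerContext context, object controller)
    {
        (controller as IDisposable)?.Dispose();
    }
}

You would then replace the default registration, in the same way AddControllersAsServices does:

services.Replace(
    ServiceDescriptor.Transient<IControllerActivator, CustomControllerActivator>());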

Summary

  • By default, the DefaultControllerActivator is configured as the IControllerActivator for ASP.NET Core MVC.
  • The DefaultControllerActivator uses the TypeActivatorCache to create controllers. This creates an instance of the controller, and loads constructor arguments from the DI container.
  • You can use an alternative activator, the ServiceBasedControllerActivator, which loads controllers directly from the DI container. You can configure this activator by using the AddControllersAsServices() extension method on the MvcBuilder instance in Startup.ConfigureServices.


Anuraj Parameswaran: How to Deploy Multiple Apps on Azure WebApps

This post is about deploying multiple applications on an Azure Web App. App Service Web Apps is a fully managed compute platform that is optimized for hosting websites and web applications. This platform-as-a-service (PaaS) offering of Microsoft Azure lets you focus on your business logic while Azure takes care of the infrastructure to run and scale your apps.


Anuraj Parameswaran: Develop and Run Azure Functions locally

This post is about developing, running and debugging azure functions locally. Trigger on events in Azure and debug C# and JavaScript functions. Azure functions is a new service offered by Microsoft. Azure Functions is an event driven, compute-on-demand experience that extends the existing Azure application platform with capabilities to implement code triggered by events occurring in Azure or third party service as well as on-premises systems.


Andrew Lock: Four ways to dispose IDisposables in ASP.NET Core

Four ways to dispose IDisposables in ASP.NET Core

One of the most commonly implemented interfaces in .NET is the IDisposable interface. Classes implement IDisposable when they contain references to unmanaged resources, such as window handles, files or sockets. The garbage collector automatically releases memory for managed (i.e. .NET) objects, but it doesn't know how to handle the unmanaged resources. Implementing IDisposable provides a hook, so you can properly clean up those resources when your class is disposed.

This post goes over some of the options available to you for disposing services in ASP.NET Core applications, especially when using the built-in dependency injection container.

For the purposes of this post, I'll use the following class that implements IDisposable in the examples. I'm just writing to the console instead of doing any actual cleanup, but it'll serve our purposes for this post.

public class MyDisposable : IDisposable  
{
    public MyDisposable()
    {
        Console.WriteLine("+ {0} was created", this.GetType().Name);
    }

    public void Dispose()
    {
        Console.WriteLine("- {0} was disposed!", this.GetType().Name);
    }
}

Now let's look at our options.

The simple case - a using statement

The typical suggested approach when consuming an IDisposable in your code, is with a using block:

using(var myObject = new MyDisposable())  
{
    // myObject.DoSomething();
}

Using IDisposables in this way ensures they are disposed correctly, whether or not they throw an exception. You could also use a try-finally block instead if necessary:

MyDisposable myObject;  
try  
{
    myObject = new MyDisposable();
    // myObject.DoSomething();
}
finally  
{
    myObject?.Dispose();
}

You'll often find this pattern when working with files or streams - things that you only need transiently, and are finished with in the same scope. Unfortunately, sometimes this won't suit your situation, and you might need to dispose of the object from somewhere else. Depending on your exact situation, there are a number of other options available to you.

Note: Wherever possible, it's best practice to dispose of objects in the same scope they were created. This will help prevent memory leaks and unexpected file locks in your application, where objects go accidentally undisposed.

Disposing at the end of a request - using RegisterForDispose

When you're working in ASP.NET Core, or any web application, it's very common for your objects to be scoped to a single request. That is, anything you create to handle a request you want to dispose when the request finishes.

There are a number of ways to do this. The most common way is to leverage the DI container which I'll come to in a minute, but sometimes that's not possible, and you need to create the disposable in your own code.

If you are manually creating an instance of an IDisposable, then you can register that disposable with the HttpContext, so that when the request ends, the instance will be disposed automatically. Simply pass the instance to HttpContext.Response.RegisterForDispose:

public class HomeController : Controller  
{
    readonly MyDisposable _disposable;

    public HomeController()
    {
        _disposable = new MyDisposable();
    }

    public IActionResult Index()
    {
        // register the instance so that it is disposed when request ends
        HttpContext.Response.RegisterForDispose(_disposable);
        Console.WriteLine("Running index...");
        return View();
    }
}

In this example, I'm creating the MyDisposable during the constructor of the HomeController, and then registering it for disposal in the action method. This is a little bit contrived, but it shows the mechanism at least.

If you execute this action method, you'll see the following:

$ dotnet run
Hosting environment: Development  
Content root path: C:\Users\Sock\Repos\RegisterForDispose  
Now listening on: http://localhost:5000  
Application started. Press Ctrl+C to shut down.  
+ MyDisposable was created
Running index...  
- MyDisposable was disposed!

The HttpContext takes care of disposing our object for us!

Warning: I registered the instance in the action method instead of the constructor, because HttpContext might be null in the constructor!

RegisterForDispose is useful when you are new-ing up services in your code. But given that Dispose is only required for classes using unmanaged resources, you'll probably find that more often than not, your IDisposable classes are encapsulated in services that are registered with the DI container.

As Mark Rendle pointed out, the Controller itself will also be disposed at the end of the request, so you can use that mechanism to dispose any objects you create.
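
If you take that approach, a minimal sketch (my own, reusing the MyDisposable example class from above) is to override the controller's Dispose(bool) method:

public class HomeController : Controller
{
    readonly MyDisposable _disposable = new MyDisposable();

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            // runs when MVC disposes the controller at the end of the request
            _disposable.Dispose();
        }
        base.Dispose(disposing);
    }
}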

Automatically disposing services - leveraging the built-in DI container

ASP.NET Core comes with a simple, built-in DI container that you can register your services with as either Transient, Scoped, or Singleton. You can read about it here, so I'll assume you already know how to use it to register your services.

Note, this article only discusses the built-in container - third-party containers might have other rules around automatic disposal of services.

Any service that the built-in container creates in order to fill a dependency, which also implements IDisposable, will be disposed by the container at the appropriate point. So Transient and Scoped instances will be disposed at the end of the request (or more accurately, at the end of a scope), and Singleton services will be disposed when the application is torn down and the ServiceProvider itself is disposed.

To clarify, that means the provider will dispose any service you register with it, as long as you don't provide a specific instance. For example, I'll create a number of disposable classes:

public class TransientCreatedByContainer: MyDisposable { }  
public class ScopedCreatedByFactory : MyDisposable { }  
public class SingletonCreatedByContainer: MyDisposable {}  
public class SingletonAddedManually: MyDisposable {}  

And register each of them in a different way in Startup.ConfigureServices. I am registering:

  • TransientCreatedByContainer as a transient
  • ScopedCreatedByFactory as scoped, using a lambda function as a factory
  • SingletonCreatedByContainer as a singleton
  • SingletonAddedManually as a singleton by passing in a specific instance of the object.
public void ConfigureServices(IServiceCollection services)  
{
    // other services

    // these will be disposed
    services.AddTransient<TransientCreatedByContainer>();
    services.AddScoped(ctx => new ScopedCreatedByFactory());
    services.AddSingleton<SingletonCreatedByContainer>();

    // this one won't be disposed
    services.AddSingleton(new SingletonAddedManually());
}

Finally, I'll inject an instance of each into the HomeController, so the DI container will create / inject instances as necessary:

public class HomeController : Controller  
{
    public HomeController(
        TransientCreatedByContainer transient,
        ScopedCreatedByFactory scoped,
        SingletonCreatedByContainer createdByContainer,
        SingletonAddedManually manually)
    { }

    public IActionResult Index()
    {
        return View();
    }
}

When I run the application, hit the home page, and then stop the application, I get the following output:

$ dotnet run
+ SingletonAddedManually was created
Content root path: C:\Users\Sock\Repos\RegisterForDispose  
Now listening on: http://localhost:5000  
Application started. Press Ctrl+C to shut down.  
+ TransientCreatedByContainer was created
+ ScopedCreatedByFactory was created
+ SingletonCreatedByContainer was created
- TransientCreatedByContainer was disposed!
- ScopedCreatedByFactory was disposed!
Application is shutting down...  
- SingletonCreatedByContainer was disposed!

There's a few things to note here:

  • SingletonAddedManually was created before the web host had finished being set up, so it writes to the console before logging starts
  • SingletonCreatedByContainer is disposed after we have started shutting down the server
  • SingletonAddedManually is never disposed as it was provided a specific instance!

Note that the behaviour whereby only objects created by the DI container are disposed applies to ASP.NET Core 1.1 and above. In ASP.NET Core 1.0, all objects registered with the container are disposed.

Letting the container handle your IDisposables for you is obviously convenient, especially as you are probably registering your services with it anyway! The only obvious hole here is if you need to dispose an object that you create yourself. As I said originally, if possible, you should favour a using statement, but that's not always possible. Luckily, ASP.NET Core provides hooks into the application lifetime, so you can do some clean up when the application is shutting down.

Disposing when the application ends - hooking into IApplicationLifetime events

ASP.NET Core exposes an interface called IApplicationLifetime that can be used to execute code when an application is starting up or shutting down:

public interface IApplicationLifetime  
{
    CancellationToken ApplicationStarted { get; }
    CancellationToken ApplicationStopping { get; }
    CancellationToken ApplicationStopped { get; }
    void StopApplication();
}

You can inject this into your Startup class (or elsewhere) and register for the events you need. Extending the previous example, we can inject both the IApplicationLifetime and our singleton SingletonAddedManually instance into the Configure method of Startup.cs:

public void Configure(  
    IApplicationBuilder app, 
    IApplicationLifetime applicationLifetime,
    SingletonAddedManually toDispose)
{
    applicationLifetime.ApplicationStopping.Register(OnShutdown, toDispose);

    // configure middleware etc
}

private void OnShutdown(object toDispose)  
{
    ((IDisposable)toDispose).Dispose();
}

I've created a simple helper method that takes the state passed in (the SingletonAddedManually instance), casts it to an IDisposable, and disposes it. This helper method is registered with the CancellationToken called ApplicationStopping, which is fired when closing down the application.

If we run the application again with this additional registration, you can see that the SingletonAddedManually instance is now disposed, just after application shutdown is triggered.

$ dotnet run
+ SingletonAddedManually was created
Content root path: C:\Users\Sock\Repos\RegisterForDispose  
Now listening on: http://localhost:5000  
Application started. Press Ctrl+C to shut down.  
+ TransientCreatedByContainer was created
+ ScopedCreatedByFactory was created
+ SingletonCreatedByContainer was created
- TransientCreatedByContainer was disposed!
- ScopedCreatedByFactory was disposed!
Application is shutting down...  
- SingletonAddedManually was disposed!
- SingletonCreatedByContainer was disposed!

Summary

So there you have it, four different ways to dispose of your IDisposable objects. Wherever possible, you should either use the using statement, or let the DI container handle disposing objects for you. For cases where that's not possible, ASP.NET Core provides two mechanisms you can hook in to: RegisterForDispose and IApplicationLifetime.

Oh, and apparently I missed one final way:


Damien Bowden: Angular OIDC OAuth2 client with Google Identity Platform

This article shows how an Angular client could implement a login for a SPA application using Google Identity Platform OpenID. The Angular application uses the npm package angular-auth-oidc-client to implement the OpenID Connect Implicit Flow to connect with the google identity platform.

Code: https://github.com/damienbod/angular-auth-oidc-sample-google-openid

History

2017-07-09 Updated to version 1.1.4, new configuration

Setting up Google Identity Platform

The Google Identity Platform provides good documentation on how to set up its OpenID Connect implementation.

You need to log in to google using a gmail account.
https://accounts.google.com

Now open the OpenID Connect google documentation page

https://developers.google.com/identity/protocols/OpenIDConnect

Open the credentials page provided as a link.

https://console.developers.google.com/apis/credentials

Create new credentials for your application, select OAuth Client ID in the drop down:

Select a web application and configure the parameters to match your client application URLs.

Implementing the Angular OpenID Connect client

The client application is implemented using ASP.NET Core and Angular.

The npm package angular-auth-oidc-client is used to connect to the OpenID server. The package can be added to the package.json file in the dependencies.

"dependencies": {
    ...
    "angular-auth-oidc-client": "1.1.4"
},

Now the AuthModule, OidcSecurityService, AuthConfiguration can be imported. The AuthModule.forRoot() is used and added to the root module imports, the OidcSecurityService is added to the providers and the AuthConfiguration is the configuration class which is used to set up the OpenID Connect Implicit Flow.

import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { BrowserModule } from '@angular/platform-browser';

import { AppComponent } from './app.component';
import { Configuration } from './app.constants';
import { routing } from './app.routes';
import { HttpModule, JsonpModule } from '@angular/http';
import { ForbiddenComponent } from './forbidden/forbidden.component';
import { HomeComponent } from './home/home.component';
import { UnauthorizedComponent } from './unauthorized/unauthorized.component';

import { AuthModule, OidcSecurityService, OpenIDImplicitFlowConfiguration } from 'angular-auth-oidc-client';

@NgModule({
    imports: [
        BrowserModule,
        FormsModule,
        routing,
        HttpModule,
        JsonpModule,
        AuthModule.forRoot(),
    ],
    declarations: [
        AppComponent,
        ForbiddenComponent,
        HomeComponent,
        UnauthorizedComponent
    ],
    providers: [
        OidcSecurityService,
        Configuration
    ],
    bootstrap:    [AppComponent],
})

The AuthConfiguration class is used to configure the module.

stsServer
This is the URL where the STS server is located. We use https://accounts.google.com in this example.

redirect_url
This is the redirect_url which was configured on the google client ID on the server.

client_id
The client_id must match the Client ID for Web application which was configured on the google server.

response_type
This must be ‘id_token token’ or ‘id_token’. If you want to use the user service, or access data using APIs, you must use the ‘id_token token’ configuration. This is the OpenID Connect Implicit Flow. The possible values are defined in the well known configuration URL from the OpenID Connect server.

scope
Scopes which are used by the client. The openid scope must be defined: ‘openid email profile’

post_logout_redirect_uri
Url after a server logout if using the end session API. This is not supported by google OpenID.

start_checksession
Checks the session using OpenID session management. Not supported by google OpenID

silent_renew
Renews the client tokens once the id_token expires.

startup_route
Angular route after a successful login.

forbidden_route
HTTP 403

unauthorized_route
HTTP 401

log_console_warning_active
Logs all module warnings to the console.

log_console_debug_active
Logs all module debug messages to the console.


export class AppModule {
    constructor(public oidcSecurityService: OidcSecurityService) {

        let openIDImplicitFlowConfiguration = new OpenIDImplicitFlowConfiguration();
        openIDImplicitFlowConfiguration.stsServer = 'https://accounts.google.com';
        openIDImplicitFlowConfiguration.redirect_url = 'https://localhost:44386';
        openIDImplicitFlowConfiguration.client_id = '188968487735-b1hh7k87nkkh6vv84548sinju2kpr7gn.apps.googleusercontent.com';
        openIDImplicitFlowConfiguration.response_type = 'id_token token';
        openIDImplicitFlowConfiguration.scope = 'openid email profile';
        openIDImplicitFlowConfiguration.post_logout_redirect_uri = 'https://localhost:44386/Unauthorized';
        openIDImplicitFlowConfiguration.startup_route = '/home';
        openIDImplicitFlowConfiguration.forbidden_route = '/Forbidden';
        openIDImplicitFlowConfiguration.unauthorized_route = '/Unauthorized';
        openIDImplicitFlowConfiguration.log_console_warning_active = true;
        openIDImplicitFlowConfiguration.log_console_debug_active = true;
        openIDImplicitFlowConfiguration.max_id_token_iat_offset_allowed_in_seconds = 10;
        openIDImplicitFlowConfiguration.override_well_known_configuration = true;
        openIDImplicitFlowConfiguration.override_well_known_configuration_url = 'https://localhost:44386/wellknownconfiguration.json';

        // this.oidcSecurityService.setStorage(localStorage);
        this.oidcSecurityService.setupModule(openIDImplicitFlowConfiguration);
    }
}

Google OpenID does not support the .well-known/openid-configuration API as defined by OpenID Connect. Google blocks this due to a CORS security restriction, so it cannot be used from a browser application. As a workaround, the well known configuration can be configured locally when using angular-auth-oidc-client. The google OpenID configuration can be downloaded using the following URL:

https://accounts.google.com/.well-known/openid-configuration

The json file can then be downloaded and saved locally on your server, and configured in the authConfiguration class using the override_well_known_configuration_url property.

this.authConfiguration.override_well_known_configuration = true;
this.authConfiguration.override_well_known_configuration_url = 'https://localhost:44386/wellknownconfiguration.json';

The following json is the actual configuration from the google well known configuration endpoint. What’s really interesting is that the end session endpoint is not supported, which I find strange.
It’s also interesting to see that response_types_supported includes “token id_token”, which is not a valid value - it should be “id_token token”.

See: http://openid.net/specs/openid-connect-core-1_0.html

{
  "issuer": "https://accounts.google.com",
  "authorization_endpoint": "https://accounts.google.com/o/oauth2/v2/auth",
  "token_endpoint": "https://www.googleapis.com/oauth2/v4/token",
  "userinfo_endpoint": "https://www.googleapis.com/oauth2/v3/userinfo",
  "revocation_endpoint": "https://accounts.google.com/o/oauth2/revoke",
  "jwks_uri": "https://www.googleapis.com/oauth2/v3/certs",
  "response_types_supported": [
    "code",
    "token",
    "id_token",
    "code token",
    "code id_token",
    "token id_token",
    "code token id_token",
    "none"
  ],
  "subject_types_supported": [
    "public"
  ],
  "id_token_signing_alg_values_supported": [
    "RS256"
  ],
  "scopes_supported": [
    "openid",
    "email",
    "profile"
  ],
  "token_endpoint_auth_methods_supported": [
    "client_secret_post",
    "client_secret_basic"
  ],
  "claims_supported": [
    "aud",
    "email",
    "email_verified",
    "exp",
    "family_name",
    "given_name",
    "iat",
    "iss",
    "locale",
    "name",
    "picture",
    "sub"
  ],
  "code_challenge_methods_supported": [
    "plain",
    "S256"
  ]
}

The AppComponent implements the authorize and the authorizedCallback functions from the OidcSecurityService provider.

import { Component, OnInit } from '@angular/core';
import { Router } from '@angular/router';
import { Configuration } from './app.constants';
import { OidcSecurityService } from 'angular-auth-oidc-client';
import { ForbiddenComponent } from './forbidden/forbidden.component';
import { HomeComponent } from './home/home.component';
import { UnauthorizedComponent } from './unauthorized/unauthorized.component';


import './app.component.css';

@Component({
    selector: 'my-app',
    templateUrl: 'app.component.html'
})

export class AppComponent implements OnInit {

    constructor(public securityService: OidcSecurityService) {
    }

    ngOnInit() {
        if (window.location.hash) {
            this.securityService.authorizedCallback();
        }
    }

    login() {
        console.log('start login');
        this.securityService.authorize();
    }

    refreshSession() {
        console.log('start refreshSession');
        this.securityService.authorize();
    }

    logout() {
        console.log('start logoff');
        this.securityService.logoff();
    }
}

Running the application

Start the application using IIS Express in Visual Studio 2017. This starts with https://localhost:44386, which is configured in the launch settings file. If you use a different URL, you need to change this in the client application and also in the server's client credentials configuration.

Then login with your gmail account.

And you are redirected back to the SPA.

Links:

https://www.npmjs.com/package/angular-auth-oidc-client

https://developers.google.com/identity/protocols/OpenIDConnect



Damien Bowden: angular-auth-oidc-client Release, an OpenID Implicit Flow client in Angular

I have been blogging and writing code for Angular and OpenID Connect since Nov 1, 2015. Now after all this time, I have decided to create my first npm package for Angular: angular-auth-oidc-client, which makes it easier to use the Angular Auth OpenID client. This is now available on npm.

npm package: https://www.npmjs.com/package/angular-auth-oidc-client

github code: https://github.com/damienbod/angular-auth-oidc-client

issues: https://github.com/damienbod/angular-auth-oidc-client/issues

Using the npm package: see the readme

Samples: https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow/tree/npm-lib-test/src/AngularClient

OpenID Certification

This library is certified by OpenID Foundation. (Implicit RP)

Features:

Notes:

FabianGosebrink and Roberto Simonetti have decided to help and further develop this npm package, for which I’m very grateful. Anyone wishing to get involved, please do - create some issues and pull-requests. Help is always most welcome.

The next step is to do the OpenID Relying Parties certification.



Andrew Lock: Automatically validating anti-forgery tokens in ASP.NET Core with the AutoValidateAntiforgeryTokenAttribute

Automatically validating anti-forgery tokens in ASP.NET Core with the AutoValidateAntiforgeryTokenAttribute

This quick post is a response to a question about anti-forgery tokens I saw on twitter:

Anti-forgery tokens are a security mechanism to defend against cross-site request forgery (CSRF) attacks. Marius Schulz shared a solution to this problem in a blog post in which he creates a simple middleware to automatically validate the tokens sent in the request. This works, but there's actually an even simpler solution I wanted to share: the built-in AutoValidateAntiforgeryTokenAttribute.

tl;dr Add the AutoValidateAntiforgeryTokenAttribute as a global MVC filter to automatically validate all appropriate action methods.

Defending against cross-site request forgery in ASP.NET Core

I won't go into CSRF attacks in detail - I recommend you check out the docs for details if this is all new to you. In essence, when you send a form to the user, you add an extra hidden field that includes one half of a cryptographic token. Additionally, a cookie is set with the other half of the token. When the form is posted to the server, the two halves of the token are verified, ensuring only valid requests can be made.

Note, this post assumes you are using Razor and server-side rendering. You can use a similar approach to protect your API calls, but I won't go into that here.

In ASP.NET Core, the tokens are added to your forms automatically when you use the asp-* tag helpers, for example:

<form asp-controller="Manage" asp-action="ChangePassword" method="post">  
   <!-- Form details -->
</form>  

This will generate markup similar to the following. You can see the hidden input with the anti-forgery token:

<form method="post" action="/Manage/ChangePassword">  
  <!-- Form details -->
  <input name="__RequestVerificationToken" type="hidden" value="CfDJ8NrAkSldwD9CpLR...LongValueHere!" />
</form>  

Note, in ASP.NET Core 2.0, ASP.NET Core will add anti-forgery tokens to all your forms, whether you use the asp-* tag helpers or not.

Adding the form field is just one part of the requirement, you also need to actually check that the tokens are valid on the server side. You can do this by decorating your controller actions with the [ValidateAntiForgeryToken] attribute. You'll need to add it to all of your POST actions to properly protect your application:

public class ManageController  
{
  [HttpPost]
  [ValidateAntiForgeryToken]
  public IActionResult ChangePassword()
  {
    // ...
    return View();
  }
}

This works, but is a bit of a pain - you have to decorate each of your POST action methods with the attribute. If you forget, you won't get an error; the action just won't be protected.

Automatically validating all appropriate actions

MVC has the concept of "Global filters", which are applied to every action in your MVC app. Unfortunately we can't just add the [ValidateAntiForgeryToken] attribute globally. We won't receive anti-forgery tokens for certain types of requests like GET or HEAD - if we applied [ValidateAntiForgeryToken] globally, then all of those requests would throw validation errors.

Luckily, ASP.NET Core provides another attribute for just such a use, the [AutoValidateAntiForgeryToken] attribute. This works identically to the [ValidateAntiForgeryToken] attribute, except that it ignores "safe" methods like GET and HEAD.

Adding the attribute to your application is simple - just add it to the global filters collection in your Startup class, when calling AddMvc().

public class Startup  
{
  public void ConfigureServices(IServiceCollection services)
  {
    services.AddMvc(options =>
    {
        options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute());
    });
  }
}

Job done! No need to decorate all your action methods with attributes, you will just be protected automatically!
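
One related aside (my own addition - the attribute is built-in, but the endpoint here is hypothetical): if you have a specific POST action that genuinely shouldn't require a token, such as a webhook receiver called by an external service, you can opt that one action out of the global filter with [IgnoreAntiforgeryToken]:

[HttpPost]
[IgnoreAntiforgeryToken] // exempts just this action from the global filter
public IActionResult ExternalWebhook()
{
    // handle the external callback
    return Ok();
}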


Damien Bowden: OpenID Connect Session Management using an Angular application and IdentityServer4

The article shows how OpenID Connect Session Management 1.0 can be implemented in an Angular application. The specification provides a way of monitoring the user session on the server using iframes, and IdentityServer4 implements the server side of the specification. Note that this only monitors the server session; it does not monitor the lifecycle of the tokens used in the browser application, and has nothing to do with the OpenID tokens used by the SPA application.

Code: https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow

Code: Angular auth module

Other posts in this series:

The OidcSecurityCheckSession class implements the Session Management from the specification. The init function creates an iframe and adds it to the window document in the DOM. The iframe uses the ‘authWellKnownEndpoints.check_session_iframe’ value, which is the connect/checksession API obtained from the ‘.well-known/openid-configuration’ service.

The init function also adds the event for the message, which is specified in the OpenID Connect Session Management documentation.

init() {
	this.sessionIframe = window.document.createElement('iframe');
	this.oidcSecurityCommon.logDebug(this.sessionIframe);
	this.sessionIframe.style.display = 'none';
	this.sessionIframe.src = this.authWellKnownEndpoints.check_session_iframe;

	window.document.body.appendChild(this.sessionIframe);
	this.iframeMessageEvent = this.messageHandler.bind(this);
	window.addEventListener('message', this.iframeMessageEvent, false);

	return Observable.create((observer: Observer<any>) => {
		this.sessionIframe.onload = () => {
			observer.next(this);
			observer.complete();
		}
	});
}

The pollServerSession function posts a message every 3 seconds to the iframe, which checks if the session on the server has changed. The session_state is the value returned in the HTTP callback from a successful authorization.

pollServerSession(session_state: any, clientId: any) {
	let source = Observable.timer(3000, 3000)
		.timeInterval()
		.pluck('interval')
		.take(10000);

	let subscription = source.subscribe(() => {
			this.oidcSecurityCommon.logDebug(this.sessionIframe);
			this.sessionIframe.contentWindow.postMessage(clientId + ' ' + session_state, this.authConfiguration.stsServer);
		},
		(err: any) => {
			this.oidcSecurityCommon.logError('pollServerSession error: ' + err);
		},
		() => {
			this.oidcSecurityCommon.logDebug('checksession pollServerSession completed');
		});
}

The messageHandler handles the callback from the iframe. If the server session has changed, the output onCheckSessionChanged event is triggered.

private messageHandler(e: any) {
	if (e.origin === this.authConfiguration.stsServer &&
		e.source === this.sessionIframe.contentWindow
	) {
		if (e.data === 'error') {
			this.oidcSecurityCommon.logWarning('error from checksession messageHandler');
		} else if (e.data === 'changed') {
			this.onCheckSessionChanged.emit();
		} else {
			this.oidcSecurityCommon.logDebug(e.data + ' from checksession messageHandler');
		}
	}
}

The onCheckSessionChanged is a public EventEmitter output for this provider.

@Output() onCheckSessionChanged: EventEmitter<any> = new EventEmitter<any>(true);

The OidcSecurityService provider subscribes to the onCheckSessionChanged event and uses its onCheckSessionChanged function to handle this event.

this.oidcSecurityCheckSession.onCheckSessionChanged.subscribe(() => { this.onCheckSessionChanged(); });

After a successful login, and if the tokens are valid, the client application checks if the checksession should be used, and calls the init method and subscribes to it. When ready, it uses the pollServerSession function to activate the monitoring.

if (this.authConfiguration.start_checksession) {
  this.oidcSecurityCheckSession.init().subscribe(() => {
    this.oidcSecurityCheckSession.pollServerSession(
      result.session_state,
      this.authConfiguration.client_id
    );
  });
}

The onCheckSessionChanged function sets a public boolean which can be used to implement the required application logic when the server session has changed.

private onCheckSessionChanged() {
  this.oidcSecurityCommon.logDebug('onCheckSessionChanged');
  this.checkSessionChanged = true;
}

In this demo, the navigation bar allows the Angular application to refresh the session if the server session has changed.

<li>
  <a class="navigationLinkButton" *ngIf="securityService.checkSessionChanged" (click)="refreshSession()">Refresh Session</a>
</li>

When the application is started, the unchanged message is returned.

Then open the server application in a tab in the same browser session, and logout.

And the client application notices that the server session has changed and can react as required.

Links:

http://openid.net/specs/openid-connect-session-1_0-ID4.html

http://docs.identityserver.io/en/release/



Anuraj Parameswaran: Connecting to Azure Cosmos DB emulator from RoboMongo

This post is about connecting to the Azure Cosmos DB emulator from Robo Mongo. Azure Cosmos DB is Microsoft’s globally distributed multi-model database, and is a superset of Azure DocumentDB. Due to some challenges, one of our teams decided to try some new NoSQL databases, and one of the options was DocumentDB. I found it quite a good option, since it supports the Mongo protocol, so existing apps can work without much change. So I decided to explore that. As a first step, I downloaded the DocumentDB emulator, now the Azure Cosmos DB emulator. I installed and started the emulator, which opens the Data Explorer web page (https://localhost:8081/_explorer/index.html) that helps to explore the documents inside the database. Then I tried to connect to the same with Robo Mongo (a free Mongo DB client, which can be downloaded from here). But it was not working - I was getting some errors. I spent some time looking for similar issues and blog posts on how to connect from Robo Mongo to the DocumentDB emulator, but I couldn’t find anything useful. After spending almost a day, I finally figured out the solution. Here are the steps.


Andrew Lock: Defining custom logging messages with LoggerMessage.Define in ASP.NET Core

Defining custom logging messages with LoggerMessage.Define in ASP.NET Core

One of the nice features introduced in ASP.NET Core is the universal logging infrastructure. In this post I take a look at one of the helper methods in the ASP.NET Core logging library, and how you can use it to efficiently log messages in your libraries.

Before we get into it, I'll give a quick overview of the ASP.NET Core logging infrastructure. Feel free to skip down to the section on helper methods if you already know the basics of how it works.

Logging overview

The logging infrastructure is exposed in the form of the ILogger<T> and ILoggerFactory interfaces, which you can inject into your services using dependency injection to log messages in a number of ways. For example, in the following ProductController, we log a message when the View action is invoked.

public class ProductController : Controller  
{
    private readonly ILogger _logger;

    public ProductController(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<ProductController>();
    }

    public IActionResult View(int id)
    {
        _logger.LogDebug("View Product called with id {Id}", id);
        return View();
    }
}

The ILogger can log messages at a number of different levels given by LogLevel:

public enum LogLevel  
{
    Trace = 0,
    Debug = 1,
    Information = 2,
    Warning = 3,
    Error = 4,
    Critical = 5,
}
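
Each log level has a corresponding ILogger extension method. As a quick illustration, here is the ProductController action from above writing at a few different levels (the extra messages are mine):

public IActionResult View(int id)  
{
    _logger.LogTrace("Entering View action");                  // most verbose, rarely enabled
    _logger.LogDebug("View Product called with id {Id}", id);  // diagnostic detail
    _logger.LogInformation("Product {Id} viewed", id);         // general application flow
    _logger.LogWarning("Product {Id} is low on stock", id);    // unexpected, but handled
    return View();
}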

The final aspect of the logging infrastructure is logging providers. These are the "sinks" that the logs are actually written to. You can plug in multiple providers, and write logs to a variety of different locations - for example the console, a file, or Serilog.
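
For example, in an ASP.NET Core 1.X app the console and debug providers are typically wired up at the top of Startup.Configure; a minimal sketch:

public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)  
{
    // Each provider added here receives every log written via ILogger
    loggerFactory.AddConsole(); // write logs to the console
    loggerFactory.AddDebug();   // write logs to the debug output window

    app.UseMvcWithDefaultRoute();
}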

One of the nice things about the logging infrastructure, and the ubiquitous use of DI in the ASP.NET Core libraries, is that these same interfaces and classes are used throughout the libraries themselves, as well as in your application.

Controlling the logs produced by different categories

When creating a logger with CreateLogger<T>, the type name you pass in is used to create a category for the logs. At the application level, you can choose which LogLevels are actually output for a given category.

For example, you could specify that by default, Debug or higher level logs are written to providers, but for logs written by services in the Microsoft namespace, only logs of Warning level or above are written.

With this approach you can control the amount of logging produced by the various libraries in your application, increasing logging levels for only those areas that need them.

For example, the following screenshot shows the logs generated by default in an MVC application when we first hit the home page:

Defining custom logging messages with LoggerMessage.Define in ASP.NET Core

That's a lot of logs! And notice that most of them are coming from internal components, from classes in the Microsoft namespace. It's basically just noise in this case. We can restrict the Microsoft namespace to logs of Warning level or above, but keep other logs at the Debug level:

Defining custom logging messages with LoggerMessage.Define in ASP.NET Core

With the default ASP.NET Core 1.X template, all you need to do is change the appsettings.json file, and set the log levels to Warning as appropriate:

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Debug",
      "System": "Warning",
      "Microsoft": "Warning"
    }
  }
}

Note, in ASP.NET Core 1.X, filtering is a little bit of an afterthought. Some logging providers, such as the Console provider, let you specify how to filter the categories. Alternatively, you can apply filters to all providers together using the WithFilter method. The logging in ASP.NET Core 2.0 will likely tidy up this approach - it is due to be updated in preview 2, so I won't dwell on it here.
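
For completeness, here's a sketch of the WithFilter approach (from the Microsoft.Extensions.Logging.Filter package), applying the same rules as the JSON config above to all providers at once:

loggerFactory  
    .WithFilter(new FilterLoggerSettings
    {
        // Minimum level per category prefix; "Default" covers everything else
        { "Microsoft", LogLevel.Warning },
        { "System", LogLevel.Warning },
        { "Default", LogLevel.Debug }
    })
    .AddConsole();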

Filtering considerations

It's a good idea to instrument your code with as many log messages as are useful, and you can filter out the most verbose Trace and Debug log levels. These filtering capabilities are a really useful way of cutting through the cruft, but there's one particular downside.

If you add thousands of logging statements, that will inevitably start having a performance impact on your libraries, simply by virtue of the fact that you're running more code. One solution to this is to check whether the particular log level is enabled for the current logger, before trying to write to it. For example:

public class ProductController : Controller  
{
    public IActionResult Index()
    {
        if(_logger.IsEnabled(LogLevel.Debug))
        {
            _logger.LogDebug("Calling HomeController.Index");
        }
        return View();
    }
}

That's a bit of a pain to have to do every time you write a log though, right? Luckily, ASP.NET Core comes with a helper class, LoggerMessage, to make using this pattern easier.

Creating logging delegates with the LoggerMessage Helper

The static LoggerMessage class is found in the Microsoft.Extensions.Logging.Abstractions package, and contains a number of static, generic Define methods that return an Action<> which in turn can be used to create strongly-typed logging extensions. That probably all sounds a bit confusing, so let's break it down from the top.

The strongly-typed logging extension methods

We'll start with the logging code we want to use in our application. In this really simple example, we're just going to log the time that the HomeController.Index action method executes:

public class HomeController : Controller  
{
    public IActionResult Index()
    {
        _logger.HomeControllerIndexExecuting(DateTimeOffset.Now);
        return View();
    }
}

The HomeControllerIndexExecuting method is a custom extension method that takes a DateTimeOffset parameter. We can define it as follows:

internal static class LoggerExtensions  
{
    private static Action<ILogger, DateTimeOffset, Exception> _homeControllerIndexExecuting;

    static LoggerExtensions()
    {
        _homeControllerIndexExecuting = LoggerMessage.Define<DateTimeOffset>(
            logLevel: LogLevel.Debug,
            eventId: 1,
            formatString: "Executing 'Index' action at '{StartTime}'");
    }

    public static void HomeControllerIndexExecuting(
        this ILogger logger, DateTimeOffset executeTime)
    {
        _homeControllerIndexExecuting(logger, executeTime, null);
    }
}

The HomeControllerIndexExecuting method is an ILogger extension method that invokes a static Action field on our static LoggerExtensions class. The _homeControllerIndexExecuting field is initialised using the ASP.NET Core LoggerMessage.Define method, by providing a logLevel, an eventId and the formatString to use to create the log.

That probably seems like a lot of effort, right? Why not just call _logger.LogDebug() directly in the HomeControllerIndexExecuting extension method?

Remember, our goal was to improve performance by only logging messages for unfiltered categories, without having to explicitly write if(_logger.IsEnabled(LogLevel.Debug)) every time. The answer lies in the LoggerMessage.Define<T> method.

The LoggerMessage.Define method

The purpose of the static LoggerMessage.Define<T> method is three-fold:

  1. Encapsulate the if statement to allow performant logging
  2. Enforce that the correct strongly-typed parameters are passed when logging the message
  3. Ensure the log message contains the correct number of placeholders for parameters

This post is long enough, so I won't go into too much detail, but in summary, this gives you an idea of what the method looks like:

public static class LoggerMessage  
{
    public static Action<ILogger, T1, Exception> Define<T1>(
        LogLevel logLevel, EventId eventId, string formatString)
    {
        var formatter = CreateLogValuesFormatter(
            formatString, expectedNamedParameterCount: 1);

        return (logger, arg1, exception) =>
        {
            if (logger.IsEnabled(logLevel))
            {
                logger.Log(logLevel, eventId, new LogValues<T1>(formatter, arg1), exception, LogValues<T1>.Callback);
            }
        };
    }
}

First, this method performs a check that the provided format string ("Executing 'Index' action at '{StartTime}'") contains the correct number of named parameters. Then, it returns an action method with the necessary number of generic parameters, including the if(logger.IsEnabled) clause. There are multiple overloads of the Define method that take 0-6 generic parameters, depending on the number you need for your custom logging message.
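
For instance, a message with two placeholders uses the two-generic-parameter overload. Here's a short sketch (the names are mine, not from the post):

internal static class ProductLoggerExtensions  
{
    // Two placeholders in the format string, so we use Define<T1, T2>
    private static readonly Action<ILogger, string, int, Exception> _productViewed =
        LoggerMessage.Define<string, int>(
            logLevel: LogLevel.Information,
            eventId: 2,
            formatString: "Product '{ProductName}' viewed with id {Id}");

    public static void ProductViewed(this ILogger logger, string productName, int id)
        => _productViewed(logger, productName, id, null);
}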

If you want to see the details, including how the LogValues<> class works, check out the source code on GitHub.

Summary

If you take one thing away from this post, consider the way you log messages in your own application or libraries: create a static LoggerExtensions class, use LoggerMessage.Define to create a set of static fields, and add strongly typed extension methods like HomeControllerIndexExecuting using the static Action<> fields.

If you want to see the logger messages in action, check out the sample app in the logging repo, or take a look at the ImageSharp library, which puts them to good effect.


Anuraj Parameswaran: Using Node Services in ASP.NET Core

This post is about running JavaScript code on the server. A huge number of useful, high-quality web-related open source packages are available in the form of Node Package Manager (NPM) modules. NPM is the largest repository of open-source software packages in the world, and the Microsoft.AspNetCore.NodeServices package means that you can use any of them in your ASP.NET Core application.
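
As a rough sketch of the idea (the controller and the addNumbers.js module here are hypothetical): after calling services.AddNodeServices() in ConfigureServices, you can inject INodeServices and invoke a JavaScript module that completes a Node-style callback:

public class CalculatorController : Controller  
{
    public async Task<IActionResult> Add(
        [FromServices] INodeServices nodeServices, int a, int b)
    {
        // Runs the hypothetical addNumbers.js in a Node process, e.g.
        // module.exports = (callback, a, b) => callback(null, a + b);
        var sum = await nodeServices.InvokeAsync<int>("./addNumbers", a, b);
        return Content($"{a} + {b} = {sum}");
    }
}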


Anuraj Parameswaran: Detecting AJAX Requests in ASP.NET Core

This post is about detecting AJAX requests in ASP.NET Core. In earlier versions of ASP.NET MVC, developers could easily determine whether a request was made via AJAX using the IsAjaxRequest() method available on the Request object. In this post I implement similar functionality in ASP.NET Core.
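
A minimal sketch of one common implementation (the extension class is mine): check for the X-Requested-With header that jQuery and most other AJAX libraries add to XMLHttpRequest calls.

public static class HttpRequestExtensions  
{
    public static bool IsAjaxRequest(this HttpRequest request)
    {
        // jQuery (and most AJAX libraries) set this header on XHR requests
        return string.Equals(request.Headers["X-Requested-With"],
            "XMLHttpRequest", StringComparison.OrdinalIgnoreCase);
    }
}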


Damien Bowden: Implementing a silent token renew in Angular for the OpenID Connect Implicit flow

This article shows how to implement a silent token renew in Angular using IdentityServer4 as the security token service server. The SPA Angular client implements the OpenID Connect Implicit Flow ‘id_token token’. When the id_token expires, the client requests new tokens from the server, so that the user does not need to authorise again.

Code: https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow

Other posts in this series:

When a user of the client app authorises for the first time, after a successful login on the STS server, the AuthorizedCallback function is called in the Angular application. If the server response and the tokens are successfully validated, as defined in the OpenID Connect specification, the silent renew is initialized, and the token validation method is called.

 public AuthorizedCallback() {
        
	...
		
	if (authResponseIsValid) {
		
		...

		if (this._configuration.silent_renew) {
			this._oidcSecuritySilentRenew.initRenew();
		}

		this.runTokenValidatation();

		this._router.navigate([this._configuration.startupRoute]);
	} else {
		this.ResetAuthorizationData();
		this._router.navigate(['/Unauthorized']);
	}

}

The OidcSecuritySilentRenew TypeScript class implements the iframe which is used for the silent token renew. This iframe is added to the parent HTML page. The renew is implemented in an iframe because we do not want the Angular application to refresh; otherwise, for example, we would lose form data.

...

@Injectable()
export class OidcSecuritySilentRenew {

    private expiresIn: number;
    private authorizationTime: number;
    private renewInSeconds = 30;

    private _sessionIframe: any;

    constructor(private _configuration: AuthConfiguration) {
    }

    public initRenew() {
        this._sessionIframe = window.document.createElement('iframe');
        console.log(this._sessionIframe);
        this._sessionIframe.style.display = 'none';

        window.document.body.appendChild(this._sessionIframe);
    }

    ...
}

The runTokenValidatation function starts an Observable timer. The application subscribes to the Observable, which executes every 3 seconds. The id_token is validated; more precisely, the code checks that the id_token has not expired. If the token has expired and the silent_renew configuration has been activated, the RefreshSession function will be called to get new tokens.

private runTokenValidatation() {
	let source = Observable.timer(3000, 3000)
		.timeInterval()
		.pluck('interval')
		.take(10000);

	let subscription = source.subscribe(() => {
		if (this._isAuthorized) {
			if (this.oidcSecurityValidation.IsTokenExpired(this.retrieve('authorizationDataIdToken'))) {
				console.log('IsAuthorized: isTokenExpired');

				if (this._configuration.silent_renew) {
					this.RefreshSession();
				} else {
					this.ResetAuthorizationData();
				}
			}
		}
	},
	function (err: any) {
		console.log('Error: ' + err);
	},
	function () {
		console.log('Completed');
	});
}

The RefreshSession function creates the required nonce and state which is used for the OpenID Implicit Flow validation and starts an authentication and authorization of the client application and the user.

public RefreshSession() {
        console.log('BEGIN refresh session Authorize');

        let nonce = 'N' + Math.random() + '' + Date.now();
        let state = Date.now() + '' + Math.random();

        this.store('authStateControl', state);
        this.store('authNonce', nonce);
        console.log('RefreshSession created. adding myautostate: ' + this.retrieve('authStateControl'));

        let url = this.createAuthorizeUrl(nonce, state);

        this._oidcSecuritySilentRenew.startRenew(url);
    }

The startRenew function sets the iframe src to the URL for the OpenID Connect flow. If successful, the id_token and the access_token are returned and the application runs without any interruption.

public startRenew(url: string) {
        this._sessionIframe.src = url;

        return new Promise((resolve) => {
            this._sessionIframe.onload = () => {
                resolve();
            }
        });
}

IdentityServer4 Implicit Flow configuration

The STS server, built with IdentityServer4, implements the server side of the OpenID Connect Implicit Flow. The AccessTokenLifetime and the IdentityTokenLifetime properties are set to 30s and 10s. After 10s the id_token will expire and the client application will request new tokens. The access_token is valid for 30s, so that any client API requests will not fail. If you set these values to the same value, then the client will have to request new tokens before the id_token expires.

new Client
{
	ClientName = "angularclient",
	ClientId = "angularclient",
	AccessTokenType = AccessTokenType.Reference,
	AccessTokenLifetime = 30,
	IdentityTokenLifetime = 10,
	AllowedGrantTypes = GrantTypes.Implicit,
	AllowAccessTokensViaBrowser = true,
	RedirectUris = new List<string>
	{
		"https://localhost:44311"

	},
	PostLogoutRedirectUris = new List<string>
	{
		"https://localhost:44311/Unauthorized"
	},
	AllowedCorsOrigins = new List<string>
	{
		"https://localhost:44311",
		"http://localhost:44311"
	},
	AllowedScopes = new List<string>
	{
		"openid",
		"dataEventRecords",
		"dataeventrecordsscope",
		"securedFiles",
		"securedfilesscope",
		"role"
	}
}

When the application is run, the user can log in, and the tokens are refreshed every ten seconds as configured on the server.

Links:

http://openid.net/specs/openid-connect-implicit-1_0.html

https://github.com/IdentityServer/IdentityServer4

https://identityserver4.readthedocs.io/en/release/



Andrew Lock: Using ImageSharp to resize images in ASP.NET Core - Part 4: saving to disk

Using ImageSharp to resize images in ASP.NET Core - Part 4: saving to disk

This is the next in a series of posts on using ImageSharp to resize images in an ASP.NET Core application. Previously, I showed how you could define an MVC action that takes a path to a file stored in the wwwroot folder, resizes it, and serves the resized file.

The biggest problem with this is that resizing an image is relatively expensive, taking multiple seconds to process large images. In the previous post I showed how you could use the IDistributedCache interface to cache the resized image, and use that for subsequent requests.

This works pretty well, and avoids the need to process the image multiple times, but in the implementation I showed, there were a couple of drawbacks. The main issue was the lack of caching headers and features at the HTTP level - whenever the image is requested, the MVC action will return the whole data to the browser, even though nothing has changed.

In the following image, you can see that every request returns a 200 response and the full image data. The subsequent requests are all much faster than the original because we're using data cached in the IDistributedCache, but the browser is not caching our resized image.

Using ImageSharp to resize images in ASP.NET Core - Part 4: saving to disk

In this post I show a different approach to caching the data - instead of storing the file in an IDistributedCache, we instead write the file to disk in the wwwroot folder. We then use StaticFileMiddleware to serve the file directly, without ever hitting the MVC middleware after the initial request. This lets us take advantage of the built in caching headers and etag behaviour that comes with the StaticFileMiddleware.

Note: James Jackson-South has been working hard on some extensible ImageSharp middleware to provide the functionality in these blog posts. He's even written a blog post introducing it, so check it out!

The system design

The approach I'm using in this post is shown in the following figure:

Using ImageSharp to resize images in ASP.NET Core - Part 4: saving to disk

With this design, a request to resize an image, e.g. /resized/200/120/original.jpg, goes through a number of steps:

  1. A request arrives for /resized/200/120/original.jpg
  2. The StaticFileMiddleware looks for the original.jpg file in the folder wwwroot/resized/200/120/, but it doesn't exist, so the request passes on to the MvcMiddleware
  3. The MvcMiddleware invokes the ResizeImage action, which saves the resized file in the folder wwwroot/resized/200/120/.
  4. On the next request, the StaticFileMiddleware finds the resized image in the wwwroot folder, and serves it as usual, short-circuiting the middleware pipeline before the MvcMiddleware can run.
  5. All subsequent requests for the resized file are served by the StaticFileMiddleware.

Writing a resized file to the wwwroot folder

After we first resize an image using the MvcMiddleware, we need to store the resized image in the wwwroot folder. In ASP.NET Core there is an abstraction called IFileProvider which can be used to obtain information about files. The IHostingEnvironment includes two such IFileProviders:

  • ContentRootFileProvider - an IFileProvider for the Content Root, where your application files are stored, usually the project root or publish folder.
  • WebRootFileProvider - an IFileProvider for the wwwroot folder

We can use the WebRootFileProvider to open a stream to our destination file, which we will write the resized image to. The outline of the method is as follows, with the preconditions and DoS protection code removed for brevity:

public class HomeController : Controller  
{
    private readonly IFileProvider _fileProvider;
    public HomeController(IHostingEnvironment env)
    {
        _fileProvider = env.WebRootFileProvider;
    }

    [Route("/resized/{width}/{height}/{*url}")]
    public IActionResult ResizeImage(string url, int width, int height)
    {
        // Preconditions and sanitisation
        // Check the original image exists
        var originalPath = PathString.FromUriComponent("/" + url);
        var fileInfo = _fileProvider.GetFileInfo(originalPath);
        if (!fileInfo.Exists) { return NotFound(); }

        // Replace the extension on the file (we only resize to jpg currently) 
        var resizedPath = ReplaceExtension($"/resized/{width}/{height}/{url}");

        // Use the IFileProvider to get an IFileInfo
        var resizedInfo = _fileProvider.GetFileInfo(resizedPath);
        // Create the destination folder tree if it doesn't already exist
        Directory.CreateDirectory(Path.GetDirectoryName(resizedInfo.PhysicalPath));

        // resize the image and save it to the output stream
        using (var outputStream = new FileStream(resizedInfo.PhysicalPath, FileMode.CreateNew))
        using (var inputStream = fileInfo.CreateReadStream())
        using (var image = Image.Load(inputStream))
        {
            image
                .Resize(width, height)
                .SaveAsJpeg(outputStream);
        }

        return PhysicalFile(resizedInfo.PhysicalPath, "image/jpg");
    }

    private static string ReplaceExtension(string wwwRelativePath)
    {
        return Path.Combine(
            Path.GetDirectoryName(wwwRelativePath),
            Path.GetFileNameWithoutExtension(wwwRelativePath)) + ".jpg";
    }
}

The overall design of this method is pretty simple.

  1. Check the original file exists.
  2. Create the destination file path. We're replacing the file extension with jpg at the moment because we are always resizing to a jpeg.
  3. Obtain an IFileInfo for the destination file. This is relative to the wwwroot folder as we are using the WebRootFileProvider on IHostingEnvironment.
  4. Open a file stream for the destination file.
  5. Open the original image, resize it, and save it to the output file stream.

With this method, we have everything we need to cache files in the wwwroot folder. Even better, nothing else needs to change in our Startup file, or anywhere else in our program.

Trying it out

Time to take it for a spin! If we make a number of requests for the same page again, and compare it to the first image in this post, you can see that we still have the fast response times for requests after the first, as we only resize the image once. However, you can also see that some of the requests now return a 304 response, and just 208 bytes of data. The browser uses its standard HTTP caching mechanisms on the client side, rather than caching only on the server.

Using ImageSharp to resize images in ASP.NET Core - Part 4: saving to disk

This is made possible by the etag and Last-Modified headers sent automatically by the StaticFileMiddleware.

Using ImageSharp to resize images in ASP.NET Core - Part 4: saving to disk

Note, we are not actually sending any caching headers by default - I wrote a post on how to do this here, which gives you control over how much caching browsers should do.
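
If you do want browsers to cache the resized images aggressively, one option (a sketch, with an illustrative max-age) is to set the header when configuring the StaticFileMiddleware in Startup.Configure:

app.UseStaticFiles(new StaticFileOptions  
{
    OnPrepareResponse = ctx =>
    {
        // Cache resized images on the client for 10 minutes
        ctx.Context.Response.Headers["Cache-Control"] = "public,max-age=600";
    }
});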

It might seem a little odd that there are three 200 requests before we start getting 304s. This is because:

  1. The first request is handled by the ResizeImage MVC method, but we are not adding any cache-related headers like ETag etc - we are just serving the file using the PhysicalFileResult.
  2. The second request is handled by the StaticFileMiddleware. It returns the file from disk, including an ETag and a Last-Modified header.
  3. The third request is made with additional If-Modified-Since and If-None-Match headers. This returns the image data with a new ETag.
  4. Subsequent requests send the new ETag in the If-None-Match header, and the server responds with 304s.

I'm not entirely sure why we need three requests for the whole data here - it seems like two would suffice, given that the third request is made with the If-Modified-Since and If-None-Match headers. Why would the ETag need to change between requests two and three? I presume this is just standard behaviour though, and something I need to look at in more detail when I have time!

Summary

This post takes an alternative approach to caching compared to my last post on ImageSharp. Instead of caching the resized images in an IDistributedCache, we save them directly to the wwwroot folder. That way we can use all of the built in file response capabilities of the StaticFileMiddleware, without having to write it ourselves.

Having said that, James Jackson-South has written some middleware to take a similar approach, which handles all the caching headers for you. If this series has been of interest, I encourage you to check it out!


Dominick Baier: Techorama 2017

Again Techorama was an awesome conference – kudos to the organizers!

Seth and Channel9 recorded my talk and also did an interview – so if you couldn’t be there in person, there are some updates about IdentityServer4 and identity in general.


Filed under: .NET Security, ASP.NET Core, IdentityServer, OAuth, OpenID Connect, WebAPI


Andrew Lock: The Microsoft.AspNetCore.All metapackage is huge, and that's awesome, thanks to the .NET Core runtime store

The Microsoft.AspNetCore.All metapackage is huge, and that's awesome, thanks to the .NET Core runtime store

In the ASP.NET Core 2.0 preview 1 release, Microsoft have released a new metapackage called Microsoft.AspNetCore.All. This package consists of every single package shipped by Microsoft as part of ASP.NET Core 2.0, including all the Razor packages, MVC packages, Entity Framework Core packages, everything!

Now, if your first reaction to this news is anything like mine, you're quite possibly cradling your head in your hands, thinking they've completely thrown in the towel on creating a small, modular framework. But don't worry, there's more to it than that, and it should help deal with a number of problems.

Note, the Microsoft.AspNetCore.All metapackage will only ever target netcoreapp2.0, as it relies on features of .NET Core, as you'll see. If you are targeting .NET Framework, then you will still be able to use the individual ASP.NET Core packages (as they will target netstandard2.0); it is just the metapackage that will not be available.

Version issues

From the outset, ASP.NET Core has been plagued by confusion around versioning. First of all, you have the confusion between the .NET Core runtime version and the .NET tools version (I discussed those in a previous post). On top of that, you have the somewhat abstract version of ASP.NET Core itself, such as ASP.NET Core 1.0 or 1.1.

And then you have the package versions themselves. Every package in ASP.NET Core revs its version number semi-independently. This makes sense semantically - if a package doesn't change between two versions of ASP.NET Core, then the version numbers shouldn't change.

Unfortunately this can make it tricky to work out which version of a package to install. For example, version 1.0.5 of the Microsoft.AspNetCore metapackage (that I talk about here) adds the following packages to your application:

"Microsoft.AspNetCore.Diagnostics" (>= 1.0.3)
"Microsoft.AspNetCore.Hosting" (>= 1.0.3)
"Microsoft.AspNetCore.Routing" (>= 1.0.4)
"Microsoft.AspNetCore.Server.IISIntegration" (>= 1.0.3)
"Microsoft.AspNetCore.Server.Kestrel" (>= 1.0.4)
"Microsoft.Extensions.Configuration.EnvironmentVariables" (>= 1.0.2)
"Microsoft.Extensions.Configuration.FileExtensions" (>= 1.0.2)
"Microsoft.Extensions.Configuration.Json" (>= 1.0.2)
"Microsoft.Extensions.Logging" (>= 1.0.2)
"Microsoft.Extensions.Logging.Console" (>= 1.0.2)
"Microsoft.Extensions.Options.ConfigurationExtensions" (>= 1.0.2)

As you can see, the version numbers are all over the place - it's basically impossible to know which version to install without polling NuGet to find the latest version of each package. And updating your packages becomes a nightmare, especially if you're working outside of VS or Rider, and are manually editing your .csproj files.

With ASP.NET Core 2.0, Microsoft are moving to a different approach. While strict semantic versioning and many small packages are a great idea, the reality sure gets confusing, especially when you have a large project.

The Microsoft.AspNetCore.All metapackage references all of the ASP.NET Core packages, but it does so with a single version number:

<ItemGroup>  
  <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0-preview1-final" />
</ItemGroup>  

Updating your ASP.NET Core packages will just involve updating this one package. No more trying to work out which packages have changed and which haven't.

"But wait", I hear you cry, "if that metapackage includes every package in ASP.NET Core, won't that make my published application huge?". Actually, no, thanks to a new feature in .NET Core 2.0.

Number of packages

.NET Core 2.0 brings a new feature called the Runtime Store. This essentially lets you pre-install packages on a machine, in a central location, so you don't have to include them in the publish output of your individual apps.

You can think of the Runtime Store as a Global Assembly Cache (GAC) for .NET Core. As well as serving as a central location for packages, it can also ngen the libraries so they don't have to be JIT-ed by your application, which improves startup times for your app.

When you install the .NET Core 2.0 preview 1, it will also install all of the ASP.NET Core packages into the runtime store. That means they're always going to be on a machine that has the .NET Core 2.0 runtime installed. If you've installed .NET Core 2.0 preview 1, you can find the store here: C:\Program Files\dotnet\store.

The Microsoft.AspNetCore.All metapackage is huge, and that's awesome, thanks to the .NET Core runtime store

Surprise, surprise, this setup works brilliantly with the Microsoft.AspNetCore.All metapackage!

When you publish an ASP.NET Core 2.0 app that references Microsoft.AspNetCore.All, you'll find that your publish folder is miraculously free of the ASP.NET Core packages like Microsoft.AspNetCore.Hosting and Microsoft.Extensions.Logging.

Just compare the output of the basic dotnet new mvc template in an ASP.NET Core 1.0 app to the 2.0 preview 1. Look at the size of that scrollbar in the top image - 94 items vs 13 items in ASP.NET Core 2.0.

The Microsoft.AspNetCore.All metapackage is huge, and that's awesome, thanks to the .NET Core runtime store

Making the publish size smaller has a number of obvious benefits, but one of the biggest in my mind is making the size of your Docker images smaller. Docker works in layers; when you deploy your app with Docker, the whole of ASP.NET Core will now be included in the base image, instead of as part of your app. Shipping updates to your app only requires shipping the files in your publish folder; a smaller publish folder means a smaller Docker image file, and a smaller image file means a faster deployment! Win win.

Discoverability of APIs

The final advantage of the Microsoft.AspNetCore.All package is a massive boon to new developers - discoverability of the APIs available. One of the problems with having many small packages in ASP.NET Core 1.X is that you won't necessarily know what's available to you, or which package an API lives in, unless you go checking the docs or browsing on NuGet.

Now, granted, VS 2017 has got much better at this, as you can turn on suggestions for common NuGet packages, but you still have to know what APIs are available to you:

The Microsoft.AspNetCore.All metapackage is huge, and that's awesome, thanks to the .NET Core runtime store

If you're working in VS Code, or you don't already know the APIs you're after, this is no use to you.

In ASP.NET Core 2.0, that problem goes away. Your project already references all the packages, so the APIs, and IntelliSense suggestions, are already there!

The Microsoft.AspNetCore.All metapackage is huge, and that's awesome, thanks to the .NET Core runtime store

In my mind, this is a massive help for new users. I still find myself trying to remember the exact API I'm looking for to prompt VS to install the associated package, and this problem is just completely solved!

Summary

The Microsoft.AspNetCore.All metapackage includes every package released by Microsoft as part of ASP.NET Core. This has a number of benefits:

  • It reduces the package version management burden
  • It reduces the number of packages that need to be published with the app
  • It makes it easy to use the core packages without having to add them to the solution explicitly

These benefits should make writing ASP.NET Core 2.0 apps against .NET Core just that little bit easier, so kudos to all those involved.


Ben Foster: Applying IP Address restrictions in AWS API Gateway

Recently I've been exploring the features of the AWS API Gateway to see if it's a viable routing solution for some of our microservices hosted in ECS.

One of these services is a new onboarding API that we wish to make available to a trusted third party. To keep the integration as simple as possible we opted for API key based authentication.

In addition to supporting API Key authentication, API Gateway also allows you to configure plans with usage policies, which met our second requirement, to provide rate limits on this API.

As an additional level of security, we decided to whitelist the IP Addresses that could hit the API. The way you configure this is not quite what I expected since it's not a setting directly within API Gateway but instead done using IAM policies.

Below is an example API within API Gateway. I want to apply an IP Address restriction to the webhooks resource:

The first step is to configure your resource Authorization settings to use IAM. Select the resource method (in my case, ANY) and then AWS_IAM in the Authorization select list:

Next go to IAM and create a new Policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "execute-api:Invoke"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "xxx.xx.xx.xx/32"
                }
            },
            "Resource": "arn:aws:execute-api:*:*:*"
        }
    ]
}

Note that this policy allows invocation of all resources within all APIs in API Gateway from the specified IP Address. You'll want to restrict this to a specific API or resource, using the format:

arn:aws:execute-api:region:account-id:api-id/stage/METHOD_HTTP_VERB/Resource-path
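
For example, a policy limited to POSTs to the webhooks resource on the prod stage of a single API might use a Resource like the following (the region, account id and API id are placeholders):

arn:aws:execute-api:eu-west-1:123456789012:a1b2c3d4e5/prod/POST/webhooks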

It was my assumption that I would attach this policy to my API Gateway role and hey presto, I'd have my IP restriction in place. However, the policy is instead applied to a user, who then needs to sign the request using their access keys.

This can be tested using Postman:

With this done you should now be able to test your IP address restrictions. One thing I did notice is that policy changes do not seem to take effect immediately - instead I had to disable and re-enable IAM authorization on the resource after changing my policy.

Final thoughts

AWS API Gateway is a great service but I find it odd that it doesn't support what I would class as a standard feature of API Gateways. Given that the API I was testing is only going to be used by a single client, creating an IAM user isn't the end of the world, however, I wouldn't want to do this for APIs with a large number of clients.

Finally, in order to make use of usage plans you need to require an API key. This means that to achieve IP restrictions and rate limiting, clients will need to send two authentication tokens, which isn't an ideal integration experience.

When I first started my investigation it was based on achieving the following architecture:

Unfortunately running API Gateway in front of an ELB still requires your load balancers to be publicly accessible, which makes the security features void if a client can figure out your ELB address. It seems API Gateway is geared more towards Lambda than ELB, so it looks like we'll need to consider other options for now.


Andrew Lock: How to use multiple hosting environments on the same machine in ASP.NET Core

How to use multiple hosting environments on the same machine in ASP.NET Core

This short post is in response to a comment I received on a post I wrote a while ago, about how to set the hosting environment in ASP.NET Core. It's a question I've heard a couple of times, so I thought I'd write it up here.

The question by Denis Zavershinskiy is as follows:

Do you know if there is a way to overwrite environment variable name? For example, I want my CoolProject to take environment name not from ASPNETCORE_ENVIRONMENT but from COOL_PROJ_ENV. Is it possible?

The answer to that question is a little nuanced. If you already have an app deployed, and want to switch the environment for it without changing other apps on that machine, then you can't do it with environment variables. Those obviously affect the whole environment!

tl;dr; Create a custom configuration object in your Program.cs file, load the environment variable using a custom key, and call UseEnvironment on the WebHostBuilder.

However, if this is a capability you think you will need, you can use a similar approach to the one I use in that post to set the environment using command line arguments.

This approach involves building a new IConfiguration object, and passing that in to the WebHostBuilder on application startup. This lets you load configuration from any source, just as you would in your normal startup method, and pass that configuration to the WebHostBuilder using UseConfiguration. The WebHostBuilder will look for a key named "Environment" in this configuration, and use that as the environment.

For example, if you use the following configuration:

var config = new ConfigurationBuilder()  
    .AddCommandLine(args)
    .Build();

var host = new WebHostBuilder()  
    .UseConfiguration(config)
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseKestrel()
    .UseIISIntegration()
    .UseStartup<Startup>()
    .Build();

You can pass any setting value with this setup, including the "environment" setting:

> dotnet run --environment "MyCustomEnv"

Project TestApp (.NETCoreApp,Version=v1.0) was previously compiled. Skipping compilation.

Hosting environment: MyCustomEnv  
Content root path: C:\Projects\Repos\MyCoolProj\src\MyCoolProj  
Now listening on: http://localhost:5000  
Application started. Press Ctrl+C to shut down.  

This is fine if you can use command line arguments like this, but what if you want to use environment variables? Again, the problem is that they're shared between all apps on a machine.

However, you can use a similar approach, coupled with the UseEnvironment extension method, to set a different environment for each app on the machine. This will override the ASPNETCORE_ENVIRONMENT value, if it exists, with the value you provide for this application alone. No other applications on the machine will be affected.

public class Program  
{
    public static void Main(string[] args)
    {
        const string EnvironmentKey = "MYCOOLPROJECT_ENVIRONMENT";

        var config = new ConfigurationBuilder()
            .AddEnvironmentVariables()
            .Build();

        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseEnvironment(config[EnvironmentKey])
            .UseIISIntegration()
            .UseStartup<Startup>()
            .UseApplicationInsights()
            .Build();

        host.Run();
    }
}

To test this out, I added the MYCOOLPROJECT_ENVIRONMENT key with a value of Staging to the launchSettings.json file VS uses when running the app:

{
  "profiles": {
    "EnvironmentTest": {
      "commandName": "Project",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        "MYCOOLPROJECT_ENVIRONMENT": "Staging"
      },
      "applicationUrl": "http://localhost:56172"
    }
  }
}

Running the app using F5 shows that we have correctly picked up the Staging value using our custom environment variable:

Hosting environment: Staging  
Content root path: C:\Users\Sock\Repos\MyCoolProj\src\MyCoolProj  
Now listening on: http://localhost:56172  
Application started. Press Ctrl+C to shut down.  

With this approach you can effectively have a per-app environment variable that you can use to configure the environment for an app individually.

Summary

On shared hosting, you may be in a situation where you want to use a different IHostingEnvironment for multiple apps on the same machine. You can achieve this with the approach outlined in this post: build an IConfiguration object and pass a value from it to the WebHostBuilder.UseEnvironment extension method.


Andrew Lock: Using Razor Pages to simplify basic actions in ASP.NET Core 2.0 preview 1

Using Razor Pages to simplify basic actions in ASP.NET Core 2.0 preview 1

One of the brand new features added to ASP.NET Core in version 2.0 preview 1 is Razor Pages. This harks back to the (terribly named) ASP.NET Web Pages framework, which was a simpler, page-based, alternative framework to MVC. It was a completely different stack, and was sort of .NET's answer to PHP.

I know, that might make you shudder, but I've actually had a fair amount of success with it. It allowed our designers who had HTML knowledge but no C# to be productive, without having to deal with full MVC. Web developers might turn their nose up at that, but there's really no reason for a designer to go there, and would you want them to? A developer could just jump on for any dynamic bits of code that were beyond the designer, but otherwise you could leave them to it.

This post dips a toe into the justification of Razor Pages and when you might want to use it in your own ASP.NET Core applications.

ASP.NET Core 2.0 includes a new razor template. This shows the default MVC template completely converted to Razor Pages. Mike Brind has a great rundown of this template on his blog. In this post I'm looking at a hybrid approach - only converting the simplest pages to Razor Pages.

What are Razor Pages?

The new Razor Pages treads a line somewhere between ASP.NET Web Pages and full MVC. It's still a "page based" model, in that all the code for a single page lives in one file, and that file is used to describe the URL structure of the app.

This makes it very easy to reason about the behaviour of simple apps, and removes some of the ceremony of creating new pages. Instead of having to create a controller, create an action, create a model, and create a view, you can simply create the Razor Page instead.

Now granted, for cases where you're doing anything much more than displaying a View, MVC may well be the correct thing to use. And the great thing about Razor Pages is that it's built directly on top of the MVC stack. Essentially all of the primitives like model binding and validation are available to you in Razor Pages. Or even better, you could use Razor Pages for most of the app, and fallback to full MVC for the complex actions.

Putting it to the test - the MVC template

As an example, I thought I'd convert the default MVC template to use Razor pages for the simple actions. This is remarkably simple to do and highlights the advantages of using Razor Pages if you don't have much logic in your app.

The MVC template

The default MVC template is pretty simple - it consists of a HomeController with four actions:

  • Index()
  • About()
  • Contact()
  • Error()

The first three of these are really simple actions that optionally set some ViewData and return a ViewResult. These are prime candidates for converting to Razor Pages.

public class HomeController : Controller  
{
    public IActionResult Index()
    {
        return View();
    }

    public IActionResult About()
    {
        ViewData["Message"] = "Your application description page.";

        return View();
    }

    public IActionResult Contact()
    {
        ViewData["Message"] = "Your contact page.";

        return View();
    }

    public IActionResult Error()
    {
        return View(new ErrorViewModel 
        { 
            RequestId = Activity.Current?.Id ?? HttpContext.TraceIdentifier 
        });
    }
}

Each of the simple methods renders a .cshtml file in the Views folder of the project. These are perfect for us to convert to Razor Pages - they are basically just HTML files with some dynamic regions. For example, the About.cshtml file looks like this:

@{
    ViewData["Title"] = "About";
}
<h2>@ViewData["Title"].</h2>  
<h3>@ViewData["Message"]</h3>

<p>Use this area to provide additional information.</p>  

So let's convert these to Razor Pages.

The Error action is only slightly more complicated, and could easily be converted to a Razor Page too (as it is in the dotnet new razor template). To highlight the ability to mix the two however, I'm going to leave that as an MVC action.

The Razor Pages version

Converting these pages is super simple - I'll take the About page as an example.

To turn the page into a Razor Page instead of a view, you simply need to add the @page directive to the top of the page, and move the file from Views/Home/About.cshtml to Pages/Home/About.cshtml.

Razor Pages are stored in the Pages folder by default.

Finally, we can move the small piece of logic, setting the message, from the action method to the page itself. We could store this in the ViewData["Message"], but there's not a lot of need in this case as it's not used in parent Layouts or anything, so we'll just write it in the markup.

@page

@{
    ViewData["Title"] = "About";
}
<h2>@ViewData["Title"].</h2>  
<h3>Your application description page.</h3>

<p>Use this area to provide additional information.</p>  

With that, we can delete the About action method, and give it a test!

Using Razor Pages to simplify basic actions in ASP.NET Core 2.0 preview 1

Hmmm, that's not quite right - we appear to have lost the layout! The Views folder contains a _ViewStart.cshtml that defines the default Layout for all the views in its subfolders (unless overridden by a sub-folder _ViewStart.cshtml or the .cshtml file itself). Razor Pages uses the exact same concept of Layouts and partial views, so we can add a _ViewStart.cshtml to the Pages folder too:

@{
    Layout = "_Layout";
}

Note that we don't need to create separate layout files themselves, we can still reference the ones in the Views/Shared folder - the Views/Shared folder is searched by Razor Pages (as well as the Pages/Home folder in this case, and the Pages/Shared folder). With the Layout statement in place, we get our original About page back!

Using Razor Pages to simplify basic actions in ASP.NET Core 2.0 preview 1

And that's pretty much that! We can easily convert the other pages in the same way, and we can still access them at the same URL paths: /Home/About, /Home/Contact and /Home/Index.

Limitations

I actually bent the truth slightly there - we can't quite use all the same URLs. With the full-MVC template and the default MVC routing template, {controller=Home}/{action=Index}/{id?}, the following URLs all route to the same MVC HomeController.Index action:

  • /Home/Index
  • /Home
  • /

Those first two work perfectly well with Razor Pages, in the same way as with MVC, thanks to Index.cshtml being considered the default document for the folder. The last one is a problem however - you can't get this result with Razor Pages alone. And that's not really surprising - a page-based model implies a single URL - which is essentially what we have.

As far as I can tell, if you want a single action to cover all these URLs you'll need to fall back to MVC. But then, at least you can, and in those cases, maybe you should! Alternatively, you could create two pages, one for / and one for /Home, and use partial views to avoid duplication of markup between them.

So where does that leave us? Well if we just look at the number of files, you can see that we actually have more now than before. The difference is that you can add a new URL handler/Page by just creating a single .cshtml file in the appropriate Pages folder, no model or controller files needed!

Using Razor Pages to simplify basic actions in ASP.NET Core 2.0 preview 1

Whether this works for you will depend on the sort of application you're building. A simple, mostly-static app with some dynamic sections might be well suited to building a Razor Pages app, while a more fully-featured, complicated app most definitely would not!

The real winner here is the ability to combine Razor with full MVC - you can start out building your application with Razor Pages, and if you find some complex pages you have a choice. You can either add the functionality using Razor Pages alone (as in Mike Brind's follow up post on handlers), or you can drop back to full MVC, whichever works best for you.

Summary

This post just scratches the surface of what you can do with Razor Pages - you can do far more complicated, MVC-like things if you like, but personally I feel like if you start doing that, you should just switch to proper MVC actions and encapsulate your logic properly. If you're interested, install the .NET Core 2.0 preview 1, and check out the docs!


Andrew Lock: Exploring Program.cs, Startup.cs and CreateDefaultBuilder in ASP.NET Core 2 preview 1

Exploring Program.cs, Startup.cs and CreateDefaultBuilder in ASP.NET Core 2 preview 1

One of the goals in ASP.NET Core 2.0 has been to clean up the basic templates, simplify the basic use-cases, and make it easier to get started with new projects.

This is evident in the new Program and Startup classes, which, on the face of it, are much simpler than their ASP.NET Core 1.0 counterparts. In this post, I'll take a look at the new WebHost.CreateDefaultBuilder() method, and see how it bootstraps your application.

Program and Startup responsibilities in ASP.NET Core 1.X

In ASP.NET Core 1.X, the Program class is used to setup the IWebHost. The default template for a web app looks something like this:

public class Program  
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}

This relatively compact file does a number of things:

  • Configure a web server (Kestrel)
  • Set the content root directory (the directory containing the appsettings.json file etc.)
  • Set up IIS integration
  • Define the Startup class to use
  • Build() and Run() the IWebHost

The Startup class varies considerably depending on the application you are building. The MVC template shown below is a fairly typical starter template:

public class Startup  
{
    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables();
        Configuration = builder.Build();
    }

    public IConfigurationRoot Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Add framework services.
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        loggerFactory.AddConsole(Configuration.GetSection("Logging"));
        loggerFactory.AddDebug();

        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
            app.UseBrowserLink();
        }
        else
        {
            app.UseExceptionHandler("/Home/Error");
        }

        app.UseStaticFiles();

        app.UseMvc(routes =>
        {
            routes.MapRoute(
                name: "default",
                template: "{controller=Home}/{action=Index}/{id?}");
        });
    }
}

This is a far more substantial file that has four main responsibilities:

  • Set up configuration in the Startup constructor
  • Set up dependency injection in ConfigureServices
  • Set up logging in Configure
  • Set up the middleware pipeline in Configure

This all works pretty well, but there are a number of points that the ASP.NET team considered to be less than ideal.

First, setting up configuration is relatively verbose, but also pretty standard; it generally doesn't need to vary much either between applications, or as the application evolves.

Secondly, logging is set up in the Configure method of Startup, after configuration and DI have been configured. This has two drawbacks. On the one hand, it makes logging feel a little like a second-class citizen - Configure is generally used to set up the middleware pipeline, so having the logging config in there doesn't make a huge amount of sense. Also, it means you can't easily log the bootstrapping of the application itself. There are ways to do it, but it's not obvious.

In ASP.NET Core 2.0 preview 1, these two points have been addressed by modifying the IWebHost and by creating a helper method for setting up your apps.

Program and Startup responsibilities in ASP.NET Core 2.0 preview 1

In ASP.NET Core 2.0 preview 1, the responsibilities of the IWebHost have changed somewhat. As well as having the same responsibilities as before, the IWebHost has gained two more:

  • Set up configuration
  • Set up logging

In addition, ASP.NET Core 2.0 introduces a helper method, CreateDefaultBuilder, that encapsulates most of the common code found in Program.cs, as well as taking care of configuration and logging!

public class Program  
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}

As you can see, there's no mention of Kestrel, IIS integration, configuration etc - that's all handled by the CreateDefaultBuilder method as you'll see in a sec.

Moving the configuration and logging code into this method also simplifies the Startup file:

public class Startup  
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }
        else
        {
            app.UseExceptionHandler("/Home/Error");
        }

        app.UseStaticFiles();

        app.UseMvc(routes =>
        {
            routes.MapRoute(
                name: "default",
                template: "{controller=Home}/{action=Index}/{id?}");
        });
    }
}

This class is pretty much identical to the 1.0 class with the logging and most of the configuration code removed. Notice too that the IConfiguration object is injected into the class and stored in a property, instead of the configuration being created in the constructor itself.

This is new to ASP.NET Core 2.0 - the IConfiguration object is registered with DI by default. In 1.X you had to register the IConfigurationRoot yourself if you needed it to be available in DI.
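
As a quick sketch of what that enables (the service and the configuration key are hypothetical), any class resolved from DI can now take an IConfiguration dependency directly:

public class EmailSender  
{
    private readonly IConfiguration _configuration;

    // In 2.0, IConfiguration can be constructor-injected anywhere, not just Startup
    public EmailSender(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    // "Email:SmtpHost" is a hypothetical key, read from the merged configuration
    public string SmtpHost => _configuration["Email:SmtpHost"];
}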

My initial reaction to CreateDefaultBuilder was that it was just obfuscating the setup, and felt a bit like a step backwards, but in hindsight, that was more just a "who moved my cheese" reaction. There's nothing magical about CreateDefaultBuilder; it just hides a certain amount of standard, ceremonial code that would often go unchanged anyway.

The WebHost.CreateDefaultBuilder helper method

In order to properly understand the static CreateDefaultBuilder helper method, I decided to take a peek at the source code on GitHub! You'll be pleased to know, if you're used to ASP.NET Core 1.X, most of this will look remarkably familiar.

public static IWebHostBuilder CreateDefaultBuilder(string[] args)  
{
    var builder = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .ConfigureAppConfiguration((hostingContext, config) => { /* setup config */  })
        .ConfigureLogging((hostingContext, logging) =>  { /* setup logging */  })
        .UseIISIntegration()
        .UseDefaultServiceProvider((context, options) =>  { /* setup the DI container to use */  })
        .ConfigureServices(services => 
        {
            services.AddTransient<IConfigureOptions<KestrelServerOptions>, KestrelServerOptionsSetup>();
        });

    return builder;
}

There are a few new methods in there that I've elided for now, which I'll explore in follow-up posts. You can see that this method is largely doing the same work that Program did in ASP.NET Core 1.0 - it sets up Kestrel, defines the ContentRoot, and sets up IIS integration, just like before. Additionally, it does a number of other things:

  • ConfigureAppConfiguration - this contains the configuration code that used to live in the Startup constructor
  • ConfigureLogging - sets up the logging that used to live in Startup.Configure
  • UseDefaultServiceProvider - I'll go into this in a later post, but this sets up the built-in DI container, and lets you customise its behaviour
  • ConfigureServices - Adds additional services needed by components added to the IWebHost. In particular, it configures the Kestrel server options, which lets you easily define your web host setup as part of your normal config.

I'll take a closer look at configuration and logging in this post, and dive into the other methods in a later post.

Setting up app configuration in ConfigureAppConfiguration

The ConfigureAppConfiguration method takes a lambda with two parameters - a WebHostBuilderContext called hostingContext, and an IConfigurationBuilder instance, config:

ConfigureAppConfiguration((hostingContext, config) =>  
{
    var env = hostingContext.HostingEnvironment;

    config.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true, reloadOnChange: true);

    if (env.IsDevelopment())
    {
        var appAssembly = Assembly.Load(new AssemblyName(env.ApplicationName));
        if (appAssembly != null)
        {
            config.AddUserSecrets(appAssembly, optional: true);
        }
    }

    config.AddEnvironmentVariables();

    if (args != null)
    {
        config.AddCommandLine(args);
    }
});

As you can see, the hostingContext parameter exposes the IHostingEnvironment (whether we're running in "Development" or "Production") as a property, HostingEnvironment. Apart from that, the bulk of the code should be pretty familiar if you've used ASP.NET Core 1.X.

The one exception to this is setting up User Secrets, which is done a little differently in ASP.NET Core 2.0. This uses an assembly reference to load the user secrets, though you can still use the generic config.AddUserSecrets<T> version in your own config.

In ASP.NET Core 2.0, the UserSecretsId is stored in an assembly attribute, hence the need for the Assembly code above. You can still define the id to use in your csproj file - it will be embedded in an assembly level attribute at compile time.
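If you're building configuration yourself, the generic version looks something like the sketch below - assuming the Microsoft.Extensions.Configuration.UserSecrets package is referenced, and that Startup lives in the assembly whose csproj defines the UserSecretsId:

using Microsoft.Extensions.Configuration;

// e.g. inside the Startup constructor
var config = new ConfigurationBuilder()
    // Reads the UserSecretsId from the attribute embedded in
    // the assembly that contains Startup
    .AddUserSecrets<Startup>()
    .Build();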

This is all pretty standard stuff. It loads configuration from the following providers, in the following order:

  • appsettings.json (optional)
  • appsettings.{env.EnvironmentName}.json (optional)
  • User Secrets
  • Environment Variables
  • Command line arguments

The main difference between this method and the approach in ASP.NET Core 1.X is the location - config is now part of the WebHost itself, instead of sliding in through the backdoor, so to speak, by using the Startup constructor. Also, the initial creation and final call to Build() on the IConfigurationBuilder instance happen in the web host itself, instead of being handled by you.
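If the defaults don't quite cover you, the ConfigureAppConfiguration delegates run in the order they're registered, so you can chain another call after CreateDefaultBuilder to layer extra sources on top. A sketch - customsettings.json is a hypothetical file, not a convention:

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            // Runs after the default sources, so values here
            // override appsettings.json and friends
            config.AddJsonFile("customsettings.json", optional: true);
        })
        .UseStartup<Startup>()
        .Build();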

Setting up logging in ConfigureLogging

The ConfigureLogging method also takes a lambda with two parameters - a WebHostBuilderContext called hostingContext, just like the configuration method, and a LoggerFactory instance, logging:

ConfigureLogging((hostingContext, logging) =>  
{
    logging.UseConfiguration(hostingContext.Configuration.GetSection("Logging"));
    logging.AddConsole();
    logging.AddDebug();
});

The logging infrastructure has changed a little in ASP.NET Core 2.0, but broadly speaking, this code echoes what you would find in the Configure method of an ASP.NET Core 1.0 app, setting up the Console and Debug log providers. You can use the UseConfiguration method to set up the log levels by accessing the already-defined IConfiguration, exposed on hostingContext.Configuration.
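As with configuration, you can chain your own ConfigureLogging call after CreateDefaultBuilder to add providers on top of the Console and Debug defaults. A sketch, assuming the Serilog.Extensions.Logging package for the AddSerilog call:

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .ConfigureLogging((hostingContext, logging) =>
        {
            // Console and Debug are already added by the defaults;
            // this adds Serilog alongside them
            logging.AddSerilog();
        })
        .UseStartup<Startup>()
        .Build();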

Customising your WebHostBuilder

Hopefully this dive into the WebHost.CreateDefaultBuilder helper helps show why the ASP.NET team decided to introduce it. There's a fair amount of ceremony in getting an app up and running, and this makes it far simpler.

But what if this isn't the setup you want? Well, then you don't have to use it! There's nothing special about the helper - you could copy-and-paste its code into your own app, customise it, and you're good to go.

That's not quite true - the KestrelServerOptionsSetup class referenced in ConfigureServices is currently internal, so you would have to remove this. I'll dive into what this does in a later post.
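To give a feel for what that might look like, here's a stripped-down sketch of a customised builder - no IIS integration, a reduced set of config sources, and the internal KestrelServerOptionsSetup registration removed:

public static IWebHost BuildWebHost(string[] args) =>
    new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            // Keep only the sources you actually use
            config.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
                  .AddEnvironmentVariables();
        })
        .ConfigureLogging((hostingContext, logging) =>
        {
            logging.AddConsole();
        })
        .UseStartup<Startup>()
        .Build();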

Summary

This post looked at some of the differences between Program.cs and Startup.cs in moving from ASP.NET Core 1.X to 2.0 preview 1. In particular, I took a slightly deeper look into the new WebHost.CreateDefaultBuilder method which aims to simplify the initial bootstrapping of your app. If you're not keen on the choices it makes for you, or you need to customise them, you can still do this, exactly as you did before. The choice is yours!


Anuraj Parameswaran: What is new in ASP.NET Core 2.0 Preview

This post is about the new features of ASP.NET Core 2.0 Preview. Microsoft announced ASP.NET Core 2.0 Preview 1 at Build 2017; this post introduces some of those features.


Damien Bowden: Anti-Forgery Validation with ASP.NET Core MVC and Angular

This article shows how API requests from an Angular SPA inside an ASP.NET Core MVC application can be protected against XSRF by adding an anti-forgery cookie. This is required, if using Angular, when using cookies to persist the auth token.

Code: https://github.com/damienbod/AspNetCoreMvcAngular

Blogs in this Series

Cross Site Request Forgery

XSRF is an attack where a hacker makes malicious requests to a web app while the user of the website is already authenticated. This can happen when a website uses cookies to persist the auth token of a trusted user. A pure SPA should not use cookies for this, as it is hard to protect against such attacks. With a server side rendered application, like ASP.NET Core MVC, anti-forgery cookies can be used to protect against this, which makes it safer to use cookies.

Angular automatically adds the X-XSRF-TOKEN HTTP header with the anti-forgery cookie value to each request if the XSRF-TOKEN cookie is present. ASP.NET Core needs to know that it must use this header to validate the request. This can be configured in the ConfigureServices method in the Startup class.

public void ConfigureServices(IServiceCollection services)
{
	...
	services.AddAntiforgery(options => options.HeaderName = "X-XSRF-TOKEN");
	services.AddMvc();
}

The XSRF-TOKEN cookie is added to the response of the HTTP request. The cookie is a secure cookie, so it is only sent over HTTPS and not HTTP. All plain HTTP requests will fail and return a 400 response. The cookie is created and added each time a new server URL is called, but not for an API call.

app.Use(async (context, next) =>
{
	string path = context.Request.Path.Value;
	if (path != null && !path.ToLower().Contains("/api"))
	{
		// XSRF-TOKEN used by angular in the $http if provided
		var tokens = antiforgery.GetAndStoreTokens(context);
		context.Response.Cookies.Append("XSRF-TOKEN", 
		  tokens.RequestToken, new CookieOptions { 
		    HttpOnly = false, 
		    Secure = true 
		  }
		);
	}

	...

	await next();
});

The API uses the ValidateAntiForgeryToken attribute to check if the request contains the correct value for the XSRF-TOKEN cookie. If this is incorrect, or not sent, the request is rejected with a 400 response. The attribute is required when data is changed. HTTP GET requests should not require this attribute.

[HttpPut]
[ValidateAntiForgeryToken]
[Route("{id:int}")]
public IActionResult Update(int id, [FromBody]Thing thing)
{
	...

	return Ok(updatedThing);
}
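If you don't want to remember the attribute on every unsafe action, ASP.NET Core MVC also ships an AutoValidateAntiforgeryTokenAttribute (in Microsoft.AspNetCore.Mvc) that can be registered as a global filter - a minimal sketch:

public void ConfigureServices(IServiceCollection services)
{
	services.AddAntiforgery(options => options.HeaderName = "X-XSRF-TOKEN");

	// Validates the anti-forgery token for every unsafe HTTP method
	// (POST, PUT, DELETE etc.) without attributing each action
	services.AddMvc(options =>
	{
		options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute());
	});
}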

You can check the cookies in the Chrome browser.

Or in Firefox using Firebug (Cookies Tab).

Links:

https://docs.microsoft.com/en-us/aspnet/core/security/anti-request-forgery

http://www.fiyazhasan.me/angularjs-anti-forgery-with-asp-net-core/

http://www.dotnetcurry.com/aspnet/1343/aspnet-core-csrf-antiforgery-token

http://stackoverflow.com/questions/43312973/how-to-implement-x-xsrf-token-with-angular2-app-and-net-core-app/43313402

https://en.wikipedia.org/wiki/Cross-site_request_forgery

https://stormpath.com/blog/angular-xsrf



Anuraj Parameswaran: Implementing localization in ASP.NET Core

This post is about implementing localization in ASP.NET Core. Localization is the process of adapting a globalized app, which you have already processed for localizability, to a particular culture/locale. Localization is one of the best practices in web application development.


Damien Bowden: Secure ASP.NET Core MVC with Angular using IdentityServer4 OpenID Connect Hybrid Flow

This article shows how an ASP.NET Core MVC application using Angular in the razor views can be secured using IdentityServer4 and the OpenID Connect Hybrid Flow. The user interface uses server side rendering for the MVC views and the Angular app is then implemented in the razor view. The required security features can be added to the application easily using ASP.NET Core, which makes it safe to use the OpenID Connect Hybrid flow; once the user is authenticated and authorised, the token is saved in a secure cookie. This is not an SPA application - it is an ASP.NET Core MVC application with Angular in the razor view. If you are implementing an SPA application, you should use the OpenID Connect Implicit Flow.

Code: https://github.com/damienbod/AspNetCoreMvcAngular

Blogs in this Series

IdentityServer4 configuration for OpenID Connect Hybrid Flow

IdentityServer4 is implemented using ASP.NET Core Identity with SQLite. The application implements the OpenID Connect Hybrid flow. The client is configured to allow the required scopes - for example, the ‘openid’ scope must be added - as well as the RedirectUris property, which must match the redirect URL configured on the client by the ASP.NET Core OpenID Connect middleware.

using IdentityServer4;
using IdentityServer4.Models;
using System.Collections.Generic;

namespace QuickstartIdentityServer
{
    public class Config
    {
        public static IEnumerable<IdentityResource> GetIdentityResources()
        {
            return new List<IdentityResource>
            {
                new IdentityResources.OpenId(),
                new IdentityResources.Profile(),
                new IdentityResources.Email(),
                new IdentityResource("thingsscope", new [] { "role", "admin", "user", "thingsapi" })
            };
        }

        public static IEnumerable<ApiResource> GetApiResources()
        {
            return new List<ApiResource>
            {
                new ApiResource("thingsscope")
                {
                    ApiSecrets =
                    {
                        new Secret("thingsscopeSecret".Sha256())
                    },
                    Scopes =
                    {
                        new Scope
                        {
                            Name = "thingsscope",
                            DisplayName = "Scope for the thingsscope ApiResource"
                        }
                    },
                    UserClaims = { "role", "admin", "user", "thingsapi" }
                }
            };
        }

        // clients want to access resources (aka scopes)
        public static IEnumerable<Client> GetClients()
        {
            // client credentials client
            return new List<Client>
            {
                new Client
                {
                    ClientName = "angularmvcmixedclient",
                    ClientId = "angularmvcmixedclient",
                    ClientSecrets = {new Secret("thingsscopeSecret".Sha256()) },
                    AllowedGrantTypes = GrantTypes.Hybrid,
                    AllowOfflineAccess = true,
                    RedirectUris = { "https://localhost:44341/signin-oidc" },
                    PostLogoutRedirectUris = { "https://localhost:44341/signout-callback-oidc" },
                    AllowedCorsOrigins = new List<string>
                    {
                        "https://localhost:44341/"
                    },
                    AllowedScopes = new List<string>
                    {
                        IdentityServerConstants.StandardScopes.OpenId,
                        IdentityServerConstants.StandardScopes.Profile,
                        IdentityServerConstants.StandardScopes.OfflineAccess,
                        "thingsscope",
                        "role"

                    }
                }
            };
        }
    }
}

MVC Angular Client Configuration

The ASP.NET Core MVC application with Angular is implemented as shown in this post: Using Angular in an ASP.NET Core View with Webpack

The cookie authentication middleware is used to store the access token in a cookie, once the user is authorised and authenticated. The OpenIdConnectAuthentication middleware is used to redirect the user to the STS server if the user is not authenticated. The SaveTokens property is set so that the token is persisted in the secure cookie.

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
	AuthenticationScheme = "Cookies"
});

app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions
{
	AuthenticationScheme = "oidc",
	SignInScheme = "Cookies",

	Authority = "https://localhost:44348",
	RequireHttpsMetadata = true,

	ClientId = "angularmvcmixedclient",
	ClientSecret = "thingsscopeSecret",

	ResponseType = "code id_token",
	Scope = { "openid", "profile", "thingsscope" },

	GetClaimsFromUserInfoEndpoint = true,
	SaveTokens = true
});
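Because SaveTokens is set, the access token ends up inside the authentication cookie and can be read back by name when the application needs to call a protected API. A sketch using the ASP.NET Core 1.x authentication API, inside a controller action:

using Microsoft.AspNetCore.Authentication;

public async Task<IActionResult> CallApi()
{
	// SaveTokens = true persists the tokens in the auth cookie,
	// from where they can be retrieved by name
	var accessToken = await HttpContext.Authentication.GetTokenAsync("access_token");

	// ... attach accessToken as a Bearer token on the API request
	return Ok();
}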

The Authorize attribute is used to secure the MVC controller or API.

using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Authorization;

namespace AspNetCoreMvcAngular.Controllers
{
    [Authorize]
    public class HomeController : Microsoft.AspNetCore.Mvc.Controller
    {
        public IActionResult Index()
        {
            return View();
        }

        public IActionResult Error()
        {
            return View();
        }
    }
}

CSP: Content Security Policy in the HTTP Headers

Content Security Policy helps you reduce XSS risks. The really brilliant NWebSec middleware can be used to implement this as required. Thanks to André N. Klingsheim for this excellent library. The middleware adds the headers to the HTTP responses.

https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP

In this configuration, mixed content is not allowed and unsafe inline styles are allowed.

app.UseCsp(opts => opts
	.BlockAllMixedContent()
	.ScriptSources(s => s.Self()).ScriptSources(s => s.UnsafeEval())
	.StyleSources(s => s.UnsafeInline())
);

Set the Referrer-Policy in the HTTP Header

This allows us to restrict the amount of information being passed on to other sites when referring to other sites.

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy

Scott Helme wrote a really good post on this:
https://scotthelme.co.uk/a-new-security-header-referrer-policy/

Again NWebSec middleware is used to implement this.

app.UseReferrerPolicy(opts => opts.NoReferrer());

Redirect Validation

You can secure the application so that only redirects to your own sites are allowed. For example, only a redirect to IdentityServer4 is allowed.

// Register this earlier if there's middleware that might redirect.
// The IdentityServer4 port needs to be added here. 
// If the IdentityServer4 runs on a different server, this configuration needs to be changed.
app.UseRedirectValidation(t => t.AllowSameHostRedirectsToHttps(44348)); 

Secure Cookies

Only secure cookies should be used to store the session information.

You can check this in the Chrome browser.

XFO: X-Frame-Options

The X-Frame-Options header can be used to prevent the application from being loaded in an IFrame. This helps protect against clickjacking.

https://developer.mozilla.org/de/docs/Web/HTTP/Headers/X-Frame-Options

app.UseXfo(xfo => xfo.Deny());

Configuring HSTS: Http Strict Transport Security

The Strict-Transport-Security HTTP header tells the browser to force HTTPS for a length of time.

app.UseHsts(hsts => hsts.MaxAge(365).IncludeSubdomains());

This still leaves TOFU (trust on first use) - the very first time the site is loaded.

Once you have a proper cert and a fixed URL, you can configure browsers to preload the HSTS settings for your website:

https://hstspreload.org/

https://www.owasp.org/index.php/HTTP_Strict_Transport_Security_Cheat_Sheet

X-Xss-Protection NWebSec

Adds middleware to the ASP.NET Core pipeline that sets the X-Xss-Protection header (Docs from NWebSec):

app.UseXXssProtection(options => options.EnabledWithBlockMode());

CORS

Only the required CORS origins should be enabled when implementing this; disable it as much as possible.
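A locked-down policy might look something like the sketch below - the origin is illustrative and should be replaced by the clients you actually serve:

public void ConfigureServices(IServiceCollection services)
{
	services.AddCors(options =>
	{
		// Allow only the specific origins, methods and headers required
		options.AddPolicy("AllowedOrigins", policyBuilder =>
		{
			policyBuilder
				.WithOrigins("https://localhost:44341")
				.WithMethods("GET", "POST", "PUT")
				.AllowAnyHeader();
		});
	});

	services.AddMvc();
}

// and in Configure:
// app.UseCors("AllowedOrigins");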

Cross Site Request Forgery XSRF

See this blog:
Anti-Forgery Validation with ASP.NET Core MVC and Angular

Validating the security headers

Once you start the application, you can check that all the security headers are added as required:

Here’s the Configure method with all the NWebsec app settings as well as the authentication middleware for the client MVC application.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory, IAntiforgery antiforgery)
{
	loggerFactory.AddConsole(Configuration.GetSection("Logging"));
	loggerFactory.AddDebug();
	loggerFactory.AddSerilog();

	//Registered before static files to always set header
	app.UseHsts(hsts => hsts.MaxAge(365).IncludeSubdomains());
	app.UseXContentTypeOptions();
	app.UseReferrerPolicy(opts => opts.NoReferrer());

	app.UseCsp(opts => opts
		.BlockAllMixedContent()
		.ScriptSources(s => s.Self()).ScriptSources(s => s.UnsafeEval())
		.StyleSources(s => s.UnsafeInline())
	);

	JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();

	if (env.IsDevelopment())
	{
		app.UseDeveloperExceptionPage();
	}
	else
	{
		app.UseExceptionHandler("/Home/Error");
	}

	app.UseCookieAuthentication(new CookieAuthenticationOptions
	{
		AuthenticationScheme = "Cookies"
	});

	app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions
	{
		AuthenticationScheme = "oidc",
		SignInScheme = "Cookies",

		Authority = "https://localhost:44348",
		RequireHttpsMetadata = true,

		ClientId = "angularmvcmixedclient",
		ClientSecret = "thingsscopeSecret",

		ResponseType = "code id_token",
		Scope = { "openid", "profile", "thingsscope" },

		GetClaimsFromUserInfoEndpoint = true,
		SaveTokens = true
	});

	var angularRoutes = new[] {
		 "/default",
		 "/about"
	 };

	app.Use(async (context, next) =>
	{
		string path = context.Request.Path.Value;
		if (path != null && !path.ToLower().Contains("/api"))
		{
			// XSRF-TOKEN used by angular in the $http if provided
			var tokens = antiforgery.GetAndStoreTokens(context);
			context.Response.Cookies.Append("XSRF-TOKEN", tokens.RequestToken, new CookieOptions { HttpOnly = false, Secure = true });
		}

		if (context.Request.Path.HasValue && null != angularRoutes.FirstOrDefault(
			(ar) => context.Request.Path.Value.StartsWith(ar, StringComparison.OrdinalIgnoreCase)))
		{
			context.Request.Path = new PathString("/");
		}

		await next();
	});

	app.UseDefaultFiles();
	app.UseStaticFiles();

	//Registered after static files, to set headers for dynamic content.
	app.UseXfo(xfo => xfo.Deny());

	// Register this earlier if there's middleware that might redirect.
	// The IdentityServer4 port needs to be added here. 
	// If the IdentityServer4 runs on a different server, this configuration needs to be changed.
	app.UseRedirectValidation(t => t.AllowSameHostRedirectsToHttps(44348)); 

	app.UseXXssProtection(options => options.EnabledWithBlockMode());

	app.UseMvc(routes =>
	{
		routes.MapRoute(
			name: "default",
			template: "{controller=Home}/{action=Index}/{id?}");
	});  
}

Links:

https://www.scottbrady91.com/OpenID-Connect/OpenID-Connect-Flows

https://docs.nwebsec.com/en/latest/index.html

https://www.nwebsec.com/

https://github.com/NWebsec/NWebsec

https://content-security-policy.com/

https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy

https://scotthelme.co.uk/a-new-security-header-referrer-policy/

https://developer.mozilla.org/de/docs/Web/HTTP/Headers/X-Frame-Options

https://www.owasp.org/index.php/HTTP_Strict_Transport_Security_Cheat_Sheet

https://gun.io/blog/tofu-web-security/

https://en.wikipedia.org/wiki/Trust_on_first_use

http://www.dotnetnoob.com/2013/07/ramping-up-aspnet-session-security.html

http://openid.net/specs/openid-connect-core-1_0.html

https://www.ssllabs.com/



Dominick Baier: Financial APIs and IdentityServer

Right now there is quite some movement in the financial sector towards APIs and “collaboration” scenarios. The OpenID Foundation started a dedicated working group on securing Financial APIs (FAPIs) and the upcoming Revised Payment Service EU Directive (PSD2 – official document, vendor-based article) will bring quite some change to how technology is used at banks as well as to banking itself.

Googling for PSD2 shows quite a lot of ads and sponsored search results, which tells me that there is money to be made (pun intended).

We have a couple of customers that asked me about FAPIs and how IdentityServer can help them in this new world. In short, the answer is that both FAPIs in the OIDF sense and PSD2 are based on tokens and are either inspired by OpenID Connect/OAuth 2 or even tightly coupled with them. So moving to these technologies is definitely the first step.

The purpose of the OIDF “Financial API Part 1: Read-only API security profile” is to select a subset of the possible OpenID Connect options for clients and providers that have suitable security for the financial sector. Let’s have a look at some of those for OIDC providers (edited):

  • shall support both public and confidential clients;
  • shall authenticate the confidential client at the Token Endpoint using one of the following methods:
    • TLS mutual authentication [TLSM];
    • JWS Client Assertion using the client_secret or a private key as specified in section 9 of [OIDC];
  • shall require a key of size 2048 bits or larger if RSA algorithms are used for the client authentication;
  • shall require a key of size 160 bits or larger if elliptic curve algorithms are used for the client authentication;
  • shall support PKCE [RFC7636]
  • shall require Redirect URIs to be pre-registered;
  • shall require the redirect_uri parameter in the authorization request;
  • shall require the value of redirect_uri to exactly match one of the pre-registered redirect URIs;
  • shall require user authentication at LoA 2 as defined in [X.1254] or more;
  • shall require explicit consent by the user to authorize the requested scope if it has not been previously authorized;
  • shall return the token response as defined in 4.1.4 of [RFC6749];
  • shall return the list of allowed scopes with the issued access token;
  • shall provide opaque non-guessable access tokens with a minimum of 128 bits as defined in section 5.1.4.2.2 of [RFC6819].
  • should provide a mechanism for the end-user to revoke access tokens and refresh tokens granted to a Client as in 16.18 of [OIDC].
  • shall support the authentication request as in Section 3.1.2.1 of [OIDC];
  • shall issue an ID Token in the token response when openid was included in the requested scope as in Section 3.1.3.3 of [OIDC] with its sub value corresponding to the authenticated user and optional acr value in ID Token.

So to summarize, these are mostly best practices for implementing OIDC and OAuth 2 - just formalized. I am sure there will also be a certification process around that at some point.

Interesting to note is the requirement for PKCE and the removal of plain client secrets in favour of mutual TLS and client JWT assertions. IdentityServer supports all of the above requirements.
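To illustrate, several of those requirements map directly onto IdentityServer4's client configuration - this is just a sketch of the relevant knobs, not a complete FAPI-compliant setup:

new Client
{
    ClientId = "fapi.client",
    AllowedGrantTypes = GrantTypes.Code,

    // PKCE is required by the profile
    RequirePkce = true,
    AllowPlainTextPkce = false,

    // Redirect URIs must be pre-registered; IdentityServer matches
    // the redirect_uri parameter exactly against this list
    RedirectUris = { "https://client.example.com/signin-oidc" },

    AllowedScopes = { "openid", "profile" }
};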

In contrast, the “Read and Write Profile” (currently a working draft) steps up security significantly by demanding proof of possession tokens via token binding, requiring signed authentication requests and encrypted identity tokens, and limiting the authentication flow to hybrid only. The current list from the draft:

  • shall require the request or request_uri parameter to be passed as a JWS signed JWT as in clause 6 of OIDC;
  • shall require the response_type values code id_token or code id_token token;
  • shall return ID Token as a detached signature to the authorization response;
  • shall include state hash, s_hash, in the ID Token to protect the state value;
  • shall only issue holder of key authorization code, access token, and refresh token for write operations;
  • shall support OAUTB or MTLS as a holder of key mechanism;
  • shall support user authentication at LoA 3 or greater as defined in X.1254;
  • shall support signed and encrypted ID Tokens

Both profiles also have increased security requirements for clients – which is subject of a future post.

In short – exciting times ahead and we are constantly improving IdentityServer to make it ready for these new scenarios. Feel free to get in touch if you are interested.


Filed under: .NET Security, ASP.NET Core, IdentityServer, OAuth, OpenID Connect, Uncategorized, WebAPI


Damien Bowden: Using Angular in an ASP.NET Core View with Webpack

This article shows how Angular can be run inside an ASP.NET Core MVC view using Webpack to build the Angular application. By using Webpack, the Angular application can be built using the AOT and Angular lazy loading features and also profit from the advantages of using a server side rendered view. If you prefer to separate the SPA and the server into 2 applications, use Angular CLI or a similar template.

Code: https://github.com/damienbod/AspNetCoreMvcAngular

Blogs in this Series

The application was created using the .NET Core ASP.NET Core application template in Visual Studio 2017. A package.json npm file was added to the project. The file contains the frontend build scripts, the npm packages required to build the application using Webpack, and the required Angular packages.

{
  "name": "angular-webpack-visualstudio",
  "version": "1.0.0",
  "description": "An Angular VS template",
  "author": "",
  "license": "ISC",
    "repository": {
    "type": "git",
    "url": "https://github.com/damienbod/Angular2WebpackVisualStudio.git"
  },
  "scripts": {
    "ngc": "ngc -p ./tsconfig-aot.json",
    "webpack-dev": "set NODE_ENV=development && webpack",
    "webpack-production": "set NODE_ENV=production && webpack",
    "build-dev": "npm run webpack-dev",
    "build-production": "npm run ngc && npm run webpack-production",
    "watch-webpack-dev": "set NODE_ENV=development && webpack --watch --color",
    "watch-webpack-production": "npm run build-production --watch --color",
    "publish-for-iis": "npm run build-production && dotnet publish -c Release",
    "test": "karma start"
  },
  "dependencies": {
    "@angular/common": "4.2.2",
    "@angular/compiler": "4.2.2",
    "@angular/compiler-cli": "4.2.2",
    "@angular/platform-server": "4.2.2",
    "@angular/core": "4.2.2",
    "@angular/forms": "4.2.2",
    "@angular/http": "4.2.2",
    "@angular/platform-browser": "4.2.2",
    "@angular/platform-browser-dynamic": "4.2.2",
    "@angular/router": "4.2.2",
    "@angular/upgrade": "4.2.2",
    "@angular/animations": "4.2.2",
    "angular-in-memory-web-api": "0.3.1",
    "core-js": "2.4.1",
    "reflect-metadata": "0.1.10",
    "rxjs": "5.3.0",
    "zone.js": "0.8.8",
    "bootstrap": "^3.3.7",
    "ie-shim": "~0.1.0"
  },
  "devDependencies": {
    "@types/node": "7.0.13",
    "@types/jasmine": "2.5.47",
    "angular2-template-loader": "0.6.2",
    "angular-router-loader": "^0.6.0",
    "awesome-typescript-loader": "3.1.2",
    "clean-webpack-plugin": "^0.1.16",
    "concurrently": "^3.4.0",
    "copy-webpack-plugin": "^4.0.1",
    "css-loader": "^0.28.0",
    "file-loader": "^0.11.1",
    "html-webpack-plugin": "^2.28.0",
    "jquery": "^3.2.1",
    "json-loader": "^0.5.4",
    "node-sass": "^4.5.2",
    "raw-loader": "^0.5.1",
    "rimraf": "^2.6.1",
    "sass-loader": "^6.0.3",
    "source-map-loader": "^0.2.1",
    "style-loader": "^0.16.1",
    "ts-helpers": "^1.1.2",
    "tslint": "^5.1.0",
    "tslint-loader": "^3.5.2",
    "typescript": "2.3.2",
    "url-loader": "^0.5.8",
    "webpack": "^2.4.1",
    "webpack-dev-server": "2.4.2",
    "jasmine-core": "2.5.2",
    "karma": "1.6.0",
    "karma-chrome-launcher": "2.0.0",
    "karma-jasmine": "1.1.0",
    "karma-sourcemap-loader": "0.3.7",
    "karma-spec-reporter": "0.0.31",
    "karma-webpack": "2.0.3"
  },
  "-vs-binding": {
    "ProjectOpened": [
      "watch-webpack-dev"
    ]
  }
}

The Angular application is added to the angularApp folder. This frontend app implements a default module and also a second about module, which is lazy loaded when required (when the About button is clicked). See Angular Lazy Loading with Webpack 2 for further details.

The _Layout.cshtml MVC view is also added here as a template. This is built into the Views folder of the MVC application.

The webpack.prod.js uses all the Angular project files and builds them into pre-compiled AOT bundles, and also a separate bundle for the about module which is lazy loaded. Webpack adds the built bundles to the _Layout.cshtml template and copies this to the Views/Shared/_Layout.cshtml file.

var path = require('path');

var webpack = require('webpack');

var HtmlWebpackPlugin = require('html-webpack-plugin');
var CopyWebpackPlugin = require('copy-webpack-plugin');
var CleanWebpackPlugin = require('clean-webpack-plugin');
var helpers = require('./webpack.helpers');

console.log('@@@@@@@@@ USING PRODUCTION @@@@@@@@@@@@@@@');

module.exports = {

    entry: {
        'vendor': './angularApp/vendor.ts',
        'polyfills': './angularApp/polyfills.ts',
        'app': './angularApp/main-aot.ts' // AoT compilation
    },

    output: {
        path: __dirname + '/wwwroot/',
        filename: 'dist/[name].[hash].bundle.js',
        chunkFilename: 'dist/[id].[hash].chunk.js',
        publicPath: ''
    },

    resolve: {
        extensions: ['.ts', '.js', '.json', '.css', '.scss', '.html']
    },

    devServer: {
        historyApiFallback: true,
        stats: 'minimal',
        outputPath: path.join(__dirname, 'wwwroot/')
    },

    module: {
        rules: [
            {
                test: /\.ts$/,
                loaders: [
                    'awesome-typescript-loader',
                    'angular-router-loader?aot=true&genDir=aot/'
                ]
            },
            {
                test: /\.(png|jpg|gif|woff|woff2|ttf|svg|eot)$/,
                loader: 'file-loader?name=assets/[name]-[hash:6].[ext]'
            },
            {
                test: /favicon.ico$/,
                loader: 'file-loader?name=/[name].[ext]'
            },
            {
                test: /\.css$/,
                loader: 'style-loader!css-loader'
            },
            {
                test: /\.scss$/,
                exclude: /node_modules/,
                loaders: ['style-loader', 'css-loader', 'sass-loader']
            },
            {
                test: /\.html$/,
                loader: 'raw-loader'
            }
        ],
        exprContextCritical: false
    },

    plugins: [
        new CleanWebpackPlugin(
            [
                './wwwroot/dist',
                './wwwroot/assets'
            ]
        ),
        new webpack.NoEmitOnErrorsPlugin(),
        new webpack.optimize.UglifyJsPlugin({
            compress: {
                warnings: false
            },
            output: {
                comments: false
            },
            sourceMap: false
        }),
        new webpack.optimize.CommonsChunkPlugin(
            {
                name: ['vendor', 'polyfills']
            }),

        new HtmlWebpackPlugin({
            filename: '../Views/Shared/_Layout.cshtml',
            inject: 'body',
            template: 'angularApp/_Layout.cshtml'
        }),

        new CopyWebpackPlugin([
            { from: './angularApp/images/*.*', to: 'assets/', flatten: true }
        ])
    ]
};

Startup.cs is configured to load the configuration and middleware for the application, using client or server routing as required.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using AspNetCoreMvcAngular.Repositories.Things;
using Microsoft.AspNetCore.Http;

namespace AspNetCoreMvcAngular
{
    public class Startup
    {
        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        public IConfigurationRoot Configuration { get; }

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddCors(options =>
            {
                options.AddPolicy("AllowAllOrigins",
                    builder =>
                    {
                        builder
                            .AllowAnyOrigin()
                            .AllowAnyHeader()
                            .AllowAnyMethod();
                    });
            });

            services.AddSingleton<IThingsRepository, ThingsRepository>();

            services.AddMvc();
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole(Configuration.GetSection("Logging"));
            loggerFactory.AddDebug();

            var angularRoutes = new[] {
                 "/default",
                 "/about"
             };

            app.Use(async (context, next) =>
            {
                if (context.Request.Path.HasValue && null != angularRoutes.FirstOrDefault(
                    (ar) => context.Request.Path.Value.StartsWith(ar, StringComparison.OrdinalIgnoreCase)))
                {
                    context.Request.Path = new PathString("/");
                }

                await next();
            });

            app.UseCors("AllowAllOrigins");

            app.UseDefaultFiles();
            app.UseStaticFiles();

            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
                app.UseBrowserLink();
            }
            else
            {
                app.UseExceptionHandler("/Home/Error");
            }

            app.UseStaticFiles();

            app.UseMvc(routes =>
            {
                routes.MapRoute(
                    name: "default",
                    template: "{controller=Home}/{action=Index}/{id?}");
            });
        }
    }
}

The application can be built and run using the command line. The client application needs to be built before you can deploy or run!

> npm install
> npm run build-production
> dotnet restore
> dotnet run

You can also build inside Visual Studio 2017 using the Task Runner Explorer. If building inside Visual Studio 2017, you need to configure the NodeJS path correctly to use the right version.

Now you have the best of both worlds in the UI.

Note:
You could also use Microsoft ASP.NET Core JavaScript Services, which supports server side pre-rendering but not client side lazy loading. If you're using Microsoft ASP.NET Core JavaScript Services, configure the application to use AOT builds for the Angular template.

Links:

Angular Templates, Seeds, Starter Kits

https://github.com/damienbod/AngularWebpackVisualStudio

https://damienbod.com/2016/06/12/asp-net-core-angular2-with-webpack-and-visual-studio/

https://github.com/aspnet/JavaScriptServices



Dominick Baier: dotnet new Templates for IdentityServer4

The dotnet CLI includes a templating engine that makes it pretty straightforward to create your own project templates (see this blog post for a good intro).

This new repo is the home for all IdentityServer4 templates to come – right now they are pretty basic, but good enough to get you started.

The repo includes three templates right now:

dotnet new is4

Creates a minimal IdentityServer4 project without a UI and just one API and one client.

dotnet new is4ui

Adds the quickstart UI to the current project (can be combined with is4)

dotnet new is4inmem

Adds a boilerplate IdentityServer with UI, test users and sample clients and resources

See the readme for installation instructions.


Filed under: .NET Security, ASP.NET Core, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: New in IdentityServer4: Events

Well – not really new – but redesigned.

IdentityServer4 has two diagnostics facilities – logging and events. While logging is more low level, “printf” style, events represent higher level information about certain logical operations in IdentityServer (think Windows security event log).

Events are structured data and include event IDs, success/failure information, activity IDs, IP addresses, categories and event specific details. This makes it easy to query and analyze them and extract useful information that can be used for further processing.

Events work great with event stores like ELK, Seq or Splunk.
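Events flow through the IEventSink abstraction, so forwarding them to such a store means plugging in your own sink. A minimal sketch that just serialises each event to the console:

using System;
using System.Threading.Tasks;
using IdentityServer4.Events;
using IdentityServer4.Services;
using Newtonsoft.Json;

public class ConsoleEventSink : IEventSink
{
    // Called for every event IdentityServer raises; a real sink
    // would forward the event to ELK, Seq, Splunk etc.
    public Task PersistAsync(Event evt)
    {
        Console.WriteLine(JsonConvert.SerializeObject(evt));
        return Task.CompletedTask;
    }
}

// registration, e.g.: services.AddTransient<IEventSink, ConsoleEventSink>();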

Find more details in our docs.


Filed under: ASP.NET Core, IdentityServer, OAuth, OpenID Connect, Uncategorized, WebAPI


Dominick Baier: NDC London 2017

As always – NDC was a very good conference. Brock and I did a workshop, two talks and an interview. Here are the relevant links:

Check our website for more training dates.


Filed under: .NET Security, ASP.NET, IdentityModel, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: IdentityModel.OidcClient v2 & the OpenID RP Certification

A couple of weeks ago I started re-writing (and re-designing) my OpenID Connect & OAuth 2 client library for native applications. The library follows the guidance from the OpenID Connect and OAuth 2.0 for Native Applications specification.

Main features are:

  • Support for OpenID Connect authorization code and hybrid flow
  • Support for PKCE
  • NetStandard 1.4 library, which makes it compatible with x-plat .NET Core, desktop .NET, Xamarin iOS & Android (and UWP soon)
  • Configurable policy to lock down security requirements (e.g. requiring at_hash or c_hash, policies around discovery etc.)
  • Either stand-alone mode (request generation and response processing) or support for pluggable (system) browser implementations - see the sketch below
  • Support for pluggable logging via .NET ILogger
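To give a feel for the stand-alone mode, here's a rough usage sketch - the client id and redirect URI are illustrative:

using System.Threading.Tasks;
using IdentityModel.OidcClient;

public class LoginSketch
{
    public static async Task RunAsync(string responseData)
    {
        var options = new OidcClientOptions
        {
            Authority = "https://demo.identityserver.io",
            ClientId = "native.client",       // illustrative values
            RedirectUri = "myapp://callback",
            Scope = "openid profile"
        };

        var client = new OidcClient(options);

        // Stand-alone mode: generate the authorize request yourself...
        var state = await client.PrepareLoginAsync();
        // ...launch state.StartUrl in a (system) browser, capture the
        // callback, then hand the raw response back for processing
        var result = await client.ProcessResponseAsync(responseData, state);
    }
}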

In addition, starting with v2, OidcClient is also now certified by the OpenID Foundation for the basic and config profile.

It also passes all conformance tests for the code id_token grant type (hybrid flow) – but since I don’t support the other hybrid flow combinations (e.g. code token or code id_token token), I couldn’t certify for the full hybrid profile.

For maximum transparency, I checked in my conformance test runner along with the source code. Feel free to try/verify yourself.

The latest version of OidcClient is the dalwhinnie release (courtesy of my whisky semver scheme). Source code is here.

I am waiting a couple more days for feedback - and then I will release the final 2.0.0 version. If you have some spare time, please give it a try (there's a console client included and some more samples here - use the v2 branch for the time being). Thanks!


Filed under: .NET Security, IdentityModel, OAuth, OpenID Connect, WebAPI


Dominick Baier: Platforms where you can run IdentityServer4

There is some confusion about where, and on which platform/OS you can run IdentityServer4 – or more generally speaking: ASP.NET Core.

IdentityServer4 is ASP.NET Core middleware – and ASP.NET Core (despite its name) runs on the full .NET Framework 4.5.x and upwards or .NET Core.

If you are using the full .NET Framework you are tied to Windows – but have the advantage of using a platform that you (and your devs, customers, support staff etc) already know well. It is just a .NET based web app at this point.

If you are using .NET Core, you get the benefits of the new stack including side-by-side versioning and cross-platform. But there is a learning curve involved getting to know .NET Core and its tooling.


Filed under: .NET Security, ASP.NET, IdentityServer, OpenID Connect, WebAPI


Henrik F. Nielsen: ASP.NET WebHooks V1 RTM (Link)

ASP.NET WebHooks V1 RTM was announced a little while back. WebHooks provide a simple pub/sub model for wiring together Web APIs and services with your code. A WebHook can be used to get notified when a file has changed in Dropbox, a code change has been committed to GitHub, a payment has been initiated in PayPal, a card has been created in Trello, and much more. When subscribing, you provide a callback URI where you want to be notified. When an event occurs, an HTTP POST request is sent to your callback URI with information about what happened so that your Web app can act accordingly. WebHooks happen without polling and with no need to hold open a network connection while waiting for notifications.

Microsoft ASP.NET WebHooks makes it easier to both send and receive WebHooks as part of your ASP.NET application.

In addition to hosting your own WebHook server, ASP.NET WebHooks are part of Azure Functions where you can process WebHooks without hosting or managing your own server! You can even go further and host an Azure Bot Service using Microsoft Bot Framework for writing cool bots talking to your customers!

The WebHook code targets ASP.NET Web API 2 and ASP.NET MVC 5, and is available as Open Source on GitHub, and as Nuget packages. For feedback, fixes, and suggestions, you can use GitHub, StackOverflow using the tag asp.net-webhooks, or send me a tweet.

For the full announcement, please see the blog Announcing Microsoft ASP.NET WebHooks V1 RTM.

Have fun!

Henrik


Dominick Baier: Bootstrapping OpenID Connect: Discovery

OpenID Connect clients and APIs need certain configuration values to initiate the various protocol requests and to validate identity and access tokens. You can either hard-code these values (e.g. the URL of the authorize and token endpoints, key material etc.) – or get those values dynamically using discovery.

Using discovery has advantages in case one of the needed values changes over time. This will definitely be the case for the key material you use to sign your tokens. In that scenario you want your token consumers to be able to dynamically update their configuration without having to take them down or re-deploy.

The idea is simple: every OpenID Connect provider should offer a JSON document under the /.well-known/openid-configuration URL below its base address (often also called the authority). This document has information about the issuer name, endpoint URLs, key material and capabilities of the provider, e.g. which scopes or response types it supports.

Try https://demo.identityserver.io/.well-known/openid-configuration as an example.

Our IdentityModel library has a little helper class that allows loading and parsing a discovery document, e.g.:

var disco = await DiscoveryClient.GetAsync("https://demo.identityserver.io");
Console.WriteLine(disco.Json);

It also provides strongly typed accessors for most elements, e.g.:

Console.WriteLine(disco.TokenEndpoint);

..or you can access the elements by name:

Console.WriteLine(disco.Json.TryGetString("introspection_endpoint"));

It also gives you access to the key material and the various properties of the JSON encoded key set – e.g. iterating over the key ids:

foreach (var key in disco.KeySet.Keys)
{
    Console.WriteLine(key.Kid);
}

Discovery and security
As you can imagine, the discovery document is a nice target for an attacker. Being able to manipulate the endpoint URLs or the key material would ultimately result in a compromise of a client or an API.

As opposed to e.g. WS-Federation/WS-Trust metadata, the discovery document is not signed. Instead, OpenID Connect relies on transport security for authenticity and integrity of the configuration data.

Recently we’ve been involved in a penetration test against client libraries, and one technique the pen-testers used was compromising discovery. Based on their feedback, the following extra checks should be done when consuming a discovery document:

  • HTTPS must be used for the discovery endpoint and all protocol endpoints
  • The issuer name should match the authority specified when downloading the document (that’s actually a MUST in the discovery spec)
  • The protocol endpoints should be “beneath” the authority – and not on a different server or URL (this could be especially interesting for multi-tenant OPs)
  • A key set must be specified

Based on that feedback, we added a configurable validation policy to DiscoveryClient that defaults to the above recommendations. If for whatever reason (e.g. dev environments) you need to relax a setting, you can use the following code:

var client = new DiscoveryClient("http://dev.identityserver.internal");
client.Policy.RequireHttps = false;
 
var disco = await client.GetAsync();

Btw – you can always connect over HTTP to localhost and 127.0.0.1 (but this is also configurable).

Source code here, nuget here.


Filed under: OAuth, OpenID Connect, WebAPI


Dominick Baier: Trying IdentityServer4

We have a number of options for how you can experiment or get started with IdentityServer4.

Starting point
It all starts at https://identityserver.io – from here you can find all the links below, as well as our next workshop dates, consulting, production support etc.

Source code
You can find all the source code in our IdentityServer organization on github. Especially IdentityServer4 itself, the samples, and the access token validation middleware.

Nuget
Here’s a list of all our nugets – here’s IdentityServer4, here’s the validation middleware.

Documentation and tutorials
Documentation can be found here. Especially useful to get started are our tutorials.

Demo Site
We have a demo site at https://demo.identityserver.io that runs the latest version of IdentityServer4. We have also pre-configured a number of client types, e.g. hybrid and authorization code (with and without PKCE) as well as implicit and client credentials flow. You can use this site to try IdentityServer with your favourite OpenID Connect client library. There is also a test API that you can call with our access tokens.

Compatibility check
Here’s a repo that contains all permutations of IdentityServer3 and 4, Katana and ASP.NET Core Web APIs and JWTs and reference tokens. We use this test harness to ensure cross version compatibility. Feel free to try it yourself.

CI builds
Our CI feed can be found here.

HTH


Filed under: .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: IdentityServer4 1.0.0

It’s done.

Release notes here.

Nuget here.

Docs here.

I am off to holidays.

See you next year.


Filed under: .NET Security, ASP.NET, OAuth, OpenID Connect, WebAPI


Dominick Baier: IdentityServer4 is now OpenID Certified

As of today – IdentityServer4 is officially certified by the OpenID Foundation. Release of 1.0 will be this Friday!

More details here.


Filed under: .NET Security, OAuth, WebAPI


Dominick Baier: Identity vs Permissions

We often see people misusing IdentityServer as an authorization/permission management system. This is troublesome – here’s why.

IdentityServer (hence the name) is really good at providing a stable identity for your users across all applications in your system. And by identity I mean immutable identity (at least for the lifetime of the session) – typical examples would be a user id (aka the subject id), a name, department, email address, customer id etc…

IdentityServer is not so well suited for letting clients or APIs know what this user is allowed to do – e.g. create a customer record, delete a table, read a certain document etc…

And this is not inherently a weakness of IdentityServer – but IdentityServer is a token service, and it’s a fact that claims and especially tokens are not a particularly good medium for transporting such information. Here are a couple of reasons:

  • Claims are supposed to model the identity of a user, not permissions
  • Claims are typically simple strings – you often want something more sophisticated to model authorization information or permissions
  • Permissions of a user are often different depending on which client or API it is using. Putting them all into a single identity or access token is confusing and leads to problems. The same permission might even have a different meaning depending on who is consuming it
  • Permissions can change over the lifetime of a session, but the only way to get a new token is to make a roundtrip to the token service. This often requires some UI interaction, which is not preferable
  • Permissions and business logic often overlap – where do you want to draw the line?
  • The only party that knows exactly about the authorization requirements of the current operation is the actual code where it happens – the token service can only provide coarse grained information
  • You want to keep your tokens small. Browser URL length restrictions and bandwidth are often limiting factors
  • And last but not least – it is easy to add a claim to a token. It is very hard to remove one. You never know if somebody already took a hard dependency on it. Every single claim you add to a token should be scrutinized.

In other words – keep permissions and authorization data out of your tokens. Add the authorization information to your context once you get closer to the resource that actually needs the information. And even then, it is tempting to model permissions using claims (the Microsoft services and frameworks kind of push you into that direction) – keep in mind that a simple string is a very limiting data structure. Modern programming languages have much better constructs than that.
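In ASP.NET Core, for example, policy-based authorization is one way to express such rules in code, close to the resource. A minimal sketch with a hypothetical policy name and claim:

using Microsoft.AspNetCore.Authorization;

// in ConfigureServices - the policy is evaluated where the
// operation happens, rather than being baked into the token
services.AddAuthorization(options =>
{
    options.AddPolicy("CanDeleteTable", policy =>
        policy.RequireAssertion(context =>
            // combine identity claims with whatever local data you need
            context.User.HasClaim("department", "admin")));
});

// then: [Authorize(Policy = "CanDeleteTable")] on the controller or action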

What about roles?
That’s a very common question. Roles are a bit of a grey area between identity and authorization. My rule of thumb is that if a role is a fundamental part of the user identity that is of interest to every part of your system – and role membership does not change, or changes infrequently – it is a candidate for a claim in a token. Examples could be Customer vs Employee – or Patient vs Doctor vs Nurse.

Every other usage of roles – especially if the role membership would be different based on the client or API being used – is pure authorization data and should be avoided. If you realize that the number of roles of a user is high – or growing – avoid putting them into the token.

Conclusion
Design for a clean separation of identity and permissions (which is just a re-iteration of authentication vs authorization). Acquire authorization data as close as possible to the code that needs it – only there you can make an informed decision what you really need.

I also often get the question whether we have a similarly flexible solution for authorization as we have with IdentityServer for authentication – and the answer is, right now, no. But I have the feeling that 2017 will be our year to finally tackle the authorization problem. Stay tuned!


Filed under: .NET Security, IdentityServer, OAuth, OpenID Connect, WebAPI

