Andrew Lock: Running tests with dotnet xunit using Cake

Running tests with dotnet xunit using Cake

In this post I show how you can run tests using the xUnit .NET CLI Tool dotnet xunit when building projects using Cake. Cake includes first class support for running tests using dotnet test via the DotNetCoreTest alias, but if you want access to the additional configuration provided by the dotnet-xunit tool, you'll currently need to run the tool using DotNetCoreTool instead.

dotnet test vs dotnet xunit

Typically, .NET Core unit tests are run using the dotnet test command. This runs unit tests for a project regardless of which unit test framework was used - MSTest, NUnit, or xUnit. As long as the test framework has an appropriate adapter, the dotnet test command will hook into it, and provide a standard set of features.

However, the suggested approach to run .NET Core tests with xUnit is to use the dotnet-xunit framework tool. This provides more advanced access to xUnit settings, so you can set a whole variety of properties like how test names are listed, whether diagnostics are enabled, or parallelisation options for the test runs.

You can install the dotnet-xunit tool into a project by adding a DotNetCliToolReference element to your .csproj file. If you add the xunit.runner.visualstudio and Microsoft.NET.Test.Sdk packages too, then you'll still be able to run your tests using dotnet test and Visual Studio:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFrameworks>netcoreapp2.0</TargetFrameworks>
    <IsPackable>false</IsPackable>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.3.0" />
    <PackageReference Include="xunit" Version="2.3.0" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.3.0" />
    <DotNetCliToolReference Include="dotnet-xunit" Version="2.3.0" />
  </ItemGroup>

</Project>  

With these packages installed and restored, you'll be able to run your tests using either dotnet test or dotnet xunit, but if you use the latter option, you'll have a whole host of additional arguments available to you. To see all your options, run dotnet xunit --help. The following is just a small selection of the help screen:

> dotnet xunit --help
xUnit.net .NET CLI Console Runner  
Copyright (C) .NET Foundation.

usage: dotnet xunit [configFile] [options] [reporter] [resultFormat filename [...]]

Note: Configuration files must end in .json (for JSON) or .config (for XML)  
      XML configuration files are only supported on net4x frameworks

Valid options (all frameworks):  
  -framework name        : set the framework (default: all targeted frameworks)
  -configuration name    : set the build configuration (default: 'Debug')
  -nobuild               : do not build the test assembly before running
  -nologo                : do not show the copyright message
  -nocolor               : do not output results with colors
  -failskips             : convert skipped tests into failures
  -stoponfail            : stop on first test failure
  -parallel option       : set parallelization based on option
                         :   none        - turn off parallelization
                         :   collections - parallelize test collections
  -maxthreads count      : maximum thread count for collection parallelization
                         :   default   - run with default (1 thread per CPU thread)
                         :   unlimited - run with unbounded thread count
                         :   (number)  - limit task thread pool size to 'count'
  -wait                  : wait for input after completion
  -diagnostics           : enable diagnostics messages for all test assemblies
[Output truncated]

Running tests with Cake

Cake is my preferred build scripting system for .NET Core projects. In their own words:

Cake (C# Make) is a cross platform build automation system with a C# DSL to do things like compiling code, copy files/folders, running unit tests, compress files and build NuGet packages.

I use Cake to build my open source projects. A very simple Cake script consisting of a restore, build, and test might look something like the following:

var configuration = Argument("Configuration", "Release");  
var solution = "./test.sln";

// Run dotnet restore to restore all package references.
Task("Restore")  
    .Does(() =>
    {
        DotNetCoreRestore();
    });

Task("Build")  
    .IsDependentOn("Restore")
    .Does(() =>
    {
        DotNetCoreBuild(solution,
           new DotNetCoreBuildSettings()
                {
                    Configuration = configuration
                });
    });

Task("Test")  
    .IsDependentOn("Build")
    .Does(() =>
    {
        var projects = GetFiles("./test/**/*.csproj");
        foreach(var project in projects)
        {
            DotNetCoreTest(
                project.FullPath,
                new DotNetCoreTestSettings()
                {
                    Configuration = configuration,
                    NoBuild = true
                });
        }
    });

This will run a restore in the solution directory, build the solution, and then run dotnet test for every project in the test sub-directory. Each of the steps uses a Cake alias which calls the dotnet SDK commands:

  • DotNetCoreRestore - restores NuGet packages using dotnet restore
  • DotNetCoreBuild - builds the solution using dotnet build, using the settings provided in the DotNetCoreBuildSettings object
  • DotNetCoreTest - runs the tests in the project using dotnet test and the settings provided in the DotNetCoreTestSettings object.

Customizing the arguments passed to the dotnet tool

In the previous example, we ran tests with dotnet test and were able to set a number of additional options using the strongly typed DotNetCoreTestSettings object. If you want to pass additional options down to the dotnet test call, you can customise the arguments using the ArgumentCustomization property.

For example, dotnet build implicitly calls dotnet restore, even though we already restore the solution explicitly in the Restore task. You can skip this second restore by passing --no-restore to the dotnet build call using the ArgumentCustomization property:

Task("Build")  
    .IsDependentOn("Restore")
    .Does(() =>
    {
        DotNetCoreBuild(solution,
           new DotNetCoreBuildSettings()
                {
                    Configuration = configuration,
                    ArgumentCustomization = args => args.Append($"--no-restore")
                });
    });

With this approach, you can customise the arguments that are passed to the dotnet test command. However, you can't customise the command itself to call dotnet xunit. For that, you need a different Cake alias: DotNetCoreTool.

Running dotnet xunit tests with Cake

Using the strongly typed *Settings objects makes invoking many of the dotnet tools easy with Cake, but Cake doesn't include first-class support for all of them. The dotnet-xunit tool is one such tool.

If you need to run a dotnet tool that's not directly supported by a Cake alias, you can use the general purpose DotNetCoreTool alias. You can use this to execute any dotnet tool, by providing the tool name, and the command arguments.

For example, imagine you want to run dotnet xunit with diagnostics enabled, and stop on the first failure. If you were running the tool directly from the command line you'd use:

dotnet xunit -diagnostics -stoponfail  

In Cake, we can use the DotNetCoreTool alias and pass in the command line arguments manually. If we update the previous Cake "Test" target to use DotNetCoreTool we have:

Task("Test")  
    .IsDependentOn("Build")
    .Does(() =>
    {
        var projects = GetFiles("./test/**/*.csproj");
        foreach(var project in projects)
        {
            DotNetCoreTool(
                projectPath: project.FullPath, 
                command: "xunit", 
                arguments: $"-configuration {configuration} -diagnostics -stoponfail"
            );
        }
    });

The DotNetCoreTool isn't as convenient as being able to set strongly typed properties on a dedicated settings object. But it does give a great deal of flexibility, and effectively lets you drop down to the command line inside your Cake build script.

If you build your scripts using Cake, and need/want to use some of the extra features afforded by dotnet xunit, then DotNetCoreTool is currently the best approach, but it shouldn't be hard to create a wrapper alias for dotnet xunit that makes these arguments strongly typed. Assuming no one else has already done it and made this post obsolete, then I'll look at sending a PR as soon as I find the time!
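
In the meantime, here's a rough sketch of what such a wrapper could look like inside a Cake script. The XUnitToolSettings class and RunDotNetXunit helper are hypothetical names invented for illustration; only the DotNetCoreTool alias they call is real Cake functionality.

// Hypothetical settings class and helper - a sketch, not part of Cake
public class XUnitToolSettings
{
    public string Configuration { get; set; } = "Release";
    public bool Diagnostics { get; set; }
    public bool StopOnFail { get; set; }
}

void RunDotNetXunit(FilePath project, XUnitToolSettings settings)
{
    // Build up the dotnet-xunit argument string from the strongly typed settings
    var arguments = $"-configuration {settings.Configuration}"
        + (settings.Diagnostics ? " -diagnostics" : string.Empty)
        + (settings.StopOnFail ? " -stoponfail" : string.Empty);

    // DotNetCoreTool is the built-in Cake alias used earlier in this post
    DotNetCoreTool(project.FullPath, "xunit", arguments);
}

// Example usage inside the Test task:
// RunDotNetXunit(project, new XUnitToolSettings { Diagnostics = true, StopOnFail = true });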


Damien Bowden: Securing an Angular SignalR client using JWT tokens with ASP.NET Core and IdentityServer4

This post shows how an Angular SignalR client can send secure messages using JWT bearer tokens with an API and an STS server. The STS server is implemented using IdentityServer4 and the API is implemented using ASP.NET Core.

Code: https://github.com/damienbod/AspNetCoreAngularSignalRSecurity

SignalR and SPAs

At present there are three ways in which SignalR could be secured:

Comment from Damien Edwards:
The debate over HTTPS URLs (including query strings) is long and on-going. Yes, it’s not ideal to send sensitive data in the URL even when over HTTPS. But the fact remains that when using the browser WebSocket APIs there is no other way. You only have 3 options:

  • Use cookies
  • Send tokens in query string
  • Send tokens over the WebSocket itself after onconnect

A usable sample of the last would be interesting in my mind, but I’m not expecting it to be trivial.

For an SPA client, cookies are not an option and should not be used. It is unknown whether the third option will work, so at present the only way to do this is to send the access token in the query string over HTTPS. Sending tokens in the query string has its problems, which you will need to accept, and you should set up your deployment and logging to protect against the increased risks compared with sending the access token in the header.

Setup

The demo app is set up using three different projects: the API, which hosts the SignalR Hub and the APIs; the STS server, implemented using ASP.NET Core and IdentityServer4; and the client application, an Angular app hosted in ASP.NET Core.

The client is secured using the OpenID Connect Implicit Flow with the “id_token token” response type. The access token is then used to access the API, both for the SignalR messages and also for the API calls.
All three applications run using HTTPS.

Securing the SignalR Hub on the API

The SignalR Hub uses the Authorize attribute like any ASP.NET Core MVC controller. Policies and the scheme can be defined here. The Hub uses the Bearer authentication scheme.

using ApiServer.Providers;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.SignalR;
using System.Threading.Tasks;

namespace ApiServer.SignalRHubs
{
    [Authorize(AuthenticationSchemes = "Bearer")]
    public class NewsHub : Hub
    {
        private NewsStore _newsStore;

        public NewsHub(NewsStore newsStore)
        {
            _newsStore = newsStore;
        }

        ...
    }
}

The API project configures the API security in the ConfigureServices method of the Startup class. First, CORS is configured; in this example it is configured incorrectly, as it allows everything, whereas only the required URLs should be allowed. Then the TokenValidationParameters and the JwtSecurityTokenHandler options are configured. The NameClaimType is configured so that the Name property is set from the token in the HTTP context Identity. This claim is set and added to the access token on the STS server.

The AddAuthentication is added with the JwtBearer token options. This is configured to accept the token in the query string as well as in the header. If the request path matches one of the SignalR Hubs, the token is read from the query string and used to validate the request.

The AddAuthorization is added and the policies are defined as required. Then the SignalR middleware is added.

public void ConfigureServices(IServiceCollection services)
{
	var sqliteConnectionString = Configuration.GetConnectionString("SqliteConnectionString");
	var defaultConnection = Configuration.GetConnectionString("DefaultConnection");

	var cert = new X509Certificate2(Path.Combine(_env.ContentRootPath, "damienbodserver.pfx"), "");

	services.AddDbContext<DataEventRecordContext>(options =>
		options.UseSqlite(sqliteConnectionString)
	);

	// used for the new items which belong to the signalr hub
	services.AddDbContext<NewsContext>(options =>
		options.UseSqlite(
			defaultConnection
		), ServiceLifetime.Singleton
	);

	services.AddSingleton<IAuthorizationHandler, CorrectUserHandler>();
	services.AddSingleton<NewsStore>();

	var policy = new Microsoft.AspNetCore.Cors.Infrastructure.CorsPolicy();
	policy.Headers.Add("*");
	policy.Methods.Add("*");
	policy.Origins.Add("*");
	policy.SupportsCredentials = true;

	services.AddCors(x => x.AddPolicy("corsGlobalPolicy", policy));

	var guestPolicy = new AuthorizationPolicyBuilder()
		.RequireClaim("scope", "dataEventRecords")
		.Build();

	var tokenValidationParameters = new TokenValidationParameters()
	{
		ValidIssuer = "https://localhost:44318/",
		ValidAudience = "dataEventRecords",
		IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes("dataEventRecordsSecret")),
		NameClaimType = "name",
		RoleClaimType = "role", 
	};

	var jwtSecurityTokenHandler = new JwtSecurityTokenHandler
	{
		InboundClaimTypeMap = new Dictionary<string, string>()
	};

	services.AddAuthentication(IdentityServerAuthenticationDefaults.AuthenticationScheme)
	.AddJwtBearer(options =>
	{
		options.Authority = "https://localhost:44318/";
		options.Audience = "dataEventRecords";
		options.IncludeErrorDetails = true;
		options.SaveToken = true;
		options.SecurityTokenValidators.Clear();
		options.SecurityTokenValidators.Add(jwtSecurityTokenHandler);
		options.TokenValidationParameters = tokenValidationParameters;
		options.Events = new JwtBearerEvents
		{
			OnMessageReceived = context =>
			{
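				// SignalR hub requests (/loopy, /looney) send the access token
				// in the query string rather than in the Authorization header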
				if (context.Request.Path.Value.StartsWith("/loo") &&
					context.Request.Query.TryGetValue("token", out StringValues token)
				)
				{
					context.Token = token;
				}

				return Task.CompletedTask;
			},
			OnAuthenticationFailed = context =>
			{
				var te = context.Exception;
				return Task.CompletedTask;
			}
		};
	});

	services.AddAuthorization(options =>
	{
		options.AddPolicy("dataEventRecordsAdmin", policyAdmin =>
		{
			policyAdmin.RequireClaim("role", "dataEventRecords.admin");
		});
		options.AddPolicy("dataEventRecordsUser", policyUser =>
		{
			policyUser.RequireClaim("role", "dataEventRecords.user");
		});
		options.AddPolicy("dataEventRecords", policyUser =>
		{
			policyUser.RequireClaim("scope", "dataEventRecords");
		});
		options.AddPolicy("correctUser", policyCorrectUser =>
		{
			policyCorrectUser.Requirements.Add(new CorrectUserRequirement());
		});
	});

	services.AddSignalR();

	services.AddMvc(options =>
	{
	   //options.Filters.Add(new AuthorizeFilter(guestPolicy));
	}).AddJsonOptions(options =>
	{
		options.SerializerSettings.ContractResolver = new DefaultContractResolver();
	});

	services.AddScoped<IDataEventRecordRepository, DataEventRecordRepository>();
}

The Configure method in the Startup class of the API maps the SignalR Hubs and adds the authentication middleware.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	loggerFactory.AddConsole();
	loggerFactory.AddDebug();

	loggerFactory.AddSerilog();

	app.UseExceptionHandler("/Home/Error");
	app.UseCors("corsGlobalPolicy");
	app.UseStaticFiles();

	app.UseAuthentication();

	app.UseSignalR(routes =>
	{
		routes.MapHub<LoopyHub>("loopy");
		routes.MapHub<NewsHub>("looney");
	});

	app.UseMvc(routes =>
	{
		routes.MapRoute(
			name: "default",
			template: "{controller=Home}/{action=Index}/{id?}");
	});
}

Securing the SignalR client in Angular

The Angular SPA application is secured using the OIDC Implicit Flow. After a successful login, the access token can be used to access the Hub or the API. The Hub is initialized after the client has received an access token. The Hub connection is then set up using the same query string parameter (“token=…”) expected by the API server. Each message is then sent using the access token.

import 'rxjs/add/operator/map';
import { Subscription } from 'rxjs/Subscription';

import { HttpClient, HttpHeaders } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs/Observable';

import { HubConnection } from '@aspnet/signalr-client';
import { NewsItem } from './models/news-item';
import { Store } from '@ngrx/store';
import { NewsState } from './store/news.state';
import * as NewsActions from './store/news.action';
import { Configuration } from '../app.constants';
import { OidcSecurityService } from 'angular-auth-oidc-client';

@Injectable()
export class NewsService {

    private _hubConnection: HubConnection;
    private actionUrl: string;
    private headers: HttpHeaders;

    isAuthorizedSubscription: Subscription;
    isAuthorized: boolean;

    constructor(private http: HttpClient,
        private store: Store<any>,
        private configuration: Configuration,
        private oidcSecurityService: OidcSecurityService
    ) {
        this.actionUrl = `${this.configuration.Server}api/news/`;

        this.headers = new HttpHeaders();
        this.headers = this.headers.set('Content-Type', 'application/json');
        this.headers = this.headers.set('Accept', 'application/json');

        this.init();
    }

    send(newsItem: NewsItem): NewsItem {
        this._hubConnection.invoke('Send', newsItem);
        return newsItem;
    }

    joinGroup(group: string): void {
        this._hubConnection.invoke('JoinGroup', group);
    }

    leaveGroup(group: string): void {
        this._hubConnection.invoke('LeaveGroup', group);
    }

    getAllGroups(): Observable<string[]> {

        const token = this.oidcSecurityService.getToken();
        if (token !== '') {
            const tokenValue = 'Bearer ' + token;
            this.headers = this.headers.append('Authorization', tokenValue);
        }

        return this.http.get<string[]>(this.actionUrl, { headers: this.headers });
    }

    private init() {
        this.isAuthorizedSubscription = this.oidcSecurityService.getIsAuthorized().subscribe(
            (isAuthorized: boolean) => {
                this.isAuthorized = isAuthorized;
                if (this.isAuthorized) {
                    this.initHub();
                }
            });
        console.log('IsAuthorized:' + this.isAuthorized);
    }

    private initHub() {
        console.log('initHub');
        const token = this.oidcSecurityService.getToken();
        let tokenValue = '';
        if (token !== '') {
            tokenValue = '?token=' + token;
        }

        this._hubConnection = new HubConnection(`${this.configuration.Server}looney${tokenValue}`);

        this._hubConnection.on('Send', (newsItem: NewsItem) => {
            this.store.dispatch(new NewsActions.ReceivedItemAction(newsItem));
        });

        this._hubConnection.on('JoinGroup', (data: string) => {
            this.store.dispatch(new NewsActions.ReceivedGroupJoinedAction(data));
        });

        this._hubConnection.on('LeaveGroup', (data: string) => {
            this.store.dispatch(new NewsActions.ReceivedGroupLeftAction(data));
        });

        this._hubConnection.on('History', (newsItems: NewsItem[]) => {
            this.store.dispatch(new NewsActions.ReceivedGroupHistoryAction(newsItems));
        });

        this._hubConnection.start()
            .then(() => {
                console.log('Hub connection started')
            })
            .catch(err => {
                console.log('Error while establishing connection')
            });
    }

}

Or here’s a simpler example with everything in the Angular component.

import { Component, OnInit, OnDestroy } from '@angular/core';
import { Subscription } from 'rxjs/Subscription';
import { Observable } from 'rxjs/Observable';
import { HubConnection } from '@aspnet/signalr-client';
import { Configuration } from '../../app.constants';
import { OidcSecurityService } from 'angular-auth-oidc-client';

@Component({
    selector: 'app-home-component',
    templateUrl: './home.component.html'
})

export class HomeComponent implements OnInit, OnDestroy {
    private _hubConnection: HubConnection;
    async: any;
    message = '';
    messages: string[] = [];

    isAuthorizedSubscription: Subscription;
    isAuthorized: boolean;

    constructor(
        private configuration: Configuration,
        private oidcSecurityService: OidcSecurityService
    ) {
    }

    ngOnInit() {
        this.isAuthorizedSubscription = this.oidcSecurityService.getIsAuthorized().subscribe(
            (isAuthorized: boolean) => {
                this.isAuthorized = isAuthorized;
                if (this.isAuthorized) {
                    this.init();
                }
            });
        console.log('IsAuthorized:' + this.isAuthorized);
    }

    ngOnDestroy(): void {
        this.isAuthorizedSubscription.unsubscribe();
    }

    sendMessage(): void {
        const data = `Sent: ${this.message}`;

        this._hubConnection.invoke('Send', data);
        this.messages.push(data);
    }

    private init() {

        const token = this.oidcSecurityService.getToken();
        let tokenValue = '';
        if (token !== '') {
            tokenValue = '?token=' + token;
        }

        this._hubConnection = new HubConnection(`${this.configuration.Server}loopy${tokenValue}`);

        this._hubConnection.on('Send', (data: any) => {
            const received = `Received: ${data}`;
            this.messages.push(received);
        });

        this._hubConnection.start()
            .then(() => {
                console.log('Hub connection started')
            })
            .catch(err => {
                console.log('Error while establishing connection')
            });
    }
}

Logs on the server

As the application sends the token in the query string, it can be accessed on the server using standard logging. The three applications are configured to use Serilog, and the logs are saved to a Seq server and also to log files. If you open the Seq server, the access_token can be viewed and copied.
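
As a rough idea of the kind of Serilog setup involved (the exact configuration in the sample repository may differ), a logger writing to both a Seq server and rolling log files looks something like this; the Seq URL and file path are placeholders:

using Serilog;

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Information()
    // Requires the Serilog.Sinks.Seq and Serilog.Sinks.RollingFile packages
    .WriteTo.Seq("http://localhost:5341")
    .WriteTo.RollingFile("Logs/log-{Date}.txt")
    .CreateLogger();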

You can then use jwt.io to view the details of the token.
https://jwt.io/

Or you can use Postman to make API calls for which you might not otherwise have the authorization or authentication rights. The token will work as long as it is valid. All you need to do is add it to the Authorization header, and you have the same rights as the identity for which the access token was created.
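
The same replay can be done from code. Here is a minimal sketch using HttpClient; the API URL and the token value are placeholders standing in for values copied from the logs:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class TokenReplay
{
    public static async Task Run()
    {
        var accessToken = "eyJhbGciOiJSUzI1NiIsIm..."; // value copied from the logs
        using (var client = new HttpClient())
        {
            // The token goes into the Authorization header, exactly as the SPA would send it
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            var response = await client.GetAsync("https://localhost:44390/api/news");
            Console.WriteLine(response.StatusCode);
        }
    }
}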

You can also view the token in the log files:

2017-10-15 17:11:33.790 +02:00 [Information] Request starting HTTP/1.1 OPTIONS http://localhost:44390/looney?token=eyJhbGciOiJSUzI1NiIsIm... 
2017-10-15 17:11:33.795 +02:00 [Information] Request starting HTTP/1.1 OPTIONS http://localhost:44390/loopy?token=eyJhbGciOiJSUzI1NiIsImtpZCI6IjA2RDNFNDZFO...
2017-10-15 17:11:33.803 +02:00 [Information] Policy execution successful.

Because of this, you need to ensure that the deployment admins, developers and DevOps people who can read the logs can be trusted, or restrict access to the production logs. This also has implications for GDPR.

Links

https://github.com/aspnet/SignalR

https://github.com/aspnet/SignalR#readme

https://github.com/aspnet/SignalR/issues/888

https://github.com/ngrx

https://www.npmjs.com/package/@aspnet/signalr-client

https://dotnet.myget.org/F/aspnetcore-ci-dev/api/v3/index.json

https://dotnet.myget.org/F/aspnetcore-ci-dev/npm/

https://dotnet.myget.org/feed/aspnetcore-ci-dev/package/npm/@aspnet/signalr-client

https://www.npmjs.com/package/msgpack5



Dominick Baier: SAML2p Identity Provider Support for IdentityServer4

One very common feature request is support for acting as a SAML2p identity provider.

This is not a trivial task, but our friends at Rock Solid Knowledge were working hard, and now published a beta version. Give it a try!

 


Filed under: .NET Security, IdentityServer, OpenID Connect, WebAPI


Andrew Lock: Debugging JWT validation problems between an OWIN app and IdentityServer4

Debugging JWT validation problems between an OWIN app and IdentityServer4

This post describes an issue I ran into at work recently, as part of an effort to migrate our identity application from IdentityServer3 to IdentityServer4. One of our services was unable to validate the JWT sent as a bearer token, even though other services were able to validate it. I'll provide some background to the migration, a more detailed description of the problem, and the solution.

tl;dr; The problematic service was attempting to call a "validation endpoint" to validate the JWT, instead of using local validation. This works fine with IdentityServer3, but the custom access token validation endpoint has been removed in IdentityServer4. Forcing local validation by setting ValidationMode = ValidationMode.Local when adding the middleware with app.UseIdentityServerBearerTokenAuthentication() fixed the issue.

An overview of the system

The system I'm working on, as is common these days, consists of a variety of different services working together. On the front end, we have a JavaScript SPA. This makes HTTP calls to several different server-side apps to fetch the data that it displays to the user. In this case, the services are ASP.NET (not Core) apps using OWIN to provide a Web API. These services may in turn make requests to other back-end APIs, but we'll ignore those for now. In this post I'm going to focus on two services, called the AdminAPI service and the DataAPI service:

[Diagram: the JavaScript SPA client calling the AdminAPI and DataAPI services]

For authentication, we have an application running IdentityServer3 which is again non-core ASP.NET (sidenote: I still haven't worked out what we're calling non-core ASP.NET these days!). Users authenticate with the IdentityServer3 app, which returns a JSON Web Token (JWT). The client app sends the JWT in the Authorization header when making requests to the AdminAPI and the DataAPI.

[Diagram: the client obtains a JWT from IdentityServer and sends it in the Authorization header to the AdminAPI and DataAPI]

Before the AdminAPI or the DataAPI accept the JWT sent in the Authorization header, they must first validate the JWT. This ensures the token hasn't been tampered with and can be trusted.

A brief background on JWT tokens and Identity

In order to understand the problem, we need to have an understanding of how JWT tokens work, so I'll provide a brief outline here. Feel free to skip this section if this is old news, or check out https://jwt.io/introduction for a more in-depth description.

A JWT provides a mechanism for the IdentityServer app to transfer information to another app (e.g. the AdminAPI or DataAPI) through an insecure medium (the JavaScript app in the browser) in such a way that the data can't be tampered with. It's important they can't be tampered with as they're often used for authentication and authorisation - you don't want users to be able to impersonate other users, or grant themselves additional privileges.

A JWT consists of three parts:

  • Header - A description of the type of token (JWT) and the algorithms used to secure the token
  • Payload - The information to be transferred. This typically includes a set of claims, which describe the entity (i.e. the user), and potentially other details such as the expiry date of the token, who issued it etc.
  • Signature - A cryptographic signature computed over the header and the payload. If either the header or payload is modified, the signature will no longer match, so the JWT can be discarded as fraudulent.

In our case, the signature for the JWT is created with an X.509 certificate using asymmetric cryptography. The signature is generated using the private key of the certificate, which is known only to IdentityServer and is never exposed. However, anyone can validate the signature using the public certificate, which IdentityServer makes available at well-known URLs.
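
To make that concrete, the following is a minimal sketch of local validation using the System.IdentityModel.Tokens.Jwt library, assuming the public certificate has already been downloaded. The issuer and audience values are examples only, not the actual configuration of either API:

using System.IdentityModel.Tokens.Jwt;
using System.Security.Cryptography.X509Certificates;
using Microsoft.IdentityModel.Tokens;

public static class JwtValidationSketch
{
    public static bool TryValidate(string jwt, X509Certificate2 publicCert)
    {
        var parameters = new TokenValidationParameters
        {
            ValidIssuer = "https://www.test.domain/identity",
            ValidAudience = "DataAPI",
            // Only the public key is needed to check the signature
            IssuerSigningKey = new X509SecurityKey(publicCert)
        };

        try
        {
            // Throws if the signature, issuer, audience or lifetime checks fail
            new JwtSecurityTokenHandler().ValidateToken(jwt, parameters, out SecurityToken _);
            return true;
        }
        catch (SecurityTokenException)
        {
            return false;
        }
    }
}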

Upgrading IdentityServer

I had been tasked with porting the existing ASP.NET IdentityServer3 app to an ASP.NET Core IdentityServer4 app. IdentityServer3 and IdentityServer4 both use the OpenID Connect and OAuth 2 protocols, so from the point of view of the consumers of the app, upgrading IdentityServer in this way should be seamless.

The good news is that for the most part, the upgrade really was painless. IdentityServer4 has a few different abstractions compared to IdentityServer3, so you may have to tweak some things and implement some different interfaces, but the changes are relatively minor and make sense. Scott Brady has a great post on IdentityServer 4, or you could watch Dominick Baier explain some of the changes himself on Channel 9.

Trouble in paradise

With the IdentityServer app ported to .NET Core, all that remained was to test the integration with the AdminAPI and DataAPI. Initial impressions were very positive - the client app would authenticate with the IdentityServer app to retrieve a JWT, and would send this in requests to the AdminAPI. The AdminAPI validated the signature in the JWT token, and used the claims it contained to execute the action. All looking good.

The problem was the DataAPI. When the client app navigated to a given page, it would send a request to the DataAPI with the same JWT as it sent to the AdminAPI. However, the DataAPI failed to validate the signature.

How could that be? Both APIs were configured with the same IdentityServer as the authority, and the same JWT was being sent to both APIs. I tweaked the DataAPI configuration to make sure it was identical to the AdminAPI and logged as much as I could, but try as I might, I couldn't find any differences between the two APIs. Both APIs were even running on the same server, in the same web site, using the same IIS app pool.

Yet the AdminAPI could validate the token, and the DataAPI could not.

IdentityServer3.AccessTokenValidation

At this point, I'll back up slightly, and describe exactly how the AdminAPI and DataAPI validate the JWTs.

We are using the IdentityServer3.AccessTokenValidation library to validate the JWTs. This extracts the identity contained in the JWT to authenticate the incoming request, and assigns it to the IPrincipal of the request. The library includes OWIN middleware that you can add to your IAppBuilder pipeline with something like the following (from the DataAPI):

var options = new IdentityServerBearerTokenAuthenticationOptions  
{
    Authority = "https://www.test.domain/identity",
    AuthenticationType = "Bearer",
    RequiredScopes = new []{ "DataAPI" }
};
app.UseIdentityServerBearerTokenAuthentication(options); // add the middleware  

This snippet shows all the configuration required to validate incoming tokens, extract the identity in the JWT payload, and assign the principal for the current thread. In this example, the IdentityServer app is hosted at https://www.test.domain/identity, and incoming JWTs must have the "DataAPI" scope to be considered valid.

If you're not familiar with IdentityServer, it might surprise you that no other configuration is required. No client IDs, no secrets, no certificates. Instead, thanks to the use of open standards (OpenID Connect), the validation middleware can contact your IdentityServer app to obtain all the information it needs.

When the validation middleware needs to validate an incoming JWT, it calls a well-known URL on IdentityServer (literally well-known; the URL path is /.well-known/openid-configuration). This returns a JSON document indicating the capabilities of the server, and the location of a variety of useful links. The following is a fragment of a discovery document as an example:

{
    "issuer": "https://www.test.domain/identity",
    "jwks_uri": "https://www.test.domain/identity/.well-known/openid-configuration/jwks",
    "authorization_endpoint": "https://www.test.domain/identity/connect/authorize",
    "token_endpoint": "https://www.test.domain/identity/connect/token",
    "userinfo_endpoint": "https://www.test.domain/identity/connect/userinfo",
    "end_session_endpoint": "https://www.test.domain/identity/connect/endsession",
    "check_session_iframe": "https://www.test.domain/identity/connect/checksession",
    "revocation_endpoint": "https://www.test.domain/identity/connect/revocation",
    "introspection_endpoint": "https://www.test.domain/identity/connect/introspect",
    "scopes_supported": [
        "openid",
        "profile",
        "email",
        "AdminAPI",
        "DataAPI"
    ],
    "claims_supported": [
        "sub",
        "name",
        "family_name",
        "given_name",
        "email",
        "id"
    ],
   ...
}

To actually validate an incoming token, the middleware uses one of two approaches:

  • Local - The middleware uses the discovery document and the jwks_uri link to dynamically download the public certificate required to validate the JWTs.
  • ValidationEndpoint - The middleware sends the JWT to IdentityServer and asks it to validate the token.

You can explicitly choose which validation mode the middleware should use, but it defaults to Both. As stated by Brock Allen on Stack Overflow:

"both" will dynamically determine which of the two approaches described above [to use] based upon some heuristics on the incoming access token presented to the Web API.

That gives us the background, so let's get back to the problem at hand: why one of our apps could validate the JWT, and the other could not.

Back to the problem, and the solution

After much trial and error, I finally discovered that the problem I was having was due to the "both" heuristic, and the validation mode it was choosing. In the AdminAPI (which was able to validate the JWTs issued by IdentityServer4) the middleware was choosing the Local validation mode. It would retrieve the public certificate of the X.509 cert used to sign the token by using the OpenID Connect discovery document, and could verify the signature.

The DataAPI, on the other hand, was trying to use ValidationEndpoint validation of the JWT. For some reason, the heuristic decided that local validation wasn't possible, and so it was trying to send the JWT to IdentityServer4 for validation.

Unfortunately, the custom access token validation endpoint available in IdentityServer3 was removed in IdentityServer4. Every time the DataAPI attempted to validate the JWT, it was getting a 404 from the IdentityServer4 app, so the validation was failing.

The simple solution was to force the middleware to always use Local validation, by updating the ValidationMode in the middleware options:

var options = new IdentityServerBearerTokenAuthenticationOptions  
{
    Authority = "https://www.test.domain/identity",
    AuthenticationType = "Bearer",
    RequiredScopes = new []{ "DataAPI" },
    ValidationMode = ValidationMode.Local // <- add this
};
app.UseIdentityServerBearerTokenAuthentication(options);  

As soon as the DataAPI was updated with the above change, it was able to validate the JWTs created using IdentityServer4, and the app started working again.

An obvious follow-up to this issue would be to figure out why the DataAPI was choosing ValidationEndpoint validation instead of Local validation. I'm sure the answer lies somewhere in this source code file, but for the life of me I can't figure it out; given it was the same token and the same middleware configuration in both cases, it should have been the same validation type as far as I can see!

Ultimately, it just needs to work, so I've moved on.

No, it doesn't irritate me not knowing why it happens.

Honest.

Summary

Upgrading from IdentityServer3 to IdentityServer4, and in the process switching from an ASP.NET app to ASP.NET Core, is not something that should be taken lightly, but overall the process went smoothly. In particular, .NET Core 2.0 made the port much easier.

The only issue was that a consumer of IdentityServer4 was attempting to use ValidationEndpoint to validate tokens, when using the IdentityServer3.AccessTokenValidation library for authentication. IdentityServer4 has removed the custom access token validation endpoint used by this method, so attempts to validate JWTs will fail when it's used.

Instead, you can force the middleware to use Local validation. This downloads the public certificate from IdentityServer4 and validates the signature locally, without having to call custom endpoints.



Anuraj Parameswaran: How to handle Ajax requests in ASP.NET Core Razor Pages?

This post is about handling Ajax requests in ASP.NET Core Razor Pages. Razor Pages is a new feature of ASP.NET Core MVC that makes coding page-focused scenarios easier and more productive.


Dominick Baier: New in IdentityServer4 v2: Simplified Configuration behind Load-balancers or Reverse-Proxies

Many people struggle with setting up ASP.NET Core behind load-balancers and reverse-proxies. This is due to the fact that Kestrel is often used just for serving up the application, whereas the “real HTTP traffic” is happening one hop earlier. IOW the ASP.NET Core app is actually running on e.g. http://localhost:5000 – but the incoming traffic is directed at e.g. https://myapp.com.

This is an issue when the application needs to generate links (e.g. in the IdentityServer4 discovery endpoint).

Microsoft hides the problem when running in IIS (this is handled in the IIS integration), and for other cases recommends the forwarded headers middleware. This middleware requires a bit more understanding of how the underlying traffic forwarding works, and its default configuration often does not work for more advanced scenarios.
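
For reference, the forwarded headers middleware referred to above is registered roughly like this (a minimal sketch; real proxy setups usually need known-proxy/network configuration as well):

using Microsoft.AspNetCore.HttpOverrides;

// In Startup.Configure, before authentication and anything that generates links
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
});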

Long story short – we added a shortcut (mostly due to popular demand) to IdentityServer that allows hard-coding the public origin – simply set the PublicOrigin property on the IdentityServerOptions. See the following screenshot where I configured the value https://login.foo.com – but note that Kestrel still runs on localhost.

[Screenshot: PublicOrigin set to https://login.foo.com while Kestrel still runs on localhost]
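
In code, setting the property looks roughly like this - a minimal sketch of a ConfigureServices registration, where the origin value is just an example:

services.AddIdentityServer(options =>
{
    // Generated links (e.g. in the discovery document) use this origin,
    // even though Kestrel itself still listens on localhost
    options.PublicOrigin = "https://login.foo.com";
})
.AddDeveloperSigningCredential();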

HTH


Filed under: ASP.NET Core, IdentityServer, Uncategorized, WebAPI


Anuraj Parameswaran: How to create a self contained .Net core application?

There are two ways to deploy a .NET Core application. FDD (Framework-dependent deployments) and SCD (Self-contained deployments), a self-contained deployment (SCD) doesn’t rely on the presence of shared components on the target system. All components, including both the .NET Core libraries and the .NET Core runtime, are included with the application and are isolated from other .NET Core applications. This post is about deploying .NET Core application in Self-contained way.


Dominick Baier: IdentityServer4 v2

Wow – this was probably our biggest update ever! Version 2.0 of IdentityServer4 is not only incorporating all the feedback we got over the last year, it also includes the necessary updates for ASP.NET Core 2 – and also has a couple of brand new features. See the release notes for a complete list as well as links to issues and PRs.

The highlights (from my POV) are:

ASP.NET Core 2 support
The authentication system in ASP.NET Core 1.x was a left-over from Katana and was designed around the fact that no DI system exists. We suggested to Microsoft that this should be updated the next time they have the “luxury” of breaking changes. That’s what happened (see more details here).

This was by far the biggest change in IdentityServer (both from a config and internal plumbing point of view). The new system is superior, but this was a lot of work!

Support for the back-channel logout specification
In addition to the JS/session management spec and front-channel logout spec – we also implemented the back-channel spec. This is for situations where the iframe logout approach for server-side apps is either too brittle or just not possible.

Making federation scenarios more robust
Federation with external providers is a complex topic – both sign-in and sign-out require a lot state management and attention to details.

The main issue was the state keeping when making round-trips to upstream providers. The way the Microsoft handlers implement that is by adding the protected state on the URL. This led to problems with URL length (either because Azure services such as Azure AD default to 2 KB of allowed URL length, or because of IE, which has the same restriction). We fixed that by including a state cache that you can selectively enable on the external handlers. This way the temporary state is kept in a cache and the URLs stay short.

Internal cleanup and refactoring
We did a lot of cleanup internally – some are breaking changes. Generally speaking we opened up more classes (especially around response generation) for derivation or replacement. One of the most popular requests was e.g. to customize the response of the introspection endpoint and redirect handling in the authorize endpoint. Oh btw – endpoints are now extensible/replaceable as well.

Support for the ASP.NET Core config system
Clients and resources can now be loaded from the ASP.NET config system, which in itself is an extensible system. The main use case is probably JSON-based config files and overriding certain settings (e.g. secrets) using environment variables.
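
A minimal sketch of what that can look like, assuming this runs in Startup.ConfigureServices where Configuration is available, and assuming an "IdentityServer" section in appsettings.json containing Clients and ApiResources arrays (the section names are just examples):

services.AddIdentityServer()
    .AddDeveloperSigningCredential()
    // Load clients and resources from configuration instead of code
    .AddInMemoryClients(Configuration.GetSection("IdentityServer:Clients"))
    .AddInMemoryApiResources(Configuration.GetSection("IdentityServer:ApiResources"));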

Misc
We also updated our docs and the satellite repos like samples, EF, ASP.NET Identity and the quickstart UI. We're going to work on new templates and VS integration next.

Support
If you need help migrating to v2 – or just in general implementing IdentityServer – let us know. We provide consulting, support and software development services.

Last but not least – we’d like to thank our 89 contributors and everyone who opened/reported an issue and gave us feedback – keep it coming! We already have some nice additions for 2.x lined up. Stay tuned.


Filed under: .NET Security, ASP.NET Core, IdentityServer, OpenID Connect, WebAPI


Anuraj Parameswaran: Token based authentication in ASP.NET Core

This post is about token based authentication in ASP.NET Core. The general concept behind a token-based authentication system is simple. Allow users to enter their username and password in order to obtain a token which allows them to fetch a specific resource - without using their username and password. Once their token has been obtained, the user can offer the token - which offers access to a specific resource for a time period - to the remote site.


Andrew Lock: Creating and trusting a self-signed certificate on Linux for use in Kestrel and ASP.NET Core

Creating and trusting a self-signed certificate on Linux for use in Kestrel and ASP.NET Core

These days, running your apps over HTTPS is pretty much required, so you need an SSL certificate to encrypt the connection between your app and a user's browser.

I was recently trying to create a self-signed certificate for use in a Linux development environment, to serve requests with ASP.NET Core over SSL when developing locally. Playing with certs is always harder than I think it's going to be, so this post describes the process I took to create and trust a self-signed cert.

Disclaimer I'm very much a Windows user at heart, so I can't give any guarantees as to whether this process is correct. It's just what I found worked for me!

Using OpenSSL to create a self-signed certificate

On Windows, creating a self-signed certificate for development is often not necessary - Visual Studio automatically creates a development certificate for use with IIS Express, so if you run your apps this way, you shouldn't have to deal with certificates directly.

On the other hand, if you want to host Kestrel directly over HTTPS, then you'll need to work with certificates directly one way or another. On Linux, you'll either need to create a cert for Kestrel to use, or for a reverse-proxy like Nginx or HAProxy. After much googling, I took the approach described in this post.

Creating a basic certificate using openssl

Creating a self-signed cert with the openssl library on Linux is theoretically pretty simple. My first attempt was to use a script something like the following:

openssl req -new -x509 -newkey rsa:2048 -keyout localhost.key -out localhost.cer -days 365 -subj /CN=localhost  
openssl pkcs12 -export -out localhost.pfx -inkey localhost.key -in localhost.cer  

This creates 3 files:

  • localhost.cer - The public key for the SSL certificate
  • localhost.key - The private key for the SSL certificate
  • localhost.pfx - An X509 certificate containing both the public and private key. This is the file that will be used by our ASP.NET Core app to serve over HTTPS.

The script creates a certificate with a "Common Name" for the localhost domain (the -subj /CN=localhost part of the script). That means we can use it to secure connections to the localhost domain when developing locally.

The problem with this certificate is that it only includes a common name, so the latest Chrome versions will not trust it. Instead, we need to create a certificate with a Subject Alternative Name (SAN) for the DNS record (i.e. localhost).

The easiest way I found to do this was to use a .conf file containing all our settings, and to pass it to openssl.

Creating a certificate with DNS SAN

The following file shows the .conf config file that specifies the particulars of the certificate that we're going to create. I've included all of the details that you must specify when creating a certificate, such as the company, email address, location etc.

If you're creating your own self signed certificate, be sure to change these details, and to add any extra DNS records you need.

[ req ]
prompt              = no  
default_bits        = 2048  
default_keyfile     = localhost.pem  
distinguished_name  = subject  
req_extensions      = req_ext  
x509_extensions     = x509_ext  
string_mask         = utf8only

# The Subject DN can be formed using X501 or RFC 4514 (see RFC 4519 for a description).
#   Its sort of a mashup. For example, RFC 4514 does not provide emailAddress.
[ subject ]
countryName     = GB  
stateOrProvinceName = London  
localityName            = London  
organizationName         = .NET Escapades


# Use a friendly name here because its presented to the user. The server's DNS
#   names are placed in Subject Alternate Names. Plus, DNS names here is deprecated
#   by both IETF and CA/Browser Forums. If you place a DNS name here, then you 
#   must include the DNS name in the SAN too (otherwise, Chrome and others that
#   strictly follow the CA/Browser Baseline Requirements will fail).
commonName          = Localhost dev cert  
emailAddress            = test@test.com

# Section x509_ext is used when generating a self-signed certificate. I.e., openssl req -x509 ...
[ x509_ext ]

subjectKeyIdentifier        = hash  
authorityKeyIdentifier  = keyid,issuer

# You only need digitalSignature below. *If* you don't allow
#   RSA Key transport (i.e., you use ephemeral cipher suites), then
#   omit keyEncipherment because that's key transport.
basicConstraints        = CA:FALSE  
keyUsage            = digitalSignature, keyEncipherment  
subjectAltName          = @alternate_names  
nsComment           = "OpenSSL Generated Certificate"

# RFC 5280, Section 4.2.1.12 makes EKU optional
#   CA/Browser Baseline Requirements, Appendix (B)(3)(G) makes me confused
#   In either case, you probably only need serverAuth.
# extendedKeyUsage  = serverAuth, clientAuth

# Section req_ext is used when generating a certificate signing request. I.e., openssl req ...
[ req_ext ]

subjectKeyIdentifier        = hash

basicConstraints        = CA:FALSE  
keyUsage            = digitalSignature, keyEncipherment  
subjectAltName          = @alternate_names  
nsComment           = "OpenSSL Generated Certificate"

# RFC 5280, Section 4.2.1.12 makes EKU optional
#   CA/Browser Baseline Requirements, Appendix (B)(3)(G) makes me confused
#   In either case, you probably only need serverAuth.
# extendedKeyUsage  = serverAuth, clientAuth

[ alternate_names ]

DNS.1       = localhost

# Add these if you need them. But usually you don't want them or
#   need them in production. You may need them for development.
# DNS.5       = localhost
# DNS.6       = localhost.localdomain
# DNS.7       = 127.0.0.1

# IPv6 localhost
# DNS.8     = ::1

We save this config to a file called localhost.conf, and use it to create the certificate using a similar script as before. Just run this script in the same folder as the localhost.conf file.

openssl req -config localhost.conf -new -x509 -sha256 -newkey rsa:2048 -nodes \  
    -keyout localhost.key -days 3650 -out localhost.crt
openssl pkcs12 -export -out localhost.pfx -inkey localhost.key -in localhost.crt  

This will ask you for an export password for your pfx file. Be sure that you provide a password and keep it safe - ASP.NET Core requires that you don't leave the password blank. You should now have an X509 certificate called localhost.pfx that you can use to add HTTPS to your app.

Trusting the certificate

Before we use the certificate in our apps, we need to trust it on our local machine. Exactly how you go about this varies depending on which flavour of Linux you're using. On top of that, some apps seem to use their own certificate stores, so trusting the cert globally won't necessarily mean it's trusted in all of your apps.

The following example worked for me on Ubuntu 16.04, and kept Chrome happy, but I had to explicitly add an exception to Firefox when I first used the cert.

#Install the cert utils
sudo apt install libnss3-tools  
# Trust the certificate for SSL 
pk12util -d sql:$HOME/.pki/nssdb -i localhost.pfx  
# Trust a self-signed server certificate
certutil -d sql:$HOME/.pki/nssdb -A -t "P,," -n 'dev cert' -i localhost.crt  

As I said before, I'm not a Linux guy, so I'm not entirely sure if you need to run both of the trust commands, but I did just in case! If anyone knows a better approach I'm all ears :)

We've now created a self-signed certificate with a DNS SAN name for localhost, and we trust it on the development machine. The last thing remaining is to use it in our app.

Configuring Kestrel to use your self-signed certificate

For simplicity, I'm just going to show how to load the localhost.pfx certificate in your app from the .pfx file, and how configure Kestrel to use it to serve requests over HTTPS. I've hard-coded the .pfx password in this example for simplicity, but you should load it from configuration instead.

Warning You should never include the password directly like this in a production app.

The following example is for ASP.NET Core 2.0 - Shawn Wildermuth has an example of how to add SSL in ASP.NET Core 1.X (as well as how to create a self-signed cert on Windows).

public class Program  
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseKestrel(options =>
            {
                // Configure the Url and ports to bind to
                // This overrides calls to UseUrls and the ASPNETCORE_URLS environment variable, but will be 
                // overridden if you call UseIisIntegration() and host behind IIS/IIS Express
                options.Listen(IPAddress.Loopback, 5001);
                options.Listen(IPAddress.Loopback, 5002, listenOptions =>
                {
                    listenOptions.UseHttps("localhost.pfx", "testpassword");
                });
            })
            .UseStartup<Startup>()
            .Build();
}

Although CreateDefaultBuilder() adds Kestrel to the app anyway, you can call UseKestrel() again and specify additional options. Here we are defining two URLs and ports to listen on (the IPAddress.Loopback address corresponds to localhost, or 127.0.0.1).

We add HTTPS to the second Listen() call with the UseHttps() extension method. There are several overloads of the method, which allow you to provide an X509Certificate2 object directly, or as in this case, a filename and password to a certificate.
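
To avoid hard-coding the password, one option is to read it from the configuration system before building the host. The following is a minimal sketch of the BuildWebHost body; the "CertPassword" key and the appsettings.json file are assumptions for the example:

var config = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("appsettings.json", optional: true)
    .AddEnvironmentVariables()
    .Build();

var certPassword = config["CertPassword"];

var host = WebHost.CreateDefaultBuilder(args)
    .UseKestrel(options =>
    {
        options.Listen(IPAddress.Loopback, 5002, listenOptions =>
        {
            // Same UseHttps overload as before, but with the password loaded from config
            listenOptions.UseHttps("localhost.pfx", certPassword);
        });
    })
    .UseStartup<Startup>()
    .Build();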

If everything is configured correctly, you should be able to view the app in Chrome, and see a nice, green, Secure padlock:

[Screenshot: the app served over HTTPS in Chrome, showing the green Secure padlock]

As I said at the start of this post, I'm not 100% on all of this, so if anyone has any suggestions or improvements, please let me know in the comments.



Damien Bowden: Using EF Core and SQLite to persist SignalR Group messages in ASP.NET Core

The article shows how SignalR messages can be saved to a database using EF Core and SQLite. The post uses the SignalR Hub created in the blog post SignalR Group messages with ngrx and Angular, and extends it so that users can only join an existing SignalR group. The group history is then sent to the client that joined.

Code: https://github.com/damienbod/AspNetCoreAngularSignalR

Creating the Database Store

To create a store for the SignalR Hub, an EF Core context is created, along with the store logic which is responsible for accessing the database. The NewsContext class is really simple and just provides two DbSets: NewsItemEntities, which is used to save the SignalR messages, and NewsGroups, which is used to validate and create the groups in the SignalR Hub.

using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;

namespace AspNetCoreAngularSignalR.Providers
{
    public class NewsContext : DbContext
    {
        public NewsContext(DbContextOptions<NewsContext> options) :base(options)
        { }

        public DbSet<NewsItemEntity> NewsItemEntities { get; set; }

        public DbSet<NewsGroup> NewsGroups { get; set; }
    }
}

The NewsStore provides the methods which are used in the Hub, and also in an ASP.NET Core controller, to create and select the groups as required. The NewsStore uses the NewsContext class.

using AspNetCoreAngularSignalR.SignalRHubs;
using System;
using System.Collections.Generic;
using System.Linq;

namespace AspNetCoreAngularSignalR.Providers
{
    public class NewsStore
    {
        public NewsStore(NewsContext newsContext)
        {
            _newsContext = newsContext;
        }

        private readonly NewsContext _newsContext;

        public void AddGroup(string group)
        {
            _newsContext.NewsGroups.Add(new NewsGroup
            {
                Name = group
            });
            _newsContext.SaveChanges();
        }

        public bool GroupExists(string group)
        {
            var item = _newsContext.NewsGroups.FirstOrDefault(t => t.Name == group);
            if(item == null)
            {
                return false;
            }

            return true;
        }

        public void CreateNewItem(NewsItem item)
        {
            if (GroupExists(item.NewsGroup))
            {
                _newsContext.NewsItemEntities.Add(new NewsItemEntity
                {
                    Header = item.Header,
                    Author = item.Author,
                    NewsGroup = item.NewsGroup,
                    NewsText = item.NewsText
                });
                _newsContext.SaveChanges();
            }
            else
            {
                throw new System.Exception("group does not exist");
            }
        }

        public IEnumerable<NewsItem> GetAllNewsItems(string group)
        {
            return _newsContext.NewsItemEntities.Where(item => item.NewsGroup == group).Select(z => 
                new NewsItem {
                    Author = z.Author,
                    Header = z.Header,
                    NewsGroup = z.NewsGroup,
                    NewsText = z.NewsText
                });
        }

        public List<string> GetAllGroups()
        {
            return _newsContext.NewsGroups.Select(t =>  t.Name ).ToList();
        }
    }
}

The NewsStore and the NewsContext are registered in the ConfigureServices method of the Startup class. The SignalR Hub is a singleton, so the NewsContext and the NewsStore classes are also added as singletons. AddDbContext requires the ServiceLifetime.Singleton parameter, as this is not the default. This is not optimal when using the NewsContext in the ASP.NET Core controller, as you need to consider the possibility of multiple concurrent client requests.

public void ConfigureServices(IServiceCollection services)
{
	var sqlConnectionString = Configuration.GetConnectionString("DefaultConnection");

	services.AddDbContext<NewsContext>(options =>
		options.UseSqlite(
			sqlConnectionString
		), ServiceLifetime.Singleton
	);

	services.AddCors(options =>
	{
		options.AddPolicy("AllowAllOrigins",
			builder =>
			{
				builder
					.AllowAnyOrigin()
					.AllowAnyHeader()
					.AllowAnyMethod();
			});
	});

	services.AddSingleton<NewsStore>();
	services.AddSignalR();
	services.AddMvc();
}
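
If the singleton NewsContext is a concern for the controller, one possible alternative (a sketch, not part of the original sample) is to keep the default scoped DbContext registration and let a singleton store resolve a fresh context per operation via IServiceScopeFactory. The ScopedNewsStore name below is hypothetical:

using System.Linq;
using Microsoft.Extensions.DependencyInjection;

namespace AspNetCoreAngularSignalR.Providers
{
    // Hypothetical alternative store: resolves a scoped NewsContext per call,
    // so the singleton SignalR Hub never shares a single DbContext instance.
    public class ScopedNewsStore
    {
        private readonly IServiceScopeFactory _scopeFactory;

        public ScopedNewsStore(IServiceScopeFactory scopeFactory)
        {
            _scopeFactory = scopeFactory;
        }

        public bool GroupExists(string group)
        {
            using (var scope = _scopeFactory.CreateScope())
            {
                var newsContext = scope.ServiceProvider.GetRequiredService<NewsContext>();
                return newsContext.NewsGroups.Any(t => t.Name == group);
            }
        }
    }
}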

Updating the SignalR Hub

The SignalR NewsHub uses the NewsStore, which is injected using constructor injection. When a message is sent through the hub, it is persisted using the CreateNewItem method from the store. When a new user joins an existing group, the history is sent to the client by invoking the “History” message.

using AspNetCoreAngularSignalR.Providers;
using Microsoft.AspNetCore.SignalR;
using System.Threading.Tasks;

namespace AspNetCoreAngularSignalR.SignalRHubs
{
    public class NewsHub : Hub
    {
        private NewsStore _newsStore;

        public NewsHub(NewsStore newsStore)
        {
            _newsStore = newsStore;
        }

        public Task Send(NewsItem newsItem)
        {
            if(!_newsStore.GroupExists(newsItem.NewsGroup))
            {
                throw new System.Exception("cannot send a news item to a group which does not exist.");
            }

            _newsStore.CreateNewItem(newsItem);
            return Clients.Group(newsItem.NewsGroup).InvokeAsync("Send", newsItem);
        }

        public async Task JoinGroup(string groupName)
        {
            if (!_newsStore.GroupExists(groupName))
            {
                throw new System.Exception("cannot join a group which does not exist.");
            }

            await Groups.AddAsync(Context.ConnectionId, groupName);
            await Clients.Group(groupName).InvokeAsync("JoinGroup", groupName);

            var history = _newsStore.GetAllNewsItems(groupName);
            await Clients.Client(Context.ConnectionId).InvokeAsync("History", history);
        }

        public async Task LeaveGroup(string groupName)
        {
            if (!_newsStore.GroupExists(groupName))
            {
                throw new System.Exception("cannot leave a group which does not exist.");
            }

            await Clients.Group(groupName).InvokeAsync("LeaveGroup", groupName);
            await Groups.RemoveAsync(Context.ConnectionId, groupName);
        }
    }
}

A NewsController is used to select all the existing groups, or to add a new group, which can then be used by the SignalR Hub.

using AspNetCoreAngularSignalR.SignalRHubs;
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.SignalR;
using AspNetCoreAngularSignalR.Providers;

namespace AspNetCoreAngularSignalR.Controllers
{
    [Route("api/[controller]")]
    public class NewsController : Controller
    {
        private NewsStore _newsStore;

        public NewsController(NewsStore newsStore)
        {
            _newsStore = newsStore;
        }

        [HttpPost]
        public IActionResult AddGroup([FromQuery] string group)
        {
            if (string.IsNullOrEmpty(group))
            {
                return BadRequest();
            }
            _newsStore.AddGroup(group);
            return Created("AddGroup", group);
        }

        public List<string> GetAllGroups()
        {
            return _newsStore.GetAllGroups();
        }
    }
}

Using the SignalR Hub

The NewsService Angular service listens for SignalR events and handles these using the ngrx store.

import 'rxjs/add/operator/map';

import { HttpClient, HttpHeaders } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs/Observable';

import { HubConnection } from '@aspnet/signalr-client';
import { NewsItem } from './models/news-item';
import { Store } from '@ngrx/store';
import { NewsState } from './store/news.state';
import * as NewsActions from './store/news.action';

@Injectable()
export class NewsService {

    private _hubConnection: HubConnection;
    private actionUrl: string;
    private headers: HttpHeaders;

    constructor(private http: HttpClient,
        private store: Store<any>
    ) {
        this.init();
        this.actionUrl = 'http://localhost:5000/api/news/';

        this.headers = new HttpHeaders();
        this.headers = this.headers.set('Content-Type', 'application/json');
        this.headers = this.headers.set('Accept', 'application/json');
    }

    send(newsItem: NewsItem): NewsItem {
        this._hubConnection.invoke('Send', newsItem);
        return newsItem;
    }

    joinGroup(group: string): void {
        this._hubConnection.invoke('JoinGroup', group);
    }

    leaveGroup(group: string): void {
        this._hubConnection.invoke('LeaveGroup', group);
    }

    getAllGroups(): Observable<string[]> {
        return this.http.get<string[]>(this.actionUrl, { headers: this.headers });
    }

    private init() {

        this._hubConnection = new HubConnection('/looney');

        this._hubConnection.on('Send', (newsItem: NewsItem) => {
            this.store.dispatch(new NewsActions.ReceivedItemAction(newsItem));
        });

        this._hubConnection.on('JoinGroup', (data: string) => {
            this.store.dispatch(new NewsActions.ReceivedGroupJoinedAction(data));
        });

        this._hubConnection.on('LeaveGroup', (data: string) => {
            this.store.dispatch(new NewsActions.ReceivedGroupLeftAction(data));
        });

        this._hubConnection.on('History', (newsItems: NewsItem[]) => {
            this.store.dispatch(new NewsActions.ReceivedGroupHistoryAction(newsItems));
        });

        this._hubConnection.start()
            .then(() => {
                console.log('Hub connection started')
            })
            .catch(err => {
                console.log('Error while establishing connection')
            });
    }

}

In the Angular application, when the user joins a group, he/she receives all the existing messages.

original pic: https://damienbod.files.wordpress.com/2017/09/signlargroups.gif

Links

https://github.com/aspnet/SignalR

https://github.com/aspnet/SignalR#readme

https://github.com/ngrx

https://www.npmjs.com/package/@aspnet/signalr-client

https://dotnet.myget.org/F/aspnetcore-ci-dev/api/v3/index.json

https://dotnet.myget.org/F/aspnetcore-ci-dev/npm/

https://dotnet.myget.org/feed/aspnetcore-ci-dev/package/npm/@aspnet/signalr-client

https://www.npmjs.com/package/msgpack5



Damien Bowden: Auto redirect to an STS server in an Angular app using oidc Implicit Flow

This article shows how to implement an auto redirect in an Angular application, if using the OIDC Implicit Flow with an STS server. When a user opens the application, it is sometimes required that the user is automatically redirected to the login page on the STS server. This can be tricky to implement, as you need to know when to redirect and when not. The OIDC client is implemented using the angular-auth-oidc-client npm package.

Code: https://github.com/damienbod/angular-auth-oidc-sample-google-openid

The angular-auth-oidc-client npm package provides an event when the OIDC module is ready to use, and can also be configured to emit an event to inform the consuming component when the callback from the STS server has been processed. These two events can be used to implement the auto redirect to the STS server when the user is not authorized.

The app.component can subscribe to these 2 events in the constructor.

constructor(public oidcSecurityService: OidcSecurityService,
	private router: Router
) {
	if (this.oidcSecurityService.moduleSetup) {
		this.onOidcModuleSetup();
	} else {
		this.oidcSecurityService.onModuleSetup.subscribe(() => {
			this.onOidcModuleSetup();
		});
	}

	this.oidcSecurityService.onAuthorizationResult.subscribe(
		(authorizationResult: AuthorizationResult) => {
			this.onAuthorizationResultComplete(authorizationResult);
		});
}

The onOidcModuleSetup function handles the onModuleSetup event. The Angular app is configured not to use hash (#) URLs, so that the STS callback is the only redirect which uses the hash. Due to this, all URLs with a hash can be sent on to be processed by the OIDC module. If any other path is called, apart from the auto-login, the path is saved to local storage. This is done so that after a successful token validation, the user is redirected back to the correct route. If the user is not authorized, the auto-login component is called.

private onOidcModuleSetup() {
	if (window.location.hash) {
		this.oidcSecurityService.authorizedCallback();
	} else {
		if ('/autologin' !== window.location.pathname) {
			this.write('redirect', window.location.pathname);
		}
		console.log('AppComponent:onModuleSetup');
		this.oidcSecurityService.getIsAuthorized().subscribe((authorized: boolean) => {
			if (!authorized) {
				this.router.navigate(['/autologin']);
			}
		});
	}
}

The onAuthorizationResultComplete function handles the onAuthorizationResult event. If the response from the server is valid, the user of the application is redirected using the saved path from the local storage.

private onAuthorizationResultComplete(authorizationResult: AuthorizationResult) {
	console.log('AppComponent:onAuthorizationResultComplete');
	const path = this.read('redirect');
	if (authorizationResult === AuthorizationResult.authorized) {
		this.router.navigate([path]);
	} else {
		this.router.navigate(['/Unauthorized']);
	}
}

The onAuthorizationResult event is only emitted if the trigger_authorization_result_event configuration property is set to true.

constructor(public oidcSecurityService: OidcSecurityService) {

 let openIDImplicitFlowConfiguration = new OpenIDImplicitFlowConfiguration();
 openIDImplicitFlowConfiguration.stsServer = 'https://accounts.google.com';
 ...
 openIDImplicitFlowConfiguration.trigger_authorization_result_event = true;

 this.oidcSecurityService.setupModule(openIDImplicitFlowConfiguration);
}

The auto-login component redirects to the STS server with the correct parameters for the application, without any user interaction.

import { Component, OnInit, OnDestroy } from '@angular/core';
import { Router } from '@angular/router';
import { Subscription } from 'rxjs/Subscription';
import { OidcSecurityService, AuthorizationResult } from 'angular-auth-oidc-client';

@Component({
    selector: 'app-auto-component',
    templateUrl: './auto-login.component.html'
})

export class AutoLoginComponent implements OnInit, OnDestroy {
    lang: any;

    constructor(public oidcSecurityService: OidcSecurityService
    ) {
        this.oidcSecurityService.onModuleSetup.subscribe(() => { this.onModuleSetup(); });
    }

    ngOnInit() {
        if (this.oidcSecurityService.moduleSetup) {
            this.onModuleSetup();
        }
    }

    ngOnDestroy(): void {
        this.oidcSecurityService.onModuleSetup.unsubscribe();
    }

    private onModuleSetup() {
        this.oidcSecurityService.authorize();
    }
}

The auto-login component is also added to the routes.

import { Routes, RouterModule } from '@angular/router';
import { AutoLoginComponent } from './auto-login/auto-login.component';
...

const appRoutes: Routes = [
    { path: '', component: HomeComponent },
    { path: 'home', component: HomeComponent },
    { path: 'autologin', component: AutoLoginComponent },
    ...
];

export const routing = RouterModule.forRoot(appRoutes);

Now the user is auto redirected to the login page of the STS server when opening the Angular SPA application.

Links:

https://www.npmjs.com/package/angular-auth-oidc-client

https://github.com/damienbod/angular-auth-oidc-client



Andrew Lock: Using anonymous types and tuples to attach correlation IDs to scope state with Serilog and Seq in ASP.NET Core

Using anonymous types and tuples to attach correlation IDs to scope state with Serilog and Seq in ASP.NET Core

In my last post I gave an introduction to structured logging, and described how you can use scopes to add additional key-value pairs to log messages. Unfortunately, the syntax for key-value pairs using ILogger.BeginScope() can be a bit verbose, so I showed an extension method you can use to achieve the same result with a terser syntax.

In this post, I'll show some extension methods you can use to add multiple key-value pairs to a log message. To be honest, I'm not entirely convinced by any of them, so this post is more a record of my attempts than a recommendation! Any suggestions on ways to simplify or improve them are gratefully received.

Background: the code to optimise

In the last post, I provided some sample code that we're looking to optimise. Namely, the calls to _logger.BeginScope() where we provide a dictionary of key-value pairs:

public void Add(int productId, int basketId)  
{
    using (_logger.BeginScope(new Dictionary<string, object> {
        { nameof(productId), productId }, { nameof(basketId), basketId} }))
    {
        _logger.LogDebug("Adding product to basket");
        var product = _service.GetProduct();
        var basket = _service.GetBasket();

        using (var transaction = factory.Create())
        using (_logger.BeginScope(new Dictionary<string, object> {{ "transactionId", transaction.Id }}))
        {
            basket.Add(product);
            transaction.Submit();
            _logger.LogDebug("Product added to basket");
        }
    }
}

I showed how we could optimise the second call, with a simple extension method that takes a string key and an object value as two separate parameters:

public static class LoggerExtensions  
{
    public static IDisposable BeginScope(this ILogger logger, string key, object value)
    {
        return logger.BeginScope(new Dictionary<string, object> { { key, value } });
    }
}

This overload simply wraps the creation of the Dictionary<string, object> required by Serilog / Seq to attach key-value pairs to a log message. With this overload, the second BeginScope() call is reduced to:

using (_logger.BeginScope("transactionId", transaction.Id))  

This post describes my attempts to generalise this, so you can pass in multiple key-value pairs. All of these extensions will wrap the creation of Dictionary<> so that Serilog (or any other structured logging provider) can attach the key-value pairs to the log message.

Attempt 1: BeginScope extension method using Tuples

It's worth noting that if you're initialising a dictionary with several KeyValuePair<>s, the syntax isn't actually very verbose, apart from the new Dictionary<> definition. You can use the dictionary initialisation syntax {key, value} to add multiple keys:

 using (_logger.BeginScope(new Dictionary<string, object> {
        { nameof(productId), productId }, { nameof(basketId), basketId} }))

I was hoping to create an overload for BeginScope() that had a similar terseness for the key creation, but without the need to explicitly create a Dictionary<> in the calling code.

My first thought was C# 7 tuples. KeyValuePairs are essentially tuples, so it seemed like a good fit. The following extension method accepts a params array of tuples, where the key value is a string and the value is an object:

public static class LoggerExtensions  
{
    public static IDisposable BeginScope(this ILogger logger, params (string key, object value)[] keys)
    {
        return logger.BeginScope(keys.ToDictionary(x => x.key, x => x.value));
    }
}

With this extension method, we could write our sample BeginScope() call as the following:

 using (_logger.BeginScope((key: nameof(productId), value: productId ), ( key: nameof(basketId), value: basketId)))

or even simpler, by omitting the tuple names entirely:

 using (_logger.BeginScope((nameof(productId), productId ), ( nameof(basketId), basketId)))

Initially, I was pretty happy with this. It's nice and concise and it achieves what I was aiming for. Unfortunately, it has some flaws if you try and use it with only a single tuple.

Overload resolutions with a single tuple

The BeginScope overload works well when you have multiple key-value pairs, but you would expect the behaviour to be the same no matter how many tuples you pass to the method. Unfortunately, if you try and call it with just a single tuple you'll be out of luck:

using (_logger.BeginScope((key: "transactionId", value: transaction.Id)))  

We're clearly passing a tuple here, so you might hope our overload would be used. Unfortunately, the main ILogger.BeginScope<T>(T state) is a generic method, so it tends to be quite greedy when it comes to overload selection. Our tuple definition (string key, object value) state is less specific than the generic T state, so it is never called. The transactionId value isn't added to the log as a correlation ID; instead, it's serialised and added to the Scope property.

Changing our extension method to be generic ((string key, T value) state) doesn't work either; the main generic overload is always selected. How annoying.
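
For completeness, this is roughly what that generic attempt looks like - a sketch of a hypothetical extension overload which the compiler never selects, because the instance ILogger.BeginScope<T>(T state) method always wins during overload resolution:

public static class LoggerExtensions  
{
    // Hypothetical generic variant - the instance ILogger.BeginScope<T>(T state)
    // method is always preferred during overload resolution, so this is never called.
    public static IDisposable BeginScope<T>(this ILogger logger, (string key, T value) state)
    {
        return logger.BeginScope(new Dictionary<string, object> { { state.key, state.value } });
    }
}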

Attempt 2: Avoiding overload resolution conflicts with Tuples

There's a simple solution to this problem: don't try and overload the ILogger.BeginScope<T>() method, just call it something else. For example, if we rename the extension to BeginScopeWith, then there won't be any overload resolution issues, and we can happily use single tuples without any issues:

public static class LoggerExtensions  
{
    public static IDisposable BeginScopeWith(this ILogger logger, params (string key, object value)[] keys)
    {
        return logger.BeginScope(keys.ToDictionary(x => x.key, x => x.value));
    }
}

By renaming the extension, we can avoid any ambiguity in method selection. That also gives us some compile-time safety, as only tuples that actually are (string key, object value) can be used:

using (_logger.BeginScopeWith((key: "transactionId", value: transactionId))) // <- explicit names  
using (_logger.BeginScopeWith(("transactionId", transactionId))) // <- names inferred  
using (_logger.BeginScopeWith((oops: "transactionId", wrong: transactionId))) // <- incorrect names ignored, only order and types matter  
using (_logger.BeginScopeWith((key: transactionId, value: "transactionId"))) // <- wrong order, won't compile  
using (_logger.BeginScopeWith(("transactionId", transactionId, basketId))) // <- to many items, wrong tuple type, won't compile  

That's about as far as I could get with tuples. It's not a bad compromise, but I wanted to try something else.

Attempt 3: Anonymous types as dictionaries of key value pairs

I only actually started exploring tuples after I went down this next avenue - using anonymous types as a Dictionary<string, object>. My motivation came from when you would specify additional HTML attributes using the HtmlHelper classes in ASP.NET Razor templates, for example:

@Html.TextBoxFor(m => m.Name, new { placeholder = "Enter your name"})

Note You can still use Html Helpers in ASP.NET Core, but they have been largely superseded by the (basically superior in every way) Tag Helpers.

This syntax was what I had in mind when initially thinking about the problem. It's concise, it clearly contains key-value pairs, and it's familiar. The only problem is, converting an anonymous type to Dictionary<string, object> is harder than the Html Helpers make it seem!

Converting an anonymous type to a Dictionary<string, object>

Given I was trying to replicate the behaviour of the ASP.NET Html Helpers, checking out the source code seemed like a good place to start. I actually used the original ASP.NET source code, rather than the ASP.NET Core source code, but you can find similar code there too.

The code I found initially was for the HttpRouteValueDictionary. This uses a similar behaviour to the Html Helpers, in that it converts an object into a dictionary. It reads an anonymous type's properties, and uses the property name and values as key-value pairs. It also handles the case where the provided object is already a dictionary.

I used this class as the basis for a helper method that converts an object into a dictionary (actually an IEnumerable<KeyValuePair<string, object>>):

public static IEnumerable<KeyValuePair<string, object>> GetValuesAsDictionary(object values)  
{
    var valuesAsDictionary = values as IEnumerable<KeyValuePair<string, object>>;
    if (valuesAsDictionary != null)
    {
        return valuesAsDictionary;
    }
    var dictionary = new Dictionary<string, object>();
    if (values != null)
    {
        foreach (PropertyHelper property in PropertyHelper.GetProperties(values))
        {
            // Extract the property values from the property helper
            // The advantage here is that the property helper caches fast accessors.
            dictionary.Add(property.Name, property.GetValue(values));
        }
    }
    return dictionary;
}

If the values object provided is already the correct type, we can just return it directly. Otherwise, we create a new dictionary and populate it with the properties from our object. This uses a helper class called PropertyHelper.

This is an internal helper class that's used both in the ASP.NET stack and in ASP.NET Core to extract property names and values from an object. It basically uses reflection to loop over an object's properties, but it's heavily optimised and cached. There's over 500 lines in the class, so I won't list it here, but at the heart of the class is a method that reflects over the provided object type's properties:

IEnumerable<PropertyInfo> properties = type  
    .GetProperties(BindingFlags.Public | BindingFlags.Instance)
    .Where(prop => prop.GetIndexParameters().Length == 0 && prop.GetMethod != null);

This is the magic that allows us to get all the properties as key-value pairs.
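
If you don't want to pull the internal PropertyHelper class into your own project, a simplified (uncached, and therefore slower) version of the conversion can be written with plain reflection. A minimal sketch, using a hypothetical SimpleDictionaryHelper name to avoid confusion with the helper above:

using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public static class SimpleDictionaryHelper
{
    // Simplified, reflection-only conversion of an object's public properties
    // to key-value pairs. Unlike PropertyHelper, nothing is cached here.
    public static IEnumerable<KeyValuePair<string, object>> GetValuesAsDictionary(object values)
    {
        if (values is IEnumerable<KeyValuePair<string, object>> valuesAsDictionary)
        {
            return valuesAsDictionary;
        }

        if (values == null)
        {
            return new Dictionary<string, object>();
        }

        return values.GetType()
            .GetProperties(BindingFlags.Public | BindingFlags.Instance)
            .Where(prop => prop.GetIndexParameters().Length == 0 && prop.GetMethod != null)
            .ToDictionary(prop => prop.Name, prop => prop.GetValue(values));
    }
}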

Using anonymous types to add scope state

With the GetValuesAsDictionary() helper method, we can now build an extension method that lets us use anonymous types to pass key-value pairs to the BeginScope method:

public static class LoggerExtensions  
{
    public static IDisposable BeginScopeWith(this ILogger logger, object values)
    {
        var dictionary = DictionaryHelper.GetValuesAsDictionary(values);
        return logger.BeginScope(dictionary);
    }
}

Unfortunately, we have the same method overload problem as we saw with the tuples. If we call the method BeginScope, then the "native" generic method overload will always win (unless you explicitly cast to object in the calling code - yuk), and our extension would never be called. The anonymous type object would end up serialised to the Scope property instead of being dissected into key-value pairs. We can avoid the problem by giving our extension a different name, but it does feel a bit clumsy.

Still, this extension means our using statements are a lot shorter and easier to read, even more so than the tuple syntax I think:

using (_logger.BeginScopeWith(new { productId = productId, basketId = basketId }))  
using (_logger.BeginScopeWith(new { transactionId = transaction.Id }))  

A downside is that we can't use nameof() expressions to avoid typos in the key names any more. Instead, we could use inference, which also gets rid of the duplication:

using (_logger.BeginScopeWith(new { productId, basketId }))  
using (_logger.BeginScopeWith(new { transactionId = transaction.Id }))  

Personally, I think that's a pretty nice syntax, which is much easier to read compared to the original:

 using (_logger.BeginScope(new Dictionary<string, object> {
        { nameof(productId), productId }, { nameof(basketId), basketId} }))
 using (_logger.BeginScope(new Dictionary<string, object> {{ "transactionId", transaction.Id }}))

Other possibilities

The main other possibility that came to mind for this was to use dynamic and ExpandoObject, given that these are somewhat equivalent to a Dictionary<string,object> anyway. After a bit of playing I couldn't piece together anything that really worked, but if somebody else can come up with something, I'm all ears!

Summary

It bugs me that there's no way of writing either the tuple-based or object-based extension methods as overloads of BeginScope, rather than having to create a new name. It's maybe a bit silly, but I suspect it means that in practice I just won't end up using them, even though I think the anonymous type version in particular is much easier to read than the original dictionary-based version.

Even so, it was interesting to try and tackle this problem, and to look through the code that the ASP.NET team used to solve it previously. Even if I / you don't use these extension methods, it's another tool in the coding belt should it be useful in a different situation. As always, let me know if you have any comments / suggestions / issues, and thanks for reading.


Anuraj Parameswaran: jQuery Unobtrusive Ajax Helpers in ASP.NET Core

This post is about getting the jQuery Unobtrusive Ajax helpers working in ASP.NET Core. The AjaxHelper class represents support for rendering HTML in AJAX scenarios within a view. If you're migrating an existing ASP.NET MVC project to ASP.NET Core MVC, there are no tag helpers available out of the box as a replacement. Instead, the ASP.NET Core team recommends data-* attributes. All the existing @Ajax.Form attributes are available as data-* attributes.


Darrel Miller: OpenAPI is not what I thought

Sometimes I do my best thinking in the car, and today was an excellent example of this. I had a phone call today with the digital agency Authentic, who have been hired to help you stop saying Swagger when you mean OpenAPI. I'm only partially kidding. They asked me some hard questions about why I got involved in the OpenAPI Initiative, and experiences I have had with OpenAPI delivering value. Apparently this started a chain reaction of noodling in my subconscious, because while driving my daughter to ballet, it hit me. I've been thinking about OpenAPI all wrong.



Let me be categorically clear. In the beginning, I was not a fan of Swagger.  Or WADL, or RAML, or API Blueprint or RADL.  I was, and largely still am, a card carrying Restafarian.  I wear that slur with pride.  I like to eat REST Pragmatists for breakfast.   Out of band coupling is the scourge of the Internet.  We’ve tried interface definition languages before. Remember WSDL?  Been there, done that. Please, not again.

An Inflection Point

The first chink in my armour of objections appeared at the API Strategy Conference in Austin in November 2015 (there's another one coming up soon in Portland: http://apistrat.com/). I watched Tony Tam do a workshop on Swagger. Truth be told, I only attended to see what trouble I could cause. It turns out he showed a tool called Swagger-Inflector, and I was captivated. Inflector used Swagger for a different purpose: it became a DSL for driving the routing of a Java-based HTTP API.

An Inside Job

It wasn’t too long after that when I was asked if I would be interested in joining the OpenAPI Initiative.  It was clear from fighting the hypermedia fight for more than 10 years, we were losing the war.  The Swagger tooling provided value that developers building APIs wanted.  Hypermedia wasn’t solving problems they were facing, it was a promise to solve problems that they might face in a few years by doing extra work up front.  I understood the problems that Swagger/OpenAPI could cause, but I had a higher chance of convincing developers to stop drinking Mountain Dew than to pry a documentation generator from the hands of a dev with a deadline.  If I were going to have any chance of having an impact, I was going to have to work from the inside.

No Escape

A big part of my day job involves dealing with OpenAPI descriptions. Our customers import them and export them. My team uses it to describe our management API, as do all the other Azure teams. As the de-facto “OpenAPI Guy” at Microsoft I end up having a fair number of interactions with other teams about what works and what doesn't work in OpenAPI. A recurring theme is that people keep wanting to put stuff in OpenAPI that has no business in an OpenAPI description. At least that's how I perceived it until today.

Scope Creep?

OpenAPI descriptions are primarily used to drive consumer-facing artifacts. HTML documentation and client libraries are the most prominent examples. Interface descriptions should not contain implementation details. But I keep running into scenarios where people want to add stuff that seems like it would be useful: credentials, rate limiting details, transformations, caching, CORS… the list continues to grow. I've considered the need for a second description document that contains those details and augments the OpenAPI description. I've considered adding an “x-implementation” object to the OpenAPI description. I've considered “x-impl-” prefixes to distinguish between implementation details and the actual interface description. But nothing has felt right. I didn't know why. Now I do. It's all implementation, with some subset being the interface. Which subset depends on your perspective.

Pivot And Think Bigger

Remember Swagger-inflector?  It didn’t use OpenAPI to describe the interface at all.  It was purely an implementation artifact.  You know why Restafarians get all uppity about OpenAPI?  Because as an interface description language it encourages out of band coupling that makes independent evolvability hard.  That thing that micro-services need so badly.

What if OpenAPI isn’t an interface definition language at all?  What if it is purely a declarative description of an HTTP implementation?  What if tooling allowed you to project some subset of the OpenAPI description to drive HTML documentation? And another subset could be used for client code generation? And a different subset for driving server routing, middleware, validation and language bindings. OpenAPI descriptions become a platform independent definition of all aspects of an HTTP API.

Common Goals

One of my original objections to OpenAPI descriptions is that they contained information that I didn’t think belonged in an interface description.  Declaring a fixed subset of status codes an operation returned seemed unnecessary and restrictive.  But for scaffolding servers, generating mock responses and ensuring consistency across APIs, having the needed status codes identified is definitely valuable.

For the hypermedia folks, their generated documentation would only be based on a projection of the link relations, media types, entry points and possibly schemas. For those who want a more traditional operation-based documentation, that is fine too. It is the recognition of the projection step that is important. It allows us to ensure that private implementation details are not leaked to interface-driven artifacts.

Back to Work

Now I’m sure many people already perceive OpenAPI descriptions this way.  Well, where have you been my friends?  We need you contributing to the Github repo.  Me, I’m a bit slower and this only dawned on me today.  But hopefully, this will help me find even more ways to deliver value to developers via OpenAPI descriptions.

The other possibility of course is that people think I’m just plain wrong and that OpenAPI really is the description of an interface.


Anuraj Parameswaran: Getting started with SignalR using ASP.NET Core

This post is about getting started with SignalR in ASP.NET Core. SignalR is a framework for ASP.NET developers that makes developing real-time web functionality easy. SignalR allows bi-directional communication between server and client. Servers can now push content to connected clients instantly as it becomes available.


Andrew Lock: Creating an extension method for attaching key-value pairs to scope state using ASP.NET Core

Creating an extension method for attaching key-value pairs to scope state using ASP.NET Core

This is the first in a short series of posts exploring the process I went through to make working with scopes a little nicer in ASP.NET Core (and Serilog / Seq). In this post I'll create an extension method for logging a single key-value pair as a scope. In the next post, I'll extend this to multiple key-value pairs.

I'll start by presenting an overview of structured logging and why you should be using it in your applications. This is largely the same introduction as in my last post, so feel free to skip ahead if I'm preaching to the choir!

Next, I'll show how scopes are typically recorded in ASP.NET Core, with a particular focus on Serilog and Seq. This will largely demonstrate the semantics described by Nicholas Blumhardt in his post on the semantics of ILogger.BeginScope(), but it will also set the scene for the meat of this post. In particular, we'll take a look at the syntax needed to record scope state as a series of key-value pairs.

Finally, I'll show an extension method you can add to your application to make recording key-value scope state that little bit easier.

Introduction to structured logging

Structured logging involves associating key-value pairs with each log entry, instead of just outputting an unstructured string "message". For example, an unstructured log message, something that might be output to the console, might look something like:

info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]  
      Request starting HTTP/1.1 GET http://localhost:51919/

This message contains a lot of information, but if it's just stored as a string like this, then it's not easy to search or filter the messages. For example, what if you wanted to find all of the error messages generated by the WebHost class? You're limited to what you can achieve in a text editor - doable to an extent, but a lot of work.

The same message stored as a structured log would essentially be stored as a JSON object, making it easily searchable, as something like:

{
    "eventLevel" : "Information",
    "category" : "Microsoft.AspNetCore.Hosting.Internal.WebHost",
    "eventId" : 1,
    "message" : "Request starting HTTP/1.1 GET http://localhost:51919/",
    "protocol" : "HTTP/1.1",
    "method" : "GET",
    "url" : "http://localhost:51919/"
}

The complete message is still there, but you also have each of the associated properties available for filtering without having to do any messy string processing.

Some of the most popular options for storing and searching structured logs are Elasticsearch with a Kibana front end, or Seq. The Serilog logging provider also supports structured logging, and is typically used to write to both of these destinations.

Nicholas Blumhardt is behind both the Serilog provider and Seq, so I highly recommend checking out his blog if you're interested in structured logging. In particular, he recently wrote a post on how to easily integrate Serilog into ASP.NET Core 2.0 applications.

Adding additional properties using scopes

Once you're storing logs in a structured manner, it becomes far easier to query and analyse your log files. Structured logging can extract parameters from the format string passed in the log message, and attach these to the log itself.

For example, the log message Request starting {protocol} {method} {url} contains three parameters, protocol, method, and url, which can all be extracted as properties on the log.

The ASP.NET Core logging framework also includes the concept of scopes which lets you attach arbitrary additional data to all log messages inside the scope. For example, the following log entry has a format string parameter, {ActionName}, which would be attached to the log message, but it also contains four scopes:

using (_logger.BeginScope("Some name"))  
using (_logger.BeginScope(42))  
using (_logger.BeginScope("Formatted {WithValue}", 12345))  
using (_logger.BeginScope(new Dictionary<string, object> { ["ViaDictionary"] = 100 }))  
{
    _logger.LogInformation("Hello from the {ActionName}!", name);
}

The state passed in the call to ILogger.BeginScope(state) can be anything, as shown in this example. The problem is, how this state should be logged is not clearly defined by the ILogger interface, so it's up to the logger implementation to decide.

Luckily Nicholas Blumhardt has thought hard about this problem, and has baked his rules into the Serilog / Seq implementation. There are effectively three different rules:

  1. If the state is an IEnumerable<KeyValuePair<string, object>>, attach each KeyValuePair as a property to the log.
  2. If the state is a format string / message template, add the parameters as properties to the log, and the formatted string to a Scope property.
  3. For everything else, add it to the Scope property.

For the LogInformation call shown previously, these rules result in the WithValue, ViaDictionary, and Scope values being attached to the log:

(Image: the resulting log entry, showing the WithValue, ViaDictionary, and Scope properties attached.)

Adding correlation IDs using scope

Of all these rules, the most interesting to me is the IEnumerable<KeyValuePair<string, object>> rule, which allows attaching arbitrary key-values to the log as properties. A common problem when looking through logs is looking for relationships. For example, I want to see all logs related to a particular product ID, a particular user ID, or a transaction ID. These are commonly referred to as correlation IDs as they allow you to easily determine the relationship between different log messages.

My one bugbear is the somewhat lengthy syntax required in order to attach these correlation IDs to the log messages. Let's start with the following, highly contrived code. We're simply adding a product to a basket, but I've added correlation IDs in scopes for the productId, the basketId and the transactionId:

public void Add(int productId, int basketId)  
{
    using (_logger.BeginScope(new Dictionary<string, object> {
        { nameof(productId), productId }, { nameof(basketId), basketId} }))
    {
        _logger.LogDebug("Adding product to basket");
        var product = _service.GetProduct();
        var basket = _service.GetBasket();

        using (var transaction = factory.Create())
        using (_logger.BeginScope(new Dictionary<string, object> {{ "transactionId", transaction.Id }}))
        {
            basket.Add(product);
            transaction.Submit();
            _logger.LogDebug("Product added to basket");
        }
    }
}

This code does exactly what I want, but it's a bit of an eye-sore. All those dictionaries flying around and nameof() to avoid typos is a bit ugly, so I wanted to see if I could tidy it up. I didn't want to go messing with the framework code, so I thought I would create a couple of extension methods to tidy up these common patterns.

Creating a single key-value pair scope state extension

In this post we'll start with the inner-most call to BeginScope<T>, in which we create a dictionary with a single key, transactionId. For this case I created a simple extension method that takes two parameters, the key name as a string, and the value as an object. These are used to initialise a Dictionary<string, object> which is passed to the underlying ILogger.BeginScope<T> method:

public static class LoggerExtensions  
{
    public static IDisposable BeginScope(this ILogger logger, string key, object value)
    {
        return logger.BeginScope(new Dictionary<string, object> { { key, value } });
    }
}

The underlying ILogger.BeginScope<T>(T state) method only has a single argument, so there's no issue with overload resolution here. With this small addition, our second using call has gone from this:

using (_logger.BeginScope(new Dictionary<string, object> {{ "transactionId", transaction.Id }}))  

to this:

using (_logger.BeginScope("transactionId", transaction.Id))  

Much nicer, I think you'll agree!

This was the most common use case that I was trying to tidy up, so stopping at this point would be perfectly reasonable. In fact, I could already use this to tidy up the first using call too, if I was happy to change the semantics somewhat. For example:

using (_logger.BeginScope(new Dictionary<string, object> {{ nameof(productId), productId }, { nameof(basketId), basketId} }))  

could become

using (_logger.BeginScope(nameof(productId), productId))  
using (_logger.BeginScope(nameof(basketId), basketId))  

Not strictly the same, but not too bad. Still, I wanted to do better. In the next post I'll show some of the avenues I explored, their pros and cons, and the final extension method I settled on.

Summary

I consider structured logging to be a no-brainer when it comes to running apps in production, and key to that are correlation IDs applied to logs wherever possible. Serilog, Seq, and the ASP.NET Core logging framework make it possible to add arbitrary properties to a log message using ILogger.BeginScope(state), but the semantics of the method call are somewhat ill-defined. Consequently, in order for scope state to be used as correlation ID properties on the log message, the state must be an IEnumerable<KeyValuePair<string, object>>.

Manually creating a Dictionary<string,object> every time I wanted to add a correlation ID was a bit cumbersome, so I wrote a simple extension overload for the BeginScope method that takes a string key and an object value. This extension simply initialises a Dictionary<string, object> behind the scenes, and calls the underlying BeginScope<T> method. This makes the call site easier to read when you are adding a single key-value pair.


Anuraj Parameswaran: Building ASP.NET Core web apps with VB.NET

This post is about developing ASP.NET Core applications with VB.NET. I started my career with VB 6.0, and .NET programming with VB.NET. When Microsoft introduced ASP.NET Core, people were concerned about Web Pages and VB.NET. Even though no one liked it, everyone was using it. In ASP.NET Core 2.0, Microsoft introduced Razor Pages and support for developing .NET Core apps with VB.NET. Today I found a question about an ASP.NET Core web application template for VB.NET, so I thought of creating an ASP.NET Core Hello World app in VB.NET.


Damien Bowden: SignalR Group messages with ngrx and Angular

This article shows how SignalR can be used to send grouped messages to an Angular SignalR client, which uses ngrx to handle the SignalR events in the Angular client.

Code: https://github.com/damienbod/AspNetCoreAngularSignalR

Other posts in this series:

SignalR Groups

SignalR allows messages to be sent to specific groups if required. You can read about this here:

https://docs.microsoft.com/en-us/aspnet/signalr/overview/guide-to-the-api/working-with-groups

The documentation is for the old SignalR, but most is still relevant.

To get started, add the SignalR Nuget package to the csproj file where the Hub(s) are to be implemented.

<PackageReference Include="Microsoft.AspNetCore.SignalR" Version="1.0.0-alpha1-final" />

In this application, the NewsItem class is used to send the messages between the SignalR clients and server.

namespace AspNetCoreAngularSignalR.SignalRHubs
{
    public class NewsItem
    {
        public string Header { get; set; }
        public string NewsText { get; set; }
        public string Author { get; set; }
        public string NewsGroup { get; set; }
    }
}

The NewsHub class implements the SignalR Hub, which can send messages containing NewsItem classes, or let clients join or leave a SignalR group. When the Send method is called, the class uses the NewsGroup property to send the messages only to clients in that group. If a client is not a member of the group, it will receive no message.

using Microsoft.AspNetCore.SignalR;
using System.Threading.Tasks;

namespace AspNetCoreAngularSignalR.SignalRHubs
{
    public class NewsHub : Hub
    {
        public Task Send(NewsItem newsItem)
        {
            return Clients.Group(newsItem.NewsGroup).InvokeAsync("Send", newsItem);
        }

        public async Task JoinGroup(string groupName)
        {
            await Groups.AddAsync(Context.ConnectionId, groupName);
            await Clients.Group(groupName).InvokeAsync("JoinGroup", groupName);
        }

        public async Task LeaveGroup(string groupName)
        {
            await Clients.Group(groupName).InvokeAsync("LeaveGroup", groupName);
            await Groups.RemoveAsync(Context.ConnectionId, groupName);
        }
    }
}

The SignalR hub is configured in the Startup class. The path defined for the hub must match the configuration in the SignalR client.

app.UseSignalR(routes =>
{
	routes.MapHub<NewsHub>("looney");
});

Angular Service for the SignalR client

To use SignalR in the Angular application, the npm package @aspnet/signalr-client needs to be added to the package.json file.

"@aspnet/signalr-client": "1.0.0-alpha2-final"

The Angular NewsService is used to send SignalR events to the ASP.NET Core server and also to handle the messages received from the server. The send, joinGroup and leaveGroup functions are used in the ngrx store effects and the init method adds event handlers for SignalR events and dispatches ngrx actions when a message is received.

import { Injectable } from '@angular/core';
import { Observable } from 'rxjs/Observable';
import { HubConnection } from '@aspnet/signalr-client';
import { NewsItem } from './models/news-item';
import { Store } from '@ngrx/store';
import { NewsState } from './store/news.state';
import * as NewsActions from './store/news.action';

@Injectable()
export class NewsService {

    private _hubConnection: HubConnection;

    constructor(private store: Store<any>) {
        this.init();
    }

    public send(newsItem: NewsItem): NewsItem {
        this._hubConnection.invoke('Send', newsItem);
        return newsItem;
    }

    public joinGroup(group: string): void {
        this._hubConnection.invoke('JoinGroup', group);
    }

    public leaveGroup(group: string): void {
        this._hubConnection.invoke('LeaveGroup', group);
    }

    private init() {

        this._hubConnection = new HubConnection('/looney');

        this._hubConnection.on('Send', (newsItem: NewsItem) => {
            this.store.dispatch(new NewsActions.ReceivedItemAction(newsItem));
        });

        this._hubConnection.on('JoinGroup', (data: string) => {
            this.store.dispatch(new NewsActions.ReceivedGroupJoinedAction(data));
        });

        this._hubConnection.on('LeaveGroup', (data: string) => {
            this.store.dispatch(new NewsActions.ReceivedGroupLeftAction(data));
        });

        this._hubConnection.start()
            .then(() => {
                console.log('Hub connection started')
            })
            .catch(err => {
                console.log('Error while establishing connection')
            });
    }

}

Using ngrx to manage SignalR events

The NewsState interface is used to save the application state created from the SignalR events, and the user interactions.

import { NewsItem } from '../models/news-item';

export interface NewsState {
    newsItems: NewsItem[],
    groups: string[]
};

The news.action classes define the actions for events which are dispatched from Angular components, the SignalR Angular service, or ngrx effects. These actions are used in the hubConnection.on event handlers, which receive the SignalR messages and dispatch the proper action.

import { Action } from '@ngrx/store';
import { NewsItem } from '../models/news-item';

export const JOIN_GROUP = '[news] JOIN_GROUP';
export const LEAVE_GROUP = '[news] LEAVE_GROUP';
export const JOIN_GROUP_COMPLETE = '[news] JOIN_GROUP_COMPLETE';
export const LEAVE_GROUP_COMPLETE = '[news] LEAVE_GROUP_COMPLETE';
export const SEND_NEWS_ITEM = '[news] SEND_NEWS_ITEM';
export const SEND_NEWS_ITEM_COMPLETE = '[news] SEND_NEWS_ITEM_COMPLETE';
export const RECEIVED_NEWS_ITEM = '[news] RECEIVED_NEWS_ITEM';
export const RECEIVED_GROUP_JOINED = '[news] RECEIVED_GROUP_JOINED';
export const RECEIVED_GROUP_LEFT = '[news] RECEIVED_GROUP_LEFT';

export class JoinGroupAction implements Action {
    readonly type = JOIN_GROUP;

    constructor(public group: string) { }
}

export class LeaveGroupAction implements Action {
    readonly type = LEAVE_GROUP;

    constructor(public group: string) { }
}


export class JoinGroupActionComplete implements Action {
    readonly type = JOIN_GROUP_COMPLETE;

    constructor(public group: string) { }
}

export class LeaveGroupActionComplete implements Action {
    readonly type = LEAVE_GROUP_COMPLETE;

    constructor(public group: string) { }
}
export class SendNewsItemAction implements Action {
    readonly type = SEND_NEWS_ITEM;

    constructor(public newsItem: NewsItem) { }
}

export class SendNewsItemActionComplete implements Action {
    readonly type = SEND_NEWS_ITEM_COMPLETE;

    constructor(public newsItem: NewsItem) { }
}

export class ReceivedItemAction implements Action {
    readonly type = RECEIVED_NEWS_ITEM;

    constructor(public newsItem: NewsItem) { }
}

export class ReceivedGroupJoinedAction implements Action {
    readonly type = RECEIVED_GROUP_JOINED;

    constructor(public group: string) { }
}

export class ReceivedGroupLeftAction implements Action {
    readonly type = RECEIVED_GROUP_LEFT;

    constructor(public group: string) { }
}

export type Actions
    = JoinGroupAction
    | LeaveGroupAction
    | JoinGroupActionComplete
    | LeaveGroupActionComplete
    | SendNewsItemAction
    | SendNewsItemActionComplete
    | ReceivedItemAction
    | ReceivedGroupJoinedAction
    | ReceivedGroupLeftAction;


The newsReducer ngrx reducer function receives the actions and changes the state as required. For example, when a RECEIVED_NEWS_ITEM event is sent from the Angular SignalR service, it creates a new state with the new message appended to the existing items.

import { NewsState } from './news.state';
import { NewsItem } from '../models/news-item';
import { Action } from '@ngrx/store';
import * as newsAction from './news.action';

export const initialState: NewsState = {
    newsItems: [],
    groups: ['group']
};

export function newsReducer(state = initialState, action: newsAction.Actions): NewsState {
    switch (action.type) {

        case newsAction.RECEIVED_GROUP_JOINED:
            return Object.assign({}, state, {
                newsItems: state.newsItems,
                groups: (state.groups.indexOf(action.group) > -1) ? state.groups : state.groups.concat(action.group)
            });

        case newsAction.RECEIVED_NEWS_ITEM:
            return Object.assign({}, state, {
                newsItems: state.newsItems.concat(action.newsItem),
                groups: state.groups
            });

        case newsAction.RECEIVED_GROUP_LEFT:
            const data = [];
            for (const entry of state.groups) {
                if (entry !== action.group) {
                    data.push(entry);
                }
            }
            console.log(data);
            return Object.assign({}, state, {
                newsItems: state.newsItems,
                groups: data
            });
        default:
            return state;

    }
}

The ngrx store is configured in the module class.

StoreModule.forFeature('news', {
     newsitems: newsReducer,
}),
 EffectsModule.forFeature([NewsEffects])

The store is then used in the different Angular components. The component only uses the ngrx store to send and receive SignalR data.

import { Component, OnInit } from '@angular/core';
import { Observable } from 'rxjs/Observable';
import { Store } from '@ngrx/store';
import { NewsState } from '../store/news.state';
import * as NewsActions from '../store/news.action';
import { NewsItem } from '../models/news-item';

@Component({
    selector: 'app-news-component',
    templateUrl: './news.component.html'
})

export class NewsComponent implements OnInit {
    public async: any;
    newsItem: NewsItem;
    group = 'group';
    newsState$: Observable<NewsState>;

    constructor(private store: Store<any>) {
        this.newsState$ = this.store.select<NewsState>(state => state.news.newsitems);
        this.newsItem = new NewsItem();
        this.newsItem.AddData('', '', 'me', this.group);
    }

    public sendNewsItem(): void {
        this.newsItem.NewsGroup = this.group;
        this.store.dispatch(new NewsActions.SendNewsItemAction(this.newsItem));
    }

    public join(): void {
        this.store.dispatch(new NewsActions.JoinGroupAction(this.group));
    }

    public leave(): void {
        this.store.dispatch(new NewsActions.LeaveGroupAction(this.group));
    }

    ngOnInit() {
    }
}

The component template then displays the data as required.

<div class="container-fluid">

    <h1>Send some basic news messages</h1>

    <div class="row">
        <form class="form-inline" >
            <div class="form-group">
                <label for="header">Group</label>
                <input type="text" class="form-control" id="header" placeholder="your header..." name="header" [(ngModel)]="group" required>
            </div>
            <button class="btn btn-primary" (click)="join()">Join</button>
            <button class="btn btn-primary" (click)="leave()">Leave</button>
        </form>
    </div>
    <hr />
    <div class="row">
        <form class="form" (ngSubmit)="sendNewsItem()" #newsItemForm="ngForm">
            <div class="form-group">
                <label for="header">Header</label>
                <input type="text" class="form-control" id="header" placeholder="your header..." name="header" [(ngModel)]="newsItem.header" required>
            </div>
            <div class="form-group">
                <label for="newsText">Text</label>
                <input type="text" class="form-control" id="newsText" placeholder="your newsText..." name="newsText" [(ngModel)]="newsItem.newsText" required>
            </div>
            <div class="form-group">
                <label for="newsText">Author</label>
                <input type="text" class="form-control" id="author" placeholder="your newsText..." name="author" [(ngModel)]="newsItem.author" required>
            </div>
            <button type="submit" class="btn btn-primary" [disabled]="!newsItemForm.valid">Send News to: {{group}}</button>
        </form>
    </div>

    <div class="row" *ngIf="(newsState$|async)?.newsItems.length > 0">
        <div class="table-responsive">
            <table class="table table-striped">
                <thead>
                    <tr>
                        <th>#</th>
                        <th>header</th>
                        <th>Text</th>
                        <th>Author</th>
                        <th>Group</th>
                    </tr>
                </thead>
                <tbody>
                    <tr *ngFor="let item of (newsState$|async)?.newsItems; let i = index">
                        <td>{{i + 1}}</td>
                        <td>{{item.header}}</td>
                        <td>{{item.newsText}}</td>
                        <td>{{item.author}}</td>
                        <td>{{item.newsGroup}}</td>
                    </tr>
                </tbody>
            </table>
        </div>
    </div>
 
    <div class="row" *ngIf="(newsState$|async)?.length <= 0">
        <span>No news items</span>
    </div>
</div>

When the application is started, SignalR messages can be sent, received and displayed in the different instances of the Angular application.


Links

https://github.com/aspnet/SignalR

https://github.com/aspnet/SignalR#readme

https://github.com/ngrx

https://www.npmjs.com/package/@aspnet/signalr-client

https://dotnet.myget.org/F/aspnetcore-ci-dev/api/v3/index.json

https://dotnet.myget.org/F/aspnetcore-ci-dev/npm/

https://dotnet.myget.org/feed/aspnetcore-ci-dev/package/npm/@aspnet/signalr-client

https://www.npmjs.com/package/msgpack5



Damien Bowden: Getting started with SignalR using ASP.NET Core and Angular

This article shows how to set up a first SignalR Hub in ASP.NET Core 2.0 and use it with an Angular client. SignalR will be released with dotnet 2.1. Thanks to Dennis Alberti for his help in setting up the code example.

Code: https://github.com/damienbod/AspNetCoreAngularSignalR

Other posts in this series:

2017-09-15: Updated @aspnet/signalr-client to use npm feed, and 1.0.0-alpha1-final

The required SignalR NuGet packages and npm packages are at present hosted on MyGet. You need to add the SignalR packages to the csproj file. To use the MyGet feed, add https://dotnet.myget.org/F/aspnetcore-ci-dev/api/v3/index.json to your package sources.

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
    <PackageReference Include="Microsoft.AspNetCore.SignalR" Version="1.0.0-alpha1-final" />
  </ItemGroup>
  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="2.0.0" />
    <DotNetCliToolReference Include="Microsoft.Extensions.SecretManager.Tools" Version="2.0.0" />
    <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="2.0.0" />
  </ItemGroup>
  <ItemGroup>
    <Folder Include="angularApp\app\models\" />
  </ItemGroup>
</Project>

Now create a simple default hub.

using Microsoft.AspNetCore.SignalR;
using System.Threading.Tasks;

namespace AspNetCoreSignalr.SignalRHubs
{
    public class LoopyHub : Hub
    {
        public Task Send(string data)
        {
            return Clients.All.InvokeAsync("Send", data);
        }
    }
}

Add the SignalR configuration in the startup class. The hub which was created before needs to be added in the UseSignalR extension method.

public void ConfigureServices(IServiceCollection services)
{
	...
	services.AddSignalR();
	...
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	...

	app.UseSignalR(routes =>
	{
		routes.MapHub<LoopyHub>("loopy");
	});

	...
}

Set up the Angular application. The Angular application is set up using a webpack build, and all dependencies are added to the package.json file.

You can use the MyGet npm feed if you want to use the aspnetcore-ci-dev packages. You can do this using a .npmrc file in the project root; add the registry path there. If you are using the package from npm, do not add this.

@aspnet:registry=https://dotnet.myget.org/f/aspnetcore-ci-dev/npm/

Now add the required SignalR npm packages to the package.json file, using the npm package:

 "dependencies": {
    "@angular/animations": "4.4.0-RC.0",
    "@angular/common": "4.4.0-RC.0",
    "@angular/compiler": "4.4.0-RC.0",
    "@angular/compiler-cli": "4.4.0-RC.0",
    "@angular/core": "4.4.0-RC.0",
    "@angular/forms": "4.4.0-RC.0",
    "@angular/http": "4.4.0-RC.0",
    "@angular/platform-browser": "4.4.0-RC.0",
    "@angular/platform-browser-dynamic": "4.4.0-RC.0",
    "@angular/platform-server": "4.4.0-RC.0",
    "@angular/router": "4.4.0-RC.0",
    "@angular/upgrade": "4.4.0-RC.0",
    
    "msgpack5": "^3.5.1",
    "@aspnet/signalr-client": "1.0.0-alpha1-final"
  },

Add the SignalR client code. In this basic example, it is just added directly in a component. The sendMessage function sends messages, and the hubConnection.on function receives all messages, including its own.

import { Component, OnInit } from '@angular/core';
import { Observable } from 'rxjs/Observable';
import { HubConnection } from '@aspnet/signalr-client';

@Component({
    selector: 'app-home-component',
    templateUrl: './home.component.html'
})

export class HomeComponent implements OnInit {
    private _hubConnection: HubConnection;
    public async: any;
    message = '';
    messages: string[] = [];

    constructor() {
    }

    public sendMessage(): void {
        const data = `Sent: ${this.message}`;

        this._hubConnection.invoke('Send', data);
        this.messages.push(data);
    }

    ngOnInit() {
        this._hubConnection = new HubConnection('/loopy');

        this._hubConnection.on('Send', (data: any) => {
            const received = `Received: ${data}`;
            this.messages.push(received);
        });

        this._hubConnection.start()
            .then(() => {
                console.log('Hub connection started');
            })
            .catch(err => {
                console.log('Error while establishing connection');
            });
    }

}

The messages are then displayed in the component template.

<div class="container-fluid">

    <h1>Send some basic messages</h1>


    <div class="row">
        <form class="form-inline" (ngSubmit)="sendMessage()" #messageForm="ngForm">
            <div class="form-group">
                <label class="sr-only" for="message">Message</label>
                <input type="text" class="form-control" id="message" placeholder="your message..." name="message" [(ngModel)]="message" required>
            </div>
            <button type="submit" class="btn btn-primary" [disabled]="!messageForm.valid">Send SignalR Message</button>
        </form>
    </div>
    <div class="row" *ngIf="messages.length > 0">
        <div class="table-responsive">
            <table class="table table-striped">
                <thead>
                    <tr>
                        <th>#</th>
                        <th>Messages</th>
                    </tr>
                </thead>
                <tbody>
                    <tr *ngFor="let message of messages; let i = index">
                        <td>{{i + 1}}</td>
                        <td>{{message}}</td>
                    </tr>
                </tbody>
            </table>
        </div>
    </div>
    <div class="row" *ngIf="messages.length <= 0">
        <span>No messages</span>
    </div>
</div>

Now the first really simple SignalR Hub is setup and an Angular client can send and receive messages.

Links:

https://github.com/aspnet/SignalR#readme

https://www.npmjs.com/package/@aspnet/signalr-client

https://dotnet.myget.org/F/aspnetcore-ci-dev/api/v3/index.json

https://dotnet.myget.org/F/aspnetcore-ci-dev/npm/

https://dotnet.myget.org/feed/aspnetcore-ci-dev/package/npm/@aspnet/signalr-client

https://www.npmjs.com/package/msgpack5



Andrew Lock: How to include scopes when logging exceptions in ASP.NET Core

How to include scopes when logging exceptions in ASP.NET Core

This post describes how to work around an issue I ran into when logging exceptions that occur inside a scope block in ASP.NET Core. I'll provide a brief background on logging in ASP.NET Core, structured logging, and the concept of scopes. Then I'll show how exceptions can cause you to lose an associated scope, and how to get round this using a neat trick with exception filters.

tl;dr; Exception filters are executed in the same scope as the original exception, so you can use them to write logs in the original context, before the using scope blocks are disposed.

Logging in ASP.NET Core

ASP.NET Core includes logging infrastructure that makes it easy to write logs to a variety of different outputs, such as the console, a file, or the Windows EventLog. The logging abstractions are used throughout the ASP.NET Core framework libraries, so you can even get log messages from deep inside the infrastructure libraries like Kestrel and EF Core if you like.

The logging abstractions include common features like different event levels, applying unique ids to specific logs, and event categories for tracking which class created the log message, as well as the ability to use structured logging for easier parsing of logs.

Structured logging is especially useful, as it makes finding and diagnosing issues so much easier in production. I'd go as far as to say that it should be absolutely required if you're running an app in production.

Introduction to structured logging

Structured logging basically involves associating key-value pairs with each log entry, instead of a simple string "message". For example, a non-structured log message might look something like:

info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]  
      Request starting HTTP/1.1 GET http://localhost:51919/

This message contains a lot of information, but if it's just stored as a string like this, then it's not easy to search or filter the messages. For example, what if you wanted to find all of the error messages generated by the WebHost class? You could probably put together a regex to extract all the information, but that's a lot of work.

The same message stored as a structured log would essentially be stored as a JSON object, making it easily searchable, something like:

{
    "eventLevel" : "Information",
    "category" : "Microsoft.AspNetCore.Hosting.Internal.WebHost",
    "eventId" : 1,
    "message" : "Request starting HTTP/1.1 GET http://localhost:51919/",
    "protocol" : "HTTP/1.1",
    "method" : "GET",
    "url" : "http://localhost:51919/"
}

The complete message is still there, but you also have each of the associated properties available without having to do any messy string processing. Nicholas Blumhardt has a great explanation of the benefits in this stack overflow answer.
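
For illustration, this is roughly how you would write such a structured log with the ASP.NET Core logging abstractions - the named placeholders in the message template become the key-value pairs shown above (the values below are made up for the example):

// Named placeholders ("Protocol", "Method", "Url") are captured as properties by
// structured logging providers, as well as being formatted into the message text.
_logger.LogInformation(
    "Request starting {Protocol} {Method} {Url}",
    "HTTP/1.1", "GET", "http://localhost:51919/");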

Now, as these logs are no longer simple strings, they can't just be written to the console, or stored in a file - they need dedicated storage. Some of the most popular options are to store the logs in Elastic Search with a Kibana front end, or to use Seq. The Serilog logging provider also supports structured logging, and is typically used to write to both of these destinations.

Nicholas Blumhardt is behind both the Serilog provider and Seq, so I highly recommend checking out his blog if you're interested in structured logging. In particular, he recently wrote a post on how to easily integrate Serilog into ASP.NET Core 2.0 applications.

Adding additional properties using scopes

In some situations, you might like to add the same values to every log message that you write. For example, you might want to add a database transaction id to every log message until that transaction is committed.

You could manually add the id to every relevant message, but ASP.NET Core also provides the concept of scopes. You can create a new scope in a using block, passing in some state you want to log, and it will be written to each log message inside the using block.

You don't have to be using structured logging to use scopes - you can add them to the console logger for example - but they make the most sense in terms of structured logging.

For example, the following sample taken from the serilog-aspnetcore package (the recommended package for easily adding Serilog to ASP.NET Core 2.0 apps) demonstrates multiple nested scopes in the Get() method. Calling _logger.BeginScope<T>(T state) creates a new scope with the provided state.

[Route("api/[controller]")]
public class ScopesController : Controller  
{
    ILogger<ScopesController> _logger;

    public ScopesController(ILogger<ScopesController> logger)
    {
        _logger = logger;
    }

    [HttpGet]
    public IEnumerable<string> Get()
    {
        _logger.LogInformation("Before");

        using (_logger.BeginScope("Some name"))
        using (_logger.BeginScope(42))
        using (_logger.BeginScope("Formatted {WithValue}", 12345))
        using (_logger.BeginScope(new Dictionary<string, object> { ["ViaDictionary"] = 100 }))
        {
            _logger.LogInformation("Hello from the Index!");
            _logger.LogDebug("Hello is done");
        }

        _logger.LogInformation("After");

        return new string[] { "value1", "value2" };
    }
}

Running this application and hitting the action method produces logs similar to the following in Seq:

How to include scopes when logging exceptions in ASP.NET Core

As you can see, you can store anything as the state parameter T - a string, an integer, or a Dictionary<string, object> of values. Seq handles these scope state values in two different ways:

  • Integers, strings, and formatted strings are added to an array of objects on the Scope property
  • Parameters and values from formatted strings, and Dictionary<string, object> entries, are added directly to the log entry as key-value pairs.

Surprise surprise, Nicholas Blumhardt also has a post on what to make of these values, how logging providers should handle them, and how to use them!

Exceptions inside scope blocks lose the scope

Scopes work well for this situation when you want to attach additional values to every log message, but there's a problem. What if an exception occurs inside the scope using block? The scope probably contains some very useful information for debugging the problem, so naturally you'd like to include it in the error logs.

If you include a try-catch block inside the scope block, then you're fine - you can log the errors and the scope will be included as you'd expect.

But what if the try-catch block surrounds the using blocks? For example, imagine the previous example, but this time we have a try-catch block in the method, and an exception is thrown inside the using blocks:

[Route("api/[controller]")]
public class ScopesController : Controller  
{
    ILogger<ScopesController> _logger;

    public ScopesController(ILogger<ScopesController> logger)
    {
        _logger = logger;
    }

    // GET api/scopes
    [HttpGet]
    public IEnumerable<string> Get()
    {
        _logger.LogInformation("Before");
        try
        {
            using (_logger.BeginScope("Some name"))
            using (_logger.BeginScope(42))
            using (_logger.BeginScope("Formatted {WithValue}", 12345))
            using (_logger.BeginScope(new Dictionary<string, object> { ["ViaDictionary"] = 100 }))
            {
                // An unexpected problem!
                throw new Exception("Oops, something went wrong!");
                _logger.LogInformation("Hello from the Index!");
                _logger.LogDebug("Hello is done");
            }

            _logger.LogInformation("After");

            return new string[] { "value1", "value2" };
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "An unexpected exception occured");
            return new string[] { };
        }
    }
}

Obviously this is a trivial example - you could easily put the try-catch block inside the using blocks - but in reality the scope blocks and the exception could occur several layers deep inside some service.

Unfortunately, if you look at the error logged in Seq, you can see that the scopes have all been lost. There are no Scope, WithValue, or ViaDictionary properties:

How to include scopes when logging exceptions in ASP.NET Core

At the point the exception is logged, the using blocks have all been disposed, and so the scopes have been lost. Far from ideal, especially if the scopes contained information that would help debug why the exception occurred!

Using exception filters to capture scopes

So how can we get the best of both worlds, and record the scope both for successful logs and errors? The answer was buried in an issue in the Serilog repo, and uses a "common and accepted form of 'abuse'" by using an exception filter for side effects.

Exception filters are a C# 6 feature that lets you conditionally catch exceptions in a try-catch block:

try  
{
  // Something throws an exception
}
catch(MyException ex) when (ex.MyValue == 3)  
{
  // Only caught if the exception filter evaluates
  // to true, i.e. if ex.MyValue == 3
}

If the filter evaluates to true, the catch block executes; if it evaluates to false, the catch block is ignored, and the exception continues to bubble up the call stack until it is handled.

There is a lesser-known "feature" of exception filters that we can make use of here - the code in an exception filter runs in the same context in which the original exception occurred - the stack is unharmed, and is only unwound if the exception filter evaluates to true.

We can use this feature to allow recording the scopes at the location the exception occurs. The helper method LogError(exception) simply writes the exception to the logger when it is called as part of an exception filter using when (LogError(ex)). Returning true means the catch block is executed too, but only after the exception has been logged with its scopes.

[Route("api/[controller]")]
public class ScopesController : Controller  
{
    ILogger<ScopesController> _logger;

    public ScopesController(ILogger<ScopesController> logger)
    {
        _logger = logger;
    }

    // GET api/scopes
    [HttpGet]
    public IEnumerable<string> Get()
    {
        _logger.LogInformation("Before");
        try
        {
            using (_logger.BeginScope("Some name"))
            using (_logger.BeginScope(42))
            using (_logger.BeginScope("Formatted {WithValue}", 12345))
            using (_logger.BeginScope(new Dictionary<string, object> { ["ViaDictionary"] = 100 }))
            {
                throw new Exception("Oops, something went wrong!");
                _logger.LogInformation("Hello from the Index!");
                _logger.LogDebug("Hello is done");
            }

            _logger.LogInformation("After");

            return new string[] { "value1", "value2" };
        }
        catch (Exception ex) when (LogError(ex))
        {
            return new string[] { };
        }
    }

    bool LogError(Exception ex)
    {
        _logger.LogError(ex, "An unexpected exception occured");
        return true;
    }
}

Now when the exception occurs, it's logged with all the active scopes at the point the exception occurred (Scope, WithValue, or ViaDictionary), instead of the active scopes inside the catch block.

How to include scopes when logging exceptions in ASP.NET Core

Summary

Structured logging is a great approach that makes filtering and searching logs after the fact much easier by storing key-value pairs of properties. You can add extra properties to each log by using scopes inside a using block. Every log written inside the using block will include the scope properties, but if an exception occurs, those scope values will be lost.

To work around this, you can use the C# 6 exception filters feature. Exception filters are executed in the same context as the original exception, so you can use them to capture the logging scope at the point the exception occurred, instead of the logging scope inside the catch block.


Damien Bowden: Getting started with Angular and Redux

This article shows how you could set up Redux in an Angular application using ngrx. Redux provides a really great way of managing state in an Angular application. State management is hard, and usually ends up a mess when you invent it yourself. At present, Angular provides no recommendations or solution for this.

Thanks to Fabian Gosebrink for his help in learning ngrx and Redux, and to Philip Steinebrunner for his feedback.

Code: https://github.com/damienbod/AngularRedux

The demo app uses an Angular component for displaying countries using the public API https://restcountries.eu/. The view displays regions and the countries per region. The data and the state of the component are implemented using ngrx.

Note: Read the Redux documentation to learn how it works. Here’s a quick summary of the redux store in this application:

  • There is just one store per application, although you can register additional reducers for your feature modules with StoreModule.forFeature() per module
  • The store has a state, actions, effects, and reducers
  • The actions define what can be done in the store. Components or effects dispatch these
  • Effects are used to do API calls, etc., and are attached to actions
  • Reducers are attached to actions and are used to change the state

The following steps explain what is required to get the state management set up in the Angular application, which uses an Angular service to request the data from the public API.

Step 1: Add the ngrx packages

Add the latest ngrx npm packages to the package.json file in your project.

    "@ngrx/effects": "^4.0.5",
    "@ngrx/store": "^4.0.3",
    "@ngrx/store-devtools": "^4.0.0 ",

Step 2: Add the ngrx setup configuration to the app module.

In this app, a single Redux store will be used per module. The ngrx configuration needs to be added to the app.module and also each child module as required. The StoreModule, EffectsModule and the StoreDevtoolsModule are added to the imports array of the NgModule.

...

import { EffectsModule } from '@ngrx/effects';
import { StoreModule } from '@ngrx/store';
import { StoreDevtoolsModule } from '@ngrx/store-devtools';

@NgModule({
    imports: [
        ...
        StoreModule.forRoot({}),
        StoreDevtoolsModule.instrument({
            maxAge: 25 //  Retains last 25 states
        }),
        EffectsModule.forRoot([])
    ],

    declarations: [
        AppComponent
    ],

    bootstrap: [AppComponent],
})

export class AppModule { }

Step 3: Create the interface for the state.

This can be any type of object or array.

import { Region } from './../../models/region';

export interface CountryState {
    regions: Region[],
};

Step 4: Create the actions

Create the actions required by the components or the effects. The constructor params must match the params sent from the components or returned from the API calls.

import { Action } from '@ngrx/store';
import { Country } from './../../models/country';
import { Region } from './../../models/region';

export const SELECTALL = '[countries] Select All';
export const SELECTALL_COMPLETE = '[countries] Select All Complete';
export const SELECTREGION = '[countries] Select Region';
export const SELECTREGION_COMPLETE = '[countries] Select Region Complete';

export const COLLAPSEREGION = '[countries] COLLAPSE Region';

export class SelectAllAction implements Action {
    readonly type = SELECTALL;

    constructor() { }
}

export class SelectAllCompleteAction implements Action {
    readonly type = SELECTALL_COMPLETE;

    constructor(public countries: Country[]) { }
}

export class SelectRegionAction implements Action {
    readonly type = SELECTREGION;

    constructor(public region: Region) { }
}

export class SelectRegionCompleteAction implements Action {
    readonly type = SELECTREGION_COMPLETE;

    constructor(public region: Region) { }
}

export class CollapseRegionAction implements Action {
    readonly type = COLLAPSEREGION;

    constructor(public region: Region) { }
}

export type Actions
    = SelectAllAction
    | SelectAllCompleteAction
    | SelectRegionAction
    | SelectRegionCompleteAction
    | CollapseRegionAction;


Step 5: Create the effects

Create the effects to do the API calls. The effects are mapped to actions and when finished call another action.

import 'rxjs/add/operator/map';
import 'rxjs/add/operator/switchMap';

import { Injectable } from '@angular/core';
import { Actions, Effect } from '@ngrx/effects';
import { Action } from '@ngrx/store';
import { of } from 'rxjs/observable/of';
import { Observable } from 'rxjs/Rx';

import * as countryAction from './country.action';
import { Country } from './../../models/country';
import { CountryService } from '../../core/services/country.service';

@Injectable()
export class CountryEffects {

    @Effect() getAll$: Observable<Action> = this.actions$.ofType(countryAction.SELECTALL)
        .switchMap(() =>
            this.countryService.getAll()
                .map((data: Country[]) => {
                    return new countryAction.SelectAllCompleteAction(data);
                })
                .catch((error: any) => {
                    return of({ type: 'getAll_FAILED' })
                })
        );

    @Effect() getAllPerRegion$: Observable<Action> = this.actions$.ofType(countryAction.SELECTREGION)
        .switchMap((action: countryAction.SelectRegionAction) =>
            this.countryService.getAllPerRegion(action.region.name)
                .map((data: Country[]) => {
                    const region = { name: action.region.name, expanded: true, countries: data};
                    return new countryAction.SelectRegionCompleteAction(region);
                })
                .catch((error: any) => {
                    return of({ type: 'getAllPerRegion$' })
                })
        );
    constructor(
        private countryService: CountryService,
        private actions$: Actions
    ) { }
}

Step 6: Implement the reducers

Implement the reducer to change the state when required. The reducer takes an initial state and executes methods matching the defined actions which were dispatched from the components or the effects.

import { CountryState } from './country.state';
import { Country } from './../../models/country';
import { Region } from './../../models/region';
import { Action } from '@ngrx/store';
import * as countryAction from './country.action';

export const initialState: CountryState = {
    regions: [
        { name: 'Africa', expanded:  false, countries: [] },
        { name: 'Americas', expanded: false, countries: [] },
        { name: 'Asia', expanded: false, countries: [] },
        { name: 'Europe', expanded: false, countries: [] },
        { name: 'Oceania', expanded: false, countries: [] }
    ]
};

export function countryReducer(state = initialState, action: countryAction.Actions): CountryState {
    switch (action.type) {

        case countryAction.SELECTREGION_COMPLETE:
            return Object.assign({}, state, {
                regions: state.regions.map((item: Region) => {
                    return item.name === action.region.name ? Object.assign({}, item, action.region ) : item;
                })
            });

        case countryAction.COLLAPSEREGION:
            action.region.expanded = false;
            return Object.assign({}, state, {
                regions: state.regions.map((item: Region) => {
                    return item.name === action.region.name ? Object.assign({}, item, action.region ) : item;
                })
            });

        default:
            return state;

    }
}

Step 7: Configure the module.

Important here is how the StoreModule.forFeature is configured. The configuration must match the definitions in the components which use the store.

...
import { CountryComponent } from './components/country.component';

import { EffectsModule } from '@ngrx/effects';
import { StoreModule } from '@ngrx/store';
import { CountryEffects } from './store/country.effects';
import { countryReducer } from './store/country.reducer';

@NgModule({
    imports: [
        ...
        StoreModule.forFeature('world', {
            regions: countryReducer,
        }),
        EffectsModule.forFeature([CountryEffects])
    ],

    declarations: [
        CountryComponent
    ],

    exports: [
        CountryComponent
    ]
})

export class CountryModule { }

Step 8: Create the component

Create the component which uses the store. The constructor configures the store to match the module configuration from forFeature and the state as required. User actions dispatch actions; if required, an effect runs the API call and dispatches a further action, and a reducer function changes the state.

import { Component, OnInit } from '@angular/core';
import { Store } from '@ngrx/store';
import { Observable } from 'rxjs/Observable';

import { CountryState } from '../store/country.state';
import * as CountryActions from '../store/country.action';
import { Country } from './../../models/country';
import { Region } from './../../models/region';

@Component({
    selector: 'app-country-component',
    templateUrl: './country.component.html',
    styleUrls: ['./country.component.scss']
})

export class CountryComponent implements OnInit {

    public async: any;

    regionsState$: Observable<CountryState>;

    constructor(private store: Store<any>) {
        this.regionsState$ = this.store.select<CountryState>(state => state.world.regions);
    }

    ngOnInit() {
        this.store.dispatch(new CountryActions.SelectAllAction());
    }

    public getCountries(region: Region) {
        this.store.dispatch(new CountryActions.SelectRegionAction(region));
    }

    public collapse(region: Region) {
         this.store.dispatch(new CountryActions.CollapseRegionAction(region));
    }
}

Step 9: Use the state objects in the HTML template.

It is important not to forget to use the async pipe when using the state from ngrx. Now the view is independent of the API calls, and when the state changes it is automatically updated, as are any other components which use the same state.

<div class="container-fluid">
    <div class="row" *ngIf="(regionsState$|async)?.regions?.length > 0">
        <div class="table-responsive">
            <table class="table">
                <thead>
                    <tr>
                        <th>#</th>
                        <th>Name</th>
                        <th>Population</th>
                        <th>Capital</th>
                        <th>Flag</th>
                    </tr>
                </thead>
                <tbody>
                    <ng-container *ngFor="let region of (regionsState$|async)?.regions; let i = index">
                        <tr>
                            <td class="text-left td-table-region" *ngIf="!region.expanded">
                                <span (click)="getCountries(region)">►</span>
                            </td>
                            <td class="text-left td-table-region" *ngIf="region.expanded">
                                <span type="button" (click)="collapse(region)">▼</span>
                            </td>
                            <td class="td-table-region">{{region.name}}</td>
                            <td class="td-table-region"> </td>
                            <td class="td-table-region"> </td>
                            <td class="td-table-region"> </td>
                        </tr>
                        <ng-container *ngIf="region.expanded">
                            <tr *ngFor="let country of region.countries; let i = index">
                                <td class="td-table-country">    {{i + 1}}</td>
                                <td class="td-table-country">{{country.name}}</td>
                                <td class="td-table-country" >{{country.population}}</td>
                                <td>{{country.capital}}</td>
                                <td><img width="100" [src]="country.flag"></td>
                            </tr>
                        </ng-container>
                    </ng-container>                                         
                </tbody>
            </table>
        </div>
    </div>

    <!--▼ ►   <span class="glyphicon glyphicon-ok" aria-hidden="true" style="color: darkgreen;"></span>-->
    <div class="row" *ngIf="(regionsState$|async)?.regions?.length <= 0">
        <span>No items found</span>
    </div>
</div>

Redux DEV Tools

The redux-devtools chrome extension is really excellent. Add this to Chrome and start the application.

When you start the application and open it in Chrome, the Redux state can be viewed, explored, changed and tested. This gives you an easy way to view the state and also to see what happened inside the application. You can even remove state changes using this tool to see a different history, and change the value of the actual state.

The actual state can be viewed:

Links:

https://github.com/ngrx

https://egghead.io/courses/getting-started-with-redux

http://redux.js.org/

https://github.com/ngrx/platform/blob/master/docs/store-devtools/README.md

https://chrome.google.com/webstore/detail/redux-devtools/lmhkpmbekcpmknklioeibfkpmmfibljd?hl=en

https://restcountries.eu/

http://onehungrymind.com/build-better-angular-2-application-redux-ngrx/

https://egghead.io/courses/building-a-time-machine-with-angular-2-and-rxjs



Andrew Lock: Creating a rolling file logging provider for ASP.NET Core 2.0

Creating a rolling file logging provider for ASP.NET Core 2.0

ASP.NET Core includes a logging abstraction that makes writing logs to multiple locations easy. All of the first-party libraries that make up ASP.NET Core and EF Core use this abstraction, and the vast majority of libraries written for ASP.NET Core will too. That means it's easy to aggregate the logs from your entire app, including the framework and your own code, into one place.

In this post I'll show how to create a logging provider that writes logs to the file system. In production, I'd recommend using a more fully-featured system like Serilog instead of this library, but I wanted to see what was involved to get a better idea of the process myself.

The code for the file logging provider is available on GitHub, or as the NetEscapades.Extensions.Logging.RollingFile package on NuGet.

The ASP.NET Core logging infrastructure

The ASP.NET Core logging infrastructure consists of three main components:

  • ILogger - Used by your app to create log messages.
  • ILoggerFactory - Creates instances of ILogger
  • ILoggerProvider - Controls where log messages are output. You can have multiple logging providers - every log message you write to an ILogger is written to the output locations for every configured logging provider in your app.


Creating a rolling file logging provider for ASP.NET Core 2.0

When you want to write a log message in your application you typically use DI to inject an ILogger<T> into the class, where T is the name of the class. The T is used to control the category associated with the class.

For example, to write a log message in an ASP.NET Core controller, HomeController, you would inject the ILogger<HomeController> and call one of the logging extension methods on ILogger:

public class HomeController: Controller  
{
    private readonly ILogger<HomeController> _logger;
    public HomeController(ILogger<HomeController> logger)
    {
         _logger = logger;
    }

    public IActionResult Get()
    {
        _logger.LogInformation("Calling home controller action");
        return View();
    }
}

This will write a log message to each output of the configured logging providers, something like this (for the console logger):

info: ExampleApplication.Controllers.HomeController[0]  
      Calling home controller action

ASP.NET Core includes several logging providers out of the box, which you can use to write your log messages to various locations:

  • Console provider - writes messages to the Console
  • Debug provider - writes messages to the Debug window (e.g. when debugging in Visual Studio)
  • EventSource provider - writes messages using Event Tracing for Windows (ETW)
  • EventLog provider - writes messages to the Windows Event Log
  • TraceSource provider - writes messages using System.Diagnostics.TraceSource libraries
  • Azure App Service provider - writes messages to blob storage or files when running your app in Azure.

In ASP.NET Core 2.0, the console and Debug loggers are configured by default, but in production you'll probably want to write your logs to somewhere more durable. In modern applications, you'll likely want to write to a centralised location, such as an Elastic Search cluster, Seq, elmah.io, or Loggr.

You can write your logs to most of these locations by adding logging providers for them directly to your application, but one provider is particularly conspicuous by its absence - a file provider. In this post I'll show how to implement a logging provider that writes your application logs to rolling files.

The logging library Serilog includes support for logging to files, as well as a multitude of other sinks. Rather than implementing your own logging provider as I have here, I strongly recommend you check it out. Nicholas Blumhardt has a post on adding Serilog to your ASP.NET Core 2.0 application here.

Creating a rolling file based logging provider

In actual fact, the ASP.NET Core framework does include a file logging provider, but it's wrapped up behind the Azure App Service provider. To create the file provider I mostly used files already part of the Microsoft.Extensions.Logging.AzureAppServices package, and exposed it as a logging provider in its own right. A bit of a cheat, but hey, "shoulders of giants" and all that.

Implementing a logging provider basically involves implementing two interfaces:

  • ILogger
  • ILoggerProvider

The AzureAppServices library includes some base classes for batching log messages up, and writing them on a background thread. That's important as logging should inherently be a quick and synchronous operation. Your app shouldn't know or care where the logs are being written, and it certainly shouldn't be waiting on file IO!

The batching logger provider

The BatchingLoggerProvider is an abstract class that encapsulates the process of writing logs to a concurrent collection and writing them on a background thread. The full source is here but the abridged version looks something like this:

public abstract class BatchingLoggerProvider : ILoggerProvider  
{
    protected BatchingLoggerProvider(IOptions<BatchingLoggerOptions> options)
    {
        // save options etc
        _interval = options.Value.Interval;
        // start the background task
        _outputTask = Task.Factory.StartNew<Task>(
            ProcessLogQueue,
            null,
            TaskCreationOptions.LongRunning);
    }

    // Implemented in derived classes to actually write the messages out
    protected abstract Task WriteMessagesAsync(IEnumerable<LogMessage> messages, CancellationToken token);

    // Take messages from concurrent queue and write them out
    private async Task ProcessLogQueue(object state)
    {
        while (!_cancellationTokenSource.IsCancellationRequested)
        {
            // Add pending messages to the current batch
            while (_messageQueue.TryTake(out var message))
            {
                _currentBatch.Add(message);
            }

            // Write the current batch out
            await WriteMessagesAsync(_currentBatch, _cancellationTokenSource.Token);
            _currentBatch.Clear();

            // Wait before writing the next batch
            await Task.Delay(_interval, _cancellationTokenSource.Token);
        }
    }

    // Add a message to the concurrent queue
    internal void AddMessage(DateTimeOffset timestamp, string message)
    {
        if (!_messageQueue.IsAddingCompleted)
        {
            _messageQueue.Add(new LogMessage { Message = message, Timestamp = timestamp }, _cancellationTokenSource.Token);
        }
    }

    public void Dispose()
    {
        // Finish writing messages out etc
    }

    // Create an instance of an ILogger, which is used to actually write the logs
    public ILogger CreateLogger(string categoryName)
    {
        return new BatchingLogger(this, categoryName);
    }

    private readonly List<LogMessage> _currentBatch = new List<LogMessage>();
    private readonly TimeSpan _interval;
    private BlockingCollection<LogMessage> _messageQueue = new BlockingCollection<LogMessage>(new ConcurrentQueue<LogMessage>());
    private Task _outputTask;
    private CancellationTokenSource _cancellationTokenSource = new CancellationTokenSource();
}

The BatchingLoggerProvider starts by creating a Task on a background thread that runs the ProcessLogQueue method. This method sits in a loop until the provider is disposed and the CancellationTokenSource is cancelled. It takes log messages off the concurrent (thread safe) queue, and adds them to a temporary list, _currentBatch. This list is passed to the abstract WriteMessagesAsync method, implemented by derived classes, which writes the actual logs to the destination.

The other most important method is CreateLogger(categoryName), which creates an instance of an ILogger that is injected into your classes. Our actual non-abstract provider implementation, the FileLoggerProvider, derives from the BatchingLoggerProvider:

[ProviderAlias("File")]
public class FileLoggerProvider : BatchingLoggerProvider  
{
    private readonly string _path;
    private readonly string _fileName;
    private readonly int? _maxFileSize;
    private readonly int? _maxRetainedFiles;

    public FileLoggerProvider(IOptions<FileLoggerOptions> options) : base(options)
    {
        var loggerOptions = options.Value;
        _path = loggerOptions.LogDirectory;
        _fileName = loggerOptions.FileName;
        _maxFileSize = loggerOptions.FileSizeLimit;
        _maxRetainedFiles = loggerOptions.RetainedFileCountLimit;
    }

    // Write the provided messages to the file system
    protected override async Task WriteMessagesAsync(IEnumerable<LogMessage> messages, CancellationToken cancellationToken)
    {
        Directory.CreateDirectory(_path);

        // Group messages by log date
        foreach (var group in messages.GroupBy(GetGrouping))
        {
            var fullName = GetFullName(group.Key);
            var fileInfo = new FileInfo(fullName);
            // If we've exceeded the max file size, don't write any logs
            if (_maxFileSize > 0 && fileInfo.Exists && fileInfo.Length > _maxFileSize)
            {
                return;
            }

            // Write the log messages to the file
            using (var streamWriter = File.AppendText(fullName))
            {
                foreach (var item in group)
                {
                    await streamWriter.WriteAsync(item.Message);
                }
            }
        }

        RollFiles();
    }

    // Get the file name
    private string GetFullName((int Year, int Month, int Day) group)
    {
        return Path.Combine(_path, $"{_fileName}{group.Year:0000}{group.Month:00}{group.Day:00}.txt");
    }

    private (int Year, int Month, int Day) GetGrouping(LogMessage message)
    {
        return (message.Timestamp.Year, message.Timestamp.Month, message.Timestamp.Day);
    }

    // Delete files if we have too many
    protected void RollFiles()
    {
        if (_maxRetainedFiles > 0)
        {
            var files = new DirectoryInfo(_path)
                .GetFiles(_fileName + "*")
                .OrderByDescending(f => f.Name)
                .Skip(_maxRetainedFiles.Value);

            foreach (var item in files)
            {
                item.Delete();
            }
        }
    }
}

The FileLoggerProvider implements the WriteMessagesAsync method by writing the log messages to the file system. Files are created with a standard format, so a new file is created every day. Only the last _maxRetainedFiles files are retained, as defined by the FileLoggerOptions.RetainedFileCountLimit property set on the IOptions<> object provided in the constructor.

Note: In this implementation, once files exceed a maximum size, no further logs are written for that day. The default is set to 10MB, but you can change this on the FileLoggerOptions object.

The [ProviderAlias("File")] attribute defines the alias for the logger that you can use to configure log filtering. You can read more about log filtering in the docs.

The FileLoggerProvider is used by the ILoggerFactory to create an instance of the BatchingLogger, which implements ILogger, and is used to actually write the log messages.

The batching logger

The BatchingLogger is pretty simple. The main method, Log, passes messages to the provider by calling AddMessage. The methods you typically use in your app, such as LogError and LogInformation, are actually just extension methods that call down to this underlying Log method.

public class BatchingLogger : ILogger  
{
    private readonly BatchingLoggerProvider _provider;
    private readonly string _category;

    public BatchingLogger(BatchingLoggerProvider loggerProvider, string categoryName)
    {
        _provider = loggerProvider;
        _category = categoryName;
    }

    public IDisposable BeginScope<TState>(TState state)
    {
        return null;
    }

    public bool IsEnabled(LogLevel logLevel)
    {
        return logLevel != LogLevel.None;
    }

    // Write a log message
    public void Log<TState>(DateTimeOffset timestamp, LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
    {
        if (!IsEnabled(logLevel))
        {
            return;
        }

        var builder = new StringBuilder();
        builder.Append(timestamp.ToString("yyyy-MM-dd HH:mm:ss.fff zzz"));
        builder.Append(" [");
        builder.Append(logLevel.ToString());
        builder.Append("] ");
        builder.Append(_category);
        builder.Append(": ");
        builder.AppendLine(formatter(state, exception));

        if (exception != null)
        {
            builder.AppendLine(exception.ToString());
        }

        _provider.AddMessage(timestamp, builder.ToString());
    }

    public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
    {
        Log(DateTimeOffset.Now, logLevel, eventId, state, exception, formatter);
    }
}

Hopefully this class is pretty self explanatory - most of the work is done in the logger provider.

The remaining piece of the puzzle is to provide the extension methods that let you easily configure the provider for your own app.

Extension methods to add the provider to your application

In ASP.NET Core 2.0, logging providers are added to your application by adding them directly to the WebHostBuilder in Program.cs. This is typically done using extension methods on the ILoggingBuilder. We can create a simple extension method, and even add an overload to allow configuring the logging provider's options (filenames, intervals, file size limits etc).

public static class FileLoggerFactoryExtensions  
{
    public static ILoggingBuilder AddFile(this ILoggingBuilder builder)
    {
        builder.Services.AddSingleton<ILoggerProvider, FileLoggerProvider>();
        return builder;
    }

    public static ILoggingBuilder AddFile(this ILoggingBuilder builder, Action<FileLoggerOptions> configure)
    {
        builder.AddFile();
        builder.Services.Configure(configure);

        return builder;
    }
}

In ASP.NET Core 2.0, logging providers are added using DI, so adding our new logging provider just requires adding the FileLoggerProvider to DI, as in the AddFile() method above.

With the provider complete, we can add it to our application:

public class Program  
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .ConfigureLogging(builder => builder.AddFile()) // <- add this line
            .UseStartup<Startup>()
            .Build();
}

This adds the FileLoggerProvider to the application, in addition to the Console and Debug provider. Now when we write logs to our application, logs will also be written to a file:

Creating a rolling file logging provider for ASP.NET Core 2.0
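
If you want to control where and how the files are written, you can use the AddFile overload that accepts a configuration delegate. This is a minimal sketch using the FileLoggerOptions property names from the provider code above - the values themselves are just examples:

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .ConfigureLogging(builder => builder.AddFile(options =>
        {
            options.LogDirectory = "Logs";              // directory the log files are written to
            options.FileName = "log-";                  // file name prefix, e.g. log-20170915.txt
            options.FileSizeLimit = 5 * 1024 * 1024;    // stop writing a day's file once it reaches 5MB
            options.RetainedFileCountLimit = 5;         // keep at most 5 files
        }))
        .UseStartup<Startup>()
        .Build();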

Summary

Creating an ILoggerProvider will rarely be necessary, especially thanks to established frameworks like Serilog and NLog that integrate with ASP.NET Core. Wherever possible, I suggest looking at one of these, but if you don't want to use a replacement framework like this, then using a dedicated ILoggerProvider is an option.

Implementing a new logging provider requires creating an ILogger implementation and an ILoggerProvider implementation. In this post I showed an example of a rolling file provider. For the full details and source code, check out the project on GitHub, or the NuGet package. All comments, bugs and suggestions welcome, and credit to the ASP.NET team for creating the code I based this on!


Andrew Lock: Aligning strings within string.Format and interpolated strings

Aligning strings within string.Format and interpolated strings

I was browsing through the MSDN docs the other day, trying to remind myself of the various standard ToString() format strings, when I spotted something I have somehow missed in all my years of .NET - alignment components.

This post is for those of you who have also managed to miss this feature, looking at how you can use alignment components both with string.Format and when you are using string interpolation.

Right-aligning currencies in format strings

I'm sure the vast majority of people already know how format strings work in general, so I won't dwell on it much here. In this post I'm going to focus on formatting numbers, as formatting currencies seems like the canonical use case for alignment components.

The following example shows a simple console program that formats three decimals as currencies:

class Program  
{
    readonly static decimal val1 = 1;
    readonly static decimal val2 = 12;
    readonly static decimal val3 = 1234.12m;

    static void Main(string[] args)
    {
        Console.OutputEncoding = System.Text.Encoding.Unicode;

        Console.WriteLine($"Number 1 {val1:C}");
        Console.WriteLine($"Number 2 {val2:C}");
        Console.WriteLine($"Number 3 {val3:C}");
    }
}

As you can see, we are using the standard c currency formatter in an interpolated string. Even though we are using interpolated strings, the output is identical to the output you get if you use string.Format or pass arguments to Console.WriteLine directly. All of the following are the same:

Console.WriteLine($"Number 1 {val1:C}");  
Console.WriteLine("Number 1 {0:C}", val1);  
Console.WriteLine(string.Format("Number 1 {0:C}", val1));  

When you run the original console app, you'll get something like the following (depending on your current culture):

Number 1 £1.00  
Number 2 £12.00  
Number 3 £1,234.12  

Note that the numbers are slightly hard to read - the following is much clearer:

Number 1      £1.00  
Number 2     £12.00  
Number 3  £1,234.12  

This format is much easier to scan - you can easily see that Number 3 is significantly larger than the other numbers.

To right-align formatted strings as we have here, you can use an alignment component in your string.Format format specifiers. An alignment component specifies the total number of characters to use to format the value.

The formatter formats the number as usual, and then adds the necessary number of whitespace characters to make the total up to the specified alignment component. You specify the alignment component after the value to format, separated by a comma. For example, the format string "{value,5}" when value=1 would give the string "    1": 1 formatted character plus 4 spaces, 5 characters in total.

You can use a formatting string (such as standard values like c or custom values like dd-mmm-yyyy and ###) in combination with an alignment component. Simply place the format component after the alignment component and a colon, for example "{value,10:###}". The integer after the comma is the alignment component, and the string after the colon is the formatting component.
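
As a quick illustration of combining the two components (the value and format here are made up for the example):

decimal value = 12.3456m;

// "10" is the alignment component, "0.00" is a custom format component
Console.WriteLine($"[{value,10:0.00}]");   // prints [     12.35]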

So, going back to our original requirement of right aligning three currency strings, the following would do the trick, with the values previously presented:

decimal val1 = 1;  
decimal val2 = 12;  
decimal val3 = 1234.12m;

Console.WriteLine($"Number 1 {val1,10:C}");  
Console.WriteLine($"Number 2 {val2,10:C}");  
Console.WriteLine($"Number 3 {val3,10:C}");

// Number 1      £1.00
// Number 2     £12.00
// Number 3  £1,234.12

Oversized strings

Now, you may have spotted a slight issue with this alignment example. I specified that the total width of the formatted string should be 10 characters - what happens if the number is bigger than that?

In the following example, I'm formatting a long in the same way as the previous, smaller, numbers:

class Program  
{
    readonly static decimal val1 = 1;
    readonly static decimal val2 = 12;
    readonly static decimal val3 = 1234.12m;
    readonly static long _long = 999_999_999_999;

    static void Main(string[] args)
    {
        Console.OutputEncoding = System.Text.Encoding.Unicode;

        Console.WriteLine($"Number 1 {val1,10:C}");
        Console.WriteLine($"Number 2 {val2,10:C}");
        Console.WriteLine($"Number 3 {val3,10:C}");
        Console.WriteLine($"Number 3 {_long,10:C}");
    }
}

You can see the effect of this 'oversized' number below:

Number 1      £1.00  
Number 2     £12.00  
Number 3  £1,234.12  
Number 3 £999,999,999,999.00  

As you can see, when a formatted number doesn't fit in the requested alignment characters, it spills out to the right. Essentially the alignment component indicates the minimum number of characters the formatted value should occupy.

Padding left-aligned strings

You've seen how to right-align currencies, but what if the labels associated with these values were not all the same length, as in the following example:

Console.WriteLine($"A small number {val1,10:C}");  
Console.WriteLine($"A bit bigger {val2,10:C}");  
Console.WriteLine($"A bit bigger again {val3,10:C}");  

Written like this, our good work aligning the currencies is completely undone by the unequal length of our labels:

A small number      £1.00  
A bit bigger     £12.00  
A bit bigger again  £1,234.12  

Now, there's an easy way to fix the problem in this case, just manually pad with whitespace:

Console.WriteLine($"A small number     {val1,10:C}");  
Console.WriteLine($"A bit bigger       {val2,10:C}");  
Console.WriteLine($"A bit bigger again {val3,10:C}");  

But what if these labels were dynamic? In that case, we could use the same alignment component trick. Again, the integer passed to the alignment component indicates the minimum number of characters, but this time we use a negative value to indicate the values should be left aligned:

var label1 = "A small number";  
var label2 = "A bit bigger";  
var label3 = "A bit bigger again";

Console.WriteLine($"{label1,-18} {val1,10:C}");  
Console.WriteLine($"{label2,-18} {val2,10:C}");  
Console.WriteLine($"{label3,-18} {val3,10:C}");  

With this technique, when the strings are formatted, we get nicely formatted currencies and labels.

A small number          £1.00  
A bit bigger           £12.00  
A bit bigger again  £1,234.12  

Limitations

Now, there's one big limitation when it comes to using alignment components. In the previous example, we had to explicitly set the alignment component to a length of 18 characters. That feels a bit clunky.

Ideally, we'd probably prefer to do something like the following:

var maxLength = Math.Max(label1.Length, label2.Length);  
Console.WriteLine($"{label1,-maxLength} {val1,10:C}");  
Console.WriteLine($"{label2,-maxLength} {val2,10:C}");  

Unfortunately, this doesn't compile - maxLength has to be a constant. Ah well.
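
One possible workaround (not something from the original docs, just a sketch) is to pad the dynamic part of the string yourself, for example with PadRight, which does accept a runtime value:

var maxLength = Math.Max(label1.Length, Math.Max(label2.Length, label3.Length));

// PadRight pads the label with spaces up to maxLength characters,
// giving the same effect as a negative alignment component
Console.WriteLine($"{label1.PadRight(maxLength)} {val1,10:C}");
Console.WriteLine($"{label2.PadRight(maxLength)} {val2,10:C}");
Console.WriteLine($"{label3.PadRight(maxLength)} {val3,10:C}");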

Summary

You can use alignment components in your format strings to both right-align and left-align your formatted values. This pads the formatted values with whitespace to either right-align (positive values) or left-align (negative values) the formatted value. This is particularly useful for right-aligning currencies in strings.


Andrew Lock: Using CancellationTokens in ASP.NET Core MVC controllers

Using CancellationTokens in ASP.NET Core MVC controllers

In this post I'll show how you can use a CancellationToken in your ASP.NET Core action method to stop execution when a user cancels a request from their browser. This can be useful if you have long running requests that you don't want to continue using up resources when a user clicks "stop" or "refresh" in their browser.

I'm not really going to cover any of the details of async, await, Tasks or CancellationTokens in this post, I'm just going to look at how you can inject a CancellationToken into your action methods, and use that to detect when a user has cancelled a request.

Long running requests and cancellation

Have you ever been on a website where you've made a request for a page, and it just sits there, supposedly loading? Eventually you get bored and click the "Stop" button, or maybe hammer F5 to reload the page. Users expect a page to load pretty much instantly these days, and when it doesn't, a quick refresh can be very tempting.

That's all well and good for the user, but what about your poor server? If the action method the user is hitting takes a long time to run, then refreshing five times will fire off 5 requests. Now you're doing 5 times the work. That's the default behaviour in MVC - even though the user has refreshed the browser, which cancels the original request, your MVC action won't know that the value it's computing is going to be thrown away at the end of it!

In this post, we'll assume you have an MVC action that can take some time to complete, before sending a response to the user. While that action is processing, the user might cancel the request directly, or refresh the page (which effectively cancels the original request, and initiates a new one).

I'm ignoring the fact that long running actions are generally a bad idea. If you find yourself with many long running actions in your app, you might be better off considering a solution based on CQRS and messaging queues, so you can quickly return a response to the user, and can process the result of the action on a background thread.

For example, consider the following MVC controller. This is a toy example that simply waits for 10s before returning a message to the user, but the Task.Delay() could be any long-running process, such as generating a large report to return to the user.

public class SlowRequestController : Controller  
{
    private readonly ILogger _logger;

    public SlowRequestController(ILogger<SlowRequestController> logger)
    {
        _logger = logger;
    }

    [HttpGet("/slowtest")]
    public async Task<string> Get()
    {
        _logger.LogInformation("Starting to do slow work");

        // slow async action, e.g. call external api
        await Task.Delay(10_000);

        var message = "Finished slow delay of 10 seconds.";

        _logger.LogInformation(message);

        return message;
    }
}

If we hit the URL /slowtest then the request will run for 10s, and eventually will return the message:

Using CancellationTokens in ASP.NET Core MVC controllers

If we check the logs, you can see the whole action executed as expected:

Using CancellationTokens in ASP.NET Core MVC controllers

So now, what happens if the user refreshes the browser, half way through the request? The browser never receives the response from the first request, but as you can see from the logs, the action method executes to completion twice - once for the first (cancelled) request, and once for the second (refresh) request:

Using CancellationTokens in ASP.NET Core MVC controllers

Whether this is correct behaviour will depend on your app. If the request modifies state, then you may not want to halt execution mid-way through a method. On the other hand, if the request has no side-effects, then you probably want to stop the (presumably expensive) action as soon as you can.

ASP.NET Core provides a mechanism for the web server (e.g. Kestrel) to signal when a request has been cancelled using a CancellationToken. This is exposed as HttpContext.RequestAborted, but you can also inject it automatically into your actions using model binding.

Using CancellationTokens in your MVC Actions

CancellationTokens are lightweight objects that are created by a CancellationTokenSource. When a CancellationTokenSource is cancelled, it notifies all the consumers of the CancellationToken. This allows one central location to notify all of the code paths in your app that cancellation was requested.

When cancelled, the IsCancellationRequested property of the CancellationToken will be set to true, indicating that the CancellationTokenSource has been cancelled. Depending on how you are using the token, you may or may not need to check this property yourself. I'll touch on this a little more in the next section, but for now, let's see how to use a CancellationToken in our action methods.
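
As a minimal illustration of that relationship (nothing ASP.NET Core specific here):

var cts = new CancellationTokenSource();
CancellationToken token = cts.Token;

Console.WriteLine(token.IsCancellationRequested);  // False

// cancelling the source notifies every consumer of its tokens
cts.Cancel();

Console.WriteLine(token.IsCancellationRequested);  // True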

Let's consider the previous example again. We have a long-running action method (which, for example, is generating a read-only report by calling out to a number of other APIs). As it is an expensive method, we want to stop executing the action as soon as possible if the request is cancelled by the user.

The following code shows how we can hook into the central CancellationTokenSource for the request, by injecting a CancellationToken into the action method, and passing the parameter to the Task.Delay call:

public class SlowRequestController : Controller  
{
    private readonly ILogger _logger;

    public SlowRequestController(ILogger<SlowRequestController> logger)
    {
        _logger = logger;
    }

    [HttpGet("/slowtest")]
    public async Task<string> Get(CancellationToken cancellationToken)
    {
        _logger.LogInformation("Starting to do slow work");

        // slow async action, e.g. call external api
        await Task.Delay(10_000, cancellationToken);

        var message = "Finished slow delay of 10 seconds.";

        _logger.LogInformation(message);

        return message;
    }
}

MVC will automatically bind any CancellationToken parameters in an action method to the HttpContext.RequestAborted token, using the CancellationTokenModelBinder. This model binder is registered automatically when you call services.AddMvc() (or services.AddMvcCore()) in Startup.ConfigureServices().
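
As an aside, because the parameter is bound to HttpContext.RequestAborted, you could read the token directly from the HttpContext instead of injecting it; a rough equivalent would be:

[HttpGet("/slowtest")]
public async Task<string> Get()
{
    // HttpContext.RequestAborted is the same token the model binder provides
    await Task.Delay(10_000, HttpContext.RequestAborted);
    return "Finished slow delay of 10 seconds.";
}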

With the cancellation token injected, we can test out our scenario again. We'll make an initial request, which starts the long-running action, and then we'll reload the page. As you can see from the logs below, the first request never completes. Instead the Task.Delay call throws a TaskCanceledException when it detects that the CancellationToken.IsCancellationRequested property is true, immediately halting execution.

Using CancellationTokens in ASP.NET Core MVC controllers

Shortly after the request is cancelled by the user refreshing the browser, the original request is aborted with a TaskCanceledException which propagates back through the MVC filter pipeline, and back up the middleware pipeline.

In this scenario, the Task.Delay() method keeps an eye on the CancellationToken for you, so you never need to manually check if the token has been cancelled yourself. Depending on your scenario, you may be able to rely on framework methods like these to check the state of the CancellationToken, or you may have to watch for cancellation requests yourself.

Checking the cancellation state

If you're calling a built in method that supports cancellation tokens, like Task.Delay() or HttpClient.SendAsync(), then you can just pass in the token, and let the inner method take care of actually cancelling (throwing) for you.
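
For example, a minimal sketch of forwarding the token to an HttpClient call (the method name and URL are just placeholders):

public async Task<string> GetReportAsync(CancellationToken cancellationToken)
{
    using (var client = new HttpClient())
    {
        // the HTTP call is abandoned if the incoming request is cancelled
        var response = await client.GetAsync("https://example.com/api/report", cancellationToken);
        return await response.Content.ReadAsStringAsync();
    }
}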

In other cases, you may have some synchronous work you're doing, which you want to be able to cancel. For example, imagine you're building a report to calculate all of the commission due to a company's employees. You're looping over every employee, and then looping over each sale they've made.

A simple solution to be able to cancel this report generation mid-way would be to check the CancellationToken inside the for loop, and abandon ship if the user cancels the request. The following example represents this kind of situation by looping 10 times, and performing some synchronous (non-cancellable) work, represented by the call to Thread.Sleep(). At the start of each loop, we check the cancellation token and throw if cancellation has been requested. This lets us add cancellation to an otherwise long-running synchronous process.

public class SlowRequestController : Controller  
{
    private readonly ILogger _logger;

    public SlowRequestController(ILogger<SlowRequestController> logger)
    {
        _logger = logger;
    }

    [HttpGet("/slowtest")]
    public async Task<string> Get(CancellationToken cancellationToken)
    {
        _logger.LogInformation("Starting to do slow work");

        for(var i=0; i<10; i++)
        {
            cancellationToken.ThrowIfCancellationRequested();
            // slow non-cancellable work
            Thread.Sleep(1000);
        }
        var message = "Finished slow delay of 10 seconds.";

        _logger.LogInformation(message);

        return message;
    }
}

Now if you cancel the request, the call to ThrowIfCancellationRequested() will throw an OperationCanceledException, which again will propagate up the filter pipeline and up the middleware pipeline.

Tip: You don't have to use ThrowIfCancellationRequested(). You could check the value of IsCancellationRequested and exit the action gracefully. This article contains some general best practice patterns for working with cancellation tokens.
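
For example, the loop from the previous action could bail out without throwing; a rough sketch of the graceful approach:

for (var i = 0; i < 10; i++)
{
    if (cancellationToken.IsCancellationRequested)
    {
        _logger.LogInformation("Request was cancelled, stopping early");
        return "Request cancelled";
    }
    // slow non-cancellable work
    Thread.Sleep(1000);
}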

Typically, exceptions in action methods are bad, and this exception is treated no differently. If you're using the ExceptionHandlerMiddleware or DeveloperExceptionPageMiddleware in your pipeline, these will attempt to handle the exception, and generate a user-friendly error message. Of course, the request has been cancelled, so the user will never see this message!

Rather than filling your logs with exception messages from cancelled requests, you will probably want to catch these exceptions. A good candidate for catching cancellation exceptions from your MVC actions is an ExceptionFilter.

Catching cancellations with an ExceptionFilter

ExceptionFilters are an MVC concept that can be used to handle exceptions that occur either in your action methods, or in your action filters. If you're not familiar with the filter pipeline, I recommend checking out the documentation.

You can apply ExceptionFilters at the action level, at the controller level (in which case they apply to every action in the controller), or at the global level (in which case they apply to every action in your app). Typically they're implemented as attributes, so you can decorate your action methods with them.

For this example, I'm going to create a simple ExceptionFilter and add it to the global filters. We'll handle the exception, log it, and create a simple response so that we can wind up the request as quickly as possible. The actual response (Result) we generate doesn't really matter, as it's never getting sent to the browser, so our goal is to handle the exception in as tidy a way as possible.

public class OperationCancelledExceptionFilter : ExceptionFilterAttribute  
{
    private readonly ILogger _logger;

    public OperationCancelledExceptionFilter(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<OperationCancelledExceptionFilter>();
    }
    public override void OnException(ExceptionContext context)
    {
        if(context.Exception is OperationCanceledException)
        {
            _logger.LogInformation("Request was cancelled");
            context.ExceptionHandled = true;
            context.Result = new StatusCodeResult(400);
        }
    }
}

This filter is very simple. It derives from the base ExceptionFilterAttribute for simplicity, and overrides the OnException method. This provides an ExceptionContext object with information about the exception, the action method being executed, the ModelState - all sorts of interesting stuff!

All we care about are the OperationCanceledException exceptions, and if we get one, we just write a log message, mark the exception as handled, and return a 400 result. Obviously we could log more (the URL would be an obvious start), but you get the idea.

Note that we are handling OperationCanceledException. The Task.Delay method throws a TaskCanceledException when cancelled, but that derives from OperationCanceledException, so we'll catch both types with this filter.

I'm not going to argue about whether this should be a 200/400/500 status code result. The request is cancelled and the client will never see it, so it really doesn't matter that much. I chose to go with a 400 result, but you have to be aware that if you have any middleware in place to catch errors like this, such as the StatusCodeMiddleware, then it could end up catching the response and doing pointless extra work to generate a "friendly" error page. On the other hand, if you return a 200, be careful if you have middleware that might cache the response to this "successful" request!

Muhammad Rehan Saeed suggests using 499 Client Closed Request, as it's used by Nginx for a similar purpose. That seems as good an option as any to me!

To hook up the exception filter globally, you add it in the call to services.AddMvc() in Startup.ConfigureServices:

public class Startup  
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc(options =>
        {
            options.Filters.Add<OperationCancelledExceptionFilter>();
        });
    }
}

Now if the user refreshes their browser mid request, the request will still be cancelled, but we are back to a nice log message, instead of exceptions propagating all the way up our middleware pipeline.

Using CancellationTokens in ASP.NET Core MVC controllers

Summary

Users can cancel requests to your web app at any point, by hitting the stop or reload button on your browser. Typically, your app will continue to generate a response anyway, even though Kestrel won't send it to the user. If you have a long running action method, then you may want to detect when a request is cancelled, and stop execution.

You can do this by injecting a CancellationToken into your action method, which will be automatically bound to the HttpContext.RequestAborted token for the request. You can check this token for cancellation as usual, and pass it to any asynchronous methods that support it. If the request is cancelled, an OperationCanceledException or TaskCanceledException will be thrown.

You can easily handle these exceptions using an ExceptionFilter, applied to the action or controller directly, or alternatively applied globally. The response won't be sent to the user's browser, so this isn't essential, but you can use it to tidy up your logs, and short circuit the pipeline in as efficient a manner as possible.

Thanks to @purekrome for requesting this post and even providing the code outline!


Darrel Miller: HTTP Pattern Index

When building HTTP based applications we are limited to a small set of HTTP methods in order to achieve the goals of our application. Once our needs go beyond simple CRUD style manipulation of resource representations, we need to be a little more creative in the way we manipulate resources in order to achieve more complex goals.

The following patterns are based on scenarios that I myself have used in production applications, or I have seen others implement. These patterns are language agnostic, domain agnostic and to my knowledge, exist within the limitations of the REST constraints.


  • Alias: A resource designed to provide a logical identifier but without being responsible for incurring the costs of transferring the representation bytes.
  • Action: Coming soon: A processing resource used to convey a client's intent to invoke some kind of unsafe action on a secondary resource.
  • Bouncer: A resource designed to accept a request body containing complex query parameters and redirect to a new location to enable the results of complex and expensive queries to be cached.
  • Builder: Coming soon: A builder resource is much like a factory resource in that it is used to create another resource; however, a builder is a transient resource that enables idempotent creation and allows the client to specify values that cannot change over the lifetime of the created resource.
  • Bucket: A resource used to indicate the status of a "child" resource.
  • Discovery: This type of resource is used to provide a client with the information it needs to be able to access other resources.
  • Factory: A factory resource is one that is used to create another resource.
  • Miniput: A resource designed to enable partial updates to another resource.
  • Progress: A progress resource is usually a temporary resource that is created automatically by the server to provide status on some long running process that has been initiated by a client.
  • Sandbox: Coming soon: A processing resource that is paired with a regular resource to enable making "what if" style updates and seeing what the results would have been if applied against the regular resource.
  • Toggle: Coming soon: A resource that has two distinct states and can easily be switched between those states.
  • Whackamole: A type of resource that, when deleted, re-appears as a different resource.
  • Window: Coming soon: A resource that provides access to a subset of a larger set of information through the use of parameters that filter, project and zoom information from the complete set.


Anuraj Parameswaran: Introduction to Razor Pages in ASP.NET Core

This post is about Razor Pages in ASP.NET Core. Razor Pages is a new feature of ASP.NET Core MVC that makes coding page-focused scenarios easier and more productive. Microsoft released Razor Pages with ASP.NET Core 2.0; it is another way of building applications, built on top of ASP.NET Core MVC. Razor Pages will be helpful for beginners, as well as for developers coming from other web development backgrounds such as PHP or classic ASP. Razor Pages fits well in small scenarios where building an application in MVC is overkill.


Andrew Lock: The SDK 'Microsoft.Net.Sdk.Web' specified could not be found

The SDK 'Microsoft.Net.Sdk.Web' specified could not be found

This article describes why you get the error "The SDK 'Microsoft.Net.Sdk.Web' specified could not be found" when creating a new project in Visual Studio 2017 15.3, which prevents the project from loading, and how to fix it.

tl;dr; I had a rogue global.json sitting in a parent folder that was tying the SDK version to 1.x. Removing it (or adding a global.json for 2.x) fixed the problem.

Update: Shortly after publishing this post, I noticed a tweet from Patrik who was getting a similar error, but for a different situation. He had installed the VS 2017 15.3 update, and could no longer open ASP.NET Core 1.1 projects!

It turns out, he'd uncovered the root of the problem, and the issue I was having - VS 2017 update 3 is incompatible with the 1.0.0 SDK:

Kudos to him for figuring it out!

2.0 all the things

As I'm sure anyone who's reading this is aware, Microsoft released the final versions of .NET Standard 2.0, .NET Core 2.0, and ASP.NET Core 2.0 yesterday. These bring a huge number of changes, perhaps the most important being the massive increase in API surface brought by .NET Standard 2.0, which will make porting applications to .NET Core much easier.

As part of the release, Microsoft also released Visual Studio 2017 update 3. This also has a bunch of features, but most importantly it supports .NET Core 2.0. Before this point, if you wanted to play with the .NET Core 2.0 bits you had to install the preview version of Visual Studio.

That's no longer as scary as it once was, with VS's new lightweight installer and side-by-side installs. But I've been burned one too many times, and just didn't feel like risking having to pave my machine, so I decided to hold off on the preview version. That didn't stop me playing with the preview bits, of course: OmniSharp means developing in VS Code with the CLI is almost as good, and JetBrains Rider went RTM a couple of weeks ago.

Still, I was excited to play with 2.0 on my home turf, in Visual Studio, so I:

  • Opened up the Visual Studio Installer program - this forces VS to check for updates, instead of waiting for it to notice that an update is available. It still took a little while (10 minutes) for 15.3 to show up, but I clicked the update button as soon as it appeared.

  • Installed the .NET Core 2.0 SDK from here - you have to do this step separately at the moment. Once it's installed, the .NET Core 2.0 templates light up in Visual Studio.

With both of these installed I decided on a quick test to make sure everything was running smoothly. I'd create a basic app using new 2.0 templates.

Creating a new ASP.NET Core 2.0 web app

The File > New Project experience is pretty much the same in ASP.NET Core 2.0, but there are some additional templates available after you choose ASP.NET Core Web Application. If you switch the framework version to ASP.NET Core 2.0, you'll see some new templates appear, including SPA templates for Angular and React.js:

The SDK 'Microsoft.Net.Sdk.Web' specified could not be found

I left everything at the defaults - no Docker support enabled, no authentication - and selected Web Application (Model-View-Controller).

Note that the templates have been renamed a little. The Web Application template creates a new project using Razor Pages, while the Web Application (Model-View-Controller) template creates a project with separate controllers and views.

Click OK, and wait for the template to scaffold… and …

The SDK 'Microsoft.Net.Sdk.Web' specified could not be found

Oh dear. What's going on here?

The SDK 'Microsoft.Net.Sdk.Web' specified could not be found

So, there was clearly a problem creating the solution. My first thought was that it was a bug in the new VS 2017 update. A little odd, seeing as no one else on Twitter seemed to have mentioned it, but not overly surprising given it had just been released. I should expect some kinks, right?

A quick google for the error turned up this issue, but that seemed to suggest the error was an old one that had been fixed.

I gave it a second go, but sure enough, the same error occurred. Clicking OK left me with a solution with no projects.

The SDK 'Microsoft.Net.Sdk.Web' specified could not be found

The project template was created successfully on disk, so I thought, why not just add it to the solution directly: right-click on the solution file, Add > Existing Project?

The SDK 'Microsoft.Net.Sdk.Web' specified could not be found

Hmmm, so definitely something significantly wrong here...

Check your global.json

The error was complaining that the SDK was not found. Why would that happen? I had definitely installed the .NET Core 2.0 SDK, and VS could definitely see it, as it had shown me the 2.0 templates.

It was at this point I had an epiphany. A while back, when experimenting with a variety of preview builds, I had recurring issues when I would switch back and forth between preview projects and ASP.NET Core 1.0 projects.

To get round the problem, I created a sub folder in my Repos folder for preview builds, and dropped a global.json into the folder for the newer SDK, and placed the following global.json in the root of my Repos folder:

{
  "sdk": {
    "version": "1.0.0"
  }
}

Any time I created a project in the Previews folder, it would use the preview SDK, but a project created anywhere else would use the stable 1.0.0 SDK. This was the root of my problem.

I was trying to create an ASP.NET Core 2.0 project in a folder tied to the 1.0.0 SDK. That older SDK doesn't support the new 2.0 projects, so VS was borking when it tried to load the project.

The simple fix was to either delete the global.json entirely (the highest SDK version will be used in that case), or update it to 2.0.0.
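
For example, after the fix the global.json simply pins the newer SDK (use whichever 2.x SDK version you actually have installed):

{
  "sdk": {
    "version": "2.0.0"
  }
}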

In general, you can always use the latest version of the SDK to build your projects. The 2.0.0 SDK can be used to build 1.0.0 projects.

After updating the global.json, VS was able to add the existing project, and to create new projects with no issues.

The SDK 'Microsoft.Net.Sdk.Web' specified could not be found

Summary

I was running into an issue where creating a new ASP.NET Core 2.0 project was giving me an error The SDK 'Microsoft.Net.Sdk.Web' specified could not be found, and leaving me unable to open the project in Visual Studio. The problem was the project was created in a folder that contained a global.json file, tying the SDK version to 1.0.0.

Deleting the global.json, or updating it to 2.0.0, fixed the issue. Be sure to check parent folders too - if any parent folder contains a global.json, the SDK version specified in the "closest" folder will be used.


Damien Bowden: Angular Configuration using ASP.NET Core settings

This post shows how ASP.NET Core application settings can be used to configure an Angular application. ASP.NET Core provides excellent support for different configuration per environment, and so using this for an Angular application can be very useful. Using CI, one release build can be automatically created with different configurations, instead of different release builds per deployment target.

Code: https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow/tree/master/src/AngularClient

ASP.NET Core Hosting application

The ClientAppSettings class is used to load the strongly typed settings from the appsettings.json file. The class contains the properties required for OIDC configuration in the SPA, and the required API URLs. These properties have different values per deployment, so we do not want to hard-code them in a TypeScript file, or change them with each build.

namespace AngularClient.ViewModel
{
    public class ClientAppSettings
    {
        public string  stsServer { get; set; }
        public string redirect_url { get; set; }
        public string client_id { get; set; }
        public string response_type { get; set; }
        public string scope { get; set; }
        public string post_logout_redirect_uri { get; set; }
        public bool start_checksession { get; set; }
        public bool silent_renew { get; set; }
        public string startup_route { get; set; }
        public string forbidden_route { get; set; }
        public string unauthorized_route { get; set; }
        public bool log_console_warning_active { get; set; }
        public bool log_console_debug_active { get; set; }
        public string max_id_token_iat_offset_allowed_in_seconds { get; set; }
        public string apiServer { get; set; }
        public string apiFileServer { get; set; }
    }
}

The appsettings.json file contains the actual values which will be used for each different environment.

{
  "ClientAppSettings": {
    "stsServer": "https://localhost:44318",
    "redirect_url": "https://localhost:44311",
    "client_id": "angularclient",
    "response_type": "id_token token",
    "scope": "dataEventRecords securedFiles openid profile",
    "post_logout_redirect_uri": "https://localhost:44311",
    "start_checksession": false,
    "silent_renew": false,
    "startup_route": "/dataeventrecords",
    "forbidden_route": "/forbidden",
    "unauthorized_route": "/unauthorized",
    "log_console_warning_active": true,
    "log_console_debug_active": true,
    "max_id_token_iat_offset_allowed_in_seconds": 10,
    "apiServer": "https://localhost:44390/",
    "apiFileServer": "https://localhost:44378/"
  }
}

The ClientAppSettings class is then added to the IoC in the ASP.NET Core Startup class and the ClientAppSettings section is used to fill the instance with data.

public void ConfigureServices(IServiceCollection services)
{
  services.Configure<ClientAppSettings>(Configuration.GetSection("ClientAppSettings"));
  services.AddMvc();
}

An MVC controller is used to make the settings public. This class gets the strongly typed settings from the IoC and returns them in a HTTP GET request. No application secrets should be included in this HTTP GET request!

using AngularClient.ViewModel;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Options;

namespace AngularClient.Controllers
{
    [Route("api/[controller]")]
    public class ClientAppSettingsController : Controller
    {
        private readonly ClientAppSettings _clientAppSettings;

        public ClientAppSettingsController(IOptions<ClientAppSettings> clientAppSettings)
        {
            _clientAppSettings = clientAppSettings.Value;
        }

        [HttpGet]
        public IActionResult Get()
        {
            return Ok(_clientAppSettings);
        }
    }
}

Configuring the Angular application

The Angular application needs to read the settings and use these in the client application. A configClient function is used to GET the data from the server. The APP_INITIALIZER could also be used, but as the settings are being used in the main AppModule, you still have to wait for the HTTP GET request to complete.

configClient() {

	// console.log('window.location', window.location);
	// console.log('window.location.href', window.location.href);
	// console.log('window.location.origin', window.location.origin);

	return this.http.get(window.location.origin + window.location.pathname + '/api/ClientAppSettings').map(res => {
		this.clientConfiguration = res.json();
	});
}

In the constructor of the AppModule, the module subscribes to the configClient function. Here the configuration values are read and the properties are set as required for the SPA application.

clientConfiguration: any;

constructor(public oidcSecurityService: OidcSecurityService, private http: Http, private configuration: Configuration) {

	console.log('APP STARTING');
	this.configClient().subscribe(config => {

		let openIDImplicitFlowConfiguration = new OpenIDImplicitFlowConfiguration();
		openIDImplicitFlowConfiguration.stsServer = this.clientConfiguration.stsServer;
		openIDImplicitFlowConfiguration.redirect_url = this.clientConfiguration.redirect_url;
		openIDImplicitFlowConfiguration.client_id = this.clientConfiguration.client_id;
		openIDImplicitFlowConfiguration.response_type = this.clientConfiguration.response_type;
		openIDImplicitFlowConfiguration.scope = this.clientConfiguration.scope;
		openIDImplicitFlowConfiguration.post_logout_redirect_uri = this.clientConfiguration.post_logout_redirect_uri;
		openIDImplicitFlowConfiguration.start_checksession = this.clientConfiguration.start_checksession;
		openIDImplicitFlowConfiguration.silent_renew = this.clientConfiguration.silent_renew;
		openIDImplicitFlowConfiguration.startup_route = this.clientConfiguration.startup_route;
		openIDImplicitFlowConfiguration.forbidden_route = this.clientConfiguration.forbidden_route;
		openIDImplicitFlowConfiguration.unauthorized_route = this.clientConfiguration.unauthorized_route;
		openIDImplicitFlowConfiguration.log_console_warning_active = this.clientConfiguration.log_console_warning_active;
		openIDImplicitFlowConfiguration.log_console_debug_active = this.clientConfiguration.log_console_debug_active;
		openIDImplicitFlowConfiguration.max_id_token_iat_offset_allowed_in_seconds = this.clientConfiguration.max_id_token_iat_offset_allowed_in_seconds;

		this.oidcSecurityService.setupModule(openIDImplicitFlowConfiguration);

		configuration.FileServer = this.clientConfiguration.apiFileServer;
		configuration.Server = this.clientConfiguration.apiServer;
	});
}

The Configuration class can then be used throughout the SPA application.

import { Injectable } from '@angular/core';

@Injectable()
export class Configuration {
    public Server = 'read from app settings';
    public FileServer = 'read from app settings';
}

I am certain there is a better way to do the Angular configuration, but not much information exists for this. APP_INITIALIZER is not so well documented. Angular CLI has its own solution, but the configuration file cannot be read per environment.

Links:

https://docs.microsoft.com/en-us/aspnet/core/fundamentals/configuration

https://docs.microsoft.com/en-us/aspnet/core/fundamentals/environments

https://www.intertech.com/Blog/deploying-angular-4-apps-with-environment-specific-info/

https://stackoverflow.com/questions/43193049/app-settings-the-angular-4-way

https://damienbod.com/2015/10/11/asp-net-5-multiple-configurations-without-using-environment-variables/



Andrew Lock: Introduction to the ApiExplorer in ASP.NET Core

Introduction to the ApiExplorer in ASP.NET Core

One of the standard services added when you call AddMvc() or AddMvcCore() in an ASP.NET Core MVC application is the ApiExplorer. In this post I'll show a quick example of its capabilities, and give you a taste of the metadata you can obtain about your application.

Exposing your application's API with the ApiExplorer

The ApiExplorer contains functionality for discovering and exposing metadata about your MVC application. You can use it to provide details such as a list of controllers and actions, their URLs and allowed HTTP methods, parameters and response types.

How you choose to use these details is up to you - you could use it to auto-generate documentation, help pages, or clients for your application. The Swagger and Swashbuckle.AspNetCore frameworks use the ApiExplorer functionality to provide a fully featured documentation framework, and are well worth a look if that's what you're after.

For this article, I'll hook directly into the ApiExplorer to generate a simple help page for a basic Web API controller.

Introduction to the ApiExplorer in ASP.NET Core

Adding the ApiExplorer to your applications

The ApiExplorer functionality is part of the Microsoft.AspNetCore.Mvc.ApiExplorer package. This package is referenced by default when you include the Microsoft.AspNetCore.Mvc package in your application, so you generally won't need to add the package explicitly. If you are starting from a stripped down application, you can add the package directly to make the services available.
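
If you do need to add it manually, one way is from the command line (pick the package version that matches your framework version):

$ dotnet add package Microsoft.AspNetCore.Mvc.ApiExplorer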

As Steve Gordon describes in his series on the MVC infrastructure, the call to AddMvc in Startup.ConfigureServices automatically adds the ApiExplorer services to your application by calling services.AddApiExplorer(), so you don't need to explicitly add anything else to your Startup class.

The call to AddApiExplorer in turn, calls an internal method AddApiExplorerServices(), which adds the actual services you will use in your application:

internal static void AddApiExplorerServices(IServiceCollection services)  
{
    services.TryAddSingleton<IApiDescriptionGroupCollectionProvider, ApiDescriptionGroupCollectionProvider>();
    services.TryAddEnumerable(
        ServiceDescriptor.Transient<IApiDescriptionProvider, DefaultApiDescriptionProvider>());
}

This adds a default implementation of IApiDescriptionGroupCollectionProvider, which exposes the API endpoints of your application. To access the list of APIs, you just need to inject the service into your controllers/services.

Listing your application's metadata

For this app, we'll just include the default ValuesController that is added to the default web API project:

[Route("api/[controller]")]
public class ValuesController : Controller  
{
    // GET api/values
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2" };
    }
}

In addition, we'll create a simple controller that renders details about the Web API endpoints in your application to a razor page, as you saw earlier, called DocumentationController.

First, inject the IApiDescriptionGroupCollectionProvider into the controller. For simplicity, we'll just return this directly as the model to the Razor view page - we'll decompose the details it provides in the Razor page.

public class DocumentationController : Controller  
{
    private readonly IApiDescriptionGroupCollectionProvider _apiExplorer;
    public DocumentationController(IApiDescriptionGroupCollectionProvider apiExplorer)
    {
        _apiExplorer = apiExplorer;
    }

    public IActionResult Index()
    {
        return View(_apiExplorer);
    }
}

The provider exposes a collection of ApiDescriptionGroups, each of which contains a collection of ApiDescriptions. You can think of an ApiDescriptionGroup as a controller, and an ApiDescription as an action method.

The ApiDescription contains a wealth of information about the action method - parameters, the URL, the type of media that can be returned - basically everything you might want to know about an API!
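
Before we get to the Razor page, here's a minimal sketch of walking that same data in plain C#, for example inside an action on the DocumentationController above:

foreach (var group in _apiExplorer.ApiDescriptionGroups.Items)
{
    foreach (var api in group.Items)
    {
        // write out the HTTP method, relative URL and group name for each action
        Console.WriteLine($"{api.HttpMethod} {api.RelativePath} ({group.GroupName})");
    }
}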

The Razor page below lists out all the APIs that are exposed in the application. There's a slightly overwhelming amount of detail here, but it lists everything you might need to know!

@using Microsoft.AspNetCore.Mvc.ApiExplorer;
@model IApiDescriptionGroupCollectionProvider

<div id="body">  
    <section class="featured">
        <div class="content-wrapper">
            <hgroup class="title">
                <h1>ASP.NET Web API Help Page</h1>
            </hgroup>
        </div>
    </section>
    <section class="content-wrapper main-content clear-fix">
        <h3>API Groups, version @Model.ApiDescriptionGroups.Version</h3>
        @foreach (var group in Model.ApiDescriptionGroups.Items)
            {
            <h4>@group.GroupName</h4>
            <ul>
                @foreach (var api in group.Items)
                {
                    <li>
                        <h5>@api.HttpMethod @api.RelativePath</h5>
                        <blockquote>
                            @if (api.ParameterDescriptions.Count > 0)
                            {
                                <h6>Parameters</h6>
                                    <dl class="dl-horizontal">
                                        @foreach (var parameter in api.ParameterDescriptions)
                                        {
                                            <dt>Name</dt>
                                            <dd>@parameter.Name,  (@parameter.Source.Id)</dd>
                                            <dt>Type</dt>
                                            <dd>@parameter.Type?.FullName</dd>
                                            @if (parameter.RouteInfo != null)
                                            {
                                                <dt>Constraints</dt>
                                                <dd>@string.Join(",", parameter.RouteInfo.Constraints?.Select(c => c.GetType().Name).ToArray())</dd>
                                                <dt>DefaultValue</dt>
                                                <dd>@parameter.RouteInfo.DefaultValue</dd>
                                                <dt>Is Optional</dt>
                                                <dd>@parameter.RouteInfo.IsOptional</dd>
                                            }
                                        }
                                    </dl>
                            }
                            else
                            {
                                <i>No parameters</i>
                            }
                        </blockquote>
                        <blockquote>
                            <h6>Supported Response Types</h6>
                            <dl class="dl-horizontal">
                                @foreach (var response in api.SupportedResponseTypes)
                                {
                                    <dt>Status Code</dt>
                                        <dd>@response.StatusCode</dd>

                                        <dt>Response Type</dt>
                                        <dd>@response.Type?.FullName</dd>

                                        @foreach (var responseFormat in response.ApiResponseFormats)
                                        {
                                            <dt>Formatter</dt>
                                            <dd>@responseFormat.Formatter?.GetType().FullName</dd>
                                            <dt>Media Type</dt>
                                            <dd>@responseFormat.MediaType</dd>
                                        }
                                }
                            </dl>

                        </blockquote>
                    </li>
                }
            </ul>
        }
    </section>
</div>  

If you run the application now, you might be slightly surprised by the response:

Introduction to the ApiExplorer in ASP.NET Core

Even though we have the default ValuesController in the project, apparently there are no APIs!

Enabling documentation of your controllers

By default, controllers in ASP.NET Core are not included in the ApiExplorer. There are a whole host of attributes you can apply to customise the metadata produced by the ApiExplorer, but the critical one here is [ApiExplorerSettings].

By applying this attribute to a controller, you can control whether or not it is included in the API, as well as its name in the ApiExplorer:

[Route("api/[controller]")]
[ApiExplorerSettings(IgnoreApi = false, GroupName = nameof(ValuesController))]
public class ValuesController : Controller  
{
    // GET api/values
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2" };
    }

    // other action methods
}

After applying this attribute, and viewing the groups in IApiDescriptionGroupCollectionProvider you can see that the API is now available:

Introduction to the ApiExplorer in ASP.NET Core

ApiExplorer and conventional routing

Note, you can only apply the [ApiExplorerSettings] attribute to controllers and actions that use attribute routing. If you enable the ApiExplorer on an action that uses conventional routing, you will be greeted with an error like the following:

Introduction to the ApiExplorer in ASP.NET Core

Remember, ApiExplorer really is just for your APIs! If you stick to the convention of using attribute routing for your Web API controllers and conventional routing for your MVC controllers you'll be fine, but it's just something to be aware of.

Summary

This was just a brief introduction to the ApiExplorer functionality that exposes a variety of metadata about the Web APIs in your application. You're unlikely to use it quite like this, but it's interesting to see all the introspection options available to you.


Andrew Lock: How to format response data as XML or JSON, based on the request URL in ASP.NET Core

How to format response data as XML or JSON, based on the request URL in ASP.NET Core

I think it's safe to say that most ASP.NET Core applications that use a Web API return data as JSON. What with JavaScript in the browser, and JSON parsers everywhere you look, this makes perfect sense. Consequently, ASP.NET Core is very much geared towards JSON, but it is perfectly possible to return data in other formats (for example Damien Bowden recently added a Protobuf formatter to the WebApiContrib.Core project).

In this post, I'm going to focus on a very specific scenario. You want to be able to return data from a Web API action method in one of two different formats - JSON or XML, and you want to control which format is used by the extension of the URL. For example /api/Values.xml should format the result as XML, while /api/Values.json should format the result as JSON.

Using the FormatFilterAttribute to read the format from the URL

Out of the box, if you use the standard MVC service configuration by calling services.AddMvc(), the JSON formatters are configured for your application by default. All that you need to do is tell your action method to read the format from the URL using the FormatFilterAttribute.

You can add the [FormatFilter] attribute to your action methods, to indicate that the output format will be defined by the URL. The FormatFilter looks for a route parameter called format in the RouteData for the request, or in the querystring. If you want to use the .json approach I described earlier, you should make sure the route template for your actions includes a .{format} parameter:

public class ValuesController : Controller  
{
    // GET api/values
    [HttpGet("api/values.{format}"), FormatFilter]
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2" };
    }
}

Note: You can make the .format suffix optional using the syntax .{format?}, but you need to make sure the . follows a route parameter, e.g. api/values/{id}.{format?}. If you try to make the format optional in the example above (api/values.{format?}) you'll get a server error. A bit odd, but there you go…
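
For example, a hypothetical action with an id parameter and an optional format suffix could look like this:

// GET api/values/1, api/values/1.json or api/values/1.xml
[HttpGet("api/values/{id}.{format?}"), FormatFilter]
public string Get(int id)
{
    return $"value{id}";
}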

With the route template updated, and the [FormatFilter] applied to the method, we can now test our JSON formatters:

How to format response data as XML or JSON, based on the request URL in ASP.NET Core

Success - we have returned JSON when requested! Let's give it a try with the xml suffix:

How to format response data as XML or JSON, based on the request URL in ASP.NET Core

Doh, no such luck. As I mentioned earlier, the JSON formatters are registered by default; if we want to return XML then we'll need to configure the XML formatters too.

Adding the XML formatters

In ASP.NET Core, everything is highly modular, so you only add the functionality you need to your application. Consequently, there's a separate NuGet package for the XML formatters that you need to add to your .csproj file - Microsoft.AspNetCore.Mvc.Formatters.Xml

<PackageReference Include="Microsoft.AspNetCore.Mvc.Formatters.Xml" Version="1.1.3" />  

Note: If you're using ASP.NET Core 2.0, this package is included by default as part of the Microsoft.AspNetCore.All metapackage.

Adding the package to your project lights up an extension method on the IMvcBuilder instance returned by the call to services.AddMvc(). The AddXmlSerializerFormatters() method adds both input and output formatters, so you can serialise objects to and from XML.

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMvc()
        .AddXmlSerializerFormatters();
}

Alternatively, if you only want to be able to format results as XML, but don't need to be able to read XML from a request body, you can just add the output formatter instead:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMvc(options =>
    {
        options.OutputFormatters.Add(new XmlSerializerOutputFormatter());
    });
}

By adding this output formatter we can now format objects as XML. However, if you test the XML URL again at this point, you'll still get the same 404 response as we did before. What gives?

Registering a type mapping for the format suffix

By registering the XML formatters, we now have the ability to format XML. However, the FormatFilter doesn't know how to handle the .xml suffix we're using in the request URL. To make this work, we need to tell the filter that the xml suffix maps to the application/xml MIME type.

You can register new type mappings by configuring the FormatterMappings options when you call AddMvc(). These define the mappings between the {format} parameter and the MIME type that the FormatFilter will use. For example:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMvc(options =>
    {
        options.FormatterMappings.SetMediaTypeMappingForFormat
            ("xml", MediaTypeHeaderValue.Parse("application/xml"));
        options.FormatterMappings.SetMediaTypeMappingForFormat
            ("config", MediaTypeHeaderValue.Parse("application/xml"));
        options.FormatterMappings.SetMediaTypeMappingForFormat
            ("js", MediaTypeHeaderValue.Parse("application/json"));
    })
        .AddXmlSerializerFormatters();
}

The FormatterMappings property contains a dictionary of all the suffix to MIME type mappings. You can add new ones using the SetMediaTypeMappingForFormat, passing the suffix as the key and the MIME type as the value.

In the example above I've actually registered three new mappings. I've mapped both the xml and config suffixes to XML, and added a new js suffix that maps to JSON, just to demonstrate that JSON isn't actually anything special here!

With this last piece of configuration in place, we can now finally request XML by using the .xml or .config suffix in our URLs:

How to format response data as XML or JSON, based on the request URL in ASP.NET Core

Summary

In this post you saw how to use the FormatFilter to specify the desired output format by using a file-type suffix on your URLs. To do so, there were four steps:

  1. Add the [FormatFilter] attribute to your action method
  2. Ensure the route to the action contains a {format} route parameter (or pass it in the querystring, e.g. ?format=xml)
  3. Register the output formatters you wish to support with MVC. To add both input and output XML formatters, use the AddXmlSerializerFormatters() extensions method
  4. Register a new type mapping between a format suffix and a MIME type on the MvcOptions object. For example, you could add XML using:
options.FormatterMappings.SetMediaTypeMappingForFormat(  
    "xml", MediaTypeHeaderValue.Parse("application/xml"));

If a type mapping is not configured for a suffix, then you'll get a 404 Not Found response when calling the action.


Anuraj Parameswaran: Running PHP on .NET Core with Peachpie

This post is about running PHP on .NET Core with Peachpie. Peachpie is an open source PHP Compiler to .NET. This innovative compiler allows you to run existing PHP applications with the performance, speed, security and interoperability of .NET.


Andrew Lock: Customising ASP.NET Core Identity EF Core naming conventions for PostgreSQL

Customising ASP.NET Core Identity EF Core naming conventions for PostgreSQL

ASP.NET Core Identity is an authentication and membership system that lets you easily add login functionality to your ASP.NET Core application. It is designed in a modular fashion, so you can use any "stores" for users and claims that you like, but out of the box it uses Entity Framework Core to store the entities in a database.

By default, EF Core uses naming conventions for the database entities that are typical for SQL Server. In this post I'll describe how to configure your ASP.NET Core Identity app to replace the database entity names with conventions that are more common to PostgreSQL.

I'm focusing on ASP.NET Core Identity here, where the entity table name mappings have already been defined, but there's actually nothing specific to ASP.NET Core Identity in this post. You can just as easily apply this post to EF Core in general, and use more PostgreSQL-friendly conventions for all your EF Core code. See here for the tl;dr code!

Moving to PostgreSql as a SQL Server aficionado

ASP.NET Core Identity can use any database provider that is supported by EF Core - some of which are provided by Microsoft, others are third-party or open source components. If you use the templates that come with the .NET CLI via dotnet new, you can choose SQL Server or SQLite by default. Personally, I've been working more and more with PostgreSQL, the powerful cross-platform, open source database.

If you're coming from SQL Server, one of the biggest differences that can bite you when you start working with PostgreSQL is that table and column names are case sensitive! This certainly takes some getting used to, and, frankly, is a royal pain in the arse to work with if you stick to your old habits. If a table is created with uppercase characters in the table or column name, then you have to ensure you get the case right, and wrap the identifiers in double quotes, as I'll show shortly.

This is unfortunate when you come from a SQL Server world, where camel-case is the norm for table and column names. For example, imagine you have a table called AspNetUsers, and you want to retrieve the Id, Email and EmailConfirmed fields:

Customising ASP.NET Core Identity EF Core naming conventions for PostgreSQL

To query this table in PostgreSQL, you'd have to do something like:

SELECT "Id", "Email", "EmailConfirmed" FROM "AspNetUsers"  

Notice the quote marks we need? This only gets worse when you need to escape the quotes because you're calling from the command line, or defining a SQL query in a C# string, for example:

$ psql -d DatabaseWithCaseIssues -c "SELECT \"Id\", \"Email\", \"EmailConfirmed\" FROM \"AspNetUsers\" "

Clearly nobody wants to be dealing with this. Instead, the convention in PostgreSQL is to use snake_case for database objects instead of CamelCase.

snake_case > CamelCase in PostgreSQL

Snake case uses lowercase for all of the identifiers, and instead of using capitals to demarcate words, it uses an underscore, _. This is perfect for PostgreSQL, as it neatly avoids the case issue. If we could rename our entity table to asp_net_users, and the corresponding fields to id, email and email_confirmed, then we'd neatly side-step the quoting issue:

Customising ASP.NET Core Identity EF Core naming conventions for PostgreSQL

This makes the PostgreSQL queries way simpler, especially when you would otherwise need to escape the quote marks:

$ psql -d DatabaseWithCaseIssues -c "SELECT id, email, email_confirmed FROM asp_net_users"

If you're using EF Core, then theoretically none of this would matter to you. The whole point is that you don't have to write SQL code yourself; you can just let the underlying framework generate the necessary queries. If you use CamelCase names, then the EF Core PostgreSQL database provider will happily escape all the entity names for you.

Unfortunately, reality is a pesky beast. It's just a matter of time before you find yourself wanting to write some sort of custom query directly against the database to figure out what's going on. More often than not, if it comes to this, it's because there's an issue in production and you're trying to figure out what went wrong. The last thing you need at this stressful time is to be messing with casing issues!

Consequently, I like to ensure my database tables are easy to query, even if I'll be using EF Core or some other ORM 99% of the time.

EF Core conventions and ASP.NET Core Identity

ASP.NET Core Identity takes care of many aspects of the identity and membership system of your app for you. In particular, it creates and manages the application user, claim and role entities for you, as well as a variety of entities related to third-party logins:

Customising ASP.NET Core Identity EF Core naming conventions for PostgreSQL

If you're using the EF Core package for ASP.NET Core Identity, these entities are added to an IdentityDbContext, and configured within the OnModelCreating method. If you're interested, you can view the source online - I've shown a partial definition below that just includes the configuration for the Users property, which represents the users of your app.

public abstract class IdentityDbContext<TUser>  
{
    public DbSet<TUser> Users { get; set; }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        builder.Entity<TUser>(b =>
        {
            b.HasKey(u => u.Id);
            b.HasIndex(u => u.NormalizedUserName).HasName("UserNameIndex").IsUnique();
            b.HasIndex(u => u.NormalizedEmail).HasName("EmailIndex");
            b.ToTable("AspNetUsers");
        });
        // additional configuration
    }
}

The IdentityDbContext uses the OnModelCreating method to configure the database schema. In particular, it defines the name of the user table to be "AspNetUsers" and sets the name of a number of indexes. The column names of the entities default to their C# property values, so they would also be CamelCased.

In your application, you would typically derive your own DbContext from the IdentityDbContext<>, and inherit all of the schema associated with ASP.NET Core Identity. In the example below I've done this, and specified the TUser type for the application to be ApplicationUser:

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>  
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    { }
}

With the configuration above, the database schema would use all of the default values, including the table names, and would give the database schema we saw previously. Luckily, we can override these values and replace them with our snake case values instead.

Replacing specific values with snake case

As is often the case, there are multiple ways to achieve our desired behaviour of mapping to snake case properties. Conceptually, the simplest is to just overwrite the values specified in IdentityDbContext.OnModelCreating() with new values - the values set last are the ones used to generate the database schema. We simply override the OnModelCreating() method, call the base method, and then replace the values with our own:

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>  
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    { }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        base.OnModelCreating(builder);

        builder.Entity<ApplicationUser>(b =>
        {
            b.HasKey(u => u.Id);
            b.HasIndex(u => u.NormalizedUserName).HasName("user_name_index").IsUnique();
            b.HasIndex(u => u.NormalizedEmail).HasName("email_index");
            b.ToTable("asp_net_users");
        });
        // additional configuration
    }
}

Unfortunately, there's a problem with this. EF Core uses conventions to set the names for entities and properties where you don't explicitly define their schema name. In the example above, we didn't define the property names, so they will be CamelCase by default.

If we want to override these, then we need to add additional configuration for each entity property:

b.Property(u => u.EmailConfirmed).HasColumnName("email_confirmed");  

Every. Single. Property.

Talk about laborious and fragile…

Clearly we need another way. Instead of trying to explicitly replace each value, we can use a different approach, which essentially creates alternative conventions based on the existing ones.

Replacing the default conventions with snake case

The ModelBuilder instance that is passed to the OnModelCreating() method contains all the details of the database schema that will be created. By default, the database object names will all be CamelCased.

By overriding the OnModelCreating method, you can loop through each table, column, foreign key and index, and replace the existing value with its snake case equivalent. The following example shows how you can do this for every entity in the EF Core model. The ToSnakeCase() extension method (shown shortly) converts a camel case string to a snake case string.

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>  
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    { }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        base.OnModelCreating(builder);

        foreach(var entity in builder.Model.GetEntityTypes())
        {
            // Replace table names
            entity.Relational().TableName = entity.Relational().TableName.ToSnakeCase();

            // Replace column names            
            foreach(var property in entity.GetProperties())
            {
                property.Relational().ColumnName = property.Name.ToSnakeCase();
            }

            foreach(var key in entity.GetKeys())
            {
                key.Relational().Name = key.Relational().Name.ToSnakeCase();
            }

            foreach(var key in entity.GetForeignKeys())
            {
                key.Relational().Name = key.Relational().Name.ToSnakeCase();
            }

            foreach(var index in entity.GetIndexes())
            {
                index.Relational().Name = index.Relational().Name.ToSnakeCase();
            }
        }
    }
}

The ToSnakeCase() method is just a simple extension method that looks for a lower case letter or number, followed by a capital letter, and inserts an underscore. There's probably a better / more efficient way to achieve this, but it does the job!

using System.Text.RegularExpressions;

public static class StringExtensions  
{
    public static string ToSnakeCase(this string input)
    {
        if (string.IsNullOrEmpty(input)) { return input; }

        var startUnderscores = Regex.Match(input, @"^_+");
        return startUnderscores + Regex.Replace(input, @"([a-z0-9])([A-Z])", "$1_$2").ToLower();
    }
}
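
For instance, a couple of quick checks (my own examples, not taken from the migration output) show the kind of names these rules produce:

// "NormalizedUserName" -> "normalized_user_name"
Console.WriteLine("NormalizedUserName".ToSnakeCase());

// "AspNetUsers" -> "asp_net_users"
Console.WriteLine("AspNetUsers".ToSnakeCase());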

These conventions will replace all the database object names with snake case values, but there's one table that won't be modified, the actual migrations table. This is defined when you call UseNpgsql() or UseSqlServer(), and by default is called __EFMigrationsHistory. You'll rarely need to query it outside of migrations, so I won't worry about it for now.
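
If you did want to rename it to match, the relational providers let you override the history table name when you configure the provider. A minimal sketch, assuming Npgsql and the usual DbContext registration (this isn't something the migrations above rely on):

services.AddDbContext<ApplicationDbContext>(options =>
    options.UseNpgsql(
        Configuration.GetConnectionString("DefaultConnection"),
        npgsqlOptions => npgsqlOptions.MigrationsHistoryTable("__ef_migrations_history")));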

With our new conventions in place, we can add the EF Core migrations for our snake case schema. If you're starting from one of the VS or dotnet new templates, delete the default migration files created by ASP.NET Core Identity:

  • 00000000000000_CreateIdentitySchema.cs
  • 00000000000000_CreateIdentitySchema.Designer.cs
  • ApplicationDbContextModelSnapshot.cs

and create a new set of migrations using:

$ dotnet ef migrations add SnakeCaseIdentitySchema

Finally, you can apply the migrations using

$ dotnet ef database update

After the update, you can see that the database schema has been suitably updated. We have snake case table names, as well as snake case columns (you can take my word for it on the foreign keys and indexes!)


Now we have the best of both worlds - we can use EF Core for all our standard database actions, but have the option of hand crafting SQL queries without crazy amounts of ceremony.

Note, although this article focused on ASP.NET Core Identity, it is perfectly applicable to EF Core in general.

Summary

In this post, I showed how you could modify the OnModelCreating() method so that EF Core uses snake case for database objects instead of camel case. You can look through all the entities in EF Core's model, and change the table names, column names, keys, and indexes to use snake case. For more details on the default EF Core conventions, I recommend perusing the documentation!


Andrew Lock: Building ASP.NET Core 2.0 preview 2 packages on AppVeyor

Building ASP.NET Core 2.0 preview 2 packages on AppVeyor

I was recently creating a new GitHub project and I wanted to target ASP.NET Core 2.0 preview 2. I like to use AppVeyor for the CI build and for publishing to MyGet/NuGet, as I can typically just copy and paste a single file between projects to get my standard build pipeline. Unfortunately, targeting the latest preview is easier said than done! In this post, I'll show how to update your appveyor.yml file so you can build your .NET Core preview libraries on AppVeyor.

Building .NET Core projects on AppVeyor

If you are targeting a .NET Core SDK version that AppVeyor explicitly supports, then you don't really have to do anything - a simple appveyor.yml file will handle everything for you. For example, the following is a (somewhat abridged) version I use on some of my existing projects:

version: '{build}'  
branches:  
  only:
  - master
clone_depth: 1  
nuget:  
  disable_publish_on_pr: true
build_script:  
- ps: .\Build.ps1
test: off  
artifacts:  
- path: .\artifacts\**\*.nupkg
  name: NuGet
deploy:  
- provider: NuGet
  name: production
  api_key:
    secure: xxxxxxxxxxxx
  on:
    branch: master
    appveyor_repo_tag: true

There's really nothing fancy here - most of this configuration is used to define when AppVeyor should run a build, and how to deploy the NuGet package to NuGet. There's essentially no configuration of the target environment required - the build simply calls the build.ps1 file to restore and build the project.

I've switched to using Cake for most of my projects these days, often based on a script from Muhammad Rehan Saeed. If this is your first time using AppVeyor to build your projects, I suggest you take a look at my previous post on using AppVeyor.

Unfortunately, if you try and build a .NET Core 2.0 preview 2 project with this script, you'll be out of luck - I found the build failed with random, nondescript errors.


Installing .NET Core 2.0 preview 2 in AppVeyor

Luckily, AppVeyor makes it easy to install additional dependencies before running your build script - you just add additional commands under the install node:

version: '{build}'  
pull_requests:  
  do_not_increment_build_number: true
install:  
  # Run additional commands here

The tricky part is working out exactly what to run! I couldn't find any official guidance on scripting the install, so I went hunting in some of the Microsoft GitHub repos. In particular I found the JavaScriptServices repo which manually installs .NET Core. The install node at the time of writing (for preview 1) was:

install:  
   # .NET Core SDK binaries
   - ps: $urlCurrent = "https://download.microsoft.com/download/3/7/F/37F1CA21-E5EE-4309-9714-E914703ED05A/dotnet-dev-win-x64.2.0.0-preview1-005977.exe"
   - ps: $env:DOTNET_INSTALL_DIR = "$pwd\.dotnetsdk"
   - ps: mkdir $env:DOTNET_INSTALL_DIR -Force | Out-Null
   - ps: $tempFileCurrent = [System.IO.Path]::Combine([System.IO.Path]::GetTempPath(), [System.IO.Path]::GetRandomFileName())
   - ps: (New-Object System.Net.WebClient).DownloadFile($urlCurrent, $tempFileCurrent)
   - ps: Add-Type -AssemblyName System.IO.Compression.FileSystem; [System.IO.Compression.ZipFile]::ExtractToDirectory($tempFileCurrent, $env:DOTNET_INSTALL_DIR)
   - ps: $env:Path = "$env:DOTNET_INSTALL_DIR;$env:Path"

There are a lot of commands in there. Most of them we can copy and paste, but the trickiest part is that download URL - GUIDs, really?

Luckily there's an easy way to find the URL for preview 2 - you can look at the release notes for the version of .NET Core you want to target.


The link you want is the Windows 64-bit SDK binaries. Just right-click, copy the link and paste it into the appveyor.yml to give the final file. The full AppVeyor file from my recent CommonPasswordValidator repository is shown below:

version: '{build}'  
pull_requests:  
  do_not_increment_build_number: true
environment:  
  DOTNET_SKIP_FIRST_TIME_EXPERIENCE: true
  DOTNET_CLI_TELEMETRY_OPTOUT: true
install:  
  # Download .NET Core 2.0 Preview 2 SDK and add to PATH
  - ps: $urlCurrent = "https://download.microsoft.com/download/F/A/A/FAAE9280-F410-458E-8819-279C5A68EDCF/dotnet-sdk-2.0.0-preview2-006497-win-x64.zip"
  - ps: $env:DOTNET_INSTALL_DIR = "$pwd\.dotnetsdk"
  - ps: mkdir $env:DOTNET_INSTALL_DIR -Force | Out-Null
  - ps: $tempFileCurrent = [System.IO.Path]::GetTempFileName()
  - ps: (New-Object System.Net.WebClient).DownloadFile($urlCurrent, $tempFileCurrent)
  - ps: Add-Type -AssemblyName System.IO.Compression.FileSystem; [System.IO.Compression.ZipFile]::ExtractToDirectory($tempFileCurrent, $env:DOTNET_INSTALL_DIR)
  - ps: $env:Path = "$env:DOTNET_INSTALL_DIR;$env:Path"  
branches:  
  only:
  - master
clone_depth: 1  
nuget:  
  disable_publish_on_pr: true
build_script:  
- ps: .\Build.ps1
test: off  
artifacts:  
- path: .\artifacts\**\*.nupkg
  name: NuGet
deploy:  
- provider: NuGet
  server: https://www.myget.org/F/andrewlock-ci/api/v2/package
  api_key:
    secure: xxxxxx
  skip_symbols: true
  on:
    branch: master
- provider: NuGet
  name: production
  api_key:
    secure: xxxxxx
  on:
    branch: master
    appveyor_repo_tag: true

Now when AppVeyor runs, you can see it running the install steps before running the build script.


Using predictable download URLs

Shortly after battling with this issue, I took another look at the JavaScriptServices project, and noticed they'd switched to using nicer URLs for the SDK binaries. Instead of the horrible GUIDy URLs, you can use zip files stored on an Azure CDN. These URLs just require you to know the SDK version (including the build number).
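
As a rough sketch of what that install line can look like, assuming the zips live under the dotnetcli CDN with the full version number in the path (check the JavaScriptServices repo for the exact URL before relying on this):

install:
  # assumed: a predictable CDN path built from the full SDK version number
  - ps: $urlCurrent = "https://dotnetcli.azureedge.net/dotnet/Sdk/2.0.0-preview2-006497/dotnet-sdk-2.0.0-preview2-006497-win-x64.zip"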

It looks like preview 2 is the first version to be available at this URL, but later builds are also available if you want to work with the bleeding edge.

Summary

In this post I showed how you could use the install node of an appveyor.yml file to install ASP.NET Core 2.0 preview 2 into your AppVeyor build pipeline. This lets you target preview versions of .NET Core in your build pipeline, before they're explicitly supported by AppVeyor.


Andrew Lock: Creating a validator to check for common passwords in ASP.NET Core Identity

Creating a validator to check for common passwords in ASP.NET Core Identity

In my last post, I showed how you can create a custom validator for ASP.NET Core. In this post, I introduce a package that lets you validate that a password is not one of the most common passwords users choose.

You can find the package on GitHub and on NuGet, and can install it using dotnet add package CommonPasswordValidator. Currently, it supports ASP.NET Core 2.0 preview 2.

Full disclosure, this post is 100% inspired by the codinghorror.com article by Jeff Atwood on how they validate passwords in Discourse. If you haven't read it yet, do it now!

As Jeff describes in the appropriately named article Password Rules Are Bullshit, password rules can be a real pain. Obviously in theory, password rules make sense, but reality can be a bit different. The default Identity templates require:

  • Passwords must have at least one lowercase ('a'-'z')
  • Passwords must have at least one uppercase ('A'-'Z')
  • Passwords must have at least one digit ('0'-'9')
  • Passwords must have at least one non alphanumeric character

All these rules will theoretically increase the entropy of any passwords a user enters. But you just know that's not really what happens.

All it means is that instead of entering password, they enter Password1!

And on top of that, if you're using a password manager, these password rules can get in the way. So your 40 character random password happens not to have a digit this time? It's almost certainly still fine... should you really have to generate a new password?

Instead, Jeff Atwood offers 5 pieces of advice when designing your password validation:

  1. Password rules are bullshit - These rarely achieve their goal, don't make the passwords of average users better, and penalise users using password managers.

    You can easily disable these composition rules in ASP.NET Core Identity via the password options.

  2. Enforce a minimum Unicode password length - Length is an easy rule for users to grasp, and in general, a longer password will be more secure than a short one

    You can similarly set the minimum length in ASP.NET Core Identity using the options pattern, e.g. options.Password.RequiredLength = 10

  3. Check for common passwords - There are plenty of stats on the terrible password choices users make when left to their own devices, and you can create your own rules by checking out the password lists available online. For example, 30% of users have a password from the top 10,000 most common passwords!

    In this post I'll describe a custom validator you can add to your ASP.NET Core Identity project to prevent users using the most common passwords

  4. Check for basic entropy - Even with a length requirement, and checking for common passwords, users can make terrible password choices like 9999999999. A simple approach to tackling this is to require a minimum number of unique characters.

    In ASP.NET Core Identity 2.0, you can require a minimum number of unique characters using options.Password.RequiredUniqueChars = 6

  5. Check for special case passwords - Users shouldn't be allowed to use their username, email or other obvious values as their password.

You can create custom validators for ASP.NET Core Identity, as I showed in my previous post.

Whether you agree 100% with these rules doesn't really matter, but I think most people will agree with at least a majority of them. Either way, preventing the most common passwords is somewhat of a no-brainer.

There's no built-in way of achieving this, but thanks to ASP.NET Core Identity's extensibility, we can create a custom validator instead.

Creating a validator to check for common passwords

ASP.NET Core Identity lets you register custom password validators. These are executed when a user registers on your site, or changes their password, and let you apply additional constraints to the password.

In my last post, I showed how to create custom validators. Creating a validator to check for common passwords is pretty simple - we load the list of forbidden passwords into a HashSet, and check that the user's password is not one of them:

public class Top100PasswordValidator<TUser> : IPasswordValidator<TUser>  
        where TUser : class
{
    private static HashSet<string> Passwords { get; } = PasswordLists.Top100Passwords;

    public Task<IdentityResult> ValidateAsync(UserManager<TUser> manager,
                                                TUser user,
                                                string password)
    {
        if(Passwords.Contains(password))
        {
            var result = IdentityResult.Failed(new IdentityError
            {
                Code = "CommonPassword",
                Description = "The password you chose is too common."
            });
            return Task.FromResult(result);
        }
        return Task.FromResult(IdentityResult.Success);
    }
}

This validator is pretty standard. We have a list of passwords that you are not allowed to use, stored in the static HashSet<string>. ASP.NET Core Identity will call ValidateAsync when a new user registers, passing in the new user object, and the new password.

As we don't need to access the user object itself, we can make this validator completely generic to TUser, instead of limiting it to IdentityUser<TKey> as we did in my last post.

There are plenty of different password lists we could choose from, so I chose to implement a few variations based on the 10 million passwords list from 2016, depending on how restrictive you want to be.

  • Block passwords in the top 100 most common
  • Block passwords in the top 500 most common
  • Block passwords in the top 1,000 most common
  • Block passwords in the top 10,000 most common
  • Block passwords in the top 100,000 most common

Each of these password lists is stored as an embedded resource in the NuGet package. In the new .csproj file format, you do this by removing the file from the normal wildcard inclusion and marking it as an EmbeddedResource:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <None Remove="PasswordLists\10_million_password_list_top_100.txt" />
  </ItemGroup>

  <ItemGroup>
    <EmbeddedResource Include="PasswordLists\10_million_password_list_top_100.txt" />
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.Identity" Version="2.0.0-preview2-final" />
  </ItemGroup>

</Project>  

With the lists embedded in the dll, we can simply load the passwords from the embedded resource into a HashSet.

Loading a list of strings from an embedded resource

You can read an embedded resource as a stream from the assembly using the GetManifestResourceStream() method on the Assembly type. I created a small helper class that loads the embedded file from the assembly, reads it line by line, and adds the password to the HashSet (using a case-insensitive string comparer).

using System;
using System.Collections.Generic;
using System.IO;
using System.Reflection;

internal static class PasswordLists  
{
    private static HashSet<string> LoadPasswordList(string resourceName)
    {
        HashSet<string> hashset;

        var assembly = typeof(PasswordLists).GetTypeInfo().Assembly;
        using (var stream = assembly.GetManifestResourceStream(resourceName))
        {
            using (var streamReader = new StreamReader(stream))
            {
                hashset = new HashSet<string>(
                    GetLines(streamReader),
                    StringComparer.OrdinalIgnoreCase);
            }
        }
        return hashset;
    }

    private static IEnumerable<string> GetLines(StreamReader reader)
    {
        while (!reader.EndOfStream)
        {
            yield return reader.ReadLine();
        }
    }
}

NOTE: When you pass in the resourceName to load, it must be fully qualified. The name is based on the assembly's default namespace and the subfolder containing the resource file.
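
For example, the Top100Passwords property used by the validator earlier could be exposed from PasswordLists along these lines - the root namespace here is an assumption based on the package name, and the folder name comes from the .csproj snippet above:

// hypothetical wiring inside PasswordLists - the real resource name depends on the project's default namespace
internal static HashSet<string> Top100Passwords { get; } =
    LoadPasswordList("CommonPasswordValidator.PasswordLists.10_million_password_list_top_100.txt");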

Adding the custom validator to ASP.NET Core Identity

That's all there is to the validator itself. You can add it to the ASP.NET Core Identity validators collection using the AddPasswordValidator<>() method. For example:

services.AddIdentity<ApplicationUser, IdentityRole>()  
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders()
    .AddPasswordValidator<Top100PasswordValidator<ApplicationUser>>();

It's somewhat of a convention to create helper extension methods in ASP.NET Core, so we can easily add an additional extension that simplifies the above slightly:

public static class IdentityBuilderExtensions  
{        
    public static IdentityBuilder AddTop100PasswordValidator<TUser>(this IdentityBuilder builder) where TUser : class
    {
        return builder.AddPasswordValidator<Top100PasswordValidator<TUser>>();
    }
}

With this extension, you can add the validator using the following:

services.AddIdentity<ApplicationUser, IdentityRole>()  
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders()
    .AddTop100PasswordValidator<ApplicationUser>();

With the validator in place, if a user tries to use a password that's too common, they'll get a standard warning when registering on your site.


Summary

This post was based on the suggestion by Jeff Atwood that we should limit password composition rules, focus on length, and ensure users can't choose common passwords.

ASP.NET Core Identity lets you add custom validators. This post showed how you could create a validator that ensures the entered password isn't in the top 100 - 100,000 of the 10 million most common passwords.

You can view the source code for the validator on GitHub, or you can install the NuGet package using the command

dotnet add package CommonPasswordValidator  

Currently, the package targets .NET Core 2.0 preview 2. If you have any comments, suggestions, or bugs, please raise an issue or leave a comment! Thanks


Anuraj Parameswaran: Send Mail Using SendGrid In .NET Core

This post is about sending emails using Send Grid API in .NET Core. SendGrid is a cloud-based SMTP provider that allows you to send email without having to maintain email servers. SendGrid manages all of the technical details, from scaling the infrastructure to ISP outreach and reputation monitoring to whitelist services and real time analytics.
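
As a flavour of what that looks like with the SendGrid C# client (a minimal sketch using the 9.x library; the API key and addresses below are placeholders):

using SendGrid;
using SendGrid.Helpers.Mail;
using System.Threading.Tasks;

public class EmailSender
{
    public async Task SendAsync(string apiKey)
    {
        // the client talks to the SendGrid Web API, no SMTP server required
        var client = new SendGridClient(apiKey);

        var message = MailHelper.CreateSingleEmail(
            new EmailAddress("noreply@example.com", "My App"),
            new EmailAddress("user@example.com"),
            "Welcome!",
            plainTextContent: "Thanks for signing up.",
            htmlContent: "<strong>Thanks for signing up.</strong>");

        var response = await client.SendEmailAsync(message);
    }
}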


Damien Bowden: Implementing Two-factor authentication with IdentityServer4 and Twilio

This article shows how to implement two-factor authentication using Twilio and IdentityServer4 with ASP.NET Core Identity. In Microsoft's Two-factor authentication with SMS documentation, Twilio and ASPSMS are promoted, but any SMS provider can be used.

Code: https://github.com/damienbod/AspNetCoreID4External

2017-09-23 Updated to ASP.NET Core 2.0

Setting up Twilio

Create an account and login to https://www.twilio.com/

Now create a new phone number and use the Twilio documentation to set up your account to send SMS messages. You need the Account SID, the Auth Token and the phone number, as these are required in the application.

The phone number can be configured here:
https://www.twilio.com/console/phone-numbers/incoming

Adding the SMS support to IdentityServer4

Add the Twilio Nuget package to the IdentityServer4 project.

<PackageReference Include="Twilio" Version="5.6.5" />

The Twilio settings should be kept secret, so these configuration properties are added to the appsettings.json file with dummy values. These can then be overridden for the deployments.

"TwilioSettings": {
  "Sid": "dummy",
  "Token": "dummy",
  "From": "dummy"
}

A configuration class is then created so that the settings can be added to the DI.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace IdentityServerWithAspNetIdentity.Services
{
    public class TwilioSettings
    {
        public string Sid { get; set; }
        public string Token { get; set; }
        public string From { get; set; }
    }
}

Now the user secrets configuration needs to be set up on your dev PC. Right click the IdentityServer4 project and add the user secrets with the proper values, which you can get from your Twilio account.

{
  "MicrosoftClientId": "your_secret..",
  "MircosoftClientSecret":  "your_secret..",
  "TwilioSettings": {
    "Sid": "your_secret..",
    "Token": "your_secret..",
    "From": "your_secret..",
  }
}

The configuration class is then added to the DI in the Startup class ConfigureServices method.

var twilioSettings = Configuration.GetSection("TwilioSettings");
services.Configure<TwilioSettings>(twilioSettings);

Now the TwilioSettings can be added to the AuthMessageSender class which is defined in the MessageServices file, if using the IdentityServer4 samples.

private readonly TwilioSettings _twilioSettings;

public AuthMessageSender(ILogger<AuthMessageSender> logger, IOptions<TwilioSettings> twilioSettings)
{
	_logger = logger;
	_twilioSettings = twilioSettings.Value;
}

This class is also added to the DI in the startup class.

services.AddTransient<ISmsSender, AuthMessageSender>();

Now the TwilioClient can be set up to send the SMS in the SendSmsAsync method.

public Task SendSmsAsync(string number, string message)
{
	// Plug in your SMS service here to send a text message.
	_logger.LogInformation("SMS: {number}, Message: {message}", number, message);
	var sid = _twilioSettings.Sid;
	var token = _twilioSettings.Token;
	var from = _twilioSettings.From;
	TwilioClient.Init(sid, token);
	MessageResource.CreateAsync(new PhoneNumber(number),
		from: new PhoneNumber(from),
		body: message);
	return Task.FromResult(0);
}
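
For reference, the Twilio types used above come from the following namespaces in the 5.x library (worth double-checking against the package version you reference):

using Twilio;                               // TwilioClient
using Twilio.Rest.Api.V2010.Account;        // MessageResource
using Twilio.Types;                         // PhoneNumber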

The SendCode.cshtml view can now be changed to send the SMS with the style and layout you prefer.

<form asp-controller="Account" asp-action="SendCode" asp-route-returnurl="@Model.ReturnUrl" method="post" class="form-horizontal">
    <input asp-for="RememberMe" type="hidden" />
    <input asp-for="SelectedProvider" type="hidden" value="Phone" />
    <input asp-for="ReturnUrl" type="hidden" value="@Model.ReturnUrl" />
    <div class="row">
        <div class="col-md-8">
            <button type="submit" class="btn btn-default">Send a verification code using SMS</button>
        </div>
    </div>
</form>

In the VerifyCode.cshtml view, the ReturnUrl model property must be added to the form as a hidden input, otherwise your client will not be redirected back to the calling app.

<form asp-controller="Account" asp-action="VerifyCode" asp-route-returnurl="@ViewData["ReturnUrl"]" method="post" class="form-horizontal">
    <div asp-validation-summary="All" class="text-danger"></div>
    <input asp-for="Provider" type="hidden" />
    <input asp-for="RememberMe" type="hidden" />
    <input asp-for="ReturnUrl" type="hidden" value="@Model.ReturnUrl" />
    <h4>@ViewData["Status"]</h4>
    <hr />
    <div class="form-group">
        <label asp-for="Code" class="col-md-2 control-label"></label>
        <div class="col-md-10">
            <input asp-for="Code" class="form-control" />
            <span asp-validation-for="Code" class="text-danger"></span>
        </div>
    </div>
    <div class="form-group">
        <div class="col-md-offset-2 col-md-10">
            <div class="checkbox">
                <input asp-for="RememberBrowser" />
                <label asp-for="RememberBrowser"></label>
            </div>
        </div>
    </div>
    <div class="form-group">
        <div class="col-md-offset-2 col-md-10">
            <button type="submit" class="btn btn-default">Submit</button>
        </div>
    </div>
</form>

Testing the application

If using an existing client, you need to update the Identity data in the database. Each user requires that the TwoFactorEnabled field is set to true, and a mobile number needs to be set in the phone number field (or any phone which can accept SMS).

Now login with this user:

The user is redirected to the send SMS page. Click the send SMS button. This sends an SMS to the phone number defined in Identity for the user trying to authenticate.

You should receive an SMS. Enter the code in the verify view. If no SMS was sent, check your Twilio account logs.

After a successful code validation, the user is redirected back to the consent page for the client application. If not redirected, the return url was not set in the model.

Links:

https://docs.microsoft.com/en-us/aspnet/core/security/authentication/2fa

https://www.twilio.com/

http://docs.identityserver.io/en/release/

https://www.twilio.com/use-cases/two-factor-authentication



Damien Bowden: Adding an external Microsoft login to IdentityServer4

This article shows how to implement a Microsoft Account as an external provider in an IdentityServer4 project using ASP.NET Core Identity with a SQLite database.

Code https://github.com/damienbod/AspNetCoreID4External

2017-09-23 Updated to ASP.NET Core 2.0

Setting up the App Platform for the Microsoft Account

To setup the app, login using your Microsoft account and open the My Applications link

https://apps.dev.microsoft.com/?mkt=en-gb#/appList

Click the ‘Add an app’ button

Give the application a name and add your email. This app is called ‘microsoft_id4_damienbod’

After you click the create button, you need to generate a new password. Save this somewhere for the application configuration; it will be the client secret when configuring the application.

Now add a new platform. Choose the Web type.

Now add the redirect URL for your application. This will be https://YOUR_URL/signin-microsoft

Add the permissions as required

Application configuration

Note: The samples are at present not updated to ASP.NET Core 2.0

Clone the IdentityServer4 samples and use the 6_AspNetIdentity project from the quickstarts.
Add the Microsoft.AspNetCore.Authentication.MicrosoftAccount package using Nuget, as well as the required ASP.NET Core Identity and EF Core packages, to the IdentityServer4 server project.

The application uses SQLite with Identity. This is configured in the Startup class in the ConfigureServices method.

services.AddDbContext<ApplicationDbContext>(options =>
	   options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));

services.AddIdentity<ApplicationUser, IdentityRole>()
	.AddEntityFrameworkStores<ApplicationDbContext>()
	.AddDefaultTokenProviders()
	.AddIdentityServer();

Now the AddMicrosoftAccount extension method can be used to add the Microsoft Account external provider in the ConfigureServices method of the Startup class. The SignInScheme is set to “Identity.External” because the application is using ASP.NET Core Identity. The ClientId is the Id of the ‘microsoft_id4_damienbod’ app which was configured on the My Applications website. The ClientSecret is the generated password.

services.AddAuthentication()
	 .AddMicrosoftAccount(options => {
		  options.ClientId = _clientId;
		  options.SignInScheme = "Identity.External";
		  options.ClientSecret = _clientSecret;
	  });

services.AddMvc();

...

services.AddIdentityServer()
	 .AddSigningCredential(cert)
	 .AddInMemoryIdentityResources(Config.GetIdentityResources())
	 .AddInMemoryApiResources(Config.GetApiResources())
	 .AddInMemoryClients(Config.GetClients())
	 .AddAspNetIdentity<ApplicationUser>()
	 .AddProfileService<IdentityWithAdditionalClaimsProfileService>();

The Configure method also needs to be set up correctly.

app.UseStaticFiles();

app.UseIdentityServer();
app.UseAuthentication();

app.UseMvc(routes =>
{
	routes.MapRoute(
		name: "default",
		template: "{controller=Home}/{action=Index}/{id?}");
});

The application can now be tested. An Angular client using OpenID Connect sends a login request to the server. The ClientId and the ClientSecret are saved using user secrets, so that the password is not committed to the source code.

Click the Microsoft button to login.

This redirects the user to the Microsoft Account login for the microsoft_id4_damienbod application.

After a successful login, the user is redirected to the consent page.

Click yes, and the user is redirected back to the IdentityServer4 application. If it’s a new user, a register page will be opened.

Click register and the ID4 consent page is opened.

Then the application opens.

What’s nice about the IdentityServer4 application is that it’s a simple ASP.NET Core application with standard Views and Controllers. This makes it really easy to change the flow, for example, if a user is not allowed to register or whatever.

Links

https://docs.microsoft.com/en-us/azure/app-service-mobile/app-service-mobile-how-to-configure-microsoft-authentication

http://docs.identityserver.io/en/release/topics/signin_external_providers.html



Anuraj Parameswaran: ASP.NET Core Gravatar Tag Helper

This post is about creating a tag helper in ASP.NET Core for displaying Gravatar images based on the email address. Your Gravatar is an image that follows you from site to site appearing beside your name when you do things like comment or post on a blog.
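
At its core, such a tag helper just needs to turn the email address into a Gravatar image URL. A minimal sketch of that piece, assuming the standard Gravatar convention of an MD5 hash over the trimmed, lower-cased address:

using System.Linq;
using System.Security.Cryptography;
using System.Text;

public static class Gravatar
{
    public static string GetUrl(string email, int size = 80)
    {
        using (var md5 = MD5.Create())
        {
            // Gravatar hashes the trimmed, lower-cased email address
            var hashBytes = md5.ComputeHash(Encoding.UTF8.GetBytes(email.Trim().ToLowerInvariant()));
            var hash = string.Concat(hashBytes.Select(b => b.ToString("x2")));
            return $"https://www.gravatar.com/avatar/{hash}?s={size}";
        }
    }
}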


Dominick Baier: Authorization is hard! Slides and Video from NDC Oslo 2017

A while ago I wrote a controversial article about the problems that can arise when mixing authentication and authorization systems – especially when using identity/access tokens to transmit authorization data – you can read it here.

In the meanwhile Brock and I sat down to prototype a possible solution (or at least an improvement) to the problem and presented it to various customers and at conferences.

Also many people asked me for a more detailed version of my blog post – and finally there is now a recording of our talk from NDC – video here – and slides here. HTH!

 


Filed under: .NET Security, ASP.NET Core, IdentityServer, OAuth, OpenID Connect, WebAPI


Damien Bowden: Using Protobuf Media Formatters with ASP.NET Core

This article shows how to use Protobuf with an ASP.NET Core MVC application. The API uses the WebApiContrib.Core.Formatter.Protobuf Nuget package to add support for Protobuf. This package uses the protobuf-net Nuget package from Marc Gravell, which makes it really easy to use a fast serializer and deserializer in your APIs.

Code: https://github.com/damienbod/AspNetCoreWebApiContribProtobufSample

History

2017-08-19 Updated to ASP.NET Core 2.0, WebApiContrib.Core.Formatter.Protobuf 2.0

Setting up the ASP.NET Core MVC API

To use Protobuf with ASP.NET Core, the WebApiContrib.Core.Formatter.Protobuf Nuget package can be used in your project. You can add this using the Nuget manager in Visual Studio.

Or you can add it directly in your project file.

<PackageReference Include="WebApiContrib.Core.Formatter.Protobuf" Version="2.0.0" />

Now the formatters can be added in the Startup file.

public void ConfigureServices(IServiceCollection services)
{
	services.AddMvc()
		.AddProtobufFormatters();
}

A model now needs to be defined. The protobuf-net attributes are used to define the model class.

using ProtoBuf;

namespace Model
{
    [ProtoContract]
    public class Table
    {
        [ProtoMember(1)]
        public string Name {get;set;}

        [ProtoMember(2)]
        public string Description { get; set; }


        [ProtoMember(3)]
        public string Dimensions { get; set; }
    }
}

The ASP.NET Core MVC API can then be used with the Table class.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Model;

namespace AspNetCoreWebApiContribProtobufSample.Controllers
{
    [Route("api/[controller]")]
    public class TablesController : Controller
    {
        // GET api/tables
        [HttpGet]
        public IActionResult Get()
        {
            List<Table> tables = new List<Table>
            {
                new Table{Name= "jim", Dimensions="190x80x90", Description="top of the range from Migro"},
                new Table{Name= "jim large", Dimensions="220x100x90", Description="top of the range from Migro"}
            };

            return Ok(tables);
        }

        // GET api/values/5
        [HttpGet("{id}")]
        public IActionResult Get(int id)
        {
            var table = new Table { Name = "jim", Dimensions = "190x80x90", Description = "top of the range from Migro" };
            return Ok(table);
        }

        // POST api/values
        [HttpPost]
        public IActionResult Post([FromBody]Table value)
        {
            var got = value;
            return Created("api/tables", got);
        }
    }
}

Creating a simple Protobuf HttpClient

An HttpClient using the same Table class with the protobuf-net definitions can be used to access the API and request the data with the “application/x-protobuf” header.

static async System.Threading.Tasks.Task<Table[]> CallServerAsync()
{
	var client = new HttpClient();

	var request = new HttpRequestMessage(HttpMethod.Get, "http://localhost:31004/api/tables");
	request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/x-protobuf"));
	var result = await client.SendAsync(request);
	var tables = ProtoBuf.Serializer.Deserialize<Table[]>(await result.Content.ReadAsStreamAsync());
	return tables;
}

The data is returned in the response using Protobuf serialization.

If you want to post some data using Protobuf, you can serialize the data to Protobuf and post it to the server using the HttpClient. This example uses “application/x-protobuf”.

static async System.Threading.Tasks.Task<Table> PostStreamDataToServerAsync()
{
	HttpClient client = new HttpClient();
	client.DefaultRequestHeaders
		  .Accept
		  .Add(new MediaTypeWithQualityHeaderValue("application/x-protobuf"));

	HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Post,
		"http://localhost:31004/api/tables");

	MemoryStream stream = new MemoryStream();
	ProtoBuf.Serializer.Serialize<Table>(stream, new Table
	{
		Name = "jim",
		Dimensions = "190x80x90",
		Description = "top of the range from Migro"
	});

	request.Content = new ByteArrayContent(stream.ToArray());

	// HTTP POST with Protobuf Request Body
	var responseForPost = client.SendAsync(request).Result;

	var resultData = ProtoBuf.Serializer.Deserialize<Table>(await responseForPost.Content.ReadAsStreamAsync());
	return resultData;
}

Links:

https://www.nuget.org/packages/WebApiContrib.Core.Formatter.Protobuf/

https://github.com/mgravell/protobuf-net



Dominick Baier: Techorama 2017

Again Techorama was an awesome conference – kudos to the organizers!

Seth and Channel9 recorded my talk and also did an interview – so if you couldn’t be there in person, there are some updates about IdentityServer4 and identity in general.


Filed under: .NET Security, ASP.NET Core, IdentityServer, OAuth, OpenID Connect, WebAPI


Ben Foster: Applying IP Address restrictions in AWS API Gateway

Recently I've been exploring the features of the AWS API Gateway to see if it's a viable routing solution for some of our microservices hosted in ECS.

One of these services is a new onboarding API that we wish to make available to a trusted third party. To keep the integration as simple as possible we opted for API key based authentication.

In addition to supporting API Key authentication, API Gateway also allows you to configure plans with usage policies, which met our second requirement, to provide rate limits on this API.

As an additional level of security, we decided to whitelist the IP Addresses that could hit the API. The way you configure this is not quite what I expected since it's not a setting directly within API Gateway but instead done using IAM policies.

Below is an example API within API Gateway. I want to apply an IP Address restriction to the webhooks resource:

The first step is to configure your resource Authorization settings to use IAM. Select the resource method (in my case, ANY) and then AWS_IAM in the Authorization select list:

Next go to IAM and create a new Policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "execute-api:Invoke"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "xxx.xx.xx.xx/32"
                }
            },
            "Resource": "arn:aws:execute-api:*:*:*"
        }
    ]
}

Note that this policy allows invocation of all resources within all APIs in API Gateway from the specified IP Address. You'll want to restrict this to a specific API or resource, using the format:

arn:aws:execute-api:region:account-id:api-id/stage/METHOD_HTTP_VERB/Resource-path
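
For instance, to scope the policy down to just the webhooks resource from the example above, the Resource entry might look something like this (the region, account id, API id and stage below are placeholders, not values from my setup):

"Resource": "arn:aws:execute-api:eu-west-1:111122223333:a1b2c3d4e5/prod/*/webhooks"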

It was my assumption that I would attach this policy to my API Gateway role and, hey presto, I'd have my IP restriction in place. However, the policy is instead applied to a user, who then needs to sign the request using their access keys.

This can be tested using Postman.

With this done you should now be able to test your IP address restrictions. One thing I did notice is that policy changes do not seem to take effect immediately - instead I had to disable and re-enable IAM authorization on the resource after changing my policy.

Final thoughts

AWS API Gateway is a great service, but I find it odd that it doesn't support what I would class as a standard feature of API gateways. Given that the API I was testing is only going to be used by a single client, creating an IAM user isn't the end of the world; however, I wouldn't want to do this for APIs with a large number of clients.

Finally, in order to make use of usage plans you need to require an API key. This means that to achieve IP restrictions and rate limiting, clients will need to send two authentication tokens, which isn't an ideal integration experience.

When I first started my investigation, the architecture I wanted to achieve was API Gateway routing traffic to the services sitting behind our existing ELBs.

Unfortunately, running API Gateway in front of ELB still requires your load balancers to be publicly accessible, which makes the security features void if a client can figure out your ELB address. It seems API Gateway is geared more towards Lambda than ELB, so it looks like we'll need to consider other options for now.


Dominick Baier: Financial APIs and IdentityServer

Right now there is quite some movement in the financial sector towards APIs and “collaboration” scenarios. The OpenID Foundation started a dedicated working group on securing Financial APIs (FAPIs) and the upcoming Revised Payment Service EU Directive (PSD2 – official document, vendor-based article) will bring quite some change to how technology is used at banks as well as to banking itself.

Googling for PSD2 shows quite a lot of ads and sponsored search results, which tells me that there is money to be made (pun intended).

We have a couple of customers that asked me about FAPIs and how IdentityServer can help them in this new world. In short, the answer is that both FAPIs in the OIDF sense and PSD2 are based on tokens and are either inspired by OpenID Connect/OAuth 2 or even tightly coupled with them. So moving to these technologies is definitely the first step.

The purpose of the OIDF “Financial API Part 1: Read-only API security profile” is to select a subset of the possible OpenID Connect options for clients and providers that have suitable security for the financial sector. Let’s have a look at some of those for OIDC providers (edited):

  • shall support both public and confidential clients;
  • shall authenticate the confidential client at the Token Endpoint using one of the following methods:
    • TLS mutual authentication [TLSM];
    • JWS Client Assertion using the client_secret or a private key as specified in section 9 of [OIDC];
  • shall require a key of size 2048 bits or larger if RSA algorithms are used for the client authentication;
  • shall require a key of size 160 bits or larger if elliptic curve algorithms are used for the client authentication;
  • shall support PKCE [RFC7636]
  • shall require Redirect URIs to be pre-registered;
  • shall require the redirect_uri parameter in the authorization request;
  • shall require the value of redirect_uri to exactly match one of the pre-registered redirect URIs;
  • shall require user authentication at LoA 2 as defined in [X.1254] or more;
  • shall require explicit consent by the user to authorize the requested scope if it has not been previously authorized;
  • shall return the token response as defined in 4.1.4 of [RFC6749];
  • shall return the list of allowed scopes with the issued access token;
  • shall provide opaque non-guessable access tokens with a minimum of 128 bits as defined in section 5.1.4.2.2 of [RFC6819].
  • should provide a mechanism for the end-user to revoke access tokens and refresh tokens granted to a Client as in 16.18 of [OIDC].
  • shall support the authentication request as in Section 3.1.2.1 of [OIDC];
  • shall issue an ID Token in the token response when openid was included in the requested scope as in Section 3.1.3.3 of [OIDC] with its sub value corresponding to the authenticated user and optional acr value in ID Token.

So to summarize, these are mostly best practices for implementing OIDC and OAuth 2 – just formalized. I am sure there will also be a certification process around that at some point.

Interesting to note is the requirement for PKCE and the removal of plain client secrets in favour of mutual TLS and client JWT assertions. IdentityServer supports all of the above requirements.

In contrast, the “Read and Write Profile” (currently a working draft) steps up security significantly by demanding proof of possession tokens via token binding, requiring signed authentication requests and encrypted identity tokens, and limiting the authentication flow to hybrid only. The current list from the draft:

  • shall require the request or request_uri parameter to be passed as a JWS signed JWT as in clause 6 of OIDC;
  • shall require the response_type values code id_token or code id_token token;
  • shall return ID Token as a detached signature to the authorization response;
  • shall include state hash, s_hash, in the ID Token to protect the state value;
  • shall only issue holder of key authorization code, access token, and refresh token for write operations;
  • shall support OAUTB or MTLS as a holder of key mechanism;
  • shall support user authentication at LoA 3 or greater as defined in X.1254;
  • shall support signed and encrypted ID Tokens

Both profiles also have increased security requirements for clients – which is subject of a future post.

In short – exciting times ahead and we are constantly improving IdentityServer to make it ready for these new scenarios. Feel free to get in touch if you are interested.


Filed under: .NET Security, ASP.NET Core, IdentityServer, OAuth, OpenID Connect, Uncategorized, WebAPI


Dominick Baier: dotnet new Templates for IdentityServer4

The dotnet CLI includes a templating engine that makes it pretty straightforward to create your own project templates (see this blog post for a good intro).

This new repo is the home for all IdentityServer4 templates to come – right now they are pretty basic, but good enough to get you started.

The repo includes three templates right now:

dotnet new is4

Creates a minimal IdentityServer4 project without a UI, with just one API and one client.

dotnet new is4ui

Adds the quickstart UI to the current project (can be combined with is4)

dotnet new is4inmem

Adds a boilerplate IdentityServer with UI, test users and sample clients and resources

See the readme for installation instructions.
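
As a quick sketch of the workflow (the package id here is an assumption - the readme has the definitive install command):

dotnet new -i IdentityServer4.Templates
dotnet new is4inmem -n MyIdentityServer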



Filed under: .NET Security, ASP.NET Core, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: New in IdentityServer4: Events

Well – not really new – but redesigned.

IdentityServer4 has two diagnostics facilities – logging and events. While logging is more like low level “printf” style – events represent higher level information about certain logical operations in IdentityServer (think Windows security event log).

Events are structured data and include event IDs, success/failure information, activity IDs, IP addresses, categories and event-specific details. This makes it easy to query and analyze them and extract useful information that can be used for further processing.

Events work great with event stores like ELK, Seq or Splunk.
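
Raising events is opt-in and configured on the IdentityServer options - roughly along these lines (a minimal sketch; check the docs for the exact option names in your version):

services.AddIdentityServer(options =>
{
    options.Events.RaiseSuccessEvents = true;
    options.Events.RaiseFailureEvents = true;
    options.Events.RaiseErrorEvents = true;
    options.Events.RaiseInformationEvents = true;
});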


Find more details in our docs.


Filed under: ASP.NET Core, IdentityServer, OAuth, OpenID Connect, Uncategorized, WebAPI


Dominick Baier: NDC London 2017

As always – NDC was a very good conference. Brock and I did a workshop, two talks and an interview. Here are the relevant links:

Check our website for more training dates.


Filed under: .NET Security, ASP.NET, IdentityModel, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: IdentityModel.OidcClient v2 & the OpenID RP Certification

A couple of weeks ago I started re-writing (and re-designing) my OpenID Connect & OAuth 2 client library for native applications. The library follows the guidance from the OpenID Connect and OAuth 2.0 for native Applications specification.

Main features are:

  • Support for OpenID Connect authorization code and hybrid flow
  • Support for PKCE
  • NetStandard 1.4 library, which makes it compatible with x-plat .NET Core, desktop .NET, Xamarin iOS & Android (and UWP soon)
  • Configurable policy to lock down security requirements (e.g. requiring at_hash or c_hash, policies around discovery etc.)
  • either stand-alone mode (request generation and response processing - see the sketch after this list) or support for pluggable (system) browser implementations
  • support for pluggable logging via .NET ILogger
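
In stand-alone mode that roughly looks like this (a sketch with placeholder authority/client values; the exact option and result property names may differ slightly between builds):

// placeholder authority / client values - substitute your own
var options = new OidcClientOptions
{
    Authority = "https://demo.identityserver.io",
    ClientId = "native.code",
    RedirectUri = "myapp://callback",
    Scope = "openid profile"
};

var client = new OidcClient(options);

// stand-alone mode: generate the authorize request yourself...
var state = await client.PrepareLoginAsync();

// ...open state.StartUrl in a (system) browser, then hand the redirect
// the browser was sent to back to the library for validation
var responseUrl = "myapp://callback?code=...&state=...";  // captured from the browser
var result = await client.ProcessResponseAsync(responseUrl, state);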

In addition, starting with v2 – OidcClient is also now certified by the OpenID Foundation for the basic and config profile.


It also passes all conformance tests for the code id_token grant type (hybrid flow) – but since I don’t support the other hybrid flow combinations (e.g. code token or code id_token token), I couldn’t certify for the full hybrid profile.

For maximum transparency, I checked in my conformance test runner along with the source code. Feel free to try/verify yourself.

The latest version of OidcClient is the dalwhinnie release (courtesy of my whisky semver scheme). Source code is here.

I am waiting a couple more days for feedback – and then I will release the final 2.0.0 version. If you have some spare time, please give it a try (there’s a console client included and some more samples here <use the v2 branch for the time being>). Thanks!


Filed under: .NET Security, IdentityModel, OAuth, OpenID Connect, WebAPI


Dominick Baier: Platforms where you can run IdentityServer4

There is some confusion about where, and on which platform/OS you can run IdentityServer4 – or more generally speaking: ASP.NET Core.

IdentityServer4 is ASP.NET Core middleware – and ASP.NET Core (despite its name) runs on the full .NET Framework 4.5.x and upwards or .NET Core.

If you are using the full .NET Framework you are tied to Windows – but have the advantage of using a platform that you (and your devs, customers, support staff etc) already know well. It is just a .NET based web app at this point.

If you are using .NET Core, you get the benefits of the new stack including side-by-side versioning and cross-platform. But there is a learning curve involved getting to know .NET Core and its tooling.


Filed under: .NET Security, ASP.NET, IdentityServer, OpenID Connect, WebAPI


Henrik F. Nielsen: ASP.NET WebHooks V1 RTM (Link)

ASP.NET WebHooks V1 RTM was announced a little while back. WebHooks provide a simple pub/sub model for wiring together Web APIs and services with your code. A WebHook can be used to get notified when a file has changed in Dropbox, a code change has been committed to GitHub, a payment has been initiated in PayPal, a card has been created in Trello, and much more. When subscribing, you provide a callback URI where you want to be notified. When an event occurs, an HTTP POST request is sent to your callback URI with information about what happened so that your Web app can act accordingly. WebHooks happen without polling and with no need to hold open a network connection while waiting for notifications.

Microsoft ASP.NET WebHooks makes it easier to both send and receive WebHooks as part of your ASP.NET application.

In addition to hosting your own WebHook server, ASP.NET WebHooks are part of Azure Functions where you can process WebHooks without hosting or managing your own server! You can even go further and host an Azure Bot Service using Microsoft Bot Framework for writing cool bots talking to your customers!

The WebHook code targets ASP.NET Web API 2 and ASP.NET MVC 5, and is available as Open Source on GitHub, and as Nuget packages. For feedback, fixes, and suggestions, you can use GitHub, StackOverflow using the tag asp.net-webhooks, or send me a tweet.

For the full announcement, please see the blog Announcing Microsoft ASP.NET WebHooks V1 RTM.

Have fun!

Henrik

