Darrel Miller: OpenAPI is not what I thought

Sometimes I do my best thinking in the car, and today was an excellent example of this. I had a phone call today with the digital agency Authentic, who have been hired to help you stop saying Swagger when you mean OpenAPI. I’m only partially kidding. They asked me some hard questions about why I got involved in the OpenAPI Initiative and about experiences I have had with OpenAPI delivering value. Apparently this started a chain reaction of noodling in my subconscious, because while driving my daughter to ballet, it hit me. I’ve been thinking about OpenAPI all wrong.



Let me be categorically clear. In the beginning, I was not a fan of Swagger. Or WADL, or RAML, or API Blueprint, or RADL. I was, and largely still am, a card-carrying Restafarian. I wear that slur with pride. I like to eat REST Pragmatists for breakfast. Out-of-band coupling is the scourge of the Internet. We’ve tried interface definition languages before. Remember WSDL? Been there, done that. Please, not again.

An Inflection Point

The first chink in my armour of objections appeared at the API Strategy Conference in Austin in November 2015 (there’s another one coming up soon in Portland: http://apistrat.com/). I watched Tony Tam do a workshop on Swagger. Truth be told, I only attended to see what trouble I could cause. Turns out he showed a tool called Swagger-Inflector, and I was captivated. Inflector used Swagger for a different purpose: it became a DSL for driving the routing of a Java-based HTTP API.

An Inside Job

It wasn’t too long after that when I was asked if I would be interested in joining the OpenAPI Initiative. It was clear, after fighting the hypermedia fight for more than 10 years, that we were losing the war. The Swagger tooling provided value that developers building APIs wanted. Hypermedia wasn’t solving problems they were facing; it was a promise to solve problems that they might face in a few years by doing extra work up front. I understood the problems that Swagger/OpenAPI could cause, but I had a higher chance of convincing developers to stop drinking Mountain Dew than of prying a documentation generator from the hands of a dev with a deadline. If I were going to have any chance of having an impact, I was going to have to work from the inside.

No Escape

A big part of my day job involves dealing with OpenAPI descriptions. Our customers import them and export them. My team uses OpenAPI to describe our management API, as do all the other Azure teams. As the de-facto “OpenAPI Guy” at Microsoft, I end up having a fair number of interactions with other teams about what works and what doesn’t work in OpenAPI. A recurring theme is that people keep wanting to put stuff in OpenAPI that has no business in an OpenAPI description. At least that’s how I perceived it until today.

Scope Creep?

OpenAPI descriptions are primarily used to drive consumer-facing artifacts. HTML documentation and client libraries are the most prominent examples. Interface descriptions should not contain implementation details. But I keep running into scenarios where people want to add stuff that seems like it would be useful. Credentials, rate limiting details, transformations, caching, CORS… the list continues to grow. I’ve considered the need for a second description document that contains those details and augments the OpenAPI description. I’ve considered adding an “x-implementation” object to the OpenAPI description. I’ve considered “x-impl-“ prefixes to distinguish between implementation details and the actual interface description. But nothing has felt right. I didn’t know why. Now I do. It’s all implementation, with some subset being the interface. Which subset depends on your perspective.

Pivot And Think Bigger

Remember Swagger-inflector?  It didn’t use OpenAPI to describe the interface at all.  It was purely an implementation artifact.  You know why Restafarians get all uppity about OpenAPI?  Because as an interface description language it encourages out of band coupling that makes independent evolvability hard.  That thing that micro-services need so badly.

What if OpenAPI isn’t an interface definition language at all?  What if it is purely a declarative description of an HTTP implementation?  What if tooling allowed you to project some subset of the OpenAPI description to drive HTML documentation? And another subset could be used for client code generation? And a different subset for driving server routing, middleware, validation and language bindings. OpenAPI descriptions become a platform independent definition of all aspects of an HTTP API.

Common Goals

One of my original objections to OpenAPI descriptions is that they contained information that I didn’t think belonged in an interface description. Declaring a fixed subset of status codes that an operation returns seemed unnecessary and restrictive. But for scaffolding servers, generating mock responses and ensuring consistency across APIs, having the needed status codes identified is definitely valuable.

For the hypermedia folks, their generated documentation would only be based on a projection of the link relations, media types, entry points and possibly schemas. For those who want more traditional operation-based documentation, that is fine too. It is the recognition of the projection step that is important. It allows us to ensure that private implementation details are not leaked to interface-driven artifacts.

Back to Work

Now I’m sure many people already perceive OpenAPI descriptions this way. Well, where have you been, my friends? We need you contributing to the GitHub repo. Me, I’m a bit slower, and this only dawned on me today. But hopefully, this will help me find even more ways to deliver value to developers via OpenAPI descriptions.

The other possibility of course is that people think I’m just plain wrong and that OpenAPI really is the description of an interface.


Anuraj Parameswaran: Getting started with SignalR using ASP.NET Core

This post is about getting started with SignalR in ASP.NET Core. SignalR is a framework for ASP.NET developers that makes developing real-time web functionality easy. SignalR allows bi-directional communication between server and client. Servers can now push content to connected clients instantly as it becomes available.
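As a minimal sketch of that push model (not from the original post, and using the same alpha-level InvokeAsync API as the other SignalR posts in this digest), a hub that broadcasts a message to every connected client looks roughly like this:

using Microsoft.AspNetCore.SignalR;
using System.Threading.Tasks;

// Hypothetical hub: any client can call Send, and the message is
// pushed to all connected clients immediately.
public class MessagesHub : Hub
{
    public Task Send(string message)
    {
        return Clients.All.InvokeAsync("Send", message);
    }
}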


Andrew Lock: Creating an extension method for attaching key-value pairs to scope state using ASP.NET Core


This is the first in a short series of posts exploring the process I went through to make working with scopes a little nicer in ASP.NET Core (and Serilog / Seq). In this post I'll create an extension method for logging a single key-value pair as a scope. In the next post, I'll extend this to multiple key-value pairs.

I'll start by presenting an overview of structured logging and why you should be using it in your applications. This is largely the same introduction as in my last post, so feel free to skip ahead if I'm preaching to the choir!

Next, I'll show how scopes are typically recorded in ASP.NET Core, with a particular focus on Serilog and Seq. This will largely demonstrate the semantics described by Nicholas Blumhardt in his post on the semantics of ILogger.BeginScope(), but it will also set the scene for the meat of this post. In particular, we'll take a look at the syntax needed to record scope state as a series of key-value pairs.

Finally, I'll show an extension method you can add to your application to make recording key-value scope state that little bit easier.

Introduction to structured logging

Structured logging involves associating key-value pairs with each log entry, instead of just outputting an unstructured string "message". For example, an unstructured log message, something that might be output to the console, might look something like:

info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]  
      Request starting HTTP/1.1 GET http://localhost:51919/

This message contains a lot of information, but if it's just stored as a string like this, then it's not easy to search or filter the messages. For example, what if you wanted to find all of the error messages generated by the WebHost class? You're limited to what you can achieve in a text editor - doable to an extent, but a lot of work.

The same message stored as a structured log would essentially be stored as a JSON object, making it easily searchable, as something like:

{
    "eventLevel" : "Information",
    "category" : "Microsoft.AspNetCore.Hosting.Internal.WebHost",
    "eventId" : 1,
    "message" : "Request starting HTTP/1.1 GET http://localhost:51919/",
    "protocol" : "HTTP/1.1",
    "method" : "GET",
    "url" : "http://localhost:51919/"
}

The complete message is still there, but you also have each of the associated properties available for filtering without having to do any messy string processing.

Some of the most popular options for storing and searching structured logs are to store them in Elastic Search with a Kibana front end, or to use Seq. The Serilog logging provider also supports structured logging, and is typically used to write to both of these destinations.

Nicholas Blumhardt is behind both the Serilog provider and Seq, so I highly recommend checking out his blog if you're interested in structured logging. In particular, he recently wrote a post on how to easily integrate Serilog into ASP.NET Core 2.0 applications.

Adding additional properties using scopes

Once you're storing logs in a structured manner, it becomes far easier to query and analyse your log files. Structured logging can extract parameters from the format string passed in the log message, and attach these to the log itself.

For example, the log message Request starting {protocol} {method} {url} contains three parameters, protocol, method, and url, which can all be extracted as properties on the log.
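In code, that corresponds to passing the values alongside the message template rather than formatting them into a string yourself - a sketch, assuming _logger is an ILogger and the three variables hold the request details:

_logger.LogInformation("Request starting {protocol} {method} {url}",
    protocol, method, url);

The logging provider keeps both the rendered message and the individual protocol, method, and url values.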

The ASP.NET Core logging framework also includes the concept of scopes which lets you attach arbitrary additional data to all log messages inside the scope. For example, the following log entry has a format string parameter, {ActionName}, which would be attached to the log message, but it also contains four scopes:

using (_logger.BeginScope("Some name"))  
using (_logger.BeginScope(42))  
using (_logger.BeginScope("Formatted {WithValue}", 12345))  
using (_logger.BeginScope(new Dictionary<string, object> { ["ViaDictionary"] = 100 }))  
{
    _logger.LogInformation("Hello from the {ActionName}!", name);
}

The state passed in the call to ILogger.BeginScope(state) can be anything, as shown in this example. The problem is, how this state should be logged is not clearly defined by the ILogger interface, so it's up to the logger implementation to decide.

Luckily Nicholas Blumhardt has thought hard about this problem, and has baked his rules into the Serilog / Seq implementation. There are effectively three different rules:

  1. If the state is an IEnumerable<KeyValuePair<string, object>>, attach each KeyValuePair as a property to the log.
  2. If the state is a format string / message template, add the parameters as properties to the log, and the formatted string to a Scope property.
  3. For everything else, add it to the Scope property.

For the LogInformation call shown previously, these rules result in the WithValue, ViaDictionary, and Scope values being attached to the log:

(Screenshot: the resulting log entry in Seq, with the WithValue, ViaDictionary, and Scope properties attached.)

Adding correlation IDs using scope

Of all these rules, the most interesting to me is the IEnumerable<KeyValuePair<string, object>> rule, which allows attaching arbitrary key-values to the log as properties. A common problem when looking through logs is finding relationships. For example, I want to see all logs related to a particular product ID, a particular user ID, or a transaction ID. These are commonly referred to as correlation IDs, as they allow you to easily determine the relationship between different log messages.

My one bugbear is the somewhat lengthy syntax required in order to attach these correlation IDs to the log messages. Let's start with the following, highly contrived code. We're simply adding a product to a basket, but I've added correlation IDs in scopes for the productId, the basketId and the transactionId:

public void Add(int productId, int basketId)  
{
    using (_logger.BeginScope(new Dictionary<string, object> {
        { nameof(productId), productId }, { nameof(basketId), basketId} }))
    {
        _logger.LogDebug("Adding product to basket");
        var product = _service.GetProduct();
        var basket = _service.GetBasket();

        using (var transaction = factory.Create())
        using (_logger.BeginScope(new Dictionary<string, object> {{ "transactionId", transaction.Id }}))
        {
            basket.Add(product);
            transaction.Submit();
            _logger.LogDebug("Product added to basket");
        }
    }
}

This code does exactly what I want, but it's a bit of an eye-sore. All those dictionaries flying around and nameof() to avoid typos is a bit ugly, so I wanted to see if I could tidy it up. I didn't want to go messing with the framework code, so I thought I would create a couple of extension methods to tidy up these common patterns.

Creating a single key-value pair scope state extension

In this post we'll start with the inner-most call to BeginScope<T>, in which we create a dictionary with a single key, transactionId. For this case I created a simple extension method that takes two parameters, the key name as a string, and the value as an object. These are used to initialise a Dictionary<string, object> which is passed to the underlying ILogger.BeginScope<T> method:

public static class LoggerExtensions  
{
    public static IDisposable BeginScope(this ILogger logger, string key, object value)
    {
        return logger.BeginScope(new Dictionary<string, object> { { key, value } });
    }
}

The underlying ILogger.BeginScope<T>(T state) method only has a single argument, so there's no issue with overload resolution here. With this small addition, our second using call has gone from this:

using (_logger.BeginScope(new Dictionary<string, object> {{ "transactionId", transaction.Id }}))  

to this:

using (_logger.BeginScope("transactionId", transaction.Id))  

Much nicer, I think you'll agree!

This was the most common use case that I was trying to tidy up, so stopping at this point would be perfectly reasonable. In fact, I could already use this to tidy up the first using call too, if I was happy to change the semantics somewhat. For example:

using (_logger.BeginScope(new Dictionary<string, object> {{ nameof(productId), productId }, { nameof(basketId), basketId} }))  

could become

using (_logger.BeginScope(nameof(productId), productId))  
using (_logger.BeginScope(nameof(basketId), basketId))  

Not strictly the same, but not too bad. Still, I wanted to do better. In the next post I'll show some of the avenues I explored, their pros and cons, and the final extension method I settled on.

Summary

I consider structured logging to be a no-brainer when it comes to running apps in production, and key to that are correlation IDs applied to logs wherever possible. Serilog, Seq, and the ASP.NET Core logging framework make it possible to add arbitrary properties to a log message using ILogger.BeginScope(state), but the semantics of the method call are somewhat ill-defined. Consequently, in order for scope state to be used as correlation ID properties on the log message, the state must be an IEnumerable<KeyValuePair<string, object>>.

Manually creating a Dictionary<string,object> every time I wanted to add a correlation ID was a bit cumbersome, so I wrote a simple extension overload of the BeginScope method that takes a string key and an object value. This extension simply initialises a Dictionary<string, object> behind the scenes, and calls the underlying BeginScope<T> method. This makes the call site easier to read when you are adding a single key-value pair.


Anuraj Parameswaran: Building ASP.NET Core web apps with VB.NET

This post is about developing ASP.NET Core applications with VB.NET. I started my career with VB 6.0, and .NET programming with VB.NET. When Microsoft introduced ASP.NET Core, people were concerned about Web Pages and VB.NET. Even though no one liked it, everyone is using it. In ASP.NET Core 2.0, Microsoft introduced Razor Pages and support for developing .NET Core apps with VB.NET. Today I found a question about an ASP.NET Core web application template in VB.NET. So I thought of creating an ASP.NET Core Hello World app in VB.NET.


Damien Bowden: SignalR Group messages with ngrx and Angular

This article shows how SignalR can be used to send grouped messages to an Angular SignalR client, which uses ngrx to handle the SignalR events.

Code: https://github.com/damienbod/AspNetCoreAngularSignalR


SignalR Groups

SignalR allows messages to be sent to specific groups if required. You can read about this here:

https://docs.microsoft.com/en-us/aspnet/signalr/overview/guide-to-the-api/working-with-groups

The documentation is for the old SignalR, but most is still relevant.

To get started, add the SignalR NuGet package to the csproj file where the Hub(s) are to be implemented.

<PackageReference Include="Microsoft.AspNetCore.SignalR" Version="1.0.0-alpha1-final" />

In this application, the NewsItem class is used to send the messages between the SignalR clients and server.

namespace AspNetCoreAngularSignalR.SignalRHubs
{
    public class NewsItem
    {
        public string Header { get; set; }
        public string NewsText { get; set; }
        public string Author { get; set; }
        public string NewsGroup { get; set; }
    }
}

The NewsHub class implements the SignalR Hub, which can send messages with NewsItem classes, or let clients join or leave a SignalR group. When the Send method is called, the class uses the NewsGroup property to send the messages only to clients in the group. If a client is not a member of the group, it will receive no message.

using Microsoft.AspNetCore.SignalR;
using System.Threading.Tasks;

namespace AspNetCoreAngularSignalR.SignalRHubs
{
    public class NewsHub : Hub
    {
        public Task Send(NewsItem newsItem)
        {
            return Clients.Group(newsItem.NewsGroup).InvokeAsync("Send", newsItem);
        }

        public async Task JoinGroup(string groupName)
        {
            await Groups.AddAsync(Context.ConnectionId, groupName);
            await Clients.Group(groupName).InvokeAsync("JoinGroup", groupName);
        }

        public async Task LeaveGroup(string groupName)
        {
            await Clients.Group(groupName).InvokeAsync("LeaveGroup", groupName);
            await Groups.RemoveAsync(Context.ConnectionId, groupName);
        }
    }
}

The SignalR hub is configured in the Startup class. The path defined when mapping the hub must match the configuration in the SignalR client.

app.UseSignalR(routes =>
{
	routes.MapHub<NewsHub>("looney");
});
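For the routing above to work, the SignalR services also need to be registered in ConfigureServices; the same registration is shown in full in the getting-started post below.

public void ConfigureServices(IServiceCollection services)
{
	...
	services.AddSignalR();
	...
}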

Angular Service for the SignalR client

To use SignalR in the Angular application, the npm package @aspnet/signalr-client needs to be added to the package.json file.

"@aspnet/signalr-client": "1.0.0-alpha1-final"

The Angular NewsService is used to send SignalR events to the ASP.NET Core server and also to handle the messages received from the server. The send, joinGroup and leaveGroup functions are used in the ngrx store effects, and the init method adds event handlers for the SignalR events and dispatches ngrx actions when a message is received.

import { Injectable } from '@angular/core';
import { Observable } from 'rxjs/Observable';
import { HubConnection } from '@aspnet/signalr-client';
import { NewsItem } from './models/news-item';
import { Store } from '@ngrx/store';
import { NewsState } from './store/news.state';
import * as NewsActions from './store/news.action';

@Injectable()
export class NewsService {

    private _hubConnection: HubConnection;

    constructor(private store: Store<any>) {
        this.init();
    }

    public send(newsItem: NewsItem): NewsItem {
        this._hubConnection.invoke('Send', newsItem);
        return newsItem;
    }

    public joinGroup(group: string): void {
        this._hubConnection.invoke('JoinGroup', group);
    }

    public leaveGroup(group: string): void {
        this._hubConnection.invoke('LeaveGroup', group);
    }

    private init() {

        this._hubConnection = new HubConnection('/looney');

        this._hubConnection.on('Send', (newsItem: NewsItem) => {
            this.store.dispatch(new NewsActions.ReceivedItemAction(newsItem));
        });

        this._hubConnection.on('JoinGroup', (data: string) => {
            this.store.dispatch(new NewsActions.ReceivedGroupJoinedAction(data));
        });

        this._hubConnection.on('LeaveGroup', (data: string) => {
            this.store.dispatch(new NewsActions.ReceivedGroupLeftAction(data));
        });

        this._hubConnection.start()
            .then(() => {
                console.log('Hub connection started')
            })
            .catch(err => {
                console.log('Error while establishing connection')
            });
    }

}

Using ngrx to manage SignalR events

The NewsState interface is used to save the application state created from the SignalR events, and the user interactions.

import { NewsItem } from '../models/news-item';

export interface NewsState {
    newsItems: NewsItem[],
    groups: string[]
};

The news.action classes define the actions for the events which are dispatched from the Angular components, the Angular SignalR service, or the ngrx effects. These actions are used in the hubConnection.on event handlers, which receive the SignalR messages and dispatch the proper action.

import { Action } from '@ngrx/store';
import { NewsItem } from '../models/news-item';

export const JOIN_GROUP = '[news] JOIN_GROUP';
export const LEAVE_GROUP = '[news] LEAVE_GROUP';
export const JOIN_GROUP_COMPLETE = '[news] JOIN_GROUP_COMPLETE';
export const LEAVE_GROUP_COMPLETE = '[news] LEAVE_GROUP_COMPLETE';
export const SEND_NEWS_ITEM = '[news] SEND_NEWS_ITEM';
export const SEND_NEWS_ITEM_COMPLETE = '[news] SEND_NEWS_ITEM_COMPLETE';
export const RECEIVED_NEWS_ITEM = '[news] RECEIVED_NEWS_ITEM';
export const RECEIVED_GROUP_JOINED = '[news] RECEIVED_GROUP_JOINED';
export const RECEIVED_GROUP_LEFT = '[news] RECEIVED_GROUP_LEFT';

export class JoinGroupAction implements Action {
    readonly type = JOIN_GROUP;

    constructor(public group: string) { }
}

export class LeaveGroupAction implements Action {
    readonly type = LEAVE_GROUP;

    constructor(public group: string) { }
}


export class JoinGroupActionComplete implements Action {
    readonly type = JOIN_GROUP_COMPLETE;

    constructor(public group: string) { }
}

export class LeaveGroupActionComplete implements Action {
    readonly type = LEAVE_GROUP_COMPLETE;

    constructor(public group: string) { }
}
export class SendNewsItemAction implements Action {
    readonly type = SEND_NEWS_ITEM;

    constructor(public newsItem: NewsItem) { }
}

export class SendNewsItemActionComplete implements Action {
    readonly type = SEND_NEWS_ITEM_COMPLETE;

    constructor(public newsItem: NewsItem) { }
}

export class ReceivedItemAction implements Action {
    readonly type = RECEIVED_NEWS_ITEM;

    constructor(public newsItem: NewsItem) { }
}

export class ReceivedGroupJoinedAction implements Action {
    readonly type = RECEIVED_GROUP_JOINED;

    constructor(public group: string) { }
}

export class ReceivedGroupLeftAction implements Action {
    readonly type = RECEIVED_GROUP_LEFT;

    constructor(public group: string) { }
}

export type Actions
    = JoinGroupAction
    | LeaveGroupAction
    | JoinGroupActionComplete
    | LeaveGroupActionComplete
    | SendNewsItemAction
    | SendNewsItemActionComplete
    | ReceivedItemAction
    | ReceivedGroupJoinedAction
    | ReceivedGroupLeftAction;


The newsReducer ngrx reducer function receives the actions and changes the state as required. For example, when a RECEIVED_NEWS_ITEM action is dispatched from the Angular SignalR service, it creates a new state with the new message appended to the existing items.

import { NewsState } from './news.state';
import { NewsItem } from '../models/news-item';
import { Action } from '@ngrx/store';
import * as newsAction from './news.action';

export const initialState: NewsState = {
    newsItems: [],
    groups: ['group']
};

export function newsReducer(state = initialState, action: newsAction.Actions): NewsState {
    switch (action.type) {

        case newsAction.RECEIVED_GROUP_JOINED:
            return Object.assign({}, state, {
                newsItems: state.newsItems,
                groups: (state.groups.indexOf(action.group) > -1) ? state.groups : state.groups.concat(action.group)
            });

        case newsAction.RECEIVED_NEWS_ITEM:
            return Object.assign({}, state, {
                newsItems: state.newsItems.concat(action.newsItem),
                groups: state.groups
            });

        case newsAction.RECEIVED_GROUP_LEFT:
            const data = [];
            for (const entry of state.groups) {
                if (entry !== action.group) {
                    data.push(entry);
                }
            }
            console.log(data);
            return Object.assign({}, state, {
                newsItems: state.newsItems,
                groups: data
            });
        default:
            return state;

    }
}

The ngrx store is configured in the module class.

StoreModule.forFeature('news', {
    newsitems: newsReducer,
}),
EffectsModule.forFeature([NewsEffects])

The store is then used in the different Angular components. The components use only the ngrx store to send and receive the SignalR data.

import { Component, OnInit } from '@angular/core';
import { Observable } from 'rxjs/Observable';
import { Store } from '@ngrx/store';
import { NewsState } from '../store/news.state';
import * as NewsActions from '../store/news.action';
import { NewsItem } from '../models/news-item';

@Component({
    selector: 'app-news-component',
    templateUrl: './news.component.html'
})

export class NewsComponent implements OnInit {
    public async: any;
    newsItem: NewsItem;
    group = 'group';
    newsState$: Observable<NewsState>;

    constructor(private store: Store<any>) {
        this.newsState$ = this.store.select<NewsState>(state => state.news.newsitems);
        this.newsItem = new NewsItem();
        this.newsItem.AddData('', '', 'me', this.group);
    }

    public sendNewsItem(): void {
        this.newsItem.NewsGroup = this.group;
        this.store.dispatch(new NewsActions.SendNewsItemAction(this.newsItem));
    }

    public join(): void {
        this.store.dispatch(new NewsActions.JoinGroupAction(this.group));
    }

    public leave(): void {
        this.store.dispatch(new NewsActions.LeaveGroupAction(this.group));
    }

    ngOnInit() {
    }
}

The component template then displays the data as required.

<div class="container-fluid">

    <h1>Send some basic news messages</h1>

    <div class="row">
        <form class="form-inline" >
            <div class="form-group">
                <label for="header">Group</label>
                <input type="text" class="form-control" id="header" placeholder="your header..." name="header" [(ngModel)]="group" required>
            </div>
            <button class="btn btn-primary" (click)="join()">Join</button>
            <button class="btn btn-primary" (click)="leave()">Leave</button>
        </form>
    </div>
    <hr />
    <div class="row">
        <form class="form" (ngSubmit)="sendNewsItem()" #newsItemForm="ngForm">
            <div class="form-group">
                <label for="header">Header</label>
                <input type="text" class="form-control" id="header" placeholder="your header..." name="header" [(ngModel)]="newsItem.Header" required>
            </div>
            <div class="form-group">
                <label for="newsText">Text</label>
                <input type="text" class="form-control" id="newsText" placeholder="your newsText..." name="newsText" [(ngModel)]="newsItem.NewsText" required>
            </div>
            <div class="form-group">
                <label for="newsText">Author</label>
                <input type="text" class="form-control" id="author" placeholder="your newsText..." name="author" [(ngModel)]="newsItem.Author" required>
            </div>
            <button type="submit" class="btn btn-primary" [disabled]="!newsItemForm.valid">Send News to: {{group}}</button>
        </form>
    </div>

    <div class="row" *ngIf="(newsState$|async)?.newsItems.length > 0">
        <div class="table-responsive">
            <table class="table table-striped">
                <thead>
                    <tr>
                        <th>#</th>
                        <th>header</th>
                        <th>Text</th>
                        <th>Author</th>
                        <th>Group</th>
                    </tr>
                </thead>
                <tbody>
                    <tr *ngFor="let item of (newsState$|async)?.newsItems; let i = index">
                        <td>{{i + 1}}</td>
                        <td>{{item.Header}}</td>
                        <td>{{item.NewsText}}</td>
                        <td>{{item.Author}}</td>
                        <td>{{item.NewsGroup}}</td>
                    </tr>
                </tbody>
            </table>
        </div>
    </div>
 
    <div class="row" *ngIf="(newsState$|async)?.length <= 0">
        <span>No news items</span>
    </div>
</div>

When the application is started, SignalR messages can be sent, received, and displayed across the instances of the Angular application.


Links

https://github.com/aspnet/SignalR

https://github.com/aspnet/SignalR#readme

https://github.com/ngrx

https://www.npmjs.com/package/@aspnet/signalr-client

https://dotnet.myget.org/F/aspnetcore-ci-dev/api/v3/index.json

https://dotnet.myget.org/F/aspnetcore-ci-dev/npm/

https://dotnet.myget.org/feed/aspnetcore-ci-dev/package/npm/@aspnet/signalr-client

https://www.npmjs.com/package/msgpack5



Damien Bowden: Getting started with SignalR using ASP.NET Core and Angular

This article shows how to set up a first SignalR Hub in ASP.NET Core 2.0 and use it with an Angular client. SignalR will be released with dotnet 2.1. Thanks to Dennis Alberti for his help in setting up the code example.

Code: https://github.com/damienbod/AspNetCoreAngularSignalR

2017-09-15: Updated @aspnet/signalr-client to use npm feed, and 1.0.0-alpha1-final

The required SignalR NuGet packages and npm packages are at present hosted on MyGet. You need to add the SignalR packages to the csproj file. To use the MyGet feed, add https://dotnet.myget.org/F/aspnetcore-ci-dev/api/v3/index.json to your package sources.

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
    <PackageReference Include="Microsoft.AspNetCore.SignalR" Version="1.0.0-alpha1-final" />
  </ItemGroup>
  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="2.0.0" />
    <DotNetCliToolReference Include="Microsoft.Extensions.SecretManager.Tools" Version="2.0.0" />
    <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="2.0.0" />
  </ItemGroup>
  <ItemGroup>
    <Folder Include="angularApp\app\models\" />
  </ItemGroup>
</Project>

Now create a simple default hub.

using Microsoft.AspNetCore.SignalR;
using System.Threading.Tasks;

namespace AspNetCoreSignalr.SignalRHubs
{
    public class LoopyHub : Hub
    {
        public Task Send(string data)
        {
            return Clients.All.InvokeAsync("Send", data);
        }
    }
}

Add the SignalR configuration in the startup class. The hub which was created before needs to be added in the UseSignalR extension method.

public void ConfigureServices(IServiceCollection services)
{
	...
	services.AddSignalR();
	...
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	...

	app.UseSignalR(routes =>
	{
		routes.MapHub<LoopyHub>("loopy");
	});

	...
}

Set up the Angular application. The Angular application is set up using a webpack build and all dependencies are added to the package.json file.

You can use the MyGet npm feed if you want to use the aspnetcore-ci-dev builds. You can do this using a .npmrc file in the project root; add the registry path there. If you are using the released npm package, do not add this.

@aspnet:registry=https://dotnet.myget.org/f/aspnetcore-ci-dev/npm/

Now add the required SignalR npm packages to the package.json file. Using the npm package:

 "dependencies": {
    "@angular/animations": "4.4.0-RC.0",
    "@angular/common": "4.4.0-RC.0",
    "@angular/compiler": "4.4.0-RC.0",
    "@angular/compiler-cli": "4.4.0-RC.0",
    "@angular/core": "4.4.0-RC.0",
    "@angular/forms": "4.4.0-RC.0",
    "@angular/http": "4.4.0-RC.0",
    "@angular/platform-browser": "4.4.0-RC.0",
    "@angular/platform-browser-dynamic": "4.4.0-RC.0",
    "@angular/platform-server": "4.4.0-RC.0",
    "@angular/router": "4.4.0-RC.0",
    "@angular/upgrade": "4.4.0-RC.0",
    
    "msgpack5": "^3.5.1",
    "@aspnet/signalr-client": "1.0.0-alpha1-final"
  },

Add the SignalR client code. In this basic example, it is just added directly in a component. The sendMessage function sends messages, and the hubConnection.on function receives all messages, including its own.

import { Component, OnInit } from '@angular/core';
import { Observable } from 'rxjs/Observable';
import { HubConnection } from '@aspnet/signalr-client';

@Component({
    selector: 'app-home-component',
    templateUrl: './home.component.html'
})

export class HomeComponent implements OnInit {
    private _hubConnection: HubConnection;
    public async: any;
    message = '';
    messages: string[] = [];

    constructor() {
    }

    public sendMessage(): void {
        const data = `Sent: ${this.message}`;

        this._hubConnection.invoke('Send', data);
        this.messages.push(data);
    }

    ngOnInit() {
        this._hubConnection = new HubConnection('/loopy');

        this._hubConnection.on('Send', (data: any) => {
            const received = `Received: ${data}`;
            this.messages.push(received);
        });

        this._hubConnection.start()
            .then(() => {
                console.log('Hub connection started')
            })
            .catch(err => {
                console.log('Error while establishing connection')
            });
    }

}

The messages are then displayed in the component template.

<div class="container-fluid">

    <h1>Send some basic messages</h1>


    <div class="row">
        <form class="form-inline" (ngSubmit)="sendMessage()" #messageForm="ngForm">
            <div class="form-group">
                <label class="sr-only" for="message">Message</label>
                <input type="text" class="form-control" id="message" placeholder="your message..." name="message" [(ngModel)]="message" required>
            </div>
            <button type="submit" class="btn btn-primary" [disabled]="!messageForm.valid">Send SignalR Message</button>
        </form>
    </div>
    <div class="row" *ngIf="messages.length > 0">
        <div class="table-responsive">
            <table class="table table-striped">
                <thead>
                    <tr>
                        <th>#</th>
                        <th>Messages</th>
                    </tr>
                </thead>
                <tbody>
                    <tr *ngFor="let message of messages; let i = index">
                        <td>{{i + 1}}</td>
                        <td>{{message}}</td>
                    </tr>
                </tbody>
            </table>
        </div>
    </div>
    <div class="row" *ngIf="messages.length <= 0">
        <span>No messages</span>
    </div>
</div>

Now the first really simple SignalR Hub is set up and an Angular client can send and receive messages.

Links:

https://github.com/aspnet/SignalR#readme

https://www.npmjs.com/package/@aspnet/signalr-client

https://dotnet.myget.org/F/aspnetcore-ci-dev/api/v3/index.json

https://dotnet.myget.org/F/aspnetcore-ci-dev/npm/

https://dotnet.myget.org/feed/aspnetcore-ci-dev/package/npm/@aspnet/signalr-client

https://www.npmjs.com/package/msgpack5



Andrew Lock: How to include scopes when logging exceptions in ASP.NET Core


This post describes how to work around an issue I ran into when logging exceptions that occur inside a scope block in ASP.NET Core. I'll provide a brief background on logging in ASP.NET Core, structured logging, and the concept of scopes. Then I'll show how exceptions can cause you to lose an associated scope, and how to get round this using a neat trick with exception filters.

tl;dr; Exception filters are executed in the same scope as the original exception, so you can use them to write logs in the original context, before the using scope blocks are disposed.

Logging in ASP.NET Core

ASP.NET Core includes logging infrastructure that makes it easy to write logs to a variety of different outputs, such as the console, a file, or the Windows EventLog. The logging abstractions are used throughout the ASP.NET Core framework libraries, so you can even get log messages from deep inside the infrastructure libraries like Kestrel and EF Core if you like.

The logging abstractions include common features like different event levels, applying unique ids to specific logs, and event categories for tracking which class created the log message, as well as the ability to use structured logging for easier parsing of logs.
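As a quick sketch of what that looks like (the BasketService class and the event id are made up for illustration), the category comes from the ILogger<T> type parameter and an event id can be attached to an individual log entry:

using Microsoft.Extensions.Logging;

public class BasketService
{
    // The log category is the full type name of BasketService
    private readonly ILogger<BasketService> _logger;

    public BasketService(ILogger<BasketService> logger)
    {
        _logger = logger;
    }

    public void Add(int productId)
    {
        // EventId 1000 / "ProductAdded" is an arbitrary example value
        _logger.LogInformation(new EventId(1000, "ProductAdded"),
            "Added product {productId} to the basket", productId);
    }
}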

Structured logging is especially useful, as it makes finding and diagnosing issues so much easier in production. I'd go as far as to say that it should be absolutely required if you're running an app in production.

Introduction to structured logging

Structured logging basically involves associating key-value pairs with each log entry, instead of a simple string "message". For example, a non-structured log message might look something like:

info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]  
      Request starting HTTP/1.1 GET http://localhost:51919/

This message contains a lot of information, but if it's just stored as a string like this, then it's not easy to search or filter the messages. For example, what if you wanted to find all of the error messages generated by the WebHost class? You could probably put together a regex to extract all the information, but that's a lot of work.

The same message stored as a structured log would essentially be stored as a JSON object, making it easily searchable, as something like:

{
    "eventLevel" : "Information",
    "category" : "Microsoft.AspNetCore.Hosting.Internal.WebHost",
    "eventId" : 1,
    "message" : "Request starting HTTP/1.1 GET http://localhost:51919/",
    "protocol" : "HTTP/1.1",
    "method" : "GET",
    "url" : "http://localhost:51919/"
}

The complete message is still there, but you also have each of the associated properties available without having to do any messy string processing. Nicholas Blumhardt has a great explanation of the benefits in this stack overflow answer.

Now, as these logs are no longer simple strings, they can't just be written to the console, or stored in a file - they need dedicated storage. Some of the most popular options are to store the logs in Elastic Search with a Kibana front end, or to use Seq. The Serilog logging provider also supports structured logging, and is typically used to write to both of these destinations.

Nicholas Blumhardt is behind both the Serilog provider and Seq, so I highly recommend checking out his blog if you're interested in structured logging. In particular, he recently wrote a post on how to easily integrate Serilog into ASP.NET Core 2.0 applications.

Adding additional properties using scopes

In some situations, you might like to add the same values to every log message that you write. For example, you might want to add a database transaction id to every log message until that transaction is committed.

You could manually add the id to every relevant message, but ASP.NET Core also provides the concept of scopes. You can create a new scope in a using block, passing in some state you want to log, and it will be written to each log message inside the using block.
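A sketch of that transaction id scenario might look like this (the names are illustrative, not from a real API):

using (_logger.BeginScope(new Dictionary<string, object> { ["TransactionId"] = transaction.Id }))
{
    // Every log written inside this block carries the TransactionId property
    _logger.LogInformation("Committing transaction");
    transaction.Commit();
}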

You don't have to be using structured logging to use scopes - you can add them to the console logger for example - but they make the most sense in terms of structured logging.

For example, the following sample taken from the serilog-aspnetcore package (the recommended package for easily adding Serilog to ASP.NET Core 2.0 apps) demonstrates multiple nested scopes in the Get() method. Calling _logger.BeginScope<T>(T state) creates a new scope with the provided state.

[Route("api/[controller]")]
public class ScopesController : Controller  
{
    ILogger<ScopesController> _logger;

    public ScopesController(ILogger<ScopesController> logger)
    {
        _logger = logger;
    }

    [HttpGet]
    public IEnumerable<string> Get()
    {
        _logger.LogInformation("Before");

        using (_logger.BeginScope("Some name"))
        using (_logger.BeginScope(42))
        using (_logger.BeginScope("Formatted {WithValue}", 12345))
        using (_logger.BeginScope(new Dictionary<string, object> { ["ViaDictionary"] = 100 }))
        {
            _logger.LogInformation("Hello from the Index!");
            _logger.LogDebug("Hello is done");
        }

        _logger.LogInformation("After");

        return new string[] { "value1", "value2" };
    }
}

Running this application and hitting the action method produces logs similar to the following in Seq:

(Screenshot: the resulting logs in Seq, with the scope values attached to the entries written inside the using blocks.)

As you can see, you can store anything as the state parameter T - a string, an integer, or a Dictionary<string, object> of values. Seq handles these scope state values in two different ways:

  • integers, strings and formatted strings are added to an array of objects on the Scope property
  • Parameters and values from formatted strings, and Dictionary<string, object> are added directly to the log entry as key-value pairs.

Surprise surprise, Nicholas Blumhardt also has a post on what to make of these values, how logging providers should handle them, and how to use them!

Exceptions inside scope blocks lose the scope

Scopes work well when you want to attach additional values to every log message, but there's a problem. What if an exception occurs inside the scope using block? The scope probably contains some very useful information for debugging the problem, so naturally you'd like to include it in the error logs.

If you can include a try-catch block inside the scope block, then you're fine - you can log the errors and the scope will be included as you'd expect.
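That arrangement looks something like this (a sketch reusing the ViaDictionary scope from the earlier sample):

using (_logger.BeginScope(new Dictionary<string, object> { ["ViaDictionary"] = 100 }))
{
    try
    {
        // work that might throw
    }
    catch (Exception ex)
    {
        // Logged while the scope is still active, so ViaDictionary is attached
        _logger.LogError(ex, "An unexpected exception occurred");
    }
}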

But what if the try-catch block surrounds the using blocks? For example, imagine the previous example, but this time we have a try-catch block in the method, and an exception is thrown inside the using blocks:

[Route("api/[controller]")]
public class ScopesController : Controller  
{
    ILogger<ScopesController> _logger;

    public ScopesController(ILogger<ScopesController> logger)
    {
        _logger = logger;
    }

    // GET api/scopes
    [HttpGet]
    public IEnumerable<string> Get()
    {
        _logger.LogInformation("Before");
        try
        {
            using (_logger.BeginScope("Some name"))
            using (_logger.BeginScope(42))
            using (_logger.BeginScope("Formatted {WithValue}", 12345))
            using (_logger.BeginScope(new Dictionary<string, object> { ["ViaDictionary"] = 100 }))
            {
                // An unexpected problem!
                throw new Exception("Oops, something went wrong!");
                _logger.LogInformation("Hello from the Index!");
                _logger.LogDebug("Hello is done");
            }

            _logger.LogInformation("After");

            return new string[] { "value1", "value2" };
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "An unexpected exception occured");
            return new string[] { };
        }
    }
}

Obviously this is a trivial example - you could easily put the try-catch block inside the using blocks - but in reality the scope blocks and exception could occur several layers deep inside some service.

Unfortunately, if you look at the error logged in Seq, you can see that the scopes have all been lost. There are no Scope, WithValue, or ViaDictionary properties:

(Screenshot: the exception logged in Seq, with no Scope, WithValue, or ViaDictionary properties.)

At the point the exception is logged, the using blocks have all been disposed, and so the scopes have been lost. Far from ideal, especially if the scopes contained information that would help debug why the exception occurred!

Using exception filters to capture scopes

So how can we get the best of both worlds, and record the scope both for successful logs and errors? The answer was buried in an issue in the Serilog repo, and uses a "common and accepted form of 'abuse'" by using an exception filter for side effects.

Exception filters are a C# 6 feature that lets you conditionally catch exceptions in a try-catch block:

try  
{
  // Something throws an exception
}
catch(MyException ex) when (ex.MyValue == 3)  
{
  // Only caught if the expression filter evaluates
  // to true, i.e. if ex.MyValue == 3
}

If the filter evaluates to true, the catch block executes; if it evaluates to false, the catch block is ignored, and the exception continues to bubble up the call stack until it is handled.

There is a lesser known "feature" of exception filters that we can make use of here - the code in an exception filter runs in the same context in which the original exception occurred - the stack is unharmed, and is only unwound if the exception filter evaluates to true.

We can use this feature to allow recording the scopes at the location the exception occurs. The helper method LogError(exception) simply writes the exception to the logger when it is called as part of an exception filter using when (LogError(ex)). Returning true means the catch block is executed too, but only after the exception has been logged with its scopes.

[Route("api/[controller]")]
public class ScopesController : Controller  
{
    ILogger<ScopesController> _logger;

    public ScopesController(ILogger<ScopesController> logger)
    {
        _logger = logger;
    }

    // GET api/scopes
    [HttpGet]
    public IEnumerable<string> Get()
    {
        _logger.LogInformation("Before");
        try
        {
            using (_logger.BeginScope("Some name"))
            using (_logger.BeginScope(42))
            using (_logger.BeginScope("Formatted {WithValue}", 12345))
            using (_logger.BeginScope(new Dictionary<string, object> { ["ViaDictionary"] = 100 }))
            {
                throw new Exception("Oops, something went wrong!");
                _logger.LogInformation("Hello from the Index!");
                _logger.LogDebug("Hello is done");
            }

            _logger.LogInformation("After");

            return new string[] { "value1", "value2" };
        }
        catch (Exception ex) when (LogError(ex))
        {
            return new string[] { };
        }
    }

    bool LogError(Exception ex)
    {
        _logger.LogError(ex, "An unexpected exception occured");
        return true;
    }
}

Now when the exception occurs, it's logged with all the active scopes at the point the exception occurred (Scope, WithValue, or ViaDictionary), instead of the active scopes inside the catch block.

(Screenshot: the exception logged in Seq, this time with the Scope, WithValue, and ViaDictionary properties attached.)

Summary

Structured logging is a great approach that makes filtering and searching logs after the fact much easier by storing key-value pairs of properties. You can add extra properties to each log by using scopes inside a using block. Every log written inside the using block will include the scope properties, but if an exception occurs, those scope values will be lost.

To work around this, you can use the C# 6 exception filters feature. Exception filters are executed in the same context as the original exception, so you can use them to capture the logging scope at the point the exception occurred, instead of the logging scope inside the catch block.


Damien Bowden: Getting started with Angular and Redux

This article shows how you could set up Redux in an Angular application using ngrx. Redux provides a really great way of managing state in an Angular application. State management is hard, and usually ends up a mess when you invent it yourself. At present, Angular provides no recommendations or solution for this.

Thanks to Fabian Gosebrink for his help in learning ngrx and Redux, and also to Philip Steinebrunner for his feedback.

Code: https://github.com/damienbod/AngularRedux

The demo app uses an Angular component for displaying countries using the public API https://restcountries.eu/. The view displays regions and the countries per region. The data and the state of the component is implemented using ngrx.

Note: Read the Redux documentation to learn how it works. Here’s a quick summary of the redux store in this application:

  • There is just one store per application, though you can register additional reducers for your feature modules with StoreModule.forFeature()
  • The store has a state, actions, effects, and reducers
  • The actions define what can be done in the store. Components or effects dispatch these
  • effects are used to do API calls, etc., and are attached to actions
  • reducers are attached to actions and are used to change the state

The following steps explain what is required to get the state management set up in the Angular application, which uses an Angular service to request the data from the public API.

Step 1: Add the ngrx packages

Add the latest ngrx npm packages to the package.json file in your project.

    "@ngrx/effects": "^4.0.5",
    "@ngrx/store": "^4.0.3",
    "@ngrx/store-devtools": "^4.0.0 ",

Step 2: Add the ngrx setup configuration to the app module.

In this app, there is a single Redux store for the application, and each module registers its own feature state in it. The ngrx configuration needs to be added to the app.module and also to each child module as required. The StoreModule, EffectsModule and the StoreDevtoolsModule are added to the imports array of the NgModule.

...

import { EffectsModule } from '@ngrx/effects';
import { StoreModule } from '@ngrx/store';
import { StoreDevtoolsModule } from '@ngrx/store-devtools';

@NgModule({
    imports: [
        ...
        StoreModule.forRoot({}),
        StoreDevtoolsModule.instrument({
            maxAge: 25 //  Retains last 25 states
        }),
        EffectsModule.forRoot([])
    ],

    declarations: [
        AppComponent
    ],

    bootstrap: [AppComponent],
})

export class AppModule { }

Step 3: Create the interface for the state.

This can be any type of object or array.

import { Region } from './../../models/region';

export interface CountryState {
    regions: Region[],
};

Step 4: Create the actions

Create the actions required by the components or the effects. The constructor params must match the params sent from the components or returned from the API calls.

import { Action } from '@ngrx/store';
import { Country } from './../../models/country';
import { Region } from './../../models/region';

export const SELECTALL = '[countries] Select All';
export const SELECTALL_COMPLETE = '[countries] Select All Complete';
export const SELECTREGION = '[countries] Select Region';
export const SELECTREGION_COMPLETE = '[countries] Select Region Complete';

export const COLLAPSEREGION = '[countries] COLLAPSE Region';

export class SelectAllAction implements Action {
    readonly type = SELECTALL;

    constructor() { }
}

export class SelectAllCompleteAction implements Action {
    readonly type = SELECTALL_COMPLETE;

    constructor(public countries: Country[]) { }
}

export class SelectRegionAction implements Action {
    readonly type = SELECTREGION;

    constructor(public region: Region) { }
}

export class SelectRegionCompleteAction implements Action {
    readonly type = SELECTREGION_COMPLETE;

    constructor(public region: Region) { }
}

export class CollapseRegionAction implements Action {
    readonly type = COLLAPSEREGION;

    constructor(public region: Region) { }
}

export type Actions
    = SelectAllAction
    | SelectAllCompleteAction
    | SelectRegionAction
    | SelectRegionCompleteAction
    | CollapseRegionAction;


Step 5: Create the effects

Create the effects to do the API calls. The effects are mapped to actions and, when finished, dispatch another action.

import 'rxjs/add/operator/map';
import 'rxjs/add/operator/switchMap';

import { Injectable } from '@angular/core';
import { Actions, Effect } from '@ngrx/effects';
import { Action } from '@ngrx/store';
import { of } from 'rxjs/observable/of';
import { Observable } from 'rxjs/Rx';

import * as countryAction from './country.action';
import { Country } from './../../models/country';
import { CountryService } from '../../core/services/country.service';

@Injectable()
export class CountryEffects {

    @Effect() getAll$: Observable<Action> = this.actions$.ofType(countryAction.SELECTALL)
        .switchMap(() =>
            this.countryService.getAll()
                .map((data: Country[]) => {
                    return new countryAction.SelectAllCompleteAction(data);
                })
                .catch((error: any) => {
                    return of({ type: 'getAll_FAILED' })
                })
        );

    @Effect() getAllPerRegion$: Observable<Action> = this.actions$.ofType(countryAction.SELECTREGION)
        .switchMap((action: countryAction.SelectRegionAction) =>
            this.countryService.getAllPerRegion(action.region.name)
                .map((data: Country[]) => {
                    const region = { name: action.region.name, expanded: true, countries: data};
                    return new countryAction.SelectRegionCompleteAction(region);
                })
                .catch((error: any) => {
                    return of({ type: 'getAllPerRegion$' })
                })
        );
    constructor(
        private countryService: CountryService,
        private actions$: Actions
    ) { }
}

Step 6: Implement the reducers

Implement the reducer to change the state when required. The reducer takes an initial state and handles the defined actions which were dispatched from the components or the effects.

import { CountryState } from './country.state';
import { Country } from './../../models/country';
import { Region } from './../../models/region';
import { Action } from '@ngrx/store';
import * as countryAction from './country.action';

export const initialState: CountryState = {
    regions: [
        { name: 'Africa', expanded:  false, countries: [] },
        { name: 'Americas', expanded: false, countries: [] },
        { name: 'Asia', expanded: false, countries: [] },
        { name: 'Europe', expanded: false, countries: [] },
        { name: 'Oceania', expanded: false, countries: [] }
    ]
};

export function countryReducer(state = initialState, action: countryAction.Actions): CountryState {
    switch (action.type) {

        case countryAction.SELECTREGION_COMPLETE:
            return Object.assign({}, state, {
                regions: state.regions.map((item: Region) => {
                    return item.name === action.region.name ? Object.assign({}, item, action.region ) : item;
                })
            });

        case countryAction.COLLAPSEREGION:
            action.region.expanded = false;
            return Object.assign({}, state, {
                regions: state.regions.map((item: Region) => {
                    return item.name === action.region.name ? Object.assign({}, item, action.region ) : item;
                })
            });

        default:
            return state;

    }
}

Step 7: Configure the module.

Important here is how the StoreModule.forFeature is configured. The configuration must match the definitions in the components which use the store.

...
import { CountryComponent } from './components/country.component';

import { EffectsModule } from '@ngrx/effects';
import { StoreModule } from '@ngrx/store';
import { CountryEffects } from './store/country.effects';
import { countryReducer } from './store/country.reducer';

@NgModule({
    imports: [
        ...
        StoreModule.forFeature('world', {
            regions: countryReducer,
        }),
        EffectsModule.forFeature([CountryEffects])
    ],

    declarations: [
        CountryComponent
    ],

    exports: [
        CountryComponent
    ]
})

export class CountryModule { }

Step 8: Create the component

Create the component which uses the store. The constructor selects the state matching the module configuration from forFeature. User actions dispatch actions to the store; where required, an effect handles the action and dispatches a completion action, and a reducer function then changes the state.

import { Component, OnInit } from '@angular/core';
import { Store } from '@ngrx/store';
import { Observable } from 'rxjs/Observable';

import { CountryState } from '../store/country.state';
import * as CountryActions from '../store/country.action';
import { Country } from './../../models/country';
import { Region } from './../../models/region';

@Component({
    selector: 'app-country-component',
    templateUrl: './country.component.html',
    styleUrls: ['./country.component.scss']
})

export class CountryComponent implements OnInit {

    public async: any;

    regionsState$: Observable<CountryState>;

    constructor(private store: Store<any>) {
        this.regionsState$ = this.store.select<CountryState>(state => state.world.regions);
    }

    ngOnInit() {
        this.store.dispatch(new CountryActions.SelectAllAction());
    }

    public getCountries(region: Region) {
        this.store.dispatch(new CountryActions.SelectRegionAction(region));
    }

    public collapse(region: Region) {
         this.store.dispatch(new CountryActions.CollapseRegionAction(region));
    }
}

Step 9: Use the state objects in the HTML template.

It is important not to forget the async pipe when using the state from ngrx. The view is now independent of the API calls, and when the state changes, the view is automatically updated, as are any other components which use the same state.

<div class="container-fluid">
    <div class="row" *ngIf="(regionsState$|async)?.regions?.length > 0">
        <div class="table-responsive">
            <table class="table">
                <thead>
                    <tr>
                        <th>#</th>
                        <th>Name</th>
                        <th>Population</th>
                        <th>Capital</th>
                        <th>Flag</th>
                    </tr>
                </thead>
                <tbody>
                    <ng-container *ngFor="let region of (regionsState$|async)?.regions; let i = index">
                        <tr>
                            <td class="text-left td-table-region" *ngIf="!region.expanded">
                                <span (click)="getCountries(region)">►</span>
                            </td>
                            <td class="text-left td-table-region" *ngIf="region.expanded">
                                <span type="button" (click)="collapse(region)">▼</span>
                            </td>
                            <td class="td-table-region">{{region.name}}</td>
                            <td class="td-table-region"> </td>
                            <td class="td-table-region"> </td>
                            <td class="td-table-region"> </td>
                        </tr>
                        <ng-container *ngIf="region.expanded">
                            <tr *ngFor="let country of region.countries; let i = index">
                                <td class="td-table-country">    {{i + 1}}</td>
                                <td class="td-table-country">{{country.name}}</td>
                                <td class="td-table-country" >{{country.population}}</td>
                                <td>{{country.capital}}</td>
                                <td><img width="100" [src]="country.flag"></td>
                            </tr>
                        </ng-container>
                    </ng-container>                                         
                </tbody>
            </table>
        </div>
    </div>

    <!--▼ ►   <span class="glyphicon glyphicon-ok" aria-hidden="true" style="color: darkgreen;"></span>-->
    <div class="row" *ngIf="(regionsState$|async)?.regions?.length <= 0">
        <span>No items found</span>
    </div>
</div>

Redux DEV Tools

The redux-devtools chrome extension is really excellent. Add this to Chrome and start the application.

When you start the application and open it in Chrome, the Redux state can be viewed, explored, changed and tested. This gives you an easy way to view the state and also to see what happened inside the application. You can even remove state changes using this tool, to see a different history, and change the value of the actual state.

The actual state can be viewed:

Links:

https://github.com/ngrx

https://egghead.io/courses/getting-started-with-redux

http://redux.js.org/

https://github.com/ngrx/platform/blob/master/docs/store-devtools/README.md

https://chrome.google.com/webstore/detail/redux-devtools/lmhkpmbekcpmknklioeibfkpmmfibljd?hl=en

https://restcountries.eu/

http://onehungrymind.com/build-better-angular-2-application-redux-ngrx/

https://egghead.io/courses/building-a-time-machine-with-angular-2-and-rxjs



Andrew Lock: Creating a rolling file logging provider for ASP.NET Core 2.0

Creating a rolling file logging provider for ASP.NET Core 2.0

ASP.NET Core includes a logging abstraction that makes writing logs to multiple locations easy. All of the first-party libraries that make up ASP.NET Core and EF Core use this abstraction, and the vast majority of libraries written for ASP.NET Core will too. That means it's easy to aggregate the logs from your entire app, including the framework and your own code, into one place.

In this post I'll show how to create a logging provider that writes logs to the file system. In production, I'd recommend using a more fully-featured system like Serilog instead of this library, but I wanted to see what was involved to get a better idea of the process myself.

The code for the file logging provider is available on GitHub, or as the NetEscapades.Extensions.Logging.RollingFile package on NuGet.

The ASP.NET Core logging infrastructure

The ASP.NET Core logging infrastructure consists of three main components:

  • ILogger - Used by your app to create log messages.
  • ILoggerFactory - Creates instances of ILogger
  • ILoggerProvider - Controls where log messages are output. You can have multiple logging providers - every log message you write to an ILogger is written to the output locations for every configured logging provider in your app.


When you want to write a log message in your application you typically use DI to inject an ILogger<T> into the class, where T is the name of the class. The T is used to control the category associated with the class.

For example, to write a log message in an ASP.NET Core controller, HomeController, you would inject the ILogger<HomeController> and call one of the logging extension methods on ILogger:

public class HomeController: Controller  
{
    private readonly ILogger<HomeController> _logger;
    public HomeController(ILogger<HomeController> logger)
    {
         _logger = logger;
    }

    public IActionResult Get()
    {
        _logger.LogInformation("Calling home controller action");
        return View();
    }
}

This will write a log message to each output of the configured logging providers, something like this (for the console logger):

info: ExampleApplication.Controllers.HomeController[0]  
      Calling home controller action

ASP.NET Core includes several logging providers out of the box, which you can use to write your log messages to various locations:

  • Console provider - writes messages to the Console
  • Debug provider - writes messages to the Debug window (e.g. when debugging in Visual Studio)
  • EventSource provider - writes messages using Event Tracing for Windows (ETW)
  • EventLog provider - writes messages to the Windows Event Log
  • TraceSource provider - writes messages using System.Diagnostics.TraceSource libraries
  • Azure App Service provider - writes messages to blob storage or files when running your app in Azure.

In ASP.NET Core 2.0, the Console and Debug loggers are configured by default, but in production you'll probably want to write your logs to somewhere more durable. In modern applications, you'll likely want to write to a centralised location, such as an Elasticsearch cluster, Seq, elmah.io, or Loggr.
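Those default providers come from WebHost.CreateDefaultBuilder. If you want a different set, you can adjust them in ConfigureLogging when building the host - a rough sketch using the standard 2.0 extension methods (ClearProviders, AddConsole, AddDebug):

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .ConfigureLogging(logging =>
        {
            // Remove the Console and Debug providers added by CreateDefaultBuilder
            logging.ClearProviders();

            // Add back only the providers you actually want
            logging.AddConsole();
            logging.AddDebug();
        })
        .UseStartup<Startup>()
        .Build();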

You can write your logs to most of these locations by adding logging providers for them directly to your application, but one provider is particularly conspicuous by its absence - a file provider. In this post I'll show how to implement a logging provider that writes your application logs to rolling files.

The logging library Serilog includes support for logging to files, as well as a multitude of other sinks. Rather than implementing your own logging provider as I have here, I strongly recommend you check it out. Nicholas Blumhardt has a post on adding Serilog to your ASP.NET Core 2.0 application here.

Creating A rolling file based logging provider

In actual fact, the ASP.NET Core framework does include a file logging provider, but it's wrapped up behind the Azure App Service provider. To create the file provider I mostly used files already part of the Microsoft.Extensions.Logging.AzureAppServices package, and exposed it as a logging provider in its own right. A bit of a cheat, but hey, "shoulders of giants" and all that.

Implementing a logging provider basically involves implementing two interfaces:

  • ILogger
  • ILoggerProvider

The AzureAppServices library includes some base classes for batching log messages up and writing them on a background thread. That's important, as writing a log message should be a quick operation from the app's point of view. Your app shouldn't know or care where the logs are being written, and it certainly shouldn't be waiting on file IO!

The batching logger provider

The BatchingLoggerProvider is an abstract class that encapsulates the process of writing logs to a concurrent collection and writing them on a background thread. The full source is here but the abridged version looks something like this:

public abstract class BatchingLoggerProvider : ILoggerProvider  
{
    protected BatchingLoggerProvider(IOptions<BatchingLoggerOptions> options)
    {
        // save options etc
        _interval = options.Value.Interval;
        // start the background task
        _outputTask = Task.Factory.StartNew<Task>(
            ProcessLogQueue,
            null,
            TaskCreationOptions.LongRunning);
    }

    // Implemented in derived classes to actually write the messages out
    protected abstract Task WriteMessagesAsync(IEnumerable<LogMessage> messages, CancellationToken token);

    // Take messages from concurrent queue and write them out
    private async Task ProcessLogQueue(object state)
    {
        while (!_cancellationTokenSource.IsCancellationRequested)
        {
            // Add pending messages to the current batch
            while (_messageQueue.TryTake(out var message))
            {
                _currentBatch.Add(message);
            }

            // Write the current batch out
            await WriteMessagesAsync(_currentBatch, _cancellationTokenSource.Token);
            _currentBatch.Clear();

            // Wait before writing the next batch
            await Task.Delay(_interval, _cancellationTokenSource.Token);
        }
    }

    // Add a message to the concurrent queue
    internal void AddMessage(DateTimeOffset timestamp, string message)
    {
        if (!_messageQueue.IsAddingCompleted)
        {
            _messageQueue.Add(new LogMessage { Message = message, Timestamp = timestamp }, _cancellationTokenSource.Token);
        }
    }

    public void Dispose()
    {
        // Finish writing messages out etc
    }

    // Create an instance of an ILogger, which is used to actually write the logs
    public ILogger CreateLogger(string categoryName)
    {
        return new BatchingLogger(this, categoryName);
    }

    private readonly List<LogMessage> _currentBatch = new List<LogMessage>();
    private readonly TimeSpan _interval;
    private BlockingCollection<LogMessage> _messageQueue = new BlockingCollection<LogMessage>(new ConcurrentQueue<LogMessage>());
    private Task _outputTask;
    private CancellationTokenSource _cancellationTokenSource = new CancellationTokenSource();
}

The BatchingLoggerProvider starts by creating a Task on a background thread that runs the ProcessLogQueue method. This method sits in a loop until the provider is disposed and the CancellationTokenSource is cancelled. It takes log messages off the concurrent (thread safe) queue, and adds them to a temporary list, _currentBatch. This list is passed to the abstract WriteMessagesAsync method, implemented by derived classes, which writes the actual logs to the destination.

The other most important method is CreateLogger(categoryName), which creates an instance of an ILogger that is injected into your classes. Our actual non-abstract provider implementation, the FileLoggerProvider, derives from the BatchingLoggerProvider:

[ProviderAlias("File")]
public class FileLoggerProvider : BatchingLoggerProvider  
{
    private readonly string _path;
    private readonly string _fileName;
    private readonly int? _maxFileSize;
    private readonly int? _maxRetainedFiles;

    public FileLoggerProvider(IOptions<FileLoggerOptions> options) : base(options)
    {
        var loggerOptions = options.Value;
        _path = loggerOptions.LogDirectory;
        _fileName = loggerOptions.FileName;
        _maxFileSize = loggerOptions.FileSizeLimit;
        _maxRetainedFiles = loggerOptions.RetainedFileCountLimit;
    }

    // Write the provided messages to the file system
    protected override async Task WriteMessagesAsync(IEnumerable<LogMessage> messages, CancellationToken cancellationToken)
    {
        Directory.CreateDirectory(_path);

        // Group messages by log date
        foreach (var group in messages.GroupBy(GetGrouping))
        {
            var fullName = GetFullName(group.Key);
            var fileInfo = new FileInfo(fullName);
            // If we've exceeded the max file size, don't write any logs
            if (_maxFileSize > 0 && fileInfo.Exists && fileInfo.Length > _maxFileSize)
            {
                return;
            }

            // Write the log messages to the file
            using (var streamWriter = File.AppendText(fullName))
            {
                foreach (var item in group)
                {
                    await streamWriter.WriteAsync(item.Message);
                }
            }
        }

        RollFiles();
    }

    // Get the file name
    private string GetFullName((int Year, int Month, int Day) group)
    {
        return Path.Combine(_path, $"{_fileName}{group.Year:0000}{group.Month:00}{group.Day:00}.txt");
    }

    private (int Year, int Month, int Day) GetGrouping(LogMessage message)
    {
        return (message.Timestamp.Year, message.Timestamp.Month, message.Timestamp.Day);
    }

    // Delete files if we have too many
    protected void RollFiles()
    {
        if (_maxRetainedFiles > 0)
        {
            var files = new DirectoryInfo(_path)
                .GetFiles(_fileName + "*")
                .OrderByDescending(f => f.Name)
                .Skip(_maxRetainedFiles.Value);

            foreach (var item in files)
            {
                item.Delete();
            }
        }
    }
}

The FileLoggerProvider implements the WriteMessagesAsync method by writing the log messages to the file system. Files are created with a standard format, so a new file is created every day. Only the last _maxRetainedFiles files are retained, as defined by the FileLoggerOptions.RetainedFileCountLimit property set on the IOptions<> object provided in the constructor.

Note In this implementation, once files exceed a maximum size, no further logs are written for that day. The default is set to 10MB, but you can change this on the FileLoggerOptions object.
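For reference, here's a rough sketch of what the options class looks like, inferred from the properties the FileLoggerProvider reads above; the default values shown are purely illustrative:

// Sketch only - property names taken from the FileLoggerProvider constructor above,
// default values are illustrative
public class FileLoggerOptions : BatchingLoggerOptions
{
    // The directory the log files are written to
    public string LogDirectory { get; set; } = "Logs";

    // The prefix used for the log file names, e.g. logs-20170822.txt
    public string FileName { get; set; } = "logs-";

    // The maximum size of a single day's log file, in bytes (the 10MB mentioned above)
    public int? FileSizeLimit { get; set; } = 10 * 1024 * 1024;

    // The maximum number of daily log files to keep on disk
    public int? RetainedFileCountLimit { get; set; } = 2;
}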

The [ProviderAlias("File")] attribute defines the alias for the logger that you can use to configure log filtering. You can read more about log filtering in the docs.
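The alias is what lets a "File" section in your Logging configuration apply filter rules to this provider. You can also filter a single provider in code with the standard AddFilter extension method - a small sketch (the AddFile() extension method is shown later in the post):

.ConfigureLogging(logging =>
{
    logging.AddFile();

    // Only write Warning and above from Microsoft.* categories to the file provider
    logging.AddFilter<FileLoggerProvider>("Microsoft", LogLevel.Warning);
})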

The FileLoggerProvider is used by the ILoggerFactory to create an instance of the BatchingLogger, which implements ILogger, and is used to actually write the log messages.

The batching logger

The BatchingLogger is pretty simple. The main method, Log, passes messages to the provider by calling AddMessage. The methods you typically use in your app, such as LogError and LogInformation are actually just extension methods that call down to this underlying Log method.

public class BatchingLogger : ILogger  
{
    private readonly BatchingLoggerProvider _provider;
    private readonly string _category;

    public BatchingLogger(BatchingLoggerProvider loggerProvider, string categoryName)
    {
        _provider = loggerProvider;
        _category = categoryName;
    }

    public IDisposable BeginScope<TState>(TState state)
    {
        return null;
    }

    public bool IsEnabled(LogLevel logLevel)
    {
        return logLevel != LogLevel.None;
    }

    // Write a log message
    public void Log<TState>(DateTimeOffset timestamp, LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
    {
        if (!IsEnabled(logLevel))
        {
            return;
        }

        var builder = new StringBuilder();
        builder.Append(timestamp.ToString("yyyy-MM-dd HH:mm:ss.fff zzz"));
        builder.Append(" [");
        builder.Append(logLevel.ToString());
        builder.Append("] ");
        builder.Append(_category);
        builder.Append(": ");
        builder.AppendLine(formatter(state, exception));

        if (exception != null)
        {
            builder.AppendLine(exception.ToString());
        }

        _provider.AddMessage(timestamp, builder.ToString());
    }

    public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
    {
        Log(DateTimeOffset.Now, logLevel, eventId, state, exception, formatter);
    }
}

Hopefully this class is pretty self explanatory - most of the work is done in the logger provider.

The remaining piece of the puzzle is to provide the extension methods that let you easily configure the provider for your own app.

Extension methods to add the provider to your application

In ASP.NET Core 2.0, logging providers are added to your application by adding them directly to the WebHostBuilder in Program.cs. This is typically done using extension methods on the ILoggingBuilder. We can create a simple extension method, and even add an override to allow configuring the logging provider's options (filenames, intervals, file size limits etc).

public static class FileLoggerFactoryExtensions  
{
    public static ILoggingBuilder AddFile(this ILoggingBuilder builder)
    {
        builder.Services.AddSingleton<ILoggerProvider, FileLoggerProvider>();
        return builder;
    }

    public static ILoggingBuilder AddFile(this ILoggingBuilder builder, Action<FileLoggerOptions> configure)
    {
        builder.AddFile();
        builder.Services.Configure(configure);

        return builder;
    }
}

In ASP.NET Core 2.0, logging providers are added using DI, so adding our new logging provider just requires adding the FileLoggerProvider to DI, as in the AddFile() method above.

With the provider complete, we can add it to our application:

public class Program  
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .ConfigureLogging(builder => builder.AddFile()) // <- add this line
            .UseStartup<Startup>()
            .Build();
}

This adds the FileLoggerProvider to the application, in addition to the Console and Debug providers. Now when we write logs in our application, they will also be written to a file.
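Based on the format assembled in BatchingLogger.Log above (timestamp, level, category, then the message), each line in the file will look something like this:

2017-08-22 14:35:12.482 +00:00 [Information] ExampleApplication.Controllers.HomeController: Calling home controller action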

Summary

Creating an ILoggerProvider will rarely be necessary, especially thanks to established frameworks like Serilog and NLog that integrate with ASP.NET Core. Wherever possible, I suggest looking at one of these, but if you don't want to use a replacement framework like this, then using a dedicated ILoggerProvider is an option.

Implementing a new logging provider requires creating an ILogger implementation and an ILoggerProvider implementation. In this post I showed an example of a rolling file provider. For the full details and source code, check out the project on GitHub, or the NuGet package. All comments, bugs and suggestions welcome, and credit to the ASP.NET team for creating the code I based this on!


Andrew Lock: Aligning strings within string.Format and interpolated strings

Aligning strings within string.Format and interpolated strings

I was browsing through the MSDN docs the other day, trying to remind myself of the various standard ToString() format strings, when I spotted something I have somehow missed in all my years of .NET - alignment components.

This post is for those of you who have also managed to miss this feature, looking at how you can use alignment components both with string.Format and when you are using string interpolation.

Right-aligning currencies in format strings

I'm sure the vast majority of people already know how format strings work in general, so I won't dwell on it much here. In this post I'm going to focus on formatting numbers, as formatting currencies seems like the canonical use case for alignment components.

The following example shows a simple console program that formats three decimals as currencies:

class Program  
{
    readonly static decimal val1 = 1;
    readonly static decimal val2 = 12;
    readonly static decimal val3 = 1234.12m;

    static void Main(string[] args)
    {
        Console.OutputEncoding = System.Text.Encoding.Unicode;

        Console.WriteLine($"Number 1 {val1:C}");
        Console.WriteLine($"Number 2 {val2:C}");
        Console.WriteLine($"Number 3 {val3:C}");
    }
}

As you can see, we are using the standard c currency formatter in an interpolated string. Even though we are using interpolated strings, the output is identical to the output you get if you use string.Format or pass arguments to Console.WriteLine directly. All of the following are the same:

Console.WriteLine($"Number 1 {val1:C}");  
Console.WriteLine("Number 1 {0:C}", val1);  
Console.WriteLine(string.Format("Number 1 {0:C}", val1));  

When you run the original console app, you'll get something like the following (depending on your current culture):

Number 1 £1.00  
Number 2 £12.00  
Number 3 £1,234.12  

Note that the numbers are slightly hard to read - the following is much clearer:

Number 1      £1.00  
Number 2     £12.00  
Number 3  £1,234.12  

This format is much easier to scan - you can easily see that Number 3 is significantly larger than the other numbers.

To right-align formatted strings as we have here, you can use an alignment component in your string.Format format specifiers. An alignment component specifies the total number of characters to use to format the value.

The formatter formats the number as usual, and then adds the necessary number of whitespace characters to pad the result up to the specified alignment. You specify the alignment component after the value to format, separated by a comma. For example, the format string "{value,5}" when value=1 would give the string "    1": 4 spaces followed by the single formatted character, 5 characters in total.
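A quick sketch to make the whitespace easier to see:

var value = 1;

// prints '    1' - the 1 is padded with 4 spaces to fill 5 characters
Console.WriteLine($"'{value,5}'");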

You can use a formatting string (such as standard values like C or custom values like dd-MMM-yyyy and ###) in combination with an alignment component. Simply place the format component after the alignment component and a colon :, for example "{value,10:###}". The integer after the comma is the alignment component, and the string after the colon is the formatting component.

So, going back to our original requirement of right aligning three currency strings, the following would do the trick, with the values previously presented:

decimal val1 = 1;  
decimal val2 = 12;  
decimal val3 = 1234.12m;

Console.WriteLine($"Number 1 {val1,10:C}");  
Console.WriteLine($"Number 2 {val2,10:C}");  
Console.WriteLine($"Number 3 {val3,10:C}");

// Number 1      £1.00
// Number 2     £12.00
// Number 3  £1,234.12

Oversized strings

Now, you may have spotted a slight issue with this alignment example. I specified that the total width of the formatted string should be 10 characters - what happens if the number is bigger than that?

In the following example, I'm formatting a long in the same way as the previous, smaller, numbers:

class Program  
{
    readonly static decimal val1 = 1;
    readonly static decimal val2 = 12;
    readonly static decimal val3 = 1234.12m;
    readonly static long _long = 999_999_999_999;

    static void Main(string[] args)
    {
        Console.OutputEncoding = System.Text.Encoding.Unicode;

        Console.WriteLine($"Number 1 {val1,10:C}");
        Console.WriteLine($"Number 2 {val2,10:C}");
        Console.WriteLine($"Number 3 {val3,10:C}");
        Console.WriteLine($"Number 3 {_long,10:C}");
    }
}

You can see the effect of this 'oversized' number below:

Number 1      £1.00  
Number 2     £12.00  
Number 3  £1,234.12  
Number 3 £999,999,999,999.00  

As you can see, when a formatted number doesn't fit in the requested alignment characters, it spills out to the right. Essentially the alignment component indicates the minimum number of characters the formatted value should occupy.

Padding left-aligned strings

You've seen how to right-align currencies, but what if the labels associated with these values are not all the same length, as in the following example:

Console.WriteLine($"A small number {val1,10:C}");  
Console.WriteLine($"A bit bigger {val2,10:C}");  
Console.WriteLine($"A bit bigger again {val3,10:C}");  

Written like this, our good work aligning the currencies is completely undone by the unequal length of our labels:

A small number      £1.00  
A bit bigger     £12.00  
A bit bigger again  £1,234.12  

Now, there's an easy way to fix the problem in this case, just manually pad with whitespace:

Console.WriteLine($"A small number     {val1,10:C}");  
Console.WriteLine($"A bit bigger       {val2,10:C}");  
Console.WriteLine($"A bit bigger again {val3,10:C}");  

But what if these labels were dynamic? In that case, we could use the same alignment component trick. Again, the integer passed to the alignment component indicates the minimum number of characters, but this time we use a negative value to indicate the values should be left aligned:

var label1 = "A small number";  
var label2 = "A bit bigger";  
var label3 = "A bit bigger again";

Console.WriteLine($"{label1,-18} {val1,10:C}");  
Console.WriteLine($"{label2,-18} {val2,10:C}");  
Console.WriteLine($"{label3,-18} {val3,10:C}");  

With this technique, when the strings are formatted, we get nicely formatted currencies and labels.

A small number          £1.00  
A bit bigger           £12.00  
A bit bigger again  £1,234.12  

Limitations

Now, there's one big limitation when it comes to using alignment components. In the previous example, we had to explicitly set the alignment component to a length of 18 characters. That feels a bit clunky.

Ideally, we'd probably prefer to do something like the following:

var maxLength = Math.Max(label1.Length, label2.Length);  
Console.WriteLine($"{label1,-maxLength} {val1,10:C}");  
Console.WriteLine($"{label2,-maxLength} {val2,10:C}");  

Unfortunately, this doesn't compile - the alignment component has to be a constant, so you can't use maxLength there. Ah well.
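As a workaround (this is plain string padding rather than anything specific to format strings), you can pad the label yourself with String.PadRight, which is happy to take a runtime value:

var maxLength = Math.Max(label1.Length, Math.Max(label2.Length, label3.Length));

Console.WriteLine($"{label1.PadRight(maxLength)} {val1,10:C}");
Console.WriteLine($"{label2.PadRight(maxLength)} {val2,10:C}");
Console.WriteLine($"{label3.PadRight(maxLength)} {val3,10:C}");

PadRight pads the string with spaces up to the given total width, which gives the same effect as a left-aligned alignment component.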

Summary

You can use alignment components in your format strings to both right-align and left-align your formatted values. This pads the formatted values with whitespace to either right-align (positive values) or left-align (negative values) the formatted value. This is particularly useful for right-aligning currencies in strings.


Andrew Lock: Using CancellationTokens in ASP.NET Core MVC controllers

Using CancellationTokens in ASP.NET Core MVC controllers

In this post I'll show how you can use a CancellationToken in your ASP.NET Core action method to stop execution when a user cancels a request from their browser. This can be useful if you have long running requests that you don't want to continue using up resources when a user clicks "stop" or "refresh" in their browser.

I'm not really going to cover any of the details of async, await, Tasks or CancellationTokens in this post, I'm just going to look at how you can inject a CancellationToken into your action methods, and use that to detect when a user has cancelled a request.

Long running requests and cancellation

Have you ever been on a website where you've made a request for a page, and it just sits there, supposedly loading? Eventually you get bored and click the "Stop" button, or maybe hammer F5 to reload the page. Users expect a page to load pretty much instantly these days, and when it doesn't, a quick refresh can be very tempting.

That's all well and good for the user, but what about your poor server? If the action method the user is hitting takes a long time to run, then refreshing five times will fire off 5 requests. Now you're doing 5 times the work. That's the default behaviour in MVC - even though the user has refreshed the browser, which cancels the original request, your MVC action won't know that the value it's computing is going to be thrown away at the end of it!

In this post, we'll assume you have an MVC action that can take some time to complete, before sending a response to the user. While that action is processing, the user might cancel the request directly, or refresh the page (which effectively cancels the original request, and initiates a new one).

I'm ignoring the fact that long running actions are generally a bad idea. If you find yourself with many long running actions in your app, you might be better off considering a solution based on CQRS and messaging queues, so you can quickly return a response to the user, and can process the result of the action on a background thread.

For example, consider the following MVC controller. This is a toy example, that simply waits for 10s before returning a message to the user, but the Task.Delay() could be any long-running process, such as generating a large report to return to the user.

public class SlowRequestController : Controller  
{
    private readonly ILogger _logger;

    public SlowRequestController(ILogger<SlowRequestController> logger)
    {
        _logger = logger;
    }

    [HttpGet("/slowtest")]
    public async Task<string> Get()
    {
        _logger.LogInformation("Starting to do slow work");

        // slow async action, e.g. call external api
        await Task.Delay(10_000);

        var message = "Finished slow delay of 10 seconds.";

        _logger.LogInformation(message);

        return message;
    }
}

If we hit the URL /slowtest then the request will run for 10s, and eventually will return the message:

If we check the logs, you can see the whole action executed as expected:

So now, what happens if the user refreshes the browser, half way through the request? The browser never receives the response from the first request, but as you can see from the logs, the action method executes to completion twice - once for the first (cancelled) request, and once for the second (refresh) request:

Whether this is correct behaviour will depend on your app. If the request modifies state, then you may not want to halt execution mid-way through a method. On the other hand, if the request has no side-effects, then you probably want to stop the (presumably expensive) action as soon as you can.

ASP.NET Core provides a mechanism for the web server (e.g. Kestrel) to signal when a request has been cancelled using a CancellationToken. This is exposed as HttpContext.RequestAborted, but you can also inject it automatically into your actions using model binding.
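For reference, a minimal sketch of reading the token straight off HttpContext inside an action, rather than binding it (the bound-parameter approach shown below is usually tidier, but both give you the same underlying token):

// Inside any controller action: the token the server cancels
// when the client aborts the request
var cancellationToken = HttpContext.RequestAborted;

await Task.Delay(10_000, cancellationToken);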

Using CancellationTokens in your MVC Actions

CancellationTokens are lightweight objects that are created by a CancellationTokenSource. When a CancellationTokenSource is cancelled, it notifies all the consumers of the CancellationToken. This allows one central location to notify all of the code paths in your app that cancellation was requested.
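As a tiny stand-alone illustration of that relationship:

var cts = new CancellationTokenSource();
var token = cts.Token;

Console.WriteLine(token.IsCancellationRequested); // False

// Cancelling the source notifies every consumer of its tokens
cts.Cancel();

Console.WriteLine(token.IsCancellationRequested); // True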

When cancelled, the IsCancellationRequested property of the cancellation token will be set to True, to indicate that the CancellationTokenSource has been cancelled. Depending on how you are using the token, you may or may not need to check this property yourself. I'll touch on this a little more in the next section, but for now, let's see how to use a CancellationToken in our action methods.

Let's consider the previous example again. We have a long-running action method (which, for example, is generating a read-only report by calling out to a number of other APIs). As it is an expensive method, we want to stop executing the action as soon as possible if the request is cancelled by the user.

The following code shows how we can hook into the central CancellationTokenSource for the request, by injecting a CancellationToken into the action method, and passing the parameter to the Task.Delay call:

public class SlowRequestController : Controller  
{
    private readonly ILogger _logger;

    public SlowRequestController(ILogger<SlowRequestController> logger)
    {
        _logger = logger;
    }

    [HttpGet("/slowtest")]
    public async Task<string> Get(CancellationToken cancellationToken)
    {
        _logger.LogInformation("Starting to do slow work");

        // slow async action, e.g. call external api
        await Task.Delay(10_000, cancellationToken);

        var message = "Finished slow delay of 10 seconds.";

        _logger.LogInformation(message);

        return message;
    }
}

MVC will automatically bind any CancellationToken parameters in an action method to the HttpContext.RequestAborted token, using the CancellationTokenModelBinder. This model binder is registered automatically when you call services.AddMvc() (or services.AddMvcCore()) in Startup.ConfigureServices().

With this small change, we can test out our scenario again. We'll make an initial request, which starts the long-running action, and then we'll reload the page. As you can see from the logs, the first request never completes. Instead the Task.Delay call throws a TaskCanceledException when it detects that the CancellationToken.IsCancellationRequested property is true, immediately halting execution.

Shortly after the request is cancelled by the user refreshing the browser, the original request is aborted with a TaskCanceledException which propagates back through the MVC filter pipeline, and back up the middleware pipeline.

In this scenario, the Task.Delay() method keeps an eye on the CancellationToken for you, so you never need to manually check if the token has been cancelled yourself. Depending on your scenario, you may be able to rely on framework methods like these to check the state of the CancellationToken, or you may have to watch for cancellation requests yourself.

Checking the cancellation state

If you're calling a built in method that supports cancellation tokens, like Task.Delay() or HttpClient.SendAsync(), then you can just pass in the token, and let the inner method take care of actually cancelling (throwing) for you.
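For example, an outgoing HTTP call can simply have the request's token forwarded to it (assuming an injected HttpClient stored in an _httpClient field):

// If the user cancels the request, the outgoing call is cancelled too
var response = await _httpClient.GetAsync("https://example.com/api/report", cancellationToken);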

In other cases, you may have some synchronous work you're doing, which you want to be able to cancel. For example, imagine you're building a report to calculate all of the commission due to a company's employees. You're looping over every employee, and then looping over each sale they've made.

A simple solution to be able to cancel this report generation mid-way would be to check the CancellationToken inside the for loop, and abandon ship if the user cancels the request. The following example represents this kind of situation by looping 10 times, and performing some synchronous (non-cancellable) work, represented by the call to Thread.Sleep(). At the start of each loop, we check the cancellation token and throw if cancellation has been requested. This lets us add cancellation to an otherwise long-running synchronous process.

public class SlowRequestController : Controller  
{
    private readonly ILogger _logger;

    public SlowRequestController(ILogger<SlowRequestController> logger)
    {
        _logger = logger;
    }

    [HttpGet("/slowtest")]
    public async Task<string> Get(CancellationToken cancellationToken)
    {
        _logger.LogInformation("Starting to do slow work");

        for(var i=0; i<10; i++)
        {
            cancellationToken.ThrowIfCancellationRequested();
            // slow non-cancellable work
            Thread.Sleep(1000);
        }
        var message = "Finished slow delay of 10 seconds.";

        _logger.LogInformation(message);

        return message;
    }
}

Now if you cancel the request, the call to ThrowIfCancellationRequested() will throw an OperationCanceledException, which again will propagate up the filter pipeline and up the middleware pipeline.

Tip: You don't have to use ThrowIfCancellationRequested(). You could check the value of IsCancellationRequested and exit the action gracefully. This article contains some general best practice patterns for working with cancellation tokens.
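A rough sketch of that graceful approach, reworking the loop from above:

for (var i = 0; i < 10; i++)
{
    if (cancellationToken.IsCancellationRequested)
    {
        // Exit quietly instead of throwing, so there's no exception to handle downstream
        _logger.LogInformation("Request was cancelled, stopping work early");
        return "Request cancelled";
    }

    // slow non-cancellable work
    Thread.Sleep(1000);
}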

Typically, exceptions in action methods are bad, and this exception is treated no differently. If you're using the ExceptionHandlerMiddleware or DeveloperExceptionMiddleware in your pipeline, these will attempt to handle the exception, and generate a user-friendly error message. Of course, the request has been cancelled, so the user will never see this message!

Rather than filling your logs with exception messages from cancelled requests, you will probably want to catch these exceptions. A good candidate for catching cancellation exceptions from your MVC actions is an ExceptionFilter.

Catching cancellations with an ExceptionFilter

ExceptionFilters are an MVC concept that can be used to handle exceptions that occur either in your action methods, or in your action filters. If you're not familiar with the filter pipeline, I recommend checking out the documentation.

You can apply ExceptionFilters at the action level, at the controller level (in which case they apply to every action in the controller), or at the global level (in which case they apply to every action in your app). Typically they're implemented as attributes, so you can decorate your action methods with them.

For this example, I'm going to create a simple ExceptionFilter and add it to the global filters. We'll handle the exception, log it, and create a simple response so that we can just wind up the request as quick as possible. The actual response (Result) we generate doesn't really matter, as it's never getting sent to the browser, so our goal is to handle the exception in as tidy a way as possible.

public class OperationCancelledExceptionFilter : ExceptionFilterAttribute  
{
    private readonly ILogger _logger;

    public OperationCancelledExceptionFilter(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<OperationCancelledExceptionFilter>();
    }
    public override void OnException(ExceptionContext context)
    {
        if(context.Exception is OperationCanceledException)
        {
            _logger.LogInformation("Request was cancelled");
            context.ExceptionHandled = true;
            context.Result = new StatusCodeResult(400);
        }
    }
}

This filter is very simple. It derives from the base ExceptionFilterAttribute for simplicity, and overrides the OnException method. This provides an ExceptionContext object with information about the exception, the action method being executed, the ModelState - all sorts of interesting stuff!

All we care about are the OperationCanceledException exceptions, and if we get one, we just write a log message, mark the exception as handled, and return a 400 result. Obviously we could log more (the URL would be an obvious start), but you get the idea.

Note that we are handling OperationCanceledException. The Task.Delay method throws a TaskCanceledException when cancelled, but that derives from OperationCanceledException, so we'll catch both types with this filter.

I'm not going to argue about whether this should be a 200/400/500 status code result. The request is cancelled and the client will never see it, so it really doesn't matter that much. I chose to go with a 400 result, but you have to be aware that if you have any middleware in place to catch errors like this, such as the StatusCodePagesMiddleware, then it could end up catching the response and doing pointless extra work to generate a "friendly" error page. On the other hand, if you return a 200, be careful if you have middleware that might cache the response to this "successful" request!

To hook up the exception filter globally, you add it in the call to services.AddMvc() in Startup.ConfigureServices:

public class Startup  
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc(options =>
        {
            options.Filters.Add<OperationCancelledExceptionFilter>();
        });
    }
}

Now if the user refreshes their browser mid request, the request will still be cancelled, but we are back to a nice log message, instead of exceptions propagating all the way up our middleware pipeline.

Summary

Users can cancel requests to your web app at any point, by hitting the stop or reload button on your browser. Typically, your app will continue to generate a response anyway, even though Kestrel won't send it to the user. If you have a long running action method, then you may want to detect when a request is cancelled, and stop execution.

You can do this by injecting a CancellationToken into your action method, which will be automatically bound to the HttpContext.RequestAborted token for the request. You can check this token for cancellation as usual, and pass it to any asynchronous methods that support it. If the request is cancelled, an OperationCanceledException or TaskCanceledException will be thrown.

You can easily handle these exceptions using an ExceptionFilter, applied to the action or controller directly, or alternatively applied globally. The response won't be sent to the user's browser, so this isn't essential, but you can use it to tidy up your logs, and short circuit the pipeline in as efficient a manner as possible.

Thanks to @purekrome for requesting this post and even providing the code outline!


Darrel Miller: HTTP Pattern Index

When building HTTP based applications we are limited to a small set of HTTP methods in order to achieve the goals of our application. Once our needs go beyond simple CRUD style manipulation of resource representations, we need to be a little more creative in the way we manipulate resources in order to achieve more complex goals.

The following patterns are based on scenarios that I myself have used in production applications, or I have seen others implement. These patterns are language agnostic, domain agnostic and to my knowledge, exist within the limitations of the REST constraints.


  • Alias - A resource designed to provide a logical identifier but without being responsible for incurring the costs of transferring the representation bytes.
  • Action - Coming soon: A processing resource used to convey a client's intent to invoke some kind of unsafe action on a secondary resource.
  • Bouncer - A resource designed to accept a request body containing complex query parameters and redirect to a new location, to enable the results of complex and expensive queries to be cached.
  • Builder - Coming soon: A builder resource is much like a factory resource in that it is used to create another resource; however, a builder is a transient resource that enables idempotent creation and allows the client to specify values that cannot change over the lifetime of the created resource.
  • Bucket - A resource used to indicate the status of a "child" resource.
  • Discovery - This type of resource is used to provide a client with the information it needs to be able to access other resources.
  • Factory - A factory resource is one that is used to create another resource.
  • Miniput - A resource designed to enable partial updates to another resource.
  • Progress - A progress resource is usually a temporary resource that is created automatically by the server to provide status on some long running process that has been initiated by a client.
  • Sandbox - Coming soon: A processing resource that is paired with a regular resource to enable making "what if" style updates and seeing what the results would have been if applied against the regular resource.
  • Toggle - Coming soon: A resource that has two distinct states and can easily be switched between those states.
  • Whackamole - A type of resource that, when deleted, re-appears as a different resource.
  • Window - Coming soon: A resource that provides access to a subset of a larger set of information through the use of parameters that filter, project and zoom information from the complete set.


Anuraj Parameswaran: Introduction to Razor Pages in ASP.NET Core

This post is about Razor Pages in ASP.NET Core. Razor Pages is a new feature of ASP.NET Core MVC that makes coding page-focused scenarios easier and more productive. Released with ASP.NET Core 2.0, Razor Pages is another way of building applications, built on top of ASP.NET Core MVC. Razor Pages will be helpful for beginners, as well as for developers who are coming from other web application development backgrounds like PHP or classic ASP. Razor Pages fits well in small scenarios where building an application in MVC is overkill.


Andrew Lock: The SDK 'Microsoft.Net.Sdk.Web' specified could not be found

The SDK 'Microsoft.Net.Sdk.Web' specified could not be found

This article describes why you get the error "The SDK 'Microsoft.Net.Sdk.Web' specified could not be found" when creating a new project in Visual Studio 2017 15.3, which prevents the project from loading, and how to fix it.

tl;dr; I had a rogue global.json sitting in a parent folder that was tying the SDK version to 1.x. Removing that (or adding a global.json for 2.x) fixed the problem.

Update: Shortly after publishing this post, I noticed a tweet from Patrik who was getting a similar error, but for a different situation. He had installed the VS 2017 15.3 update, and could no longer open ASP.NET Core 1.1 projects!

It turns out, he'd uncovered the root of the problem, and the issue I was having - VS 2017 update 3 is incompatible with the 1.0.0 SDK:

Kudos to him for figuring it out!

2.0 all the things

As I'm sure anyone who's reading this is aware, Microsoft released the final version of .NET Standard 2.0, .NET Core 2.0, and ASP.NET Core 2.0 yesterday. These bring a huge number of changes, perhaps the most important being the massive increase in API surface brought by .NET Standard 2.0, which will make porting applications to .NET Core much easier.

As part of the release, Microsoft also released Visual Studio 2017 update 3. This also has a bunch of features, but most importantly it supports .NET Core 2.0. Before this point, if you wanted to play with the .NET Core 2.0 bits you had to install the preview version of Visual Studio.

That's no longer as scary as it once was, with VS's new lightweight installer and side-by-side installs. But I've been burned one too many times, and just didn't feel like risking having to pave my machine, so I decided to hold off on the preview version. That didn't stop me playing with the preview bits of course: OmniSharp means developing in VS Code with the CLI is almost as good, and JetBrains Rider went RTM a couple of weeks ago.

Still, I was excited to play with 2.0 on my home turf, in Visual Studio, so I:

  • Opened up the Visual Studio Installer program - This should force VS to check for updates, instead of waiting for it to notice that an update was available. It still took a little while (10 mins) for 15.3 to become available, but I clicked the update button as soon as it was available

  • Installed the .NET Core 2.0 SDK from here - You have to do this step separately at the moment. Once this is installed, the .NET Core 2.0 templates will light up in Visual Studio.

With both of these installed I decided on a quick test to make sure everything was running smoothly. I'd create a basic app using new 2.0 templates.

Creating a new ASP.NET Core 2.0 web app

The File > New Project experience is pretty much the same in ASP.NET Core 2.0, but there are some additional templates available after you choose ASP.NET Core Web Application. If you switch the framework version to ASP.NET Core 2.0, you'll see some new templates appear, including SPA templates for Angular and React.js:

I left everything at the defaults - no Docker support enabled, no authentication - and selected Web Application (Model-View-Controller).

Note that the templates have been renamed a little. The Web Application template creates a new project using Razor Pages, while the Web Application (Model-View-Controller) template creates a project using separate controllers.

Click OK, and wait for the template to scaffold… and …

Oh dear. What's going on here?

So, there was clearly a problem creating the solution. My first thought was that it was a bug in the new VS 2017 update. A little odd seeing as no one else on Twitter seemed to have mentioned it, but not overly surprising given it had just been released. I should expect some kinks, right?

A quick googling for the error turned up this issue, but that seemed to suggest the error was an old one that had been fixed.

I gave it a second go, but sure enough, the same error occurred. Clicking OK left me with a solution with no projects.

The project template was created successfully on disk, so I thought, why not just add it to the solution directly: right-click on the solution file, Add > Existing Project?

Hmmm, so definitely something significantly wrong here...

Check your global.json

The error was complaining that the SDK was not found. Why would that happen? I had definitely installed the .NET Core 2.0 SDK, and VS could definitely see it, as it had shown me the 2.0 templates.

It was at this point I had an epiphany. A while back, when experimenting with a variety of preview builds, I had recurring issues when I would switch back and forth between preview projects and ASP.NET Core 1.0 projects.

To get round the problem, I created a sub folder in my Repos folder for preview builds, and dropped a global.json into the folder for the newer SDK, and placed the following global.json in the root of my Repos folder:

{
  "sdk": {
    "version": "1.0.0"
  }
}

Any time I created a project in the Previews folder, it would use the preview SDK, but a project created anywhere else would use the stable 1.0.0 SDK. This was the root of my problem.

I was trying to create an ASP.NET Core 2.0 project in a folder tied to the 1.0.0 SDK. That older SDK doesn't support the new 2.0 projects, so VS was borking when it tried to load the project.

The simple fix was to either delete the global.json entirely (the highest SDK version will be used in that case), or update it to 2.0.0.

In general, you can always use the latest version of the SDK to build your projects. The 2.0.0 SDK can be used to build 1.0.0 projects.

After updating the global.json, VS was able to add the existing project, and to create new projects with no issues.

Summary

I was running into an issue where creating a new ASP.NET Core 2.0 project was giving me an error The SDK 'Microsoft.Net.Sdk.Web' specified could not be found, and leaving me unable to open the project in Visual Studio. The problem was the project was created in a folder that contained a global.json file, tying the SDK version to 1.0.0.

Deleting the global.json, or updating it to 2.0.0, fixed the issue. Be sure to check parent folders too - if any parent folder contains a global.json, the SDK version specified in the "closest" folder will be used.


Damien Bowden: Angular Configuration using ASP.NET Core settings

This post shows how ASP.NET Core application settings can be used to configure an Angular application. ASP.NET Core provides excellent support for different configuration per environment, so using this for an Angular application can be very useful. With CI, a single release build can then be deployed with different configurations, instead of producing a different release build per deployment target.

Code: https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow/tree/master/src/AngularClient

ASP.NET Core Hosting application

The ClientAppSettings class is used to load the strongly typed settings from the appsettings.json file. The class contains the properties required for OIDC configuration in the SPA and the required API URLs. These properties have different values per deployment, so we do not want to hard-code them in a TypeScript file, or have to change them with each build.

namespace AngularClient.ViewModel
{
    public class ClientAppSettings
    {
        public string  stsServer { get; set; }
        public string redirect_url { get; set; }
        public string client_id { get; set; }
        public string response_type { get; set; }
        public string scope { get; set; }
        public string post_logout_redirect_uri { get; set; }
        public bool start_checksession { get; set; }
        public bool silent_renew { get; set; }
        public string startup_route { get; set; }
        public string forbidden_route { get; set; }
        public string unauthorized_route { get; set; }
        public bool log_console_warning_active { get; set; }
        public bool log_console_debug_active { get; set; }
        public string max_id_token_iat_offset_allowed_in_seconds { get; set; }
        public string apiServer { get; set; }
        public string apiFileServer { get; set; }
    }
}

The appsettings.json file contains the actual values which will be used for each different environment.

{
  "ClientAppSettings": {
    "stsServer": "https://localhost:44318",
    "redirect_url": "https://localhost:44311",
    "client_id": "angularclient",
    "response_type": "id_token token",
    "scope": "dataEventRecords securedFiles openid profile",
    "post_logout_redirect_uri": "https://localhost:44311",
    "start_checksession": false,
    "silent_renew": false,
    "startup_route": "/dataeventrecords",
    "forbidden_route": "/forbidden",
    "unauthorized_route": "/unauthorized",
    "log_console_warning_active": true,
    "log_console_debug_active": true,
    "max_id_token_iat_offset_allowed_in_seconds": 10,
    "apiServer": "https://localhost:44390/",
    "apiFileServer": "https://localhost:44378/"
  }
}

The ClientAppSettings class is then added to the IoC in the ASP.NET Core Startup class and the ClientAppSettings section is used to fill the instance with data.

public void ConfigureServices(IServiceCollection services)
{
  services.Configure<ClientAppSettings>(Configuration.GetSection("ClientAppSettings"));
  services.AddMvc();

An MVC controller is used to make the settings public. This class gets the strongly typed settings from the IoC container and returns them in a HTTP GET request. No application secrets should be included in this HTTP GET request!

using AngularClient.ViewModel;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Options;

namespace AngularClient.Controllers
{
    [Route("api/[controller]")]
    public class ClientAppSettingsController : Controller
    {
        private readonly ClientAppSettings _clientAppSettings;

        public ClientAppSettingsController(IOptions<ClientAppSettings> clientAppSettings)
        {
            _clientAppSettings = clientAppSettings.Value;
        }

        [HttpGet]
        public IActionResult Get()
        {
            return Ok(_clientAppSettings);
        }
    }
}

Configuring the Angular application

The Angular application needs to read the settings and use these in the client application. A configClient function is used to GET the data from the server. APP_INITIALIZER could also be used, but as the settings are being used in the main AppModule, you still have to wait for the HTTP GET request to complete.

configClient() {

	// console.log('window.location', window.location);
	// console.log('window.location.href', window.location.href);
	// console.log('window.location.origin', window.location.origin);

	return this.http.get(window.location.origin + window.location.pathname + '/api/ClientAppSettings').map(res => {
		this.clientConfiguration = res.json();
	});
}

In the constructor of the AppModule, the module subscribes to the observable returned by configClient. Here the configuration values are read and the properties are set as required for the SPA application.

clientConfiguration: any;

constructor(public oidcSecurityService: OidcSecurityService, private http: Http, private configuration: Configuration) {

	console.log('APP STARTING');
	this.configClient().subscribe(config => {

		let openIDImplicitFlowConfiguration = new OpenIDImplicitFlowConfiguration();
		openIDImplicitFlowConfiguration.stsServer = this.clientConfiguration.stsServer;
		openIDImplicitFlowConfiguration.redirect_url = this.clientConfiguration.redirect_url;
		openIDImplicitFlowConfiguration.client_id = this.clientConfiguration.client_id;
		openIDImplicitFlowConfiguration.response_type = this.clientConfiguration.response_type;
		openIDImplicitFlowConfiguration.scope = this.clientConfiguration.scope;
		openIDImplicitFlowConfiguration.post_logout_redirect_uri = this.clientConfiguration.post_logout_redirect_uri;
		openIDImplicitFlowConfiguration.start_checksession = this.clientConfiguration.start_checksession;
		openIDImplicitFlowConfiguration.silent_renew = this.clientConfiguration.silent_renew;
		openIDImplicitFlowConfiguration.startup_route = this.clientConfiguration.startup_route;
		openIDImplicitFlowConfiguration.forbidden_route = this.clientConfiguration.forbidden_route;
		openIDImplicitFlowConfiguration.unauthorized_route = this.clientConfiguration.unauthorized_route;
		openIDImplicitFlowConfiguration.log_console_warning_active = this.clientConfiguration.log_console_warning_active;
		openIDImplicitFlowConfiguration.log_console_debug_active = this.clientConfiguration.log_console_debug_active;
		openIDImplicitFlowConfiguration.max_id_token_iat_offset_allowed_in_seconds = this.clientConfiguration.max_id_token_iat_offset_allowed_in_seconds;

		this.oidcSecurityService.setupModule(openIDImplicitFlowConfiguration);

		configuration.FileServer = this.clientConfiguration.apiFileServer;
		configuration.Server = this.clientConfiguration.apiServer;
	});
}

The Configuration class can then be used throughout the SPA application.

import { Injectable } from '@angular/core';

@Injectable()
export class Configuration {
    public Server = 'read from app settings';
    public FileServer = 'read from app settings';
}

I am certain there is a better way to do the Angular configuration, but not much information exists for this. APP_INITIALIZER is not so well documented. Angular CLI has its own solution, but the configuration file cannot be read per environment.

Links:

https://docs.microsoft.com/en-us/aspnet/core/fundamentals/configuration

https://docs.microsoft.com/en-us/aspnet/core/fundamentals/environments

https://www.intertech.com/Blog/deploying-angular-4-apps-with-environment-specific-info/

https://stackoverflow.com/questions/43193049/app-settings-the-angular-4-way

https://damienbod.com/2015/10/11/asp-net-5-multiple-configurations-without-using-environment-variables/



Andrew Lock: Introduction to the ApiExplorer in ASP.NET Core

One of the standard services added when you call AddMvc() or AddMvcCore() in an ASP.NET Core MVC application is the ApiExplorer. In this post I'll show a quick example of its capabilities, and give you a taste of the metadata you can obtain about your application.

Exposing your application's API with the ApiExplorer

The ApiExplorer contains functionality for discovering and exposing metadata about your MVC application. You can use it to provide details such as a list of controllers and actions, their URLs and allowed HTTP methods, parameters and response types.

How you choose to use these details is up to you - you could use it to auto-generate documentation, help pages, or clients for your application. The Swagger and Swashbuckle.AspNetCore frameworks use the ApiExplorer functionality to provide a fully featured documentation framework, and are well worth a look if that's what you're after.

For this article, I'll hook directly into the ApiExplorer to generate a simple help page for a basic Web API controller.

Introduction to the ApiExplorer in ASP.NET Core

Adding the ApiExplorer to your applications

The ApiExplorer functionality is part of the Microsoft.AspNetCore.Mvc.ApiExplorer package. This package is referenced by default when you include the Microsoft.AspNetCore.Mvc package in your application, so you generally won't need to add the package explicitly. If you are starting from a stripped down application, you can add the package directly to make the services available.

As Steve Gordon describes in his series on the MVC infrastructure, the call to AddMvc in Startup.ConfigureServices automatically adds the ApiExplorer services to your application by calling services.AddApiExplorer(), so you don't need to explicitly add anything else to your Startup class.

The call to AddApiExplorer, in turn, calls an internal method AddApiExplorerServices(), which adds the actual services you will use in your application:

internal static void AddApiExplorerServices(IServiceCollection services)  
{
    services.TryAddSingleton<IApiDescriptionGroupCollectionProvider, ApiDescriptionGroupCollectionProvider>();
    services.TryAddEnumerable(
        ServiceDescriptor.Transient<IApiDescriptionProvider, DefaultApiDescriptionProvider>());
}

This adds a default implementation of IApiDescriptionGroupCollectionProvider, which exposes the API endpoints of your application. To access the list of APIs, you just need to inject the service into your controllers or services.

Listing your application's metadata

For this app, we'll just include the default ValuesController that is added to the default web API project:

[Route("api/[controller]")]
public class ValuesController : Controller  
{
    // GET api/values
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2" };
    }
}

In addition, we'll create a simple controller, called DocumentationController, that renders details about the Web API endpoints in your application to a Razor page, as you saw earlier.

First, inject the IApiDescriptionGroupCollectionProvider into the controller. For simplicity, we'll just return this directly as the model to the Razor view page - we'll decompose the details it provides in the Razor page.

public class DocumentationController : Controller  
{
    private readonly IApiDescriptionGroupCollectionProvider _apiExplorer;
    public DocumentationController(IApiDescriptionGroupCollectionProvider apiExplorer)
    {
        _apiExplorer = apiExplorer;
    }

    public IActionResult Index()
    {
        return View(_apiExplorer);
    }
}

The provider exposes a collection of ApiDescriptionGroups, each of which contains a collection of ApiDescriptions. You can think of an ApiDescriptionGroup as a controller, and an ApiDescription as an action method.

The ApiDescription contains a wealth of information about the action method - parameters, the URL, the type of media that can be returned - basically everything you might want to know about an API!
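
To give a feel for the shape of the data before rendering it, here's a minimal sketch (not from the original post) that you could drop into the Index() action above to log each action's HTTP method and relative path, assuming the provider has been injected as _apiExplorer as shown:

foreach (var group in _apiExplorer.ApiDescriptionGroups.Items)
{
    foreach (var api in group.Items)
    {
        // e.g. "ValuesController: GET api/values"
        Console.WriteLine($"{group.GroupName}: {api.HttpMethod} {api.RelativePath}");
    }
}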

The Razor page below lists out all the APIs that are exposed in the application. There's a slightly overwhelming amount of detail here, but it lists everything you might need to know!

@using Microsoft.AspNetCore.Mvc.ApiExplorer;
@model IApiDescriptionGroupCollectionProvider

<div id="body">  
    <section class="featured">
        <div class="content-wrapper">
            <hgroup class="title">
                <h1>ASP.NET Web API Help Page</h1>
            </hgroup>
        </div>
    </section>
    <section class="content-wrapper main-content clear-fix">
        <h3>API Groups, version @Model.ApiDescriptionGroups.Version</h3>
        @foreach (var group in Model.ApiDescriptionGroups.Items)
            {
            <h4>@group.GroupName</h4>
            <ul>
                @foreach (var api in group.Items)
                {
                    <li>
                        <h5>@api.HttpMethod @api.RelativePath</h5>
                        <blockquote>
                            @if (api.ParameterDescriptions.Count > 0)
                            {
                                <h6>Parameters</h6>
                                    <dl class="dl-horizontal">
                                        @foreach (var parameter in api.ParameterDescriptions)
                                        {
                                            <dt>Name</dt>
                                            <dd>@parameter.Name,  (@parameter.Source.Id)</dd>
                                            <dt>Type</dt>
                                            <dd>@parameter.Type?.FullName</dd>
                                            @if (parameter.RouteInfo != null)
                                            {
                                                <dt>Constraints</dt>
                                                <dd>@string.Join(",", parameter.RouteInfo.Constraints?.Select(c => c.GetType().Name).ToArray())</dd>
                                                <dt>DefaultValue</dt>
                                                <dd>@parameter.RouteInfo.DefaultValue</dd>
                                                <dt>Is Optional</dt>
                                                <dd>@parameter.RouteInfo.IsOptional</dd>
                                            }
                                        }
                                    </dl>
                            }
                            else
                            {
                                <i>No parameters</i>
                            }
                        </blockquote>
                        <blockquote>
                            <h6>Supported Response Types</h6>
                            <dl class="dl-horizontal">
                                @foreach (var response in api.SupportedResponseTypes)
                                {
                                    <dt>Status Code</dt>
                                        <dd>@response.StatusCode</dd>

                                        <dt>Response Type</dt>
                                        <dd>@response.Type?.FullName</dd>

                                        @foreach (var responseFormat in response.ApiResponseFormats)
                                        {
                                            <dt>Formatter</dt>
                                            <dd>@responseFormat.Formatter?.GetType().FullName</dd>
                                            <dt>Media Type</dt>
                                            <dd>@responseFormat.MediaType</dd>
                                        }
                                }
                            </dl>

                        </blockquote>
                    </li>
                }
            </ul>
        }
    </section>
</div>  

If you run the application now, you might be slightly surprised by the response:

Introduction to the ApiExplorer in ASP.NET Core

Even though we have the default ValuesController in the project, apparently there are no APIs!

Enabling documentation of your controllers

By default, controllers in ASP.NET Core are not included in the ApiExplorer. There are a whole host of attributes you can apply to customise the metadata produced by the ApiExplorer, but the critical one here is [ApiExplorerSettings].

By applying this attribute to a controller, you can control whether or not it is included in the ApiExplorer, as well as its group name:

[Route("api/[controller]")]
[ApiExplorerSettings(IgnoreApi = false, GroupName = nameof(ValuesController))]
public class ValuesController : Controller  
{
    // GET api/values
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2" };
    }

    // other action methods
}

After applying this attribute and viewing the groups in IApiDescriptionGroupCollectionProvider, you can see that the API is now available:

Introduction to the ApiExplorer in ASP.NET Core

ApiExplorer and conventional routing

Note, you can only apply the [ApiExplorerSettings] attribute to controllers and actions that use attribute routing. If you enable the ApiExplorer on an action that uses conventional routing, you will be greeted with an error like the following:

Introduction to the ApiExplorer in ASP.NET Core

Remember, ApiExplorer really is just for your APIs! If you stick to the convention of using attribute routing for your Web API controllers and conventional routing for your MVC controllers you'll be fine, but it's just something to be aware of.

Summary

This was just a brief introduction to the ApiExplorer functionality that exposes a variety of metadata about the Web APIs in your application. You're unlikely to use it quite like this, but it's interesting to see all the introspection options available to you.


Andrew Lock: How to format response data as XML or JSON, based on the request URL in ASP.NET Core

I think it's safe to say that most ASP.NET Core applications that use a Web API return data as JSON. What with JavaScript in the browser, and JSON parsers everywhere you look, this makes perfect sense. Consequently, ASP.NET Core is very much geared towards JSON, but it is perfectly possible to return data in other formats (for example Damien Bowden recently added a Protobuf formatter to the WebApiContrib.Core project).

In this post, I'm going to focus on a very specific scenario. You want to be able to return data from a Web API action method in one of two different formats - JSON or XML, and you want to control which format is used by the extension of the URL. For example /api/Values.xml should format the result as XML, while /api/Values.json should format the result as JSON.

Using the FormatFilterAttribute to read the format from the URL

Out of the box, if you use the standard MVC service configuration by calling services.AddMvc(), the JSON formatters are configured for your application by default. All that you need to do is tell your action method to read the format from the URL using the FormatFilterAttribute.

You can add the [FormatFilter] attribute to your action methods, to indicate that the output format will be defined by the URL. The FormatFilter looks for a route parameter called format in the RouteData for the request, or in the querystring. If you want to use the .json approach I described earlier, you should make sure the route template for your actions includes a .{format} parameter:

public class ValuesController : Controller  
{
    // GET api/values
    [HttpGet("api/values.{format}"), FormatFilter]
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2" };
    }
}

Note: You can make the .format suffix optional using the syntax .{format?}, but you need to make sure the . follows a route parameter, e.g. api/values/{id}.{format?}. If you try to make the format optional in the example above (api/values.{format?}) you'll get a server error. A bit odd, but there you go…

With the route template updated, and the [FormatFilter] applied to the method, we can now test our JSON formatters:

How to format response data as XML or JSON, based on the request URL in ASP.NET Core

Success - we have returned JSON when requested! Let's give it a try with the xml suffix:

How to format response data as XML or JSON, based on the request URL in ASP.NET Core

Doh, no such luck. As I mentioned earlier, the JSON formatters are registered by default; if we want to return XML then we'll need to configure the XML formatters too.

Adding the XML formatters

In ASP.NET Core, everything is highly modular, so you only add the functionality you need to your application. Consequently, there's a separate NuGet package for the XML formatters that you need to add to your .csproj file - Microsoft.AspNetCore.Mvc.Formatters.Xml

<PackageReference Include="Microsoft.AspNetCore.Mvc.Formatters.Xml" Version="1.1.3" />  

Note: If you're using ASP.NET Core 2.0, this package is included by default as part of the Microsoft.AspNetCore.All metapackage.

Adding the package to your project lights up an extension method on the IMvcBuilder instance returned by the call to services.AddMvc(). The AddXmlSerializerFormatters() method adds both input and output formatters, so you can serialise objects to and from XML.

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMvc()
        .AddXmlSerializerFormatters();
}

Alternatively, if you only want to be able to format results as XML, but don't need to be able to read XML from a request body, you can just add the output formatter instead:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMvc(options =>
    {
        options.OutputFormatters.Add(new XmlSerializerOutputFormatter());
    });
}

By adding this output formatter we can now format objects as XML. However, if you test the XML URL again at this point, you'll still get the same 404 response as we did before. What gives?

Registering a type mapping for the format suffix

By registering the XML formatters, we now have the ability to format XML. However, the FormatFilter doesn't know how to handle the .xml suffix we're using in the request URL. To make this work, we need to tell the filter that the xml suffix maps to the application/xml MIME type.

You can register new type mappings by configuring the FormatterMappings options when you call AddMvc(). These define the mappings between the {format} parameter and the MIME type that the FormatFilter will use. For example:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMvc(options =>
    {
        options.FormatterMappings.SetMediaTypeMappingForFormat
            ("xml", MediaTypeHeaderValue.Parse("application/xml"));
        options.FormatterMappings.SetMediaTypeMappingForFormat
            ("config", MediaTypeHeaderValue.Parse("application/xml"));
        options.FormatterMappings.SetMediaTypeMappingForFormat
            ("js", MediaTypeHeaderValue.Parse("application/json"));
    })
        .AddXmlSerializerFormatters();
}

The FormatterMappings property contains a dictionary of all the suffix to MIME type mappings. You can add new ones using the SetMediaTypeMappingForFormat method, passing the suffix as the key and the MIME type as the value.

In the example above I've actually registered three new mappings. I've mapped the xml and config suffixes to XML, and added a new js suffix that maps to JSON, just to demonstrate that JSON isn't actually anything special here!

With this last piece of configuration in place, we can now finally request XML by using the .xml or .config suffix in our URLs:

How to format response data as XML or JSON, based on the request URL in ASP.NET Core

Summary

In this post you saw how to use the FormatFilter to specify the desired output format by using a file-type suffix on your URLs. To do so, there were four steps:

  1. Add the [FormatFilter] attribute to your action method
  2. Ensure the route to the action contains a {format} route parameter (or pass it in the querystring, e.g. ?format=xml)
  3. Register the output formatters you wish to support with MVC. To add both input and output XML formatters, use the AddXmlSerializerFormatters() extension method
  4. Register a new type mapping between a format suffix and a MIME type on the MvcOptions object. For example, you could add XML using:
options.FormatterMappings.SetMediaTypeMappingForFormat(  
    "xml", MediaTypeHeaderValue.Parse("application/xml"));

If a type mapping is not configured for a suffix, then you'll get a 404 Not Found response when calling the action.


Anuraj Parameswaran: Running PHP on .NET Core with Peachpie

This post is about running PHP on .NET Core with Peachpie. Peachpie is an open source PHP Compiler to .NET. This innovative compiler allows you to run existing PHP applications with the performance, speed, security and interoperability of .NET.


Andrew Lock: Customising ASP.NET Core Identity EF Core naming conventions for PostgreSQL

ASP.NET Core Identity is an authentication and membership system that lets you easily add login functionality to your ASP.NET Core application. It is designed in a modular fashion, so you can use any "stores" for users and claims that you like, but out of the box it uses Entity Framework Core to store the entities in a database.

By default, EF Core uses naming conventions for the database entities that are typical for SQL Server. In this post I'll describe how to configure your ASP.NET Core Identity app to replace the database entity names with conventions that are more common to PostgreSQL.

I'm focusing on ASP.NET Core Identity here, where the entity table name mappings have already been defined, but there's actually nothing specific to ASP.NET Core Identity in this post. You can just as easily apply this post to EF Core in general, and use more PostgreSQL-friendly conventions for all your EF Core code. See here for the tl;dr code!

Moving to PostgreSQL as a SQL Server aficionado

ASP.NET Core Identity can use any database provider that is supported by EF Core - some of which are provided by Microsoft, others are third-party or open source components. If you use the templates that come with the .NET CLI via dotnet new, you can choose SQL Server or SQLite by default. Personally, I've been working more and more with PostgreSQL, the powerful cross-platform, open source database.

As someone who's familiar with SQL Server, one of the biggest differences that can bite you when you start working with PostgreSQL is that table and column names are case sensitive! This certainly takes some getting used to, and, frankly, is a royal pain in the arse to work with if you stick to your old habits. If a table is created with uppercase characters in the table or column name, then you have to ensure you get the case right, and wrap the identifiers in double quotes, as I'll show shortly.

This is unfortunate when you come from a SQL Server world, where camel-case is the norm for table and column names. For example, imagine you have a table called AspNetUsers, and you want to retrieve the Id, Email and EmailConfirmed fields:

Customising ASP.NET Core Identity EF Core naming conventions for PostgreSQL

To query this table in PostgreSQL, you'd have to do something like:

SELECT "Id", "Email", "EmailConfirmed" FROM "AspNetUsers"  

Notice the quote marks we need? This only gets worse when you need to escape the quotes because you're calling from the command line, or defining a SQL query in a C# string, for example:

$ psql -d DatabaseWithCaseIssues -c "SELECT \"Id\", \"Email\", \"EmailConfirmed\" FROM \"AspNetUsers\" "

Clearly nobody wants to be dealing with this. Instead, the convention is to use snake_case for database objects rather than CamelCase.

snake_case > CamelCase in PostgreSQL

Snake case uses lowercase for all of the identifiers, and instead of using capitals to demarcate words, it uses an underscore, _. This is perfect for PostgreSQL, as it neatly avoids the case issue. If we rename our entity table to asp_net_users, and the corresponding fields to id, email and email_confirmed, then we neatly side-step the quoting issue:

Customising ASP.NET Core Identity EF Core naming conventions for PostgreSQL

This makes the PostgreSQL queries way simpler, especially when you would otherwise need to escape the quote marks:

$ psql -d DatabaseWithCaseIssues -c "SELECT id, email, email_confirmed FROM asp_net_users"

If you're using EF Core, then theoretically all this wouldn't matter to you. The whole point is that you don't have to write SQL code yourself, and you can just let the underlying framework generate the necessary queries. If you use CamelCase names, then the EF Core PostgreSQL database provider will happily escape all the entity names for you.

Unfortunately, reality is a pesky beast. It's just a matter of time before you find yourself wanting to write some sort of custom query directly against the database to figure out what's going on. More often than not, if it comes to this, it's because there's an issue in production and you're trying to figure out what went wrong. The last thing you need at this stressful time is to be messing with casing issues!

Consequently, I like to ensure my database tables are easy to query, even if I'll be using EF Core or some other ORM 99% of the time.

EF Core conventions and ASP.NET Core Identity

ASP.NET Core Identity takes care of many aspects of the identity and membership system of your app for you. In particular, it creates and manages the application user, claim and role entities for you, as well as a variety of entities related to third-party logins:

Customising ASP.NET Core Identity EF Core naming conventions for PostgreSQL

If you're using the EF Core package for ASP.NET Core Identity, these entities are added to an IdentityDbContext, and configured within the OnModelCreating method. If you're interested, you can view the source online - I've shown a partial definition below that just includes the configuration for the Users property, which represents the users of your app:

public abstract class IdentityDbContext<TUser>  
{
    public DbSet<TUser> Users { get; set; }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        builder.Entity<TUser>(b =>
        {
            b.HasKey(u => u.Id);
            b.HasIndex(u => u.NormalizedUserName).HasName("UserNameIndex").IsUnique();
            b.HasIndex(u => u.NormalizedEmail).HasName("EmailIndex");
            b.ToTable("AspNetUsers");
        });
        // additional configuration
    }
}

The IdentityDbContext uses the OnModelCreating method to configure the database schema. In particular, it defines the name of the user table to be "AspNetUsers" and sets the name of a number of indexes. The column names of the entities default to their C# property values, so they would also be CamelCased.

In your application, you would typically derive your own DbContext from the IdentityDbContext<>, and inherit all of the schema associated with ASP.NET Core Identity. In the example below I've done this, and specified the TUser type for the application to be ApplicationUser:

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>  
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    { }
}

With the configuration above, the database schema would use all of the default values, including the table names, and would give the database schema we saw previously. Luckily, we can override these values and replace them with our snake case values instead.

Replacing specific values with snake case

As is often the case, there are multiple ways to achieve our desired behaviour of mapping to snake case properties. The simplest conceptually is to just overwrite the values specified in IdentityDbContext.OnModelCreating() with new values. The later values will be used to generate the database schema. We simply override the OnModelCreating() method, call the base method, and then replace the values with our own:

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>  
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    { }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        base.OnModelCreating(builder);

        builder.Entity<ApplicationUser>(b =>
        {
            b.HasKey(u => u.Id);
            b.HasIndex(u => u.NormalizedUserName).HasName("user_name_index").IsUnique();
            b.HasIndex(u => u.NormalizedEmail).HasName("email_index");
            b.ToTable("asp_net_users");
        });
        // additional configuration
    }
}

Unfortunately, there's a problem with this. EF Core uses conventions to set the names for entities and properties where you don't explicitly define their schema name. In the example above, we didn't define the property names, so they will be CamelCase by default.

If we want to override these, then we need to add additional configuration for each entity property:

b.Property(u => u.EmailConfirmed).HasColumnName("email_confirmed");

Every. Single. Property.

Talk about laborious and fragile…

Clearly we need another way. Instead of trying to explicitly replace each value, we can use a different approach, which essentially creates alternative conventions based on the existing ones.

Replacing the default conventions with snake case

The ModelBuilder instance that is passed to the OnModelCreating() method contains all the details of the database schema that will be created. By default, the database object names will all be CamelCased.

By overriding the OnModelCreating method, you can loop through each table, column, foreign key and index, and replace the existing value with its snake case equivalent. The following example shows how you can do this for every entity in the EF Core model. The ToSnakeCase() extension method (shown shortly) converts a camel case string to a snake case string.

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>  
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    { }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        base.OnModelCreating(builder);

        foreach(var entity in builder.Model.GetEntityTypes())
        {
            // Replace table names
            entity.Relational().TableName = entity.Relational().TableName.ToSnakeCase();

            // Replace column names            
            foreach(var property in entity.GetProperties())
            {
                property.Relational().ColumnName = property.Name.ToSnakeCase();
            }

            foreach(var key in entity.GetKeys())
            {
                key.Relational().Name = key.Relational().Name.ToSnakeCase();
            }

            foreach(var key in entity.GetForeignKeys())
            {
                key.Relational().Name = key.Relational().Name.ToSnakeCase();
            }

            foreach(var index in entity.GetIndexes())
            {
                index.Relational().Name = index.Relational().Name.ToSnakeCase();
            }
        }
    }
}

The ToSnakeCase() method is just a simple extension method that looks for a lower case letter or number, followed by a capital letter, and inserts an underscore. There's probably a better / more efficient way to achieve this, but it does the job!

public static class StringExtensions  
{
    public static string ToSnakeCase(this string input)
    {
        if (string.IsNullOrEmpty(input)) { return input; }

        var startUnderscores = Regex.Match(input, @"^_+");
        return startUnderscores + Regex.Replace(input, @"([a-z0-9])([A-Z])", "$1_$2").ToLower();
    }
}
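
For example, assuming the extension method above is in scope, it produces conversions like this (a quick sanity check, not from the original post):

Console.WriteLine("AspNetUsers".ToSnakeCase());     // asp_net_users
Console.WriteLine("EmailConfirmed".ToSnakeCase());  // email_confirmed
Console.WriteLine("Id".ToSnakeCase());              // id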

These conventions will replace all the database object names with snake case values, but there's one table that won't be modified, the actual migrations table. This is defined when you call UseNpgsql() or UseSqlServer(), and by default is called __EFMigrationsHistory. You'll rarely need to query it outside of migrations, so I won't worry about it for now.
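
If you did want the history table to follow the same convention, the relational providers expose a MigrationsHistoryTable option you can set when configuring the provider. A rough sketch for Npgsql (the table name and connection string name here are just illustrative) might look like:

services.AddDbContext<ApplicationDbContext>(options =>
    options.UseNpgsql(
        Configuration.GetConnectionString("DefaultConnection"),
        npgsql => npgsql.MigrationsHistoryTable("__ef_migrations_history")));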

With our new conventions in place, we can add the EF Core migrations for our snake case schema. If you're starting from one of the VS or dotnet new templates, delete the default migration files created by ASP.NET Core Identity:

  • 00000000000000_CreateIdentitySchema.cs
  • 00000000000000_CreateIdentitySchema.Designer.cs
  • ApplicationDbContextModelSnapshot.cs

and create a new set of migrations using:

$ dotnet ef migrations add SnakeCaseIdentitySchema

Finally, you can apply the migrations using

$ dotnet ef database update

After the update, you can see that the database schema has been suitably updated. We have snake case table names, as well as snake case columns (you can take my word for it on the foreign keys and indexes!)

Customising ASP.NET Core Identity EF Core naming conventions for PostgreSQL

Now we have the best of both worlds - we can use EF Core for all our standard database actions, but have the option of hand-crafting SQL queries without crazy amounts of ceremony.

Note, although this article focused on ASP.NET Core Identity, it is perfectly applicable to EF Core in general.

Summary

In this post, I showed how you could modify the OnModelCreating() method so that EF Core uses snake case for database objects instead of camel case. You can look through all the entities in EF Core's model, and change the table names, column names, keys, and indexes to use snake case. For more details on the default EF Core conventions, I recommend perusing the documentation!


Andrew Lock: Building ASP.NET Core 2.0 preview 2 packages on AppVeyor

I was recently creating a new GitHub project and I wanted to target ASP.NET Core 2.0 preview 2. I like to use AppVeyor for the CI build and for publishing to MyGet/NuGet, as I can typically just copy and paste a single file between projects to get my standard build pipeline. Unfortunately, targeting the latest preview is easier said than done! In this post, I'll show how to update your appveyor.yml file so you can build your .NET Core preview libraries on AppVeyor.

Building .NET Core projects on AppVeyor

If you are targeting a .NET Core SDK version that AppVeyor explicitly supports, then you don't really have to do anything - a simple appveyor.yml file will handle everything for you. For example, the following is a (somewhat abridged) version I use on some of my existing projects:

version: '{build}'  
branches:  
  only:
  - master
clone_depth: 1  
nuget:  
  disable_publish_on_pr: true
build_script:  
- ps: .\Build.ps1
test: off  
artifacts:  
- path: .\artifacts\**\*.nupkg
  name: NuGet
deploy:  
- provider: NuGet
  name: production
  api_key:
    secure: xxxxxxxxxxxx
  on:
    branch: master
    appveyor_repo_tag: true

There's really nothing fancy here, most of this configuration is used to define when AppVeyor should run a build, and how to deploy the NuGet package to NuGet. There's essentially no configuration of the target environment required - the build simply calls the build.ps1 file to restore and build the project.

I've switched to using Cake for most of my projects these days, often based on a script from Muhammad Rehan Saeed. If this is your first time using AppVeyor to build your projects, I suggest you take a look at my previous post on using AppVeyor.

Unfortunately, if you try and build a .NET Core 2.0 preview 2 project with this script, you'll be out of luck. I found I got random, nondescript errors, such as this one:

Building ASP.NET Core 2.0 preview 2 packages on AppVeyor

Installing .NET Core 2.0 preview 2 in AppVeyor

Luckily, AppVeyor makes it easy to install additional dependencies before running your build script - you just add additional commands under the install node:

version: '{build}'  
pull_requests:  
  do_not_increment_build_number: true
install:  
  # Run additional commands here

The tricky part is working out exactly what to run! I couldn't find any official guidance on scripting the install, so I went hunting in some of the Microsoft GitHub repos. In particular I found the JavaScriptServices repo which manually installs .NET Core. The install node at the time of writing (for preview 1) was:

install:  
   # .NET Core SDK binaries
   - ps: $urlCurrent = "https://download.microsoft.com/download/3/7/F/37F1CA21-E5EE-4309-9714-E914703ED05A/dotnet-dev-win-x64.2.0.0-preview1-005977.exe"
   - ps: $env:DOTNET_INSTALL_DIR = "$pwd\.dotnetsdk"
   - ps: mkdir $env:DOTNET_INSTALL_DIR -Force | Out-Null
   - ps: $tempFileCurrent = [System.IO.Path]::Combine([System.IO.Path]::GetTempPath(), [System.IO.Path]::GetRandomFileName())
   - ps: (New-Object System.Net.WebClient).DownloadFile($urlCurrent, $tempFileCurrent)
   - ps: Add-Type -AssemblyName System.IO.Compression.FileSystem; [System.IO.Compression.ZipFile]::ExtractToDirectory($tempFileCurrent, $env:DOTNET_INSTALL_DIR)
   - ps: $env:Path = "$env:DOTNET_INSTALL_DIR;$env:Path"

There are a lot of commands in there. Most of them we can copy and paste, but the trickiest part is that download URL - GUIDs, really?

Luckily there's an easy way to find the URL for preview 2 - you can look at the release notes for the version of .NET Core you want to target.

Building ASP.NET Core 2.0 preview 2 packages on AppVeyor

The link you want is the Windows 64-bit SDK binaries. Just right-click, copy the link, and paste it into the appveyor.yml to give the final file. The full AppVeyor file from my recent CommonPasswordValidator repository is shown below:

version: '{build}'  
pull_requests:  
  do_not_increment_build_number: true
environment:  
  DOTNET_SKIP_FIRST_TIME_EXPERIENCE: true
  DOTNET_CLI_TELEMETRY_OPTOUT: true
install:  
  # Download .NET Core 2.0 Preview 2 SDK and add to PATH
  - ps: $urlCurrent = "https://download.microsoft.com/download/F/A/A/FAAE9280-F410-458E-8819-279C5A68EDCF/dotnet-sdk-2.0.0-preview2-006497-win-x64.zip"
  - ps: $env:DOTNET_INSTALL_DIR = "$pwd\.dotnetsdk"
  - ps: mkdir $env:DOTNET_INSTALL_DIR -Force | Out-Null
  - ps: $tempFileCurrent = [System.IO.Path]::GetTempFileName()
  - ps: (New-Object System.Net.WebClient).DownloadFile($urlCurrent, $tempFileCurrent)
  - ps: Add-Type -AssemblyName System.IO.Compression.FileSystem; [System.IO.Compression.ZipFile]::ExtractToDirectory($tempFileCurrent, $env:DOTNET_INSTALL_DIR)
  - ps: $env:Path = "$env:DOTNET_INSTALL_DIR;$env:Path"  
branches:  
  only:
  - master
clone_depth: 1  
nuget:  
  disable_publish_on_pr: true
build_script:  
- ps: .\Build.ps1
test: off  
artifacts:  
- path: .\artifacts\**\*.nupkg
  name: NuGet
deploy:  
- provider: NuGet
  server: https://www.myget.org/F/andrewlock-ci/api/v2/package
  api_key:
    secure: xxxxxx
  skip_symbols: true
  on:
    branch: master
- provider: NuGet
  name: production
  api_key:
    secure: xxxxxx
  on:
    branch: master
    appveyor_repo_tag: true

Now when AppVeyor runs, you can see it running the install steps before running the build script:

Building ASP.NET Core 2.0 preview 2 packages on AppVeyor

Using predictable download URLs

Shortly after battling with this issue, I took another look at the JavaScriptServices project, and noticed they'd switched to using nicer URLs for the SDK binaries. Instead of the horrible GUIDy URLs, you can use zip files stored on an Azure CDN. These URLs just require you to know the SDK version (including the build number).

It looks like preview 2 is the first version to be available at this URL, and later builds are also available if you want to work with the bleeding edge.

Summary

In this post I showed how you could use the install node of an appveyor.yml file to install ASP.NET Core 2.0 preview 2 into your AppVeyor build pipeline. This lets you target preview versions of .NET Core in your build pipeline, before they're explicitly supported by AppVeyor.


Andrew Lock: Creating a validator to check for common passwords in ASP.NET Core Identity

In my last post, I showed how you can create a custom validator for ASP.NET Core. In this post, I introduce a package that lets you validate that a password is not one of the most common passwords users choose.

You can find the package on GitHub and on NuGet, and can install it using dotnet add package CommonPasswordValidator. Currently, it supports ASP.NET Core 2.0 preview 2.

Full disclosure, this post is 100% inspired by the codinghorror.com article by Jeff Atwood on how they validate passwords in Discourse. If you haven't read it yet, do it now!

As Jeff describes in the appropriately named article Password Rules Are Bullshit, password rules can be a real pain. Obviously in theory, password rules make sense, but reality can be a bit different. The default Identity templates require:

  • Passwords must have at least one lowercase ('a'-'z')
  • Passwords must have at least one uppercase ('A'-'Z')
  • Passwords must have at least one digit ('0'-'9')
  • Passwords must have at least one non alphanumeric character

All these rules will theoretically increase the entropy of any passwords a user enters. But you just know that's not really what happens.

All it means is that instead of entering password, they enter Password1!

And on top of that, if you're using a password manager, these password rules can get in the way. So your 40 character random password happens to not have a digit in this time? Pretty sure it's still OK... should you really have to generate a new password?

Instead, Jeff Atwood offers 5 pieces of advice when designing your password validation:

  1. Password rules are bullshit - These rarely achieve their goal, don't make the passwords of average users better, and penalise users using password managers.

    You can easily disable password rules in ASP.NET Core Identity by disabling the composition rules.

  2. Enforce a minimum Unicode password length - Length is an easy rule for users to grasp, and in general, a longer password will be more secure than a short one

    You can similarly set the minimum length in ASP.NET Core Identity using the options pattern, e.g. options.Password.RequiredLength = 10

  3. Check for common passwords - There are plenty of stats on the terrible password choices users make when left to their own devices, and you can compile your own list by checking out the password lists available online. For example, 30% of users have a password from the top 10,000 most common passwords!

    In this post I'll describe a custom validator you can add to your ASP.NET Core Identity project to prevent users using the most common passwords

  4. Check for basic entropy - Even with a length requirement, and checking for common passwords, users can make terrible password choices like 9999999999. A simple approach to tackling this is to require a minimum number of unique characters.

    In ASP.NET Core Identity 2.0, you can require a minimum number of unique characters using options.Password.RequiredUniqueChars = 6

  5. Check for special case passwords - Users shouldn't be allowed to use their username, email or other obvious values as their password.

You can create custom validators for ASP.NET Core Identity, as I showed in my previous post.

Whether you agree 100% with these rules doesn't really matter, but I think most people will agree with at least a majority of them. Either way, preventing the most common passwords is somewhat of a no-brainer.

There's no built-in way of achieving this, but thanks to ASP.NET Core Identity's extensibility, we can create a custom validator instead.

Creating a validator to check for common passwords

ASP.NET Core Identity lets you register custom password validators. These are executed when a user registers on your site, or changes their password, and let you apply additional constraints to the password.

In my last post, I showed how to create custom validators. Creating a validator to check for common passwords is pretty simple - we load the list of forbidden passwords into a HashSet, and check that the user's password is not one of them:

public class Top100PasswordValidator<TUser> : IPasswordValidator<TUser>  
        where TUser : class
{
    static readonly HashSet<string> Passwords = PasswordLists.Top100Passwords;

    public Task<IdentityResult> ValidateAsync(UserManager<TUser> manager,
                                                TUser user,
                                                string password)
    {
        if(Passwords.Contains(password))
        {
            var result = IdentityResult.Failed(new IdentityError
            {
                Code = "CommonPassword",
                Description = "The password you chose is too common."
            });
            return Task.FromResult(result);
        }
        return Task.FromResult(IdentityResult.Success);
    }
}

This validator is pretty standard. We have a list of passwords that you are not allowed to use, stored in the static HashSet<string>. ASP.NET Core Identity will call ValidateAsync when a new user registers, passing in the new user object, and the new password.

As we don't need to access the user object itself, we can make this validator completely generic to TUser, instead of limiting it to IdentityUser<TKey> as we did in my last post.

There are plenty of different password lists we could choose from, so I chose to implement a few different variations based on the 10 million passwords from 2016, depending on how restrictive you want to be.

  • Block passwords in the top 100 most common
  • Block passwords in the top 500 most common
  • Block passwords in the top 1,000 most common
  • Block passwords in the top 10,000 most common
  • Block passwords in the top 100,000 most common

Each of these password lists is stored as an embedded resource in the NuGet package. In the new .csproj file format, you do this by removing the file from the normal wildcard inclusion and marking it as an EmbeddedResource:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <None Remove="PasswordLists\10_million_password_list_top_100.txt" />
  </ItemGroup>

  <ItemGroup>
    <EmbeddedResource Include="PasswordLists\10_million_password_list_top_100.txt" />
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.Identity" Version="2.0.0-preview2-final" />
  </ItemGroup>

</Project>  

With the lists embedded in the dll, we can simply load the passwords from the embedded resource into a HashSet.

Loading a list of strings from an embedded resource

You can read an embedded resource as a stream from the assembly using the GetManifestResourceStream() method on the Assembly type. I created a small helper class that loads the embedded file from the assembly, reads it line by line, and adds each password to the HashSet (using a case-insensitive string comparer).

internal static class PasswordLists  
{
    private static HashSet<string> LoadPasswordList(string resourceName)
    {
        HashSet<string> hashset;

        var assembly = typeof(PasswordLists).GetTypeInfo().Assembly;
        using (var stream = assembly.GetManifestResourceStream(resourceName))
        {
            using (var streamReader = new StreamReader(stream))
            {
                hashset = new HashSet<string>(
                    GetLines(streamReader),
                    StringComparer.OrdinalIgnoreCase);
            }
        }
        return hashset;
    }

    private static IEnumerable<string> GetLines(StreamReader reader)
    {
        while (!reader.EndOfStream)
        {
            yield return reader.ReadLine();
        }
    }
}

NOTE: When you pass in the resourceName to load, it must be properly namespaced. The namespace is based on the namespace of the Assembly, and the subfolder of the resource file.
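
For illustration only - the exact resource name depends on the assembly's root namespace and folder layout, so treat this as hypothetical - the Top100Passwords set referenced by the validator above might be wired up inside the PasswordLists class along these lines:

// Hypothetical wiring; the real package may use a different resource name.
internal static HashSet<string> Top100Passwords { get; } =
    LoadPasswordList("CommonPasswordValidator.PasswordLists.10_million_password_list_top_100.txt");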

Adding the custom validator to ASP.NET Core Identity

That's all there is to the validator itself. You can add it to the ASP.NET Core Identity validators collection using the AddPasswordValidator<>() method. For example:

services.AddIdentity<ApplicationUser, IdentityRole>()  
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders()
    .AddPasswordValidator<Top100PasswordValidator<ApplicationUser>>();

It's somewhat of a convention to create helper extension methods in ASP.NET Core, so we can easily add an additional extension that simplifies the above slightly:

public static class IdentityBuilderExtensions  
{        
    public static IdentityBuilder AddTop100PasswordValidator<TUser>(this IdentityBuilder builder) where TUser : class
    {
        return builder.AddPasswordValidator<Top100PasswordValidator<TUser>>();
    }
}

With this extension, you can add the validator using the following:

services.AddIdentity<ApplicationUser, IdentityRole>()  
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders()
    .AddTop100PasswordValidator<ApplicationUser>();

With the validator in place, if a user tries to use a password that's too common, they'll get a standard warning when registering on your site:

Creating a validator to check for common passwords in ASP.NET Core Identity

Summary

This post was based on the suggestion by Jeff Atwood that we should limit password composition rules, focus on length, and ensure users can't choose common passwords.

ASP.NET Core Identity lets you add custom validators. This post showed how you could create a validator that ensures the entered password isn't in the top 100 - 100,000 of the 10 million most common passwords.

You can view the source code for the validator on GitHub, or you can install the NuGet package using the command

dotnet add package CommonPasswordValidator  

Currently, the package targets .NET Core 2.0 preview 2. If you have any comments, suggestions, or bugs, please raise an issue or leave a comment! Thanks


Anuraj Parameswaran: Send Mail Using SendGrid In .NET Core

This post is about sending emails using the SendGrid API in .NET Core. SendGrid is a cloud-based SMTP provider that allows you to send email without having to maintain email servers. SendGrid manages all of the technical details, from scaling the infrastructure to ISP outreach and reputation monitoring to whitelist services and real time analytics.
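
As a rough illustration of the idea (this is a sketch, not code from the linked post; the API key and addresses are placeholders), sending a mail with the SendGrid client library looks something like this:

using System.Threading.Tasks;
using SendGrid;
using SendGrid.Helpers.Mail;

public static class MailSender
{
    public static async Task SendWelcomeMailAsync()
    {
        // The API key would normally come from configuration or user secrets
        var client = new SendGridClient("YOUR_SENDGRID_API_KEY");
        var message = MailHelper.CreateSingleEmail(
            new EmailAddress("from@example.com", "Sender"),
            new EmailAddress("to@example.com", "Recipient"),
            "Welcome",
            "Plain text body",
            "<p>HTML body</p>");
        var response = await client.SendEmailAsync(message);
    }
}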


Damien Bowden: Implementing Two-factor authentication with IdentityServer4 and Twilio

This article shows how to implement two-factor authentication with Twilio in an IdentityServer4 application that uses ASP.NET Core Identity. In Microsoft's Two-factor authentication with SMS documentation, Twilio and ASPSMS are promoted, but any SMS provider can be used.

Code: https://github.com/damienbod/AspNetCoreID4External

2017-09-23 Updated to ASP.NET Core 2.0

Setting up Twilio

Create an account and login to https://www.twilio.com/

Now create a new phone number and use the Twilio documentation to set up your account to send SMS messages. You need the Account SID, Auth Token and phone number, which are required in the application.

The phone number can be configured here:
https://www.twilio.com/console/phone-numbers/incoming

Adding the SMS support to IdentityServer4

Add the Twilio Nuget package to the IdentityServer4 project.

<PackageReference Include="Twilio" Version="5.6.5" />

The Twilio settings should be kept secret, so these configuration properties are added to the appsettings.json file with dummy values. The real values can then be supplied per deployment.

"TwilioSettings": {
  "Sid": "dummy",
  "Token": "dummy",
  "From": "dummy"
}

A configuration class is then created so that the settings can be added to the DI.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace IdentityServerWithAspNetIdentity.Services
{
    public class TwilioSettings
    {
        public string Sid { get; set; }
        public string Token { get; set; }
        public string From { get; set; }
    }
}

Now the user secrets configuration needs to be setup on your dev PC. Right click the IdentityServer4 project and add the user secrets with the proper values which you can get from your Twilio account.

{
  "MicrosoftClientId": "your_secret..",
  "MircosoftClientSecret":  "your_secret..",
  "TwilioSettings": {
    "Sid": "your_secret..",
    "Token": "your_secret..",
    "From": "your_secret..",
  }
}

The configuration class is then added to the DI in the Startup class ConfigureServices method.

var twilioSettings = Configuration.GetSection("TwilioSettings");
services.Configure<TwilioSettings>(twilioSettings);

Now the TwilioSettings can be injected into the AuthMessageSender class, which is defined in the MessageServices file if you are using the IdentityServer4 samples.

private readonly TwilioSettings _twilioSettings;

public AuthMessageSender(ILogger<AuthMessageSender> logger, IOptions<TwilioSettings> twilioSettings)
{
	_logger = logger;
	_twilioSettings = twilioSettings.Value;
}

This class is also added to the DI container in the Startup class.

services.AddTransient<ISmsSender, AuthMessageSender>();

Now the TwilioClient can be set up to send the SMS in the SendSmsAsync method.

public Task SendSmsAsync(string number, string message)
{
	// Plug in your SMS service here to send a text message.
	_logger.LogInformation("SMS: {number}, Message: {message}", number, message);
	var sid = _twilioSettings.Sid;
	var token = _twilioSettings.Token;
	var from = _twilioSettings.From;
	TwilioClient.Init(sid, token);
	MessageResource.CreateAsync(new PhoneNumber(number),
		from: new PhoneNumber(from),
		body: message);
	return Task.FromResult(0);
}
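
As a side note, the CreateAsync call above is not awaited, so the SMS is sent fire-and-forget. If you prefer to await the Twilio call, the method could be adjusted along these lines (a sketch, not from the original post):

public async Task SendSmsAsync(string number, string message)
{
	_logger.LogInformation("SMS: {number}, Message: {message}", number, message);
	TwilioClient.Init(_twilioSettings.Sid, _twilioSettings.Token);
	await MessageResource.CreateAsync(
		new PhoneNumber(number),
		from: new PhoneNumber(_twilioSettings.From),
		body: message);
}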

The SendCode.cshtml view can now be changed to send the SMS with the style, layout you prefer.

<form asp-controller="Account" asp-action="SendCode" asp-route-returnurl="@Model.ReturnUrl" method="post" class="form-horizontal">
    <input asp-for="RememberMe" type="hidden" />
    <input asp-for="SelectedProvider" type="hidden" value="Phone" />
    <input asp-for="ReturnUrl" type="hidden" value="@Model.ReturnUrl" />
    <div class="row">
        <div class="col-md-8">
            <button type="submit" class="btn btn-default">Send a verification code using SMS</button>
        </div>
    </div>
</form>

In the VerifyCode.cshtml, the ReturnUrl from the model property must be added to the form as a hidden item, otherwise your client will not be redirected back to the calling app.

<form asp-controller="Account" asp-action="VerifyCode" asp-route-returnurl="@ViewData["ReturnUrl"]" method="post" class="form-horizontal">
    <div asp-validation-summary="All" class="text-danger"></div>
    <input asp-for="Provider" type="hidden" />
    <input asp-for="RememberMe" type="hidden" />
    <input asp-for="ReturnUrl" type="hidden" value="@Model.ReturnUrl" />
    <h4>@ViewData["Status"]</h4>
    <hr />
    <div class="form-group">
        <label asp-for="Code" class="col-md-2 control-label"></label>
        <div class="col-md-10">
            <input asp-for="Code" class="form-control" />
            <span asp-validation-for="Code" class="text-danger"></span>
        </div>
    </div>
    <div class="form-group">
        <div class="col-md-offset-2 col-md-10">
            <div class="checkbox">
                <input asp-for="RememberBrowser" />
                <label asp-for="RememberBrowser"></label>
            </div>
        </div>
    </div>
    <div class="form-group">
        <div class="col-md-offset-2 col-md-10">
            <button type="submit" class="btn btn-default">Submit</button>
        </div>
    </div>
</form>

Testing the application

If using an existing client, you need to update the Identity data in the database. Each user requires that the TwoFactorEnabled field is set to true, and a mobile number needs to be set in the phone number field (or any phone which can accept SMS).

Now login with this user:

The user is redirected to the send SMS page. Click the send SMS button. This sends an SMS to the phone number defined in the Identity for the user trying to authenticate.

You should receive an SMS. Enter the code in the verify view. If no SMS was sent, check your Twilio account logs.

After a successful code validation, the user is redirected back to the consent page for the client application. If the user is not redirected, the return URL was not set in the model.

Links:

https://docs.microsoft.com/en-us/aspnet/core/security/authentication/2fa

https://www.twilio.com/

http://docs.identityserver.io/en/release/

https://www.twilio.com/use-cases/two-factor-authentication



Andrew Lock: Creating custom password validators for ASP.NET Core Identity


ASP.NET Core Identity is a membership system that lets you add user accounts to your ASP.NET Core applications. It provides the low-level services for creating users, verifying passwords and signing users in to your application, as well as additional features such as two-factor authentication (2FA) and account lockout after too many failed login attempts.

When users register on an application, they typically provide an email/username and a password. ASP.NET Core Identity lets you provide validation rules for the password, to try and prevent users from using passwords that are too simple.

In this post, I'll talk about the default password validation settings and how to customise them. Finally, I'll show how you can write your own password validator for ASP.NET Core Identity.

The default settings

By default, if you don't customise anything, Identity configures a default set of validation rules for new passwords:

  • Passwords must be at least 6 characters
  • Passwords must have at least one lowercase ('a'-'z')
  • Passwords must have at least one uppercase ('A'-'Z')
  • Passwords must have at least one digit ('0'-'9')
  • Passwords must have at least one non alphanumeric character

If you want to change these values, to increase the minimum length for example, you can do so when you add Identity to the DI container in ConfigureServices. In the following example I've increased the minimum password length from 6 to 10, and disabled the other validations:

Disclaimer: I'm not saying you should do this, it's just an example!

services.AddIdentity<ApplicationUser, IdentityRole>(options =>  
{
    options.Password.RequiredLength = 10;
    options.Password.RequireLowercase = false;
    options.Password.RequireUppercase = false;
    options.Password.RequireNonAlphanumeric = false;
    options.Password.RequireDigit = false;
})
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders();

Coming in ASP.NET Core 2.0

In ASP.NET Core Identity 2.0, which uses ASP.NET Core 2.0 (available as 2.0.0-preview2 at the time of writing), you get another configurable default setting:

  • Passwords must use at least n different characters

This lets you guard against the (stupidly popular) password "111111" for example. By default, this setting is disabled for compatibility reasons (you only need 1 unique character), but you can enable it in a similar way. The following example requires passwords of length 10, with at least 6 unique characters, one upper, one lower, one digit, and one special character.

services.AddIdentity<ApplicationUser, IdentityRole>(options =>  
{
    options.Password.RequiredLength = 10;
    options.Password.RequiredUniqueChars = 6;
})
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders();

When the default validators aren't good enough...

Whether having all of these rules when creating a password is a good idea is up for debate, but it's certainly nice to have the options there. Unfortunately, sometimes these rules aren't enough to really protect users from themselves.

For example, it's quite common for a subset of users to use their username/email as their password. This is obviously a bad idea, but unfortunately the default password rules won't necessarily catch it! In the following example, I've used my username as my password:

(Screenshot: the registration form with the username entered as the password)

and it meets all the rules: more than 6 characters, upper and lower, number, even a special character @!

And voilà, we're logged in...

(Screenshot: successfully registered and signed in)

Luckily, ASP.NET Core Identity lets you write your own password validators. Let's create a validator to catch this common no-no.

Writing a custom validator for ASP.NET Core Identity

You can create a custom validator for ASP.NET Core Identity by implementing the IPasswordValidator<TUser> interface:

public interface IPasswordValidator<TUser> where TUser : class  
{
    Task<IdentityResult> ValidateAsync(UserManager<TUser> manager, TUser user, string password);
}

One thing to note about this interface is that the TUser type parameter is only constrained to class - that means that if you create the most generic implementation of this interface, you won't be able to use properties of the user parameter.

That's fine if you're validating the password by looking at the password itself, checking the length and which character types are in it etc. Unfortunately, it's no good for the validator we're trying to create - we need access to the UserName property so we can check if the password matches.

We can get round this by implementing the validator and restricting the TUser type parameter to an IdentityUser. This is the default Identity user type created by the templates (which use EF Core under the hood), so it's still pretty generic, and it means we can now build our validator.

public class UsernameAsPasswordValidator<TUser> : IPasswordValidator<TUser>  
    where TUser : IdentityUser
{
    public Task<IdentityResult> ValidateAsync(UserManager<TUser> manager, TUser user, string password)
    {
        if (string.Equals(user.UserName, password, StringComparison.OrdinalIgnoreCase))
        {
            return Task.FromResult(IdentityResult.Failed(new IdentityError
            {
                Code = "UsernameAsPassword",
                Description = "You cannot use your username as your password"
            }));
        }
        return Task.FromResult(IdentityResult.Success);
    }
}

This validator checks if the UserName of the new TUser object passed in matches the password (ignoring case). If they match, then it rejects the password using the IdentityResult.Failed method, passing in an IdentityError (and wrapping in a Task<>).

The IdentityError class has both a Code and a Description - the Code property is used by the Identity system internally to localise the errors, and the Description is obviously an English description of the error which is used by default.

Note: Your errors won't be localised by default - I'll write a follow up post about this soon.

If the password and username are different, then the validator returns IdentityResult.Success, indicating it has no problems.

Note: The default templates use the email address for both the UserName and Email properties. If your user entities are configured differently, the username is separate from the email for example, you could check the password doesn't match either property by updating the ValidateAsync method accordingly.
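
For example, a rough sketch of a ValidateAsync method that rejects both the username and the email address (both properties are available on IdentityUser) could look like this:

public Task<IdentityResult> ValidateAsync(UserManager<TUser> manager, TUser user, string password)
{
    // Reject the password if it matches either the username or the email (ignoring case)
    if (string.Equals(user.UserName, password, StringComparison.OrdinalIgnoreCase)
        || string.Equals(user.Email, password, StringComparison.OrdinalIgnoreCase))
    {
        return Task.FromResult(IdentityResult.Failed(new IdentityError
        {
            Code = "UsernameOrEmailAsPassword",
            Description = "You cannot use your username or email address as your password"
        }));
    }

    return Task.FromResult(IdentityResult.Success);
}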

Now we have a validator, we just need to make Identity aware of it. You do this with the AddPasswordValidator<> method exposed on IdentityBuilder when configuring your app:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddIdentity<ApplicationUser, IdentityRole>()
        .AddEntityFrameworkStores<ApplicationDbContext>()
        .AddDefaultTokenProviders()
        .AddPasswordValidator<UsernameAsPasswordValidator<ApplicationUser>>();

    // EF Core, MVC service config etc
}

It looks a bit long-winded because we need to pass in the TUser generic parameter. If we're just building the validator for a single app, we could always remove the parameter altogether and simplify the signature somewhat:

public class UsernameAsPasswordValidator : IPasswordValidator<ApplicationUser>  
{
    public Task<IdentityResult> ValidateAsync(UserManager<ApplicationUser> manager, ApplicationUser user, string password)
    {
        // as before
    }
}

And then our Identity configuration becomes:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddIdentity<ApplicationUser, IdentityRole>()
        .AddEntityFrameworkStores<ApplicationDbContext>()
        .AddDefaultTokenProviders()
        .AddPasswordValidator<UsernameAsPasswordValidator>();

    // EF Core, MVC service config etc
}

Now when you try and use your username as a password to register a new user you'll get a nice friendly warning to tell you to stop being stupid!

(Screenshot: the registration form showing the "You cannot use your username as your password" validation error)

Summary

The default password validation in ASP.NET Core Identity includes a variety of password rules that you configure, such as password length, and required character types.

You can write your own password validators by implementing IPasswordValidator<TUser> and calling .AddPasswordValidator<T> when configuring Identity.

I have created a small NuGet package containing the validator from this blog post, a similar validator for validating the password does not equal the email, and one that looks for specific phrases (for example the URL or domain of your website - another popular choice for security-lacking users!).

You can find the package NetEscapades.AspNetCore.Identity.Validators on NuGet, with instructions on how to get started on GitHub. Hope you find it useful!


Damien Bowden: Adding an external Microsoft login to IdentityServer4

This article shows how to implement a Microsoft Account as an external provider in an IdentityServer4 project using ASP.NET Core Identity with a SQLite database.

Code https://github.com/damienbod/AspNetCoreID4External

2017-09-23 Updated to ASP.NET Core 2.0

Setting up the App Platform for the Microsoft Account

To set up the app, log in using your Microsoft account and open the My Applications link

https://apps.dev.microsoft.com/?mkt=en-gb#/appList

Click the ‘Add an app’ button

Give the application a name and add your email. This app is called ‘microsoft_id4_damienbod’

After you click the create button, you need to generate a new password. Save this somewhere for the application configuration; it will be the client secret when configuring the application.

Now add a new platform and choose the Web type.

Now add the redirect URL for your application. This will be https://YOUR_URL/signin-microsoft

Add the permissions as required

Application configuration

Note: The samples are at present not updated to ASP.NET Core 2.0

Clone the IdentityServer4 samples and use the 6_AspNetIdentity project from the quickstarts.
Add the Microsoft.AspNetCore.Authentication.MicrosoftAccount package using NuGet, as well as the required ASP.NET Core Identity and EF Core packages, to the IdentityServer4 server project.
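
For example, the package reference in the csproj file could look something like this (the exact version depends on the framework version you are targeting):

<PackageReference Include="Microsoft.AspNetCore.Authentication.MicrosoftAccount" Version="2.0.0" />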

The application uses SQLite with Identity. This is configured in the Startup class in the ConfigureServices method.

services.AddDbContext<ApplicationDbContext>(options =>
	   options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));

services.AddIdentity<ApplicationUser, IdentityRole>()
	.AddEntityFrameworkStores<ApplicationDbContext>()
	.AddDefaultTokenProviders()
	.AddIdentityServer();

Now the AddMicrosoftAccount extension method can be used to add the Microsoft Account external provider in the ConfigureServices method of the Startup class. The SignInScheme is set to “Identity.External” because the application is using ASP.NET Core Identity. The ClientId is the Id from the app ‘microsoft_id4_damienbod’ which was configured on the My Applications website. The ClientSecret is the generated password.

services.AddAuthentication()
	 .AddMicrosoftAccount(options => {
		  options.ClientId = _clientId;
		  options.SignInScheme = "Identity.External";
		  options.ClientSecret = _clientSecret;
	  });

services.AddMvc();

...

services.AddIdentityServer()
	 .AddSigningCredential(cert)
	 .AddInMemoryIdentityResources(Config.GetIdentityResources())
	 .AddInMemoryApiResources(Config.GetApiResources())
	 .AddInMemoryClients(Config.GetClients())
	 .AddAspNetIdentity<ApplicationUser>()
	 .AddProfileService<IdentityWithAdditionalClaimsProfileService>();

And the Configure method also needs to be configured correctly.

app.UseStaticFiles();

app.UseIdentityServer();
app.UseAuthentication();

app.UseMvc(routes =>
{
	routes.MapRoute(
		name: "default",
		template: "{controller=Home}/{action=Index}/{id?}");
});

The application can now be tested. An Angular client using OpenID Connect sends a login request to the server. The ClientId and the ClientSecret are saved using user secrets, so that the password is not committed to the source code.
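
As a rough sketch of how this can be done (the key names here are assumptions, not taken from the sample), the values can be stored with the dotnet user-secrets tool, for example dotnet user-secrets set "MicrosoftClientId" "your-client-id", and then read from the configuration in the Startup class:

// Sketch: reading the Microsoft account credentials from configuration,
// backed by user secrets in development. The key names are assumptions.
_clientId = Configuration["MicrosoftClientId"];
_clientSecret = Configuration["MicrosoftClientSecret"];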

Click the Microsoft button to login.

This redirects the user to the Microsoft Account login for the microsoft_id4_damienbod application.

After a successful login, the user is redirected to the consent page.

Click yes, and the user is redirected back to the IdentityServer4 application. If it’s a new user, a register page will be opened.

Click register and the ID4 consent page is opened.

Then the application opens.

What’s nice about the IdentityServer4 application is that it’s a simple ASP.NET Core application with standard Views and Controllers. This makes it really easy to change the flow, for example, if a user is not allowed to register or whatever.

Links

https://docs.microsoft.com/en-us/azure/app-service-mobile/app-service-mobile-how-to-configure-microsoft-authentication

http://docs.identityserver.io/en/release/topics/signin_external_providers.html



Anuraj Parameswaran: ASP.NET Core Gravatar Tag Helper

This post is about creating a tag helper in ASP.NET Core for displaying Gravatar images based on the email address. Your Gravatar is an image that follows you from site to site appearing beside your name when you do things like comment or post on a blog.


Andrew Lock: Localising the DisplayAttribute in ASP.NET Core 1.1


This is a very quick post in response to a comment asking about the state of localisation for the DisplayAttribute in ASP.NET Core 1.1. A while ago I wrote a series of posts about localising your ASP.NET Core application, using the IStringLocalizer abstraction.

  1. Adding Localisation to an ASP.NET Core application
  2. Localising the DisplayAttribute and avoiding magic strings in ASP.NET Core
  3. Url culture provider using middleware as filters in ASP.NET Core 1.1.0
  4. Applying the RouteDataRequest CultureProvider globally with middleware as filters
  5. Using a culture constraint and catching 404s with the url culture provider
  6. Redirecting unknown cultures to the default culture when using the url culture provider

The IStringLocalizer is a new way of localising your validation messages, view templates, and arbitrary strings. If you're not familiar with localisation in ASP.NET Core, I suggest checking out my first post on localisation for the benefits and pitfalls it brings, but I'll give a quick refresher here.

Brief Recap

Localisation is handled in ASP.NET Core through two main abstractions IStringLocalizer and IStringLocalizer<T>. These allow you to retrieve the localised version of a string by essentially using it as the key into a dictionary; if the key does not exist for that resource, or you are using the default culture, the key itself is returned as the resource:

public class ExampleClass  
{
    public ExampleClass(IStringLocalizer<ExampleClass> localizer)
    {
        // If the resource exists, this returns the localised string
        var localisedString1 = localizer["I exist"]; // "J'existe"

        // If the resource does not exist, the key itself is returned
        var localisedString2 = localizer["I don't exist"]; // "I don't exist"
    }
}

Resources are stored in .resx files that are named according to the class they are localising. So for example, the IStringLocalizer<ExampleClass> localiser would look for a file named (something similar to) ExampleClass.fr-FR.resx. Microsoft recommends that the resource keys/names in the .resx files are the localised values in the default culture. That way you can write your application without having to create any resource files - the supplied string will be used as the resource.
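
As a quick reminder (this is the standard setup rather than anything specific to this post), localisation is typically wired up in ConfigureServices along these lines, with "Resources" being the conventional folder for the .resx files:

public void ConfigureServices(IServiceCollection services)
{
    // .resx files are looked up under the "Resources" folder
    services.AddLocalization(options => options.ResourcesPath = "Resources");

    services.AddMvc()
        .AddViewLocalization()
        .AddDataAnnotationsLocalization();
}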

As well as arbitrary strings like this, DataAnnotations which derive from ValidationAttribute also have their ErrorMessage property localised automatically.

Finally, you can localise your Views, either providing whole replacements for your View by using filenames of the form Index.fr-FR.cshtml, or by localising specific strings in your view with another abstraction, the IViewLocalizer, which acts as a view-specific wrapper around IStringLocalizer.

Localising the DisplayAttribute in ASP.NET Core 1.0

Unfortunately, in ASP.NET Core 1.0, there was one big elephant in the room… You could localise the ValidationAttributes on your view models, but you couldn't localise the DisplayAttribute which would generate the associated labels!


That's a little unfair - you could localise the DisplayAttribute, it just required jumping through some significant hoops. You had to fall back to the ResourceManager class, use Visual Studio to generate .resx designer files, and move away from the simpler localisation approach adopted everywhere else. Instead of just passing a key to the Name or ErrorMessage property, you had to remember to set the ResourceType too:

public class HomeViewModel  
{
    [Required(ErrorMessage = "The field is required")]
    [EmailAddress(ErrorMessage = "Not a valid email address")]
    [Display(Name = "Your email address", ResourceType = typeof(Resources.ViewModels_HomeViewModel))]
    public string Email { get; set; }
}

Localising the DisplayAttribute in ASP.NET Core 1.1

Luckily, that issue has all gone away now. In ASP.NET Core 1.1 you can now localise the DisplayAttribute in the same way you do your ValidationAttributes:

public class HomeViewModel  
{
    [Required(ErrorMessage = "The field is required")]
    [EmailAddress(ErrorMessage = "Not a valid email address")]
    [Display(Name = "Your email address")]
    public string Email { get; set; }
}

These values will be used by the IStringLocalizer as keys in the resx files for localisation (or as the localised value itself if a value can't be found in the .resx). No fuss, no muss.

Note: As an aside, I still really dislike this idea of using the English phrase as the key in the dictionary - it's too fragile for my liking, but at least you can easily fix that.

Summary

Localisation in ASP.NET Core still requires a lot of effort, but it's certainly easier than in the previous version of ASP.NET. With the update to the DisplayAttribute in ASP.NET Core 1.1, one of the annoying differences in behaviour was fixed, so that you localise it the same way you would your other DataAnnotations.

As with most of my blog posts, there's a small sample project demonstrating this on GitHub


Dominick Baier: Authorization is hard! Slides and Video from NDC Oslo 2017

A while ago I wrote a controversial article about the problems that can arise when mixing authentication and authorization systems – especially when using identity/access tokens to transmit authorization data – you can read it here.

In the meantime, Brock and I sat down to prototype a possible solution (or at least an improvement) to the problem and presented it to various customers and at conferences.

Also many people asked me for a more detailed version of my blog post – and finally there is now a recording of our talk from NDC – video here – and slides here. HTH!

 




Andrew Lock: When you use the Polly circuit-breaker, make sure you share your Policy instances!


This post is somewhat of a PSA about using the excellent open source Polly library for adding resiliency to your application. Recently, I was tasked with adding a circuit-breaker implementation to some code calling an external API, and I figured Polly would be perfect, especially as we already used it in our solution!

I hadn't used Polly directly in a little while, but the excellent design makes it easy to add retry handling, timeouts, or circuit-breaking to your application. Unfortunately, my initial implementation had one particular flaw, which meant that my circuit-breaker never actually worked!

In this post I'll outline the scenario I was working with, my initial implementation, the subsequent issues, and what I should have done!

tl;dr: Policy is thread safe, and for the circuit-breaker to work correctly, it must be shared so that you call Execute on the same Policy instance every time!

The scenario - dealing with a flakey external API

A common requirement when working with currencies is dealing with exchange rates. We have happily been using the Open Exchange Rates API to fetch a JSON list of exchange rates for a while now.

The existing implementation consists of three classes:

  • OpenExchangeRatesClient - Responsible for fetching the exchange rates from the API, and parsing the JSON into a strongly typed .NET object.
  • OpenExchangeRatesCache - We don't want to fetch exchange rates every time we need them, so this class caches the latest exchange rates for a day before calling the OpenExchangeRatesClient to get up-to-date rates.
  • FallbackExchangeRateProvider - If the call to fetch the latest rates using the OpenExchangeRatesClient fails, we fallback to a somewhat recent copy of the data, loaded from an embedded resource in the assembly.

All of these classes are registered as singletons with the IoC container, so there is only a single instance of each. This setup has been working fine for a while, but there was an issue where the Open Exchange Rates API went down, just as the local cache of exchange rates expired. The series of events was:

  1. A request was made to our internal API, which called a service that required exchange rates.
  2. The service called the OpenExchangeRatesCache which realised the current rates were out of date.
  3. The cache called the OpenExchangeRatesClient to fetch the latest rates.
  4. Unfortunately the service was down, and eventually caused a timeout (after 100 seconds!)
  5. At this point the cache used the FallbackExchangeRateProvider to use the stale rates for this single request.
  6. A separate request was made to our internal API - repeat steps 2-6!

An issue in the external dependency, the exchange rate API going down, was causing our internal services to take 100s to respond to requests, which in turn was causing other requests to time out. Effectively we had a cascading failure, even though we thought we had accounted for this by providing a fallback.

Note: I realise updating cached exchange rates should probably be a background task. This would stop requests failing if there are issues updating, but the general problem is common to many scenarios, especially if you're using micro-services.

Luckily, this outage didn't happen at a peak time, so by the time we came to investigate the issue, the problem had passed, and relatively few people were affected. However, it obviously flagged up a problem, so I set about trying to ensure this wouldn't happen again if the API had issues at a later date!

Fix 1 - Reduce the timeouts

The first fix was a relatively simple one. The OpenExchangeRatesClient was using an HttpClient to call the API and fetch the exchange rate data. This was instantiated in the constructor, and reused for the lifetime of the class. As the client was used as a singleton, the HttpClient was also a singleton (so we didn't have any of these issues).

public class OpenExchangeRatesClient  
{
    private readonly HttpClient _client;
    public OpenExchangeRatesClient(string apiUrl)
    {
        _client = new HttpClient
        {
            BaseAddress = new Uri(apiUrl),
        };
    }
}

The first fix I made was to set the Timeout property on the HttpClient. In the failure scenario, it was taking 100s to get back an error response. Why 100s? Because that's the default timeout for HttpClient!

Checking our metrics of previous calls to the service, I could see that prior to the failure, virtually all calls were taking approximately 0.25s. Based on that, a 100s timeout was clearly overkill! Setting the timeout to something more modest, but still conservative, say 5s, should help prevent the scenario happening again.

public class OpenExchangeRatesClient  
{
    private readonly HttpClient _client;
    public OpenExchangeRatesClient(string apiUrl)
    {
        _client = new HttpClient
        {
            BaseAddress = new Uri(apiUrl),
            Timeout = TimeSpan.FromSeconds(5),
        };
    }
}

Fix 2 - Add a circuit breaker

The second fix was to add a circuit-breaker implementation to the API calls. The Polly documentation has a great explanation of the circuit-breaker pattern, but I'll give a brief summary here.

Circuit-breakers in brief

Circuit-breakers make sense when calling a somewhat unreliable API. They use a fail-fast approach when a method has failed several times in a row. As an example, in my scenario, there was no point repeatedly calling the API when it hadn't worked several times in a row, and was very likely to fail. All we were doing was adding additional delays to the method calls, when it's pretty likely you're going to have to use the fallback anyway.

The circuit-breaker tracks the number of times an API call has failed. Once it crosses a threshold number of failures in a row, it doesn't even try to call the API for subsequent requests. Instead, it fails immediately, as though the API had failed.

After some timeout, the circuit-breaker will let one method call through to "test" the API and see if it succeeds. If it fails, it goes back to just failing immediately. If it succeeds then the circuit is closed again, and it will go back to calling the API for every request.


Circuit breaker state diagram taken from the Polly documentation
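
As an aside (this isn't something the original incident relied on), the policy object Polly returns also exposes the breaker's current state, which can be handy for logging or health checks; a small sketch:

// Sketch: inspecting the breaker state on the CircuitBreakerPolicy returned by Polly
var breaker = Policy
    .Handle<Exception>()
    .CircuitBreakerAsync(
        exceptionsAllowedBeforeBreaking: 2,
        durationOfBreak: TimeSpan.FromMinutes(1));

if (breaker.CircuitState == CircuitState.Open)
{
    // The breaker is open: calls will fail fast, so go straight to the fallback
}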

The circuit-breaker was a perfect fit for the failure scenario in our app, so I set about adding it to the OpenExchangeRatesClient.

Creating a circuit breaker policy

You can create a circuit-breaker Policy in Polly using the CircuitBreakerSyntax. As we're going to be making requests with the HttpClient, I used the async methods for setting up the policy and for calling the API:

var circuitBreaker = Policy  
    .Handle<Exception>()
    .CircuitBreakerAsync(
        exceptionsAllowedBeforeBreaking: 2, 
        durationOfBreak: TimeSpan.FromMinutes(1)
    );

var rates = await circuitBreaker  
    .ExecuteAsync(() => CallRatesApi());

This configuration creates a new circuit breaker policy, defines the number of consecutive exceptions to allow before marking the API as broken and opening the breaker, and the amount of time the breaker should stay open before moving to the half-open state.

Once you have a policy in place, circuitBreaker, you can call ExecuteAsync and pass in the method to execute. At runtime, if an exception occurs executing CallRatesApi() the circuit breaker will catch it, and keep track of how many exceptions it has raised to control the breaker's state.

Adding a fallback

When an exception occurs in the CallRatesApi() method, the breaker will catch it, but it will re-throw the exception. In my case, I wanted to catch those exceptions and use the FallbackExchangeRateProvider. I could have used a try-catch block, but I decided to stay in the Polly-spirit and use a Fallback policy.

A fallback policy is effectively a try catch block - it simply executes an alternative method if CallRatesApi() throws. You can then wrap the fallback policy around the breaker policy to combine the two. If the circuit breaker fails, the fallback will run instead:

var circuitBreaker = Policy  
    .Handle<Exception>()
    .CircuitBreakerAsync(
        exceptionsAllowedBeforeBreaking: 2, 
        durationOfBreak: TimeSpan.FromMinutes(1)
    );

var fallback = Policy  
    .Handle<Exception>()
    .FallbackAsync(() => GetFallbackRates())
    .WrapAsync(circuitBreaker);


var results = await fallback  
    .ExecuteAsync(() => CallRatesApi());

Putting it together - my failed attempt!

This all looked like it would work as best I could see, so I set about replacing the OpenExchangeRatesClient implementation, and testing it out.

Note: This isn't correct, don't copy it!

public class OpenExchangeRatesClient  
{
    private readonly HttpClient _client;
    public OpenExchangeRatesClient(string apiUrl)
    {
        _client = new HttpClient
        {
            BaseAddress = new Uri(apiUrl),
        };
    }

    public Task<ExchangeRates> GetLatestRates()
    {
        var circuitBreaker = Policy
            .Handle<Exception>()
            .CircuitBreakerAsync(
                exceptionsAllowedBeforeBreaking: 2,
                durationOfBreak: TimeSpan.FromMinutes(1)
            );

        var fallback = Policy
            .Handle<Exception>()
            .FallbackAsync(() => GetFallbackRates())
            .WrapAsync(circuitBreaker);


        return fallback
            .ExecuteAsync(() => CallRatesApi());
    }

    public Task<ExchangeRates> CallRatesApi()
    {
        //call the API, parse the results
    }

    public Task<ExchangeRates> GetFallbackRates()
    {
        // load the rates from the embedded file and parse them
    }
}

In theory, this is the flow I was aiming for when the API goes down:

  1. Call GetLatestRates() -> CallRatesApi() throws -> Uses Fallback
  2. Call GetLatestRates() -> CallRatesApi() throws -> Uses Fallback
  3. Call GetLatestRates() -> skips CallRatesApi() -> Uses Fallback
  4. Call GetLatestRates() -> skips CallRatesApi() -> Uses Fallback
  5. ... etc

What I actually saw was:

  1. Call GetLatestRates() -> CallRatesApi() throws -> Uses Fallback
  2. Call GetLatestRates() -> CallRatesApi() throws -> Uses Fallback
  3. Call GetLatestRates() -> CallRatesApi() throws -> Uses Fallback
  4. Call GetLatestRates() -> CallRatesApi() throws -> Uses Fallback
  5. ... etc

It was as though the circuit breaker wasn't there at all! No matter how many times the CallRatesApi() method threw, the circuit was never breaking.

Can you see what I did wrong?

Using circuit breakers properly

Every time GetLatestRates() was called, I was creating a new circuit breaker (and fallback) policy, and then calling ExecuteAsync on that brand new instance!

The circuit breaker, by its nature, has state that must be persisted between calls (the number of exceptions that have previously happened, the open/closed state of the breaker, etc.). By creating new Policy objects inside the GetLatestRates() method, I was effectively resetting the policy back to its initial state, hence why nothing was working!

The answer is simple - make sure the Policy persists between calls to GetLatestRates() so that its state persists. The Policy is thread safe, so there are no issues to worry about there either. As the client is implemented in our app as a singleton, I simply moved the policy configuration to the class constructor, and everything proceeded to work as expected!

public class OpenExchangeRatesClient  
{
    private readonly HttpClient _client;
    private readonly Policy _policy;
    public OpenExchangeRatesClient(string apiUrl)
    {
        _client = new HttpClient
        {
            BaseAddress = new Uri(apiUrl),
        };

        var circuitBreaker = Policy
            .Handle<Exception>()
            .CircuitBreakerAsync(
                exceptionsAllowedBeforeBreaking: 2,
                durationOfBreak: TimeSpan.FromMinutes(1)
            );

        _policy = Policy
            .Handle<Exception>()
            .FallbackAsync(() => GetFallbackRates())
            .WrapAsync(circuitBreaker);
    }

    public Task<ExchangeRates> GetLatestRates()
    {
        return _policy
            .ExecuteAsync(() => CallRatesApi());
    }

    public Task<ExchangeRates> CallRatesApi()
    {
        //call the API, parse the results
    }

    public Task<ExchangeRates> GetFallbackRates()
    {
        // load the rates from the embedded file and parse them
    }
}

And that's all it takes! It works brilliantly when you actually use it properly 😉

Summary

This post ended up a lot longer than I intended, as it was a bit of a post-incident brain-dump, so apologies for that! It serves as somewhat of a cautionary tale about having blinkers on when coding something. When I initially implemented the fix, I was so caught up in how I was solving the problem that I completely overlooked this simple, but crucial, difference in how policies can be used.

I'm not entirely sure if in general it's best to use shared policies, or if it's better to create and discard policies as I did originally. Obviously, the latter doesn't work for the circuit breaker, but what about Retry or WaitAndRetry? Also, creating a new policy each time is probably more "allocate-y", but is it faster due to not having to be thread safe?

I don't know the answer, but personally, and based on this episode, I'm inclined to go with shared policies everywhere. If you know otherwise, do let me know in the comments, thanks!


Damien Bowden: Using Protobuf Media Formatters with ASP.NET Core

This article shows how to use Protobuf with an ASP.NET Core MVC application. The API uses the WebApiContrib.Core.Formatter.Protobuf NuGet package to add support for Protobuf. This package uses the protobuf-net NuGet package from Marc Gravell, which makes it really easy to use a fast serializer and deserializer for your APIs.

Code: https://github.com/damienbod/AspNetCoreWebApiContribProtobufSample

History

2017-08-19 Updated to ASP.NET Core 2.0, WebApiContrib.Core.Formatter.Protobuf 2.0

Setting up the ASP.NET Core MVC API

To use Protobuf with ASP.NET Core, the WebApiContrib.Core.Formatter.Protobuf NuGet package can be used in your project. You can add this using the NuGet Package Manager in Visual Studio.

Or you can add it directly in your project file.

<PackageReference Include="WebApiContrib.Core.Formatter.Protobuf" Version="2.0.0" />

Now the formatters can be added in the Startup file.

public void ConfigureServices(IServiceCollection services)
{
	services.AddMvc()
		.AddProtobufFormatters();
}

A model now needs to be defined. The protobuf-net attributes are used to define the model class.

using ProtoBuf;

namespace Model
{
    [ProtoContract]
    public class Table
    {
        [ProtoMember(1)]
        public string Name { get; set; }

        [ProtoMember(2)]
        public string Description { get; set; }


        [ProtoMember(3)]
        public string Dimensions { get; set; }
    }
}

The ASP.NET Core MVC API can then be used with the Table class.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Model;

namespace AspNetCoreWebApiContribProtobufSample.Controllers
{
    [Route("api/[controller]")]
    public class TablesController : Controller
    {
        // GET api/tables
        [HttpGet]
        public IActionResult Get()
        {
            List<Table> tables = new List<Table>
            {
                new Table{Name= "jim", Dimensions="190x80x90", Description="top of the range from Migro"},
                new Table{Name= "jim large", Dimensions="220x100x90", Description="top of the range from Migro"}
            };

            return Ok(tables);
        }

        // GET api/values/5
        [HttpGet("{id}")]
        public IActionResult Get(int id)
        {
            var table = new Table { Name = "jim", Dimensions = "190x80x90", Description = "top of the range from Migro" };
            return Ok(table);
        }

        // POST api/values
        [HttpPost]
        public IActionResult Post([FromBody]Table value)
        {
            var got = value;
            return Created("api/tables", got);
        }
    }
}

Creating a simple Protobuf HttpClient

An HttpClient using the same Table class with the protobuf-net definitions can be used to access the API and request the data with the “application/x-protobuf” Accept header.

static async System.Threading.Tasks.Task<Table[]> CallServerAsync()
{
	var client = new HttpClient();

	var request = new HttpRequestMessage(HttpMethod.Get, "http://localhost:31004/api/tables");
	request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/x-protobuf"));
	var result = await client.SendAsync(request);
	var tables = ProtoBuf.Serializer.Deserialize<Table[]>(await result.Content.ReadAsStreamAsync());
	return tables;
}

The data is returned in the response using Protobuf serialization.

If you want to post some data using Protobuf, you can serialize the data to Protobuf and post it to the server using the HttpClient. This example uses “application/x-protobuf”.

static async System.Threading.Tasks.Task<Table> PostStreamDataToServerAsync()
{
	HttpClient client = new HttpClient();
	client.DefaultRequestHeaders
		  .Accept
		  .Add(new MediaTypeWithQualityHeaderValue("application/x-protobuf"));

	HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Post,
		"http://localhost:31004/api/tables");

	MemoryStream stream = new MemoryStream();
	ProtoBuf.Serializer.Serialize<Table>(stream, new Table
	{
		Name = "jim",
		Dimensions = "190x80x90",
		Description = "top of the range from Migro"
	});

	request.Content = new ByteArrayContent(stream.ToArray());

	// HTTP POST with Protobuf Request Body
	var responseForPost = await client.SendAsync(request);

	var resultData = ProtoBuf.Serializer.Deserialize<Table>(await responseForPost.Content.ReadAsStreamAsync());
	return resultData;
}

Links:

https://www.nuget.org/packages/WebApiContrib.Core.Formatter.Protobuf/

https://github.com/mgravell/protobuf-net



Anuraj Parameswaran: ASP.NET Core No authentication handler is configured to handle the scheme Cookies

This post is about ASP.NET Core authentication, which throws an InvalidOperationException - No authentication handler is configured to handle the scheme Cookies. In ASP.NET Core 1.x, the runtime will throw this exception when you are using ASP.NET Core cookie authentication. This can be fixed by setting options.AutomaticChallenge = true in the Configure method.
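
For context, a minimal sketch of that fix in an ASP.NET Core 1.x Startup.Configure method (assuming the standard cookie middleware) would be along these lines:

// ASP.NET Core 1.x sketch: let the cookie middleware issue challenges automatically
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationScheme = "Cookies",
    AutomaticAuthenticate = true,
    AutomaticChallenge = true
});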


Andrew Lock: Controller activation and dependency injection in ASP.NET Core MVC


In my last post about disposing IDisposables in ASP.NET Core, Mark Rendle pointed out that MVC controllers are also disposed at the end of a request. On first glance, this may seem obvious given that scoped resources are disposed at the end of a request, but MVC controllers are actually handled in a slightly different way to most services.

In this post, I'll describe how controllers are created in ASP.NET Core MVC using the IControllerActivator, the options available out of the box, and their differences when it comes to dependency injection.

The default IControllerActivator

In ASP.NET Core, when a request is received by the MvcMiddleware, routing - either conventional or attribute routing - is used to select the controller and action method to execute. In order to actually execute the action, the MvcMiddleware must create an instance of the selected controller.

The process of creating the controller depends on a number of different provider and factory classes, culminating in an instance of the IControllerActivator. This interface defines just two methods:

public interface IControllerActivator  
{
    object Create(ControllerContext context);
    void Release(ControllerContext context, object controller);
}

As you can see, the IControllerActivator.Create method is passed a ControllerContext which defines the controller to be created. How the controller is created depends on the particular implementation.

Out of the box, ASP.NET Core uses the DefaultControllerActivator, which uses the TypeActivatorCache to create the controller. The TypeActivatorCache creates instances of objects by calling the constructor of the Type, and attempting to resolve the required constructor argument dependencies from the DI container.

This is an important point. The DefaultControllerActivator doesn't attempt to resolve the Controller instance from the DI container itself, only the Controller's dependencies.
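
Conceptually (this is an analogy, not the framework's actual implementation), the behaviour is similar to using ActivatorUtilities.CreateInstance, which constructs the type directly and only pulls its constructor arguments from the container:

// Conceptual sketch: roughly what the default activator does.
// The HomeController type itself is not resolved from DI,
// only its constructor dependencies are.
var controller = ActivatorUtilities.CreateInstance(
    httpContext.RequestServices,
    typeof(HomeController));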

Example of the default controller activator

To demonstrate this behaviour, I've created a simple MVC application, consisting of a single service and a single controller. The service has a Name property that is set in the constructor. By default, it will have the value "default".

public class TestService  
{
    public TestService(string name = "default")
    {
        Name = name;
    }

    public string Name { get; }
}

The HomeController for the app takes a dependency on the TestService, and returns the Name property:

public class HomeController : Controller  
{
    private readonly TestService _testService;
    public HomeController(TestService testService)
    {
        _testService = testService;
    }

    public string Index()
    {
        return "TestService.Name: " + _testService.Name;
    }
}

The final piece of the puzzle is the Startup file. Here I register the TestService as a scoped service in the DI container, and set up the MvcMiddleware and services:

public class Startup  
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        services.AddScoped<TestService>();
        services.AddTransient(ctx =>
            new HomeController(new TestService("Non-default value")));
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseMvcWithDefaultRoute();
    }
}

You'll also notice I've defined a factory method for creating an instance of the HomeController. This registers the HomeController type in the DI container, injecting an instance of the TestService with a custom Name property.

So what do you get if you run the app?

(Screenshot: the browser showing "TestService.Name: default")

As you can see, the TestService.Name property has the default value, indicating the TestService instance has been sourced directly from the DI container. The factory method we registered to create the HomeController has clearly been ignored.

This makes sense when you remember that the DefaultControllerActivator is creating the controller. It doesn't request the HomeController from the DI container, it just requests its constructor dependencies.

Most of the time, using the DefaultControllerActivator will be fine, but sometimes you may want to create your controllers by using the DI container directly. This is especially true when you are using third-party containers with features such as interceptors or decorators.

Luckily, the MVC framework includes an implementation of IControllerActivator to do just this, and even provides a handy extension method to enable it.

The ServiceBasedControllerActivator

As you've seen, the DefaultControllerActivator uses the TypeActivatorCache to create controllers, but MVC includes an alternative implementation, the ServiceBasedControllerActivator, which can be used to directly obtain controllers from the DI container. The implementation itself is trivial:

public class ServiceBasedControllerActivator : IControllerActivator  
{
    public object Create(ControllerContext actionContext)
    {
        var controllerType = actionContext.ActionDescriptor.ControllerTypeInfo.AsType();

        return actionContext.HttpContext.RequestServices.GetRequiredService(controllerType);
    }

    public virtual void Release(ControllerContext context, object controller)
    {
    }
}

You can configure the DI-based activator with the AddControllersAsServices() extension method, when you add the MVC services to your application:

public class Startup  
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc()
                .AddControllersAsServices();

        services.AddScoped<TestService>();
        services.AddTransient(ctx =>
            new HomeController(new TestService("Non-default value")));
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseMvcWithDefaultRoute();
    }
}

With this in place, hitting the home page will create a controller by loading it from the DI container. As we've registered a factory method for the HomeController, our custom TestService configuration will be honoured, and the alternative Name will be used:

(Screenshot: the browser showing "TestService.Name: Non-default value")

The AddControllersAsServices method does two things - it registers all of the Controllers in your application with the DI container (if they haven't already been registered) and replaces the IControllerActivator registration with the ServiceBasedControllerActivator:

public static IMvcBuilder AddControllersAsServices(this IMvcBuilder builder)  
{
    var feature = new ControllerFeature();
    builder.PartManager.PopulateFeature(feature);

    foreach (var controller in feature.Controllers.Select(c => c.AsType()))
    {
        builder.Services.TryAddTransient(controller, controller);
    }

    builder.Services.Replace(ServiceDescriptor.Transient<IControllerActivator, ServiceBasedControllerActivator>());

    return builder;
}

If you need to do something esoteric, you can always implement IControllerActivator yourself, but I can't think of any reason that these two implementations wouldn't satisfy all your requirements!
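
If you did want to roll your own, a hypothetical activator that logs each controller it creates and otherwise delegates to the DI container might look something like this:

public class LoggingControllerActivator : IControllerActivator
{
    public object Create(ControllerContext context)
    {
        // Log the controller type, then resolve it from the request's DI container
        var controllerType = context.ActionDescriptor.ControllerTypeInfo.AsType();
        Console.WriteLine($"Creating controller {controllerType.Name}");
        return context.HttpContext.RequestServices.GetRequiredService(controllerType);
    }

    public void Release(ControllerContext context, object controller)
    {
        // Nothing to do here - the container owns the controller's lifetime
    }
}

You would still need to register your controllers with the container and swap the IControllerActivator registration, in the same way AddControllersAsServices does above.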

Summary

  • By default, the DefaultControllerActivator is configured as the IControllerActivator for ASP.NET Core MVC.
  • The DefaultControllerActivator uses the TypeActivatorCache to create controllers. This creates an instance of the controller, and loads constructor arguments from the DI container.
  • You can use an alternative activator, the ServiceBasedControllerActivator, which loads controllers directly from the DI container. You can configure this activator by using the AddControllersAsServices() extension method on the MvcBuilder instance in Startup.ConfigureServices.


Anuraj Parameswaran: How to Deploy Multiple Apps on Azure WebApps

This post is about deploying multiple applications on an Azure Web App. App Service Web Apps is a fully managed compute platform that is optimized for hosting websites and web applications. This platform-as-a-service (PaaS) offering of Microsoft Azure lets you focus on your business logic while Azure takes care of the infrastructure to run and scale your apps.


Anuraj Parameswaran: Develop and Run Azure Functions locally

This post is about developing, running and debugging azure functions locally. Trigger on events in Azure and debug C# and JavaScript functions. Azure functions is a new service offered by Microsoft. Azure Functions is an event driven, compute-on-demand experience that extends the existing Azure application platform with capabilities to implement code triggered by events occurring in Azure or third party service as well as on-premises systems.


Damien Bowden: Angular OIDC OAuth2 client with Google Identity Platform

This article shows how an Angular client could implement a login for a SPA application using Google Identity Platform OpenID. The Angular application uses the npm package angular-auth-oidc-client to implement the OpenID Connect Implicit Flow to connect with the google identity platform.

Code: https://github.com/damienbod/angular-auth-oidc-sample-google-openid

History

2017-07-09 Updated to version 1.1.4, new configuration

Setting up Google Identity Platform

The Google Identity Platform provides good documentation on how to set up its OpenID Connect implementation.

You need to log in to Google using a Gmail account.
https://accounts.google.com

Now open the OpenID Connect google documentation page

https://developers.google.com/identity/protocols/OpenIDConnect

Open the credentials page provided as a link.

https://console.developers.google.com/apis/credentials

Create new credentials for your application, select OAuth Client ID in the drop down:

Select a web application and configure the parameters to match your client application URLs.

Implementing the Angular OpenID Connect client

The client application is implemented using ASP.NET Core and Angular.

The npm package angular-auth-oidc-client is used to connect to the OpenID server. The package can be added to the package.json file in the dependencies.

"dependencies": {
    ...
    "angular-auth-oidc-client": "1.1.4"
},

Now the AuthModule, OidcSecurityService and AuthConfiguration can be imported. AuthModule.forRoot() is added to the root module imports, the OidcSecurityService is added to the providers, and the AuthConfiguration is the configuration class which is used to set up the OpenID Connect Implicit Flow.

import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { BrowserModule } from '@angular/platform-browser';

import { AppComponent } from './app.component';
import { Configuration } from './app.constants';
import { routing } from './app.routes';
import { HttpModule, JsonpModule } from '@angular/http';
import { ForbiddenComponent } from './forbidden/forbidden.component';
import { HomeComponent } from './home/home.component';
import { UnauthorizedComponent } from './unauthorized/unauthorized.component';

import { AuthModule, OidcSecurityService, OpenIDImplicitFlowConfiguration } from 'angular-auth-oidc-client';

@NgModule({
    imports: [
        BrowserModule,
        FormsModule,
        routing,
        HttpModule,
        JsonpModule,
        AuthModule.forRoot(),
    ],
    declarations: [
        AppComponent,
        ForbiddenComponent,
        HomeComponent,
        UnauthorizedComponent
    ],
    providers: [
        OidcSecurityService,
        Configuration
    ],
    bootstrap:    [AppComponent],
})

The AuthConfiguration class is used to configure the module.

stsServer
This is the URL where the STS server is located. We use https://accounts.google.com in this example.

redirect_url
This is the redirect_url which was configured on the google client ID on the server.

client_id
The client_id must match the Client ID for Web application which was configured on the google server.

response_type
This must be ‘id_token token’ or ‘id_token’. If you want to use the user service, or access data using APIs, you must use the ‘id_token token’ configuration. This is the OpenID Connect Implicit Flow. The possible values are defined in the well known configuration URL from the OpenID Connect server.

scope
Scopes which are used by the client. The openid scope must be included: ‘openid email profile’

post_logout_redirect_uri
URL used after a server logout when using the end session API. This is not supported by Google OpenID.

start_checksession
Checks the session using OpenID session management. Not supported by Google OpenID.

silent_renew
Renews the client tokens once the id_token expires.

startup_route
Angular route after a successful login.

forbidden_route
Angular route to redirect to when the server returns HTTP 403 (forbidden).

unauthorized_route
Angular route to redirect to when the server returns HTTP 401 (unauthorized).

log_console_warning_active
Logs all module warnings to the console.

log_console_debug_active
Logs all module debug messages to the console.


export class AppModule {
    constructor(public oidcSecurityService: OidcSecurityService) {

        let openIDImplicitFlowConfiguration = new OpenIDImplicitFlowConfiguration();
        openIDImplicitFlowConfiguration.stsServer = 'https://accounts.google.com';
        openIDImplicitFlowConfiguration.redirect_url = 'https://localhost:44386';
        openIDImplicitFlowConfiguration.client_id = '188968487735-b1hh7k87nkkh6vv84548sinju2kpr7gn.apps.googleusercontent.com';
        openIDImplicitFlowConfiguration.response_type = 'id_token token';
        openIDImplicitFlowConfiguration.scope = 'openid email profile';
        openIDImplicitFlowConfiguration.post_logout_redirect_uri = 'https://localhost:44386/Unauthorized';
        openIDImplicitFlowConfiguration.startup_route = '/home';
        openIDImplicitFlowConfiguration.forbidden_route = '/Forbidden';
        openIDImplicitFlowConfiguration.unauthorized_route = '/Unauthorized';
        openIDImplicitFlowConfiguration.log_console_warning_active = true;
        openIDImplicitFlowConfiguration.log_console_debug_active = true;
        openIDImplicitFlowConfiguration.max_id_token_iat_offset_allowed_in_seconds = 10;
        openIDImplicitFlowConfiguration.override_well_known_configuration = true;
        openIDImplicitFlowConfiguration.override_well_known_configuration_url = 'https://localhost:44386/wellknownconfiguration.json';

        // this.oidcSecurityService.setStorage(localStorage);
        this.oidcSecurityService.setupModule(openIDImplicitFlowConfiguration);
    }
}

Google OpenID does not support the .well-known/openid-configuration API as defined by OpenID Connect. Google blocks it with a CORS restriction, so it cannot be used from a browser application. As a workaround, the well known configuration can be configured locally when using angular-auth-oidc-client. The Google OpenID configuration can be downloaded using the following URL:

https://accounts.google.com/.well-known/openid-configuration

The JSON file can then be downloaded, saved locally on your server, and configured in the AuthConfiguration class using the override_well_known_configuration_url property.

this.authConfiguration.override_well_known_configuration = true;
this.authConfiguration.override_well_known_configuration_url = 'https://localhost:44386/wellknownconfiguration.json';

The following JSON is the actual Google well known configuration. What's really interesting is that the end session endpoint is not supported, which I find strange.
It's also interesting to see that response_types_supported lists “token id_token”, which is not a supported value; this should be “id_token token”.

See: http://openid.net/specs/openid-connect-core-1_0.html

{
  "issuer": "https://accounts.google.com",
  "authorization_endpoint": "https://accounts.google.com/o/oauth2/v2/auth",
  "token_endpoint": "https://www.googleapis.com/oauth2/v4/token",
  "userinfo_endpoint": "https://www.googleapis.com/oauth2/v3/userinfo",
  "revocation_endpoint": "https://accounts.google.com/o/oauth2/revoke",
  "jwks_uri": "https://www.googleapis.com/oauth2/v3/certs",
  "response_types_supported": [
    "code",
    "token",
    "id_token",
    "code token",
    "code id_token",
    "token id_token",
    "code token id_token",
    "none"
  ],
  "subject_types_supported": [
    "public"
  ],
  "id_token_signing_alg_values_supported": [
    "RS256"
  ],
  "scopes_supported": [
    "openid",
    "email",
    "profile"
  ],
  "token_endpoint_auth_methods_supported": [
    "client_secret_post",
    "client_secret_basic"
  ],
  "claims_supported": [
    "aud",
    "email",
    "email_verified",
    "exp",
    "family_name",
    "given_name",
    "iat",
    "iss",
    "locale",
    "name",
    "picture",
    "sub"
  ],
  "code_challenge_methods_supported": [
    "plain",
    "S256"
  ]
}

The AppComponent uses the authorize and authorizedCallback functions from the OidcSecurityService provider.

import { Component, OnInit } from '@angular/core';
import { Router } from '@angular/router';
import { Configuration } from './app.constants';
import { OidcSecurityService } from 'angular-auth-oidc-client';
import { ForbiddenComponent } from './forbidden/forbidden.component';
import { HomeComponent } from './home/home.component';
import { UnauthorizedComponent } from './unauthorized/unauthorized.component';


import './app.component.css';

@Component({
    selector: 'my-app',
    templateUrl: 'app.component.html'
})

export class AppComponent implements OnInit {

    constructor(public securityService: OidcSecurityService) {
    }

    ngOnInit() {
        if (window.location.hash) {
            this.securityService.authorizedCallback();
        }
    }

    login() {
        console.log('start login');
        this.securityService.authorize();
    }

    refreshSession() {
        console.log('start refreshSession');
        this.securityService.authorize();
    }

    logout() {
        console.log('start logoff');
        this.securityService.logoff();
    }
}

Running the application

Start the application using IIS Express in Visual Studio 2017. This starts with https://localhost:44386, which is configured in the launch settings file. If you use a different URL, you need to change this in the client application and also in the server's client credentials configuration.

Then log in with your Gmail account.

You are then redirected back to the SPA.

Links:

https://www.npmjs.com/package/angular-auth-oidc-client

https://developers.google.com/identity/protocols/OpenIDConnect



Damien Bowden: angular-auth-oidc-client Release, an OpenID Implicit Flow client in Angular

I have been blogging and writing code for Angular and OpenID Connect since Nov 1, 2015. Now, after all this time, I have created my first npm package for Angular: angular-auth-oidc-client, which makes the Angular OpenID Connect client easier to use. It is now available on npm.

npm package: https://www.npmjs.com/package/angular-auth-oidc-client

github code: https://github.com/damienbod/angular-auth-oidc-client

issues: https://github.com/damienbod/angular-auth-oidc-client/issues

Using the npm package: see the readme

Samples: https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow/tree/npm-lib-test/src/AngularClient
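
As a quick sketch of what usage can look like (the readme and samples above are authoritative; the import below simply mirrors how the service is used elsewhere in these posts), the package is installed with npm and the OidcSecurityService is injected into a component:

// npm install angular-auth-oidc-client --save
import { Component } from '@angular/core';
import { OidcSecurityService } from 'angular-auth-oidc-client';

@Component({
    selector: 'my-app',
    template: `<button (click)="login()">Login</button>`
})
export class AppComponent {

    constructor(public securityService: OidcSecurityService) {
    }

    login() {
        // starts the OpenID Connect Implicit Flow authorization request
        this.securityService.authorize();
    }
}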

OpenID Certification

This library is certified by the OpenID Foundation (Implicit RP).

Features:

Notes:

FabianGosebrink and Roberto Simonetti have decided to help and further develop this npm package, for which I’m very grateful. Anyone wishing to get involved, please do: create some issues and pull requests. Help is always most welcome.

The next step is to do the OpenID Relying Parties certification.



Damien Bowden: OpenID Connect Session Management using an Angular application and IdentityServer4

The article shows how OpenID Connect Session Management can be implemented in an Angular application. The OpenID Connect Session Management 1.0 specification provides a way of monitoring the user session on the server using iframes. IdentityServer4 implements the server side of the specification. Note that this only monitors the server session; it does not monitor the lifecycle of the tokens used in the browser application and has nothing to do with the OpenID tokens used by the SPA.

Code: https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow

Code: Angular auth module

Other posts in this series:

The OidcSecurityCheckSession class implements the session management from the specification. The init function creates an iframe and adds it to the window document in the DOM. The iframe uses the authWellKnownEndpoints.check_session_iframe value, which is the connect/checksession endpoint obtained from the .well-known/openid-configuration service.

The init function also adds the event for the message, which is specified in the OpenID Connect Session Management documentation.

init() {
	this.sessionIframe = window.document.createElement('iframe');
	this.oidcSecurityCommon.logDebug(this.sessionIframe);
	this.sessionIframe.style.display = 'none';
	this.sessionIframe.src = this.authWellKnownEndpoints.check_session_iframe;

	window.document.body.appendChild(this.sessionIframe);
	this.iframeMessageEvent = this.messageHandler.bind(this);
	window.addEventListener('message', this.iframeMessageEvent, false);

	return Observable.create((observer: Observer<any>) => {
		this.sessionIframe.onload = () => {
			observer.next(this);
			observer.complete();
		}
	});
}

The pollServerSession function posts a message to the iframe every 3 seconds, which checks whether the session on the server has changed. The session_state is the value returned in the HTTP callback from a successful authorization.

pollServerSession(session_state: any, clientId: any) {
	let source = Observable.timer(3000, 3000)
		.timeInterval()
		.pluck('interval')
		.take(10000);

	let subscription = source.subscribe(() => {
			this.oidcSecurityCommon.logDebug(this.sessionIframe);
			this.sessionIframe.contentWindow.postMessage(clientId + ' ' + session_state, this.authConfiguration.stsServer);
		},
		(err: any) => {
			this.oidcSecurityCommon.logError('pollServerSession error: ' + err);
		},
		() => {
			this.oidcSecurityCommon.logDebug('checksession pollServerSession completed');
		});
}

The messageHandler handles the callback from the iframe. If the server session has changed, the output onCheckSessionChanged event is triggered.

private messageHandler(e: any) {
	if (e.origin === this.authConfiguration.stsServer &&
		e.source === this.sessionIframe.contentWindow
	) {
		if (e.data === 'error') {
			this.oidcSecurityCommon.logWarning('error from checksession messageHandler');
		} else if (e.data === 'changed') {
			this.onCheckSessionChanged.emit();
		} else {
			this.oidcSecurityCommon.logDebug(e.data + ' from checksession messageHandler');
		}
	}
}

The onCheckSessionChanged is a public EventEmitter output for this provider.

@Output() onCheckSessionChanged: EventEmitter<any> = new EventEmitter<any>(true);

The OidcSecurityService provider subscribes to the onCheckSessionChanged event and uses its onCheckSessionChanged function to handle this event.

this.oidcSecurityCheckSession.onCheckSessionChanged.subscribe(() => { this.onCheckSessionChanged(); });

After a successful login, and if the tokens are valid, the client application checks whether checksession should be used, then calls the init method and subscribes to it. When ready, it uses the pollServerSession function to start the monitoring.

if (this.authConfiguration.start_checksession) {
  this.oidcSecurityCheckSession.init().subscribe(() => {
    this.oidcSecurityCheckSession.pollServerSession(
      result.session_state,
      this.authConfiguration.client_id
    );
  });
}

The onCheckSessionChanged function sets a public boolean which can be used to implement the required application logic when the server session has changed.

private onCheckSessionChanged() {
  this.oidcSecurityCommon.logDebug('onCheckSessionChanged');
  this.checkSessionChanged = true;
}

In this demo, the navigation bar allows the Angular application to refresh the session if the server session has changed.

<li>
  <a class="navigationLinkButton" *ngIf="securityService.checkSessionChanged" (click)="refreshSession()">Refresh Session</a>
</li>

When the application is started, the unchanged message is returned.

Then open the server application in a tab in the same browser session, and log out.

The client application notices that the server session has changed and can react as required.

Links:

http://openid.net/specs/openid-connect-session-1_0-ID4.html

http://docs.identityserver.io/en/release/



Anuraj Parameswaran: Connecting to Azure Cosmos DB emulator from RoboMongo

This post is about connecting to the Azure Cosmos DB emulator from RoboMongo. Azure Cosmos DB is Microsoft’s globally distributed, multi-model database and is a superset of Azure DocumentDB. Due to some challenges, one of our teams decided to try some new NoSQL databases, and one of the options was DocumentDB. I found it quite a good option, since it supports the MongoDB protocol, so an existing app can work without much change. So I decided to explore it. As a first step I downloaded the DocumentDB emulator, which is now the Azure Cosmos DB emulator. After installing and starting the emulator, it opens the Data Explorer web page (https://localhost:8081/_explorer/index.html), which helps you explore the documents inside the database. Then I tried to connect to it with RoboMongo (a free MongoDB client, which can be downloaded from here). But it was not working and I was getting some errors. I then spent some time looking for similar issues or blog posts on how to connect from RoboMongo to the DocumentDB emulator, but I couldn’t find anything useful. After spending almost a day, I finally figured out the solution. Here are the steps.


Dominick Baier: Techorama 2017

Again Techorama was an awesome conference – kudos to the organizers!

Seth and Channel9 recorded my talk and also did an interview – so if you couldn’t be there in person, there are some updates about IdentityServer4 and identity in general.


Filed under: .NET Security, ASP.NET Core, IdentityServer, OAuth, OpenID Connect, WebAPI


Ben Foster: Applying IP Address restrictions in AWS API Gateway

Recently I've been exploring the features of the AWS API Gateway to see if it's a viable routing solution for some of our microservices hosted in ECS.

One of these services is a new onboarding API that we wish to make available to a trusted third party. To keep the integration as simple as possible we opted for API key based authentication.

In addition to supporting API Key authentication, API Gateway also allows you to configure plans with usage policies, which met our second requirement, to provide rate limits on this API.

As an additional level of security, we decided to whitelist the IP Addresses that could hit the API. The way you configure this is not quite what I expected since it's not a setting directly within API Gateway but instead done using IAM policies.

Below is an example API within API Gateway. I want to apply an IP Address restriction to the webhooks resource:

The first step is to configure your resource Authorization settings to use IAM. Select the resource method (in my case, ANY) and then AWS_IAM in the Authorization select list:

Next go to IAM and create a new Policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "execute-api:Invoke"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "xxx.xx.xx.xx/32"
                }
            },
            "Resource": "arn:aws:execute-api:*:*:*"
        }
    ]
}

Note that this policy allows invocation of all resources within all APIs in API Gateway from the specified IP address. You'll want to restrict it to a specific API or resource, using the format:

arn:aws:execute-api:region:account-id:api-id/stage/METHOD_HTTP_VERB/Resource-path
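
For example, a policy restricted to the webhooks resource (any HTTP method) in the prod stage of a single API might use a Resource value like the following; the region, account id and API id here are placeholders, not values from the original post:

arn:aws:execute-api:eu-west-1:123456789012:abcdef1234/prod/*/webhooks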

It was my assumption that I would attach this policy to my API Gateway role and, hey presto, I'd have my IP restriction in place. However, the policy is instead applied to a user, who then needs to sign the request using their access keys.

This can be tested using Postman:
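
Outside of Postman, the request can also be signed programmatically. Below is a minimal sketch using the aws4 npm package; the package choice, endpoint host and credential handling are my own assumptions and not part of the original post:

// npm install aws4
import * as https from 'https';
import * as aws4 from 'aws4';

// Hypothetical API Gateway endpoint and resource
const opts: any = {
    host: 'abcdef1234.execute-api.eu-west-1.amazonaws.com',
    path: '/prod/webhooks',
    method: 'GET',
    service: 'execute-api',
    region: 'eu-west-1'
};

// aws4.sign adds the Authorization and X-Amz-Date headers in place,
// using the access keys of the IAM user the policy is attached to
aws4.sign(opts, {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
});

https.request(opts, res => console.log(res.statusCode)).end();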

With this done you should now be able to test your IP address restrictions. One thing I did notice is that policy changes do not seem to take effect immediately - instead I had to disable and re-enable IAM authorization on the resource after changing my policy.

Final thoughts

AWS API Gateway is a great service, but I find it odd that it doesn't support what I would class as a standard feature of API gateways. Given that the API I was testing is only going to be used by a single client, creating an IAM user isn't the end of the world; however, I wouldn't want to do this for APIs with a large number of clients.

Finally, in order to make use of usage plans you need to require an API key. This means that to achieve both IP restrictions and rate limiting, clients will need to send two authentication tokens, which isn't an ideal integration experience.

When I first started my investigation it was based on achieving the following architecture:

Unfortunately, running API Gateway in front of ELB still requires your load balancers to be publicly accessible, which makes the security features void if a client can figure out your ELB address. It seems API Gateway is geared more towards Lambda than ELB, so it looks like we'll need to consider other options for now.


Dominick Baier: Financial APIs and IdentityServer

Right now there is quite some movement in the financial sector towards APIs and “collaboration” scenarios. The OpenID Foundation started a dedicated working group on securing Financial APIs (FAPIs) and the upcoming Revised Payment Service EU Directive (PSD2 – official document, vendor-based article) will bring quite some change to how technology is used at banks as well as to banking itself.

Googling for PSD2 shows quite a lot of ads and sponsored search results, which tells me that there is money to be made (pun intended).

We have a couple of customers that asked me about FAPIs and how IdentityServer can help them in this new world. In short, the answer is that both FAPIs in the OIDF sense and PSD2 are based on tokens and are either inspired by OpenID Connect/OAuth 2 or even tightly coupled with them. So moving to these technologies is definitely the first step.

The purpose of the OIDF “Financial API Part 1: Read-only API security profile” is to select a subset of the possible OpenID Connect options for clients and providers that have suitable security for the financial sector. Let’s have a look at some of those for OIDC providers (edited):

  • shall support both public and confidential clients;
  • shall authenticate the confidential client at the Token Endpoint using one of the following methods:
    • TLS mutual authentication [TLSM];
    • JWS Client Assertion using the client_secret or a private key as specified in section 9 of [OIDC];
  • shall require a key of size 2048 bits or larger if RSA algorithms are used for the client authentication;
  • shall require a key of size 160 bits or larger if elliptic curve algorithms are used for the client authentication;
  • shall support PKCE [RFC7636]
  • shall require Redirect URIs to be pre-registered;
  • shall require the redirect_uri parameter in the authorization request;
  • shall require the value of redirect_uri to exactly match one of the pre-registered redirect URIs;
  • shall require user authentication at LoA 2 as defined in [X.1254] or more;
  • shall require explicit consent by the user to authorize the requested scope if it has not been previously authorized;
  • shall return the token response as defined in 4.1.4 of [RFC6749];
  • shall return the list of allowed scopes with the issued access token;
  • shall provide opaque non-guessable access tokens with a minimum of 128 bits as defined in section 5.1.4.2.2 of [RFC6819].
  • should provide a mechanism for the end-user to revoke access tokens and refresh tokens granted to a Client as in 16.18 of [OIDC].
  • shall support the authentication request as in Section 3.1.2.1 of [OIDC];
  • shall issue an ID Token in the token response when openid was included in the requested scope as in Section 3.1.3.3 of [OIDC] with its sub value corresponding to the authenticated user and optional acr value in ID Token.

So to summarize, these are mostly best practices for implementing OIDC and OAuth 2 – just formalized. I am sure there will also be a certification process around this at some point.

Interesting to note is the requirement for PKCE and the removal of plain client secrets in favour of mutual TLS and client JWT assertions. IdentityServer supports all of the above requirements.
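
As a side note on PKCE: the S256 code challenge required by RFC 7636 is simply the base64url-encoded SHA-256 hash of the code verifier. A minimal illustration (a generic client-side sketch, not IdentityServer code):

import { createHash, randomBytes } from 'crypto';

const base64url = (buf: Buffer) =>
    buf.toString('base64').replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');

// high-entropy code_verifier generated by the client
const codeVerifier = base64url(randomBytes(32));

// code_challenge sent in the authorization request when using the S256 method
const codeChallenge = base64url(createHash('sha256').update(codeVerifier).digest());

console.log({ codeVerifier, codeChallenge });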

In contrast, the “Read and Write Profile” (currently a working draft) steps up security significantly by demanding proof of possession tokens via token binding, requiring signed authentication requests and encrypted identity tokens, and limiting the authentication flow to hybrid only. The current list from the draft:

  • shall require the request or request_uri parameter to be passed as a JWS signed JWT as in clause 6 of OIDC;
  • shall require the response_type values code id_token or code id_token token;
  • shall return ID Token as a detached signature to the authorization response;
  • shall include state hash, s_hash, in the ID Token to protect the state value;
  • shall only issue holder of key authorization code, access token, and refresh token for write operations;
  • shall support OAUTB or MTLS as a holder of key mechanism;
  • shall support user authentication at LoA 3 or greater as defined in X.1254;
  • shall support signed and encrypted ID Tokens

Both profiles also have increased security requirements for clients – which is subject of a future post.

In short – exciting times ahead and we are constantly improving IdentityServer to make it ready for these new scenarios. Feel free to get in touch if you are interested.


Filed under: .NET Security, ASP.NET Core, IdentityServer, OAuth, OpenID Connect, Uncategorized, WebAPI


Dominick Baier: dotnet new Templates for IdentityServer4

The dotnet CLI includes a templating engine that makes it pretty straightforward to create your own project templates (see this blog post for a good intro).

This new repo is the home for all IdentityServer4 templates to come – right now they are pretty basic, but good enough to get you started.

The repo includes three templates right now:

dotnet new is4

Creates a minimal IdentityServer4 project without a UI and just one API and one client.

dotnet new is4ui

Adds the quickstart UI to the current project (can be combined with is4)

dotnet new is4inmem

Adds a boilerplate IdentityServer with UI, test users and sample clients and resources

See the readme for installation instructions.
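
For reference, custom templates are installed with the dotnet CLI's -i switch; assuming the package id matches the repo name (the readme has the authoritative instructions), this looks roughly like:

dotnet new -i identityserver4.templates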



Filed under: .NET Security, ASP.NET Core, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: New in IdentityServer4: Events

Well – not really new – but redesigned.

IdentityServer4 has two diagnostics facilities – logging and events. While logging is more like low level “printf” style – events represent higher level information about certain logical operations in IdentityServer (think Windows security event log).

Events are structured data and include event IDs, success/failure information, activity IDs, IP addresses, categories and event-specific details. This makes it easy to query and analyze them and extract useful information that can be used for further processing.

Events work great with event stores like ELK, Seq or Splunk.


Find more details in our docs.


Filed under: ASP.NET Core, IdentityServer, OAuth, OpenID Connect, Uncategorized, WebAPI


Dominick Baier: NDC London 2017

As always – NDC was a very good conference. Brock and I did a workshop, two talks and an interview. Here are the relevant links:

Check our website for more training dates.


Filed under: .NET Security, ASP.NET, IdentityModel, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: IdentityModel.OidcClient v2 & the OpenID RP Certification

A couple of weeks ago I started re-writing (and re-designing) my OpenID Connect & OAuth 2 client library for native applications. The library follows the guidance from the OpenID Connect and OAuth 2.0 for Native Applications specification.

Main features are:

  • Support for OpenID Connect authorization code and hybrid flow
  • Support for PKCE
  • NetStandard 1.4 library, which makes it compatible with x-plat .NET Core, desktop .NET, Xamarin iOS & Android (and UWP soon)
  • Configurable policy to lock down security requirements (e.g. requiring at_hash or c_hash, policies around discovery etc.)
  • Either stand-alone mode (request generation and response processing) or support for pluggable (system) browser implementations
  • Support for pluggable logging via .NET ILogger

In addition, starting with v2 – OidcClient is also now certified by the OpenID Foundation for the basic and config profile.


It also passes all conformance tests for the code id_token grant type (hybrid flow) – but since I don’t support the other hybrid flow combinations (e.g. code token or code id_token token), I couldn’t certify for the full hybrid profile.

For maximum transparency, I checked in my conformance test runner along with the source code. Feel free to try/verify yourself.

The latest version of OidcClient is the dalwhinnie release (courtesy of my whisky semver scheme). Source code is here.

I am waiting a couple more days for feedback – and then I will release the final 2.0.0 version. If you have some spare time, please give it a try (there’s a console client included and some more samples here <use the v2 branch for the time being>). Thanks!


Filed under: .NET Security, IdentityModel, OAuth, OpenID Connect, WebAPI


Dominick Baier: Platforms where you can run IdentityServer4

There is some confusion about where, and on which platform/OS you can run IdentityServer4 – or more generally speaking: ASP.NET Core.

IdentityServer4 is ASP.NET Core middleware – and ASP.NET Core (despite its name) runs on the full .NET Framework 4.5.x and upwards or .NET Core.

If you are using the full .NET Framework you are tied to Windows – but have the advantage of using a platform that you (and your devs, customers, support staff etc) already know well. It is just a .NET based web app at this point.

If you are using .NET Core, you get the benefits of the new stack including side-by-side versioning and cross-platform. But there is a learning curve involved getting to know .NET Core and its tooling.


Filed under: .NET Security, ASP.NET, IdentityServer, OpenID Connect, WebAPI


Henrik F. Nielsen: ASP.NET WebHooks V1 RTM (Link)

ASP.NET WebHooks V1 RTM was announced a little while back. WebHooks provide a simple pub/sub model for wiring together Web APIs and services with your code. A WebHook can be used to get notified when a file has changed in Dropbox, a code change has been committed to GitHub, a payment has been initiated in PayPal, a card has been created in Trello, and much more. When subscribing, you provide a callback URI where you want to be notified. When an event occurs, an HTTP POST request is sent to your callback URI with information about what happened so that your Web app can act accordingly. WebHooks happen without polling and with no need to hold open a network connection while waiting for notifications.
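
Conceptually, a receiver is just an HTTP endpoint at the registered callback URI that accepts the POST and reacts to the payload. A minimal, library-free sketch of that model (this is not the ASP.NET WebHooks API, just an illustration):

import * as http from 'http';

// Listen at the callback URI registered with the WebHook sender
http.createServer((req, res) => {
    if (req.method === 'POST' && req.url === '/webhooks/incoming') {
        let body = '';
        req.on('data', chunk => body += chunk);
        req.on('end', () => {
            // Act on the notification payload, e.g. log what happened
            console.log('WebHook received:', body);
            res.writeHead(200);
            res.end();
        });
    } else {
        res.writeHead(404);
        res.end();
    }
}).listen(5000);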

Microsoft ASP.NET WebHooks makes it easier to both send and receive WebHooks as part of your ASP.NET application.

In addition to hosting your own WebHook server, ASP.NET WebHooks are part of Azure Functions where you can process WebHooks without hosting or managing your own server! You can even go further and host an Azure Bot Service using Microsoft Bot Framework for writing cool bots talking to your customers!

The WebHook code targets ASP.NET Web API 2 and ASP.NET MVC 5, and is available as open source on GitHub and as NuGet packages. For feedback, fixes, and suggestions, you can use GitHub, Stack Overflow with the tag asp.net-webhooks, or send me a tweet.

For the full announcement, please see the blog Announcing Microsoft ASP.NET WebHooks V1 RTM.

Have fun!

Henrik


Dominick Baier: Bootstrapping OpenID Connect: Discovery

OpenID Connect clients and APIs need certain configuration values to initiate the various protocol requests and to validate identity and access tokens. You can either hard-code these values (e.g. the URLs of the authorize and token endpoints, key material etc.) – or get them dynamically using discovery.

Using discovery has advantages in case one of the needed values changes over time. This will definitely be the case for the key material you use to sign your tokens. In that scenario you want your token consumers to be able to update their configuration dynamically without having to take them down or re-deploy them.

The idea is simple: every OpenID Connect provider should offer a JSON document under the /.well-known/openid-configuration URL below its base address (often also called the authority). This document has information about the issuer name, endpoint URLs, key material and capabilities of the provider, e.g. which scopes or response types it supports.

Try https://demo.identityserver.io/.well-known/openid-configuration as an example.

Our IdentityModel library has a little helper class that allows loading and parsing a discovery document, e.g.:

var disco = await DiscoveryClient.GetAsync("https://demo.identityserver.io");
Console.WriteLine(disco.Json);

It also provides strongly typed accessors for most elements, e.g.:

Console.WriteLine(disco.TokenEndpoint);

..or you can access the elements by name:

Console.WriteLine(disco.Json.TryGetString("introspection_endpoint"));

It also gives you access to the key material and the various properties of the JSON encoded key set – e.g. iterating over the key ids:

foreach (var key in disco.KeySet.Keys)
{
    Console.WriteLine(key.Kid);
}

Discovery and security
As you can imagine, the discovery document is a nice target for an attacker. Being able to manipulate the endpoint URLs or the key material would ultimately result in a compromise of a client or an API.

As opposed to e.g. WS-Federation/WS-Trust metadata, the discovery document is not signed. Instead OpenID Connect relies on transport security for authenticity and integrity of the configuration data.

Recently we’ve been involved in a penetration test against client libraries, and one technique the pen-testers used was compromising discovery. Based on their feedback, the following extra checks should be done when consuming a discovery document:

  • HTTPS must be used for the discovery endpoint and all protocol endpoints
  • The issuer name should match the authority specified when downloading the document (that’s actually a MUST in the discovery spec)
  • The protocol endpoints should be “beneath” the authority – and not on a different server or URL (this could be especially interesting for multi-tenant OPs)
  • A key set must be specified

Based on that feedback, we added a configurable validation policy to DiscoveryClient that defaults to the above recommendations. If for whatever reason (e.g. dev environments) you need to relax a setting, you can use the following code:

var client = new DiscoveryClient("http://dev.identityserver.internal");
client.Policy.RequireHttps = false;
 
var disco = await client.GetAsync();

Btw – you can always connect over HTTP to localhost and 127.0.0.1 (but this is also configurable).

Source code here, nuget here.


Filed under: OAuth, OpenID Connect, WebAPI


Dominick Baier: Trying IdentityServer4

We have a number of options how you can experiment or get started with IdentityServer4.

Starting point
It all starts at https://identityserver.io – from here you can find all the links below as well as our next workshop dates, consulting, production support etc.

Source code
You can find all the source code in our IdentityServer organization on github. Especially IdentityServer4 itself, the samples, and the access token validation middleware.

Nuget
Here’s a list of all our nugets – here’s IdentityServer4, here’s the validation middleware.

Documentation and tutorials
Documentation can be found here. Especially useful to get started are our tutorials.

Demo Site
We have a demo site at https://demo.identityserver.io that runs the latest version of IdentityServer4. We have also pre-configured a number of client types, e.g. hybrid and authorization code (with and without PKCE) as well as implicit and client credentials flow. You can use this site to try IdentityServer with your favourite OpenID Connect client library. There is also a test API that you can call with our access tokens.

Compatibility check
Here’s a repo that contains all permutations of IdentityServer3 and 4, Katana and ASP.NET Core Web APIs and JWTs and reference tokens. We use this test harness to ensure cross version compatibility. Feel free to try it yourself.

CI builds
Our CI feed can be found here.

HTH


Filed under: .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: IdentityServer4.1.0.0

It’s done.

Release notes here.

Nuget here.

Docs here.

I am off to holidays.

See you next year.


Filed under: .NET Security, ASP.NET, OAuth, OpenID Connect, WebAPI

