Anuraj Parameswaran: Aspect oriented programming with ASP.NET Core

This post is about implementing simple AOP (Aspect Oriented Programming) with ASP.NET Core. AOP is a programming paradigm that aims to increase modularity by allowing the separation of cross-cutting concerns. It does so by adding additional behavior (an "advice") to existing code without modifying the code itself. An example of a cross-cutting concern is logging, which is frequently used in distributed applications to aid debugging by tracing method calls. AOP helps you implement logging without touching your actual code.
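The post's own implementation isn't reproduced in this summary, but to make the idea concrete, here is a minimal logging "advice" built on .NET's DispatchProxy; the interface and type names are illustrative only:

using System;
using System.Reflection;

public class LoggingAspect<T> : DispatchProxy
{
    private T _target;

    protected override object Invoke(MethodInfo targetMethod, object[] args)
    {
        // the "advice": extra behaviour wrapped around every call,
        // without modifying the decorated type itself
        Console.WriteLine($"Calling {targetMethod.Name}");
        var result = targetMethod.Invoke(_target, args);
        Console.WriteLine($"Finished {targetMethod.Name}");
        return result;
    }

    public static T Wrap(T target)
    {
        object proxy = Create<T, LoggingAspect<T>>();
        ((LoggingAspect<T>)proxy)._target = target;
        return (T)proxy;
    }
}

// usage: IOrderService is any interface you own (hypothetical here)
// IOrderService service = LoggingAspect<IOrderService>.Wrap(new OrderService());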


Anuraj Parameswaran: Hosting ASP.NET Core applications on Heroku using Docker

This post is about hosting ASP.NET Core applications on Heroku using Docker. Heroku is a cloud Platform-as-a-Service (PaaS) supporting several programming languages that is used as a web application deployment model. Heroku, one of the first cloud platforms, has been in development since June 2007, when it supported only the Ruby programming language; it now supports Java, Node.js, Scala, Clojure, Python, PHP, and Go. Heroku doesn't support .NET or .NET Core natively, but it recently started supporting Docker. In this post I am using Docker to deploy my application to Heroku. A buildpack option is also available (a buildpack is the deployment mechanism that Heroku supports natively), but there is no official buildpack for .NET yet.


Andrew Lock: Under the hood of the Middleware Analysis package

Under the hood of the Middleware Analysis package

In my last post I showed how you could use the Microsoft.AspNetCore.MiddlewareAnalysis package to analyse your middleware pipeline. In this post I take a look at the source code behind the package, to see how it's implemented.

What can you use the package for?

After you have added the MiddlewareAnalysis package to your ASP.NET Core application, you can use a DiagnosticSource to log arbitrary details about each middleware component when it starts and stops, or when an exception occurs. You can use this to create some very powerful logs, where you can inspect the raw HttpContext and exception and log any pertinent details. At the simplest level though, you get a log when each middleware component starts or stops.

MiddlewareStarting: Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware; /  
MiddlewareStarting: HelloWorld; /  
MiddlewareFinished: HelloWorld; 200  
MiddlewareFinished: Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware; 200  

Check out my previous post for details on how to add the middleware to your project, as well as how to create a diagnostic source.

How does it work?

In my last post I mentioned that the analysis package uses the IStartupFilter interface. I discussed this interface in a previous post, but in essence, it allows you to insert middleware into the pipeline without using the normal Startup.Configure approach. By registering this filter with the DI container, the MiddlewareAnalysis package can customise how the pipeline is built, and it uses that hook to insert the additional middleware that logs to a DiagnosticSource.

In the rest of this post I'll take a look at the various internal classes that make up the Microsoft.AspNetCore.MiddlewareAnalysis package (don't worry, there are only 4 of them!).

The service collection extensions

First, the easy bit.

As is the case with most packages, there is an extension method to allow you to easily add the necessary services to your application:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMiddlewareAnalysis();
}

This extension method registers a single instance of an IStartupFilter, the AnalysisStartupFilter. This startup filter will be invoked before your Startup.Configure method is run. In fact, when invoked, the AnalysisStartupFilter will receive your Startup.Configure method as an argument.
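The extension method itself is tiny; a sketch of what it does (paraphrased, the package source may differ in detail) looks like this:

public static IServiceCollection AddMiddlewareAnalysis(this IServiceCollection services)
{
    // TryAddEnumerable ensures the filter is only ever registered once
    services.TryAddEnumerable(
        ServiceDescriptor.Singleton<IStartupFilter, AnalysisStartupFilter>());
    return services;
}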

The AnalysisStartupFilter

The AnalysisStartupFilter that is added by the previous call to AddMiddlewareAnalysis() acts as a "wrapper" around the Startup.Configure method. It both takes and returns an Action<IApplicationBuilder>, letting you customise the way the middleware pipeline is constructed:

public class AnalysisStartupFilter : IStartupFilter  
{
    public Action<IApplicationBuilder> Configure(Action<IApplicationBuilder> next)
    {
        return builder =>
        {
            var wrappedBuilder = new AnalysisBuilder(builder);
            next(wrappedBuilder);

            // There's a couple of other bits here I'll gloss over for now
        };
    }
}

This is an interesting class. Rather than simply adding some fancy middleware to the pipeline, the startup filter creates a new instance of an AnalysisBuilder, passes in the ApplicationBuilder, and then invokes the next method, passing the wrappedBuilder.

This means that when the filter is run (on app startup), it creates a custom IApplicationBuilder, the AnalysisBuilder, and passes that to all subsequent filters and the Startup.Configure method. Consequently, all the calls you make in a typical Configure method are made on the AnalysisBuilder, instead of the original IApplicationBuilder instance:

public void Configure(IApplicationBuilder app)  
{
    // app is now the AnalysisBuilder, so all calls are made
    // on that, instead of the original IApplicationBuilder
    app.UseExceptionHandler("/error");
    app.UseStaticFiles();
    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

Let's take a look at the AnalysisBuilder and figure out what it's playing at.

Intercepting builder calls with the AnalysisBuilder

We now know that the AnalysisBuilder is intercepting all the calls that add middleware to the pipeline. What you may or may not know is that behind the scenes, all the extension methods like UseStaticFiles, UseMvc, and even UseMiddleware ultimately call a single method on IApplicationBuilder: Use. This makes it relatively easy to implement a wrapper around the default builder:

public IApplicationBuilder Use(Func<RequestDelegate, RequestDelegate> middleware)  
{
    // You can set a custom name for the middleware by setting 
    // app.Properties["analysis.NextMiddlewareName"] = "Name";

    string middlewareName = string.Empty;
    object middlewareNameObj;
    if (Properties.TryGetValue(NextMiddlewareName, out middlewareNameObj))
    {
        middlewareName = middlewareNameObj?.ToString();
        Properties.Remove(NextMiddlewareName);
    }

    return InnerBuilder.UseMiddleware<AnalysisMiddleware>(middlewareName)
        .Use(middleware);
}

The bulk of this method is taken up with checking whether you have set a value in the Properties collection, which allows you to specify a custom name for the next middleware in the pipeline (see my previous post).

The interesting bit occurs right at the end of the method. As well as adding the provided middleware to the underlying InnerBuilder, an instance of the AnalysisMiddleware is added to the pipeline. That means for every middleware added, an instance of the analysis middleware is added.

So for the example Configure method I showed earlier, that means our actual pipeline looks something like the following:

[Image: the resulting pipeline: AnalysisMiddleware -> ExceptionHandler -> AnalysisMiddleware -> StaticFiles -> AnalysisMiddleware -> Mvc -> AnalysisMiddleware ("EndOfPipeline")]

You may notice that as well as starting with an AnalysisMiddleware instance, the pipeline adds an AnalysisMiddleware after the MVC middleware too. This is thanks to the definition of the Build function in the AnalysisBuilder:

public RequestDelegate Build()  
{
    // Add one marker at the end before the default 404 middleware (or any fancy Join middleware).
    return InnerBuilder.UseMiddleware<AnalysisMiddleware>("EndOfPipeline")
        .Build();
}

As the comment says, this ensures a final instance of the AnalysisMiddleware, named "EndOfPipeline", is added at the very end of the pipeline, just before the default 404 middleware.

At the end of setup, we have a middleware pipeline configured as I showed in the previous image, where all the middleware we added to the pipeline in Startup.Configure is interspersed with AnalysisMiddleware.

This brings us onto the final piece of the puzzle, the AnalysisMiddleware itself.

Logging to DiagnosticSource with the AnalysisMiddleware

Up to this point, we haven't used DiagnosticSource anywhere in the package. It's all been about injecting the additional middleware into the pipeline. Inside this middleware is where we do the actual logging.

I'll show the code for the AnalysisMiddleware in a second, but essentially it is just doing three things:

  1. Logging to DiagnosticSource before the next middleware is invoked
  2. Logging to DiagnosticSource after the next middleware has finished being invoked
  3. Catching any exceptions, logging them to DiagnosticSource and rethrowing them.

For details on how DiagnosticSource works, check out my previous post. In brief, you can log to a source using Write(), providing a key and an anonymous object. You can see this is exactly what the middleware is doing in the code:

public class AnalysisMiddleware  
{
    private readonly Guid _instanceId = Guid.NewGuid();
    private readonly RequestDelegate _next;
    private readonly DiagnosticSource _diagnostics;
    private readonly string _middlewareName;

    public AnalysisMiddleware(RequestDelegate next, DiagnosticSource diagnosticSource, string middlewareName)
    {
        _next = next;
        _diagnostics = diagnosticSource;
        if (string.IsNullOrEmpty(middlewareName))
        {
            middlewareName = next.Target.GetType().FullName;
        }
        _middlewareName = middlewareName;
    }

    public async Task Invoke(HttpContext httpContext)
    {
        var startTimestamp = Stopwatch.GetTimestamp();
        if (_diagnostics.IsEnabled("Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareStarting"))
        {
            _diagnostics.Write(
                "Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareStarting",
                new
                {
                    name = _middlewareName,
                    httpContext = httpContext,
                    instanceId = _instanceId,
                    timestamp = startTimestamp,
                });
        }

        try
        {
            await _next(httpContext);

            if (_diagnostics.IsEnabled("Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareFinished"))
            {
                var currentTimestamp = Stopwatch.GetTimestamp();
                _diagnostics.Write(
                    "Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareFinished", 
                    new
                    {
                        name = _middlewareName,
                        httpContext = httpContext,
                        instanceId = _instanceId,
                        timestamp = currentTimestamp,
                        duration = currentTimestamp - startTimestamp,
                    });
            }
        }
        catch (Exception ex)
        {
            if (_diagnostics.IsEnabled("Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareException"))
            {
                var currentTimestamp = Stopwatch.GetTimestamp();
                _diagnostics.Write(
                    "Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareException", 
                    new
                    {
                        name = _middlewareName,
                        httpContext = httpContext,
                        instanceId = _instanceId,
                        timestamp = currentTimestamp,
                        duration = currentTimestamp - startTimestamp,
                        exception = ex,
                    });
            }
            throw;
        }
    }
}

In the constructor, we are passed a reference to the next middleware in the pipeline, next. If we haven't already been passed an explicit middleware name (see the AnalysisBuilder section) then a name is obtained from the type of next. This will generally return something like "Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware".

The Invoke method is then called when the middleware is executed. The code is a little hard to read due to the various parameters passed around; to make it a little easier on your eyes, the overall pseudo-code for the class looks something like this:

public class AnalysisMiddleware  
{
    private readonly RequestDelegate _next;

    public AnalysisMiddleware(RequestDelegate next, ...)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext httpContext)
    {
        Diagnostics.Log("MiddlewareStarting")

        try
        {
            await _next(httpContext);
            Diagnostics.Log("MiddlewareFinished")
        }
        catch (...)
        {
            Diagnostics.Log("MiddlewareException")
            throw;
        }
    }
}

Hopefully the simplicity of the middleware is more apparent in this latter version. It really is just writing to the diagnostic source and executing the next middleware in the pipeline.

Summary

And that's all there is to it! Just four classes, providing the ability to log lots of details about your middleware pipeline. The analysis middleware itself uses DiagnosticSource to expose details about the request HttpContext (or exception) currently executing.

The most interesting piece of the package is the way it uses AnalysisBuilder. This shows that you can get complete control over your middleware pipeline by using a simple wrapper class and injecting an IStartupFilter. If you haven't already, I really recommend checking out the code on GitHub. It's only four files after all! If you want to see how to actually use the package, check out my previous post on how to use it in your projects, including setting up a DiagnosticListener.


Anuraj Parameswaran: How to use Log4Net with ASP.NET Core for logging

This post is about using Log4Net with ASP.NET Core for implementing logging. The Apache log4net library is a tool to help the programmer output log statements to a variety of output targets. log4net is a port of the excellent Apache log4j™ framework to the Microsoft® .NET runtime. We have kept the framework similar in spirit to the original log4j while taking advantage of new features in the .NET runtime.
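The post's exact setup isn't reproduced in this summary, but a minimal sketch of wiring log4net into a .NET Core app might look like the following; the config file name is assumed:

using System.IO;
using System.Reflection;
using log4net;
using log4net.Config;

public class Program
{
    public static void Main(string[] args)
    {
        // load the XML configuration (file name assumed) into this assembly's repository
        var repository = LogManager.GetRepository(Assembly.GetEntryAssembly());
        XmlConfigurator.Configure(repository, new FileInfo("log4net.config"));

        var log = LogManager.GetLogger(typeof(Program));
        log.Info("Application starting");
    }
}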


Anuraj Parameswaran: Implementing the Repository and Unit of Work Patterns in ASP.NET Core

This post is about implementing the Repository and Unit of Work Patterns in ASP.NET Core. The repository and unit of work patterns are intended to create an abstraction layer between the data access layer and the business logic layer of an application. Implementing these patterns can help insulate your application from changes in the data store and can facilitate automated unit testing or test-driven development (TDD). A while back I wrote a post on implementing a generic repository in ASP.NET 5 (yes, back in the ASP.NET 5 days, but it can be used in ASP.NET Core as well), so I won't explain the Repository pattern further here. The UnitOfWork pattern is a design for grouping a set of tasks into a single unit of transactional work, and it is the solution to sharing the Entity Framework data context across multiple managers and repositories.
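As a rough sketch of the idea (the entity names, AppDbContext, and the IRepository<T>/Repository<T> abstraction are all assumed for illustration), a unit of work exposes the repositories and a single commit point over one shared EF Core context:

using System;
using System.Threading.Tasks;

public interface IUnitOfWork : IDisposable
{
    IRepository<Order> Orders { get; }
    IRepository<Customer> Customers { get; }
    Task<int> SaveChangesAsync(); // commits all pending changes as one unit
}

public class UnitOfWork : IUnitOfWork
{
    private readonly AppDbContext _context; // single context shared by all repositories

    public UnitOfWork(AppDbContext context)
    {
        _context = context;
        Orders = new Repository<Order>(_context);
        Customers = new Repository<Customer>(_context);
    }

    public IRepository<Order> Orders { get; }
    public IRepository<Customer> Customers { get; }

    public Task<int> SaveChangesAsync() => _context.SaveChangesAsync();

    public void Dispose() => _context.Dispose();
}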


Andrew Lock: Understanding your middleware pipeline with the Middleware Analysis package

Understanding your middleware pipeline with the Middleware Analysis package

Edited 18th Feb 17, to add section on adding a custom name for anonymous middleware

In a recent post I took a look at the IStartupFilter interface, a little known feature that can be used to add middleware to your configured pipeline through a different route than the standard Configure method.

I have also previously looked at the DiagnosticSource logging framework, which provides a mechanism for logging rich data, as opposed to the strings that are logged using the ILogger infrastructure.

In this post, I will take a look at the Microsoft.AspNetCore.MiddlewareAnalysis package, which uses an IStartupFilter and DiagnosticSource to provide insights into your middleware pipeline.

Analysing your middleware pipeline

Before we dig into details, let's take a look at what you can expect when you use the MiddlewareAnalysis package in your solution.

I've started with a simple 'Hello world' ASP.NET Core application that just prints to the console for every request. The initial Startup.Configure method looks like the following:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)  
{
    loggerFactory.AddConsole();

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.Run(async (context) =>
    {
        await context.Response.WriteAsync("Hello World!");
    });
}

When you run the app, you just get the Hello World as expected:

[Image: browser showing the 'Hello World!' response]

By default, we have a console logger set up, so the request to the root url will provide some details about the request:

[Image: console output showing the standard ILogger request logs]

Nothing particularly surprising there. The interesting stuff happens after we've added the analysis filter to our project. After doing so, we'll see a whole load of additional information logged to the console, describing when each of the middleware components starts and stops:

[Image: console output showing MiddlewareStarting and MiddlewareFinished analysis events]

At first blush this might not seem all that useful, as it doesn't appear to be logging anything especially interesting. But as you'll see later, the real power comes from using the DiagnosticSource adapter, which allows you to log arbitrary details.

It could also be very handy in diagnosing complex branching middleware pipelines, when trying to figure out why a particular request takes a given route through your app.

Now that you've seen what to expect, let's add middleware analysis to our pipeline.

1. Add the required packages

There are a couple of packages needed for this example. The AnalysisStartupFilter we are going to use from the Microsoft.AspNetCore.MiddlewareAnalysis package writes events using DiagnosticSource. One of the easiest ways to consume these events is using the Microsoft.Extensions.DiagnosticAdapter package, as you'll see shortly.

First add the MiddlewareAnalysis and DiagnosticAdapter packages to either your project.json or .csproj file (depending if you've moved to the new msbuild format):

{
  "dependencies" : {
  ...
  "Microsoft.AspNetCore.MiddlewareAnalysis": "1.1.0",
  "Microsoft.Extensions.DiagnosticAdapter": "1.1.0"
  }
}
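If you've moved to the new csproj format, the equivalent PackageReference entries would look something like the following (same package versions assumed):

<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.MiddlewareAnalysis" Version="1.1.0" />
  <PackageReference Include="Microsoft.Extensions.DiagnosticAdapter" Version="1.1.0" />
</ItemGroup>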

Next, we will create an adapter to consume the events generated by the MiddlewareAnalysis package.

2. Creating a diagnostic adapter

In order to consume events from a DiagnosticSource, we need to subscribe to the event stream. For further details on how DiagnosticSource works, check out my previous post or the user guide.

A diagnostic adapter is typically a standard POCO class with methods for each event you are interested in, decorated with a DiagnosticNameAttribute.

You can create a simple adapter for the MiddlewareAnalysis package such as the following (taken from the sample on GitHub):

public class TestDiagnosticListener  
{
    [DiagnosticName("Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareStarting")]
    public virtual void OnMiddlewareStarting(HttpContext httpContext, string name)
    {
        Console.WriteLine($"MiddlewareStarting: {name}; {httpContext.Request.Path}");
    }

    [DiagnosticName("Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareException")]
    public virtual void OnMiddlewareException(Exception exception, string name)
    {
        Console.WriteLine($"MiddlewareException: {name}; {exception.Message}");
    }

    [DiagnosticName("Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareFinished")]
    public virtual void OnMiddlewareFinished(HttpContext httpContext, string name)
    {
        Console.WriteLine($"MiddlewareFinished: {name}; {httpContext.Response.StatusCode}");
    }
}

This adapter creates a separate method for each event that the analysis middleware exposes:

  • "Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareStarting"
  • "Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareFinished"
  • "Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareException"

Each event exposes a particular set of named parameters which you can use in your method. For example, the MiddlewareStarting event exposes a number of parameters, though we are only using two of them in our TestDiagnosticListener methods:

  • string name: The name of the currently executing middleware
  • HttpContext httpContext: The HttpContext for the current request
  • Guid instanceId: A unique guid for the analysis middleware
  • long timestamp: The timestamp at which the middleware started to run, given by Stopwatch.GetTimestamp()

Warning: The names of the methods in our adapter are not important, but the names of the parameters are. If you don't name them correctly, you'll get exceptions or nulls passed to your methods at runtime.

The fact that the whole HttpContext is available to the logger is one of the really powerful points of the DiagnosticSource infrastructure. Instead of the decision about what to log being made at the calling site, it is made by the logging code itself, which has access to the full context of the event.

In the example, we are doing some very trivial logging, writing straight to the console and just noting down basic features of the request like the request path and status code, but the possibility is there to do far more interesting things as needed: inspecting headers, query parameters, writing to other data sinks, etc.
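For instance, the OnMiddlewareStarting method above could pull a header out of the request; the choice of the User-Agent header here is an arbitrary example:

[DiagnosticName("Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareStarting")]
public virtual void OnMiddlewareStarting(HttpContext httpContext, string name)
{
    // the full HttpContext is available, so we can log whatever we like
    var userAgent = httpContext.Request.Headers["User-Agent"];
    Console.WriteLine($"MiddlewareStarting: {name}; {httpContext.Request.Path}; UA: {userAgent}");
}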

With an adapter created we can look at wiring it up in our application.

3. Add the necessary services

As with most standard middleware packages in ASP.NET Core, you must register some required services with the dependency injection container in ConfigureServices. Luckily, as is convention, the package contains an extension method to make this easy:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMiddlewareAnalysis();
}

With the services registered, the final step is to wire up our listener.

4. Wiring up the diagnostic listener

In order for the TestDiagnosticListener we created in step 2 to collect events, we must register it with a DiagnosticListener. Again, check the user guide if you are interested in the details.

public void Configure(IApplicationBuilder app, DiagnosticListener diagnosticListener, IHostingEnvironment env, ILoggerFactory loggerFactory)  
{
    var listener = new TestDiagnosticListener();
    diagnosticListener.SubscribeWithAdapter(listener);

    // ... remainder of the existing Configure method
}

The DiagnosticListener can be injected into the Configure method using standard dependency injection, and an instance of the TestDiagnosticListener can just be created directly.

The registering of our listener is achieved using the SubscribeWithAdapter extension method that is exposed by the Microsoft.Extensions.DiagnosticAdapter package. This performs all the wiring up of the TestDiagnosticListener methods to the DiagnosticListener for us.

That is all there is to it. No additional middleware to add, no modifications to our pipeline; just add the listener and give it a test!

5. Run your application and look at the output

With everything set up, you can run your application, make a request, and check the output. If everything is configured correctly, you should see the same "Hello World" response, but your console will be peppered with extra details about the middleware being run:

[Image: console output showing the middleware analysis events for the Hello World app]

Understanding the analysis output

Assuming this has all worked correctly, you should have a series of "MiddlewareStarting" and "MiddlewareFinished" entries in your console. In my case, running in a development environment, and ignoring the ILogger messages, my sample app gives me the following output when I make a request to the root path /:

MiddlewareStarting: Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware; /  
MiddlewareStarting: UnderstandingMiddlewarePipeline.Startup+<>c; /  
MiddlewareFinished: UnderstandingMiddlewarePipeline.Startup+<>c; 200  
MiddlewareFinished: Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware; 200  

There are two calls to "MiddlewareStarting" in here, and two corresponding calls to "MiddlewareFinished".

The first call is fairly self explanatory, as it lists the name of the middleware as DeveloperExceptionPageMiddleware. This was the first middleware I added to my pipeline, so it was the first middleware called for the request.

The second call is slightly more cryptic, as it lists the name of the middleware as UnderstandingMiddlewarePipeline.Startup+<>c. This is because I used an inline app.Run method to generate the "Hello world" response in the browser (as I showed right at the beginning of the post).

Using app.Run adds an additional logical piece of middleware to the pipeline, one that executes the provided lambda. That lambda obviously doesn't have a friendly name, so the analysis middleware package passes the automatically generated type name to the listener.

As expected, the MiddlewareFinished logs occur in the reverse order to the MiddlewareStarting logs, as the response passes back down through the middleware pipeline. At each stage it lists the status code for the request generated by the completing middleware. This would allow you to see, for example, at exactly which point in the pipeline the status code for a request switched from 200 to an error.

Adding a custom name for anonymous middleware

While it's fairly obvious what the anonymous middleware is in this case, what if you had used multiple app.Run or app.Use calls in your method? It could be confusing if your pipeline has multiple branches or several anonymous methods, which rather defeats the point of using the analysis package!

Luckily, there's an easy way to give a name to these anonymous methods if you wish, by setting a property on the IApplicationBuilder called "analysis.NextMiddlewareName". For example, we could rewrite our middleware pipeline as the following:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)  
{
    loggerFactory.AddConsole();

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.Properties["analysis.NextMiddlewareName"] = "HelloWorld";
    app.Run(async (context) =>
    {
        await context.Response.WriteAsync("Hello World!");
    });
}

If we ran the request again with the addition of our property, the logs would look like the following:

MiddlewareStarting: Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware; /  
MiddlewareStarting: HelloWorld; /  
MiddlewareFinished: HelloWorld; 200  
MiddlewareFinished: Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware; 200  

Much clearer!

Listening for errors

Previously I said that the analysis middleware generates three different events, one of which is MiddlewareException. To see this in action, the easiest approach is to view the demo for the middleware analysis package. This lets you test each of the different types of events you might get using simple links:

[Image: the middleware analysis sample app, with links to trigger each type of event]

By clicking on the "throw" option, the sample app will throw an exception as part of the middleware pipeline execution, and you can inspect the logs. If you do, you'll see a number of "MiddlewareException" entries. Obviously the details shown in the sample are scant, but as I've already described, you have a huge amount of flexibility in your diagnostic adapter to log any details you need:

MiddlewareStarting: Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware; /throw  
MiddlewareStarting: Microsoft.AspNetCore.Builder.UseExtensions+<>c__DisplayClass0_0; /throw  
MiddlewareStarting: Microsoft.AspNetCore.Builder.Extensions.MapMiddleware; /throw  
MiddlewareStarting: MiddlewareAnaysisSample.Startup+<>c__DisplayClass1_0; /throw  
MiddlewareStarting: Microsoft.AspNetCore.Builder.Extensions.MapMiddleware; /throw  
MiddlewareStarting: MiddlewareAnaysisSample.Startup+<>c;  
MiddlewareException: MiddlewareAnaysisSample.Startup+<>c; Application Exception  
MiddlewareException: Microsoft.AspNetCore.Builder.Extensions.MapMiddleware; Application Exception  
MiddlewareException: MiddlewareAnaysisSample.Startup+<>c__DisplayClass1_0; Application Exception  
MiddlewareException: Microsoft.AspNetCore.Builder.Extensions.MapMiddleware; Application Exception  
MiddlewareException: Microsoft.AspNetCore.Builder.UseExtensions+<>c__DisplayClass0_0; Application Exception  

Hopefully you now have a feel for the benefit of being able to get such detailed insight into exactly what your middleware pipeline is doing. I really encourage you to have a play with the sample app, and tweak it to see how powerful it could be for diagnosing issues in your own apps.

Event parameters reference

One of the slightly annoying things with the DiagnosticSource infrastructure is the lack of documentation around the events and parameters that a package exposes. You can always look through the source code but that's not exactly user friendly.

As of writing, the current version of Microsoft.AspNetCore.MiddlewareAnalysis is 1.1.0, which exposes three events, with the following parameters:

  • "Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareStarting"
    • string name: The name of the currently executing middleware
    • HttpContext httpContext: The HttpContext for the current request
    • Guid instanceId: A unique guid for the analysis middleware
    • long timestamp: The current ticks timestamp at which the middleware started to run, given by Stopwatch.GetTimestamp()
  • "Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareFinished"
    • string name: The name of the currently executing middleware
    • HttpContext httpContext: The HttpContext for the current request
    • Guid instanceId: A unique guid for the analysis middleware
    • long timestamp: The timestamp at which the middleware finished running
    • long duration: The duration in ticks that the middleware took to run, given by the finish timestamp - the start timestamp.
  • "Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareException"
    • string name: The name of the currently executing middleware
    • HttpContext httpContext: The HttpContext for the current request
    • Guid instanceId: A unique guid for the analysis middleware
    • long timestamp: The timestamp at which the middleware finished running
    • long duration: The duration in ticks that the middleware took to run, given by the finish timestamp - the start timestamp.
    • Exception ex: The exception that occurred during execution of the middleware

Given that the names of the events and the parameters must match these values, if you find your logger isn't working, it's worth checking back in the source code to see if things have changed.

Note: The event names must be an exact match, including case, but the parameter names are not case sensitive.
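One practical note: the timestamps and durations are raw Stopwatch ticks rather than TimeSpan ticks, so converting a duration to wall-clock time requires Stopwatch.Frequency (from System.Diagnostics):

// duration is in Stopwatch ticks; Stopwatch.Frequency gives ticks per second
var durationInMs = duration * 1000.0 / Stopwatch.Frequency;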

Under the hood

This post is already pretty long, so I'll save the details of how the middleware analysis filter works for a later post, but at its core it is using two pieces of infrastructure: IStartupFilter and DiagnosticSource. Of course, if you can't wait, you can always check out the source code on GitHub!


Andrew Lock: Exploring IStartupFilter in ASP.NET Core

Exploring IStartupFilter in ASP.NET Core

Note The MEAP preview of my book, ASP.NET Core in Action is now available from Manning! Use the discount code mllock to get 50% off, valid through February 13.

I was spelunking through the ASP.NET Core source code the other day, when I came across something I hadn't seen before - the IStartupFilter interface. This lives in the Hosting repository in ASP.NET Core and is generally used by a number of framework services rather than by ASP.NET Core applications themselves.

In this post, I'll take a look at what the IStartupFilter is and how it is used in the ASP.NET Core infrastructure. In the next post I'll take a look at an external middleware implementation that makes use of it.

The IStartupFilter interface

The IStartupFilter interface lives in the Microsoft.AspNetCore.Hosting.Abstractions package in the Hosting repository on GitHub. It is very simple, and implements just a single method:

namespace Microsoft.AspNetCore.Hosting  
{
    public interface IStartupFilter
    {
        Action<IApplicationBuilder> Configure(Action<IApplicationBuilder> next);
    }
}

The single Configure method defined by IStartupFilter both takes and returns an Action<IApplicationBuilder>. That's a pretty generic signature, and doesn't reveal a lot of intent, but we'll just go with it for now.

The IApplicationBuilder is what you use to configure a middleware pipeline when building an ASP.NET Core application. For example, a simple Startup.Configure method in an MVC app might look something like the following:

public void Configure(IApplicationBuilder app)  
{
    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

In this method, you are directly provided an instance of the IApplicationBuilder, and can add middleware to it. With IStartupFilter, you both receive and return an Action<IApplicationBuilder>: that is, you are provided a method for configuring an IApplicationBuilder, and you must return one too.

Consider this again for a second - the IStartupFilter.Configure method accepts a method for configuring an IApplicationBuilder. In other words, the IStartupFilter.Configure accepts a method such as Startup.Configure:

Startup _startup = new Startup();  
Action<IApplicationBuilder> startupConfigure = _startup.Configure;

IStartupFilter filter1 = new StartupFilter1(); //I'll show an example filter later on  
Action<IApplicationBuilder> filter1Configure = filter1.Configure(startupConfigure);

IStartupFilter filter2 = new StartupFilter2(); //I'll show an example filter later on  
Action<IApplicationBuilder> filter2Configure = filter2.Configure(filter1Configure);

This may or may not start seeming somewhat familiar… We are building up another pipeline; but instead of a middleware pipeline, we are building a pipeline of Configure methods. This is the purpose of the IStartupFilter: to allow you to create a pipeline of Configure methods in your application.

When are IStartupFilters called?

Now we better understand the signature of IStartupFilter, we can take a look at its usage in the ASP.NET Core framework.

To see IStartupFilter in action, you can take a look at the WebHost class in the Microsoft.AspNetCore.Hosting package, in the method BuildApplication. This method is called as part of the general initialisation that takes place when you call Build on a WebHostBuilder. This typically takes place in your program.cs file, e.g.:

public class Program  
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()    
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseStartup<Startup>()
            .Build();  // this will result in a call to BuildApplication()

        host.Run(); 
    }
}

Taking a look at BuildApplication in elided form (below), you can see that this method is responsible for instantiating the middleware pipeline. The RequestDelegate it returns represents a complete pipeline, and can be called by the server (Kestrel) when a request arrives.

private RequestDelegate BuildApplication()  
{
    //some additional setup not shown
    IApplicationBuilder builder = builderFactory.CreateBuilder(Server.Features);
    builder.ApplicationServices = _applicationServices;

    var startupFilters = _applicationServices.GetService<IEnumerable<IStartupFilter>>();
    Action<IApplicationBuilder> configure = _startup.Configure;
    foreach (var filter in startupFilters.Reverse())
    {
        configure = filter.Configure(configure);
    }

    configure(builder);

    return builder.Build();
}

First, this method creates an instance of an IApplicationBuilder, which will be used to build the middleware pipeline, and sets the ApplicationServices to a configured DI container.

The next block is the interesting part. First, an IEnumerable<IStartupFilter> is fetched from the DI container. As I've already hinted, we can configure multiple IStartupFilters to form a pipeline, so this method just fetches them all from the container. Also, the Startup.Configure method is captured into a local variable, configure. This is the Configure method that you typically write in your Startup class to configure your middleware pipeline.

Now we create the pipeline of Configure methods by looping through each IStartupFilter (in reverse order), passing in the Startup.Configure method, and then updating the local variable. This has the effect of creating a nested pipeline of Configure methods. For example, if we have three instances of IStartupFilter, you will end up with something a little like this, where the inner configure methods are passed as the parameter to the outer methods:

[Image: diagram of three IStartupFilter Configure methods nested around Startup.Configure]

The final value of configure is then used to perform the actual middleware pipeline configuration by invoking it with the prepared IApplicationBuilder. Calling builder.Build() generates the RequestDelegate required for handling HTTP requests.

What does an implementation look like?

We've described in general what IStartupFilter is for, but it's always easier to have a concrete implementation to look at. By default, the WebHostBuilder registers a single IStartupFilter when it initialises - the AutoRequestServicesStartupFilter:

public class AutoRequestServicesStartupFilter : IStartupFilter  
{
    public Action<IApplicationBuilder> Configure(Action<IApplicationBuilder> next)
    {
        return builder =>
        {
            builder.UseMiddleware<RequestServicesContainerMiddleware>();
            next(builder);
        };
    }
}

Hopefully, the behaviour of this class is fairly obvious. Essentially it adds an additional piece of middleware, the RequestServicesContainerMiddleware, at the start of your middleware pipeline.

This is the only IStartupFilter registered by default, and so in that case the parameter next will be the Configure method of your Startup class.

And that is essentially all there is to IStartupFilter - it is a way to add additional middleware (or other configuration) at the beginning or end of the configured pipeline.
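For comparison, a hypothetical filter of your own could add middleware at both ends of the pipeline by calling next in the middle of its additions (FirstMiddleware and LastMiddleware are placeholder names):

public class BookendStartupFilter : IStartupFilter
{
    public Action<IApplicationBuilder> Configure(Action<IApplicationBuilder> next)
    {
        return builder =>
        {
            builder.UseMiddleware<FirstMiddleware>(); // runs before everything else
            next(builder);                            // the rest of the configured pipeline
            builder.UseMiddleware<LastMiddleware>();  // runs after everything else
        };
    }
}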

How are they registered?

Registering an IStartupFilter is simple, just register it in your ConfigureServices call as usual. The AutoRequestServicesStartupFilter is registered by default in the WebHostBuilder as part of its initialisation:

private IServiceCollection BuildHostingServices()  
{
    ...
    services.AddTransient<IStartupFilter, AutoRequestServicesStartupFilter>();
    ...
}

The RequestServicesContainerMiddleware

On a slightly tangential point, but just for interest, the RequestServicesContainerMiddleware (that is registered by the AutoRequestServicesStartupFilter) is shown in reduced format below:

public class RequestServicesContainerMiddleware  
{
    private readonly RequestDelegate _next;
    private IServiceScopeFactory _scopeFactory;

    public RequestServicesContainerMiddleware(RequestDelegate next, IServiceScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory;
        _next = next;
    }

    public async Task Invoke(HttpContext httpContext)
    {
        var existingFeature = httpContext.Features.Get<IServiceProvidersFeature>();

        // All done if request services is set
        if (existingFeature?.RequestServices != null)
        {
            await _next.Invoke(httpContext);
            return;
        }

        using (var feature = new RequestServicesFeature(_scopeFactory))
        {
            try
            {
                httpContext.Features.Set<IServiceProvidersFeature>(feature);
                await _next.Invoke(httpContext);
            }
            finally
            {
                httpContext.Features.Set(existingFeature);
            }
        }
    }
}

This middleware is responsible for setting the IServiceProvidersFeature. When created, the RequestServicesFeature creates a new IServiceScope and IServiceProvider for the request. This handles the creation and disposal of dependencies registered with the dependency injection container with a Scoped lifetime.

Hopefully it's clear why it's important that this middleware is added at the beginning of the pipeline - subsequent middleware may need access to the scoped services it manages.

By using an IStartupFilter, the framework can be sure the middleware is added at the start of the pipeline, and does so in an extensible, self-contained way.

When should you use it?

Generally speaking, I don't imagine there will be much need for IStartupFilter in users' applications. By their nature, users can define the middleware pipeline as they like in the Configure method, so IStartupFilter is rather unnecessary.

I can see a couple of situations in which IStartupFilter would be useful to implement:

  1. You are a library author, and you need to ensure your middleware runs at the beginning (or end) of the middleware pipeline.
  2. You are using a library which makes use of the IStartupFilter and you need to make sure your middleware runs before its middleware does.

Considering the first point, you may have some middleware that absolutely needs to run at a particular point in the middleware pipeline. This is effectively the use case for the RequestServicesContainerMiddleware shown previously.

Currently, the order in which services are registered with the DI container controls the order in which they are returned when you fetch an IEnumerable<T> using GetServices(). As the AutoRequestServicesStartupFilter is added first, it will be returned first when fetched as part of an IEnumerable<IStartupFilter>. Thanks to the call to Reverse() in the WebHost.BuildApplication() method, its Configure method will be the last one called, and hence the outermost method.

If you register additional IStartupFilters in your ConfigureServices method, they will be run prior to the AutoRequestServicesStartupFilter, in the reverse order that you register them. The earlier they are registered with the container, the closer to the beginning of the pipeline any middleware they define will be.

This means you can control the order of middleware added by IStartupFilters in your application. If you use a library that registers an IStartupFilter in its 'Add' method, you can choose whether your own IStartupFilter should run before or after it by whether it is registered before or after in your ConfigureServices method.
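As a sketch (both filter names here are hypothetical), the registration order is what controls the middleware order:

public void ConfigureServices(IServiceCollection services)
{
    // EarlyStartupFilter is registered first, so any middleware it adds
    // ends up closest to the beginning of the pipeline
    services.AddTransient<IStartupFilter, EarlyStartupFilter>();
    services.AddTransient<IStartupFilter, LateStartupFilter>();
}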

The whole concept of IStartupFilters is a little confusing and somewhat esoteric, but it's nice to know it's there as an option should it be required!

Summary

In this post I discussed the IStartupFilter and its use by the WebHost when building a middleware pipeline. In the next post I'll explore a specific usage of the IStartupFilter.


Anuraj Parameswaran: Watermark Images on the Fly in ASP.NET Core

This post is about applying watermark images on the fly in ASP.NET Core. From the initial days of ASP.NET Core, image manipulation was a challenge, since the System.Drawing library depended on GDI+ and Microsoft hadn't released a package for image manipulation.


Damien Bowden: Hot Module Replacement with Angular and Webpack

This article shows how HMR, or Hot Module Replacement can be used together with Angular and Webpack.

Code: Visual Studio 2015 project | Visual Studio 2017 project

Blogs in this series:

2017.02.06: Updated to webpack 2.2.1, Angular 2.4.6, renaming to angular

package.json npm file

The webpack-dev-server from Kees Kluskens is added to the devDependencies in the npm package.json file. The webpack-dev-server package implements and supports the HMR feature.

"devDependencies": {
  ...
  "webpack": "^2.2.1",
  "webpack-dev-server": "2.2.1"
},

In the scripts section of the package.json, the start command is configured to start the dotnet server and also the webpack-dev-server with the --hot and the --inline parameters.

See the webpack-dev-server documentation for more information about the possible parameters.

The dotnet server is only required because this demo application uses a Web API service implemented in ASP.NET Core.

"start": "concurrently \"webpack-dev-server --hot --inline --port 8080\" \"dotnet run\" "

webpack dev configuration

The devServer is added to the module.exports in the webpack.dev.js. This configures the webpack-dev-server as required. The webpack-dev-server configuration can be set here as well as via the command line options, so you as a developer can decide which works better for you.

devServer: {
	historyApiFallback: true,
	contentBase: path.join(__dirname, '/wwwroot/'),
	watchOptions: {
		aggregateTimeout: 300,
		poll: 1000
	}
},

The output in the module.exports also needs to be configured correctly for the webpack-dev-server to work correctly. If the ‘./’ path is used in the path option of the output section, the webpack-dev-server will not start.

output: {
	path: __dirname +  '/wwwroot/',
	filename: 'dist/[name].bundle.js',
	chunkFilename: 'dist/[id].chunk.js',
	publicPath: '/'
},

Running the application

Build the application using the webpack dev build. This can be done from the command line. Before building, you need to install all the npm packages using npm install.

$ npm run build-dev

The npm script build-dev is defined in the package.json file and uses the webpack-dev script which does a development build.

"build-dev": "npm run webpack-dev",
"webpack-dev": "set NODE_ENV=development && webpack",

Now the server can be started using the start script.

$ npm start

[Image: console output after running npm start, with the webpack-dev-server and dotnet server running]

The application is now running on localhost with port 8080 as defined.

http://localhost:8080/home

If, for example, the color is changed in the app.scss, the bundles will be reloaded in the browser without a full refresh.
[Image: the browser picking up the new styles without a page refresh]

Links

https://webpack.js.org/concepts/hot-module-replacement/

https://webpack.js.org/configuration/dev-server/#devserver

https://github.com/webpack/webpack-dev-server

https://www.sitepoint.com/beginners-guide-to-webpack-2-and-module-bundling/




Dominick Baier: IdentityModel.OidcClient v2 & the OpenID RP Certification

A couple of weeks ago I started re-writing (and re-designing) my OpenID Connect & OAuth 2 client library for native applications. The library follows the guidance from the OpenID Connect and OAuth 2.0 for native Applications specification.

Main features are:

  • Support for OpenID Connect authorization code and hybrid flow
  • Support for PKCE
  • NetStandard 1.4 library, which makes it compatible with x-plat .NET Core, desktop .NET, Xamarin iOS & Android (and UWP soon)
  • Configurable policy to lock down security requirements (e.g. requiring at_hash or c_hash, policies around discovery etc.)
  • Either stand-alone mode (request generation and response processing) or support for pluggable (system) browser implementations
  • Support for pluggable logging via .NET ILogger

In addition, starting with v2 – OidcClient is also now certified by the OpenID Foundation for the basic and config profile.

[Image: OpenID Certified logo]

It also passes all conformance tests for the code id_token grant type (hybrid flow) – but since I don’t support the other hybrid flow combinations (e.g. code token or code id_token token), I couldn’t certify for the full hybrid profile.

For maximum transparency, I checked in my conformance test runner along with the source code. Feel free to try/verify yourself.

The latest version of OidcClient is the dalwhinnie release (courtesy of my whisky semver scheme). Source code is here.
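To give a flavour of the API, a login in stand-alone mode looks roughly like the following. This is a sketch rather than copied from the docs: the authority, client details, the OpenSystemBrowser helper, and responseData (the raw callback data) are all placeholders, and method names may differ slightly between releases.

var client = new OidcClient(new OidcClientOptions
{
    Authority = "https://demo.identityserver.io", // placeholder authority
    ClientId = "native.app",                      // placeholder client id
    RedirectUri = "myapp://callback",
    Scope = "openid profile"
});

// stand-alone mode: generate the authorize request ourselves...
var state = await client.PrepareLoginAsync();
OpenSystemBrowser(state.StartUrl); // hypothetical helper that launches the browser

// ...and process the raw response data returned to the redirect URI
var result = await client.ProcessResponseAsync(responseData, state);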

I am waiting a couple more days for feedback, and then I will release the final 2.0.0 version. If you have some spare time, please give it a try (there's a console client included and some more samples here <use the v2 branch for the time being>). Thanks!




Andrew Lock: Logging using DiagnosticSource in ASP.NET Core

Logging using DiagnosticSource in ASP.NET Core

Logging in the ASP.NET Core framework is implemented as an extensible set of providers that allows you to easily plug in new providers without having to change your logging code itself. The docs give a great summary of how to use the ILogger and ILoggerFactory in your application and how to pipe the output to the console, to Serilog, to Azure etc. However, the ILogger isn't the only logging possibility in ASP.NET Core.

In this post, I'll show how to use the DiagnosticSource logging system in your ASP.NET Core application.

ASP.NET Core logging systems

There are actually three logging systems in ASP.NET Core:

  1. EventSource - Fast and strongly typed. Designed to interface with OS logging systems.
  2. ILogger - An extensible logging system designed to allow you to plug in additional consumers of logging events.
  3. DiagnosticSource - Similar in design to EventSource, but does not require the logged data be serialisable.

EventSource has been available since the .NET Framework 4.5 and is used extensively by the framework to instrument itself. The data that gets logged is strongly typed, but must be serialisable as the data is sent out of the process to be logged. Ultimately, EventSource is designed to interface with the underlying operating system's logging infrastructure, e.g. Event Tracing for Windows (ETW) or LTTng on Linux.

The ILogger infrastructure is the most commonly used logging infrastructure in ASP.NET Core. You can log to it by injecting an instance of ILogger into your classes and calling, for example, ILogger.LogInformation(). The infrastructure is designed for logging strings only, but it does allow you to pass objects as additional parameters which can be used for structured logging (such as that provided by Serilog). Generally speaking, the ILogger implementation will be the infrastructure you want to use in your applications, so check out the documentation if you are not familiar with it.
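For example, a message template logged through ILogger can carry structured data; the _logger and context names below are illustrative:

// {Path} is captured as a named property by structured providers such as Serilog
_logger.LogInformation("Handling request at {Path}", context.Request.Path);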

The DiagnosticSource infrastructure is very similar to the EventSource infrastructure, but the data being logged does not leave the process, so it does not need to be serialisable. There is also an adapter to allow converting DiagnosticSource events to ETW events which can be useful in some cases. It is worth reading the users guide for DiagnosticSource on GitHub if you wish to use it in your code.

When to use DiagnosticSource vs ILogger?

The ASP.NET Core internals use both the ILogger and the DiagnosticSource infrastructure to instrument itself. Generally speaking, and unsurprisingly, DiagnosticSource is used strictly for diagnostics. It records events such as "Microsoft.AspNetCore.Mvc.BeforeViewComponent" and "Microsoft.AspNetCore.Mvc.ViewNotFound".

In contrast, the ILogger is used to log more specific information such as "Executing JsonResult, writing value {Value}." or when an error occurs, such as "JSON input formatter threw an exception.".

So in essence, you should only use DiagnosticSource for infrastructure related events, for tracing the flow of your application process. Generally, ILogger will be the appropriate interface in almost all cases.

An example project using DiagnosticSource

For the rest of this post I'll show an example of how to log events to DiagnosticSource, and how to write a listener to consume them. This example will simply log to the DiagnosticSource when some custom middleware executes, and the listener will write details about the current request to the console. You can find the example project here.

Adding the necessary dependencies

We'll start by adding the NuGet packages we're going to need for our DiagnosticSource to our project.json (I haven't moved to csproj based projects yet):

{
  "dependencies": {
    ...
    "Microsoft.Extensions.DiagnosticAdapter": "1.1.0",
    "System.Diagnostics.DiagnosticSource": "4.3.0"
  }
}

Strictly speaking, the System.Diagnostics.DiagnosticSource package is the only one required, but we will add the adapter to give us an easier way to write a listener later.

Logging to the DiagnosticSource from middleware

Next, we'll create the custom middleware. This middleware doesn't do anything other than log to the diagnostic source:

public class DemoMiddleware  
{
    private readonly RequestDelegate _next;
    private readonly DiagnosticSource _diagnostics;

    public DemoMiddleware(RequestDelegate next, DiagnosticSource diagnosticSource)
    {
        _next = next;
        _diagnostics = diagnosticSource;
    }

    public async Task Invoke(HttpContext context)
    {
        if (_diagnostics.IsEnabled("DiagnosticListenerExample.MiddlewareStarting"))
        {
            _diagnostics.Write("DiagnosticListenerExample.MiddlewareStarting",
                new
                {
                    httpContext = context
                });
        }

        await _next.Invoke(context);
    }
}

This shows the standard way to log using a DiagnosticSource. You inject the DiagnosticSource into the constructor of the middleware for use when the middleware executes.

When you intend to log an event, you first check that there is a listener for the specific event. This approach keeps the logger lightweight, as the code contained within the body of the if statement is only executed if a listener is attached.

In order to create the log, you use the Write method, providing the event name and the data that should be logged. The data to be logged is generally passed as an anonymous object. In this case, the HttpContext is passed to the attached listeners, which they can use to log the data in any way they see fit.

Creating a diagnostic listener

There are a number of ways to create a listener that consumes DiagnosticSource events, but one of the easiest approaches is to use the functionality provided by the Microsoft.Extensions.DiagnosticAdapter package.

To create a listener, you can create a POCO class that contains a method designed to accept parameters of the appropriate type. You then decorate the method with a [DiagnosticName] attribute, providing the event name to listen for:

public class DemoDiagnosticListener  
{
    [DiagnosticName("DiagnosticListenerExample.MiddlewareStarting")]
    public virtual void OnMiddlewareStarting(HttpContext httpContext)
    {
        Console.WriteLine($"Demo Middleware Starting, path: {httpContext.Request.Path}");
    }
}

In this example, the OnMiddlewareStarting() method is configured to handle the "DiagnosticListenerExample.MiddlewareStarting" diagnostic event. The HttpContext provided when the event was logged is passed to the method, because the parameter name, httpContext, matches the property name on the anonymous object that was written.

Hopefully one of the advantages of the DiagnosticSource infrastructure is apparent in that you can log anything provided as data. We have access to the full HttpContext object that was passed, so we can choose to log anything it contains (just the request path in this case).

Wiring up the DiagnosticListener

All that remains is to hook up our listener and middleware pipeline in our Startup.Configure method:

public class Startup  
{
    public void Configure(IApplicationBuilder app, DiagnosticListener diagnosticListener)
    {
        // Listen for middleware events and log them to the console.
        var listener = new DemoDiagnosticListener();
        diagnosticListener.SubscribeWithAdapter(listener);

        app.UseMiddleware<DemoMiddleware>();
        app.Run(async (context) =>
        {
            await context.Response.WriteAsync("Hello World!");
        });
    }
}

A DiagnosticListener is injected into the Configure method from the DI container. This is the actual class that is used to subscribe to diagnostic events. We use the SubscribeWithAdapter extension method from the Microsoft.Extensions.DiagnosticAdapter package to register our DemoDiagnosticListener. This hooks into the [DiagnosticName] attribute to register our events, so that the listener is invoked when the event is written.

Finally, we configure the middleware pipeline with our demo middleware, and add a simple 'Hello world' endpoint to the pipeline.

Running the example

At this point we're all set to run the example. If we hit any page, we just get the 'Hello world' output, no matter the path.

However, if we check the console, we can see the DemoMiddleware has been raising diagnostic events. These have been captured by the DemoDiagnosticListener which logs the path to the console:

Now listening on: http://localhost:5000  
Application started. Press Ctrl+C to shut down.  
Demo Middleware Starting, path: /  
Demo Middleware Starting, path: /a/path  
Demo Middleware Starting, path: /another/path  
Demo Middleware Starting, path: /one/more  

Summary

And that's it, we have successfully written and consumed a DiagnosticSource. As I stated earlier, you are more likely to use the ILogger in your applications than DiagnosticSource, but hopefully you will now be able to use it should you need to. Do let me know in the comments if there's anything I've missed or got wrong!


Damien Bowden: Docker compose with ASP.NET Core, EF Core and the PostgreSQL image

This article shows how an ASP.NET Core application with a PostgreSQL database can be set up together using docker as the deployment container for both the web and database parts of the application. docker-compose is used to connect the 2 containers, and the application is built using Visual Studio 2017.

Code: https://github.com/damienbod/AspNetCorePostgreSQLDocker

2017.02.03: Updated to VS2017 RC3 msbuild3

Setting up the PostgreSQL docker container from the command line

The PostgreSQL docker image can be started or set up from the command line simply by defining the required environment parameters and the port which can be used to connect with PostgreSQL. A named volume called pgdata is also defined in the following command. The container is called postgres-server.

$ docker run -d -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=damienbod \
 --name postgres-server -p 5432:5432 -v pgdata:/var/lib/postgresql/data \
 --restart=always postgres

You can check all your local volumes with the following docker command:

$ docker volume ls

The docker containers can be viewed by running docker ps -a:

$ docker ps -a

Then you can check the docker container for the postgres-server by using the logs command and the id of the container. Only the first few characters of the container id are required for docker to find the container.

$ docker logs <docker_id>

If you would like to view the docker container configuration and its properties, the inspect command can be used:

$ docker inspect <docker_id>

When developing docker applications, you will regularly need to clean up images, containers and volumes. Here are some quick commands that are used regularly.

If you need to find the dangling volumes:

$ docker volume ls -qf dangling=true

A volume can be removed using the volume id:

$ docker volume rm <volume id>

Clean up a container and its volume (dangerous, as you might not want to remove the data):

$ docker rm -fv <docker id>

Configure the database using pgAdmin

Open pgAdmin to configure a new user in PostgreSQL, which will be used for the application.

EF7_PostgreSQL_01

Right click your user and click properties to set the password.

EF7_PostgreSQL_02

Now a PostgreSQL database using docker is ready to be used. This is not the only way to do this; a better way would be to use a Dockerfile and docker-compose.

Creating the PostgreSQL docker image using a Dockerfile

Usually you do not want to create the application by hand. You can do everything described above using a Dockerfile and docker-compose. The PostgreSQL docker image for this project is created using a Dockerfile and docker-compose. The Dockerfile uses the latest official postgres docker image and adds the required database scripts to the docker-entrypoint-initdb.d folder inside the container. When PostgreSQL initializes, it executes these scripts.

FROM postgres:latest
EXPOSE 5432
COPY dbscripts/10-init.sql /docker-entrypoint-initdb.d/10-init.sql
COPY dbscripts/20-damienbod.sql /docker-entrypoint-initdb.d/20-database.sql

The docker-compose defines the image, ports and a named volume for this image. The POSTGRES_PASSWORD is required.

version: '2'

services:
  damienbodpostgres:
     image: damienbodpostgres
     restart: always
     build:
       context: .
       dockerfile: Dockerfile
     ports:
       - 5432:5432
     environment:
         POSTGRES_PASSWORD: damienbod
     volumes:
       - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:

Now switch to the directory where the docker-compose file is and build.

$ docker-compose build

If you want to deploy, you could create a new docker tag on the postgres container. Use your docker hub name if you have one.

$ docker ps -a
$ docker tag damienbodpostgres damienbod/postgres-server

You can check your images and should see the new tag in your list.

$ docker images

Creating the ASP.NET Core application

An ASP.NET Core application was created in VS2017. The EF Core and the PostgreSQL nuget packages were added as required. The Docker support was also added using the Visual Studio tooling.

<Project ToolsVersion="15.0" Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.1</TargetFramework>
    <PreserveCompilationContext>true</PreserveCompilationContext>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Routing" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Server.IISIntegration" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Server.Kestrel" Version="1.1.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore" Version="1.1.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Design" Version="1.1.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Relational" Version="1.1.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="1.0.0-msbuild3-final" />
    <PackageReference Include="Npgsql.EntityFrameworkCore.PostgreSQL" Version="1.1.0" />
    <PackageReference Include="Npgsql.EntityFrameworkCore.PostgreSQL.Design" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.EnvironmentVariables" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.FileExtensions" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Logging" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Logging.Console" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Options.ConfigurationExtensions" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="1.1.0" />
  </ItemGroup>
  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="1.0.0-msbuild3-final" />
    <DotNetCliToolReference Include="Microsoft.Extensions.SecretManager.Tools" Version="1.0.0-msbuild3-final" />
    <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="1.0.0-msbuild3-final" />
  </ItemGroup>
</Project>

The EF Core context is set up to access the 2 tables defined in PostgreSQL.

using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;

namespace AspNetCorePostgreSQLDocker
{
    // >dotnet ef migration add testMigration in AspNet5MultipleProject
    public class DomainModelPostgreSqlContext : DbContext
    {
        public DomainModelPostgreSqlContext(DbContextOptions<DomainModelPostgreSqlContext> options) :base(options)
        {
        }
        
        public DbSet<DataEventRecord> DataEventRecords { get; set; }

        public DbSet<SourceInfo> SourceInfos { get; set; }

        protected override void OnModelCreating(ModelBuilder builder)
        {
            builder.Entity<DataEventRecord>().HasKey(m => m.DataEventRecordId);
            builder.Entity<SourceInfo>().HasKey(m => m.SourceInfoId);

            // shadow properties
            builder.Entity<DataEventRecord>().Property<DateTime>("UpdatedTimestamp");
            builder.Entity<SourceInfo>().Property<DateTime>("UpdatedTimestamp");

            base.OnModelCreating(builder);
        }

        public override int SaveChanges()
        {
            ChangeTracker.DetectChanges();

            updateUpdatedProperty<SourceInfo>();
            updateUpdatedProperty<DataEventRecord>();

            return base.SaveChanges();
        }

        private void updateUpdatedProperty<T>() where T : class
        {
            var modifiedSourceInfo =
                ChangeTracker.Entries<T>()
                    .Where(e => e.State == EntityState.Added || e.State == EntityState.Modified);

            foreach (var entry in modifiedSourceInfo)
            {
                entry.Property("UpdatedTimestamp").CurrentValue = DateTime.UtcNow;
            }
        }
    }
}
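
As an aside, because UpdatedTimestamp is a shadow property, it has no CLR property on the entities. If you need to read it back, it is accessed through the change tracker, or with EF.Property inside a query - a quick sketch, where record stands in for a tracked DataEventRecord instance:

// Reading the shadow property back from the change tracker
var updated = context.Entry(record)
    .Property<DateTime>("UpdatedTimestamp")
    .CurrentValue;

// ...or referencing it inside a LINQ query with EF.Property
var latest = context.DataEventRecords
    .OrderByDescending(r => EF.Property<DateTime>(r, "UpdatedTimestamp"))
    .FirstOrDefault();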

The database used was created using the Dockerfile scripts executed when the docker container initializes. This could also be done with EF Core migrations:

$ dotnet ef migrations add postgres-scripts

$ dotnet ef database update

The connection string used in the application must use the network name defined for the database in the docker-compose file. When debugging locally using IIS without docker, you would have to supply a way of switching the connection string hosts. The host postgresserver is defined in this demo, and so is used in the connection string.

 "DataAccessPostgreSqlProvider": "User ID=damienbod;Password=damienbod;Host=postgresserver;Port=5432;Database=damienbod;Pooling=true;"

Now the application can be built. You need to check that it can be published to the release bin folder, which is used by docker-compose.

Setup the docker-compose

The docker-compose file for the application defines the web tier, the database server and the network settings for docker. The postgresserver service is built using the damienbodpostgres image. It exposes the standard PostgreSQL port as defined before. The aspnetcorepostgresqldocker web application runs on port 5001 and depends on postgresserver. This is the ASP.NET Core application created in Visual Studio 2017.

version: '2'

services:
  postgresserver:
     image: damienbodpostgres
     restart: always
     ports:
       - 5432:5432
     environment:
         POSTGRES_PASSWORD: damienbod
     volumes:
       - pgdata:/var/lib/postgresql/data
     networks:
       - mynetwork

  aspnetcorepostgresqldocker:
     image: aspnetcorepostgresqldocker
     ports:
       - 5001:80
     build:
       context: ./src/AspNetCorePostgreSQLDocker
       dockerfile: Dockerfile
     links:
       - postgresserver
     depends_on:
       - "postgresserver"
     networks:
       - mynetwork

volumes:
  pgdata:

networks:
  mynetwork:
     driver: bridge

Now the application can be started, deployed or tested. The following command will start the application in detached mode.

$ docker-compose up -d

Once the application is started, you can test it using:

http://localhost:5001/index.html

01_postgresqldocker

You can add some data using Postman
02_postgresqldocker

POST http://localhost:5001/api/dataeventrecords
{
  "DataEventRecordId":3,
  "Name":"Funny data",
  "Description":"yes",
  "Timestamp":"2015-12-27T08:31:35Z",
   "SourceInfo":
  { 
    "SourceInfoId":0,
    "Name":"Beauty",
    "Description":"second Source",
    "Timestamp":"2015-12-23T08:31:35+01:00",
    "DataEventRecords":[]
  },
 "SourceInfoId":0 
}

And the data can be viewed using

http://localhost:5001/api/dataeventrecords

03_postgresqldocker

Or you can view the data using pgAdmin

04_postgresqldocker
Links

https://hub.docker.com/_/postgres/

https://www.andreagrandi.it/2015/02/21/how-to-create-a-docker-image-for-postgresql-and-persist-data/

https://docs.docker.com/engine/examples/postgresql_service/

http://stackoverflow.com/questions/25540711/docker-postgres-pgadmin-local-connection

http://www.postgresql.org

http://www.pgadmin.org/

https://github.com/npgsql/npgsql

https://docs.docker.com/engine/tutorials/dockervolumes/



Andrew Lock: Reloading strongly typed options in ASP.NET Core 1.1.0

Reloading strongly typed options in ASP.NET Core 1.1.0

Back in June, when ASP.NET Core was still in RC2, I wrote a post about reloading strongly typed Options when the underlying configuration sources (e.g. a JSON file) change. As I noted in that post, this functionality was removed prior to the release of ASP.NET Core 1.0.0, as the experience was a little confusing. With ASP.NET Core 1.1.0, it's back, and much simpler to use.

In this post, I'll show how you can use the new IOptionsSnapshot<> interface to simplify reloading strongly typed options. I'll provide a very brief summary of using strongly typed configuration in ASP.NET Core, and touch on the approach that used to be required with RC2 to show how much simpler it is now!

tl;dr: To have your options reload when the underlying file / IConfigurationRoot changes, just replace any usages of IOptions<> with IOptionsSnapshot<>.

The ASP.NET Core configuration system

The configuration system in ASP.NET Core is rather different to the approach taken in ASP.NET 4.X. Previously, you would typically store your configuration in the AppSettings section of the XML web.config file, and you would load these settings using a static helper class. Any changes to web.config would cause the app pool to recycle, so changing settings on the fly this way wasn't really feasible.
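
As a reminder, the old approach looked something like this sketch - keys stored in <appSettings> and read through the static ConfigurationManager helper:

// ASP.NET 4.X: read a value from the AppSettings section of web.config
var defaultValue = System.Configuration.ConfigurationManager.AppSettings["DefaultValue"];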

In ASP.NET Core, configuration of app settings is a more dynamic affair. App settings are still essentially key-value pairs, but they can be obtained from a wide array of sources. You can still load settings from XML files, but also JSON files, from the command line, from environment variables, and many others. Writing your own custom configuration provider is also possible if you have another source you wish to use to configure your application.

Configuration is typically performed in the constructor of Startup, loading from multiple sources:

public Startup(IHostingEnvironment env)  
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
        .AddEnvironmentVariables();
    Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }  

This constructor creates a configuration object, loading the configuration found in each of the sources (two JSON files and environment variables in this case). Each source supplies a set of key-value pairs, and each subsequent source overwrites values found in earlier sources. The final IConfigurationRoot is essentially a dictionary of all the final key-value pairs from all of your configuration sources.

It is perfectly possible to use this IConfigurationRoot directly in your application, but the suggested approach is to use strongly typed settings instead. Rather than injecting the whole dictionary of settings whenever you need to access a single value, you take a dependency on a strongly typed POCO C# class. This can be bound to your configuration values and used directly.
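
For instance, a raw value can be read straight off the IConfigurationRoot using its indexer and a colon-separated key (MyValues:DefaultValue matches the JSON shown below):

// Nested sections are addressed with ':' separators;
// the indexer returns null if the key does not exist
var value = Configuration["MyValues:DefaultValue"];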

For example, imagine I have the following values in appsettings.json:

{
  "MyValues": {
    "DefaultValue" : "first"
  }
}

This could be bound to the following class:

public class MyValues  
{
    public string DefaultValue { get; set; }
}

The binding is set up when you are configuring your application for dependency injection in the ConfigureServices method:

public void ConfigureServices(IServiceCollection services)  
{
    services.Configure<MyValues>(Configuration.GetSection("MyValues"));
}

With this approach, you can inject an instance of IOptions<MyValues> into your controllers and access the settings values using the strongly typed object. For example, a simple web API controller that just displays the setting value:

[Route("api/[controller]")]
public class ValuesController : Controller  
{
    private readonly MyValues _myValues;
    public ValuesController(IOptions<MyValues> values)
    {
        _myValues = values.Value;
    }

    // GET api/values
    [HttpGet]
    public string Get()
    {
        return _myValues.DefaultValue;
    }
}

would give the following output when the url /api/Values is hit:

(screenshot: the response shows the configured value, "first")

Reloading strongly typed options in ASP.NET Core RC2

Now that you know how to read settings in ASP.NET Core, we get to the interesting bit - reloading options. You may have noticed that there is a reloadOnChange parameter on the AddJsonFile method when building your configuration object in Startup. Based on this parameter it would seem like any changes to the underlying file should propagate into your project.

Unfortunately, as I explored in a previous post, you can't just expect that functionality to happen magically. While it is possible to achieve, it takes a bit of work.

The problem lies in the fact that although the IConfigurationRoot is automatically updated whenever the underlying appsettings.json file changes, the strongly typed configuration IOptions<> is not. Instead, the IOptions<> is created as a singleton when first requested and is never updated again.

To get around this, RC2 provided the IOptionsMonitor<> interface. In principle, this could be used almost identically to the IOptions<> interface, but it would be updated when the underlying IConfigurationRoot changed. So, for example, you should be able to modify your constructor to take an instance of IOptionsMonitor<MyValues> instead, and to use the CurrentValue property:

public class ValuesController : Controller  
{
    private readonly MyValues _myValues;
    public ValuesController(IOptionsMonitor<MyValues> values)
    {
        _myValues = values.CurrentValue;
    }
}

Unfortunately, as written, this does not have quite the desired effect - there is an additional step required. As well as injecting an instance of IOptionsMonitor you must also configure an event handler for when the underlying configuration changes. This doesn't have to actually do anything, it just has to be set. So for example, you could set the monitor to just create a log whenever the underlying file changes:

public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory, IOptionsMonitor<MyValues> monitor)  
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();

    monitor.OnChange(
        vals =>
        {
            loggerFactory
                .CreateLogger<IOptionsMonitor<MyValues>>()
                .LogDebug($"Config changed: {string.Join(", ", vals)}");
        });

    app.UseMvc();
}

With this in place, changes to the underlying appsettings.json file will be reflected each time you request an instance of IOptionsMonitor<MyValues> from the dependency injection container.

The new way in ASP.NET Core 1.1.0

The approach required for RC2 felt a bit convoluted and was very easy to miss. Microsoft clearly thought the same, as they removed IOptionsMonitor<> from the public package when they went RTM with 1.0.0. Luckily, a new improved approach is back with version 1.1.0 of ASP.NET Core.

No additional setup is required to have your strongly typed options reload when the IConfigurationRoot changes. All you need to do is inject IOptionsSnapshot<> instead of IOptions<>:

public class ValuesController : Controller  
{
    private readonly MyValues _myValues;
    public ValuesController(IOptionsSnapshot<MyValues> values)
    {
        _myValues = values.Value;
    }
}

No additional faffing in the Configure method, and no need to set up additional services to make use of IOptionsSnapshot - it is all set up and works out of the box once you configure your strongly typed class using:

public void ConfigureServices(IServiceCollection services)  
{
    services.Configure<MyValues>(Configuration.GetSection("MyValues"));
}

Trying it out

To make sure it really did work as expected, I created a simple project using the values described in this post, and injected both an IOptions<MyValues> object and an IOptionsSnapshot<MyValues> object into a web API controller:

[Route("api/[controller]")]
public class ValuesController : Controller  
{
    private readonly MyValues _myValues;
    private readonly MyValues _snapshot;
    public ValuesController(IOptions<MyValues> optionsValue, IOptionsSnapshot<MyValues> snapshotValue)
    {
        _myValues = optionsValue.Value;
        _snapshot = snapshotValue.Value;
    }

    // GET api/values
    [HttpGet]
    public string Get()
    {
        return $@"
IOptions<>:         {_myValues.DefaultValue}  
IOptionsSnapshot<>: {_snapshot.DefaultValue},  
Are same:           {_myValues == _snapshot}";  
    }
}

When you hit /api/Values this simply writes out the values stored in the current IOptions<> and IOptionsSnapshot<> objects as plaintext:

(screenshot: both IOptions<> and IOptionsSnapshot<> initially show "first")

With the application still running, I edited the appsettings.json file:

{
  "MyValues": {
    "DefaultValue" : "The second value"
  }
}

I then reloaded the web page (without restarting the app), and voila, the value contained in IOptionsSnapshot<> has updated while the IOptions value remains the same:

(screenshot: IOptionsSnapshot<> now shows "The second value", while IOptions<> still shows "first")

One point of note here - although the initial values are the same for both IOptions<> and IOptionsSnapshot<>, they are not actually the same object. If I had injected two IOptions<> objects, they would have been the same object, but that is not the case when one is an IOptionsSnapshot<>. (This makes sense if you think about it - you couldn't have them both be the same object and have one change while the other stayed the same).

If you don't like to use IOptions

Some people don't like polluting their controllers by using the IOptions<> interface everywhere they want to inject settings. There are a number of ways around this, such as those described by Khalid here and Filip from StrathWeb here. You can easily extend those techniques to use the IOptionsSnapshot<> approach, so that all of your strongly typed options classes are reloaded when an underlying file changes.

A simple solution is to just delegate the request for the MyValues object to IOptionsSnapshot<MyValues>.Value, by registering a delegate in ConfigureServices:

public void ConfigureServices(IServiceCollection services)  
{
    services.Configure<MyValues>(Configuration.GetSection("MyValues"));
    services.AddScoped(cfg => cfg.GetService<IOptionsSnapshot<MyValues>>().Value);
}

With this approach, you can have reloading of the MyValues object in the ValuesController, without needing to explicitly specify the IOptionsSnapshot<> interface - just use MyValues directly:

public class ValuesController : Controller  
{
    private readonly MyValues _myValues;
    public ValuesController(MyValues values)
    {
        _myValues = values;
    }
}

Summary

Reloading strongly typed options in ASP.NET Core when the underlying configuration file changes is easy when you are using ASP.NET Core 1.1.0. Simply replace your usages of IOptions<> with IOptionsSnapshot<>.


Anuraj Parameswaran: VS2017 RC - a product matching the following parameters cannot be found: channelId: VisualStudio.15.Release

This post is about an installation issue with the VS 2017 RC. While installing VS 2017 RC, the setup was failing, throwing an exception like this - a product matching the following parameters cannot be found: channelId: VisualStudio.15.Release product Id : Microsoft.VisualStudio.Product.Professional.


Anuraj Parameswaran: Running a specific test with .NET Core and NUnit

This post is about running a specific test or specific category with .NET Core and NUnit. dotnet-test-nunit is the unit test runner for .NET Core for running unit tests with NUnit 3.
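
For context, the category such a filter selects on is assigned with NUnit's [Category] attribute - for example:

[Test]
[Category("Integration")]
public void Update_Writes_To_The_Database()
{
    // selected only when the runner filters on the Integration category
    Assert.Pass();
}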


Damien Bowden: Creating an ASP.NET Core Docker application and deploying to Azure

This blog is a simple step-through which creates an ASP.NET Core Docker image using Visual Studio 2017, deploys it to Docker Hub and then deploys the image to Azure.

Thanks to Malte Lantin for his fantastic posts on MSDN. See the links at the end of this post.

Code: https://github.com/damienbod/AspNetCoreDockerAzureDemo

2017.02.03: Updated to VS2017 RC3 msbuild3

Step 1: Create a Docker project in Visual Studio 2017 using ASP.NET Core

In the example, an ASP.NET Core Visual Studio 2017 project using msbuild is used as the demo application. Then the Docker support is added to the project using Visual Studio 2017.

Right click the project and select Add/Docker Project Support.
firstazuredocker_01

Update the docker files for ASP.NET Core and the correct docker version as required. More information can be found here:

http://www.jeffreyfritz.com/2017/01/docker-compose-api-too-old-for-windows/

https://damienbod.com/2016/12/24/creating-an-asp-net-core-1-1-vs2017-docker-application/

Now the application will be built in a layer on top of the microsoft/aspnetcore image.

Dockerfile:

FROM microsoft/aspnetcore:1.0.3
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-bin/Release/PublishOutput} .
ENTRYPOINT ["dotnet", "AngularClient.dll"]

docker-compose.yml

version: '2'

services:
  angularclient:
    image: angularclient
    build:
      context: .
      dockerfile: Dockerfile

Once the project is built and ready, it can be deployed to docker hub. Do a release build of the projects.

Step 2: Build a docker image and deploy to docker hub

Before you can deploy the docker image to docker hub, you need to have, or create, a docker hub account.

Then open up the console and create a docker tag for your application. Replace damienbod with your docker hub user name. The docker image angularclient, created from Visual Studio 2017, will be tagged to damienbod/aspnetcorethingsclient.

docker tag angularclient damienbod/aspnetcorethingsclient

Now login to docker hub in the command line:

docker login

Once logged in, the image can be pushed to docker hub. Again replace damienbod with your docker hub name.

docker push damienbod/aspnetcorethingsclient

Once deployed, you can view this on docker hub.

https://hub.docker.com/u/damienbod/

firstazuredocker_02

For more information on docker images and containers:

https://docs.docker.com/engine/getstarted/step_four/

Step 3: Deploy to Azure

Log in to https://portal.azure.com/, click the new button and search for Web App On Linux. We want to deploy a docker container to this, using our docker image.

Select the + New and search for Web App On Linux.
firstazuredocker_03

Then select it. Do not click Create until the docker container has been configured.

firstazuredocker_04

Now configure the docker container. Add the newly created docker hub image to the text field.

firstazuredocker_05

Click Create and the application will be deployed on Azure. Now the application can be used.

firstazuredocker_07

And the application runs as required.

http://thingsclient.azurewebsites.net/home

firstazuredocker_06

Notes:

The Visual Studio 2017 docker tooling is still rough and has problems when using newer versions of docker, or the msbuild 1.1 versions etc., but it is still in RC and will improve before release. The next steps are to use CI to automatically complete all these steps, add security, and use docker compose for multiple container deployments.

Links

https://blogs.msdn.microsoft.com/malte_lantin/2017/01/12/create-you-first-asp-net-core-app-and-host-it-in-a-linux-docker-container-on-microsoft-azure-part-13/

https://blogs.msdn.microsoft.com/malte_lantin/2017/01/13/create-you-first-asp-net-core-app-and-host-it-in-a-linux-docker-container-on-microsoft-azure-part-23/

https://blogs.msdn.microsoft.com/malte_lantin/2017/01/13/create-you-first-asp-net-core-app-and-host-it-in-a-linux-docker-container-on-microsoft-azure-part-33/

https://hub.docker.com/

Orchestrating multi service asp.net core application using docker-compose

Debugging Asp.Net core apps running in Docker Containers using VS 2017

https://docs.docker.com/engine/getstarted/step_four/

https://stefanprodan.com/2016/aspnetcore-cd-pipeline-docker-hub/



Andrew Lock: How to pass parameters to a view component

How to pass parameters to a view component

In my last post I showed how to create a custom view component to simplify my Razor views, and separate the logic of what to display from the UI concern.

View components are a good fit where you have some complex rendering logic, which does not belong in the UI, and is also not a good fit for an action endpoint - approximately equivalent to child actions from the previous version of ASP.NET.

In this post I will show how you can pass parameters to a view component when invoking it from your view, from a controller, or when used as a tag helper.

In the previous post I showed how to create a simple LoginStatusViewComponent that shows you the email of the user and a log out link when a user is logged in, and register or login links when the user is anonymous:

(screenshots of the logged in and anonymous states of the widget)

The view component itself was simple, but it separated out the logic of which template to display from the templates themselves. It was created with a simple InvokeAsync method that did not require any parameters:

public class LoginStatusViewComponent : ViewComponent  
{
    private readonly SignInManager<ApplicationUser> _signInManager;
    private readonly UserManager<ApplicationUser> _userManager;

    public LoginStatusViewComponent(SignInManager<ApplicationUser> signInManager, UserManager<ApplicationUser> userManager)
    {
        _signInManager = signInManager;
        _userManager = userManager;
    }

    public async Task<IViewComponentResult> InvokeAsync()
    {
        if (_signInManager.IsSignedIn(HttpContext.User))
        {
            var user = await _userManager.GetUserAsync(HttpContext.User);
            return View("LoggedIn", user);
        }
        else
        {
            return View("Anonymous");
        }
    }
}

Invoking the LoginStatus view component from the _layout.cshtml involves calling Component.InvokeAsync and awaiting the response:

 @await Component.InvokeAsync("LoginStatus")

Updating a view component to accept parameters

The example presented is pretty simple, in that it is self contained; the InvokeAsync method does not have any parameters passed to it. But what if we wanted to control how the view component behaves when invoked? For example, imagine that you want to control whether to display the Register link for anonymous users. Maybe your site has an external registration system instead, so the "register" link is not valid in some cases.

First, let's create a simple view model to use in our "anonymous" view:

public class AnonymousViewModel  
{
    public bool IsRegisterLinkVisible { get; set; }
}

Next, we update the InvokeAsync method of our view component to take a boolean parameter. If the user is not logged in, we will pass this parameter down into the view model:

public async Task<IViewComponentResult> InvokeAsync(bool shouldShowRegisterLink)  
{
    if (_signInManager.IsSignedIn(HttpContext.User))
    {
        var user = await _userManager.GetUserAsync(HttpContext.User);
        return View("LoggedIn", user);
    }
    else
    {
        var viewModel = new AnonymousViewModel
        {
            IsRegisterLinkVisible = shouldShowRegisterLink
        };
        return View(viewModel);
    }
}

Finally, we update the anonymous default.cshtml template to honour this boolean:

@model LoginStatusViewComponent.AnonymousViewModel
<ul class="nav navbar-nav navbar-right">  
    @if(Model.IsRegisterLinkVisible)
    {
        <li><a asp-area="" asp-controller="Account" asp-action="Register">Register</a></li>
    }
    <li><a asp-area="" asp-controller="Account" asp-action="Login">Log in</a></li>
</ul>  

Passing parameters to view components using InvokeAsync

Our component is all set up to conditionally show or hide the register link, all that remains is to invoke it.

Passing parameters to a view component is achieved using anonymous types. In our layout, we specify the parameters in an anonymous object passed as an optional argument to InvokeAsync:

<div class="navbar-collapse collapse">  
    <ul class="nav navbar-nav">
        <li><a asp-area="" asp-controller="Home" asp-action="Index">Home</a></li>
        <li><a asp-area="" asp-controller="Home" asp-action="About">About</a></li>
        <li><a asp-area="" asp-controller="Home" asp-action="Contact">Contact</a></li>
    </ul>
    @await Component.InvokeAsync("LoginStatus", new { shouldShowRegisterLink = false })
</div>  

With this in place, the register link can be shown:

(screenshot: the register link shown)

or hidden:

(screenshot: the register link hidden)

If you omit the anonymous type, then the parameters will all have their default values (false for our bool, but null for objects).

Passing parameters to view components when invoked from a controller

Passing parameters to a view component when invoking it from a controller is very similar - just pass an anonymous object with the appropriate values when invoking the view component:

public IActionResult IndexVC()  
{
    return ViewComponent("LoginStatus", new { shouldShowRegisterLink = false });
}

Passing parameters to view components when invoked as a tag helper in ASP.NET Core 1.1.0

In the previous post I showed how to invoke view components as tag helpers. The parameterless version of our invocation looks like this:

<div class="navbar-collapse collapse">  
    <ul class="nav navbar-nav">
        <li><a asp-area="" asp-controller="Home" asp-action="Index">Home</a></li>
        <li><a asp-area="" asp-controller="Home" asp-action="About">About</a></li>
        <li><a asp-area="" asp-controller="Home" asp-action="Contact">Contact</a></li>
    </ul>
    <vc:login-status></vc:login-status>
</div>  

Passing parameters to a view component tag helper is the same as for normal tag helpers. You convert the parameters to lower-kebab case and add them as attributes to the tag, e.g.:

<vc:login-status should-show-register-link="false"></vc:login-status>  

This gives a nice syntax for invoking our view components without having to drop into C# land and use @await Component.InvokeAsync(), and will almost certainly become the preferred way to use them in the future.

Summary

In this post I showed how you can pass parameters to a view component. When invoking from a view in ASP.NET Core 1.0.0 or from a controller, you can use an anonymous object to pass parameters, where the properties are the names of the parameters.

In ASP.NET Core 1.1.0 you can use the alternative tag helper invocation method to pass parameters as attributes. Just remember to use lower-kebab-case for your component name and parameters! You can find sample code for this approach on GitHub.


Dominick Baier: Platforms where you can run IdentityServer4

There is some confusion about where, and on which platform/OS you can run IdentityServer4 – or more generally speaking: ASP.NET Core.

IdentityServer4 is ASP.NET Core middleware – and ASP.NET Core (despite its name) runs on the full .NET Framework 4.5.x and upwards or .NET Core.

If you are using the full .NET Framework you are tied to Windows – but have the advantage of using a platform that you (and your devs, customers, support staff etc) already know well. It is just a .NET based web app at this point.

If you are using .NET Core, you get the benefits of the new stack including side-by-side versioning and cross-platform. But there is a learning curve involved getting to know .NET Core and its tooling.



Anuraj Parameswaran: Integrate HangFire With ASP.NET Core

This post is about integrating HangFire with ASP.NET Core. HangFire is an incredibly easy way to perform fire-and-forget, delayed and recurring jobs inside ASP.NET applications. CPU and I/O intensive, long-running and short-running jobs are supported. No Windows Service / Task Scheduler required. Backed by Redis, SQL Server, SQL Azure and MSMQ. Hangfire provides a unified programming model to handle background tasks in a reliable way and run them on shared hosting, dedicated hosting or in the cloud. The product I am working on has a feature of adding watermarks to the images uploaded by users. Right now we are using a console app, which monitors a directory at specified intervals and applies a watermark to the newly uploaded images. But using HangFire we can schedule / execute the watermark operation as a background task, instead of polling a directory for new images.
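
As a sketch of what that migration could look like, Hangfire's fire-and-forget API is a one-liner - here ImageService.ApplyWatermark is a hypothetical stand-in for the real watermarking routine:

// The job is persisted to storage and picked up by a background worker,
// so no directory polling is required
BackgroundJob.Enqueue(() => ImageService.ApplyWatermark("uploads/photo1.png"));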


Damien Bowden: Angular Lazy Loading with Webpack 2

This article shows how Angular lazy loading can be supported using Webpack 2 for both JIT and AOT builds. The Webpack loader angular-router-loader from Brandon Roberts is used to implement this.

A big thanks to Roberto Simonetti for his help in this.

Code: Visual Studio 2015 project | Visual Studio 2017 project

2017.02.06: Updated to webpack 2.2.1, Angular 2.4.6, renaming to angular
2017.01.18: Updated to webpack 2.2.0

First create an Angular module

In this example, the about module will be lazy loaded when the user clicks on the about tab. The about.module.ts is the entry point for this feature. The module has its own component and routing. The app will now be set up to lazy load the AboutModule.

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';

import { AboutRoutes } from './about.routes';
import { AboutComponent } from './components/about.component';

@NgModule({
    imports: [
        CommonModule,
        AboutRoutes
    ],

    declarations: [
        AboutComponent
    ],

})

export class AboutModule { }

Add the angular-router-loader Webpack loader to the package.json file

To add lazy loading to the app, the angular-router-loader npm package needs to be added to the devDependencies in the package.json file.

"devDependencies": {
    "@types/node": "7.0.0",
    "angular2-template-loader": "^0.6.0",
    "angular-router-loader": "^0.5.0",

Configure the Angular 2 routing

The lazy loading routing can be added to the app.routes.ts file. The loadChildren property defines the path and the class name of the module which can be lazy loaded. It is also possible to pre-load lazy-loaded modules if required.

import { Routes, RouterModule } from '@angular/router';

export const routes: Routes = [
    { path: '', redirectTo: 'home', pathMatch: 'full' },
    {
        path: 'about', loadChildren: './about/about.module#AboutModule',
    }
];

export const AppRoutes = RouterModule.forRoot(routes);

Update the tsconfig-aot.json and tsconfig.json files

Now the tsconfig.json for development JIT builds and the tsconfig-aot.json for AOT production builds need to be configured to load the AboutModule.

AOT production build

The files property contains all the module entry points as well as the app entry file. The angularCompilerOptions property defines the folder where the AOT will be built into. This must match the configuration in the Webpack production config file.

{
  "compilerOptions": {
    "target": "es5",
    "module": "es2015",
    "moduleResolution": "node",
    "sourceMap": false,
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "removeComments": true,
    "noImplicitAny": true,
    "suppressImplicitAnyIndexErrors": true,
    "skipLibCheck": true,
    "lib": [
      "es2015",
      "dom"
    ]
  },
  "files": [
    "angularApp/app/app.module.ts",
    "angularApp/app/about/about.module.ts",
    "angularApp/main-aot.ts"
  ],
  "angularCompilerOptions": {
    "genDir": "aot",
    "skipMetadataEmit": true
  },
  "compileOnSave": false,
  "buildOnSave": false
}

JIT development build

The modules and entry points are also defined for the JIT build.

{
  "compilerOptions": {
    "target": "es5",
    "module": "es2015",
    "moduleResolution": "node",
    "sourceMap": true,
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "removeComments": true,
    "noImplicitAny": true,
    "skipLibCheck": true,
    "lib": [
      "es2015",
      "dom"
    ],
    "types": [
      "node"
    ]
  },
  "files": [
    "angularApp/app/app.module.ts",
    "angularApp/app/about/about.module.ts",
    "angularApp/main.ts"
  ],
  "awesomeTypescriptLoaderOptions": {
    "useWebpackText": true
  },
  "compileOnSave": false,
  "buildOnSave": false
}

Configure Webpack to chunk and use the router lazy loading

Now the webpack configuration needs to be updated for the lazy loading.

AOT production build

The webpack.prod.js file requires that the chunkFilename property is set in the output, so that webpack chunks the lazy load modules.

output: {
        path: './wwwroot/',
        filename: 'dist/[name].[hash].bundle.js',
        chunkFilename: 'dist/[id].[hash].chunk.js',
        publicPath: '/'
},

The angular-router-loader is added to the loaders. The genDir folder defined here must match the definition in tsconfig-aot.json.

 module: {
  rules: [
    {
        test: /\.ts$/,
        loaders: [
            'awesome-typescript-loader',
            'angular-router-loader?aot=true&genDir=aot/'
        ]
    },

JIT development build

The webpack.dev.js file requires that the chunkFilename property is set in the output, so that webpack chunks the lazy load modules.

output: {
        path: './wwwroot/',
        filename: 'dist/[name].bundle.js',
        chunkFilename: 'dist/[id].chunk.js',
        publicPath: '/'
},

The angular-router-loader is added to the loaders.

 module: {
  rules: [
    {
        test: /\.ts$/,
        loaders: [
            'awesome-typescript-loader',
            'angular-router-loader',
            'angular2-template-loader',        
            'source-map-loader',
            'tslint-loader'
        ]
    },

Build and run

Now the application can be built using the npm build scripts and the dotnet command tool.

Open a command line in the root of the src files. Install the npm packages:

npm install

Now build the production build. The build-production script does an ngc build, and then a webpack production build.

npm run build-production

You can see that Webpack creates an extra chunked file for the About Module.

lazyloadingwebpack_01

Then start the application. The server is implemented using ASP.NET Core 1.1.

dotnet run

When the application is started, the AboutModule is not loaded.

lazyloadingwebpack_02

When the about tab is clicked, the chunked AboutModule is loaded.

lazyloadingwebpack_03

Absolutely fantastic. You could also pre-load the modules if required. See this blog.

Links:

https://github.com/brandonroberts/angular-router-loader

https://www.npmjs.com/package/angular-router-loader

https://vsavkin.com/angular-router-preloading-modules-ba3c75e424cb

https://webpack.github.io/docs/



Henrik F. Nielsen: ASP.NET WebHooks V1 RTM (Link)

ASP.NET WebHooks V1 RTM was announced a little while back. WebHooks provide a simple pub/sub model for wiring together Web APIs and services with your code. A WebHook can be used to get notified when a file has changed in Dropbox, a code change has been committed to GitHub, a payment has been initiated in PayPal, a card has been created in Trello, and much more. When subscribing, you provide a callback URI where you want to be notified. When an event occurs, an HTTP POST request is sent to your callback URI with information about what happened so that your Web app can act accordingly. WebHooks happen without polling and with no need to hold open a network connection while waiting for notifications.

Microsoft ASP.NET WebHooks makes it easier to both send and receive WebHooks as part of your ASP.NET application.
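
On the receiving side, for example, a handler derives from the WebHookHandler base class and is invoked when a matching POST arrives - a minimal sketch (the class name and body are illustrative):

public class MyWebHookHandler : WebHookHandler
{
    public override Task ExecuteAsync(string receiver, WebHookHandlerContext context)
    {
        // context.Data carries the payload posted by the sender
        return Task.FromResult(true);
    }
}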

In addition to hosting your own WebHook server, ASP.NET WebHooks are part of Azure Functions where you can process WebHooks without hosting or managing your own server! You can even go further and host an Azure Bot Service using Microsoft Bot Framework for writing cool bots talking to your customers!

The WebHook code targets ASP.NET Web API 2 and ASP.NET MVC 5, and is available as open source on GitHub, and as NuGet packages. For feedback, fixes, and suggestions, you can use GitHub, StackOverflow using the tag asp.net-webhooks, or send me a tweet.

For the full announcement, please see the blog Announcing Microsoft ASP.NET WebHooks V1 RTM.

Have fun!

Henrik


Anuraj Parameswaran: Using MEF in .NET Core

This post is about using MEF (Managed Extensibility Framework) in .NET Core. The Managed Extensibility Framework or MEF is a library for creating lightweight, extensible applications. It allows application developers to discover and use extensions with no configuration required. It also lets extension developers easily encapsulate code and avoid fragile hard dependencies. MEF not only allows extensions to be reused within applications, but across applications as well.
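
As a flavour of what this looks like on .NET Core, here is a minimal sketch using the System.Composition packages (the plugin types and names are illustrative):

using System.Composition;
using System.Composition.Hosting;
using System.Reflection;

public interface IPlugin { string Name { get; } }

[Export(typeof(IPlugin))]
public class HelloPlugin : IPlugin
{
    public string Name => "Hello";
}

// Parts are discovered from the assembly at runtime - no configuration files needed
var configuration = new ContainerConfiguration()
    .WithAssembly(typeof(HelloPlugin).GetTypeInfo().Assembly);

using (var container = configuration.CreateContainer())
{
    foreach (var plugin in container.GetExports<IPlugin>())
    {
        System.Console.WriteLine(plugin.Name);
    }
}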


Andrew Lock: An introduction to ViewComponents - a login status view component

An introduction to ViewComponents - a login status view component

View components are one of the potentially less well known features of ASP.NET Core Razor views. Unlike tag helpers, which have a pretty much direct equivalent in the Html Helpers of the previous ASP.NET, view components are a bit different.

In spirit, they fit somewhere between a partial view and a full controller - approximately like a ChildAction. Whereas actions and controllers have full model binding semantics, the filter pipeline etc, view components are invoked directly with explicit data. They are more powerful than a partial view, however, as they can contain business logic, and they separate the UI generation from the underlying behaviour.

View components seem to fit best in situations where you would want to use a partial, but that the rendering logic is complicated and may need to be tested.

In this post, I'll use the example of a Login widget that displays your email address when you are logged in:

(screenshot: the widget when logged in)

and a register / login link when you are logged out:

(screenshot: the widget when logged out)

This is a trivial example - the behaviour above is achieved in the default templates without the use of view components. This post is just meant to introduce you to the concept of view components, so you can see when to use them in your own applications.

Creating a view component

View components can be defined in a multitude of ways. You can give your component a name ending in ViewComponent, you can decorate it with the [ViewComponent] attribute, or you can derive from the ViewComponent base class. The latter of these is probably the most obvious, and provides a number of helper properties you can use, but the choice is yours.

To implement a view component you must expose a public method called InvokeAsync which is called when the component is invoked:

public Task<IViewComponentResult> InvokeAsync();  

As is typical for ASP.NET Core, this method is found at runtime using reflection, so if you forget to add it, you won't get compile time errors, but you will get an exception at runtime:

(screenshot of the runtime exception)

Other than this restriction, you are pretty much free to design your view components as you like. They support dependency injection, so you are able to inject dependencies into the constructor and use them in your InvokeAsync method. For example, you could inject a DbContext and query the database for the data to display in your component.
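
As a sketch of that idea - AppDbContext and Product are hypothetical types, not part of this post's sample:

public class TopProductsViewComponent : ViewComponent
{
    // A hypothetical EF Core context, injected by the DI container
    private readonly AppDbContext _context;

    public TopProductsViewComponent(AppDbContext context)
    {
        _context = context;
    }

    public async Task<IViewComponentResult> InvokeAsync(int count)
    {
        // Query the database and hand the results to the default view
        var products = await _context.Products
            .OrderByDescending(p => p.Sales)
            .Take(count)
            .ToListAsync();
        return View(products);
    }
}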

The LoginStatusViewComponent

Now that you have a basic understanding of view components, we can take a look at the LoginStatusViewComponent. I created this component in a project created using the default MVC web template in Visual Studio with authentication.

This simple view component only has a small bit of logic, but it demonstrates the features of view components nicely.

public class LoginStatusViewComponent : ViewComponent  
{
    private readonly SignInManager<ApplicationUser> _signInManager;
    private readonly UserManager<ApplicationUser> _userManager;

    public LoginStatusViewComponent(SignInManager<ApplicationUser> signInManager, UserManager<ApplicationUser> userManager)
    {
        _signInManager = signInManager;
        _userManager = userManager;
    }

    public async Task<IViewComponentResult> InvokeAsync()
    {
        if (_signInManager.IsSignedIn(HttpContext.User))
        {
            var user = await _userManager.GetUserAsync(HttpContext.User);
            return View("LoggedIn", user);
        }
        else
        {
            return View();
        }
    }
}

You can see I have chosen to derive from the base ViewComponent class, as that provides me access to a number of helper methods.

We are injecting two services into the constructor of our component. These will be fulfilled automatically by the dependency injection container when our component is invoked.

Our InvokeAsync method is pretty self explanatory. We are checking if the current user is signed in using the SignInManager<>, and if they are we fetch the associated ApplicationUser from the UserManager<>. Finally we call the helper View method, passing in a template to render and the model user. If the user is not signed in, we call the helper View without a template argument.

The calls at the end of the InvokeAsync method are reminiscent of action methods. They are doing a very similar thing, in that they are creating a result which will execute a view template, passing in the provided model.

In our example, we are rendering a different template depending on whether the user is logged in or not. That means we could test this ViewComponent in isolation, testing that the correct template is displayed depending on our business requirements, without having to inspect the HTML output, which would be our only choice if this logic was embedded in a partial view instead.

Rendering View templates

When you use return View() in your view component, you are returning a ViewViewComponentResult (yes, that name is correct!) which is analogous to the ViewResult you typically return from MVC action methods.

This object contains an optional template name and view model, which is used to invoke a Razor view template. The location of the view to execute is given by convention, very similar to MVC actions. In the case of our LoginStatusViewComponent, the Razor engine will search for views in two folders:

  1. Views\Components\LoginStatus; and
  2. Views\Components\Shared

If you don't specify the name of the template to find, then the engine will assume the file is called default.cshtml. In the example I provided, when the user is signed in we explicitly provide a template name, so the engine will look for the template at

  1. Views\Components\LoginStatus\LoggedIn.cshtml; and
  2. Views\Components\Shared\LoggedIn.cshtml

The view templates themselves are just normal Razor, so they can contain all the usual features, tag helpers, strongly typed models etc. The LoggedIn.cshtml file for our LoginStatusViewComponent is shown below:

@model ApplicationUser
<form asp-area="" asp-controller="Account" asp-action="LogOff" method="post" id="logoutForm" class="navbar-right">  
    <ul class="nav navbar-nav navbar-right">
        <li>
            <a asp-area="" asp-controller="Manage" asp-action="Index" title="Manage">Hello @Model.Email!</a>
        </li>
        <li>
            <button type="submit" class="btn btn-link navbar-btn navbar-link">Log off</button>
        </li>
    </ul>
</form>  

There is nothing special here - we are using the form and action link tag helpers to create links and we are writing values from our strongly typed model to the response. All bread and butter for razor templates!

When the user is not logged in, I didn't specify a template name, so the default name of default.cshtml is used:

(screenshot of the default.cshtml template)

This view is even simpler as we didn't pass a model to the view, it just contains a couple of links:

<ul class="nav navbar-nav navbar-right">  
    <li><a asp-area="" asp-controller="Account" asp-action="Register">Register</a></li>
    <li><a asp-area="" asp-controller="Account" asp-action="Login">Log in</a></li>
</ul>  

Invoking a view component

With your component configured, all that remains is to invoke it from your view. View components can be invoked from a different view by calling, in this case, @await Component.InvokeAsync("LoginStatus"), where "LoginStatus" is the name of the view component. We can call it in the header of our _Layout.cshtml:

<div class="navbar-collapse collapse">  
    <ul class="nav navbar-nav">
        <li><a asp-area="" asp-controller="Home" asp-action="Index">Home</a></li>
        <li><a asp-area="" asp-controller="Home" asp-action="About">About</a></li>
        <li><a asp-area="" asp-controller="Home" asp-action="Contact">Contact</a></li>
    </ul>
    @await Component.InvokeAsync("LoginStatus")
</div>  

Invoking directly from a controller

It is also possible to return a view component directly from a controller; this is the closest you can get to directly exposing a view component at an endpoint:

public IActionResult IndexVC()  
{
    return ViewComponent("LoginStatus");
}

Calling View Components like TagHelpers in ASP.NET Core 1.1.0

View components work well, but one of the things that seemed like a bit of a step back was the need to explicitly use the @ symbol to render them. One of the nice things brought to Razor with ASP.NET Core was tag-helpers. These do pretty much the same job as the HTML helpers from the previous ASP.NET MVC Razor views, but in a more editor-friendly way.

For example, consider the following block, which would render a label, text box and validation summary for a property on your model called Email

<div class="form-group">  
    @Html.LabelFor(x=>x.Email, new { @class= "col-md-2 control-label"})
    <div class="col-md-10">
        @Html.TextBoxFor(x=>x.Email, new { @class= "form-control"})
        @Html.ValidationMessageFor(x=>x.Email, null, new { @class= "text-danger" })
    </div>
</div>  

Compare that to the new tag helpers, which allow you to declare your model bindings as asp- attributes:

<div class="form-group">  
    <label asp-for="Email" class="col-md-2 control-label"></label>
    <div class="col-md-10">
        <input asp-for="Email" class="form-control" />
        <span asp-validation-for="Email" class="text-danger"></span>
    </div>
</div>  

Syntax highlighting is easier for basic editors and you don't need to use ugly @ symbols to escape the class properties - everything is just that little bit nicer. In ASP.NET Core 1.1.0, you can get similar benefits for invoking your view components, by using a vc: prefix.

To repeat my LoginStatus example in ASP.NET Core 1.1.0, you first need to register your view components as tag helpers in _ViewImports.cshtml (where WebApplication1 is the namespace of your view components):

@addTagHelper *, WebApplication1

and you can then invoke your view component using the tag helper syntax:

<div class="navbar-collapse collapse">  
    <ul class="nav navbar-nav">
        <li><a asp-area="" asp-controller="Home" asp-action="Index">Home</a></li>
        <li><a asp-area="" asp-controller="Home" asp-action="About">About</a></li>
        <li><a asp-area="" asp-controller="Home" asp-action="Contact">Contact</a></li>
    </ul>
    <vc:login-status></vc:login-status>
</div>  

Note the name of the tag helper here, vc:login-status. The vc helper, indicates that you are invoking a view component, and the name of the helper is our view component's name (LoginStatus) converted to lower-kebab case (thanks to the ASP.NET monsters for figuring out the correct name)!

With these two pieces in place, your tag helper is functionally equivalent to the previous invocation, but is a bit nicer to read. :)

Summary

This post provided an introduction to building your first view component, including how to invoke it. You can find sample code on GitHub. In the next post, I'll show how you can pass parameters to your component when you invoke it.


Anuraj Parameswaran: Using NLog in ASP.NET Core

This post is about using NLog in ASP.NET Core. NLog is a free logging platform for .NET, Xamarin, Silverlight and Windows Phone with rich log routing and management capabilities. NLog makes it easy to produce and manage high-quality logs for your application regardless of its size or complexity.
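As a taster, wiring NLog into the ASP.NET Core logging pipeline only takes a couple of lines in the Startup class. A minimal sketch, assuming the NLog.Extensions.Logging package is installed (it supplies the AddNLog and ConfigureNLog extension methods) and an nlog.config file sits in the project root:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    // Plug NLog into the ASP.NET Core logging pipeline.
    loggerFactory.AddNLog();

    // Load the NLog configuration file from the project root.
    env.ConfigureNLog("nlog.config");

    app.UseMvc();
}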


Dominick Baier: Bootstrapping OpenID Connect: Discovery

OpenID Connect clients and APIs need certain configuration values to initiate the various protocol requests and to validate identity and access tokens. You can either hard-code these values (e.g. the URLs to the authorize and token endpoints, key material etc.) – or get those values dynamically using discovery.

Using discovery has advantages in case one of the needed values changes over time. This will definitely be the case for the key material you use to sign your tokens. In that scenario you want your token consumers to be able to dynamically update their configuration without having to take them down or re-deploy.

The idea is simple: every OpenID Connect provider should offer a JSON document under the /.well-known/openid-configuration URL below its base address (often also called the authority). This document has information about the issuer name, endpoint URLs, key material and capabilities of the provider, e.g. which scopes or response types it supports.

Try https://demo.identityserver.io/.well-known/openid-configuration as an example.
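An abbreviated example of the kind of data such a document contains is shown below – the field names come from the discovery specification, and the values are shortened for illustration:

{
  "issuer": "https://demo.identityserver.io",
  "jwks_uri": "https://demo.identityserver.io/.well-known/openid-configuration/jwks",
  "authorization_endpoint": "https://demo.identityserver.io/connect/authorize",
  "token_endpoint": "https://demo.identityserver.io/connect/token",
  "userinfo_endpoint": "https://demo.identityserver.io/connect/userinfo",
  "scopes_supported": [ "openid", "profile", "email" ],
  "response_types_supported": [ "code", "id_token", "id_token token" ]
}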

Our IdentityModel library has a little helper class that allows loading and parsing a discovery document, e.g.:

var disco = await DiscoveryClient.GetAsync("https://demo.identityserver.io");
Console.WriteLine(disco.Json);

It also provides strongly typed accessors for most elements, e.g.:

Console.WriteLine(disco.TokenEndpoint);

..or you can access the elements by name:

Console.WriteLine(disco.Json.TryGetString("introspection_endpoint"));

It also gives you access to the key material and the various properties of the JSON encoded key set – e.g. iterating over the key ids:

foreach (var key in disco.KeySet.Keys)
{
    Console.WriteLine(key.Kid);
}

Discovery and security
As you can imagine, the discovery document is a nice target for an attacker. Being able to manipulate the endpoint URLs or the key material would ultimately result in a compromise of a client or an API.

As opposed to e.g. WS-Federation/WS-Trust metadata, the discovery document is not signed. Instead, OpenID Connect relies on transport security for the authenticity and integrity of the configuration data.

Recently we’ve been involved in a penetration test against client libraries, and one technique the pen-testers used was compromising discovery. Based on their feedback, the following extra checks should be done when consuming a discovery document:

  • HTTPS must be used for the discovery endpoint and all protocol endpoints
  • The issuer name should match the authority specified when downloading the document (that’s actually a MUST in the discovery spec)
  • The protocol endpoints should be “beneath” the authority – and not on a different server or URL (this could be especially interesting for multi-tenant OPs)
  • A key set must be specified

Based on that feedback, we added a configurable validation policy to DiscoveryClient that defaults to the above recommendations. If for whatever reason (e.g. dev environments) you need to relax a setting, you can use the following code:

var client = new DiscoveryClient("http://dev.identityserver.internal");
client.Policy.RequireHttps = false;
 
var disco = await client.GetAsync();

Btw – you can always connect over HTTP to localhost and 127.0.0.1 (but this is also configurable).

Source code here, nuget here.


Filed under: OAuth, OpenID Connect, WebAPI


Dominick Baier: Trying IdentityServer4

We have a number of options for how you can experiment with or get started with IdentityServer4.

Starting point
It all starts at https://identityserver.io – from here you can find all the links below, as well as our next workshop dates, consulting, production support etc.

Source code
You can find all the source code in our IdentityServer organization on github. Especially IdentityServer4 itself, the samples, and the access token validation middleware.

Nuget
Here’s a list of all our nugets – here’s IdentityServer4, here’s the validation middleware.

Documentation and tutorials
Documentation can be found here. Especially useful to get started are our tutorials.

Demo Site
We have a demo site at https://demo.identityserver.io that runs the latest version of IdentityServer4. We have also pre-configured a number of client types, e.g. hybrid and authorization code (with and without PKCE) as well as implicit and client credentials flow. You can use this site to try IdentityServer with your favourite OpenID Connect client library. There is also a test API that you can call with our access tokens.
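For example, using our IdentityModel library, requesting an access token from the demo site with the client credentials flow is just a few lines. A minimal sketch – the client id, secret and scope below are placeholders; the actual pre-configured values are listed on the demo site:

var tokenClient = new TokenClient(
    "https://demo.identityserver.io/connect/token",
    "client",   // placeholder client id
    "secret");  // placeholder client secret

var tokenResponse = await tokenClient.RequestClientCredentialsAsync("api"); // placeholder scope

if (!tokenResponse.IsError)
{
    Console.WriteLine(tokenResponse.AccessToken);
}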

Compatibility check
Here’s a repo that contains all permutations of IdentityServer3 and 4, Katana and ASP.NET Core Web APIs and JWTs and reference tokens. We use this test harness to ensure cross version compatibility. Feel free to try it yourself.

CI builds
Our CI feed can be found here.

HTH


Filed under: .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI


Andrew Lock: Understanding and updating package versions for .NET Core 1.0.3

Understanding and updating package versions for .NET Core 1.0.3

Microsoft introduced the second update to their Long Term Support (LTS) version of .NET Core on 13th December, 3 months after releasing the first update to the platform. This included updates to .NET Core, ASP.NET Core and Entity Framework Core, and takes the overall version number to 1.0.3, though this number can be confusing, as you'll see shortly! You can read about the update in their blog post - I'm going to focus primarily on the ASP.NET Core changes here.

Understanding the version numbers

The first thing to note about this update is that it is only for the LTS track. .NET Core and ASP.NET Core follow releases in two different tracks: the safer, more stable LTS version; and the alternative Fast Track Support (FTS), which sees new features at a higher rate.

Depending on your requirements for stability and the need for new features, you can stick to either the FTS or LTS track - both are supported. The important thing is that you make sure your whole application sticks to one or the other. You can't use some packages from the LTS track and some from the FTS track.

As of writing, the LTS version is at 1.0.3, which follows version numbers of the format 1.0.x. This, as expected, implies it will only see patch/bug fixes. In contrast, the FTS version is currently at 1.1.0, which brings a number of additional features over the LTS branch. You can read more about the versioning story on the .NET blog.

Is this the second or third LTS update?

You may have noticed I said that this was the second update to the LTS track, even though we're up to update 1.0.3. That's because the .NET Core 1.0.2 update didn't actually change any code; it simply fixed an issue in the installer on macOS. So although the version number was bumped, there weren't actually any noticeable changes.

Package numbers don't match the ASP.NET Core version

This is where things start to get confusing.

ASP.NET Core is composed of a whole host of loosely coupled packages which can be added to your application to provide various features, as and when you need them. If you don't need a feature, you don't add it to your project. This contrasts with the previous model of ASP.NET, in which you always had access to all of the features. It was more of a set-meal approach than the à la carte buffet approach of ASP.NET Core.

Each of these packages that make up ASP.NET Core - packages such as Microsoft.AspNetCore.Mvc, Microsoft.Extensions.Configuration.Abstractions, and Microsoft.AspNetCore.Diagnostics - follow semantic versioning. They version independently of one another, and of the framework as a whole.

ASP.NET Core has an overall version number, which for the LTS track is 1.0.3. However, just because the overall ASP.NET Core version has incremented, that doesn't mean that the underlying packages of which it is composed have necessarily changed. If a package has not changed, there is no sense in updating its version number, even though a new version of ASP.NET Core is being released.

Updating your project

Although Microsoft have taken a perfectly reasonable approach with regard to this in theory, the reality of trying to keep up with these version changes is somewhat bewildering.

In order to stay supported, you have to ensure all your packages stay on the latest version of the LTS (or FTS) track of ASP.NET Core. But there isn't anywhere that actually lists out all the supported packages for a given overall version of ASP.NET Core, or provides an easy way to update all the packages in your project to the latest on the LTS track. And it's not easy to know what they should be - some packages may be on version 1.0.2, others 1.0.1 and some may still be 1.0.0. It's very hard to tell whether your project.json (or csproj) is all up-to-date.

In a recent post, Steve Gordon ran into exactly this problem when updating the allReady project to 1.0.3. He found he had to go through the NuGet Package Manager GUI in Visual Studio and update each of his dependencies independently. He couldn't use the 'Update All' button as this would update to the latest in the FTS track. Hopefully his suggestion of a toggle for selecting which track you wish to stick to will be implemented in VS2017!

As part of his post, he lists all the dependencies he had to update in his project.json in making the move. You also have to ensure you install the latest SDK from https://dot.net and update your global.json accordingly.
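For reference, a global.json pinning the SDK looks something like the following – the version number shown here is only illustrative, so use the one listed at https://dot.net for the release you are targeting:

{
  "sdk": {
    "version": "1.0.0-preview2-003156"
  }
}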

Steve lists a whole host of packages to update, but I wanted to try and provide a more comprehensive list, so I decided to take a look through each of the ASP.NET Core repos, and fetch the latest version of the packages for the LTS update.

Latest versions

The latest version of ASP.NET Core packages for version 1.0.3 are listed below. This list attempts to be exhaustive for the core packages in the Microsoft ASP.NET Core repos in GitHub. It's quite possible I've missed some out though - if so, let me know in the comments!

Note that not all of these packages will have changed in the 1.0.3 release (though it seems like most have); these are just the latest versions that it uses.

  "Microsoft.ApplicationInsights.AspNetCore" : "1.0.2",
  "Microsoft.AspNet.Identity.AspNetCoreCompat" : "0.1.1",
  "Microsoft.AspNet.WebApi.Client" : "5.2.2",
  "Microsoft.AspNetCore.Antiforgery" : "1.0.2",
  "Microsoft.AspNetCore.Authentication" : "1.0.1",
  "Microsoft.AspNetCore.Authentication.Cookies" : "1.0.1",
  "Microsoft.AspNetCore.Authentication.Facebook" : "1.0.1",
  "Microsoft.AspNetCore.Authentication.Google" : "1.0.1",
  "Microsoft.AspNetCore.Authentication.JwtBearer" : "1.0.1",
  "Microsoft.AspNetCore.Authentication.MicrosoftAccount" : "1.0.1",
  "Microsoft.AspNetCore.Authentication.OAuth" : "1.0.1",
  "Microsoft.AspNetCore.Authentication.OpenIdConnect" : "1.0.1",
  "Microsoft.AspNetCore.Authentication.Twitter" : "1.0.1",
  "Microsoft.AspNetCore.Authorization" : "1.0.1",
  "Microsoft.AspNetCore.Buffering" : "0.1.1",
  "Microsoft.AspNetCore.CookiePolicy" : "1.0.1",
  "Microsoft.AspNetCore.Cors" : "1.0.1",
  "Microsoft.AspNetCore.Cryptography.Internal" : "1.0.1",
  "Microsoft.AspNetCore.Cryptography.KeyDerivation" : "1.0.1",
  "Microsoft.AspNetCore.DataProtection" : "1.0.1",
  "Microsoft.AspNetCore.DataProtection.Abstractions" : "1.0.1",
  "Microsoft.AspNetCore.DataProtection.Extensions" : "1.0.1",
  "Microsoft.AspNetCore.DataProtection.SystemWeb" : "1.0.1",
  "Microsoft.AspNetCore.Diagnostics" : "1.0.1",
  "Microsoft.AspNetCore.Diagnostics.Abstractions" : "1.0.1",
  "Microsoft.AspNetCore.Diagnostics.Elm" : "0.1.1",
  "Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore" : "1.0.1",
  "Microsoft.AspNetCore.Hosting" : "1.0.1",
  "Microsoft.AspNetCore.Hosting.Abstractions" : "1.0.1",
  "Microsoft.AspNetCore.Hosting.Server.Abstractions" : "1.0.1",
  "Microsoft.AspNetCore.Hosting.WindowsServices" : "1.0.1",
  "Microsoft.AspNetCore.Html.Abstractions" : "1.0.1",
  "Microsoft.AspNetCore.Http" : "1.0.1",
  "Microsoft.AspNetCore.Http.Abstractions" : "1.0.1",
  "Microsoft.AspNetCore.Http.Extensions" : "1.0.1",
  "Microsoft.AspNetCore.Http.Features" : "1.0.1",
  "Microsoft.AspNetCore.HttpOverrides" : "1.0.1",
  "Microsoft.AspNetCore.Identity" : "1.0.1",
  "Microsoft.AspNetCore.Identity.EntityFrameworkCore" : "1.0.1",
  "Microsoft.AspNetCore.JsonPatch" : "1.0.0",
  "Microsoft.AspNetCore.Localization" : "1.0.1",
  "Microsoft.AspNetCore.MiddlewareAnalysis" : "1.0.1",
  "Microsoft.AspNetCore.Mvc" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.Abstractions" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.ApiExplorer" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.Core" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.Cors" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.DataAnnotations" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.Formatters.Json" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.Formatters.Xml" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.Localization" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.Razor" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.Razor.Host" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.TagHelpers" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.ViewFeatures" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.WebApiCompatShim" : "1.0.2",
  "Microsoft.AspNetCore.Owin" : "1.0.1",
  "Microsoft.AspNetCore.Razor.Runtime" : "1.0.1",
  "Microsoft.AspNetCore.Routing" : "1.0.2",
  "Microsoft.AspNetCore.Routing.Abstractions" : "1.0.2",
  "Microsoft.AspNetCore.Server.IISIntegration" : "1.0.1",
  "Microsoft.AspNetCore.Server.IISIntegration.Tools" : "1.0.0-preview4-final",
  "Microsoft.AspNetCore.Server.Kestrel" : "1.0.2",
  "Microsoft.AspNetCore.Server.Kestrel.Https" : "1.0.2",
  "Microsoft.AspNetCore.Server.Testing" : "0.1.1",
  "Microsoft.AspNetCore.StaticFiles" : "1.0.1",
  "Microsoft.AspNetCore.TestHost" : "1.0.1",
  "Microsoft.AspNetCore.Testing" : "1.0.1",
  "Microsoft.AspNetCore.WebUtilities" : "1.0.1",
  "Microsoft.CodeAnalysis.CSharp" : "1.3.0",
  "Microsoft.DotNet.Watcher.Core" : "1.0.0-preview4-final",
  "Microsoft.DotNet.Watcher.Tools" : "1.0.0-preview4-final",
  "Microsoft.EntityFrameworkCore" : "1.0.2",
  "Microsoft.EntityFrameworkCore.Design" : "1.0.2",
  "Microsoft.EntityFrameworkCore.InMemory" : "1.0.2",
  "Microsoft.EntityFrameworkCore.Relational" : "1.0.2",
  "Microsoft.EntityFrameworkCore.Relational.Design" : "1.0.2",
  "Microsoft.EntityFrameworkCore.Relational.Design.Specification.Tests" : "1.0.2",
  "Microsoft.EntityFrameworkCore.Relational.Specification.Tests" : "1.0.2",
  "Microsoft.EntityFrameworkCore.Specification.Tests" : "1.0.2",
  "Microsoft.EntityFrameworkCore.SqlServer" : "1.0.2",
  "Microsoft.EntityFrameworkCore.SqlServer.Design" : "1.0.2",
  "Microsoft.EntityFrameworkCore.Sqlite" : "1.0.2",
  "Microsoft.EntityFrameworkCore.Sqlite.Design" : "1.0.2",
  "Microsoft.Extensions.Caching.Abstractions" : "1.0.1",
  "Microsoft.Extensions.Caching.Memory" : "1.0.1",
  "Microsoft.Extensions.Caching.Redis" : "1.0.1",
  "Microsoft.Extensions.Caching.SqlConfig.Tools" : "1.0.0-preview4-final",
  "Microsoft.Extensions.Caching.SqlServer" : "1.0.1",
  "Microsoft.Extensions.CommandLineUtils" : "1.0.1",
  "Microsoft.Extensions.Configuration" : "1.0.1",
  "Microsoft.Extensions.Configuration.Abstractions" : "1.0.1",
  "Microsoft.Extensions.Configuration.Binder" : "1.0.1",
  "Microsoft.Extensions.Configuration.CommandLine" : "1.0.1",
  "Microsoft.Extensions.Configuration.EnvironmentVariables" : "1.0.1",
  "Microsoft.Extensions.Configuration.FileExtensions" : "1.0.1",
  "Microsoft.Extensions.Configuration.Ini" : "1.0.1",
  "Microsoft.Extensions.Configuration.Json" : "1.0.1",
  "Microsoft.Extensions.Configuration.UserSecrets" : "1.0.1",
  "Microsoft.Extensions.Configuration.Xml" : "1.0.1",
  "Microsoft.Extensions.DependencyInjection" : "1.0.1",
  "Microsoft.Extensions.DependencyInjection.Abstractions" : "1.0.1",
  "Microsoft.Extensions.DependencyInjection.Specification.Tests" : "1.0.1",
  "Microsoft.Extensions.DependencyModel" : "1.0.0",
  "Microsoft.Extensions.DiagnosticAdapter": "1.0.1",
  "Microsoft.Extensions.FileProviders.Abstractions" : "1.0.1",
  "Microsoft.Extensions.FileProviders.Composite" : "1.0.1",
  "Microsoft.Extensions.FileProviders.Embedded" : "1.0.1",
  "Microsoft.Extensions.FileProviders.Physical" : "1.0.1",
  "Microsoft.Extensions.FileSystemGlobbing" : "1.0.1",
  "Microsoft.Extensions.Globalization.CultureInfoCache" : "1.0.1",
  "Microsoft.Extensions.Localization" : "1.0.1",
  "Microsoft.Extensions.Localization.Abstractions" : "1.0.1",
  "Microsoft.Extensions.Logging" : "1.0.1",
  "Microsoft.Extensions.Logging.Abstractions" : "1.0.1",
  "Microsoft.Extensions.Logging.Console" : "1.0.1",
  "Microsoft.Extensions.Logging.Debug" : "1.0.1",
  "Microsoft.Extensions.Logging.EventLog" : "1.0.1",
  "Microsoft.Extensions.Logging.Filter" : "1.0.1",
  "Microsoft.Extensions.Logging.Testing" : "1.0.1",
  "Microsoft.Extensions.Logging.TraceSource" : "1.0.1",
  "Microsoft.Extensions.ObjectPool" : "1.0.1",
  "Microsoft.Extensions.Options" : "1.0.1",
  "Microsoft.Extensions.Options.ConfigurationExtensions" : "1.0.1",
  "Microsoft.Extensions.PlatformAbstractions" : "1.0.0",
  "Microsoft.Extensions.Primitives" : "1.0.1",
  "Microsoft.Extensions.SecretManager.Tools" : "1.0.0-preview4-final",
  "Microsoft.Extensions.WebEncoders" : "1.0.1",
  "Microsoft.IdentityModel.Protocols.OpenIdConnect" : "2.0.0",
  "Microsoft.Net.Http.Headers" : "1.0.1",
  "Microsoft.VisualStudio.Web.BrowserLink.Loader" : "14.0.1"

Hopefully someone will find this useful when trying to work out which *&^#$% package they need to update!


Damien Bowden: Building production ready Angular apps with Visual Studio and ASP.NET Core

This article shows how Angular SPA apps can be built using Visual Studio and ASP.NET Core in a way that can be used in production. Lots of articles, blogs and templates exist for ASP.NET Core and Angular, but very few support Angular production builds.

Although Angular is not so old, many different seeds and build templates already exist, so care should be taken when choosing the infrastructure for the Angular application. Any Angular template or seed which does not support AoT or treeshaking should NOT be used, and neither should any third-party Angular component which does not support AoT.

This example uses webpack 2 to build and bundle the Angular application. In the package.json, npm scripts are used to configure the different builds and can be used inside Visual Studio using the npm task runner.

Code: Visual Studio 2015 project | Visual Studio 2017 project

Blogs in this series:

2017.02.06: Updated to webpack 2.2.1, Angular 2.4.6, renaming to angular
2017.01.18: Updated to webpack 2.2.0
2017.01.14: Added lazy loading, updated to Angular 2.4.3 and webpack 2.2.0-rc.4

Short introduction to AoT and treeshaking

AoT

AoT stands for Ahead of Time compilation. As per the definition in the Angular docs:

“With AOT, the browser downloads a pre-compiled version of the application. The browser loads executable code so it can render the application immediately, without waiting to compile the app first.”

With AoT, you get smaller package sizes, fewer asynchronous requests and better security. All of this is explained very well in the Angular docs:

https://angular.io/docs/ts/latest/cookbook/aot-compiler.html

AoT uses platformBrowser to bootstrap, not platformBrowserDynamic, which is used for JIT (Just in Time) compilation.

// Entry point for AoT compilation.
export * from './polyfills';

import { platformBrowser } from '@angular/platform-browser';
import { enableProdMode } from '@angular/core';
import { AppModuleNgFactory } from '../aot/angular2App/app/app.module.ngfactory';

enableProdMode();

platformBrowser().bootstrapModuleFactory(AppModuleNgFactory);

treeshaking

Treeshaking removes the unused portions of the libraries from the application, reducing the size of the application.

https://angular.io/docs/ts/latest/cookbook/aot-compiler.html

npm task runner

npm scripts can be used easily inside Visual Studio by using the npm task runner. Once installed, this needs to be configured correctly.

VS2015: Go to Tools –> Options –> Projects and Solutions –> External Web Tools and select all the checkboxes. More information can be found here.

In VS2017, this is slightly different:

Go to Tools –> Options –> Projects and Solutions –> Web Package Management –> External Web Tools and select all checkboxes:

vs_angular_build_01

npm scripts

ngc

ngc is the Angular compiler, which is used to do the AoT build using the tsconfig-aot.json configuration.

"ngc": "ngc -p ./tsconfig-aot.json",

The tsconfig-aot.json file builds to the aot folder.

{
  "compilerOptions": {
    "target": "es5",
    "module": "es2015",
    "moduleResolution": "node",
    "sourceMap": false,
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "removeComments": true,
    "noImplicitAny": true,
    "suppressImplicitAnyIndexErrors": true,
    "skipLibCheck": true,
    "lib": [
      "es2015",
      "dom"
    ]
  },
  "files": [
    "angular2App/app/app.module.ts",
    "angular2App/app/about/about.module.ts",
    "angular2App/main-aot.ts"
  ],
  "angularCompilerOptions": {
    "genDir": "aot",
    "skipMetadataEmit": true
  },
  "compileOnSave": false,
  "buildOnSave": false
}

build-production

The build-production npm script is used for the production build and can be used for publishing or on CI as required. The script runs the ngc script and then the webpack-production build.

"build-production": "npm run ngc && npm run webpack-production",

webpack-production npm script:

"webpack-production": "set NODE_ENV=production&& webpack",

watch-webpack-dev

The watch build monitors the source files and builds if any file changes.

"watch-webpack-dev": "set NODE_ENV=development&& webpack --watch --color",

start (webpack-dev-server)

The start script runs the webpack-dev-server client application and also the ASP.NET Core server application.

"start": "concurrently \"webpack-dev-server --hot --inline --progress --port 8080\" \"dotnet run\" ",

Any of these npm scripts can be run from the npm task runner.

vs_angular_build_02

Deployment

When deploying the application to IIS, build-production needs to be run, then the dotnet publish command, and then the contents can be copied to the IIS server. The publish-for-iis npm script can be used for this, and the command can also be started from a build server without problem.

"publish-for-iis": "npm run build-production && dotnet publish -c Release" 


https://docs.microsoft.com/en-us/dotnet/articles/core/tools/dotnet-publish

When deploying to IIS, you need to install DotNetCore.1.1.0-WindowsHosting.exe on the server. Docs for setting up the IIS server:

https://docs.microsoft.com/en-us/aspnet/core/publishing/iis

Why not webpack task runner?

The Webpack Task Runner cannot be used for Webpack Angular applications because it does not support the required commands for Angular Webpack builds, either dev or production. The webpack -d build causes map errors in IE, and the ngc compiler cannot be used, hence no production builds can be started from the Webpack Task Runner. For Angular Webpack projects, do not use the Webpack Task Runner; use the npm task runner.

Full package.json

{
  "name": "angular2-webpack-visualstudio",
  "version": "1.0.0",
  "description": "",
  "main": "wwwroot/index.html",
  "author": "",
  "license": "ISC",
  "scripts": {
    "ngc": "ngc -p ./tsconfig-aot.json",
    "start": "concurrently \"webpack-dev-server --hot --inline --port 8080\" \"dotnet run\" ",
    "webpack-dev": "set NODE_ENV=development && webpack",
    "webpack-production": "set NODE_ENV=production && webpack",
    "build-dev": "npm run webpack-dev",
    "build-production": "npm run ngc && npm run webpack-production",
    "watch-webpack-dev": "set NODE_ENV=development && webpack --watch --color",
    "watch-webpack-production": "npm run build-production --watch --color",
    "publish-for-iis": "npm run build-production && dotnet publish -c Release"
  },
  "dependencies": {
    "@angular/common": "~2.4.6",
    "@angular/compiler": "~2.4.6",
    "@angular/core": "~2.4.6",
    "@angular/forms": "~2.4.6",
    "@angular/http": "~2.4.6",
    "@angular/platform-browser": "~2.4.6",
    "@angular/platform-browser-dynamic": "~2.4.6",
    "@angular/router": "~3.4.6",
    "@angular/upgrade": "~2.4.6",
    "angular-in-memory-web-api": "0.2.4",
    "core-js": "2.4.1",
    "reflect-metadata": "0.1.9",
    "rxjs": "5.0.3",
    "zone.js": "0.7.5",
    "@angular/compiler-cli": "~2.4.6",
    "@angular/platform-server": "~2.4.6",
    "bootstrap": "^3.3.7",
    "ie-shim": "~0.1.0"
  },
  "devDependencies": {
    "@types/node": "7.0.0",
    "angular2-template-loader": "^0.6.0",
    "angular-router-loader": "^0.5.0",
    "awesome-typescript-loader": "^2.2.4",
    "clean-webpack-plugin": "^0.1.15",
    "concurrently": "^3.1.0",
    "copy-webpack-plugin": "^4.0.1",
    "css-loader": "^0.26.1",
    "file-loader": "^0.9.0",
    "html-webpack-plugin": "^2.26.0",
    "jquery": "^2.2.0",
    "json-loader": "^0.5.4",
    "node-sass": "^4.3.0",
    "raw-loader": "^0.5.1",
    "rimraf": "^2.5.4",
    "sass-loader": "^4.1.1",
    "source-map-loader": "^0.1.6",
    "style-loader": "^0.13.1",
    "ts-helpers": "^1.1.2",
    "tslint": "^4.3.1",
    "tslint-loader": "^3.3.0",
    "typescript": "2.0.3",
    "url-loader": "^0.5.7",
    "webpack": "^2.2.1",
    "webpack-dev-server": "2.2.1"
  },
  "-vs-binding": {
    "ProjectOpened": [
      "watch-webpack-dev"
    ]
  }
}

Full webpack.prod.js

var path = require('path');

var webpack = require('webpack');

var HtmlWebpackPlugin = require('html-webpack-plugin');
var CopyWebpackPlugin = require('copy-webpack-plugin');
var CleanWebpackPlugin = require('clean-webpack-plugin');
var helpers = require('./webpack.helpers');

console.log('@@@@@@@@@ USING PRODUCTION @@@@@@@@@@@@@@@');

module.exports = {

    entry: {
        'vendor': './angularApp/vendor.ts',
        'polyfills': './angularApp/polyfills.ts',
        'app': './angularApp/main-aot.ts' // AoT compilation
    },

    output: {
        path: './wwwroot/',
        filename: 'dist/[name].[hash].bundle.js',
        chunkFilename: 'dist/[id].[hash].chunk.js',
        publicPath: '/'
    },

    resolve: {
        extensions: ['.ts', '.js', '.json', '.css', '.scss', '.html']
    },

    devServer: {
        historyApiFallback: true,
        stats: 'minimal',
        outputPath: path.join(__dirname, 'wwwroot/')
    },

    module: {
        rules: [
            {
                test: /\.ts$/,
                loaders: [
                    'awesome-typescript-loader',
                    'angular-router-loader?aot=true&genDir=aot/'
                ]
            },
            {
                test: /\.(png|jpg|gif|woff|woff2|ttf|svg|eot)$/,
                loader: 'file-loader?name=assets/[name]-[hash:6].[ext]'
            },
            {
                test: /favicon.ico$/,
                loader: 'file-loader?name=/[name].[ext]'
            },
            {
                test: /\.css$/,
                loader: 'style-loader!css-loader'
            },
            {
                test: /\.scss$/,
                exclude: /node_modules/,
                loaders: ['style-loader', 'css-loader', 'sass-loader']
            },
            {
                test: /\.html$/,
                loader: 'raw-loader'
            }
        ],
        exprContextCritical: false
    },

    plugins: [
        new CleanWebpackPlugin(
            [
                './wwwroot/dist',
                './wwwroot/assets'
            ]
        ),
        new webpack.NoEmitOnErrorsPlugin(),
        new webpack.optimize.UglifyJsPlugin({
            compress: {
                warnings: false
            },
            output: {
                comments: false
            },
            sourceMap: false
        }),
        new webpack.optimize.CommonsChunkPlugin(
            {
                name: ['vendor', 'polyfills']
            }),

        new HtmlWebpackPlugin({
            filename: 'index.html',
            inject: 'body',
            template: 'angularApp/index.html'
        }),

        new CopyWebpackPlugin([
            { from: './angularApp/images/*.*', to: 'assets/', flatten: true }
        ])
    ]
};


Links:

https://damienbod.com/2016/06/12/asp-net-core-angular2-with-webpack-and-visual-studio/

https://github.com/preboot/angular2-webpack

https://webpack.github.io/docs/

https://github.com/jtangelder/sass-loader

https://github.com/petehunt/webpack-howto/blob/master/README.md

https://blogs.msdn.microsoft.com/webdev/2015/03/19/customize-external-web-tools-in-visual-studio-2015/

https://marketplace.visualstudio.com/items?itemName=MadsKristensen.NPMTaskRunner

http://sass-lang.com/

http://blog.thoughtram.io/angular/2016/06/08/component-relative-paths-in-angular-2.html

https://angular.io/docs/ts/latest/guide/webpack.html

http://blog.mgechev.com/2016/06/26/tree-shaking-angular2-production-build-rollup-javascript/

https://angular.io/docs/ts/latest/tutorial/toh-pt5.html

http://angularjs.blogspot.ch/2016/06/improvements-coming-for-routing-in.html?platform=hootsuite

https://angular.io/docs/ts/latest/cookbook/aot-compiler.html

https://docs.microsoft.com/en-us/aspnet/core/publishing/iis

https://weblog.west-wind.com/posts/2016/Jun/06/Publishing-and-Running-ASPNET-Core-Applications-with-IIS

http://blog.mgechev.com/2017/01/17/angular-in-production/



Damien Bowden: Creating an ASP.NET Core 1.1 VS2017 Docker application

This blog shows how to set up a basic ASP.NET Core 1.1 application using Visual Studio 2017 and Docker.

Code: https://github.com/damienbod/AspNetCoreVS2017Docker

2017.02.03 Updated to VS2017 RC3 msbuild3

This article from Swaminathan Vetri demonstrates how to set up everything for an ASP.NET Core 1.0 application.

Now the application needs to be updated to ASP.NET Core 1.1. Open the csproj file and update the PackageReference elements and also the TargetFramework to the 1.1 packages and target. At present, there is no tooling help when updating the packages, as there was with the project.json file, so you need to know exactly which packages to use when updating directly.

<Project ToolsVersion="15.0" Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.1</TargetFramework>
    <PreserveCompilationContext>true</PreserveCompilationContext>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Routing" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Server.IISIntegration" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Server.Kestrel" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.EnvironmentVariables" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.FileExtensions" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Logging" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Logging.Console" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Options.ConfigurationExtensions" Version="1.1.0" />
  </ItemGroup>
</Project>

Then update the docker-compose.ci.build.yml file. You need to select the required image which can be found on Docker Hub. The microsoft/aspnetcore-build:1.1.0-msbuild image is used here.

version: '2'

services:
  ci-build:
    image: microsoft/aspnetcore-build:1.1.0-msbuild
    volumes:
      - .:/src
    working_dir: /src
    command: /bin/bash -c "dotnet restore && dotnet publish -c Release -o ./bin/Release/PublishOutput"

Now update the Dockerfile to target the 1.1.0 version.

FROM microsoft/aspnetcore:1.1.0
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-bin/Release/PublishOutput} .
ENTRYPOINT ["dotnet", "AspNetCoreVS2017Docker.dll"]

Start the application using Docker. You can debug the application which is running inside Docker, all in Visual Studio 2017. Very neat.

vs2017_docker_basic_1

Notes:

When setting up Docker, you need to share the C drive in Docker.

Links:

Debugging Asp.Net core apps running in Docker Containers using VS 2017

https://www.sesispla.net/blog/language/en/2016/05/running-asp-net-core-1-0-rc2-in-docker/

https://hub.docker.com/r/microsoft/aspnetcore-build/tags/

https://blogs.msdn.microsoft.com/webdev/2016/11/16/new-docker-tools-for-visual-studio/

https://blogs.msdn.microsoft.com/stevelasker/2016/06/14/configuring-docker-for-windows-volumes/



Dominick Baier: IdentityServer4.1.0.0

It’s done.

Release notes here.

Nuget here.

Docs here.

I am off to holidays.

See you next year.


Filed under: .NET Security, ASP.NET, OAuth, OpenID Connect, WebAPI


Andrew Lock: Redirecting unknown cultures when using the url culture provider

Redirecting unknown cultures when using the url culture provider

This is the next in a series of posts on using the middleware as filters feature of ASP.NET Core 1.1 to add a url culture provider to your application. In this post I show how to handle the case where a user requests a culture that does not exist, or that we do not support, by redirecting to a URL with a supported culture.

The current series of posts is given below:

By working through each of these posts we are slowly building a full system for a usable url culture provider. We now have globally defined routing conventions that ensure our urls are prefixed with a culture like en-GB or fr-FR. In the last post we added a culture constraint and catch-all routes to ensure that requests to a culture-less url like Home/Index/ are redirected to a cultured one, like en-GB/Home/Index.

One of the remaining holes in our current implementation is handling the case when users request a URL for a culture that does not exist, or that we do not support. In the example below, we do not support Spanish in the application, so the request localisation is set to the default culture en-GB:

Redirecting unknown cultures when using the url culture provider

This is fine from the application's point of view, but it is not great for the user. It looks to the user as though we support Spanish, as we have a Spanish culture url, but all the text will be in English. A potentially better approach would be to redirect the user to a URL with the culture that is actually being used. This also helps reduce the number of pages which are essentially equivalent, which is good for SEO.

Handling redirects in middleware as filters

The technique I'm going to use involves adding an additional piece of middleware to our middleware-as-filters pipeline. If you're not comfortable with how this works I suggest checking out the earlier posts in this series.

This middleware checks the culture that has been applied to the current request to see if it matches the value that was requested via the routing {culture} value. If the values match (ignoring case differences), the middleware just moves on to the next middleware in the pipeline and nothing else happens.

If the requested and actual cultures are different, then the middleware short-circuits the request, sending a redirect to the same URL but with the correct culture. Middleware-as-filters run as ResourceFilters, so they can bypass the action method completely, as in this case.

That is the high-level approach; now onto the code. Brace yourself, there's quite a lot, which I'll walk through afterwards.

public class RedirectUnsupportedCulturesMiddleware  
{
    private readonly RequestDelegate _next;
    private readonly string _routeDataStringKey;

    public RedirectUnsupportedCulturesMiddleware(
        RequestDelegate next,
        RequestLocalizationOptions options)
    {
        _next = next;
        var provider = options.RequestCultureProviders
            .Select(x => x as RouteDataRequestCultureProvider)
            .Where(x => x != null)
            .FirstOrDefault();
        _routeDataStringKey = provider.RouteDataStringKey;
    }

    public async Task Invoke(HttpContext context)
    {
        var requestedCulture = context.GetRouteValue(_routeDataStringKey)?.ToString();
        var cultureFeature = context.Features.Get<IRequestCultureFeature>();

        var actualCulture = cultureFeature?.RequestCulture.Culture.Name;

        if (string.IsNullOrEmpty(requestedCulture) ||
            !string.Equals(requestedCulture, actualCulture, StringComparison.OrdinalIgnoreCase))
        {
            var newCulturedPath = GetNewPath(context, actualCulture);
            context.Response.Redirect(newCulturedPath);
            return;
        }

        await _next.Invoke(context);
    }

    private string GetNewPath(HttpContext context, string newCulture)
    {
        var routeData = context.GetRouteData();
        var router = routeData.Routers[0];
        var virtualPathContext = new VirtualPathContext(
            context,
            routeData.Values,
            new RouteValueDictionary { { _routeDataStringKey, newCulture } });

        return router.GetVirtualPath(virtualPathContext).VirtualPath;
    }
}

Breaking down the code

This is a standard piece of ASP.NET Core middleware, so our constructor takes a RequestDelegate which it calls in order to invoke the next middleware in the pipeline.

Our middleware also takes in an instance of RequestLocalizationOptions. It uses this to attempt to determine how the RouteDataRequestCultureProvider has been configured. In particular, we need the RouteDataStringKey, which represents culture in our URLs. By default it is "culture", but this approach would pick up any changes too.

Note that we assume that we will always have a RouteDataRequestCultureProvider here. That sort of makes sense, as redirecting to a different URL based on culture only makes sense if we are taking the culture from the URL!

We have implemented the standard middleware Invoke function without any further dependencies other than the HttpContext. When invoked, the middleware will attempt to find a route value corresponding to the specified RouteDataStringKey. This will give the name of the culture the user requested, for example es-ES.

Next, we obtain the current culture. I chose to retrieve this from the context using the IRequestCultureFeature, mostly just to show it is possible, but you could also just use the thread culture directly by using CultureInfo.CurrentCulture.Name.

We then compare the culture requested with the actual culture that was set. If the requested culture was one we support, then these should be the same (ignoring case). If the culture requested was not a real culture, was not a culture we support, or was a more-specific culture than we support, then these will not match.

Considering that last point - if the user requested de-DE but we only support de then the culture provider will automatically fall back to de. This is desirable behaviour, but the requested and actual cultures will not match.

Once we have identified that the cultures do not match, we need to redirect the user to the correct url. Achieving this goal seemed surprisingly tricky, and potentially rather fragile, but it worked for me.

In order to route to a url you need an instance of an IRouter. You can obtain a collection of these, along with all the current route values, by calling HttpContext.GetRouteData(). I simply chose the first IRouter instance, passed in all the current route values, and provided a new value for the "culture" route value to create a VirtualPathContext, which can in turn be used to generate a path. Hard work!

Adding the middleware to your application

Now we have our middleware, we actually need to add it to our application somewhere. Luckily, we are already using middleware as filters to extract the culture from the url, so we can simply insert our middleware into the pipeline.

public class LocalizationPipeline  
{
    public void Configure(IApplicationBuilder app, RequestLocalizationOptions options)
    {
        app.UseRequestLocalization(options);
        app.UseMiddleware<RedirectUnsupportedCulturesMiddleware>();
    }
}

So our localisation pipeline (which will be run as a filter, thanks to a global MiddlewareFilterAttribute) will first attempt to resolve the request's culture. Immediately after doing so, we run our new middleware, and redirect the request if it is not a culture we support.
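For reference, this is roughly what that global registration looks like (covered in detail in the earlier posts in this series):

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc(opts =>
    {
        // Run the LocalizationPipeline as a resource filter for every request.
        opts.Filters.Add(new MiddlewareFilterAttribute(typeof(LocalizationPipeline)));
    });
}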

If you're not sure what's going on here, I suggest checking out my earlier posts on setting up url localisation in your apps.

Trying it out

That should be all we need to do in order to automatically redirect requests that don't match in culture.

Trying a gibberish culture localhost/zz-ZZ redirects to our default culture:

Redirecting unknown cultures when using the url culture provider

Using a culture we don't support localhost/es-ES similarly redirects to the default culture:

Redirecting unknown cultures when using the url culture provider

If we support a fallback culture de then the request localhost/de-DE is redirected to that:

Redirecting unknown cultures when using the url culture provider

Caveats

One thing I haven't handled here is the difference between CurrentCulture and CurrentUICulture. These two can be different, and are supported by both the RequestLocalizationMiddleware and the RouteDataRequestCultureProvider. I chose not to address it here, but if you are using both in your application, you could easily extend the middleware to handle differences in either value.

Summary

With these redirects in place, you should hopefully have the last piece of the puzzle for implementing the url culture provider in your ASP.NET Core 1.1 apps. If you come across anything I've missed, comments, or improvements, then do let me know!


Dominick Baier: IdentityServer4 is now OpenID Certified

As of today – IdentityServer4 is officially certified by the OpenID Foundation. Release of 1.0 will be this Friday!

More details here.

oid-l-certification-mark-l-cmyk-150dpi-90mm


Filed under: .NET Security, OAuth, WebAPI


Damien Bowden: Implementing a Client White-list using ASP.NET Core Middleware

This article shows how a client white-list could be implemented using ASP.NET Core middleware that checks the remote IP address of the request. If the client IP is on the white-list, no restrictions exist.

Code: https://github.com/damienbod/ClientIpAspNetCoreIIS

The middleware uses an admin white-list parameter from the constructor to compare with the remote IP address from the HttpContext Connection property. This is different from previous versions of .NET. In the example, all GET requests are allowed. If any other request method is used, the remote IP is checked against the white-list. If it does not exist there, a 403 is returned.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

namespace ClientIpAspNetCore
{
    public class AdminWhiteListMiddleware
    {
        private readonly RequestDelegate _next;
        private readonly ILogger<AdminWhiteListMiddleware> _logger;
        private readonly string _adminWhiteList;

        public AdminWhiteListMiddleware(RequestDelegate next, ILogger<AdminWhiteListMiddleware> logger, string adminWhiteList)
        {
            _adminWhiteList = adminWhiteList;
            _next = next;
            _logger = logger;
        }

        public async Task Invoke(HttpContext context)
        {
            if (context.Request.Method != "GET")
            {
                var remoteIp = context.Connection.RemoteIpAddress;
                _logger.LogInformation($"Request from Remote IP address: {remoteIp}");

                string[] ip = _adminWhiteList.Split(';');
                if (!ip.Any(option => option == remoteIp.ToString()))
                {
                    _logger.LogInformation($"Forbidden Request from Remote IP address: {remoteIp}");
                    context.Response.StatusCode = (int)HttpStatusCode.Forbidden;
                    return;
                }
            }

            await _next.Invoke(context);

        }
    }
}

The white-list is configured in appsettings.json. This is a ‘;’-separated list which is split in the middleware class.

{
    "AdminWhiteList":  "127.0.0.1;192.168.1.5",
    "Logging": {
        "IncludeScopes": false,
        "LogLevel": {
            "Default": "Debug",
            "System": "Information",
            "Microsoft": "Information"
        }
    }
}

In the startup class, the AdminWhiteListMiddleware type is added using the appsettings configuration.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	...

	app.UseStaticFiles();

	app.UseMiddleware<AdminWhiteListMiddleware>(Configuration["AdminWhiteList"]);
	app.UseMvc();
}

If a request other than a GET is sent and the client is not in the white-list, the 403 response is returned to the client and logged.

2016-12-18 16:45:42.8891|0|ClientIpAspNetCore.AdminWhiteListMiddleware|INFO|  Request from Remote IP address: 192.168.1.4 
2016-12-18 16:45:42.9031|0|ClientIpAspNetCore.AdminWhiteListMiddleware|INFO|  Forbidden Request from Remote IP address: 192.168.1.4 

An ActionFilter could also be used to implement this, for example if more specific logic is required.

using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc.Authorization;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Extensions.Logging;

namespace ClientIpAspNetCore.Filters
{
    public class ClientIdCheckFilter : ActionFilterAttribute
    {
        private readonly ILogger _logger;

        public ClientIdCheckFilter(ILoggerFactory loggerFactory)
        {
            _logger = loggerFactory.CreateLogger("ClassConsoleLogActionOneFilter");
        }

        public override void OnActionExecuting(ActionExecutingContext context)
        {
            _logger.LogInformation($"Remote IpAddress: {context.HttpContext.Connection.RemoteIpAddress}");

            // TODO implement some business logic for this...

            base.OnActionExecuting(context);
        }
    }
}

The ActionFilter can be added to the services.

public void ConfigureServices(IServiceCollection services)
{
	services.AddScoped<ClientIdCheckFilter>();

	services.AddMvc();
}

And can be used specifically on any controller as required.

[ServiceFilter(typeof(ClientIdCheckFilter))]
[Route("api/[controller]")]
public class ValuesController : Controller

Note: I have not tested this with all the different possible hops or forwarded headers; only with IIS and Kestrel.
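If the application sits behind a proxy or load balancer, Connection.RemoteIpAddress will be the address of the proxy unless the forwarded headers are applied first. A minimal sketch using the Microsoft.AspNetCore.HttpOverrides package – an untested assumption for those scenarios, shown here only to illustrate the ordering:

app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    // Replace Connection.RemoteIpAddress with the value from the X-Forwarded-For header.
    ForwardedHeaders = ForwardedHeaders.XForwardedFor
});

// The forwarded headers middleware must run before the white-list middleware.
app.UseMiddleware<AdminWhiteListMiddleware>(Configuration["AdminWhiteList"]);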

Links:

https://docs.microsoft.com/en-us/aspnet/core/fundamentals/middleware

http://odetocode.com/blogs/scott/archive/2016/11/22/asp-net-core-and-the-enterprise-part-3-middleware.aspx



Dominick Baier: Identity vs Permissions

We often see people misusing IdentityServer as an authorization/permission management system. This is troublesome – here’s why.

IdentityServer (hence the name) is really good at providing a stable identity for your users across all applications in your system. And by identity I mean immutable identity (at least for the lifetime of the session) – typical examples would be a user id (aka the subject id), a name, department, email address, customer id etc…

IdentityServer is not so well suited for letting clients or APIs know what this user is allowed to do – e.g. create a customer record, delete a table, read a certain document etc…

And this is not inherently a weakness of IdentityServer – but IdentityServer is a token service, and it’s a fact that claims and especially tokens are not a particularly good medium for transporting such information. Here are a couple of reasons:

  • Claims are supposed to model the identity of a user, not permissions
  • Claims are typically simple strings – you often want something more sophisticated to model authorization information or permissions
  • Permissions of a user are often different depending on which client or API it is using – putting them all into a single identity or access token is confusing and leads to problems. The same permission might even have a different meaning depending on who is consuming it
  • Permissions can change over the life time of a session, but the only way to get a new token is to make a roundtrip to the token service. This often requires some UI interaction which is not preferable
  • Permissions and business logic often overlap – where do you want to draw the line?
  • The only party that knows exactly about the authorization requirements of the current operation is the actual code where it happens – the token service can only provide coarse grained information
  • You want to keep your tokens small. Browser URL length restrictions and bandwidth are often limiting factors
  • And last but not least – it is easy to add a claim to a token. It is very hard to remove one. You never know if somebody already took a hard dependency on it. Every single claim you add to a token should be scrutinized.

In other words – keep permissions and authorization data out of your tokens. Add the authorization information to your context once you get closer to the resource that actually needs the information. And even then, it is tempting to model permissions using claims (the Microsoft services and frameworks kind of push you into that direction) – keep in mind that a simple string is a very limiting data structure. Modern programming languages have much better constructs than that.
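As an illustration of acquiring authorization data close to the resource, here is a minimal sketch using ASP.NET Core authorization policies. The IPermissionStore and its lookup method are hypothetical – the point is that the permission check lives in the API that needs it, not in the token:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;

public interface IPermissionStore // hypothetical local permission store
{
    Task<bool> CanDeleteTablesAsync(string subjectId);
}

public class CanDeleteTableRequirement : IAuthorizationRequirement { }

public class CanDeleteTableHandler : AuthorizationHandler<CanDeleteTableRequirement>
{
    private readonly IPermissionStore _permissions;

    public CanDeleteTableHandler(IPermissionStore permissions)
    {
        _permissions = permissions;
    }

    protected override async Task HandleRequirementAsync(
        AuthorizationHandlerContext context, CanDeleteTableRequirement requirement)
    {
        // The stable identity (subject id) comes from the token...
        var subjectId = context.User.FindFirst("sub")?.Value;

        // ...but the permission itself is resolved locally, per API.
        if (subjectId != null && await _permissions.CanDeleteTablesAsync(subjectId))
        {
            context.Succeed(requirement);
        }
    }
}

The handler would then be registered in DI and attached to a policy via the standard AddAuthorization configuration.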

What about roles?
That’s a very common question. Roles are a bit of a grey area between identity and authorization. My rule of thumb is that if a role is a fundamental part of the user identity that is of interest to every part of your system – and role membership does not or not frequently change – it is a candidate for a claim in a token. Examples could be Customer vs Employee – or Patient vs Doctor vs Nurse.

Every other usage of roles – especially where role membership differs based on the client or API being used – is pure authorization data and should be avoided. If you realize that the number of roles of a user is high – or growing – avoid putting them into the token.

Conclusion
Design for a clean separation of identity and permissions (which is just a re-iteration of authentication vs authorization). Acquire authorization data as close as possible to the code that needs it – only there can you make an informed decision about what you really need.

I also often get the question whether we have a similarly flexible solution for authorization as we have with IdentityServer for authentication – and the answer is – right now – no. But I have the feeling that 2017 will be our year to finally tackle the authorization problem. Stay tuned!


Filed under: .NET Security, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: Optimizing Identity Tokens for size

Generally speaking, you want to keep your (identity) tokens small. They often need to be transferred via length constrained transport mechanisms – especially the browser URL which might have limitations (e.g. 2 KB in IE). You also need to somehow store the identity token for the length of a session if you want to use the post logout redirect feature at logout time.

Therefore the OpenID Connect specification suggests the following (in section 5.4):

The Claims requested by the profile, email, address, and phone scope values are returned from the UserInfo Endpoint, as described in Section 5.3.2, when a response_type value is used that results in an Access Token being issued. However, when no Access Token is issued (which is the case for the response_type value id_token), the resulting Claims are returned in the ID Token.

IOW – if only an identity token is requested, put all claims into the token. If however an access token is requested as well (e.g. via id_token token or code id_token), it is OK to remove the claims from the identity token and rather let the client use the userinfo endpoint to retrieve them.

That’s how we always handled identity token generation in IdentityServer by default. You could then override our default behaviour by setting the AlwaysIncludeInIdToken flag on the ScopeClaim class.

When we did the configuration re-design in IdentityServer4, we asked ourselves if this override feature is still required. Times have changed a bit and the popular client libraries out there (e.g. the ASP.NET Core OpenID Connect middleware or Brock’s JS client) automatically use the userinfo endpoint anyways as part of the authentication process.

So we removed it.

Shortly after that, several people brought to our attention that they were actually relying on that feature and are now missing their claims in the identity token without a way to change configuration. Sorry about that.

Post RC5, we brought this feature back – it is now a client setting, and not a claims setting anymore. It will be included in RTM next week and documented in our docs.
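For reference, the setting ended up on the client configuration. A minimal sketch – assuming the released property name AlwaysIncludeUserClaimsInIdToken and an illustrative client id:

var client = new Client
{
    ClientId = "mvc.client", // illustrative

    // Put the user claims back into the identity token for this client,
    // instead of relying on the userinfo endpoint.
    AlwaysIncludeUserClaimsInIdToken = true
};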

I hope this post explains our motivation, and some background, why this behaviour existed in the first place.


Filed under: .NET Security, IdentityServer, OpenID Connect, WebAPI


Damien Bowden: EF Core diagnosis and features with MS SQL Server

This article shows how Entity Framework Core messages can be logged and compared using the SQL Profiler, and also demonstrates some of the cool new 1.1 features, but not all of them. All information can be found in the links at the bottom and especially in the excellent docs for EF Core.

Code: https://github.com/damienbod/EFCoreFeaturesAndDiag

project.json with EF Core packages and tools

When using EF Core, you need to add the correct packages and tools to the project file. EF Core 1.1 has a lot of changes compared to 1.0. You should not mix different versions of EF Core; use either the LTS or the current version, but not both. Microsoft.EntityFrameworkCore.Tools.DotNet is a new package which came with 1.1.

{
    "dependencies": {
        "Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore": "1.1.0",   
        "Microsoft.EntityFrameworkCore": "1.1.0",
        "Microsoft.EntityFrameworkCore.Relational": "1.1.0",
        "Microsoft.EntityFrameworkCore.SqlServer": "1.1.0",
        "Microsoft.EntityFrameworkCore.SqlServer.Design": {
            "version": "1.1.0",
            "type": "build"
        },
        "Microsoft.EntityFrameworkCore.Tools": "1.1.0-preview4-final",
        ...
    },

    "tools": {
        "Microsoft.EntityFrameworkCore.Tools.DotNet": "1.1.0-preview4",
        ...
    },

    ...

}

Startup ConfigureServices

The database and EF Core are configured, as is usual for an ASP.NET Core application, in the Startup ConfigureServices method. The following code sets up EF Core to use a MS SQL Server database and uses the new retry-on-failure method from version 1.1.

public void ConfigureServices(IServiceCollection services)
{
	var sqlConnectionString = Configuration.GetConnectionString("DataAccessMsSqlServerProvider");

	services.AddDbContext<DomainModelMsSqlServerContext>(
		options => options.UseSqlServer(
			sqlConnectionString,
			sqlServerOptions => sqlServerOptions.EnableRetryOnFailure()
		)
	);

	services.AddScoped<IDataAccessProvider, DataAccessMsSqlServerProvider>();

	services.AddMvc().AddJsonOptions(options =>
	{
		options.SerializerSettings.ReferenceLoopHandling = ReferenceLoopHandling.Ignore;
	});
}

Data Model

As of version 1.1, Entity Framework Core can use backing fields to connect with the database and not just properties. This opens up a whole new world of possibilities for how entities can be designed or used. The _description field from the following DataEventRecord entity will be used for the database.

public class DataEventRecord
{
	private string _description;

	[Key]
	public long DataEventRecordId { get; set; }

	public string Name { get; set; }

	public string MadDescription {
		get { return _description; }
		set { _description = value;  }
	}

	public DateTime Timestamp { get; set; }

	[ForeignKey("SourceInfoId")]
	public SourceInfo SourceInfo { get; set; }

	public long SourceInfoId { get; set; }
}

DbContext with Field and Column mapping

In the OnModelCreating method of the DbContext, the _description field is mapped to the _description column for the MadDescription property. The context also configures shadow properties to add updated timestamps.

using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;

namespace EFCoreFeaturesAndDiag.Model
{
    // >dotnet ef migrations add testMigration
    public class DomainModelMsSqlServerContext : DbContext
    {
        public DomainModelMsSqlServerContext(DbContextOptions<DomainModelMsSqlServerContext> options) :base(options)
        { }
        
        public DbSet<DataEventRecord> DataEventRecords { get; set; }

        public DbSet<SourceInfo> SourceInfos { get; set; }

        protected override void OnModelCreating(ModelBuilder builder)
        {
            builder.Entity<DataEventRecord>().HasKey(m => m.DataEventRecordId);
            builder.Entity<SourceInfo>().HasKey(m => m.SourceInfoId);

            // shadow properties
            builder.Entity<DataEventRecord>().Property<DateTime>("UpdatedTimestamp");
            builder.Entity<SourceInfo>().Property<DateTime>("UpdatedTimestamp");

            builder.Entity<DataEventRecord>()
                .Property(b => b.MadDescription)
                .HasField("_description")
                .HasColumnName("_description");

            base.OnModelCreating(builder);
        }

        public override int SaveChanges()
        {
            ChangeTracker.DetectChanges();

            updateUpdatedProperty<SourceInfo>();
            updateUpdatedProperty<DataEventRecord>();

            return base.SaveChanges();
        }

        private void updateUpdatedProperty<T>() where T : class
        {
            var modifiedSourceInfo =
                ChangeTracker.Entries<T>()
                    .Where(e => e.State == EntityState.Added || e.State == EntityState.Modified);

            foreach (var entry in modifiedSourceInfo)
            {
                entry.Property("UpdatedTimestamp").CurrentValue = DateTime.UtcNow;
            }
        }
    }
}
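
The shadow properties configured above don't exist on the entity classes, but they can still be read and queried through the static EF.Property method. A minimal sketch, not part of the original sample:

// query against the "UpdatedTimestamp" shadow property
var recentlyUpdated = _context.DataEventRecords
    .OrderByDescending(d => EF.Property<DateTime>(d, "UpdatedTimestamp"))
    .ToList();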

Logging and diagnosis

Logging for EF Core is configured for this application as described here in the Entity Framework Core docs.

The EfCoreFilteredLoggerProvider class:

using Microsoft.Extensions.Logging;
using System;
using System.Linq;
using Microsoft.EntityFrameworkCore.Storage.Internal;

namespace EFCoreFeaturesAndDiag.Logging
{
    public class EfCoreFilteredLoggerProvider : ILoggerProvider
    {
        private static string[] _categories =
        {
            typeof(Microsoft.EntityFrameworkCore.Storage.Internal.RelationalCommandBuilderFactory).FullName,
            typeof(Microsoft.EntityFrameworkCore.Storage.Internal.SqlServerConnection).FullName
        };

        public ILogger CreateLogger(string categoryName)
        {
            if (_categories.Contains(categoryName))
            {
                return new MyLogger();
            }

            return new NullLogger();
        }

        public void Dispose()
        { }

        private class MyLogger : ILogger
        {
            public bool IsEnabled(LogLevel logLevel)
            {
                return true;
            }

            public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
            {
                Console.WriteLine(formatter(state, exception));
            }

            public IDisposable BeginScope<TState>(TState state)
            {
                return null;
            }
        }

        private class NullLogger : ILogger
        {
            public bool IsEnabled(LogLevel logLevel)
            {
                return false;
            }

            public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
            { }

            public IDisposable BeginScope<TState>(TState state)
            {
                return null;
            }
        }
    }
}

The EfCoreLoggerProvider class:

using Microsoft.Extensions.Logging;
using System;
using System.IO;

namespace EFCoreFeaturesAndDiag.Logging
{
    public class EfCoreLoggerProvider : ILoggerProvider
    {
        public ILogger CreateLogger(string categoryName)
        {
            return new MyLogger();
        }

        public void Dispose()
        { }

        private class MyLogger : ILogger
        {
            public bool IsEnabled(LogLevel logLevel)
            {
                return true;
            }

            public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
            {
                File.AppendAllText(@"C:\temp\log.txt", formatter(state, exception));
                Console.WriteLine(formatter(state, exception));
            }

            public IDisposable BeginScope<TState>(TState state)
            {
                return null;
            }
        }
    }
}

The EfCoreLoggerProvider logger is then added in the DataAccessMsSqlServerProvider constructor.

public class DataAccessMsSqlServerProvider : IDataAccessProvider
{
	private readonly DomainModelMsSqlServerContext _context;
	private readonly ILogger _logger;

	public DataAccessMsSqlServerProvider(DomainModelMsSqlServerContext context, ILoggerFactory loggerFactory)
	{
		_context = context;
		loggerFactory.AddProvider(new EfCoreLoggerProvider());
		_logger = loggerFactory.CreateLogger("DataAccessMsSqlServerProvider");

	}
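
Because DataAccessMsSqlServerProvider is registered as a scoped service, this constructor runs (and adds the provider) for every request. An arguably cleaner alternative, sketched here, is to register the provider once in Startup.Configure using the injected ILoggerFactory:

public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
{
	// register the EF Core logger provider once for the whole application
	loggerFactory.AddProvider(new EfCoreLoggerProvider());

	// ... rest of the pipeline configuration
}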

The EF Core logs

Here’s the logs produced for an insert command:

Executing action method AspNet5MultipleProject.Controllers.DataEventRecordsController.AddTest (EFCoreFeaturesAndDiag) with arguments ((null)) - ModelState is Valid
Opening connection to database 'EfcoreTest' on server 'N275\MSSQLSERVER2014'.
Beginning transaction with isolation level 'Unspecified'.
Executed DbCommand (60ms) [Parameters=[@p0='?' (Size = 4000), @p1='?' (Size = 4000), @p2='?', @p3='?'], CommandType='Text', CommandTimeout='30']
SET NOCOUNT ON;
INSERT INTO [SourceInfos] ([Description], [Name], [Timestamp], [UpdatedTimestamp])
VALUES (@p0, @p1, @p2, @p3);
SELECT [SourceInfoId]
FROM [SourceInfos]
WHERE @@ROWCOUNT = 1 AND [SourceInfoId] = scope_identity();
Executed DbCommand (1ms) [Parameters=[@p4='?' (Size = 4000), @p5='?' (Size = 4000), @p6='?', @p7='?', @p8='?'], CommandType='Text', CommandTimeout='30']
SET NOCOUNT ON;
INSERT INTO [DataEventRecords] ([_description], [Name], [SourceInfoId], [Timestamp], [UpdatedTimestamp])
VALUES (@p4, @p5, @p6, @p7, @p8);
SELECT [DataEventRecordId]
FROM [DataEventRecords]
WHERE @@ROWCOUNT = 1 AND [DataEventRecordId] = scope_identity();
Committing transaction.
Closing connection to database 'EfcoreTest' on server 'N275\MSSQLSERVER2014'.
Executed action method AspNet5MultipleProject.Controllers.DataEventRecordsController.AddTest (EFCoreFeaturesAndDiag), returned result Microsoft.AspNetCore.Mvc.OkObjectResult.
No information found on request to perform content negotiation.
Selected output formatter 'Microsoft.AspNetCore.Mvc.Formatters.StringOutputFormatter' and content type 'text/plain; charset=utf-8' to write the response.
Executing ObjectResult, writing value Microsoft.AspNetCore.Mvc.ControllerContext.
Executed action AspNet5MultipleProject.Controllers.DataEventRecordsController.AddTest (EFCoreFeaturesAndDiag) in 1181.5083ms
Connection id "0HL13KIK3Q7QG" completed keep alive response.
Request finished in 1259.1888ms 200 text/plain; charset=utf-8
Request starting HTTP/1.1 GET http://localhost:46799/favicon.ico
The request path /favicon.ico does not match an existing file
Request did not match any routes.
Connection id "0HL13KIK3Q7QG" completed keep alive response.
Request finished in 36.2143ms 404

MS SQL Profiler

And here’s the corresponding insert request in the SQL Profiler from MS SQL Server.

sqlprofiler_efcore_01

EF Core Find

One feature which was also implemented in EF Core 1.1 is the Find method, which is very convenient to use.

public DataEventRecord GetDataEventRecord(long dataEventRecordId)
{
	return _context.DataEventRecords.Find(dataEventRecordId);
}
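
Find is also slightly different from a normal query: it checks the change tracker first, and if an entity with the given primary key is already tracked, it is returned without a round trip to the database. A small illustration, not from the original sample:

var first = _context.DataEventRecords.Find(1L);  // executes a SELECT
var second = _context.DataEventRecords.Find(1L); // served from the change tracker, no SQL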

EF Core is looking really good and with the promise of further features and providers, this is becoming really exciting.

Links:

https://docs.microsoft.com/en-us/ef/

https://docs.microsoft.com/en-us/ef/core/miscellaneous/logging

https://blogs.msdn.microsoft.com/dotnet/2016/11/16/announcing-entity-framework-core-1-1/

https://msdn.microsoft.com/en-us/library/ms181091.aspx

https://github.com/aspnet/EntityFramework/releases/tag/rel%2F1.1.0

https://damienbod.com/2016/01/07/experiments-with-entity-framework-7-and-asp-net-5-mvc-6/

https://damienbod.com/2016/01/11/asp-net-5-with-postgresql-and-entity-framework-7/

https://damienbod.com/2015/12/05/asp-net-5-mvc-6-file-upload-with-ms-sql-server-filetable/

https://damienbod.com/2016/09/22/setting-the-nlog-database-connection-string-in-the-asp-net-core-appsettings-json/



Andrew Lock: Using a culture constraint and redirecting 404s with the url culture provider

This is the next in a series of posts on using the middleware as filters feature of ASP.NET Core 1.1 to add a url culture provider to your application. To get an idea for how this works, take a look at the microsoft.com homepage, which includes the request culture in the url.

Using a culture constraint and redirecting 404s with the url culture provider

In my original post, I showed how you could set this up in your own app using the new RouteDataRequestCultureProvider which shipped with ASP.NET Core 1.1. When combined with the middleware as filters feature, you can extract the culture name from the url and use it to update the request culture.

In my previous post, we extended our implementation to set up global conventions, to ensure that all our routes would be prefixed with a {culture} url segment. As I pointed out in that post, the downside to this approach is that urls without a culture segment are no longer valid. Hitting the home page / of your application would give a 404 - hardly a friendly user experience!

In this post, I'll show how we can create a custom route constraint to help prevent invalid route matching, and add additional routes to catch those pesky 404s by redirecting to a cultured version of the url.

Creating a custom route constraint

As a reminder, in the last post we set up both a global route and an IApplicationModelConvention for attribute routes. The techniques described in this post can be used with both approaches, but I will just talk about the global route for brevity.

The global route we created used a {culture} segment which is extracted by the CultureProvider to determine the request culture:

app.UseMvc(routes =>
{
    routes.MapRoute(
        name: "default",
        template: "{culture}/{controller=Home}/{action=Index}/{id?}");
});

One of the problems with this route as it stands is that there are no limitations on what can match the {culture} segment. If I navigate to /gibberish/ then that will match the route, using the default values for controller and action, and setting culture=gibberish as a route value.

Using a culture constraint and redirecting 404s with the url culture provider

Note that the url contains the route value gibberish, even though the request has fallen back to the default culture, as gibberish is not a valid culture. Whether you consider this a big problem or not is somewhat up to you, but consider the case where the url is /Home/Index - that corresponds to a culture of Home and a controller of Index, even though this is clearly not the intention of the url.

Creating a constraint using regular expressions

We can mitigate this issue by adding a constraint to the route value. Constraints limit the values that a route value is allowed to have. If the route value does not satisfy the constraint, then the route will not match the request. There are a whole host of constraints you can use in your routes, such as restricting to integers, maximum lengths, whether the value is optional etc. You can also create new ones.

We want to restrict our {culture} route value to be a valid culture name, i.e. a 2 letter language code, optionally followed by a hyphen and a 2 letter region code. Now, ideally we would also validate that the 2 letters are actually a valid language (e.g. en, de, and fr are valid while zz is not), but for our purposes a simple regular expression will suffice.

With this slightly simplified model, we can easily create a new constraint to satisfy our requirements using the RegexRouteConstraint base class to do all the heavy lifting for us:

using Microsoft.AspNetCore.Routing.Constraints;

public class CultureRouteConstraint : RegexRouteConstraint  
{
    public CultureRouteConstraint()
        : base(@"^[a-zA-Z]{2}(\-[a-zA-Z]{2})?$") { }
}

The next step before we can use the constraint in our routes is to tell the router about it. We do this by providing a string key for it, and registering our constraint with the RouteOptions object in ConfigureServices. I chose the key "culturecode".

services.Configure<RouteOptions>(opts =>  
    opts.ConstraintMap.Add("culturecode", typeof(CultureRouteConstraint)));
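
As a quick aside, for a one-off route you could skip the custom constraint class entirely and use routing's built-in inline regex constraint instead. A sketch is below; note that { and } must be doubled to escape them inside a route template:

routes.MapRoute(
    name: "default",
    template: @"{culture:regex(^[a-zA-Z]{{2}}(\-[a-zA-Z]{{2}})?$)}/{controller=Home}/{action=Index}/{id?}");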

With this in place, we can start using the constraint in our routes.

Using a custom constraint in routes

Using the constraint is as simple as adding the key "culturecode" after a colon when specifying our route values:

app.UseMvc(routes =>  
{
    routes.MapRoute(
        name: "default",
        template: "{culture:culturecode}/{controller=Home}/{action=Index}/{id?}");
});

Now, if we hit the gibberish url, we are met with the following instead:

Using a culture constraint and redirecting 404s with the url culture provider

Success! Sort of. Depending on how you look at it. The constraint is certainly doing the job, as the url provided does not match the specified route, so MVC returns a 404.

Adding the culture constraint doesn't seem to achieve a whole lot on its own, but it allows us to more safely add additional catch-all routes, to handle cases where the request culture was not provided.

Handling urls with no specified culture

As I mentioned in my last post, one of the problems with adding culture to the global routing conventions is that urls such as your home page at / will not match, and will return 404s.

How you want to handle this is a matter of opinion. Maybe you want to have every 'culture-less' route match its 'cultured' equivalent with the default culture, so / would serve the same data as /en-GB/ (for your default culture).

An approach I prefer (and in fact the behaviour you see on the www.microsoft.com website) is that hitting a culture-less route sends a 302 redirect to the cultured route. In that case, / would redirect to /en-GB/.

We can achieve this behaviour by combining our culture constraint with a couple of additional routes, which we'll place after our global route defined above. I'll introduce the new routes one at a time.

routes.MapGet("{culture:culturecode}/{*path}", appBuilder => { });  

This route has two sections to it; the first route value is the {culture} value as we've seen before. The second is a catch-all route which will match anything at all. This route would catch paths such as /en-GB/this/is/the/path, /en-US/, /es/Home/Missing - basically anything that has a valid culture value.

The handler for this route essentially does nothing - normally you would configure how to handle the matched request, but here I am explicitly not adding anything to the pipeline, so that anything matching this route will return a 404. That means any URL which

  1. Has a culture; and
  2. Does not match the previous global route url

will return a 404.

Redirecting culture-less routes to the default culture

The above route does not do anything when used on its own after the global route, but it allows us to use a complete catch-all route afterward. It essentially filters out any requests that already have a culture route-value specified.

To redirect culture-less routes, we can use the following route:

routes.MapGet("{*path}", (RequestDelegate)(ctx =>  
{
    var defaultCulture = localizationOptions.DefaultRequestCulture.Culture.Name;
    var path = ctx.GetRouteValue("path") ?? string.Empty;
    var culturedPath = $"/{defaultCulture}/{path}";
    ctx.Response.Redirect(culturedPath);
    return Task.CompletedTask;
}));

This route uses a different overload of MapGet to provide a RequestDelegate rather than the Action<IApplicationBuilder> we used in the previous route. The difference is that a RequestDelegate explicitly handles a matched route, while the previous route was essentially forking the pipeline when the route matched.

This route again uses a catch-all route value called {path}, which this time captures the whole request path.

First, we obtain the name of the default culture from the RequestLocalizationOptions which we inject into the Configure method (see below for the full code in context). This is en-GB in my case, but it might be en-US, de etc. for you.

Next, we obtain the request url by fetching the {path} from the request and combine it with our default culture to create the culturedPath.

Finally, we redirect to the culture path and return a completed Task to satisfy the RequestDelegate method signature.

You may notice that I am only redirecting on a GET request. This is to prevent unexpected side effects, and in practice should not be an issue for most MVC sites, as users will be redirected to cultured urls when first hitting your site.

Putting it all together

We now have all the pieces we need to add redirecting to our MVC application. Our Configure method should now look something like this:

public void Configure(IApplicationBuilder app, RequestLocalizationOptions localizationOptions)  
{
    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{culture:culturecode}/{controller=Home}/{action=Index}/{id?}");
        routes.MapGet("{culture:culturecode}/{*path}", appBuilder => { });
        routes.MapGet("{*path}", (RequestDelegate)(ctx =>
        {
            var defaultCulture = localizationOptions.DefaultRequestCulture.Culture.Name;
            var path = ctx.GetRouteValue("path") ?? string.Empty;
            var culturedPath = $"/{defaultCulture}/{path}";
            ctx.Response.Redirect(culturedPath);
            return Task.CompletedTask;
        }));
    });
}

Now when we hit our homepage at localhost/ we are redirected to localhost/en-GB/ - a much nicer experience for the user than the 404 we received previously!

Using a culture constraint and redirecting 404s with the url culture provider

If we consider the url I described earlier, localhost/gibberish/Home/Index/, we will still receive a 404, as before. Note however that the user is redirected to a correctly cultured route first:

Using a culture constraint and redirecting 404s with the url culture provider

The first time the url is hit, it skips the first and second routes, as it does not have a culture, and is redirected to its cultured equivalent, localhost/en-GB/gibberish/Home/Index/.

When this url is hit, it matches the first route, but attempts to find a GibberishController, which obviously does not exist. It therefore matches our second, cultured catch-all route, which returns a 404. The purpose of this second route becomes clear here: it prevents an infinite redirect loop, and ensures we return a 404 for urls which genuinely should return Not Found.

Summary

In this post I showed how you could extend the global conventions for culture I described in my previous post to handle the case when a user does not provide the culture in the url.

Using a custom routing constraint and two catch-all routes it is possible to have a single 'correct' route which contains a culture, and to re-map culture-less requests onto this route.

For more details on creating and testing custom route constraints, I recommend you check out this post by Scott Hanselman.


Dominick Baier: IdentityServer4 and ASP.NET Core 1.1

aka RC5 – last RC – promised!

The update from ASP.NET Core 1.0 (aka LTS – long term support) to ASP.NET Core 1.1 (aka Current) didn’t go so well (at least IMHO).

There were a couple of breaking changes, both in the APIs and in behaviour - especially around challenge/response-based authentication middleware and EF Core.

Long story short – it was not possible for us to make IdentityServer support both versions. That’s why we decided to move to 1.1, which includes a bunch of bug fixes, and will also most probably be the version that ships with the new Visual Studio.

To be more specific – we build against ASP.NET Core 1.1 and the 1.0.0-preview2-003131 SDK.

Here’s a guide that describes how to update your host to 1.1. Our docs and samples have been updated.


Filed under: ASP.NET, OAuth, OpenID Connect, WebAPI


Andrew Lock: Applying the RouteDataRequest CultureProvider globally with middleware as filters

In my last post I showed how you could use the middleware as filters feature of ASP.NET Core 1.1.0 along with the RouteDataRequestCultureProvider to set the culture of your application from the url. This allowed you to distinguish between different cultures from a url segment, for example www.microsoft.com/en-GB/ and www.microsoft.com/fr-FR/.

The main downside to that approach was that it required inserting an additional {culture} route segment into all your routes, so that the RouteDataRequestCultureProvider could extract the culture, and adding a MiddlewareFilter to every applicable controller. I only showed an example for when you are using attribute routing, but it would also be necessary to add {culture} to all your convention-based routes too (if you're using them).

In this post, I'll show the various ways you can configure your routes globally, so that all your urls will have a culture prefix by default.

Adding a global MiddlewareFilter

I'm going to be continuing where I left off in the last post, with a ValuesController I am using for displaying the current culture:

[Route("{culture}/[controller]")]
[MiddlewareFilter(typeof(LocalizationPipeline))]
public class ValuesController : Controller  
{
    [Route("ShowMeTheCulture")]
    public string GetCulture()
    {
        return $"CurrentCulture:{CultureInfo.CurrentCulture.Name}, CurrentUICulture:{CultureInfo.CurrentUICulture.Name}";
    }
}

Hitting the url /fr-FR/Values/ShowMeTheCulture, for example, would show that the current culture was set to fr-FR, which was our goal. The downside to using this approach more generally is that we would need to add the MiddlewareFilter to all our controllers, and add the {culture} url segment to every route. Ideally, we want to be able to define our routes and controllers the same way we did before we were thinking about localisation.

The first of these problems is easily fixed by adding the MiddlewareFilter as a Global filter to MVC. You can do this by updating the call to AddMvc in ConfigureServices of your Startup class:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMvc(opts =>
    {
        opts.Filters.Add(new MiddlewareFilterAttribute(typeof(LocalizationPipeline)));
    });

    // other service configuration
}

By adding the filter here, we can remove the MiddlewareFilter attribute from our ValuesController; it will be automatically applied to all our action methods. That's the first step done!

Using a convention to globally add a culture prefix to attribute routes

Now we've dealt with that, we can take a look at our RouteAttribute based routes. We want to avoid having to explicitly add the {culture} segment to every route we define.

Luckily, in ASP.NET Core MVC, you can register custom conventions at application startup which apply additional rules to your routes. For example, you could ensure all your url paths are prefixed with /api, you could include the current environment (live/test) in the url, or you could rename your action methods completely.

In this case, we are going to prefix all our attribute routes with {culture} so we don't have to do it manually. I'm not going to go extensively into how the convention works, so I strongly suggest checking out the above links for more details!

First we create our convention by implementing IApplicationModelConvention:

public class LocalizationConvention : IApplicationModelConvention  
{
    public void Apply(ApplicationModel application)
    {
        var culturePrefix = new AttributeRouteModel(new RouteAttribute("{culture}"));

        foreach (var controller in application.Controllers)
        {
            var matchedSelectors = controller.Selectors.Where(x => x.AttributeRouteModel != null).ToList();
            if (matchedSelectors.Any())
            {
                foreach (var selectorModel in matchedSelectors)
                {
                    selectorModel.AttributeRouteModel = AttributeRouteModel.CombineAttributeRouteModel(culturePrefix,
                        selectorModel.AttributeRouteModel);
                }
            }

            var unmatchedSelectors = controller.Selectors.Where(x => x.AttributeRouteModel == null).ToList();
            if (unmatchedSelectors.Any())
            {
                foreach (var selectorModel in unmatchedSelectors)
                {
                    selectorModel.AttributeRouteModel = culturePrefix;
                }
            }
        }
    }
}

This convention is pretty much identical to the one presented by Filip from StrathWeb. It works by looping through all the Controllers in the application, and checking whether the controller has a RouteAttribute. If it does, then it combines the existing route template with the {culture} prefix, otherwise it adds a new one.

After this convention has run, every controller should effectively have a RouteAttribute that is prefixed with {culture}. The next thing to do is to let MVC know about our new convention. We can do this by adding it in the call to AddMvc:

public void ConfigureServices(IServiceCollection services)  
{
    // Add framework services.
    services.AddMvc(opts =>
    {
        opts.Conventions.Insert(0, new LocalizationConvention());
        opts.Filters.Add(new MiddlewareFilterAttribute(typeof(LocalizationPipeline)));
    });
}

With that in place, we can update our ValuesController to remove the {culture} prefix from the RouteAttribute, and can delete the MiddlewareFilterAttribute entirely:

[Route("[controller]")]
public class ValuesController : Controller  
{
    //overall route /{culture}/Values/ShowMeTheCulture
    [Route("ShowMeTheCulture")]
    public string GetCulture()
    {
        return $"CurrentCulture:{CultureInfo.CurrentCulture.Name}, CurrentUICulture:{CultureInfo.CurrentUICulture.Name}";
    }
}

And we're done! We don't need to reference the {culture} directly in our route attributes, but our urls will still require it:

Applying the RouteDataRequest CultureProvider globally with middleware as filters

Caveats

There are a couple of points to be aware of with this method. First off, it's important to understand we have replaced the previous route; so the previous route of /Values/ShowMeTheCulture is no longer accessible - you must provide the culture, just as if you had added the {culture} segment to the RouteAttribute directly:

Applying the RouteDataRequest CultureProvider globally with middleware as filters

The other point to be aware of is that we specified the {culture} prefix on the controller RouteAttribute in the convention. That means using an action RouteAttribute that specifies a path relative to root (which ignores the controller RouteAttribute) will not contain the {culture} prefix.

For example, using [Route("~/ShowMeTheCulture")] on an action will correspond to the url /ShowMeTheCulture - not /{culture}/ShowMeTheCulture. This may or may not be desirable for your use case, but it's likely you want these routes to be localised too, so it's worth keeping an eye out for. There's probably a different way of writing the convention to handle this - one possible approach is sketched below - but I haven't dug into it too far yet, so please let me know below if you know a better way!
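
One possible (untested) tweak is to extend the convention to also visit the action selectors and rewrite any root-relative templates, so they get the culture prefix rather than silently dropping it. A rough sketch, which would sit inside the Apply method above:

foreach (var controller in application.Controllers)
{
    foreach (var action in controller.Actions)
    {
        foreach (var selector in action.Selectors
            .Where(s => s.AttributeRouteModel?.Template != null))
        {
            var template = selector.AttributeRouteModel.Template;
            if (template.StartsWith("~/") || template.StartsWith("/"))
            {
                // keep the template root-relative (so it still overrides the
                // controller route) but insert the culture prefix after the root
                selector.AttributeRouteModel.Template =
                    "~/" + AttributeRouteModel.CombineTemplates(
                        "{culture}", template.TrimStart('~', '/'));
            }
        }
    }
}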

Updating the default route handler

We have covered adding a convention for attribute routing, but what if you're using global route handling conventions? In the default templates, ASP.NET Core MVC is configured with the following route:

app.UseMvc(routes =>  
{
    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}");
});

This allows you to create controllers without using a RouteAttribute. Instead, the controller and action will be inferred. This lets you create Controllers like this:

public class HomeController : Controller
{
    public string Index()
    {
        return $"CurrentCulture:{CultureInfo.CurrentCulture.Name}, CurrentUICulture:{CultureInfo.CurrentUICulture.Name}";
    }
}

The default values in our routing convention mean that this action method will be hit for the urls /Home/Index/, /Home/ and just /.

If we update the default convention with a {culture} segment, then we can continue to have this behaviour, but with the culture prefixed to the url, so that they map to /en-GB/Home/Index/ or /fr-FR/ for example. It is as simple as updating the template in UseMvc:

app.UseMvc(routes =>  
{
    routes.MapRoute(
        name: "default",
        template: "{culture}/{controller=Home}/{action=Index}/{id?}");
});

Now when we browse our website, we will get the desired result:

Applying the RouteDataRequest CultureProvider globally with middleware as filters

Caveats

The biggest caveat here is that the IApplicationModelConvention we added previously will break this global route by adding a RouteAttribute to controllers that do not have one. Generally speaking, if you're using an IApplicationModelConvention I'd recommend using either the global routes or RouteAttributes rather than trying to combine both. Again, there's probably a way to write the convention to work with both attribute and global routes but I haven't dug too deep yet.

Also, as before, you won't be able to access the controller at /Home/Index anymore - you always have to specify the culture in the url.

Other considerations

With both of these routes, one major issue is that you always need to specify the culture in the url. This may be fine for an API, but for a website this could give a poor user experience - hitting the base url / would return a 404 with this setup!

It's important to setup additional routes to handle this, most likely redirecting to a route containing the default culture. For example, if you hit the url www.microsoft.com/ you will be redirected to www.microsoft.com/en-GB/ or something similar. This may require adding additional conventions or global routes depending on your setup. I will cover some approaches for doing this in a couple of upcoming posts.

Summary

This post led on from my previous post in which I showed how you could use the middleware as filters feature of ASP.NET Core to set the culture for a request using a URL segment. This post showed how to extend that setup to avoid having to add explicit {culture} segments to all of your controllers by adding a global convention (for route attributes) or by amending your global route configuration.

There are still a number of limitations to be aware of in this setup as I highlighted, but it brings you closer to a complete url localisation solution!


Ben Foster: Bare metal APIs with ASP.NET Core MVC

ASP.NET Core MVC now provides a true "one asp.net" framework that can be used for building both APIs and websites. But what if you only want to build an API?

Most of the ASP.NET Core MVC tutorials I've seen advise using the Microsoft.AspNetCore.Mvc package. While this does indeed give you what you need to build APIs, it also gives you a lot more:

  • Microsoft.AspNetCore.Mvc.ApiExplorer
  • Microsoft.AspNetCore.Mvc.Cors
  • Microsoft.AspNetCore.Mvc.DataAnnotations
  • Microsoft.AspNetCore.Mvc.Formatters.Json
  • Microsoft.AspNetCore.Mvc.Localization
  • Microsoft.AspNetCore.Mvc.Razor
  • Microsoft.AspNetCore.Mvc.TagHelpers
  • Microsoft.AspNetCore.Mvc.ViewFeatures
  • Microsoft.Extensions.Caching.Memory
  • Microsoft.Extensions.DependencyInjection
  • NETStandard.Library

A few of these packages are still needed if you're building APIs but many are specific to building full websites.

After installing the above package we typically register MVC in Startup.ConfigureServices like so:

services.AddMvc();

This code is responsible for wiring up the necessary MVC services with the application container. Let's look at what this actually does:

public static IMvcBuilder AddMvc(this IServiceCollection services)
{
    var builder = services.AddMvcCore();

    builder.AddApiExplorer();
    builder.AddAuthorization();

    AddDefaultFrameworkParts(builder.PartManager);

    // Order added affects options setup order

    // Default framework order
    builder.AddFormatterMappings();
    builder.AddViews();
    builder.AddRazorViewEngine();
    builder.AddCacheTagHelper();

    // +1 order
    builder.AddDataAnnotations(); // +1 order

    // +10 order
    builder.AddJsonFormatters();

    builder.AddCors();

    return new MvcBuilder(builder.Services, builder.PartManager);
}

Again, most of these service registrations refer to components used for rendering web pages.

Bare Metal APIs

It turns out that the ASP.NET team anticipated that developers may only want to build APIs and nothing else, so they gave us the ability to do just that.

First of all, rather than installing Microsoft.AspNetCore.Mvc, only install Microsoft.AspNetCore.Mvc.Core. This will give you the bare MVC middleware (routing, controllers, HTTP results) and not a lot else.

In order to process JSON requests and return JSON responses we also need the Microsoft.AspNetCore.Mvc.Formatters.Json package.

Then, to add both the core MVC middleware and JSON formatter, add the following code to ConfigureServices:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvcCore()
        .AddJsonFormatters();
}
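
If you later need authorization or CORS in your API, the same IMvcCoreBuilder exposes the individual pieces we saw in the AddMvc source above, so you can opt in selectively. For example:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvcCore()
        .AddAuthorization() // only needed if you use [Authorize]
        .AddCors()          // only needed if you serve cross-origin requests
        .AddJsonFormatters();
}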

The final thing to do is to change your controllers to derive from ControllerBase instead of Controller. This provides a base class for MVC controllers without any View support.
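
A minimal controller might then look something like this (an illustrative sketch, not from the original post):

[Route("api/[controller]")]
public class ValuesController : ControllerBase
{
    [HttpGet]
    public IActionResult Get()
    {
        // serialised by the JSON formatter registered above
        return Ok(new[] { "value1", "value2" });
    }
}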

Looking at the final list of packages in project.json, you can see we really don't need that much after all, especially given most of these are related to configuration and logging:

"Microsoft.AspNetCore.Mvc.Core": "1.1.0",
"Microsoft.AspNetCore.Mvc.Formatters.Json": "1.1.0",
"Microsoft.AspNetCore.Server.IISIntegration": "1.1.0",
"Microsoft.AspNetCore.Server.Kestrel": "1.1.0",
"Microsoft.Extensions.Configuration.EnvironmentVariables": "1.1.0",
"Microsoft.Extensions.Configuration.FileExtensions": "1.1.0",
"Microsoft.Extensions.Configuration.Json": "1.1.0",
"Microsoft.Extensions.Configuration.CommandLine": "1.1.0",
"Microsoft.Extensions.Logging": "1.1.0",
"Microsoft.Extensions.Logging.Console": "1.1.0",
"Microsoft.Extensions.Logging.Debug": "1.1.0"

You can find the complete code on GitHub.


Dominick Baier: New in IdentityServer4: Resource-based Configuration

For RC4 we decided to re-design our configuration object model for resources (formerly known as scopes).

I know, I know – we are not supposed to make fundamental breaking changes once reaching the RC status – but hey – we kind of had our “DNX” moment, and realized that we either change this now – or never.

Why did we do that?
We spent the last couple of years explaining OpenID Connect and OAuth 2.0 based architectures to hundreds of students in training classes, attendees at conferences, fellow developers, and customers from all types of industries.

While most concepts are pretty clear and make total sense – scopes were the most confusing part for most people. The abstract nature of a scope, as well as the fact that the term scope has a somewhat different meaning in OpenID Connect and OAuth 2.0, made this concept really hard to grasp.

Maybe it’s also partly our fault that we stayed very close to spec-speak with our object model and abstraction level, and that we forced that concept onto every user of IdentityServer.

Long story short – every time I needed to explain scope, I said something like “A scope is a resource a client wants to access”… and “there are two types of scopes: identity related and APIs…”.

This got us thinking whether it would make more sense to introduce the notion of resources in IdentityServer, and get rid of scopes.

What did we do?
Before RC4 – our configuration object model had three main parts: users, clients, and scopes (and there were two types of scopes – identity and resource – and some overlapping settings between them).

Starting with RC4 – the configuration model no longer has scope as a top-level concept, but rather identity resources and API resources.

terminology

We think this is a more natural way (and language) to model a typical token-based system.

From our new docs:

User
A user is a human that is using a registered client to access resources.

Client
A client is a piece of software that requests tokens from IdentityServer – either for authenticating a user (requesting an identity token) or for accessing a resource (requesting an access token). A client must be first registered with IdentityServer before it can request tokens.

Resources
Resources are something you want to protect with IdentityServer – either identity data of your users (like user id, name, email..), or APIs.

Enough talk, show me the code!
Pre-RC4, you would have used a scope store to return a flat list of scopes. Now the new resource store deals with two different resource types: IdentityResource and ApiResource.

Let’s start with identity – standard scopes used to be defined like this:

public static IEnumerable<Scope> GetScopes()
{
    return new List<Scope>
    {
        StandardScopes.OpenId,
        StandardScopes.Profile
    };
}

..and now:

public static IEnumerable<IdentityResource> GetIdentityResources()
{
    return new List<IdentityResource>
    {
        new IdentityResources.OpenId(),
        new IdentityResources.Profile()
    };
}

Not very different. Now let’s define a custom identity resource with associated claims:

var customerProfile = new IdentityResource(
    name:        "profile.customer",
    displayName: "Customer profile",
    claimTypes:  new[] { "name", "status", "location" });

This is all that’s needed for 90% of all identity resources you will ever define. If you need to tweak details, you can set various properties on the IdentityResource class.

Let’s have a look at the API resources. You used to define a resource-scope like this:

public static IEnumerable<Scope> GetScopes()
{
    return new List<Scope>
    {
        new Scope
        {
            Name = "api1",
            DisplayName = "My API #1",
 
            Type = ScopeType.Resource
        }
    };
}

..and the new way:

public static IEnumerable<ApiResource> GetApis()
{
    return new[]
    {
        new ApiResource("api1", "My API #1")
    };
}

Again – for the simple case there is not a huge difference. The ApiResource object model starts to become more powerful when you have advanced requirements like APIs with multiple scopes (and maybe different claims based on the scope) and support for introspection, e.g.:

public static IEnumerable<ApiResource> GetApis()
{
    return new[]
    {
        new ApiResource
        {
            Name = "calendar",
 
            // secret for introspection endpoint
            ApiSecrets =
            {
                new Secret("secret".Sha256())
            },
 
            // claims to include in access token
            UserClaims =
            {
                JwtClaimTypes.Name,
                JwtClaimTypes.Email
            },
 
            // API has multiple scopes
            Scopes =
            {
                new Scope
                {
                    Name = "calendar.read_only",
                    DisplayName = "Read only access to the calendar"
                },
                new Scope
                {
                    Name = "calendar.full_access",
                    DisplayName = "Full access to the calendar",
                    Emphasize = true,
 
                    // include additional claim for that scope
                    UserClaims =
                    {
                        "status"
                    }
                }
            }
        }
    };
}

IOW – We reversed the configuration approach, and you now model APIs (which might have scopes) – and not scopes (that happen to represent an API).
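
To round this off, here is roughly how the new resource methods might be wired up at startup, together with a client that is allowed one of the calendar scopes. This is a sketch assuming the in-memory configuration extensions; the client shown is illustrative, not from the original post:

public void ConfigureServices(IServiceCollection services)
{
    services.AddIdentityServer()
        .AddTemporarySigningCredential()
        .AddInMemoryIdentityResources(GetIdentityResources())
        .AddInMemoryApiResources(GetApis())
        .AddInMemoryClients(new[]
        {
            new Client
            {
                ClientId = "calendar.client",
                AllowedGrantTypes = GrantTypes.ClientCredentials,
                ClientSecrets = { new Secret("secret".Sha256()) },

                // clients are granted individual scopes, not whole APIs
                AllowedScopes = { "calendar.read_only" }
            }
        });
}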

We like the new model much better, as it reflects how you actually architect a token-based system. We hope you like it too – and sorry for moving the cheese ;)

As always – give us feedback on the issue tracker. RTM is very close.


Filed under: .NET Security, ASP.NET, OAuth, Uncategorized, WebAPI


Andrew Lock: Url culture provider using middleware as filters in ASP.NET Core 1.1.0

In this post, I show how you can use the 'middleware as filters' feature of ASP.NET Core 1.1.0 to easily add request localisation based on url segments.

The end goal we are aiming for is to easily specify the culture in the url, similar to the way Microsoft handle it on their public website. If you navigate to https://microsoft.com, then you'll be redirected to https://www.microsoft.com/en-gb/ (or similar for your culture).

Url culture provider using middleware as filters in ASP.NET Core 1.1.0

Using URL parameters is one of the approaches to localisation Google suggests, as it is more user- and SEO-friendly than some of the other options.

Localisation in ASP.NET Core 1.0.0

The first step to localising your application is to associate the current request with a culture. Once you have that, you can customise the strings in your request to match the culture as required.

Localisation is already perfectly possible in ASP.NET Core 1.0.0 (and the subsequent patch versions). You can localise your application using the RequestLocalizationMiddleware, and you can use a variety of providers to obtain the culture from cookies, querystrings or the Accept-Language header out of the box.

It is also perfectly possible to write your own provider to obtain the culture from somewhere else, from the url for example. You could use the RoutingMiddleware to fork the pipeline, and extract a culture segment from it, and then run your MVC pipeline inside that fork, but you would still need to be sure to handle the other fork, where the cultured url pattern is not matched and a culture can't be extracted.

While possible, this is a little bit messy, and doesn't necessarily correspond to the desired behaviour. Luckily, in ASP.NET Core 1.1.0, Microsoft have added two features that make the process far simpler: middleware as filters, and the RouteDataRequestCultureProvider.

In my previous post, I looked at the middleware as filters feature in detail, showing how it is implemented; in this post I'll show how you can put the feature to use.

The other piece of the puzzle, the RouteDataRequestCultureProvider, does exactly what you would expect - it attempts to identify the current culture based on RouteData segments. You can use this as a drop-in provider if you are using the RoutingMiddleware approach mentioned previously, but I will show how to use it in the MVC pipeline in combination with the middleware as filters feature. To see how the provider can be used in a normal middleware pipeline, check out the tests in the localisation repository on GitHub.

Setting up the project

As I mentioned, these features are all available in the ASP.NET Core 1.1.0 release, so you will need to install the preview version of the .NET core framework. Just follow the instructions in the announcement blog post.

After installing (and fighting with a couple of issues), I started by scaffolding a new web project using

dotnet new -t web  

which creates a new MVC web application. For simplicity I stripped out most of the web pieces and added a single ValuesController that simply writes out the current culture when you hit /Values/ShowMeTheCulture:

public class ValuesController : Controller  
{
    [Route("ShowMeTheCulture")]
    public string GetCulture()
    {
        return $"CurrentCulture:{CultureInfo.CurrentCulture.Name}, CurrentUICulture:{CultureInfo.CurrentUICulture.Name}";
    }
}

Adding localisation

The next step was to add the necessary localisation services and options to the project. This is the same as for version 1.0.0 so you can follow the same steps from the docs or my previous posts. The only difference is that we will add a new RequestCultureProvider.

First, add the Microsoft.AspNetCore.Localization.Routing package to your project.json. You may need to update some other packages too, to ensure the versions align. Note that not all the packages will necessarily be 1.1.0; it depends on the latest version of each package that has shipped.

{
  "dependencies": {
    "Microsoft.NETCore.App": {
      "version": "1.1.0",
      "type": "platform"
    },
    "Microsoft.AspNetCore.Mvc": "1.1.0",
    "Microsoft.AspNetCore.Routing": "1.1.0",
    "Microsoft.AspNetCore.Server.Kestrel": "1.0.1",
    "Microsoft.Extensions.Configuration.EnvironmentVariables": "1.0.0",
    "Microsoft.Extensions.Configuration.Json": "1.0.0",
    "Microsoft.Extensions.Options": "1.1.0",
    "Microsoft.Extensions.Logging": "1.0.0",
    "Microsoft.Extensions.Logging.Console": "1.0.0",
    "Microsoft.Extensions.Logging.Debug": "1.0.0",
    "Microsoft.AspNetCore.Localization.Routing": "1.1.0"
  },

You can now configure the RequestLocalizationOptions in the ConfigureServices method of your Startup class:

public void ConfigureServices(IServiceCollection services)  
{
    // Add framework services.
    services.AddMvc();

    var supportedCultures = new[]
    {
        new CultureInfo("en-US"),
        new CultureInfo("en-GB"),
        new CultureInfo("de"),
        new CultureInfo("fr-FR"),
    };

    var options = new RequestLocalizationOptions()
    {
        DefaultRequestCulture = new RequestCulture(culture: "en-GB", uiCulture: "en-GB"),
        SupportedCultures = supportedCultures,
        SupportedUICultures = supportedCultures
    };
    options.RequestCultureProviders = new[] 
    { 
         new RouteDataRequestCultureProvider() { Options = options } 
    };

    services.AddSingleton(options);
}

This is all pretty standard up to this point. I have added the cultures I support, and defined the default culture to be en-GB. Finally, I have added the RouteDataRequestCultureProvider as the only provider I will support at this point, and registered the options in the DI container.

Adding localisation to the urls

Now we've set up our localisation options, we just need to actually try to extract the culture from the url. As a reminder, we are trying to add a culture prefix to our urls, so that /controller/action becomes /en-gb/controller/action or /fr/controller/action. There are a number of ways to achieve this, but if you are using attribute routing, one possibility is to add a {culture} routing parameter to your route:

[Route("{culture}/[controller]")]
public class ValuesController : Controller  
{
    [Route("ShowMeTheCulture")]
    public string GetCulture()
    {
        return $"CurrentCulture:{CultureInfo.CurrentCulture.Name}, CurrentUICulture:{CultureInfo.CurrentUICulture.Name}";
    }
}

With the addition of this route, we can now hit the urls defined above, but we're not yet doing anything with the {culture} segment, so all our requests use the default culture:

Url culture provider using middleware as filters in ASP.NET Core 1.1.0

To actually convert that value to a culture we need the middleware as filters feature.

Adding localisation using a MiddlewareFilter

In order to extract the culture from the RouteData we need to run the RequestLocalizationMiddleware, which will use the RouteDataRequestCultureProvider. However, in this case, we can't run it as part of the normal middleware pipeline.

Middleware can only use data that has been added by preceding components in the pipeline, but we need access to routing information (the RouteData segments). Routing doesn't happen until the MVC middleware runs, and it is routing that extracts the RouteData segments from the url. Therefore, we need request localisation to happen after action selection, but before the action executes; in other words, in the MVC filter pipeline.

To use a MiddlewareFilter, you first need to create a pipeline. This is like a mini Startup file, in which you Configure an IApplicationBuilder to define the middleware that should run as part of the pipeline. You can configure several middleware components to run in this way.

In this case, the pipeline is very simple, as we literally just need to run the RequestLocalizationMiddleware:

public class LocalizationPipeline  
{
    public void Configure(IApplicationBuilder app, RequestLocalizationOptions options)
    {
        app.UseRequestLocalization(options);
    }
}

We can then apply this pipeline using a MiddlewareFilterAttribute to our ValuesController:

[Route("{culture}/[controller]")]
[MiddlewareFilter(typeof(LocalizationPipeline))]
public class ValuesController : Controller  
{
    [Route("ShowMeTheCulture")]
    public string GetCulture()
    {
        return $"CurrentCulture:{CultureInfo.CurrentCulture.Name}, CurrentUICulture:{CultureInfo.CurrentUICulture.Name}";
    }
}

Now if we run the application, you can see the culture is resolved correctly from the url:

Url culture provider using middleware as filters in ASP.NET Core 1.1.0

And there you have it. You can now localise your application using urls instead of querystrings or cookie values. There is obviously more to getting a working solution together here. For example you need to provide an obvious route for the user to easily switch cultures. You also need to consider how this will affect your existing routes, as clearly your urls have changed!

Optional RouteDataRequestCultureProvider configuration

By default, the RouteDataRequestCultureProvider will look for a RouteData key with the name culture when determining the current culture. It also looks for a ui-culture key for setting the UI culture, but if that's missing then it will fall back to culture, as you can see in the previous screenshots. If we tweak the ValuesController RouteAttribute to be

[Route("{culture}/{ui-culture}/[controller]")]

then we can specify the two separately:

Url culture provider using middleware as filters in ASP.NET Core 1.1.0

When configuring the provider, you can change the RouteData keys to something other than culture and ui-culture if you prefer. It will have no effect on the final result; it will just change the route tokens that are used to identify the culture. For example, we could change the culture RouteData parameter to be lang when configuring the provider:

options.RequestCultureProviders = new[]
{
    new RouteDataRequestCultureProvider()
    {
        RouteDataStringKey = "lang",
        Options = options
    }
};

We could then write our attribute routes as

[Route("{lang}/[controller]")]

Summary

In this post I showed how you could use the url to localise your application by making use of the MiddlewareFilter and RouteDataRequestCultureProvider that are provided in ASP.NET Core 1.1.0. I will write a couple more posts on using this approach in practical applications.

If you're interested in how the ASP.NET team implemented the feature, then check out my previous post. You can also see an example usage on the announcement page and on Hisham's blog.


Damien Bowden: Contributing to OSS projects on gitHub using fork and upstreams

This article is a simple guideline on how you could contribute to GitHub OSS projects using a fork and an upstream. This is not the only way to do it. Git Extensions is used for this demo, but any git client can be used. In this example, aspnet/AspLabs from Microsoft is used as the target repository.

So you have something to contribute, cool, that’s the hard part.

Before you can make your contribution, you need to create a fork of the repository where you want to make your contribution. Open the project on GitHub, and click the Fork button in the top right corner.

githuboss_01

Now clone your forked repository

githuboss_02

In Git Extensions, click clone repository and select a folder somewhere on your computer.

githuboss_03

Now you have a master branch and also a remote master branch of your forked repository. The next step is to configure the upstream remote. This is required to synchronize with the parent repository, as you might not be the only person contributing to it. Click the Repository menu in Git Extensions and add a new remote repository with the url of the parent repository.

githuboss_04

Now you can pull from the upstream repository. You pull from the upstream/master branch to your local master branch. Because of this, you should NEVER work on your master branch. You can also configure git to rebase the local master onto the upstream master, if preferred.

githuboss_05

Once you have pulled from the upstream, you can push to your remote master, i.e. the forked master. Just to mention it again: NEVER WORK ON YOUR LOCAL FORKED MASTER, and you will save yourself hassle.
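
For reference, the same fork/upstream flow with the plain git command line looks roughly like this (urls and names are placeholders):

git clone https://github.com/<your-username>/AspLabs.git
cd AspLabs
git remote add upstream https://github.com/aspnet/AspLabs.git

# synchronize: pull the parent's master into your local master
git fetch upstream
git checkout master
git rebase upstream/master

# push the updated master to your fork
git push origin master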

Now you’re ready to work. Create a new branch. A good recommendation is to use the following pattern for naming:

<gitHub username>/<reason-for-the-branch-in-lowercase>

Here’s an example:

damienbod/add-urls-check

Using your GitHub username makes it easier for the person reviewing the pull request.

When your work is finished on the branch, you are ready to create a pull request. Go to the parent repository and click on the ‘New pull request’ button:

githuboss_06

Choose your working branch and select the target branch on the parent repository, usually the master.

NOTE: if your branch was created from an older master commit than the actual master on the parent, you need to pull from the upstream and rebase your branch onto the latest commit. This is easy, as you do not work on the local master.

If you are contributing to an aspnet repository, you will need to sign an electronic agreement before you can contribute.

If you are working together with a maintainer of the repository, or your pull request is the result of an issue, you could add a comment with the GitHub name of the person that will review and merge, so that he or she is notified that you are ready. They will receive a notification on GitHub.

Now just wait and fix the issues as required. Once the pull request is merged, you need to pull from the upstream on your local forked repository, and rebase if necessary to continue with your next pull request.

And who knows, you might even get a coin from Microsoft.

Thanks to Andrew Stanton-Nurse for his tips.

I am also grateful for tips from anyone on how to improve this guideline.

Links:

http://asp.net-hacker.rocks/2016/12/07/contributing-to-oss-projects-on-github-using-fork-and-upstreams.html

https://gitextensions.github.io/

gist from CristinaSolana



Andrew Lock: Exploring Middleware as MVC Filters in ASP.NET Core 1.1

One of the new features released in ASP.NET Core 1.1 is the ability to use middleware as an MVC Filter. In this post I'll take a look at how the feature is implemented by peering into the source code, rather than focusing on how you can use it. In the next post I'll look at how you can use the feature to allow greater code reuse.

Middleware vs Filters

The first step is to consider why you would choose to use middleware over filters, or vice versa. Both are designed to handle cross-cutting concerns of your application and both are used in a 'pipeline', so in some cases you could choose either successfully.

The main difference between them is their scope. Filters are a part of MVC, so they are scoped entirely to the MVC middleware. Middleware only has access to the HttpContext and anything added by preceding middleware. In contrast, filters have access to the wider MVC context, so can access routing data and model binding information for example.

Generally speaking, if you have a cross-cutting concern that is independent of MVC, then using middleware makes sense; if your cross-cutting concern relies on MVC concepts, or must run midway through the MVC pipeline, then filters make sense.

So why would you want to use middleware as filters then? A couple of reasons come to mind for me.

First, you have some middleware that already does what you want, but you now need the behaviour to occur midway through the MVC middleware. You could rewrite your middleware as a filter, but it would be nicer to just be able to plug it in as-is. This is especially true if you are using a piece of third-party middleware and you don't have access to the source code.

Second, you have functionality that needs to logically run as both middleware and a filter. In that case you can just have the one implementation that is used in both places.

Using the MiddlewareFilterAttribute

On the announcement post, you will find an example of how to use middleware as filters. Here I'll show a cut-down example, in which I want to run MyCustomMiddleware when a specific MVC action is called.

There are two parts to the process; the first is to create a middleware pipeline object:

public class MyPipeline  
{
    public void Configure(IApplicationBuilder applicationBuilder) 
    {
        // any additional configuration; MyCustomMiddlewareOptions is a
        // hypothetical options type for the hypothetical middleware
        var options = new MyCustomMiddlewareOptions();

        applicationBuilder.UseMyCustomMiddleware(options);
    }
}

and the second is to use an instance of the MiddlewareFilterAttribute on an action or a controller, wherever it is needed.

[MiddlewareFilter(typeof(MyPipeline))]
public IActionResult ActionThatNeedsCustomfilter()  
{
    return View();
}

With this setup, MyCustomMiddleware will run each time the action method ActionThatNeedsCustomfilter is called.

It's worth noting that the MiddlewareFilterAttribute on the action method does not take the type of the middleware component itself (MyCustomMiddleware); it actually takes a pipeline object which configures the middleware. Don't worry about this too much, as we'll come back to it again later.

For the rest of this post, I'll dip into the MVC repository and show how the feature is implemented.

The MiddlewareFilterAttribute

As we've already seen, the middleware filter feature starts with the MiddlewareFilterAttribute applied to a controller or method. This attribute implements the IFilterFactory interface which is useful for injecting services into MVC filters. The implementation of this interface just requires one method, CreateInstance(IServiceProvider provider):

public class MiddlewareFilterAttribute : Attribute, IFilterFactory, IOrderedFilter  
{
    public MiddlewareFilterAttribute(Type configurationType)
    {
        ConfigurationType = configurationType;
    }

    public Type ConfigurationType { get; }

    public IFilterMetadata CreateInstance(IServiceProvider serviceProvider)
    {
        var middlewarePipelineService = serviceProvider.GetRequiredService<MiddlewareFilterBuilder>();
        var pipeline = middlewarePipelineService.GetPipeline(ConfigurationType);

        return new MiddlewareFilter(pipeline);
    }
}

The implementation of the attribute is fairly self-explanatory. First, a MiddlewareFilterBuilder object is obtained from the dependency injection container. Next, GetPipeline is called on the builder, passing in the ConfigurationType that was supplied when creating the attribute (MyPipeline in the previous example).

GetPipeline returns a RequestDelegate, which represents a middleware pipeline that takes in an HttpContext and returns a Task:

public delegate Task RequestDelegate(HttpContext context);  

Finally, the delegate is used to create a new MiddlewareFilter, which is returned by the method. This pattern of using an IFilterFactory attribute to create an actual filter instance is very common in the MVC code base, and works around the problems of service injection into attributes, as well as ensuring each component sticks to the single responsibility principle.

Building the pipeline with the MiddlewareFilterBuilder

In the last snippet we saw the MiddlewareFilterBuilder being used to turn our MyPipeline type into an actual, runnable piece of middleware. Taking a look inside the MiddlewareFilterBuilder, you will see an interesting usage of Lazy<> with a ConcurrentDictionary, to ensure that each pipeline Type passed in to the service is only ever created once. This was the usage I wrote about in my last post.

The call to GetPipeline initialises a pipeline for the provided type using the BuildPipeline method, shown below in abbreviated form:

private RequestDelegate BuildPipeline(Type middlewarePipelineProviderType)  
{
    var nestedAppBuilder = ApplicationBuilder.New();

    // Get the 'Configure' method from the user provided type.
    var configureDelegate = _configurationProvider.CreateConfigureDelegate(middlewarePipelineProviderType);
    configureDelegate(nestedAppBuilder);

    nestedAppBuilder.Run(async (httpContext) =>
    {
        // additional end-middleware, covered later
    });

    return nestedAppBuilder.Build();
}

This method creates a new IApplicationBuilder, and uses it to configure a middleware pipeline, using the custom pipeline supplied earlier (MyPipeline). It then adds an additional piece of 'end-middleware' at the end of the pipeline, which I'll come back to later, and builds the pipeline into a RequestDelegate.

Creating the pipeline from MyPipeline is performed by a MiddlewareFilterConfigurationProvider, which attempts to find an appropriate Configure method on it.

You can think of the MyPipeline class as a mini-Startup class. Just like the Startup class you need a Configure method to add middleware to an IApplicationBuilder, and just like in Startup, you can inject additional services into the method. One of the big differences is that you can't have environment-specific Configure methods like ConfigureDevelopment here - your class must have one, and only one, configuration method called Configure.
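
As a sketch of such a pipeline class with an injected service (LoggingPipeline and its logging middleware are hypothetical, just to illustrate the shape; requires Microsoft.AspNetCore.Builder and Microsoft.Extensions.Logging):

public class LoggingPipeline
{
    // Additional services are resolved from the DI container,
    // just like in Startup.Configure
    public void Configure(IApplicationBuilder applicationBuilder, ILoggerFactory loggerFactory)
    {
        var logger = loggerFactory.CreateLogger<LoggingPipeline>();

        applicationBuilder.Use(async (context, next) =>
        {
            logger.LogInformation("Running filter pipeline for {Path}", context.Request.Path);
            await next();
        });
    }
}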

The MiddlewareFilter

So just to recap, you add a MiddlewareFilterAttribute to one of your action methods or controllers, passing in a pipeline to use as a filter, e.g. MyPipeline. This uses a MiddlewareFilterBuilder to create a RequestDelegate, which in turn is used to create a MiddlewareFilter. This is the object actually added to the MVC filter pipeline.

The MiddlewareFilter implements IAsyncResourceFilter, so it runs early in the filter pipeline - after AuthorizationFilters have run, but before Model Binding and Action filters. This allows you to potentially short-circuit requests completely should you need to.

The MiddlewareFilter implements the single required method, OnResourceExecutionAsync. The execution is very simple. First it records the MVC ResourceExecutingContext of the filter, as well as the next filter to execute, the ResourceExecutionDelegate, in a new MiddlewareFilterFeature. This feature is then stored against the HttpContext itself, so it can be accessed elsewhere. The middleware pipeline we created previously is then invoked using the HttpContext.

public class MiddlewareFilter : IAsyncResourceFilter  
{
    private readonly RequestDelegate _middlewarePipeline;
    public MiddlewareFilter(RequestDelegate middlewarePipeline)
    {
        _middlewarePipeline = middlewarePipeline;
    }

    public Task OnResourceExecutionAsync(ResourceExecutingContext context, ResourceExecutionDelegate next)
    {
        var httpContext = context.HttpContext;

        var feature = new MiddlewareFilterFeature()
        {
            ResourceExecutionDelegate = next,
            ResourceExecutingContext = context
        };
        httpContext.Features.Set<IMiddlewareFilterFeature>(feature);

        return _middlewarePipeline(httpContext);
    }
}

From the point of view of the middleware pipeline we created, it is as though it was called as part of the normal pipeline; it just receives an HttpContext to work with. If need be though, it can access the MVC context by accessing the MiddlewareFilterFeature.

If you have written any filters previously, something may seem a bit off with this code. Normally, you would call await next() to execute the next filter in the pipeline before returning, but here we are just returning the Task from our RequestDelegate invocation. How does the pipeline continue? To see how, we'll skip back to the 'end-middleware' I glossed over in BuildPipeline.

Using the end-middleware to continue the filter pipeline

The middleware added at the end of the BuildPipeline method is responsible for continuing the execution of the filter pipeline. An abbreviated form looks like this:

nestedAppBuilder.Run(async (httpContext) =>  
{
    var feature = httpContext.Features.Get<IMiddlewareFilterFeature>();

    var resourceExecutionDelegate = feature.ResourceExecutionDelegate;
    var resourceExecutedContext = await resourceExecutionDelegate();

    if (!resourceExecutedContext.ExceptionHandled && resourceExecutedContext.Exception != null)
    {
        throw resourceExecutedContext.Exception;
    }
});

There are two main functions of this middleware. The primary goal is ensuring the filter pipeline is continued after the MiddlewareFilter has executed. This is achieved by loading the IMiddlewareFilterFeature which was saved to the HttpContext when the filter began executing. It can then access the next filter via the ResourceExecutionDelegate and await its execution as usual.

The second goal is to behave like a middleware pipeline rather than a filter pipeline when exceptions are thrown. That is, if a later filter or action method throws an exception, and no filter handles it, then the end-middleware re-throws it, so that the middleware pipeline used in the filter can handle it as middleware normally would (with a try-catch).

Note that Get<IMiddlewareFilterFeature>() will be called before the end of each MiddlewareFilter. If you have multiple MiddlewareFilters in the pipeline, each one will set a new instance of IMiddlewareFilterFeature, overwriting the values saved earlier. I haven't dug into it, but that could potentially cause an issue if middleware in your MyCustomMiddleware pipeline both operates on the response being sent back through the pipeline after other middleware has executed, and also tries to load the IMiddlewareFilterFeature. In that case, it will get the IMiddlewareFilterFeature associated with a different MiddlewareFilter. It's a pretty unlikely scenario I suspect, but still, just watch out for it.

Wrapping up

That brings us to the end of this look under the covers of middleware filters. Hopefully you found it interesting; personally, I just enjoy looking at the repos as a source of inspiration, should I ever need to implement something similar in the future.


Ben Foster: Using .NET Core Configuration with legacy projects

In .NET Core, configuration has been re-engineered, throwing away the System.Configuration model that relied on XML-based configuration files and introducing a number of new configuration components offering more flexibility and better extensibility.

At its lowest level, the new configuration system still provides access to key/value based settings. However, it also supports multiple configuration sources such as JSON files, and probably my favourite feature, strongly typed binding to configuration classes.

Whilst the new configuration system sits under the ASP.NET repository on GitHub, it doesn't actually have any dependency on the new ASP.NET components, meaning it can also be used in your non-.NET Core projects too.

In this post I'll cover how to use .NET Core Configuration in an ASP.NET Web API application.

Install the packages

The new .NET Core configuration components are published under Microsoft.Extensions.Configuration.* packages on NuGet. For this demo I've installed the following packages:

  • Microsoft.Extensions.Configuration
  • Microsoft.Extensions.Configuration.Json (support for JSON configuration files)
  • Microsoft.Extensions.Configuration.Binder (strongly-typed binding of configuration settings)

Initialising configuration

To initialise the configuration system we use ConfigurationBuilder. When you install additional configuration sources, the builder is extended with a number of new methods for adding those sources. Finally, call Build() to create a configuration instance:

IConfigurationRoot configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json.config", optional: true)
    .Build();

Accessing configuration settings

Once you have the configuration instance, settings can be accessed using their key:

var applicationName = configuration["ApplicationName"];

If your configuration settings have a hierarchical structure (likely if you're using JSON or XML files) then each level in the hierarchy will be separated with a :.

To demonstrate, I've added an appsettings.json.config file containing a few configuration settings:

{
  "connectionStrings": {
    "MyDb": "server=localhost;database=mydb;integrated security=true"
  },
  "apiSettings": {
    "url": "http://localhost/api",
    "apiKey": "sk_1234566",
    "useCache":  true
  }
}
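
With this file loaded, nested values are accessed using : to separate each level of the hierarchy, for example:

var apiKey = configuration["apiSettings:apiKey"]; // "sk_1234566"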

Note: I'm using the .config extension as a simple way to prevent IIS serving these files directly. Alternatively you can set up IIS request filtering to prevent access to your JSON config files.
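
For the request filtering approach, a web.config fragment along these lines should block direct requests for .json files (a sketch; adjust to your IIS setup):

<system.webServer>
  <security>
    <requestFiltering>
      <fileExtensions>
        <add fileExtension=".json" allowed="false" />
      </fileExtensions>
    </requestFiltering>
  </security>
</system.webServer>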

I've then wired up an endpoint in my controller to return the configuration, using keys to access my values:

public class ConfigurationController : ApiController
{
    public HttpResponseMessage Get()
    {
        var config = new
        {
            MyDbConnectionString = Startup.Config["ConnectionStrings:MyDb"],
            ApiSettings = new
            {
                Url = Startup.Config["ApiSettings:Url"],
                ApiKey = Startup.Config["ApiSettings:ApiKey"],
                UseCache = Startup.Config["ApiSettings:UseCache"],
            }
        };

        return Request.CreateResponse(config);
    }
}

When I hit my /configuration endpoint I get the following JSON response:

{
    "MyDbConnectionString": "server=localhost;database=mydb;integrated security=true",
    "ApiSettings": {
        "Url": "http://localhost/api",
        "ApiKey": "sk_1234566",
        "UseCache": "True"
    }
}

Strongly-typed configuration

Of course, accessing settings in this way isn't a vast improvement over using ConfigurationManager, and as you'll notice above, we're not getting the correct type for all of our settings.

Fortunately the new .NET Core configuration system supports strongly-typed binding of your configuration, using Microsoft.Extensions.Configuration.Binder.

I created the following class to bind my configuration to:

public class AppConfig
{
    public ConnectionStringsConfig ConnectionStrings { get; set; }
    public ApiSettingsConfig ApiSettings { get; set; }

    public class ConnectionStringsConfig
    {
        public string MyDb { get; set; }
    }   

    public class ApiSettingsConfig
    {
        public string Url { get; set; }
        public string ApiKey { get; set; }
        public bool UseCache { get; set; }
    }
}

To bind to this class directly, use the Get<T> extensions provided by the binder package. Here's my updated controller:

public HttpResponseMessage Get()
{
    var config = Startup.Config.Get<AppConfig>();
    return Request.CreateResponse(config);
}

The response:

{
   "ConnectionStrings":{
      "MyDb":"server=localhost;database=mydb;integrated security=true"
   },
   "ApiSettings":{
      "Url":"http://localhost/api",
      "ApiKey":"sk_1234566",
      "UseCache":true
   }
}

Now I can access my application configuration in a much nicer way:

if (config.ApiSettings.UseCache)
{

}

What about Web/App.config?

So far I've demonstrated how to use some of the new configuration features in a legacy application. But what if you still rely on traditional XML based configuration files like web.config or app.config?

In my application I still have a few settings in app.config (I'm self-hosting the API) that I require in my application. Ideally I'd like to use the .NET core configuration system to bind these to my AppConfig class too:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <connectionStrings>
    <add name="MyLegacyDb" connectionString="server=localhost;database=legacy" />
  </connectionStrings>
  <appSettings>
    <add key="ApplicationName" value="CoreConfigurationDemo"/>
  </appSettings>
</configuration>

There is an XML configuration source for .NET Core. However, if you try to use this for appSettings or connectionStrings elements, you'll find the generated keys are not really ideal for strongly-typed binding:

IConfigurationRoot configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json.config", optional: true)
    .AddXmlFile("app.config")
    .Build();

If we inspect the configuration after calling Build() we get the following key/value for the MyLegacyDb connection string:

[connectionStrings:add:MyLegacyDb:connectionString, server=localhost;database=legacy]

This is due to how the XML source binds XML attributes.

Given that we still have access to the older System.Configuration system it makes sense to use this to access our XML config files and then plug the values into the new .NET core configuration system. We can do this by creating a custom configuration provider.

Creating a custom configuration provider

To implement a custom configuration provider you implement the IConfigurationProvider and IConfigurationSource interfaces. You can also derive from the abstract class ConfigurationProvider which will save you writing some boilerplate code.

For a more advanced implementation that requires reading file contents and supports multiple files of the same type, check out Andrew Lock's write-up on how to add a YAML configuration provider.

Since I'm relying on System.Configuration.ConfigurationManager to read app.config and do not need to support multiple files, my implementation is quite simple:

public class LegacyConfigurationProvider : ConfigurationProvider, IConfigurationSource
{
    public override void Load()
    {
        foreach (ConnectionStringSettings connectionString in ConfigurationManager.ConnectionStrings)
        {
            Data.Add($"ConnectionStrings:{connectionString.Name}", connectionString.ConnectionString);
        }

        foreach (var settingKey in ConfigurationManager.AppSettings.AllKeys)
        {
            Data.Add(settingKey, ConfigurationManager.AppSettings[settingKey]);
        }
    }

    public IConfigurationProvider Build(IConfigurationBuilder builder)
    {
        return this;
    }
}

When ConfigurationBuilder.Build() is called, the Build method of each configured source is executed, which returns an IConfigurationProvider used to get the configuration data. Since we're deriving from ConfigurationProvider, we can override Load, adding each of the connection strings and application settings from app.config.

I updated my AppConfig to include the new setting and connection string as below:

public class AppConfig
{
    public string ApplicationName { get; set; }
    public ConnectionStringsConfig ConnectionStrings { get; set; }
    public ApiSettingsConfig ApiSettings { get; set; }

    public class ConnectionStringsConfig
    {
        public string MyDb { get; set; }
        public string MyLegacyDb { get; set; }
    }   

    public class ApiSettingsConfig
    {
        public string Url { get; set; }
        public string ApiKey { get; set; }
        public bool UseCache { get; set; }
    }
}

The only change I need to make to my application is to add the configuration provider to my configuration builder:

IConfigurationRoot configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json.config", optional: true)
    .Add(new LegacyConfigurationProvider())
    .Build();

Then, when I hit my /configuration endpoint I get my complete configuration, bound from both my JSON file and app.config:

{
   "ApplicationName":"CoreConfigurationDemo",
   "ConnectionStrings":{
      "MyDb":"server=localhost;integrated security=true",
      "MyLegacyDb":"server=localhost;database=legacy"
   },
   "ApiSettings":{
      "Url":"http://localhost/api",
      "ApiKey":"sk_1234566",
      "UseCache":true
   }
}


Damien Bowden: Extending Identity in IdentityServer4 to manage users in ASP.NET Core

This article shows how Identity can be extended and used together with IdentityServer4 to implement application-specific requirements. The application allows users to register, and registered users can access the application for 7 days. After this, the user cannot log in. Any admin can activate or deactivate a user using a custom user management API. Extra properties are added to the Identity user model to support this. Identity is persisted using EFCore and SQLite. The SPA application is implemented using Angular, Webpack 2 and TypeScript 2.

Code: VS2017 msbuild | VS2015 project.json

2017.02.15: Updated to VS2017 csproj, angular 2.4.7, webpack 2.2.1
2017.01.07: Updated to IdentityServer4 1.0.0, webpack 2.2.0-rc.3, angular 2.4.1
2016.12.18: Updated to IdentityServer4 rc5, ASP.NET Core 1.1
2016.12.04: Updated to IdentityServer4 rc4

Other posts in this series:

Updating Identity

Updating Identity is pretty easy. The Microsoft.AspNetCore.Identity.EntityFrameworkCore package, which is included in the project as a NuGet package, provides the IdentityUser class, which the ApplicationUser extends. You can add any extra required properties to this class.

using System;
using Microsoft.AspNetCore.Identity.EntityFrameworkCore;

namespace IdentityServerWithAspNetIdentity.Models
{
    public class ApplicationUser : IdentityUser
    {
        public bool IsAdmin { get; set; }
        public string DataEventRecordsRole { get; set; }
        public string SecuredFilesRole { get; set; }
        public DateTime AccountExpires { get; set; }
    }
}

Identity needs to be added to the application. This is done in the startup class in the ConfigureServices method using the AddIdentity extension. SQLite is used to persist the data. The ApplicationDbContext which uses SQLite is then used as the store for Identity.

services.AddDbContext<ApplicationDbContext>(options =>
	options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));

services.AddIdentity<ApplicationUser, IdentityRole>()
.AddEntityFrameworkStores<ApplicationDbContext>()
.AddDefaultTokenProviders();

The connection string for the SQLite database is read from the appsettings, using the ConfigurationBuilder in the Startup constructor.

"ConnectionStrings": {
        "DefaultConnection": "Data Source=C:\\git\\damienbod\\AspNet5IdentityServerAngularImplicitFlow\\src\\ResourceWithIdentityServerWithClient\\usersdatabase.sqlite"
    },
   

The Identity store is then created using the EFCore migrations.

dotnet ef migrations add testMigration

dotnet ef database update

The new properties in the Identity are used in three ways: when creating a new user, when creating a token for a user, and when validating the token on a resource using policies.

Using Identity creating a new user

The Identity ApplicationUser is created in the Register method in the AccountController. The new extended properties which were added to the ApplicationUser can be used as required. In this example, a new user will have access for 7 days. If the user should be able to set custom properties, the RegisterViewModel model and the corresponding view need to be extended.

[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Register(RegisterViewModel model, string returnUrl = null)
{
	ViewData["ReturnUrl"] = returnUrl;
	if (ModelState.IsValid)
	{
		var dataEventsRole = "dataEventRecords.user";
		var securedFilesRole = "securedFiles.user";
		if (model.IsAdmin)
		{
			dataEventsRole = "dataEventRecords.admin";
			securedFilesRole = "securedFiles.admin";
		}

		var user = new ApplicationUser {
			UserName = model.Email,
			Email = model.Email,
			IsAdmin = model.IsAdmin,
			DataEventRecordsRole = dataEventsRole,
			SecuredFilesRole = securedFilesRole,
			AccountExpires = DateTime.UtcNow.AddDays(7.0)
		};

		var result = await _userManager.CreateAsync(user, model.Password);
		if (result.Succeeded)
		{
			// For more information on how to enable account confirmation and password reset please visit http://go.microsoft.com/fwlink/?LinkID=532713
			// Send an email with this link
			//var code = await _userManager.GenerateEmailConfirmationTokenAsync(user);
			//var callbackUrl = Url.Action("ConfirmEmail", "Account", new { userId = user.Id, code = code }, protocol: HttpContext.Request.Scheme);
			//await _emailSender.SendEmailAsync(model.Email, "Confirm your account",
			//    $"Please confirm your account by clicking this link: <a href='{callbackUrl}'>link</a>");
			await _signInManager.SignInAsync(user, isPersistent: false);
			_logger.LogInformation(3, "User created a new account with password.");
			return RedirectToLocal(returnUrl);
		}
		AddErrors(result);
	}

	// If we got this far, something failed, redisplay form
	return View(model);
}

Using Identity creating a token in IdentityServer4

The Identity properties need to be added to the claims so that the client, whether the SPA or any other client, can use the properties. In IdentityServer4, the IProfileService interface is used for this. Each custom ApplicationUser property is added as a claim as required.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Claims;
using System.Threading.Tasks;
using IdentityModel;
using IdentityServer4.Extensions;
using IdentityServer4.Models;
using IdentityServer4.Services;
using IdentityServerWithAspNetIdentity.Models;
using Microsoft.AspNetCore.Identity;

namespace IdentityServerWithAspNetIdentitySqlite
{
    using IdentityServer4;

    public class IdentityWithAdditionalClaimsProfileService : IProfileService
    {
        private readonly IUserClaimsPrincipalFactory<ApplicationUser> _claimsFactory;
        private readonly UserManager<ApplicationUser> _userManager;

        public IdentityWithAdditionalClaimsProfileService(UserManager<ApplicationUser> userManager,  IUserClaimsPrincipalFactory<ApplicationUser> claimsFactory)
        {
            _userManager = userManager;
            _claimsFactory = claimsFactory;
        }

        public async Task GetProfileDataAsync(ProfileDataRequestContext context)
        {
            var sub = context.Subject.GetSubjectId();
            var user = await _userManager.FindByIdAsync(sub);
            var principal = await _claimsFactory.CreateAsync(user);

            var claims = principal.Claims.ToList();
            claims = claims.Where(claim => context.RequestedClaimTypes.Contains(claim.Type)).ToList();
            claims.Add(new Claim(JwtClaimTypes.GivenName, user.UserName));

            if (user.IsAdmin)
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "admin"));
            }
            else
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "user"));
            }

            if (user.DataEventRecordsRole == "dataEventRecords.admin")
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "dataEventRecords.admin"));
                claims.Add(new Claim(JwtClaimTypes.Role, "dataEventRecords.user"));
                claims.Add(new Claim(JwtClaimTypes.Role, "dataEventRecords"));
                claims.Add(new Claim(JwtClaimTypes.Scope, "dataEventRecords"));
            }
            else
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "dataEventRecords.user"));
                claims.Add(new Claim(JwtClaimTypes.Role, "dataEventRecords"));
                claims.Add(new Claim(JwtClaimTypes.Scope, "dataEventRecords"));
            }

            if (user.SecuredFilesRole == "securedFiles.admin")
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "securedFiles.admin"));
                claims.Add(new Claim(JwtClaimTypes.Role, "securedFiles.user"));
                claims.Add(new Claim(JwtClaimTypes.Role, "securedFiles"));
                claims.Add(new Claim(JwtClaimTypes.Scope, "securedFiles"));
            }
            else
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "securedFiles.user"));
                claims.Add(new Claim(JwtClaimTypes.Role, "securedFiles"));
                claims.Add(new Claim(JwtClaimTypes.Scope, "securedFiles"));
            }

            claims.Add(new Claim(IdentityServerConstants.StandardScopes.Email, user.Email));

            context.IssuedClaims = claims;
        }

        public async Task IsActiveAsync(IsActiveContext context)
        {
            var sub = context.Subject.GetSubjectId();
            var user = await _userManager.FindByIdAsync(sub);
            context.IsActive = user != null;
        }
    }
}

Using the Identity properties validating a token

The IsAdmin property is used to define whether a logged-on user has the admin role. This was added to the token using the admin claim in the IProfileService. Now this can be used by defining a policy and validating the policy in a controller. The policies are added in the Startup class in the ConfigureServices method.

services.AddAuthorization(options =>
{
	options.AddPolicy("dataEventRecordsAdmin", policyAdmin =>
	{
		policyAdmin.RequireClaim("role", "dataEventRecords.admin");
	});
	options.AddPolicy("admin", policyAdmin =>
	{
		policyAdmin.RequireClaim("role", "admin");
	});
	options.AddPolicy("dataEventRecordsUser", policyUser =>
	{
		policyUser.RequireClaim("role", "dataEventRecords.user");
	});
});

The policy can then be used, for example, in an MVC controller using the Authorize attribute. The admin policy is used in the UserManagementController.

[Authorize("admin")]
[Produces("application/json")]
[Route("api/UserManagement")]
public class UserManagementController : Controller
{

Now that users can be admin users and expire after 7 days, the application requires a UI to manage this. This UI is implemented in the Angular 2 SPA. The UI requires a user management API to get all the users and also to update them. The Identity EFCore ApplicationDbContext is used directly in the controller to simplify things, but usually this would be separated from the controller, and if you have a lot of users, some type of search logic with a filtered result list would need to be supported. I like to have no logic in the MVC controller.

using System;
using System.Collections.Generic;
using System.Linq;
using IdentityServerWithAspNetIdentity.Data;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using ResourceWithIdentityServerWithClient.Model;

namespace ResourceWithIdentityServerWithClient.Controllers
{
    [Authorize("admin")]
    [Produces("application/json")]
    [Route("api/UserManagement")]
    public class UserManagementController : Controller
    {
        private readonly ApplicationDbContext _context;

        public UserManagementController(ApplicationDbContext context)
        {
            _context = context;
        }

        [HttpGet]
        public IActionResult Get()
        {
            var users = _context.Users.ToList();
            var result = new List<UserDto>();

            foreach(var applicationUser in users)
            {
                var user = new UserDto
                {
                    Id = applicationUser.Id,
                    Name = applicationUser.Email,
                    IsAdmin = applicationUser.IsAdmin,
                    IsActive = applicationUser.AccountExpires > DateTime.UtcNow
                };

                result.Add(user);
            }

            return Ok(result);
        }
        
        [HttpPut("{id}")]
        public void Put(string id, [FromBody]UserDto userDto)
        {
            var user = _context.Users.First(t => t.Id == id);

            user.IsAdmin = userDto.IsAdmin;
            if(userDto.IsActive)
            {
                if(user.AccountExpires < DateTime.UtcNow)
                {
                    user.AccountExpires = DateTime.UtcNow.AddDays(7.0);
                }
            }
            else
            {
                // deactivate user
                user.AccountExpires = new DateTime();
            }

            _context.Users.Update(user);
            _context.SaveChanges();
        }   
    }
}

Angular 2 User Management Component

The Angular 2 SPA is built using Webpack 2 with TypeScript. See https://github.com/damienbod/Angular2WebpackVisualStudio on how to set up an Angular 2, Webpack 2 app with ASP.NET Core.

The Angular 2 app requires a service to access the ASP.NET Core MVC API. This is implemented in the UserManagementService, which then needs to be added to the app.module.

import { Injectable } from '@angular/core';
import { Http, Response, Headers, RequestOptions } from '@angular/http';
import 'rxjs/add/operator/map';
import { Observable } from 'rxjs/Observable';
import { Configuration } from '../app.constants';
import { SecurityService } from '../services/SecurityService';
import { User } from './models/User';

@Injectable()
export class UserManagementService {

    private actionUrl: string;
    private headers: Headers;

    constructor(private _http: Http, private _configuration: Configuration, private _securityService: SecurityService) {
        this.actionUrl = `${_configuration.Server}/api/UserManagement/`;   
    }

    private setHeaders() {

        console.log("setHeaders started");

        this.headers = new Headers();
        this.headers.append('Content-Type', 'application/json');
        this.headers.append('Accept', 'application/json');

        var token = this._securityService.GetToken();
        if (token !== "") {
            let tokenValue = 'Bearer ' + token;
            console.log("tokenValue:" + tokenValue);
            this.headers.append('Authorization', tokenValue);
        }
    }

    public GetAll = (): Observable<User[]> => {
        this.setHeaders();
        let options = new RequestOptions({ headers: this.headers, body: '' });

        return this._http.get(this.actionUrl, options).map(res => res.json());
    }

    public Update = (id: string, itemToUpdate: User): Observable<Response> => {
        this.setHeaders();
        return this._http
            .put(this.actionUrl + id, JSON.stringify(itemToUpdate), { headers: this.headers });
    }
}

The UserManagementComponent uses the service and displays all the users, and provides a way of updating each user.

import { Component, OnInit } from '@angular/core';
import { SecurityService } from '../services/SecurityService';
import { Observable } from 'rxjs/Observable';
import { Router } from '@angular/router';

import { UserManagementService } from '../user-management/UserManagementService';
import { User } from './models/User';

@Component({
    selector: 'user-management',
    templateUrl: 'user-management.component.html'
})

export class UserManagementComponent implements OnInit {

    public message: string;
    public Users: User[];

    constructor(
        private _userManagementService: UserManagementService,
        public securityService: SecurityService,
        private _router: Router) {
        this.message = "user-management";
    }
    
    ngOnInit() {
        this.getData();
    }

    private getData() {
        console.log('User Management:getData starting...');
        this._userManagementService
            .GetAll()
            .subscribe(data => this.Users = data,
            error => this.securityService.HandleError(error),
            () => console.log('User Management Get all completed'));
    }

    public Update(user: User) {
        this._userManagementService.Update(user.id, user)
            .subscribe((() => console.log("subscribed")),
            error => this.securityService.HandleError(error),
            () => console.log("update request sent!"));
    }

}

The UserManagementComponent template uses the Users data to display, update etc.

<div class="col-md-12" *ngIf="securityService.IsAuthorized">
    <div class="panel panel-default">
        <div class="panel-heading">
            <h3 class="panel-title">{{message}}</h3>
        </div>
        <div class="panel-body"  *ngIf="Users">
            <table class="table">
                <thead>
                    <tr>
                        <th>Name</th>
                        <th>IsAdmin</th>
                        <th>IsActive</th>
                        <th></th>
                    </tr>
                </thead>
                <tbody>
                    <tr style="height:20px;" *ngFor="let user of Users">
                        <td>{{user.name}}</td>
                        <td>
                            <input type="checkbox" [(ngModel)]="user.isAdmin" class="form-control" style="box-shadow:none" />
                        </td>
                        <td>
                            <input type="checkbox" [(ngModel)]="user.isActive" class="form-control" style="box-shadow:none" />
                        </td>
                        <td>
                            <button (click)="Update(user)" class="form-control">Update</button>
                        </td>
                    </tr>
                </tbody>
            </table>

        </div>
    </div>
</div>

The user-management component and the service need to be added to the module.

import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { BrowserModule } from '@angular/platform-browser';

import { AppComponent } from './app.component';
import { Configuration } from './app.constants';
import { routing } from './app.routes';
import { HttpModule, JsonpModule } from '@angular/http';

import { SecurityService } from './services/SecurityService';
import { DataEventRecordsService } from './dataeventrecords/DataEventRecordsService';
import { DataEventRecord } from './dataeventrecords/models/DataEventRecord';

import { ForbiddenComponent } from './forbidden/forbidden.component';
import { HomeComponent } from './home/home.component';
import { UnauthorizedComponent } from './unauthorized/unauthorized.component';

import { DataEventRecordsListComponent } from './dataeventrecords/dataeventrecords-list.component';
import { DataEventRecordsCreateComponent } from './dataeventrecords/dataeventrecords-create.component';
import { DataEventRecordsEditComponent } from './dataeventrecords/dataeventrecords-edit.component';

import { UserManagementComponent } from './user-management/user-management.component';


import { HasAdminRoleAuthenticationGuard } from './guards/hasAdminRoleAuthenticationGuard';
import { HasAdminRoleCanLoadGuard } from './guards/hasAdminRoleCanLoadGuard';
import { UserManagementService } from './user-management/UserManagementService';

@NgModule({
    imports: [
        BrowserModule,
        FormsModule,
        routing,
        HttpModule,
        JsonpModule
    ],
    declarations: [
        AppComponent,
        ForbiddenComponent,
        HomeComponent,
        UnauthorizedComponent,
        DataEventRecordsListComponent,
        DataEventRecordsCreateComponent,
        DataEventRecordsEditComponent,
        UserManagementComponent
    ],
    providers: [
        SecurityService,
        DataEventRecordsService,
        UserManagementService,
        Configuration,
        HasAdminRoleAuthenticationGuard,
        HasAdminRoleCanLoadGuard
    ],
    bootstrap:    [AppComponent],
})

export class AppModule {}

Now the Identity users can be managed from the Angular 2 UI.

Links

https://github.com/IdentityServer/IdentityServer4

http://docs.identityserver.io/en/dev/

https://github.com/IdentityServer/IdentityServer4.Samples

https://docs.asp.net/en/latest/security/authentication/identity.html

https://github.com/IdentityServer/IdentityServer4/issues/349

https://damienbod.com/2016/06/12/asp-net-core-angular2-with-webpack-and-visual-studio/



Andrew Lock: Troubleshooting ASP.NET Core 1.1.0 install problems

Troubleshooting ASP.NET Core 1.1.0 install problems

I was planning on playing with the latest .NET Core 1.1.0 preview recently, but I ran into a few issues getting it working on my Mac. As I suspected, this was entirely down to my mistakes and my machine's setup, but I'm documenting it here in case anyone else runs into similar problems!

Note that as of yesterday the RTM release of 1.1.0 is out, so while not strictly applicable, I would probably have run into the same problems! I've updated the post to reflect the latest version numbers.

TL;DR; There were two issues I ran into. First, the global.json file I used specified an older version of the tooling. Second, I had an older version of the tooling installed that was, according to SemVer, newer than the version I had just installed!

Installing ASP.NET Core 1.1.0

I began by downloading the .NET Core 1.1.0 installer for macOS from the downloads page, following the instructions from the announcement blog post. The installation was quick and went smoothly, installing side-by-side with the existing .NET Core 1.0 RTM install.

Creating a new 1.1.0 project

According to the blog post, once you've run the installer you should be able to start creating 1.1.0 applications. Running dotnet new with the .NET CLI should create a new 1.1.0 application, with a project.json that contains an updated Microsoft.NETCore.App dependency, looking something like:

"frameworks": {
  "netcoreapp1.0": {
    "dependencies": {
      "Microsoft.NETCore.App": {
        "type": "platform",
        "version": "1.1.0"
      }
    },
    "imports": "dnxcore50"
  }
}

So I created a sub folder for a test project, ran dotnet new and eagerly checked the project.json:

"frameworks": {
  "netcoreapp1.0": {
    "dependencies": {
      "Microsoft.NETCore.App": {
        "type": "platform",
        "version": "1.0.0"
      }
    },
    "imports": "dnxcore50"
  }
}

Hmmm, that doesn't look right; we still seem to be getting a 1.0.0 project instead of 1.1.0…

Check the global.json

My first thought was that the install hadn't worked correctly - it is a preview after all (it was when I originally tried it!) so it wouldn't be completely unheard of. Running dotnet --version to check the version of the CLI being run returned:

$ dotnet --version
1.0.0-preview2-003121  

So the preview 2 tooling is being used, which corresponds to the .NET Core 1.0 RTM release, definitely the wrong version.

It was then I remembered a similar issue I had when moving from RC2 to the RTM release - check the global.json! When I had created my sub folder for testing dotnet new, I had automatically copied across a global.json from a previous project. Looking inside, this was what I found:

{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.0-preview2-003121"
  }
}

Bingo! If an SDK version is specified in global.json then it will be used preferentially over the latest tooling. Updating the sdk section with the appropriate value, or removing the sdk section entirely, means the latest tooling will be used, which should let me create my 1.1.0 project.
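
For example, with the 1.1 RTM tooling version mentioned later in this post, the corrected global.json would look like:

{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.0-preview2-1-003177"
  }
}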

Take two - preview fun

After removing the sdk section of the global.json, I ran dotnet new again, and checked the project.json:

"frameworks": {
  "netcoreapp1.0": {
    "dependencies": {
      "Microsoft.NETCore.App": {
        "type": "platform",
        "version": "1.0.0"
      }
    },
    "imports": "dnxcore50"
  }
}

D'oh, that's still not right! At this point, I started to think something must have gone wrong with the installation, as I couldn't think of any other explanation. Luckily, it's easy to see which versions of the SDK are installed by checking the file system. On a Mac, you can see them at:

/usr/local/share/dotnet/sdk/

Checking the folder, this is what I saw - notice anything odd?

(screenshot: the contents of the sdk folder)

There's quite a few different versions of the SDK in there, including the 1.0.0 RTM version (1.0.0-preview2-003121) and also the 1.1.0 Preview 1 version (1.0.0-preview2.1-003155). However there's also a slightly odd one that stands out - 1.0.0-preview3-003213. (Note, with the 1.1 RTM there is a whole new version, 1.0.0-preview2-1-003177)

Most people installing the .NET Core SDK will not run into this issue, as they likely won't have this additional preview3 version. I only have it installed (I think) because I created a couple of pull requests to the ASP.NET Core repositories recently. The way versioning works in the ASP.NET Core repositories for development versions means that although there is a preview3 version of the tooling, it is actually older than the preview2.1 version of the tooling just released, and generates 1.0.0 projects.

When you run dotnet new, and in the absence of a global.json with an sdk section, the CLI will use the most recent version of the tooling as determined by SemVer. Consequently, it had been using the preview3 version and generating 1.0.0 projects!

The simple solution was to delete the 1.0.0-preview3-003213 folder, and re-run dotnet new:

"frameworks": {
  "netcoreapp1.0": {
    "dependencies": {
      "Microsoft.NETCore.App": {
        "type": "platform",
        "version": "1.1.0-preview1-001100-00"
      }
    },
    "imports": "dnxcore50"
  }
}

Lo and behold, a 1.1.0 project!

Summary

The final issue I ran into is not something that general users have to worry about. The only reason it was a problem for me was due to working directly with the GitHub repo, and the slightly screwy SemVer versions when using development packages.

The global.json issue is one that you might run into when upgrading projects. It's well documented that you need to update it when upgrading, but it's easy to overlook.

Anyway, the issues I experienced were entirely down to my setup and stupidity rather than the installer or documentation, so hopefully things go smoother for you. Now time to play with new features!


Andrew Lock: Making ConcurrentDictionary GetOrAdd thread safe using Lazy

Making ConcurrentDictionary GetOrAdd thread safe using Lazy

I was browsing the ASP.NET Core MVC GitHub repo the other day, checking out the new 1.1.0 Preview 1 code, when I spotted a usage of ConcurrentDictionary that I thought was interesting. This post explores the GetOrAdd function, the level of thread safety it provides, and ways to add additional threading constraints.

I was looking at the code that enables using middleware as MVC filters where they are building up a filter pipeline. This needs to be thread-safe, so they sensibly use a ConcurrentDictionary<>, but instead of a dictionary of RequestDelegate, they are using a dictionary of Lazy<RequestDelegate>. Along with the initialisation is this comment:

// 'GetOrAdd' call on the dictionary is not thread safe and we might end up creating the pipeline more
// than once. To prevent this Lazy<> is used. In the worst case multiple Lazy<> objects are created for multiple
// threads but only one of the objects succeeds in creating a pipeline.
private readonly ConcurrentDictionary<Type, Lazy<RequestDelegate>> _pipelinesCache  
    = new ConcurrentDictionary<Type, Lazy<RequestDelegate>>();

This post will explore the pattern they are using and why you might want to use it in your code.

tl;dr; To make a ConcurrentDictionary only call a delegate once when using GetOrAdd, store your values as Lazy<T>, and retrieve the value by calling GetOrAdd(key, valueFactory).Value.

The GetOrAdd function

The ConcurrentDictionary is a dictionary that allows you to add, fetch and remove items in a thread-safe way. If you're going to be accessing a dictionary from multiple threads, then it should be your go-to class.

The vast majority of methods it exposes are thread safe, with the notable exception of one of the GetOrAdd overloads:

TValue GetOrAdd(TKey key, Func<TKey, TValue> valueFactory);  

This overload takes a key value, and checks whether the key already exists in the dictionary. If the key already exists, then the associated value is returned; if the key does not exist, the provided delegate is run, the value is stored in the dictionary, and then returned to the caller.

For example, consider the following little program.

public static void Main(string[] args)  
{
    var dictionary = new ConcurrentDictionary<string, string>();

    var value = dictionary.GetOrAdd("key", x => "The first value");
    Console.WriteLine(value);

    value = dictionary.GetOrAdd("key", x => "The second value");
    Console.WriteLine(value);
}

The first time GetOrAdd is called, the dictionary is empty, so the value factory runs and returns the string "The first value", storing it against the key. On the second call, GetOrAdd finds the saved value and uses that instead of calling the factory. The output gives:

The first value  
The first value  

GetOrAdd and thread safety

Internally, the ConcurrentDictionary uses locking to make it thread safe for most methods, but GetOrAdd does not lock while valueFactory is running. This is done to prevent unknown code from blocking all the threads, but it means that valueFactory might run more than once if it is called simultaneously from multiple threads. Thread safety kicks in when saving the returned value to the dictionary and when returning the generated value back to the caller however, so you will always get the same value back from each call.

For example, consider the program below, which uses tasks to run threads simultaneously. It works very similarly to before, but runs the GetOrAdd function on two separate threads. It also increments a counter every time the valueFactory is run.

public class Program  
{
    private static int _runCount = 0;
    private static readonly ConcurrentDictionary<string, string> _dictionary
        = new ConcurrentDictionary<string, string>();

    public static void Main(string[] args)
    {
        var task1 = Task.Run(() => PrintValue("The first value"));
        var task2 = Task.Run(() => PrintValue("The second value"));
        Task.WaitAll(task1, task2);

        PrintValue("The third value")

        Console.WriteLine($"Run count: {_runCount}");
    }

    public static void PrintValue(string valueToPrint)
    {
        var valueFound = _dictionary.GetOrAdd("key",
                    x =>
                    {
                        Interlocked.Increment(ref _runCount);
                        Thread.Sleep(100);
                        return valueToPrint;
                    });
        Console.WriteLine(valueFound);
    }
}

The PrintValue function again calls GetOrAdd on the ConcurrentDictionary, passing in a Func<> that increments the counter and returns a string. Running this program produces one of two outputs, depending on the order the threads are scheduled; either

The first value  
The first value  
The first value  
Run count: 2  

or

The second value  
The second value  
The second value  
Run count: 2  

As you can see, you will always get the same value when calling GetOrAdd, depending on which thread returns first. However the delegate is being run on both asynchronous calls, as shown by _runCount=2, as the value had not been stored from the first call before the second call runs. Stepping through, the interactions could look something like this:

  1. Thread A calls GetOrAdd on the dictionary for the key "key" but does not find it, so starts to invoke the valueFactory.

  2. Thread B also calls GetOrAdd on the dictionary for the key "key". Thread A has not yet completed, so no existing value is found, and Thread B also starts to invoke the valueFactory.

  3. Thread A completes its invocation, and returns the value "The first value" back to the concurrent dictionary. The dictionary checks there is still no value for "key", and inserts the new KeyValuePair. Finally, it returns "The first value" to the caller.

  4. Thread B completes its invocation and returns the value "The second value" back to the concurrent dictionary. The dictionary sees the value for "key" stored by Thread A, so it discards the value it created and uses that one instead, returning the value back to the caller.

  5. Thread C calls GetOrAdd and finds the value already exists for "key", so returns the value without having to invoke valueFactory.

In this case, running the delegate more than once has no adverse effects - all we care about is that the same value is returned from each call to GetOrAdd. But what if the delegate has side effects such that we need to ensure it is only run once?

Ensuring the delegate only runs once with Lazy

As we've seen, there are no guarantees made by ConcurrentDictionary about the number of times the Func<> will be called. When building a middleware pipeline, however, we need to be sure that the middleware is only built once, as it could be doing some bootstrapping that is expensive or not thread safe. The solution that the ASP.NET Core team used is to use Lazy<> initialisation.

The output we are aiming for is

The first value  
The first value  
The first value  
Run count: 1  

or similarly for "The second value" - it doesn't matter which wins out, the important points are that the same value is returned every time, and that _runCount is always 1.

Looking back at our previous example, instead of using a ConcurrentDictionary<string, string>, we create a ConcurrentDictionary<string, Lazy<string>>, and we update the PrintValue() method to create a lazy object instead:

public static void PrintValueLazy(string valueToPrint)  
{
    var valueFound = _lazyDictionary.GetOrAdd("key",
                x => new Lazy<string>(
                    () =>
                        {
                            Interlocked.Increment(ref _runCount);
                            Thread.Sleep(100);
                            return valueToPrint;
                        }));
    Console.WriteLine(valueFound.Value);
}

There are only two changes we have made here. We have updated the GetOrAdd call to return a Lazy<string> rather than a string directly, and we are calling valueFound.Value to get the actual string value to write to the console. To see why this solves the problem, let's step through what happens when we run the whole program.

  1. Thread A calls GetOrAdd on the dictionary for the key "key" but does not find it, so starts to invoke the valueFactory.

  2. Thread B also calls GetOrAdd on the dictionary for the key "key". Thread A has not yet completed, so no existing value is found, and Thread B also starts to invoke the valueFactory.

  3. Thread A completes its invocation, returning an uninitialised Lazy<string> object. The delegate inside the Lazy<string> has not been run at this point, we've just created the Lazy<string> container. The dictionary checks there is still no value for "key", and so inserts the Lazy<string> against it, and finally, returns the Lazy<string> back to the caller.

  4. Thread B completes its invocation, similarly returning an uninitialised Lazy<string> object. As before, the dictionary sees the Lazy<string> object for "key" stored by Thread A, so it discards the Lazy<string> it just created and uses that one instead, returning it back to the caller.

  5. Thread A calls Lazy<string>.Value. This invokes the provided delegate in a thread safe way, such that if it is called simultaneously by two threads, it will only run the delegate once.

  6. Thread B calls Lazy<string>.Value. This is the same Lazy<string> object that Thread A just initialised (remember the dictionary ensures you always get the same value back.) If Thread A is still running the initialisation delegate, then Thread B just blocks until it finishes and it can access the result. We just get the final return string, without invoking the delegate for a second time. This is what gives us the run-once behaviour we need.

  7. Thread C calls GetOrAdd and finds the Lazy<string> object already exists for "key", so returns the value, without having to invoke valueFactory. The Lazy<string> has already been initialised, so the resulting value is returned directly.

We still get the same behaviour from the ConcurrentDictionary in that we might run the valueFactory more than once, but now we are just calling new Lazy<>() inside the factory. In the worst case, we create multiple Lazy<> objects, which get discarded by the ConcurrentDictionary when consolidating inside the GetOrAdd method.

It is the Lazy<> object which enforces that we only run our expensive delegate once. By calling Lazy<>.Value we trigger the delegate to run in a thread safe way, such that we can be sure it will only be run by one thread at a time. Other threads which call Lazy<>.Value simultaneously will be blocked until the first call completes, and then will use the same result.
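
This behaviour corresponds to the default LazyThreadSafetyMode.ExecutionAndPublication mode (LazyThreadSafetyMode lives in System.Threading). The equivalent, explicit form is shown below; ComputeExpensiveValue is a hypothetical factory method:

// Equivalent to new Lazy<string>(ComputeExpensiveValue): the factory runs
// at most once, and concurrent callers of .Value block until the first
// initialisation completes.
var lazy = new Lazy<string>(ComputeExpensiveValue, LazyThreadSafetyMode.ExecutionAndPublication);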

Summary

When using GetOrAdd, if your valueFactory is idempotent and not expensive, then there is no need for this trick. You can be sure you will always get the same value with each call, but you need to be aware the valueFactory may run multiple times.

If you have an expensive operation that must be run only once as part of a call to GetOrAdd, then using Lazy<> is a great solution. The only caveat to be aware of is that Lazy<>.Value will block other threads trying to access the value until the first call completes. Depending on your use case, this may or may not be a problem, and is the reason GetOrAdd does not have these semantics by default.


Pedro Félix: Accessing the HTTP Context on ASP.NET Core

TL;DR

On ASP.NET Core, access to the request context is provided via the new IHttpContextAccessor interface, which exposes an HttpContext property with this information. The IHttpContextAccessor is obtained via dependency injection or directly from the service locator. However, it requires an explicit service collection registration, mapping the IHttpContextAccessor interface to the HttpContextAccessor concrete class, with singleton scope.

Not so short version

System.Web

On classic ASP.NET, the current HTTP context, containing both request and response information, can be accessed anywhere via the omnipresent System.Web.HttpContext.Current static property. Internally, this property uses information stored in the CallContext object representing the current call flow. This CallContext is preserved even when the flow crosses multiple threads, so it works with async methods.

ASP.NET Web API

On ASP.NET Web API, obtaining the current HTTP context without having to flow it explicitly on every call is typically achieved with the help of the dependency injection container.
For instance, Autofac provides the RegisterHttpRequestMessage extension method on the ContainerBuilder, which allows classes to have HttpRequestMessage constructor dependencies.
This extension method configures a delegating handler that registers the input HttpRequestMessage instance into the current lifetime scope.
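
As a rough sketch of this approach (assuming the Autofac.Integration.WebApi package; the RequestInspector component is hypothetical):

using System.Net.Http;
using System.Web.Http;
using Autofac;
using Autofac.Integration.WebApi;

public static class ContainerConfig
{
    public static IContainer Build(HttpConfiguration config)
    {
        var builder = new ContainerBuilder();
        // Adds a delegating handler that pushes the current
        // HttpRequestMessage into the request lifetime scope.
        builder.RegisterHttpRequestMessage(config);
        builder.RegisterType<RequestInspector>().InstancePerRequest();
        return builder.Build();
    }
}

// Hypothetical component: the request message can be injected directly
// because the component shares the request's lifetime scope.
public class RequestInspector
{
    private readonly HttpRequestMessage _request;

    public RequestInspector(HttpRequestMessage request)
    {
        _request = request;
    }
}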

ASP.NET Core

ASP.NET Core uses a different approach. Access to the current context is provided via an IHttpContextAccessor service, containing a single HttpContext property with both a getter and a setter. So, instead of directly injecting the context, the solution is based on injecting an accessor to the context.
This apparently superfluous indirection provides one benefit: the accessor can have singleton scope, meaning that it can be injected into singleton components.
Notice that injecting a per-HTTP-request dependency, such as the request message, directly into another component is only possible if the component has the same lifetime scope.

In the current ASP.NET Core 1.0.0 implementation, the IHttpContextAccessor service is implemented by the HttpContextAccessor concrete class and must be configured as a singleton.
 

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
}

Notice that this registration is not done by default and must be explicitly performed. If not, any IHttpContextAccessor dependency will result in an activation exception.
On the other hand, no additional configuration is needed to capture the context at the beginning of each request, because this is done automatically.

The following implementation details shed some light on this behavior:

  • Each time a new request starts to be handled, a common IHttpContextFactory reference is used to create the HttpContext. This common reference is obtained by the WebHost during startup and used for all requests.

  • The HttpContextFactory concrete implementation in use is initialized with an optional IHttpContextAccessor implementation. When one is available, each created context is assigned to this accessor. This means that if an accessor is registered in the services, it will automatically be used to set all created contexts.

  • How can the same accessor instance hold different contexts, one for each call flow? The answer lies in the HttpContextAccessor concrete implementation and its use of AsyncLocal to store the context separately for each logical call flow. It is this characteristic that allows a singleton scoped accessor to provide request scoped contexts, as the sketch below illustrates.
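
To make that last point concrete, here is a simplified sketch of the accessor's shape (not the exact framework source; the class name is mine):

using System.Threading;
using Microsoft.AspNetCore.Http;

// A singleton-friendly accessor: the AsyncLocal field stores one
// HttpContext per logical call flow, and the value flows across awaits.
public class HttpContextAccessorSketch : IHttpContextAccessor
{
    private static readonly AsyncLocal<HttpContext> _current =
        new AsyncLocal<HttpContext>();

    public HttpContext HttpContext
    {
        get { return _current.Value; }
        set { _current.Value = value; }
    }
}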

To conclude:

  • Everywhere the HTTP context is needed, declare an IHttpContextAccessor dependency and use it to fetch the context, as in the example after this list.

  • Don’t forget to explicitly register the IHttpContextAccessor interface on the service collection, mapping it to the concrete HttpContextAccessor type.

  • Also, don’t forget to make this registration with singleton scope.
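
For illustration, the following hypothetical singleton service applies these rules (the class name and the value it reads are invented for this example):

using Microsoft.AspNetCore.Http;

public class CurrentUserService
{
    private readonly IHttpContextAccessor _accessor;

    // Depending on the accessor (singleton scope) instead of the context
    // itself is what makes this service safe to register as a singleton.
    public CurrentUserService(IHttpContextAccessor accessor)
    {
        _accessor = accessor;
    }

    public string GetCurrentUserName()
    {
        // Fetch the context at call time; AsyncLocal guarantees this is
        // the context of the request currently being handled.
        return _accessor.HttpContext?.User?.Identity?.Name;
    }
}

Register it with services.AddSingleton<CurrentUserService>(); alongside the IHttpContextAccessor registration shown above.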



Pedro Félix: Should I PUT or should I POST? (Darling you gotta let me know)

(yes, it doesn’t rhyme; however, I couldn’t resist the association)

Selecting the proper methods (e.g. GET, POST, PUT, …) to use when designing HTTP-based APIs is typically a subject of much debate, and occasionally some bike-shedding. In this post I briefly present the rules that I normally follow when faced with this design task.

Don’t go against the HTTP specification

First and foremost, make sure the properties of the chosen methods aren’t violated in the scenario under analysis. The typical offender is using GET for an interaction that requests a state change on the server.
This is because GET is required to have the safe property, which RFC 7231 defines as

Request methods are considered “safe” if their defined semantics are essentially read-only; i.e., the client does not request, and does not expect, any state change on the origin server as a result of applying a safe method to a target resource.

Another example is choosing PUT for requests that aren’t idempotent, such as appending an item to a collection.
The idempotent property is defined by RFC 7231 as

A request method is considered “idempotent” if the intended effect on the server of multiple identical requests with that method is the same as the effect for a single such request.

Violating these properties is harmful because there may be system components whose correct behavior depends on these properties holding. An example is a crawler that freely follows all GET links in a document, assuming that these requests perform no state change, and that ends up changing the system state.

Another example is an intermediary (e.g. reverse proxy) that automatically retries any failed PUT request (e.g. timeout), assuming they are idempotent. If the PUT is appending items to a collection (append is not idempotent), and the first PUT request was successfully performed and only the response message was lost, then the retry will end up adding two replicated items to the collection.

This violation can also have security implications. For instance, most server frameworks don’t protect GET requests against CSRF (Cross-Site Request Forgery), because this method is not supposed to change state and reads are already protected by the browser’s same-origin policy.

Take advantage of the method properties

After ensuring correctness, i.e., that requests don’t violate any property of the chosen methods, we can reverse our analysis and check whether another method better fits the intended functionality. At this stage, our main concern is optimization.

For instance, if a request defines the complete state for a resource and is idempotent, perhaps a PUT is a better fit than a POST. This is not because a POST would produce incorrect behavior, but because using a PUT may induce better system properties. For example, an intermediary (e.g. reverse proxy or framework middleware) may automatically retry failed requests, thereby providing some fault recovery.
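
As a hypothetical illustration (the OrdersController, OrderDto, and in-memory store below are invented for this example), the same collection can expose both methods with the matching semantics:

using System.Collections.Concurrent;
using System.Threading;
using Microsoft.AspNetCore.Mvc;

public class OrderDto
{
    public string Item { get; set; }
}

[Route("api/orders")]
public class OrdersController : Controller
{
    // In-memory store, for illustration only.
    private static readonly ConcurrentDictionary<int, OrderDto> _orders =
        new ConcurrentDictionary<int, OrderDto>();
    private static int _nextId;

    // PUT api/orders/42: the client names the resource and supplies its
    // complete state, so repeating the request leaves the server unchanged.
    [HttpPut("{id}")]
    public IActionResult Put(int id, [FromBody] OrderDto order)
    {
        _orders[id] = order;
        return NoContent();
    }

    // POST api/orders: the server assigns the identity, so each identical
    // request appends a new order and must not be retried automatically.
    [HttpPost]
    public IActionResult Post([FromBody] OrderDto order)
    {
        var id = Interlocked.Increment(ref _nextId);
        _orders[id] = order;
        return Created("api/orders/" + id, order);
    }
}

A retrying intermediary can safely replay the PUT after a timeout; replaying the POST would create a duplicate order.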

When nothing else fits, use POST

Contrary to some HTTP myths, POST is not intended solely to create resources. In fact, RFC 7231 states

The POST method requests that the target resource process the representation enclosed in the request according to the resource’s own specific semantics

The “according to the resource’s own specific semantics” effectively allows us to use POST for requests with any semantics. However, the fact that we can doesn’t mean that we always should. Again, if another method (e.g. GET or PUT) better fits the request’s purpose, not choosing it may mean throwing away useful properties, such as caching or fault recovery.

Does my API look RESTful in this method?

One thing I always avoid is deciding based on the apparent “RESTfulness” of the method. For instance, an API doesn’t have to use PUT to be RESTful.

First and foremost we should think in terms of system properties and use HTTP accordingly. That implies:

  • Not violating its rules: what can go wrong if I choose PUT for this request?
  • Taking advantage of its benefits: what do I lose if I don’t choose PUT for this request?

Hope this helps.
Cheers.


