Andrew Lock: Retrieving the path that generated an error with the StatusCodePages Middleware


In my previous post, I showed how to use the re-execute features of the StatusCodePagesMiddleware to generate custom error pages for status-code errors. This allows you to easily create custom error pages for common error status codes like 404 or 500.


Re-executing the pipeline with UseStatusCodePagesWithReExecute is generally a better approach than using UseStatusCodePagesWithRedirects, as it generates the custom error page in the same request that caused it. This allows you to return the correct error code in response to the original request. Not only is this more 'correct' from an HTTP/SEO/semantic point of view, it also means the context of the original request is maintained when you generate the error page.

In this quick post, I show how you can use this context to obtain the original path that triggered the error status code when the middleware pipeline is re-executed.

Setting up the status code pages middleware

I'll start by adding the StatusCodePagesMiddleware as I did in my previous post. I'm using the same UseStatusCodePagesWithReExecute as before, and providing the error status code when the pipeline is re-executed using a statusCode querystring parameter:

public void Configure(IApplicationBuilder app)  
{
    app.UseDeveloperExceptionPage();

    app.UseStatusCodePagesWithReExecute("/Home/Error", "?statusCode={0}");

    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

The corresponding action method that gets invoked is:

public class HomeController : Controller  
{
    public IActionResult Error(int? statusCode = null)
    {
        if (statusCode.HasValue)
        {
            if (statusCode == 404 || statusCode == 500)
            {
                var viewName = statusCode.ToString();
                return View(viewName);
            }
        }
        return View();
    }
}

This gives me customised error pages for 404 and 500 status codes:


Retrieving the original error path

This technique lets you customise the response returned when a URL generates an error status code, but on occasion you may want to know the original path that actually caused the error. Following the re-execution flow from my previous post, I want to know the /Home/Problem URL when the HomeController.Error action is executing.

Luckily, the StatusCodePagesMiddleware stores a request-feature with the original path on the HttpContext. You can access it from the Features property:

public class HomeController : Controller  
{
    public IActionResult Error(int? statusCode = null)
    {
        var feature = HttpContext.Features.Get<IStatusCodeReExecuteFeature>();
        ViewData["ErrorUrl"] = feature?.OriginalPath;

        if (statusCode.HasValue)
        {
            if (statusCode == 404 || statusCode == 500)
            {
                var viewName = statusCode.ToString();
                return View(viewName);
            }
        }
        return View();
    }
}

Adding this to the Error method means you can display or log the path, depending on your needs:
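For instance, logging it might look something like this sketch (assuming an ILogger<HomeController> has been injected and stored as _logger):

public IActionResult Error(int? statusCode = null)
{
    var feature = HttpContext.Features.Get<IStatusCodeReExecuteFeature>();
    if (feature != null)
    {
        // Record the path that originally triggered the error status code
        _logger.LogWarning(
            "Status code {StatusCode} returned for path {OriginalPath}",
            statusCode, feature.OriginalPath);
    }
    ViewData["ErrorUrl"] = feature?.OriginalPath;

    // ...rest of the action as before
}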


Note that I've used the null propagator syntax ?. to retrieve the path, as the feature will only be added if the StatusCodePagesMiddleware is re-executing the pipeline. This will avoid any null reference exceptions if the action is executed without using the StatusCodePagesMiddleware, for example by directly requesting /Home/Error?statusCode=404:


Retrieving additional information

The StatusCodePagesMiddleware sets an IStatusCodeReExecuteFeature on the HttpContext when it re-executes the pipeline. This interface exposes two properties: the original path, as you have already seen, and the original path base:

public interface IStatusCodeReExecuteFeature  
{
    string OriginalPathBase { get; set; }
    string OriginalPath { get; set; }
}

The one property it doesn't (currently) expose is the original querystring. However, the concrete type that is actually set by the middleware is StatusCodeReExecuteFeature. This contains an additional property, OriginalQueryString:

public class StatusCodeReExecuteFeature : IStatusCodeReExecuteFeature  
{
    public string OriginalPathBase { get; set; }
    public string OriginalPath { get; set; }
    public string OriginalQueryString { get; set; }
}

If you're willing to add some coupling to this implementation in your code, you can access these properties by safely casting the IStatusCodeReExecuteFeature to a StatusCodeReExecuteFeature. For example:

var feature = HttpContext.Features.Get<IStatusCodeReExecuteFeature>();  
var reExecuteFeature = feature as StatusCodeReExecuteFeature;  
ViewData["ErrorPathBase"] = reExecuteFeature?.OriginalPathBase;  
ViewData["ErrorQuerystring"] = reExecuteFeature?.OriginalQueryString;  

This lets you display or log the complete path that gave you the error, including the querystring.
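If needed, the pieces can be recombined into the complete original URL, for example:

var originalUrl = reExecuteFeature == null
    ? null
    : reExecuteFeature.OriginalPathBase
      + reExecuteFeature.OriginalPath
      + reExecuteFeature.OriginalQueryString;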


Note: If you look at the dev branch in the Diagnostics GitHub repo, you'll notice that the interface actually does contain OriginalQueryString. This will be coming with .NET Core 2.0 / ASP.NET Core 2.0, as it is a breaking change. It'll make the above scenario that little bit easier though.

Summary

The StatusCodePagesMiddleware is just one of the pieces needed to provide graceful handling of errors in your application. The re-execute approach is a great way to include custom layouts in your application, but it can obscure the origin of the error. Obviously, logging the error where it is generated provides the best context, but the IStatusCodeReExecuteFeature can be useful for easily retrieving the source of the error when generating the final response.


Damien Bowden: .NET Core, ASP.NET Core logging with NLog and PostgreSQL

This article shows how .NET Core or ASP.NET Core applications can log to a PostgreSQL database using NLog.

Code: https://github.com/damienbod/AspNetCoreNlog

Other posts in this series:

  1. ASP.NET Core logging with NLog and Microsoft SQL Server
  2. ASP.NET Core logging with NLog and Elasticsearch
  3. Settings the NLog database connection string in the ASP.NET Core appsettings.json
  4. .NET Core logging to MySQL using NLog
  5. .NET Core logging with NLog and PostgreSQL

Setting up PostgreSQL

pgAdmin can be used to set up the PostgreSQL database which is used to save the logs. A log database was created for this demo, which matches the connection string in the nlog.config file.

Using pgAdmin, open a query editor view and execute the following script to create a table in the log database.

CREATE TABLE logs
( 
    Id serial primary key,
    Application character varying(100) NULL,
    Logged text,
    Level character varying(100) NULL,
    Message character varying(8000) NULL,
    Logger character varying(8000) NULL, 
    Callsite character varying(8000) NULL, 
    Exception character varying(8000) NULL
)

At present it is not possible to log a date property to PostgreSQL using NLog; only text fields are supported. A GitHub issue exists for this here. Due to this, the Logged field is defined as text, and uses the DateTime value when the log is created.

.NET or ASP.NET Core Application

The required packages need to be added to the csproj file. For an ASP.NET Core application, add NLog.Web.AspNetCore and Npgsql; for a .NET Core application, add NLog and Npgsql.

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
    <AssemblyName>ConsoleNLogPostgreSQL</AssemblyName>
    <OutputType>Exe</OutputType>
    <PackageId>ConsoleNLog</PackageId>
    <PackageTargetFallback>$(PackageTargetFallback);dotnet5.6;portable-net45+win8</PackageTargetFallback>
    <GenerateAssemblyConfigurationAttribute>false</GenerateAssemblyConfigurationAttribute>
    <GenerateAssemblyCompanyAttribute>false</GenerateAssemblyCompanyAttribute>
    <GenerateAssemblyProductAttribute>false</GenerateAssemblyProductAttribute>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.Extensions.Configuration.EnvironmentVariables" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Configuration.FileExtensions" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Logging" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Logging.Console" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Options.ConfigurationExtensions" Version="1.1.1" />
    <PackageReference Include="NLog.Web.AspNetCore" Version="4.3.1" />
    <PackageReference Include="Npgsql" Version="3.2.2" />
    <PackageReference Include="System.Data.SqlClient" Version="4.3.0" />
  </ItemGroup>
</Project>

Or use the NuGet package manager in Visual Studio 2017.

The nlog.config file is then set up to log to PostgreSQL using the database target, with the dbProvider configured for Npgsql and the connectionString pointing at the required instance of PostgreSQL. The commandText must match the table created by the SQL script above. If you add extra properties to the logs, for example from the NLog.Web.AspNetCore package, these also need to be added here.

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      internalLogLevel="Warn"
      internalLogFile="C:\git\damienbod\AspNetCoreNlog\Logs\internal-nlog.txt">
  
  <targets>
    <target xsi:type="File" name="allfile" fileName="${var:configDir}\nlog-all.log"
                layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|${message} ${exception}" />

    <target xsi:type="File" name="ownFile-web" fileName="${var:configDir}\nlog-own.log"
             layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|  ${message} ${exception}" />

    <target xsi:type="Null" name="blackhole" />

    <target name="database" xsi:type="Database"
              dbProvider="Npgsql.NpgsqlConnection, Npgsql"
              connectionString="User ID=damienbod;Password=damienbod;Host=localhost;Port=5432;Database=log;Pooling=true;"
             >

          <commandText>
              insert into logs (
              Application, Logged, Level, Message,
              Logger, CallSite, Exception
              ) values (
              @Application, @Logged, @Level, @Message,
              @Logger, @Callsite, @Exception
              );
          </commandText>

          <parameter name="@application" layout="AspNetCoreNlog" />
          <parameter name="@logged" layout="${date}" />
          <parameter name="@level" layout="${level}" />
          <parameter name="@message" layout="${message}" />

          <parameter name="@logger" layout="${logger}" />
          <parameter name="@callSite" layout="${callsite:filename=true}" />
          <parameter name="@exception" layout="${exception:tostring}" />
      </target>
      
  </targets>

  <rules>
    <!--All logs, including from Microsoft-->
    <logger name="*" minlevel="Trace" writeTo="allfile" />
      
    <logger name="*" minlevel="Trace" writeTo="database" />
      
    <!--Skip Microsoft logs and so log only own logs-->
    <logger name="Microsoft.*" minlevel="Trace" writeTo="blackhole" final="true" />
    <logger name="*" minlevel="Trace" writeTo="ownFile-web" />
  </rules>
</nlog>

When using ASP.NET Core, the NLog.Web.AspNetCore can be added to the nlog.config file to use the extra properties provided here.

<extensions>
     <add assembly="NLog.Web.AspNetCore"/>
</extensions>
            

Using the log

The logger can be used directly via the LogManager, or NLog can be added to the logging configuration in the Startup class of an ASP.NET Core application.

Basic example:

LogManager.Configuration.Variables["configDir"] = "C:\\git\\damienbod\\AspNetCoreNlog\\Logs";

var logger = LogManager.GetLogger("console");
logger.Warn("console logging is great");
logger.Error(new ArgumentException("oh no"));

Startup configuration in an ASP.NET Core application:

public void ConfigureServices(IServiceCollection services)
{
	services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
	// Add framework services.
	services.AddMvc();

	services.AddScoped<LogFilter>();
}

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	loggerFactory.AddNLog();

	//add NLog.Web
	app.AddNLogWeb();

	////foreach (DatabaseTarget target in LogManager.Configuration.AllTargets.Where(t => t is DatabaseTarget))
	////{
	////	target.ConnectionString = Configuration.GetConnectionString("NLogDb");
	////}
	
	////LogManager.ReconfigExistingLoggers();

	LogManager.Configuration.Variables["connectionString"] = Configuration.GetConnectionString("NLogDb");
	LogManager.Configuration.Variables["configDir"] = "C:\\git\\damienbod\\AspNetCoreNlog\\Logs";

	app.UseMvc();
}

When the application is run, the logs are added to the database.
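Once NLog is hooked into the ILoggerFactory, anything written through the standard ILogger abstraction also flows to the database target. A minimal sketch of a controller using it:

public class HomeController : Controller
{
    private readonly ILogger<HomeController> _logger;

    public HomeController(ILogger<HomeController> logger)
    {
        _logger = logger;
    }

    public IActionResult Index()
    {
        // This entry ends up in the file targets and in the PostgreSQL logs table
        _logger.LogWarning("Index page was requested");
        return View();
    }
}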

Links

https://www.postgresql.org/

https://www.pgadmin.org/

https://github.com/nlog/NLog/wiki/Database-target

https://github.com/NLog/NLog.Extensions.Logging

https://github.com/NLog

https://docs.asp.net/en/latest/fundamentals/logging.html

https://msdn.microsoft.com/en-us/magazine/mt694089.aspx

https://docs.asp.net/en/latest/fundamentals/configuration.html



Andrew Lock: Re-execute the middleware pipeline with the StatusCodePages Middleware to create custom error pages


By default, the ASP.NET Core templates include either the ExceptionHandlerMiddleware or the DeveloperExceptionPage. Both of these catch exceptions thrown by the middleware pipeline, but they don't handle error status codes that are returned by the pipeline (without throwing an exception). For that, there is the StatusCodePagesMiddleware.

There are a number of ways to use the StatusCodePagesMiddleware but in this post I will be focusing on the version that re-executes the pipeline.

Default Status Code Pages

I'll start with the default MVC template, but I'll add a helper method for returning a 500 error:

public class HomeController : Controller  
{
    public IActionResult Problem()
    {
        return StatusCode(500);
    }  
}

To start with, I'll just add the default StatusCodePagesMiddleware implementation:

public void Configure(IApplicationBuilder app)  
{
    app.UseDeveloperExceptionPage();

    app.UseStatusCodePages();

    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

With this in place, making a request to an unknown URL gives the following response:


The default StatusCodePagesMiddleware implementation will return the simple text response when it detects a status code between 400 and 599. Similarly, if you make a request to /Home/Problem, invoking the helper action method, then the 500 status code text is returned.


Re-execute vs Redirect

In reality, it's unlikely you'll want to use status code pages with this default setting in anything but a development environment. If you want to intercept status codes in production and return custom error pages, you'll want to use one of the alternative extension methods that use redirects or pipeline re-execution to return a user-friendly page:

  • UseStatusCodePagesWithRedirects
  • UseStatusCodePagesWithReExecute

These two methods have a similar outcome, in that they allow you to generate user-friendly custom error pages when an error occurs on the server. Personally, I would suggest always using the re-execute extension method rather than redirects.

The problem with redirects for error pages is that they somewhat abuse the return codes of HTTP, even though the end result for a user is essentially the same. With the redirect method, when an error occurs the pipeline returns a 302 response to the user, with a redirect to the provided error path. This causes a second request to be made to the URL that generates the custom error page, which then returns a 200 OK code for that second request.
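For comparison, the redirect-based version is registered in almost exactly the same way; only the behaviour differs:

app.UseStatusCodePagesWithRedirects("/Home/Error?statusCode={0}");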


Semantically this isn't really correct, as you're triggering a second response, and ultimately returning a success code when an error actually occurred. This could also cause issues for SEO. By re-executing the pipeline you keep the correct (error) status code, you just return user-friendly HTML with it.


You are still in the context of the original request, but the whole pipeline after the StatusCodePagesMiddleware is executed a second time. The content generated by this second execution is combined with the original status code to produce the final response that gets sent to the user. This provides a workflow that is overall more semantically correct, and means you don't completely lose the context of the original request.

Adding re-execute to your pipeline

Hopefully you're swayed by the re-execute approach; luckily it's easy to add this capability to your middleware pipeline. I'll start by updating the Startup class to use the re-execute extension instead of the basic one.

public void Configure(IApplicationBuilder app)  
{
    app.UseDeveloperExceptionPage();

    app.UseStatusCodePagesWithReExecute("/Home/Error", "?statusCode={0}");

    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

Note that the order of middleware in the pipeline is important. The StatusCodePagesMiddleware should be one of the earliest middleware in the pipeline, as it can only modify the response of middleware that comes after it in the pipeline.

There are two arguments to the UseStatusCodePagesWithReExecute method. The first is the path that will be used to re-execute the request in the pipeline, and the second is the querystring that will be appended to it.

Both of these values can include a placeholder {0}, which will be replaced with the status code integer (e.g. 404, 500) when the pipeline is re-executed. This allows you to either execute different action methods depending on the error that occurred, or to have a single method that can handle multiple errors.
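As an illustration of the former approach, the placeholder can go in the path itself, so that each status code re-executes a different action (a hypothetical ErrorsController using attribute routing):

app.UseStatusCodePagesWithReExecute("/errors/{0}");

[Route("errors")]
public class ErrorsController : Controller
{
    [Route("404")]
    public IActionResult PageNotFound() => View("404");

    [Route("500")]
    public IActionResult ServerError() => View("500");
}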

The following example takes the latter approach, using a single action method to handle all the error status codes, but with special cases for 404 and 500 errors provided in the querystring:

public class HomeController : Controller  
{
    public IActionResult Error(int? statusCode = null)
    {
        if (statusCode.HasValue)
        {
            if (statusCode == 404 || statusCode == 500)
            {
                var viewName = statusCode.ToString();
                return View(viewName);
            }
        }
        return View();
    }
}

When a 404 is generated (by an unknown path for example) the status code middleware catches it, and re-executes the pipeline using /Home/Error?statusCode=404. The Error action is invoked, and executes the 404.cshtml template:


Similarly, a 500 error is special cased:


Any other error executes the default Error.cshtml template:


Summary

Congratulations, you now have custom error pages in your ASP.NET Core application. This post shows how simple it is to achieve by re-executing the pipeline. I strongly recommend you use this approach instead of trying to use the redirects overload. In the next post, I'll show how you can obtain the original URL that triggered the error code during the second pipeline execution.


Anuraj Parameswaran: Working with dependencies in dotnet core

This post is about working with NuGet dependencies and project references in ASP.NET Core or .NET Core. In earlier versions of .NET Core, you could add dependencies by modifying the project.json file directly and project references via global.json. This post shows how to do this better with the dotnet add command.


Andrew Lock: Deconstructors for non-tuple types in C# 7.0


As well as finally seeing the RTM of the .NET Core tooling, Visual Studio 2017 brought a whole host of new things to the table. Among these is C# 7.0, which introduces a number of new features to the language.

Many of these features are essentially syntactic sugar over things that were already possible, but were harder work or more cumbersome in earlier versions of the language. Tuples feel like one of those features that I'm going to end up using quite a lot.

Deconstructing tuples

Often you'll find that you want to return more than one value from a method. There are a number of ways you can currently achieve this (out parameters, System.Tuple, a custom class) but none of them are particularly smooth. If you really are just returning two pieces of data, without any associated behaviour, then the new tuples added in C# 7 are a great fit.

I won't go into much detail on tuples here, so I suggest you check out one of the many recent articles introducing the feature if they're new to you. I'm just going to look at one of the associated features of tuples - the ability to deconstruct them.

In the following example, the method GetUser() returns a tuple consisting of an integer and a string:

(int id, string name) GetUser()
{
    return (123, "andrewlock");
}

If I call this method from my code, I can access the id and name values by name - so much cleaner than out parameters or the Item1, Item2 of System.Tuple.
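In other words, calling the method and using the named values directly:

var user = GetUser();
Console.WriteLine($"The user with id {user.id} is {user.name}");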


Another feature is the ability to automatically deconstruct the tuple values into separate variables. So for example, I could do:

(var userId, var username) = GetUser();
Console.WriteLine($"The user with id {userId} is {username}");  

This creates two variables, an integer called userId and a string called username. The tuple has been automatically deconstructed into these two variables.

Deconstructing non-tuples

This feature is great, but it is actually not limited to just tuples - you can add deconstructors to all your classes!

The following example shows a User class with a deconstructor that returns the FirstName and LastName properties:

public class User  
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
    public string Email { get; set; }

    public void Deconstruct(out string firstName, out string lastName)
    {
        firstName = FirstName;
        lastName = LastName;
    }
}

With this in place I can deconstruct any User object:

var user = new User  
{
    FirstName = "Joe",
    LastName = "Bloggs",
    Email = "joe.bloggs@example.com",
    Age = 23
};

(var firstName, var lastName) = user;

Console.WriteLine($"The user's name is {firstName} {lastName}");  
// The user's name is Joe Bloggs

We are creating a User object, and then deconstructing it into the firstName and lastName variables, which are declared as part of the deconstruction (they don't have to be declared inline, you can use existing variables too, as the next snippet shows).
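For example, deconstructing into existing variables looks like this:

string firstName, lastName;
(firstName, lastName) = user;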

To create a deconstructor, create a function of the following form:

public void Deconstruct(out T1 var1, ..., out TN varN);  

The values that are produced are declared as out parameters. You can have as many arguments as you like, the caller just needs to provide the correct number of variables when calling the deconstructor. You can even have multiple overloads with different numbers of parameters:

public class User  
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
    public string Email { get; set; }

    public void Deconstruct(out string firstName, out string lastName)
    {
        firstName = FirstName;
        lastName = LastName;
    }

    public void Deconstruct(out string firstName, out string lastName, out int age)
    {
        firstName = FirstName;
        lastName = LastName;
        age = Age;
    }
}

The same user could be deconstructed in multiple ways, depending on the needs of the caller:

(var firstName1, var lastName1) = user;
(var firstName2, var lastName2, var age) = user;

Ambiguous overloads

One thing that might cross your mind is what happens if you have multiple overloads with the same number of parameters. In the following example I add an additional deconstructor that also accepts three parameters, where the third parameter is a string rather than an int:

public partial class User  
{
    // remainder of class as before

    public void Deconstruct(out string firstName, out string lastName, out string email)
    {
        firstName = FirstName;
        lastName = LastName;
        email = Email;
    }
}

This code compiles, but if you try and actually deconstruct the object you'll get some red squigglies:


At first this seems like it's just a standard C# type inference error - there are two candidate method calls so you need to disambiguate between them by providing explicit types instead of var. However, even explicitly declaring the type won't clear this one up:


You'll still get the following error:

The call is ambiguous between the following methods or properties: 'Program.User.Deconstruct(out string, out string, out int)' and 'Program.User.Deconstruct(out string, out string, out string)'  

So make sure not to create multiple Deconstruct overloads in a type with the same number of parameters!

Bonus: Predefined type 'System.ValueTuple`2' is not defined or imported

When you first start using tuples, you might get this confusing error:

Predefined type 'System.ValueTuple`2' is not defined or imported  

But don't panic, you just need to add the System.ValueTuple NuGet package to your project, and all will be good again:


Summary

This was just a quick look at the deconstruction feature that came in C# 7.0. For a more detailed look, check out some of the links below:


Andrew Lock: Preventing mass assignment or over posting in ASP.NET Core


Mass assignment, also known as over-posting, is an attack used on websites that involve some sort of model-binding to a request. It is used to set values on the server that a developer did not expect to be set. This is a well-known attack now, and has been discussed many times before (it was famously used against GitHub some years ago), but I wanted to go over some of the ways to prevent falling victim to it in your ASP.NET Core applications.

How does it work?

Mass assignment typically occurs during model binding as part of MVC. A simple example would be where you have a form on your website in which you are editing some data. You also have some properties on your model which are not editable as part of the form, but instead are used to control the display of the form, or may not be used at all.

For example, consider this simple model:

public class UserModel  
{
    public string Name { get; set; }
    public bool IsAdmin { get; set; }
}

It has two properties, but we are only going to allow the user to edit the Name property - the IsAdmin property is just used to control the markup they see:

@model UserModel

<form asp-action="Vulnerable" asp-controller="Home">  
    <div class="form-group">
        <label asp-for="Name"></label>
        <input class="form-control" type="text" asp-for="Name" />
    </div>
    <div class="form-group">
        @if (Model.IsAdmin)
        {
            <i>You are an admin</i>
        }
        else
        {
            <i>You are a standard user</i>
        }
    </div>
    <button class="btn btn-sm" type="submit">Submit</button>
</form>  

So the idea here is that you only render a single input tag to the markup, but you post this to a method that uses the same model as you used for rendering:

[HttpPost]
public IActionResult Vulnerable(UserModel model)  
{
    return View("Index", model);
}

This might seem OK - in the normal browser flow, a user can only edit the Name field. When they submit the form, only the Name field will be sent to the server. When model binding occurs on the model parameter, the IsAdmin field will be unset, and the Name will have the correct value:


However, with a simple bit of HTML manipulation, or by using Postman/Fiddler, a malicious user can set the IsAdmin field to true. The model binder will dutifully bind the value, and you have just fallen victim to mass assignment/over-posting.
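For example, a forged request could be produced with a few lines of HttpClient code (the URL here is purely illustrative):

var client = new HttpClient();
var forgedValues = new Dictionary<string, string>
{
    ["Name"] = "Attacker",
    ["IsAdmin"] = "true" // a field the form never rendered
};
var response = await client.PostAsync(
    "http://localhost:5000/Home/Vulnerable",
    new FormUrlEncodedContent(forgedValues));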


Defending against the attack

So how can you prevent this attack? Luckily there's a whole host of different ways, and they are generally the same as the approaches you could use in the previous version of ASP.NET. I'll run through a number of your options here.

1. Use BindAttribute on the action method

Seeing as the vulnerability is due to model binding, our first option is to use the BindAttribute:

public IActionResult Safe1([Bind(nameof(UserModel.Name))] UserModel model)  
{
    return View("Index", model);
}

The BindAttribute lets you whitelist only those properties which should be bound from the incoming request. In our case, we have specified just Name, so even if a user provides a value for IsAdmin, it will not be bound. This approach works, but is not particularly elegant, as it requires you to specify all the properties that you want to bind.

2. Use [Editable] or [BindNever] on the model

Instead of applying binding directives in the action method, you could use DataAnnotations on the model instead. DataAnnotations are often used to provide additional metadata on a model for both generating appropriate markup and for validation.

For example, our UserModel might actually be already decorated with some data annotations for the Name property:

public class UserModel  
{
    [MaxLength(200)]
    [Display(Name = "Full name")]
    [Required]
    public string Name { get; set; }

    [Editable(false)]
    public bool IsAdmin { get; set; }
}

Notice that as well as the Name attributes, I have also added an EditableAttribute. This will be respected by the model binder when the post is made, so an attempt to post to IsAdmin will be ignored.

The problem with this one is that although applying the EditableAttribute to the IsAdmin property produces the correct output, it may not be semantically correct in general. What if you can edit the IsAdmin property in some cases? Things can just get a little messy sometimes.

As pointed out by Hamid in the comments, the [BindNever] attribute is a better fit here. Using [BindNever] in place of [Editable(false)] will prevent binding without additional implications.
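With that change, the model would look something like this:

public class UserModel  
{
    [MaxLength(200)]
    [Display(Name = "Full name")]
    [Required]
    public string Name { get; set; }

    [BindNever]
    public bool IsAdmin { get; set; }
}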

3. Use two different models

Instead of trying to retrofit safety to our models, often the better approach is conceptually a more simple one. That is to say that our binding/input model contains different data to our view/output model. Yes, they both have a Name property, but they are encapsulating different parts of the system so it could be argued they should be two different classes:

public class BindingModel  
{
    [MaxLength(200)]
    [Display(Name = "Full name")]
    [Required]
    public string Name { get; set; }
}

public class UserModel  
{
    [MaxLength(200)]
    [Display(Name = "Full name")]
    [Required]
    public string Name { get; set; }

    [Editable(false)]
    public bool IsAdmin { get; set; }
}

Here our BindingModel is the model actually provided to the action method during model binding, while the UserModel is the model used by the View during HTML generation:

public IActionResult Safe3(BindingModel bindingModel)  
{
    var model = new UserModel();

    // can be simplified using AutoMapper
    model.Name = bindingModel.Name;

    return View("Index", model);
}

Even if the IsAdmin property is posted, it will not be bound as there is no IsAdmin property on BindingModel. The obvious disadvantage to this simplistic approach is the duplication this brings, especially when it comes to the data annotations used for validation and input generation. Any time you need to, for example, update the max string length, you need to remember to do it in two different places.

This brings us on to a variant of this approach:

4. Use a base class

Where you have common properties like this, an obvious choice would be to make one of the models inherit from the other, like so:

public class BindingModel  
{
    [MaxLength(200)]
    [Display(Name = "Full name")]
    [Required]
    public string Name { get; set; }
}

public class UserModel : BindingModel  
{
    public bool IsAdmin { get; set; }
}

This approach keeps your models safe from mass assignment attacks by using different models for model binding and for View generation. But compared to the previous approach, you keep your validation logic DRY.

public IActionResult Safe4(BindingModel bindingModel)  
{
    // do something with the binding model
    // when ready to display HTML, create a new view model
    var model = new UserModel();

    // can be simplified using e.g. AutoMapper
    model.Name = bindingModel.Name;

    return View("Index", model);
}

There is also a variation of this approach which keeps your models completely separate, but allows you to avoid duplicating all your data annotation attributes by using the ModelMetadataTypeAttribute.

5. Use ModelMetadataTypeAttribute

The purpose of this attribute is to allow you to defer all the data annotations and additional metadata about your model to a different class. If you want to keep your BindingModel and UserModel hierarchically distinct, but also don't want to duplicate all the [MaxLength(200)] attributes etc., you can use this approach:

[ModelMetadataType(typeof(UserModel))]
public class BindingModel  
{
    public string Name { get; set; }
}

public class UserModel  
{
    [MaxLength(200)]
    [Display(Name = "Full name")]
    [Required]
    public string Name { get; set; }

    public bool IsAdmin { get; set; }
}

Note that only the UserModel contains any metadata attributes, and that there is no class hierarchy between the models. However the MVC model binder will use the metadata of the equivalent properties in the UserModel when binding or validating the BindingModel.

The main thing to be aware of here is that there is an implicit contract between the two models now - if you were to rename Name on the UserModel, the BindingModel would no longer have a matching contract. There wouldn't be an error, but the validation attributes would no longer be applied to BindingModel.

Summary

This was a very quick run down of some of the options available to you to prevent mass assignment. Which approach you take is up to you, though I would definitely suggest using one of the latter 2-model approaches. There are other options too, such as doing explicit binding via TryUpdateModelAsync<> but the options I've shown represent some of the most common approaches. Whatever you do, don't just blindly bind your view models if you have properties that should not be edited by a user, or you could be in for a nasty surprise.
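As a rough sketch of that explicit-binding option (the Safe6 action name is just for illustration), you create the model yourself and whitelist the properties to bind:

[HttpPost]
public async Task<IActionResult> Safe6()  
{
    var model = new UserModel();

    // Only the Name property is bound from the request
    await TryUpdateModelAsync(model, "", m => m.Name);

    return View("Index", model);
}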

And whatever you do, don't bind directly to your EntityFramework models. Pretty please.


Andrew Lock: Git integration improvements in Visual Studio 2017 - git-hooks


Visual Studio 2017 includes a whole host of improvements to its Git integration. Among other things, SSH support is built in, you can push --force-with-lease, and easily diff commits.

Some of these improvements are due to a switch in the way the Git integration works - instead of relying on libgit, VS2017 has switched to using Git.exe for all your Git interactions. That might seem like a minor change, but it actually solves a number of the issues I had in using git in my daily VS2015 work.

Git integration in Visual Studio

For those that don't know, Visual Studio has come with built-in support for Git repositories for some time. You can do a lot from within VS, including staging and committing obviously, but also merging, rebasing, managing branches (both local and remote), viewing the commit history and a whole host of other options.


I know the concept of a git UI is an abomination for some people, but for some things I really like it. Don't get me wrong, I'm very comfortable at the command line, but the built in diff view in VS works really well for me, and sometimes it's just handy to stay in your IDE.

One of the windows I use the most is the Changes window within Team Explorer. This shows you all the files you have waiting to be committed - it's essentially a visual git status. Having that there while I'm working is great, and easily lets me flick back to a file I was editing. I find it sometimes easier to work with than ctrl + Tabbing through the sea of tabs I inevitably have open:


Git integration limitations in VS 2015

Generally speaking, the changes window has served me well, but there were a couple of niggles. One of the problems I often ran into was when you have files in your repo that are not part of the Visual Studio solution. Normally, hitting Commit All should be equivalent to running git commit -am "My commit message", i.e. it should commit all pending changes. However, occasionally I find that it leaves out some files that are part of the repository but not part of the solution. I've seen it do it with Word files in particular.

By calling out to the underlying git.exe executable, you can be sure that the details shown in the Changes window match those you'd get from git status; a much smoother experience.

Another feature that works from the command line, but did not work from the Changes window in VS 2015, is client-side git-hooks.

Using git hooks in Visual Studio 2017

I dabbled with using git-hooks on the client side a while back. These run in your checked-out repository, rather than on the git server. You can use them for a whole variety of things, but a common reason is to validate and enforce the format of commit messages before the commit is actually made. These hooks aren't fool-proof, and they aren't installed by default when you clone a repository, but they can be handy none the less.

An occasional requirement for commit messages is that they should always start with an issue number. For example, if your issue tracker, such as JIRA, produces issues prefixed with EV-, e.g. EV-123 or EV-345, you might require that all commit messages start with such an EV- issue label to ensure commits are tracked correctly.

If you create a commit-msg file inside the .git/hooks directory of your repository, then you can create a file that validates the format of your commit message before the commit is made. For example, I used this simple script to run a regex on the commit message to check it starts with an issue number:

#!C:/Program\ Files/Git/usr/bin/sh.exe

COMMIT_MESSAGE_FILE=$1  
COMMIT_MESSAGE_LINE1=$(head -n 1 $COMMIT_MESSAGE_FILE)  
ERR_MSG='Aborting commit. Your commit message is missing a JIRA Issue (''EV-1111'')'

MATCH_RESULT=$(echo $COMMIT_MESSAGE_LINE1 | grep -E '^EV-[[:digit:]]+.*')

if [[ ! -n "$MATCH_RESULT" ]]; then  
    echo "$ERR_MSG" >&2
    exit 1
fi

exit 0  

You can also use PowerShell and other scripting languages if you like and have them available. The commit-msg file above is specific to my Windows machine and location of Git.

With this file in place, when you try and make a commit, it will be rejected with the message:

Aborting commit. Your commit message is missing a JIRA Issue (''EV-1111'')  


Good, if I forget to add an issue, the message will let me know.

This might seem like a handy feature, but the big problem I had was VS 2015's use of libgit. Unfortunately, this doesn't support git hooks, which means that all our good work was for nought. VS 2015 would just ignore the hook and commit the files anyway. doh.

Enter VS 2017. With no other changes, when I click 'Commit All' from the Changes dialog, I get the warning message, and the commit is aborted!


Once I've fixed my commit message, I can happily commit without leaving my IDE.


This is just one of a whole plethora of updates to VS 2017, but as someone who uses the git integration a fair amount, it's definitely a welcome one.


Damien Bowden: ASP.NET Core Error Management with elmah.io

This article shows how to use elmah.io error management with an ASP.NET Core application. The error and log data is added to elmah.io using different elmah.io NuGet packages, directly from ASP.NET Core and also using an NLog elmah.io target.

Code: https://github.com/damienbod/AspNetCoreElmah

elmah.io is an error management system which can help you monitor, find and fix application problems fast. While structured logging is supported, the main focus of elmah.io is handling errors.

Getting started with Elmah.Io

Before you can start logging to elmah.io, you need to create an account and setup a log. Refer to the documentation here.

Logging exceptions, errors with Elmah.Io.AspNetCore and Elmah.Io.Extensions.Logging

You can add logs, exceptions to elmah.io directly from an ASP.NET Core application using the Elmah.Io.AspNetCore and the Elmah.Io.Extensions.Logging nuget packages. These packages can be added to the project using the nuget package manager.

Or you can just add the packages directly in the csproj file.

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

  <PropertyGroup>
    <UserSecretsId>AspNetCoreElmah-c23d2237a4-eb8832a1-452ac4</UserSecretsId>
  </PropertyGroup>
  
  <ItemGroup>
    <Content Include="wwwroot\index.html" />
  </ItemGroup>
  <ItemGroup>
    <PackageReference Include="Elmah.Io.AspNetCore" Version="3.2.39-pre" />
    <PackageReference Include="Elmah.Io.Extensions.Logging" Version="3.1.22-pre" />
    <PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.0.0" />
    <PackageReference Include="Microsoft.AspNetCore" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.2" />
    <PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Configuration.UserSecrets" Version="1.1.1" />
  </ItemGroup>
  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="1.0.0" />
  </ItemGroup>

</Project>

The Elmah.Io.AspNetCore package is used to catch unhandled exceptions in the application. This is configured in the Startup class. The OnMessage method is used to set specific properties in the messages which are sent to elmah.io. Setting the Hostname and the Application properties are very useful when evaluating the logs in elmah.io.

app.UseElmahIo(
	_elmahAppKey, 
	new Guid(_elmahLogId),
	new ElmahIoSettings()
	{
		OnMessage = msg =>
		{
			msg.Version = "1.0.0";
			msg.Hostname = "dev";
			msg.Application = "AspNetCoreElmah";
		}
	});

The Elmah.Io.Extensions.Logging package is used to log messages using the built-in ILoggerFactory. You should only send warning, error and critical messages, and not just log everything to elmah.io, although it is possible to do so. Again, the OnMessage method can be used to set the Hostname and the Application name for each log.

loggerFactory.AddElmahIo(
	_elmahAppKey, 
	new Guid(_elmahLogId), 
	new FilterLoggerSettings
	{
		{"ValuesController", LogLevel.Information}
	},
	new ElmahIoProviderOptions
	{
		OnMessage = msg =>
		{
			msg.Version = "1.0.0";
			msg.Hostname = "dev";
			msg.Application = "AspNetCoreElmah";
		}
	});

Using User Secrets for the elmah.io API-KEY and LogID

ASP.NET Core user secrets can be used to store the elmah.io API-KEY and the LogID, as you don't want to commit these to your source. The AddUserSecrets method is used to add the secrets store to the configuration.

private string _elmahAppKey;
private string _elmahLogId;

public Startup(IHostingEnvironment env)
{
	var builder = new ConfigurationBuilder()
		.SetBasePath(env.ContentRootPath)
		.AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
		.AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
		.AddEnvironmentVariables();

	if (env.IsDevelopment())
	{
		builder.AddUserSecrets("AspNetCoreElmah-c23d2237a4-eb8832a1-452ac4");
	}

	Configuration = builder.Build();
}

The user secret properties can then be used in the ConfigureServices method.

 public void ConfigureServices(IServiceCollection services)
{
	_elmahAppKey = Configuration["ElmahAppKey"];
	_elmahLogId = Configuration["ElmahLogId"];
	// Add framework services.
	services.AddMvc();
}

A dummy exception is thrown in this example, which then sends the data to elmah.io.

[HttpGet("{id}")]
public string Get(int id)
{
	throw new System.Exception("something terrible bad here!");
	return "value";
}

Logging exceptions, errors to elmah.io using NLog

NLog using the Elmah.Io.NLog target can also be used in ASP.NET Core to send messages to elmah.io. This can be added using the nuget package manager.

Or you can just add it to the csproj file.

<PackageReference Include="Elmah.Io.NLog" Version="3.1.28-pre" />
<PackageReference Include="NLog.Web.AspNetCore" Version="4.3.1" />

NLog for ASP.NET Core applications can be configured in the Startup class. You need to set the target properties with the elmah.io API-KEY and also the LogId. You could also do this in the nlog.config file.

loggerFactory.AddNLog();
app.AddNLogWeb();

LogManager.Configuration.Variables["configDir"] = "C:\\git\\damienbod\\AspNetCoreElmah\\Logs";

foreach (ElmahIoTarget target in LogManager.Configuration.AllTargets.Where(t => t is ElmahIoTarget))
{
	target.ApiKey = _elmahAppKey;
	target.LogId = _elmahLogId;
}

LogManager.ReconfigExistingLoggers();

The IHttpContextAccessor and the HttpContextAccessor also need to be registered to the default IoC in ASP.NET Core to get the extra information from the web requests.

public void ConfigureServices(IServiceCollection services)
{
	services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();

	_elmahAppKey = Configuration["ElmahAppKey"];
	_elmahLogId = Configuration["ElmahLogId"];

	// Add framework services.
	services.AddMvc();
}

The nlog.config file can then be configured for the target with the elmah.io type. The application property is also set which is useful in elmah.io.

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      internalLogLevel="Warn"
      internalLogFile="C:\git\damienbod\AspNetCoreElmah\Logs\internal-nlog.txt">

  <extensions>
    <add assembly="NLog.Web.AspNetCore"/>
    <add assembly="Elmah.Io.NLog"/>    
  </extensions>

  
  <targets>
    <target name="elmahio" type="elmah.io" apiKey="API_KEY" logId="LOG_ID" application="AspNetCoreElmahUI"/>
    
    <target xsi:type="File" name="allfile" fileName="${var:configDir}\nlog-all.log"
                layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|TraceId=${aspnet-traceidentifier}| url: ${aspnet-request-url} | action: ${aspnet-mvc-action} |${message} ${exception}" />

    <target xsi:type="File" name="ownFile-web" fileName="${var:configDir}\nlog-own.log"
             layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|TraceId=${aspnet-traceidentifier}| url: ${aspnet-request-url} | action: ${aspnet-mvc-action} | ${message} ${exception}" />

    <target xsi:type="Null" name="blackhole" />

  </targets>

  <rules>
    <logger name="*" minlevel="Warn" writeTo="elmahio" />
    <!--All logs, including from Microsoft-->
    <logger name="*" minlevel="Trace" writeTo="allfile" />

    <!--Skip Microsoft logs and so log only own logs-->
    <logger name="Microsoft.*" minlevel="Trace" writeTo="blackhole" final="true" />
    <logger name="*" minlevel="Trace" writeTo="ownFile-web" />
  </rules>
</nlog>


The About method in the AspNetCoreElmahUI application calls the AspNetCoreElmah API method which throws the dummy exception, so exceptions are sent from both applications.

public async Task<IActionResult> About()
{
	_logger.LogInformation("HomeController About called");
	// throws exception
	HttpClient _client = new HttpClient();
	var response = await _client.GetAsync("http://localhost:37209/api/values/1");
	response.EnsureSuccessStatusCode();
	var responseString = System.Text.Encoding.UTF8.GetString(
		await response.Content.ReadAsByteArrayAsync()
	);
	ViewData["Message"] = "Your application description page.";

	return View();
}

Now both applications can be started, and the errors can be viewed in the elmah.io dashboard.

When you open the dashboard in elmah.io and access your logs, you can view the exceptions.

Here’s the log sent from the AspNetCoreElmah application.

Here’s the log sent from the AspNetCoreElmahUI application using NLog with Elmah.Io.


Links

https://elmah.io/

https://github.com/elmahio/elmah.io.nlog

http://nlog-project.org/



Andrew Lock: What is the NETStandard.Library metapackage?


In my last post, I took a quick look at the Microsoft.AspNetCore meta package. One of the libraries referenced by the package, is the NETStandard.Library NuGet package. In this post I take a quick look at this package and what it contains.

If you're reading this post, you have hopefully already heard of .NET Standard. It acts as an interface to .NET platforms, and aims to define a unified set of APIs that those platforms must implement. It is the spiritual successor to PCLs, and allows you to target .NET Framework, .NET Core, and other .NET platforms with the same library code base.

The NETStandard.Library metapackage references a set of NuGet packages that define the .NET Standard library. Like the Microsoft.AspNetCore package from my last post, the package does not contain dlls itself, but rather references a number of other packages, hence the name metapackage. Depending on the target platform of your project, different packages will be added to the project, in line with the appropriate version of .NET Standard the platform implements.

For example, the .NET Standard 1.3 dependencies for the NETStandard.Library package include the System.Security.Cryptography.X509Certificates package, but this does not appear for the 1.0, 1.1 or 1.2 target platforms. You can also see this on the nuget.org web page for the package.

It's worth noting that the NETStandard.Library package will typically be referenced by projects, though not by libraries. It's also worth noting that the version number of the package does not correspond to the version of .NET Standard, it is just the package version.

So even if your project is targeting .NET Standard version 1.3 (or multi-targeting), you can still use the latest NETStandard.Library package version (1.6.1 at the time of writing). The package itself is versioned primarily because it also contains various tooling support such as the list of .NET Standard versions.

It's also worth bearing in mind that the NETStandard.Library is essentially only an API definition, it does not contain the actual implementation itself - that comes from the underlying platform that implements the standard, such as the .NET Framework or .NET Core.

If you download one of the packages referenced in the NETStandard.Library package, System.Collections for example, and open up the NuGet package as before, you'll see there's a lot more to it than the Microsoft.AspNetCore metapackage. In particular, there's a lib folder and a ref folder:


In a typical NuGet package, lib is where the actual dlls for the package would live. However, if we do a search for all the files in the lib folder, you can see that there aren't actually any dlls, just a whole load of empty placeholder files called _._ :


So if there aren't any dlls in here, where are they? Taking a look through the ref folder you find a similar thing - mostly _._ placeholders. However, that's not entirely the case. The netstandard1.0 and netstandard1.3 folders do contain a dll (and a load of xml metadata files):


But look at the size of that System.Collections.dll - only 42kb! Remember, the NETStandard.Library only includes reference assemblies, not the actual implementations. The implementation comes from the final platform you target; for example .NET Framework 4.6.1, .NET Core or Mono etc. The reference dlls just define the various APIs that these platforms must expose for a given version of .NET Standard.

You can see this for yourself by decompiling the contained dll using something like ILSpy. If you do that, you can see what looks like the source code for System.Collections, but without any method bodies, showing that this really is just a reference assembly.
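To give a flavour, a decompiled reference assembly is roughly this shape - API surface only, with no implementations (an illustrative sketch, not the exact decompiled output):

namespace System.Collections.Generic
{
    public class List<T>
    {
        public List() { }
        public int Count { get { throw null; } }
        public void Add(T item) { throw null; }
        public bool Remove(T item) { throw null; }
        // ...the rest of the API surface, with empty or throwing bodies
    }
}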


These placeholder assemblies are a key part of the .NET Standard infrastructure. They provide concrete APIs against which you can compile your projects, without tying you to a specific implementation (i.e. .NET Framework or .NET Core).

Final thoughts

If this all seems confusing and convoluted, that's because it is! It doesn't help that every time you think you've got your head around it, things have moved on, or are being changed or improved…

Having said that, most of this is more detail than you'll need. Generally, it's enough to understand the broad concept of .NET Standard, and the fact that it allows you to share code between multiple platforms.

There's a whole host of bits I haven't gone into, such as type forwarding, so if you want to get further into the details, and really try to understand what's going on, I suggest checking out the links below. In particular, I highly recommend the video series by Immo Landwerth on the subject.

Of course, when .NET Standard 2.0 is out, all this will change again, so brace yourself!


Anuraj Parameswaran: .editorconfig support in Visual Studio 2017

This post is about .editorconfig support in Visual Studio 2017. EditorConfig helps developers define and maintain consistent coding styles between different editors and IDEs. As part of the productivity improvements in Visual Studio, Microsoft introduced support for the .editorconfig file in Visual Studio 2017.


Anuraj Parameswaran: Live Unit Testing in Visual Studio 2017

This post is about Live Unit Testing in Visual Studio 2017. With VS2017, Microsoft released Live Unit Testing. Live Unit Testing automatically runs the impacted unit tests in the background as you edit code, and visualizes the results and code coverage, live in the editor.


Anuraj Parameswaran: Create a dotnet new project template in dotnet core

This post is about creating project template for the dotnet new command. As part of the new dotnet command, now you can create Empty Web app, API app, MS Test and Solution file as part of dotnet new command. This post is about creating a Web API template with Swagger support.


Damien Bowden: Testing an ASP.NET Core MVC Protobuf API using HTTPClient and xUnit

The article shows how to test an ASP.NET Core MVC API using xUnit and an HttpClient, with Protobuf used for the content formatters.

Code: https://github.com/damienbod/AspNetMvc6ProtobufFormatters

Posts in this series:

The test project tests the ASP.NET Core API produced here. xUnit is used as a test framework. The xUnit dependencies can be added to the test project using NuGet in Visual Studio 2017 as well as the Microsoft.AspNetCore.TestHost package. Microsoft provide nice docs about Integration testing ASP.NET Core.

When the NuGet packages have been added, you can view these in the csproj file, or install and update directly in this file. A reference to the project containing the API is also added to the test project.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
    <AssemblyName>AspNetCoreProtobuf.IntegrationTests</AssemblyName>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.0.0" />
    <PackageReference Include="xunit.runner.console" Version="2.2.0" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.2.0" />
    <PackageReference Include="xunit" Version="2.2.0" />
    <PackageReference Include="Microsoft.AspNetCore.TestHost" Version="1.1.1" />
    <PackageReference Include="protobuf-net" Version="2.1.0" />
    <PackageReference Include="xunit.runners" Version="2.0.0" />
  </ItemGroup>

  <ItemGroup>
    <ProjectReference Include="..\AspNetCoreProtobuf\AspNetCoreProtobuf.csproj" />
  </ItemGroup>

  <ItemGroup>
    <Service Include="{82a7f48d-3b50-4b1e-b82e-3ada8210c358}" />
  </ItemGroup>

</Project>

The TestServer is used to test the ASP.NET Core API. This is set up for all the API tests.

private readonly TestServer _server;
private readonly HttpClient _client;

public ProtobufApiTests()
{
	_server = new TestServer(
		new WebHostBuilder()
		.UseKestrel()
		.UseStartup<Startup>());
	_client = _server.CreateClient();
}

HTTP GET request test

The GetProtobufDataAndCheckProtobufContentTypeMediaType test sends an HTTP GET request to the test server, requesting the content as application/x-protobuf. The result is deserialized using protobuf, and the content type header and the expected result are checked.

[Fact]
public async Task GetProtobufDataAndCheckProtobufContentTypeMediaType()
{
	// Act
	_client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/x-protobuf"));
	var response = await _client.GetAsync("/api/values/1");
	response.EnsureSuccessStatusCode();

	var result = ProtoBuf.Serializer.Deserialize<ProtobufModelDto>(await response.Content.ReadAsStreamAsync());

	// Assert
	Assert.Equal("application/x-protobuf", response.Content.Headers.ContentType.MediaType );
	Assert.Equal("My first MVC 6 Protobuf service", result.StringValue);
}
		

HTTP POST request test

The PostProtobufData test method sends an HTTP POST request to the test server with protobuf-serialized content. The status code of the response is validated.

[Fact]
public void PostProtobufData()
{
	// Set the Accept header so the response is returned as Protobuf
	_client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/x-protobuf"));

	MemoryStream stream = new MemoryStream();
	ProtoBuf.Serializer.Serialize<ProtobufModelDto>(stream, new ProtobufModelDto
	{
		Id = 2,
		Name = "lovely data",
		StringValue = "amazing this ah"
	});

	// Reset the stream position so the serialized bytes are actually sent in the request body
	stream.Position = 0;
	HttpContent data = new StreamContent(stream);

	// HTTP POST with Protobuf Request Body
	var responseForPost = _client.PostAsync("api/Values", data).Result;

	Assert.True(responseForPost.IsSuccessStatusCode);
}

The tests can be executed or debugged in Visual Studio using the Test Explorer.

The tests can also be run with dotnet test from the command line.

C:\git\damienbod\AspNetCoreProtobufFormatters\src\AspNetCoreProtobuf.IntegrationTests>dotnet test
Build started, please wait...
Build completed.

Test run for C:\git\damienbod\AspNetCoreProtobufFormatters\src\AspNetCoreProtobuf.IntegrationTests\bin\Debug\netcoreapp1.1\AspNetCoreProtobuf.IntegrationTests.dll(.NETCoreApp,Version=v1.1)
Microsoft (R) Test Execution Command Line Tool Version 15.0.0.0
Copyright (c) Microsoft Corporation. All rights reserved.

Starting test execution, please wait...
[xUnit.net 00:00:00.5821132]   Discovering: AspNetCoreProtobuf.IntegrationTests
[xUnit.net 00:00:00.6841246]   Discovered:  AspNetCoreProtobuf.IntegrationTests
[xUnit.net 00:00:00.7273897]   Starting:    AspNetCoreProtobuf.IntegrationTests
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
      Request starting HTTP/1.1 GET http://
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[1]
      Executing action method AspNetCoreProtobuf.Controllers.ValuesController.Post (AspNetCoreProtobuf) with arguments (AspNetCoreProtobuf.Model.ProtobufModelDto) - ModelState is Valid
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[2]
      Executed action AspNetCoreProtobuf.Controllers.ValuesController.Post (AspNetCoreProtobuf) in 137.2264ms
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
      Request finished in 346.8796ms 200
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
      Request starting HTTP/1.1 GET http://
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[1]
      Executing action method AspNetCoreProtobuf.Controllers.ValuesController.Get (AspNetCoreProtobuf) with arguments (1) - ModelState is Valid
info: Microsoft.AspNetCore.Mvc.Internal.ObjectResultExecutor[1]
      Executing ObjectResult, writing value Microsoft.AspNetCore.Mvc.ControllerContext.
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[2]
      Executed action AspNetCoreProtobuf.Controllers.ValuesController.Get (AspNetCoreProtobuf) in 39.0599ms
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
      Request finished in 43.2983ms 200 application/x-protobuf
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
      Request starting HTTP/1.1 GET http://
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[1]
      Executing action method AspNetCoreProtobuf.Controllers.ValuesController.Get (AspNetCoreProtobuf) with arguments (1) - ModelState is Valid
info: Microsoft.AspNetCore.Mvc.Internal.ObjectResultExecutor[1]
      Executing ObjectResult, writing value Microsoft.AspNetCore.Mvc.ControllerContext.
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[2]
      Executed action AspNetCoreProtobuf.Controllers.ValuesController.Get (AspNetCoreProtobuf) in 1.4974ms
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
      Request finished in 3.6715ms 200 application/x-protobuf
[xUnit.net 00:00:01.5669956]   Finished:    AspNetCoreProtobuf.IntegrationTests

Total tests: 3. Passed: 3. Failed: 0. Skipped: 0.
Test Run Successful.
Test execution time: 2.7499 Seconds

AppVeyor CI

The project can then be connected to any build server. AppVeyor is an easy one to set up and works well with GitHub projects. Create an account and select the GitHub repository to build. Add an appveyor.yml file to the root of your project and configure it as required. Docs can be found here:
https://www.appveyor.com/docs/build-configuration/

image: Visual Studio 2017
init:
  - git config --global core.autocrlf true
install:
  - ECHO %APPVEYOR_BUILD_WORKER_IMAGE%
  - dotnet --version
  - dotnet restore
build_script:
  - dotnet build
before_build:
  - appveyor-retry dotnet restore -v Minimal
test_script:
  - cd src/AspNetCoreProtobuf.IntegrationTests
  - dotnet test

The AppVeyor badges can then be used in your project's md file.

|                           | Build                                                                                                                                                             |       
| ------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| .NET Core                 | [![Build status](https://ci.appveyor.com/api/projects/status/ihtrq4u81rtsty9k?svg=true)](https://ci.appveyor.com/project/damienbod/aspnetmvc6protobufformatters)  |

This would then be displayed in GitHub as follows:

Links

https://developers.google.com/protocol-buffers/docs/csharptutorial

http://www.stackoverflow.com/questions/7774155/deserialize-long-string-with-protobuf-for-c-sharp-doesnt-work-properly-for-me

https://xunit.github.io/

https://www.appveyor.com/docs/build-configuration/

https://www.nuget.org/packages/protobuf-net/

https://github.com/mgravell/protobuf-net

http://teelahti.fi/using-google-proto3-with-aspnet-mvc/

https://github.com/damienpontifex/ProtobufFormatter/tree/master/src/ProtobufFormatter

http://www.strathweb.com/2014/11/formatters-asp-net-mvc-6/

http://blogs.msdn.com/b/webdev/archive/2014/11/24/content-negotiation-in-mvc-5-or-how-can-i-just-write-json.aspx

https://github.com/WebApiContrib/WebApiContrib.Formatting.ProtoBuf

https://damienbod.wordpress.com/2014/01/11/using-protobuf-net-media-formatter-with-web-api-2/

https://docs.microsoft.com/en-us/aspnet/core/testing/integration-testing



Andrew Lock: What is the Microsoft.AspNetCore metapackage?

What is the Microsoft.AspNetCore metapackage?

One of the packages added to many ASP.NET Core templates is Microsoft.AspNetCore. This post takes a quick look at that package and what it contains.

The Microsoft.AspNetCore package is often included as one of the standard project dependencies when starting a new ASP.NET Core project. It provides many of the packages necessary to stand up a basic ASP.NET Core application.

However, this package does not contain any actual code or dlls itself. Instead, it simply contains a series of dependencies on other packages. By adding the package to your project, you bring in all the packages it depends on, along with their dlls. This is called a metapackage.

You can see this for yourself by downloading the package and taking a look inside. Nupkg files are essentially just zip files, so you can download them and open them up. Just change the file extension to zip and open in Windows Explorer, or open them with your archive browser of choice:

What is the Microsoft.AspNetCore metapackage?

As you can see, there really aren't a lot of files inside. The main one is the Microsoft.AspNetCore.nuspec. This contains the metadata details for the package, including all the package dependencies (you can also see the dependencies listed on nuget.org).

Specifically, the packages it lists are:

  • Microsoft.AspNetCore.Diagnostics
  • Microsoft.AspNetCore.Hosting
  • Microsoft.AspNetCore.Routing
  • Microsoft.AspNetCore.Server.IISIntegration
  • Microsoft.AspNetCore.Server.Kestrel
  • Microsoft.Extensions.Configuration.EnvironmentVariables
  • Microsoft.Extensions.Configuration.FileExtensions
  • Microsoft.Extensions.Configuration.Json
  • Microsoft.Extensions.Logging
  • Microsoft.Extensions.Logging.Console
  • Microsoft.Extensions.Options.ConfigurationExtensions
  • NETStandard.Library

Which versions of these packages you will receive depends on which version of the Microsoft.AspNetCore package you install. If you are working on the 'LTS' release version of ASP.NET Core, you will (currently) need the 1.0.4 version of the package. If on the 'Current' release version, you will want version 1.1.1.

These dependencies provide the basic libraries for setting up an ASP.NET Core server that uses the Kestrel web server and includes IIS integration.

In terms of the application itself, with this package alone you can load application settings and environment variables into configuration, use the IOptions interface, and configure logging to the console.

For middleware, only the Microsoft.AspNetCore.Diagnostics package is included, which would allow adding middleware such as the ExceptionHandlerMiddleware, the DeveloperExceptionPageMiddleware and the StatusCodePagesMiddleware.
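
As a rough sketch (a hypothetical Startup class, assuming only the dependencies listed above), that is enough to wire up configuration, console logging and the developer exception page, but nothing like MVC:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

public class Startup
{
    public IConfigurationRoot Configuration { get; }

    public Startup(IHostingEnvironment env)
    {
        // Microsoft.Extensions.Configuration.* packages (JSON file and environment variables)
        Configuration = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: true)
            .AddEnvironmentVariables()
            .Build();
    }

    public void ConfigureServices(IServiceCollection services)
    {
        // Microsoft.Extensions.Options.ConfigurationExtensions enables IOptions<T> binding
        services.AddOptions();
    }

    public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
    {
        // Microsoft.Extensions.Logging.Console
        loggerFactory.AddConsole();

        // Microsoft.AspNetCore.Diagnostics
        app.UseDeveloperExceptionPage();

        // Terminal middleware - no MVC is available from the metapackage alone
        app.Run(async context => await context.Response.WriteAsync("Hello World!"));
    }
}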

As you can see, the meta package is generally not sufficient by itself to build a complete application. You would typically use at least the Microsoft.AspNetCore.Mvc or Microsoft.AspNetCore.MvcCore package to add MVC capabilities to your application, and would often need a variety of other packages.

The metapackage is a trade-off: it tries to provide a useful collection of packages that is applicable to a wide range of applications, without bringing in a whole load of dependencies that projects don't need. It mostly just serves to reduce the number of explicit dependencies you need to add to your .csproj file. Obviously, as the metapackage takes dependencies on other packages it doesn't reduce the actual dependencies of your project, just how many of them are listed in the project file.

One of the packages on which Microsoft.AspNetCore depends is the NETStandard.Library package, which is itself a metapackage. As that package is a bit complex, I'll discuss it in more detail in a follow-up post.


Anuraj Parameswaran: What is new in Visual Studio 2017 for web developers?

This post is about new features of Visual Studio 2017 for Web Developers. The new features include ASP.NET Core tooling, csproj support, a migration option from project.json to csproj, client-side debugging improvements, etc.


Anuraj Parameswaran: Create an offline installer for Visual Studio 2017

This post is about building an offline installer for Visual Studio 2017. On March 7th 2017, Microsoft introduced Visual Studio 2017. Unlike earlier versions of Visual Studio, Microsoft doesn't offer an ISO image. This post will help you to install Visual Studio when you're offline.


Andrew Lock: Supporting both LTS and Current releases for ASP.NET Core

Supporting both LTS and Current releases for ASP.NET Core

Some time ago, I wrote a post on how to use custom middleware to set various security headers in an ASP.NET Core application. This formed the basis for a small package on GitHub and NuGet that does just that: it adds standard headers to your responses like X-Frame-Options and X-XSS-Protection.

I recently updated the package to include the Referrer-Policy header, after seeing Scott Helme's great post on it. When I was doing so, I was reminded of a Pull Request made some time ago to the repo, that I had completely forgotten about (oops 😳):

Supporting both LTS and Current releases for ASP.NET Core

As you can see, this PR was upgrading the packages used in the package to the 'Current' Release of ASP.NET Core at the time. Discovering this again got me thinking about the new versioning approach to .NET Core, and how to support both versions of the framework as a library author.

The two tracks of .NET Core

.NET Core (and hence, ASP.NET Core) currently has two different release cadences. On the one hand, there is the Long Term Support (LTS) branch, which has a slow release cycle, and will only see bug fixes over its lifetime, no extra features. Only when a new (major) version of .NET Core ships will you see new features. The plus sides to using LTS are that it will be the most stable, and is supported by Microsoft for three years.

On the other hand, there is the Current branch, which is updated at a much faster cadence. This branch does see features added with subsequent releases, but you have to make sure you keep up with the releases to remain supported. Each release is only supported for 3 months once the next version is released, so you have to be sure to update your apps in a timely fashion.

You can think of the LTS branch as a sub-set of the Current branch, though this is not strictly true as patch releases are made to fix bugs in the LTS branch. So for the (hypothetical) Current branch releases:

  • 1.0.0 - First LTS release
  • 1.1.0
  • 1.1.1
  • 1.2.0
  • 2.0.0 - Second LTS release
  • 2.1.0

only the major versions will be LTS releases.

Package versioning

One of the complexities introduced by adopting the more modular approach to development taken in .NET Core, where everything is delivered as individual packages, is the fact that the individual libraries that go into a .NET Core release don't necessarily have the same package version as the release version.

I looked at this in a post about a patch release to the LTS branch (version 1.0.3). The upshot is that the actual packages that go into a release can have a variety of different version numbers. For example, in the 1.0.3 release, the following packages were all current:

"Microsoft.ApplicationInsights.AspNetCore" : "1.0.2",
"Microsoft.AspNet.Identity.AspNetCoreCompat" : "0.1.1",
"Microsoft.AspNet.WebApi.Client" : "5.2.2",
"Microsoft.AspNetCore.Antiforgery" : "1.0.2",
"Microsoft.Extensions.SecretManager.Tools" : "1.0.0-preview4-final",
"Microsoft.Extensions.WebEncoders" : "1.0.1",
"Microsoft.IdentityModel.Protocols.OpenIdConnect" : "2.0.0",

It's clear that versioning is a complex beast...

Picking package versions for a library

With this in mind, I was faced with deciding whether to upgrade the package versions of the various ASP.NET Core packages that the security headers library depends on. Specifically, these were originally:

"Microsoft.Extensions.Options": "1.0.0",
"Microsoft.Extensions.DependencyInjection.Abstractions": "1.0.0",
"Microsoft.AspNetCore.Http.Abstractions": "1.0.0"

The library itself uses some of the ASP.NET Core abstractions around dependency injection and IOptions, hence the dependencies on these libraries. However, the versions of the packages it was using were all 1.0.0. These all correspond to the first release on the LTS branch. The question was whether to upgrade these packages to a newer LTS version, to upgrade them to the latest Current branch package versions, or to just leave them as they were.

To be clear, the library itself does not depend on anything that is specific to any particular package version; it is using the types defined in the first LTS release and nothing from later releases.

The previous pull request I mentioned was to update the packages to match those on the Current release branch. My hesitation with doing so is that this could cause problems for users who are currently sticking to the LTS release branch, as I'll explain shortly.

NuGet dependency resolution

The problem all stems from the way NuGet resolves dependencies for packages, where different versions of a package are referenced by others. This is a complex problem, and there are some great docs covering it on the website which are well worth a read, but I'll try and explain the basic problem here.

Imagine you have two packages that provide you some middleware, say my SecurityHeadersMiddleware package, and the HttpCacheHeaders package (check it out on GitHub!). Both of these packages depend on the Microsoft.AspNetCore.Http.Abstractions package. Just considering these packages and your application, the dependency chain looks something like the following:

Supporting both LTS and Current releases for ASP.NET Core

Now, if both of the middleware packages depend on the same LTS version of Microsoft.AspNetCore.Http.Abstractions then there is no problem, NuGet knows which version to restore and everything is great. In reality though, the chances of that are relatively slim.

So what happens if I have updated the SecurityHeadersMiddleware package to depend on the Current release branch, say version 1.1.0? This is where the different NuGet rules kick in. (Honestly, check out the docs!)

Package dependencies are normally specified as a minimum version, so I would be saying I need at least version 1.1.0. NuGet tries to satisfy all the requirements, so if one package requires at least 1.0.0 and another requires at least 1.1.0, then it knows it can use the 1.1.0 package to satisfy all the requirements.

However, it's not quite as simple as that. NuGet uses a rule whereby the first package to specify a version for a package wins. So the package 'closest' to your application will 'win' when it comes to picking which version of a package is installed.

For example, in the diagram below, even though the HTTP Cache Headers package specifies a higher minimum version of Microsoft.AspNetCore.Http.Abstractions than the SecurityHeadersMiddleware does, the lower version of 1.0.0 will be chosen, as it is further to the left in the graph.

Supporting both LTS and Current releases for ASP.NET Core

This behaviour can obviously cause problems, as it means packages could end up using an older version of a package than they specify as a dependency! Obviously it can also end up using a newer version of a package than it might expect. This theoretically should not be a problem, but in some cases it can cause issues.

Handling dependency graphs is tricky stuff…

Implications for users

So all this leads me back to my initial question - should I upgrade the package versions of the NetEscapades.AspNetCore.SecurityHeaders package? What implications would that have for people's code?

An important point to be aware of when using the Microsoft ASP.NET Core packages is that you must use all your packages on the same version - all LTS or all Current.

If I upgraded the package to use Current branch packages, and you used it in a project on the LTS branch, then the NuGet graph resolution rules could mean that you ended up using Current release version of the packages I referenced. That is not supported and could result in weird bugs.

For that reason, I decided to stay on the LTS packages. Now, having said that, if you use the package in a Current release project, it could technically be possible for this to result in a downgrade of the packages I reference. Also not good…

Luckily, if you get a downgrade, you will see a warning/error when you do a dotnet restore. You can easily fix this by adding an explicit reference to the offending package in your project. For example, if you had a warning about a downgrade from 1.1.0 to 1.0.0 with the Microsoft.AspNetCore.Http.Abstractions package, you could update your dependencies to include it explicitly:

{
    "NetEscapades.AspNetCore.SecurityHeaders" : "0.3.0", //<- depends on v1.0.0 ... 
    "Microsoft.AspNetCore.Http.Abstractions"  : "1.1.0"  //<- but v1.1.0 will win 
}

The explicit reference puts the dependent package further left on the dependency graph, and so that will be preferentially selected - version 1.1.0 will be installed even though NetEscapades.AspNetCore.SecurityHeaders depends on version 1.0.0.

Creating multiple versions

So does this approach make sense? For simple packages like my middleware, I think so. I don't need any of the features from later releases and it seems the easiest approach to manage.

Another obvious alternative would be to keep two concurrent versions of the package, one for the LTS branch, and another for the Current branch. After all, that's what happens with the actual packages that make up ASP.NET Core itself. I could have a version 1.0.0 for the LTS stream, and a version 1.1.0 for the Current stream.

The problem with that in my eyes, is that you are combining two separate streams - which are logically distinct - into a single version stream. It's not obvious that they are distinct, and things like the GUI for NuGet package manager in Visual Studio would not know they are distinct, so would always be prompting you to upgrade the LTS version packages.

Another alternative which fixes this might be to have two separate packages, say NetEscapades.AspNetCore.SecurityHeaders.LTS and NetEscapades.AspNetCore.SecurityHeaders.Current. That would play nicer in terms of keeping the streams separate, but just adds such an overhead to managing and releasing the project, that it doesn't seem worth the hassle.

Conclusion

So to summarise, I think I'm going to stick with targeting the LTS version of packages in any libraries on GitHub, but I'd be interested to hear what other people think. Different maintainers seem to be taking different tacks, so I'm not sure there's an obvious best practice yet. If there is, and I've just missed it, do let me know!


Damien Bowden: .NET Core logging to MySQL using NLog

This article shows how to log to MySQL in a .NET Core application using NLog.

Code: https://github.com/damienbod/AspNetCoreNlog

NLog posts in this series:

  1. ASP.NET Core logging with NLog and Microsoft SQL Server
  2. ASP.NET Core logging with NLog and Elasticsearch
  3. Settings the NLog database connection string in the ASP.NET Core appsettings.json
  4. ASP.NET Core, logging to MySQL using NLog
  5. .NET Core logging with NLog and PostgreSQL

Set up the MySQL database

MySQL Workbench can be used to add the schema ‘nlog’ which will be used for logging to the MySQL database. The user ‘damienbod’ is also required, which must match the defined user in the connection string. If you configure the MySQL database differently, then you need to change the connection string in the nlog.config file.

nlogmysql_02

You also need to create a log table; the following script can be used. If you decide to use NLog.Web in an ASP.NET Core application and add some extra properties or fields to the logs, then this script needs to be extended, along with the database target in the nlog.config.

CREATE TABLE `log` (
  `Id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `Application` varchar(50) DEFAULT NULL,
  `Logged` datetime DEFAULT NULL,
  `Level` varchar(50) DEFAULT NULL,
  `Message` varchar(512) DEFAULT NULL,
  `Logger` varchar(250) DEFAULT NULL,
  `Callsite` varchar(512) DEFAULT NULL,
  `Exception` varchar(512) DEFAULT NULL,
  PRIMARY KEY (`Id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;

Add NLog and the MySQL provider to the project.

The MySql.Data pre-release NuGet package can be used to log to MySQL. Add this to your project.

nlogmysql

The NLog.Web.AspNetCore package also needs to be added, or just NLog if you do not require any web extensions.

nlog.config

The database target needs to be configured to log to MySQL. The database provider is set to use the MySql.Data package which was downloaded using NuGet. If you're using a different MySQL provider, this needs to be changed. The connection string is also set here, matching what was configured previously in the MySQL database using Workbench. If you read the connection string from the app settings, an NLog variable can be used here.

  <target name="database" xsi:type="Database"
              dbProvider="MySql.Data.MySqlClient.MySqlConnection, MySql.Data"
              connectionString="server=localhost;Database=nlog;user id=damienbod;password=1234"
             >

          <commandText>
              insert into nlog.log (
              Application, Logged, Level, Message,
              Logger, CallSite, Exception
              ) values (
              @Application, @Logged, @Level, @Message,
              @Logger, @Callsite, @Exception
              );
          </commandText>

          <parameter name="@application" layout="AspNetCoreNlog" />
          <parameter name="@logged" layout="${date}" />
          <parameter name="@level" layout="${level}" />
          <parameter name="@message" layout="${message}" />

          <parameter name="@logger" layout="${logger}" />
          <parameter name="@callSite" layout="${callsite:filename=true}" />
          <parameter name="@exception" layout="${exception:tostring}" />
      </target>

NLog can then be used in the application.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using NLog;
using NLog.Targets;

namespace ConsoleNLog
{
    public class Program
    {
        public static void Main(string[] args)
        {

            LogManager.Configuration.Variables["configDir"] = "C:\\git\\damienbod\\AspNetCoreNlog\\Logs";

            var logger = LogManager.GetLogger("console");
            logger.Warn("console logging is great");

            Console.WriteLine("log sent");
            Console.ReadKey();
        }
    }
}

Full nlog.config file:
https://github.com/damienbod/AspNetCoreNlog/blob/master/src/ConsoleNLogMySQL/nlog.config

Links

https://github.com/nlog/NLog/wiki/Database-target

https://github.com/NLog/NLog.Extensions.Logging

https://github.com/NLog

https://github.com/NLog/NLog/blob/38aef000f916bd5ffd8b80a5576afa2423192e84/examples/targets/Configuration%20API/Database/MSSQL/Example.cs

https://docs.asp.net/en/latest/fundamentals/logging.html

https://msdn.microsoft.com/en-us/magazine/mt694089.aspx

https://docs.asp.net/en/latest/fundamentals/configuration.html



Andrew Lock: Using routing DataTokens in ASP.NET Core

Using routing DataTokens in ASP.NET Core

ASP.NET Core uses routing to map incoming URLs to controllers and action methods, and also to generate URLs when you provide route parameters.

One of the lesser known features of the routing infrastructure is data tokens. These are additional values that can be associated with a particular route, but don't affect the process of URL matching or generation at all.

This post takes a brief look at data tokens and how to use them in your applications for providing supplementary information about a route, but generally speaking I recommend avoiding them if possible.

How to add data tokens to a route

Data tokens are specified when you define your global convention-based routes, in the call to UseMvc. For example, the following route adds a data token called Name to the default route:

app.UseMvc(routes =>  
{
    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}",
        defaults: null,
        constraints: null,
        dataTokens: new { Name = "default_route" });
});

This route just adds the standard default conventional route to the route collection but it also specifies the Name data token. Note that due to the available overloads, you have to explicitly provide values for defaults and constraints.

This route is functionally identical to the MapRoute version without dataTokens; the data tokens do not modify the way URLs are routed at all.

Accessing the data tokens from an action method

Whenever a route is used to map an incoming URL to an action method, the data tokens associated with the route are set. These can be accessed from the RouteData.DataTokens property on the Controller base class. This exposes the values as a RouteValueDictionary so you can access them by name. For example, you could retrieve and display the above data token as follows:

public class ProductController : Controller  
{
    public string Index()
    {
        var nameTokenValue = (string)RouteData.DataTokens["Name"];
        return nameTokenValue;
    }
}

As you can see, the data token needs to be cast to the appropriate Type it was defined as, in this case string.

This behaviour is different to that of route parameter values. Route values are stored as strings, so the values need to be convertible to a string. Data tokens don't have this restriction, so you can store values of any type you like and just cast when retrieving them.

Using data tokens to identify the selected route

So what can data tokens actually be used for? Well, fundamentally they are designed to help you associate state data with a specific route. The values aren't dynamic, so they don't change depending on the URL; instead, they are fixed for a given route.

This means you can use data tokens to determine which route was selected during routing. This may be useful if you have multiple routes that map to the same action method, and you need to know which route was selected.

Consider the following couple of routes. They are for two different URLs, but they match to the same action method, HomeController.Index:

app.UseMvc(routes =>  
{
    routes.MapRoute(
        name: "otherRoute",
        template: "fancy-other-route",
        defaults: new { controller = "Home", action = "Index" },
        constraints: null,
        dataTokens: new { routeOrigin = new RouteOrigin { Name = "fancy route" } });

    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}",
        defaults: null,
        constraints: null,
        dataTokens: new { routeOrigin = new RouteOrigin { Name = "default route" } });
});

Both routes set a data token of type RouteOrigin which is just a simple class, to demonstrate that data tokens can be complex types:

public class RouteOrigin  
{
    public string Name { get; set; }
}

So, if we make a request to the app at the URLs /, /Home, or /Home/Index, a data token is set with a Name of "default route". If we make a request to /fancy-other-route, then the same action method will be executed, but the data token will have the value "fancy route". To easily visualise these values, I created the HomeController as follows:

public class HomeController : Controller  
{
    public string Index()
    {
        var origin = (RouteOrigin)RouteData.DataTokens["routeOrigin"];
        return $"This is the Home controller.\nThe route data is '{origin.Name}'";
    }
}

If we hit the app at the two different paths, you can easily see the different data token values:

Using routing DataTokens in ASP.NET Core

This works for our global convention-based routes, but what if you are using attribute-based routing? How do we use data tokens then?

How to use data tokens with RouteAttributes?

The short answer is, you can't! You can use constraints and defaults when you define your routes using RouteAttributes by including them inline as part of the route template. But you can't define data tokens inline, so you can't use them with attribute routing.

The good news is that it really shouldn't be a problem. Attribute routing is often used when you are designing an API for consumption by various clients. It's good practice to have a well defined URL space when designing your APIs; that's one of the reasons attribute routing is suggested over conventional routing in this case.

A "well defined" URL space could mean a lot of things, but one of those would probably be not having multiple different URLs all executing the same action. If there's only one route that can be used to execute an action, then data tokens use their value. For example, the following API defines a route attribute for invoking the Get action.

public class InstrumentController  
{
    [HttpGet("/instruments")]
    public IList<string> Get()
    {
        return new List<string> { "Guitar", "Bass", "Drums" };
    }
}

Associating a data token with the route wouldn't give you any more information when this method is invoked. We know which route it came from, as there is only one possibility - the HttpGet Route attribute!

Note: If an action has a route attribute, it cannot be routed using conventional routing. That's how we know it's not routed from anywhere else.

When should I use data tokens?

I confess, I'm struggling with this section. Data tokens create a direct coupling between the routes and the action methods being executed. It seems like if your action methods are written in such a way as to depend on this route, you have bigger problems.

Also, the coupling is pretty insidious, as the data tokens are a hidden dependency that you have to know how to access. A more explicit approach might be to just set the values of appropriate route parameters.

For example, we could achieve virtually the same behaviour using explicit route parameters instead of data tokens. We could rewrite the routes as the following:

app.UseMvc(routes =>  
{
    routes.MapRoute(
        name: "otherRoute",
        template: "fancy-route-with-param",
        defaults: new
        {
            controller = "Home",
            action = "Other",
            routeOrigin = "fancy route"
        });

    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}",
        defaults: new { routeOrigin = "default" });
});

Here we are providing a route value for each route for the routeOrigin parameter. This will be explicitly bound to our action method if we define it like the following:

public class HomeController : Controller  
{
    public string Other(string routeOrigin)
    {
        return $"This is the Other action.\nThe route param is '{routeOrigin}'";
    }
}

We now have an explicit dependency on the routeOrigin parameter which is automatically populated for us:

Using routing DataTokens in ASP.NET Core

Now, I know this behaviour is not the same as when we used dataTokens. In this case, the routeOrigin parameter is actually bound using the normal model binding mechanism, and you can only use values that can be converted to/from strings. But personally, as I say, I don't really see a need for data tokens. Either the route value approach seems preferable, or alternatively straight dependency injection, depending on your requirements.

Do let me know in the comments if there's a use case I've missed here, as currently I can't really see it!


Damien Bowden: Implementing an Audit Trail using ASP.NET Core and Elasticsearch with NEST

This article shows how an audit trail can be implemented in ASP.NET Core which saves the audit documents to Elasticsearch using NEST.

Code: https://github.com/damienbod/AspNetCoreElasticsearchNestAuditTrail

Should I just use a logger?

Depends. If you just need to save requests, responses and application events, then a logger would be a better solution for this use case. I would use NLog as it provides everything you need, or could need, when working with ASP.NET Core.

If you only need to save business events/data of the application in the audit trail, then this solution could fit.

Using the Audit Trail

The audit trail is implemented so that it can be used easily. In the Startup class of the ASP.NET Core application, it is added to the application in the ConfigureServices method. The class library provides an extension method, AddAuditTrail, which can be configured as required. It takes two parameters: a bool which defines if a new index is created per day or per month to save the audit trail documents, and an int which defines how many of the previous indices are included in the alias used to select the audit trail items. If this is 0, all indices are included in the search.

Because the audit trail documents are grouped into different indices per day or per month, the amount of documents in each index can be controlled. Usually the application user requires only the last n days, or the last two months, of the audit trails, and so the search does not need to go through all the audit trail documents created since the application began. This makes it possible to optimize the data as required, or even remove or archive old, unused audit trail indices.

public void ConfigureServices(IServiceCollection services)
{
	var indexPerMonth = false;
	var amountOfPreviousIndicesUsedInAlias = 3;
	services.AddAuditTrail<CustomAuditTrailLog>(options => 
		options.UseSettings(indexPerMonth, amountOfPreviousIndicesUsedInAlias)
	);

	services.AddMvc();
}

The AddAuditTrail extension method requires a model definition which will be used to save or retrieve the documents in Elasticsearch. The model must implement the IAuditTrailLog interface. This interface just forces you to implement the Timestamp property, which is required for the audit logs.
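
As a sketch, assuming the interface contains nothing more than the Timestamp property described above, it looks something like this:

using System;

namespace AuditTrail.Model
{
    public interface IAuditTrailLog
    {
        // The only member the audit trail requires from your model
        DateTime Timestamp { get; set; }
    }
}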

The model can then be designed and defined as required. NEST attributes can be used for each of the properties in the model. Use the Keyword attribute if a text field should not be analyzed. If you must use enums, then save the string value and NOT the integer value to the persistence layer. If integer values are saved for the enums, they cannot be used without knowing what each integer value represents, making the data dependent on the code.

using AuditTrail.Model;
using Nest;
using System;

namespace AspNetCoreElasticsearchNestAuditTrail
{
    public class CustomAuditTrailLog : IAuditTrailLog
    {
        public CustomAuditTrailLog()
        {
            Timestamp = DateTime.UtcNow;
        }

        public DateTime Timestamp { get; set; }

        [Keyword]
        public string Action { get; set; }

        public string Log { get; set; }

        public string Origin { get; set; }

        public string User { get; set; }

        public string Extra { get; set; }
    }
}

The audit trail can then be used anywhere in the application. The IAuditTrailProvider can be added in the constructor of the class and an audit document can be created using the AddLog method.

private readonly IAuditTrailProvider<CustomAuditTrailLog> _auditTrailProvider;

public HomeController(IAuditTrailProvider<CustomAuditTrailLog> auditTrailProvider)
{
	_auditTrailProvider = auditTrailProvider;
}

public IActionResult Index()
{
	var auditTrailLog = new CustomAuditTrailLog()
	{
		User = User.ToString(),
		Origin = "HomeController:Index",
		Action = "Home GET",
		Log = "home page called doing something important enough to be added to the audit log.",
		Extra = "yep"
	};

	_auditTrailProvider.AddLog(auditTrailLog);
	return View();
}

The audit trail documents can be viewed using QueryAuditLogs, which supports paging and uses a simple query string search that accepts wildcards. The AuditTrailSearch method returns an MVC view with the audit trail items in the model.

public IActionResult AuditTrailSearch(string searchString, int skip, int amount)
{

	var auditTrailViewModel = new AuditTrailViewModel
	{
		Filter = searchString,
		Skip = skip,
		Size = amount
	};

	if (skip > 0 || amount > 0)
	{
		var paging = new AuditTrailPaging
		{
			Size = amount,
			Skip = skip
		};

		auditTrailViewModel.AuditTrailLogs = _auditTrailProvider.QueryAuditLogs(searchString, paging).ToList();
		
		return View(auditTrailViewModel);
	}

	auditTrailViewModel.AuditTrailLogs = _auditTrailProvider.QueryAuditLogs(searchString).ToList();
	return View(auditTrailViewModel);
}

How is the Audit Trail implemented?

The AuditTrailExtensions class implements the extension methods used to initialize the audit trail implementation. This class accepts the options and registers the interfaces and classes with the IoC container used by ASP.NET Core.

Generics are used so that any model class can be used to save the audit trail data, as this typically changes with each project or application. The type T must implement the IAuditTrailLog interface.

using System;
using Microsoft.Extensions.DependencyInjection.Extensions;
using Microsoft.Extensions.Localization;
using AuditTrail;
using AuditTrail.Model;

namespace Microsoft.Extensions.DependencyInjection
{
    public static class AuditTrailExtensions
    {
        public static IServiceCollection AddAuditTrail<T>(this IServiceCollection services) where T : class, IAuditTrailLog
        {
            if (services == null)
            {
                throw new ArgumentNullException(nameof(services));
            }

            return AddAuditTrail<T>(services, setupAction: null);
        }

        public static IServiceCollection AddAuditTrail<T>(
            this IServiceCollection services,
            Action<AuditTrailOptions> setupAction) where T : class, IAuditTrailLog
        {
            if (services == null)
            {
                throw new ArgumentNullException(nameof(services));
            }

            services.TryAdd(new ServiceDescriptor(
                typeof(IAuditTrailProvider<T>),
                typeof(AuditTrailProvider<T>),
                ServiceLifetime.Transient));

            if (setupAction != null)
            {
                services.Configure(setupAction);
            }
            return services;
        }
    }
}

When a new audit trail log is added, it uses the index defined in the _indexName field.

public void AddLog(T auditTrailLog)
{
	var index = new IndexName()
	{
		Name = _indexName
	};

	var indexRequest = new IndexRequest<T>(auditTrailLog, index);

	var response = _elasticClient.Index(indexRequest);
	if (!response.IsValid)
	{
		throw new ElasticsearchClientException("Add auditlog disaster!");
	}
}

The _indexName field is defined using the date pattern, either days or months depending on your options.

private const string _alias = "auditlog";
private string _indexName = $"{_alias}-{DateTime.UtcNow.ToString("yyyy-MM-dd")}";

index definition per month:

if(_options.Value.IndexPerMonth)
{
	_indexName = $"{_alias}-{DateTime.UtcNow.ToString("yyyy-MM")}";
}

When querying the audit trail logs, a simple query string search is used to find and select the audit trail documents required for the view. This is used so that wildcards can be used. The method accepts a query filter and paging options. If you search without any filter, all the documents defined in the alias (the included indices) are returned. By using the simple query, the filter can accept operators like AND and OR for the search.

public IEnumerable<T> QueryAuditLogs(string filter = "*", AuditTrailPaging auditTrailPaging = null)
{
	var from = 0;
	var size = 10;
	EnsureAlias();
	if(auditTrailPaging != null)
	{
		from = auditTrailPaging.Skip;
		size = auditTrailPaging.Size;
		if(size > 1000)
		{
			// max limit 1000 items
			size = 1000;
		}
	}
	var searchRequest = new SearchRequest<T>(Indices.Parse(_alias))
	{
		Size = size,
		From = from,
		Query = new QueryContainer(
			new SimpleQueryStringQuery
			{
				Query = filter
			}
		),
		Sort = new List<ISort>
			{
				new SortField { Field = TimestampField, Order = SortOrder.Descending }
			}
	};

	var searchResponse = _elasticClient.Search<T>(searchRequest);

	return searchResponse.Documents;
}
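
For example, a hypothetical call from the controller shown earlier could combine wildcards and operators like this:

// '+' acts as AND and '|' as OR in the Elasticsearch simple query string syntax,
// so this returns the latest 20 documents whose fields match both terms
var logs = _auditTrailProvider.QueryAuditLogs(
	"Home* + GET",
	new AuditTrailPaging { Skip = 0, Size = 20 }).ToList();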

The alias is also updated in the search query, if required. Depending on your configuration, the alias uses all the audit trail indices, or just the last n days or n months. This check uses a static field. If the alias needs to be updated, the new alias is created, which also deletes the old one.

private void EnsureAlias()
{
	if (_options.Value.IndexPerMonth)
	{
		if (aliasUpdated.Date < DateTime.UtcNow.AddMonths(-1).Date)
		{
			aliasUpdated = DateTime.UtcNow;
			CreateAlias();
		}
	}
	else
	{
		if (aliasUpdated.Date < DateTime.UtcNow.AddDays(-1).Date)
		{
			aliasUpdated = DateTime.UtcNow;
			CreateAlias();
		}
	}           
}

Here’s how the alias is created for all indices of the audit trail.

private void CreateAliasForAllIndices()
{
	var response = _elasticClient.AliasExists(new AliasExistsRequest(new Names(new List<string> { _alias })));
	if (!response.IsValid)
	{
		throw response.OriginalException;
	}

	if (response.Exists)
	{
		_elasticClient.DeleteAlias(new DeleteAliasRequest(Indices.Parse($"{_alias}-*"), _alias));
	}

	var responseCreateIndex = _elasticClient.PutAlias(new PutAliasRequest(Indices.Parse($"{_alias}-*"), _alias));
	if (!responseCreateIndex.IsValid)
	{
		throw response.OriginalException;
	}
}

The full AuditTrailProvider class which implements the audit trail.

using AuditTrail.Model;
using Elasticsearch.Net;
using Microsoft.Extensions.Options;
using Nest;
using Newtonsoft.Json.Converters;
using System;
using System.Collections.Generic;
using System.Linq;

namespace AuditTrail
{
    public class AuditTrailProvider<T> : IAuditTrailProvider<T> where T : class
    {
        private const string _alias = "auditlog";
        private string _indexName = $"{_alias}-{DateTime.UtcNow.ToString("yyyy-MM-dd")}";
        private static Field TimestampField = new Field("timestamp");
        private readonly IOptions<AuditTrailOptions> _options;

        private ElasticClient _elasticClient { get; }

        public AuditTrailProvider(
           IOptions<AuditTrailOptions> auditTrailOptions)
        {
            _options = auditTrailOptions ?? throw new ArgumentNullException(nameof(auditTrailOptions));

            if(_options.Value.IndexPerMonth)
            {
                _indexName = $"{_alias}-{DateTime.UtcNow.ToString("yyyy-MM")}";
            }

            var pool = new StaticConnectionPool(new List<Uri> { new Uri("http://localhost:9200") });
            var connectionSettings = new ConnectionSettings(
                pool,
                new HttpConnection(),
                new SerializerFactory((jsonSettings, nestSettings) => jsonSettings.Converters.Add(new StringEnumConverter())))
              .DisableDirectStreaming();

            _elasticClient = new ElasticClient(connectionSettings);
        }

        public void AddLog(T auditTrailLog)
        {
            var index = new IndexName()
            {
                Name = _indexName
            };

            var indexRequest = new IndexRequest<T>(auditTrailLog, index);

            var response = _elasticClient.Index(indexRequest);
            if (!response.IsValid)
            {
                throw new ElasticsearchClientException("Add auditlog disaster!");
            }
        }

        public long Count(string filter = "*")
        {
            EnsureAlias();
            var searchRequest = new SearchRequest<T>(Indices.Parse(_alias))
            {
                Size = 0,
                Query = new QueryContainer(
                    new SimpleQueryStringQuery
                    {
                        Query = filter
                    }
                ),
                Sort = new List<ISort>
                    {
                        new SortField { Field = TimestampField, Order = SortOrder.Descending }
                    }
            };

            var searchResponse = _elasticClient.Search<AuditTrailLog>(searchRequest);

            return searchResponse.Total;
        }

        public IEnumerable<T> QueryAuditLogs(string filter = "*", AuditTrailPaging auditTrailPaging = null)
        {
            var from = 0;
            var size = 10;
            EnsureAlias();
            if(auditTrailPaging != null)
            {
                from = auditTrailPaging.Skip;
                size = auditTrailPaging.Size;
                if(size > 1000)
                {
                    // max limit 1000 items
                    size = 1000;
                }
            }
            var searchRequest = new SearchRequest<T>(Indices.Parse(_alias))
            {
                Size = size,
                From = from,
                Query = new QueryContainer(
                    new SimpleQueryStringQuery
                    {
                        Query = filter
                    }
                ),
                Sort = new List<ISort>
                    {
                        new SortField { Field = TimestampField, Order = SortOrder.Descending }
                    }
            };

            var searchResponse = _elasticClient.Search<T>(searchRequest);

            return searchResponse.Documents;
        }

        private void CreateAliasForAllIndices()
        {
            var response = _elasticClient.AliasExists(new AliasExistsRequest(new Names(new List<string> { _alias })));
            if (!response.IsValid)
            {
                throw response.OriginalException;
            }

            if (response.Exists)
            {
                _elasticClient.DeleteAlias(new DeleteAliasRequest(Indices.Parse($"{_alias}-*"), _alias));
            }

            var responseCreateIndex = _elasticClient.PutAlias(new PutAliasRequest(Indices.Parse($"{_alias}-*"), _alias));
            if (!responseCreateIndex.IsValid)
            {
                throw response.OriginalException;
            }
        }

        private void CreateAlias()
        {
            if (_options.Value.AmountOfPreviousIndicesUsedInAlias > 0)
            {
                CreateAliasForLastNIndices(_options.Value.AmountOfPreviousIndicesUsedInAlias);
            }
            else
            {
                CreateAliasForAllIndices();
            }
        }

        private void CreateAliasForLastNIndices(int amount)
        {
            var responseCatIndices = _elasticClient.CatIndices(new CatIndicesRequest(Indices.Parse($"{_alias}-*")));
            var records = responseCatIndices.Records.ToList();
            List<string> indicesToAddToAlias = new List<string>();
            for(int i = amount;i>0;i--)
            {
                if (_options.Value.IndexPerMonth)
                {
                    var indexName = $"{_alias}-{DateTime.UtcNow.AddMonths(-i + 1).ToString("yyyy-MM")}";
                    if(records.Exists(t => t.Index == indexName))
                    {
                        indicesToAddToAlias.Add(indexName);
                    }
                }
                else
                {
                    var indexName = $"{_alias}-{DateTime.UtcNow.AddDays(-i + 1).ToString("yyyy-MM-dd")}";                   
                    if (records.Exists(t => t.Index == indexName))
                    {
                        indicesToAddToAlias.Add(indexName);
                    }
                }
            }

            var response = _elasticClient.AliasExists(new AliasExistsRequest(new Names(new List<string> { _alias })));
            if (!response.IsValid)
            {
                throw response.OriginalException;
            }

            if (response.Exists)
            {
                _elasticClient.DeleteAlias(new DeleteAliasRequest(Indices.Parse($"{_alias}-*"), _alias));
            }

            Indices multipleIndicesFromStringArray = indicesToAddToAlias.ToArray();
            var responseCreateIndex = _elasticClient.PutAlias(new PutAliasRequest(multipleIndicesFromStringArray, _alias));
            if (!responseCreateIndex.IsValid)
            {
                throw responseCreateIndex.OriginalException;
            }
        }

        private static DateTime aliasUpdated = DateTime.UtcNow.AddYears(-50);

        private void EnsureAlias()
        {
            if (_options.Value.IndexPerMonth)
            {
                if (aliasUpdated.Date < DateTime.UtcNow.AddMonths(-1).Date)
                {
                    aliasUpdated = DateTime.UtcNow;
                    CreateAlias();
                }
            }
            else
            {
                if (aliasUpdated.Date < DateTime.UtcNow.AddDays(-1).Date)
                {
                    aliasUpdated = DateTime.UtcNow;
                    CreateAlias();
                }
            }           
        }
    }
}

Testing the audit log

The created audit trails can be checked using the following HTTP GET requests:

Counts all the audit trail entries in the alias.
http://localhost:9200/auditlog/_count

Shows all the audit trail indices. You can count all the documents from the indices used in the alias and it must match the count from the alias.
http://localhost:9200/_cat/indices/auditlog*

You can also start the application and the AuditTrail logs can be displayed in the Audit Trail logs MVC view.

01_audittrailview

This view is just a quick test; if implementing it properly, you would have to localize the timestamp display and add proper paging to the view.

Notes, improvements

If lots of audit trail documents are written at once, a bulk insert could be used to add the documents in batches, as most loggers do; see the sketch below. You should also define a strategy for how old audit trail indices should be cleaned up or archived. The creation of the alias could also be optimized, depending on your audit trail data and how you clean up old audit trail indices.
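
For example, a batched variant of AddLog could look roughly like this (a sketch only; AddLogs is a hypothetical method that does not exist in the provider, and it assumes NEST's IndexMany bulk helper):

// Hypothetical batch variant of AddLog - not part of the existing provider
public void AddLogs(IEnumerable<T> auditTrailLogs)
{
	// IndexMany indexes all the documents in a single bulk request to the current index
	var response = _elasticClient.IndexMany(auditTrailLogs, _indexName);
	if (!response.IsValid)
	{
		throw new ElasticsearchClientException("Bulk add auditlog disaster!");
	}
}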

Links:

https://www.elastic.co/guide/en/elasticsearch/reference/5.2/indices-aliases.html

https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-simple-query-string-query.html

https://docs.microsoft.com/en-us/aspnet/core/

https://www.elastic.co/products/elasticsearch

https://github.com/elastic/elasticsearch-net

https://www.nuget.org/packages/NLog.Web.AspNetCore/



Dominick Baier: NDC London 2017

As always – NDC was a very good conference. Brock and I did a workshop, two talks and an interview. Here are the relevant links:

Check our website for more training dates.


Filed under: .NET Security, ASP.NET, IdentityModel, IdentityServer, OAuth, OpenID Connect, WebAPI


Anuraj Parameswaran: Aspect oriented programming with ASP.NET Core

This post is about implementing simple AOP (Aspect Oriented Programming) with ASP.NET Core. AOP is a programming paradigm that aims to increase modularity by allowing the separation of cross-cutting concerns. It does so by adding additional behavior to existing code (an advice) without modifying the code itself. An example of a cross-cutting concern is logging, which is frequently used in distributed applications to aid debugging by tracing method calls. AOP helps you to implement logging without affecting your actual code.


Anuraj Parameswaran: Hosting ASP.NET Core applications on Heroku using Docker

This post is about hosting ASP.NET Core applications on Heroku using Docker. Heroku is a cloud Platform-as-a-Service (PaaS) supporting several programming languages that is used as a web application deployment model. Heroku, one of the first cloud platforms, has been in development since June 2007, when it supported only the Ruby programming language, but now supports Java, Node.js, Scala, Clojure, Python, PHP, and Go. Heroku doesn't support .NET or .NET Core natively, but recently they started supporting Docker. In this post I am using Docker to deploy my application to Heroku; a buildpack option is also available (a buildpack is the deployment mechanism natively supported by Heroku), but there is no official buildpack for .NET yet.


Andrew Lock: Under the hood of the Middleware Analysis package

Under the hood of the Middleware Analysis package

In my last post I showed how you could use the Microsoft.AspNetCore.MiddlewareAnalysis package to analyse your middleware pipeline. In this post I take a look at the source code behind the package, to see how it's implemented.

What can you use the package for?

After you have added the MiddlewareAnalysis package to your ASP.NET Core application, you can use a DiagnosticSource to log arbitrary details about each middleware component when it starts and stops, or when an exception occurs. You can use this to create some very powerful logs, where you can inspect the raw HttpContext and exception and log any pertinent details. At the simplest level though, you get a log when each middleware component starts or stops.

MiddlewareStarting: Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware; /  
MiddlewareStarting: HelloWorld; /  
MiddlewareFinished: HelloWorld; 200  
MiddlewareFinished: Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware; 200  

Check out my previous post for details on how to add the middleware to your project, as well as how to create a diagnostic source.

How does it work?

In my last post I mentioned that the analysis package uses the IStartupFilter interface. I discussed this interface in a previous post, but in essence, it allows you to insert middleware into the pipeline without using the normal Startup.Configure approach. By registering this filter with the DI container, the MiddlewareAnalysis package can customise the pipeline, inserting additional middleware that it uses to log to a DiagnosticSource.

In the rest of this post I'll take a look at the various internal classes that make up the Microsoft.AspNetCore.MiddlewareAnalysis package (don't worry, there's only 4 of them!).

The service collection extensions

First, the easy bit.

As is the case with most packages, there is an extension method to allow you to easily add the necessary services to your application:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMiddlewareAnalysis();
}

This extension method registers a single instance of an IStartupFilter, the AnalysisStartupFilter. This startup filter will be invoked before your Startup.Configure method is run. In fact, when invoked, the AnalysisStartupFilter will receive your Startup.Configure method as an argument.
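
For reference, the extension method itself is tiny. A simplified sketch of roughly what it does is shown below; the package's real implementation may differ in lifetime and registration details:

public static class AnalysisServiceCollectionExtensions
{
    public static IServiceCollection AddMiddlewareAnalysis(this IServiceCollection services)
    {
        // Register the startup filter that will wrap the application builder.
        // (Simplified sketch; the actual package may use different registration semantics.)
        services.AddSingleton<IStartupFilter, AnalysisStartupFilter>();
        return services;
    }
}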

The AnalysisStartupFilter

The AnalysisStartupFilter that is added by the previous call to AddMiddlewareAnalysis() acts as a "wrapper" around the Startup.Configure method. It both takes and returns an Action<IApplicationBuilder>, letting you customise the way the middleware pipeline is constructed:

public class AnalysisStartupFilter : IStartupFilter  
{
    public Action<IApplicationBuilder> Configure(Action<IApplicationBuilder> next)
    {
        return builder =>
        {
            var wrappedBuilder = new AnalysisBuilder(builder);
            next(wrappedBuilder);

            // There's a couple of other bits here I'll gloss over for now
        };
    }
}

This is an interesting class. Rather than simply adding some fancy middleware to the pipeline, the startup filter creates a new instance of an AnalysisBuilder, passes in the ApplicationBuilder, and then invokes the next method, passing the wrappedBuilder.

This means that when the filter is run (on app startup), it creates a custom IApplicationBuilder, the AnalysisBuilder, and passes that to all subsequent filters and the Startup.Configure method. Consequently, all the calls you make in a typical Configure method are made on the AnalysisBuilder, instead of the original IApplicationBuilder instance:

public void Configure(IApplicationBuilder app)  
{
    // app is now the AnalysisBuilder, so all calls are made
    // on that, instead of the original IApplicationBuilder
    app.UseExceptionHandler("/error");
    app.UseStaticFiles();
    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

Let's take a look at the AnalysisBuilder and figure out what it's playing at.

Intercepting builder calls with the AnalysisBuilder

We now know that the AnalysisBuilder is intercepting all the calls that add middleware to the pipeline. What you may or may not know is that behind the scenes, all the extension methods like UseStaticFiles, UseMvc, and even UseMiddleware are ultimately calling a single method on IApplicationBuilder, Use. This makes it relatively easy to implement a wrapper around the default builder:

public IApplicationBuilder Use(Func<RequestDelegate, RequestDelegate> middleware)  
{
    // You can set a custom name for the middleware by setting 
    // app.Properties["analysis.NextMiddlewareName"] = "Name";

    string middlewareName = string.Empty;
    object middlewareNameObj;
    if (Properties.TryGetValue(NextMiddlewareName, out middlewareNameObj))
    {
        middlewareName = middlewareNameObj?.ToString();
        Properties.Remove(NextMiddlewareName);
    }

    return InnerBuilder.UseMiddleware<AnalysisMiddleware>(middlewareName)
        .Use(middleware);
}

The bulk of this method is taken up with checking whether you have set a value in the Properties collection, which allows you to specify a custom name for the next middleware in the pipeline (see my previous post).

The interesting bit occurs right at the end of the method. As well as adding the provided middleware to the underlying InnerBuilder, an instance of the AnalysisMiddleware is added to the pipeline. That means for every middleware added, an instance of the analysis middleware is added.

So for the example Configure method I showed earlier, that means our actual pipeline looks something like the following:

Under the hood of the Middleware Analysis package
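
In text form, the resulting pipeline for that Configure method looks roughly like this (each analysis instance is named after the middleware that follows it, as described later in the post):

// AnalysisMiddleware  -> then ExceptionHandlerMiddleware
// AnalysisMiddleware  -> then StaticFileMiddleware
// AnalysisMiddleware  -> then the MVC (router) middleware
// AnalysisMiddleware ("EndOfPipeline")
// (default end-of-pipeline / 404 handling)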

You may notice that as well as starting with an AnalysisMiddleware instance, the pipeline adds an AnalysisMiddleware after the MVC middleware too. This is thanks to the definition of the Build function in the AnalysisBuilder:

public RequestDelegate Build()  
{
    // Add one maker at the end before the default 404 middleware (or any fancy Join middleware).
    return InnerBuilder.UseMiddleware<AnalysisMiddleware>("EndOfPipeline")
        .Build();
}

As the comment says, this ensures a final instance of the AnalysisMiddleware, "EndOfPipeline", is added at the end of the pipeline, just before the default 404 middleware.

At the end of setup, we have a middleware pipeline configured as I showed in the previous image, where all the middleware we added to the pipeline in Startup.Configure is interspersed with AnalysisMiddleware.

This brings us onto the final piece of the puzzle, the AnalysisMiddleware itself.

Logging to DiagnosticSource with the AnalysisMiddleware

Up to this point, we haven't used DiagnosticSource anywhere in the package. It's all been about injecting the additional middleware into the pipeline. Inside this middleware is where we do the actual logging.

I'll show the code for the AnalysisMiddleware in a second, but essentially it is just doing three things:

  1. Logging to DiagnosticSource before the next middleware is invoked
  2. Logging to DiagnosticSource after the next middleware has finished being invoked
  3. Catching any exceptions, logging them to DiagnosticSource and rethrowing them.

For details on how DiagnosticSource works, check out my previous post. In brief, you can log to a source using Write(), providing a key and an anonymous object. You can see this is exactly what the middleware is doing in the code:

public class AnalysisMiddleware  
{
    private readonly Guid _instanceId = Guid.NewGuid();
    private readonly RequestDelegate _next;
    private readonly DiagnosticSource _diagnostics;
    private readonly string _middlewareName;

    public AnalysisMiddleware(RequestDelegate next, DiagnosticSource diagnosticSource, string middlewareName)
    {
        _next = next;
        _diagnostics = diagnosticSource;
        if (string.IsNullOrEmpty(middlewareName))
        {
            middlewareName = next.Target.GetType().FullName;
        }
        _middlewareName = middlewareName;
    }

    public async Task Invoke(HttpContext httpContext)
    {
        var startTimestamp = Stopwatch.GetTimestamp();
        if (_diagnostics.IsEnabled("Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareStarting"))
        {
            _diagnostics.Write(
                "Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareStarting",
                new
                {
                    name = _middlewareName,
                    httpContext = httpContext,
                    instanceId = _instanceId,
                    timestamp = startTimestamp,
                });
        }

        try
        {
            await _next(httpContext);

            if (_diagnostics.IsEnabled("Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareFinished"))
            {
                var currentTimestamp = Stopwatch.GetTimestamp();
                _diagnostics.Write(
                    "Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareFinished", 
                    new
                    {
                        name = _middlewareName,
                        httpContext = httpContext,
                        instanceId = _instanceId,
                        timestamp = currentTimestamp,
                        duration = currentTimestamp - startTimestamp,
                    });
            }
        }
        catch (Exception ex)
        {
            if (_diagnostics.IsEnabled("Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareException"))
            {
                var currentTimestamp = Stopwatch.GetTimestamp();
                _diagnostics.Write(
                    "Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareException", 
                    new
                    {
                        name = _middlewareName,
                        httpContext = httpContext,
                        instanceId = _instanceId,
                        timestamp = currentTimestamp,
                        duration = currentTimestamp - startTimestamp,
                        exception = ex,
                    });
            }
            throw;
        }
    }
}

In the constructor, we are passed a reference to the next middleware in the pipeline, next. If we haven't already been passed an explicit middleware name (see the AnalysisBuilder section above), then a name is obtained from the type of next. This will generally return something like "Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware".

The Invoke method is then called when the middleware is executed. The code is a little hard to read due to the various parameters passed around; to make it a little easier on your eyes, the overall pseudo-code for the class might look something like:

public class AnalysisMiddleware  
{
    private readonly RequestDelegate _next;

    public AnalysisMiddleware(RequestDelegate next, /* other dependencies elided */)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext httpContext)
    {
        Diagnostics.Log("MiddlewareStarting")

        try
        {
            await _next(httpContext);
            Diagnostics.Log("MiddlewareFinished")
        }
        catch ()
        {
            Diagnostics.Log("MiddlewareException")
            throw;
        }
    }
}

Hopefully the simplicity of the middleware is more apparent in this latter version. It really is just writing to the diagnostics source and executing the next middleware in the pipeline.

Summary

And that's all there is to it! Just four classes, providing the ability to log lots of details about your middleware pipeline. The analysis middleware itself uses DiagnosticSource to expose details about the request HttpContext (or exception) currently executing.

The most interesting piece of the package is the way it uses AnalysisBuilder. This shows that you can get complete control over your middleware pipeline by using a simple wrapper class, and by injecting an IStartupFilter. If you haven't already, I really recommend checking out the code on GitHub. It's only four files after all! If you want to see how to actually use the package, check out my previous post on using it in your projects, including setting up a DiagnosticListener.


Anuraj Parameswaran: How to use Log4Net with ASP.NET Core for logging

This post is about using Log4Net with ASP.NET Core for implementing logging. The Apache log4net library is a tool to help the programmer output log statements to a variety of output targets. log4net is a port of the excellent Apache log4j™ framework to the Microsoft® .NET runtime. We have kept the framework similar in spirit to the original log4j while taking advantage of new features in the .NET runtime.


Anuraj Parameswaran: Implementing the Repository and Unit of Work Patterns in ASP.NET Core

This post is about implementing the Repository and Unit of Work Patterns in ASP.NET Core. The repository and unit of work patterns are intended to create an abstraction layer between the data access layer and the business logic layer of an application. Implementing these patterns can help insulate your application from changes in the data store and can facilitate automated unit testing or test-driven development (TDD). A while back I wrote a post on implementing a generic repository in ASP.NET 5 (yes, in the ASP.NET 5 days, but it can be used in ASP.NET Core as well), so I am not explaining the Repository pattern in more detail here. The UnitOfWork pattern is a design for grouping a set of tasks into a single group of transactional work. The UnitOfWork pattern is the solution to sharing the Entity Framework data context across multiple managers and repositories.


Andrew Lock: Understanding your middleware pipeline with the Middleware Analysis package

Understanding your middleware pipeline with the Middleware Analysis package

Edited 18th Feb 17, to add section on adding a custom name for anonymous middleware

In a recent post I took a look at the IStartupFilter interface, a little known feature that can be used to add middleware to your configured pipeline through a different route than the standard Configure method.

I have also previously looked at the DiagnosticSource logging framework, which provides a mechanism for logging rich data, as opposed to the strings that are logged using the ILogger infrastructure.

In this post, I will take a look at the Microsoft.AspNetCore.MiddlewareAnalysis package, which uses an IStartupFilter and DiagnosticSource to provide insights into your middleware pipeline.

Analysing your middleware pipeline

Before we dig into details, let's take a look at what you can expect when you use the MiddlewareAnalysis package in your solution.

I've started with a simple 'Hello world' ASP.NET Core application that just prints to the console for every request. The initial Startup.Configure method looks like the following:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)  
{
    loggerFactory.AddConsole();

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.Run(async (context) =>
    {
        await context.Response.WriteAsync("Hello World!");
    });
}

When you run the app, you just get the Hello World as expected:

Understanding your middleware pipeline with the Middleware Analysis package

By default, we have a console logger set up, so the request to the root url will provide some details about the request:

Understanding your middleware pipeline with the Middleware Analysis package

Nothing particularly surprising there. The interesting stuff happens after we've added the analysis filter to our project. After doing so, we'll see a whole load of additional information is logged to the console, describing when each of the middleware components start and stop:

Understanding your middleware pipeline with the Middleware Analysis package

At first blush, this might not seem particularly useful, as it doesn't appear to be logging anything especially interesting. But as you'll see later, the real power comes from using the DiagnosticSource adapter, which allows you to log arbitrary details.

It could also be very handy in diagnosing complex branching middleware pipelines, when trying to figure out why a particular request takes a given route through your app.

Now that you've seen what to expect, we'll add middleware analysis to our pipeline.

1. Add the required packages

There are a couple of packages needed for this example. The AnalysisStartupFilter we are going to use from the Microsoft.AspNetCore.MiddlewareAnalysis package writes events using DiagnosticSource. One of the easiest ways to consume these events is using the Microsoft.Extensions.DiagnosticAdapter package, as you'll see shortly.

First add the MiddlewareAnalysis and DiagnosticAdapter packages to either your project.json or .csproj file (depending on whether you've moved to the new msbuild format):

{
  "dependencies" : {
  ...
  "Microsoft.AspNetCore.MiddlewareAnalysis": "1.1.0",
  "Microsoft.Extensions.DiagnosticAdapter": "1.1.0"
  }
}

Next, we will create an adapter to consume the events generated by the MiddlewareAnalysis package.

2. Creating a diagnostic adapter

In order to consume events from a DiagnosticSource, we need to subscribe to the event stream. For further details on how DiagnosticSource works, check out my previous post or the user guide.

A diagnostic adapter is typically a standard POCO class with methods for each event you are interested in, decorated with a DiagnosticNameAttribute.

You can create a simple adapter for the MiddlewareAnalysis package such as the following (taken from the sample on GitHub):

public class TestDiagnosticListener  
{
    [DiagnosticName("Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareStarting")]
    public virtual void OnMiddlewareStarting(HttpContext httpContext, string name)
    {
        Console.WriteLine($"MiddlewareStarting: {name}; {httpContext.Request.Path}");
    }

    [DiagnosticName("Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareException")]
    public virtual void OnMiddlewareException(Exception exception, string name)
    {
        Console.WriteLine($"MiddlewareException: {name}; {exception.Message}");
    }

    [DiagnosticName("Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareFinished")]
    public virtual void OnMiddlewareFinished(HttpContext httpContext, string name)
    {
        Console.WriteLine($"MiddlewareFinished: {name}; {httpContext.Response.StatusCode}");
    }
}

This adapter creates a separate method for each event that the analysis middleware exposes:

  • "Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareStarting"
  • "Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareFinished"
  • "Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareException"

Each event exposes a particular set of named parameters which you can use in your method. For example, the MiddlewareStarting event exposes a number of parameters, though we are only using two in our TestDiagnosticListener methods:

  • string name: The name of the currently executing middleware
  • HttpContext httpContext: The HttpContext for the current request
  • Guid instanceId: A unique guid for the analysis middleware
  • long timestamp: The timestamp at which the middleware started to run, given by Stopwatch.GetTimestamp()

Warning: The names of the methods in our adapter are not important, but the names of the parameters are. If you don't name them correctly, you'll get exceptions or nulls passed to your methods at runtime.

The fact that the whole HttpContext is available to the logger is one of the really powerful points of the DiagnosticSource infrastructure. Instead of the decision about what to log being made at the calling site, it is made by the logging code itself, which has access to the full context of the event.

In the example, we are doing some very trivial logging, writing straight to the console and just noting down basic features of the request like the request path and status code, but the possibility is there to do far more interesting things as needed: inspecting headers or query parameters, writing to other data sinks, etc.
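
For example, a purely illustrative variant of the starting handler (not part of the sample) could pull out a header and the query string, since the full HttpContext is available:

[DiagnosticName("Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareStarting")]
public virtual void OnMiddlewareStarting(HttpContext httpContext, string name, Guid instanceId, long timestamp)
{
    // Illustrative only: log whatever parts of the request are useful to you
    var userAgent = httpContext.Request.Headers["User-Agent"];
    var query = httpContext.Request.QueryString.Value;
    Console.WriteLine($"MiddlewareStarting: {name}; {httpContext.Request.Path}{query}; UA: {userAgent}");
}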

With an adapter created we can look at wiring it up in our application.

3. Add the necessary services

As with most standard middleware packages in ASP.NET Core, you must register some required services with the dependency injection container in ConfigureServices. Luckily, as is convention, the package contains an extension method to make this easy:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMiddlewareAnalysis();
}

With the services registered, the final step is to wire up our listener.

4. Wiring up the diagnostic listener

In order for the TestDiagnosticListener we created in step 2 to collect events, we must register it with a DiagnosticListener. Again, check the user guide if you are interested in the details.

public void Configure(IApplicationBuilder app, DiagnosticListener diagnosticListener, IHostingEnvironment env, ILoggerFactory loggerFactory)  
{
    var listener = new TestDiagnosticListener();
    diagnosticListener.SubscribeWithAdapter(listener);

    // ... remainder of the existing Configure method
}

The DiagnosticListener can be injected into the Configure method using standard dependency injection, and an instance of the TestDiagnosticListener can just be created directly.

The registering of our listener is achieved using the SubscribeWithAdapter extension method that is exposed by the Microsoft.Extensions.DiagnosticAdapter package. This performs all the wiring up of the TestDiagnosticListener methods to the DiagnosticListener for us.

That is all there is to it. There is no additional middleware to add and no modification of our pipeline needed; just add the listener and give it a test!

5. Run your application and look at the output

With everything set up, you can run your application, make a request, and check the output. If all is set up correctly, you should see the same "Hello World" response, but your console will be peppered with extra details about the middleware being run:

Understanding your middleware pipeline with the Middleware Analysis package

Understanding the analysis output

Assuming this has all worked correctly, you should have a series of "MiddlewareStarting" and "MiddlewareFinished" entries in your console. In my case, running in a development environment, and ignoring the ILogger messages, my sample app gives me the following output when I make a request to the root path /:

MiddlewareStarting: Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware; /  
MiddlewareStarting: UnderstandingMiddlewarePipeline.Startup+<>c; /  
MiddlewareFinished: UnderstandingMiddlewarePipeline.Startup+<>c; 200  
MiddlewareFinished: Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware; 200  

There are two calls to "MiddlewareStarting" in here, and two corresponding calls to "MiddlewareFinished".

The first call is fairly self explanatory, as it lists the name of the middleware as DeveloperExceptionPageMiddleware. This was the first middleware I added to my pipeline, so it was the first middleware called for the request.

The second call is slightly more cryptic, as it lists the name of the middleware as UnderstandingMiddlewarePipeline.Startup+<>c. This is because I used an inline app.Run method to generate the "Hello world" response in the browser (as I showed right at the beginning of the post).

Using app.Run adds an additional logical piece of middleware to the pipeline, one that executes the provided lambda. That lambda obviously doesn't have a friendly name, so the analysis middleware package passes the automatically generated type name to the listener.

As expected, the MiddlewareFinished logs occur in the reverse order to the MiddlewareStarting logs, as the response passes back down through the middleware pipeline. At each stage it lists the status code for the request generated by the completing middleware. This would allow you to see, for example, at exactly which point in the pipeline the status code for a request switched from 200 to an error.

Adding a custom name for anonymous middleware

While it's fairly obvious what the anonymous middleware is in this case, what if you had used multiple app.Run or app.Use calls in your method? It could be confusing if your pipeline has multiple branches or several anonymous methods, which rather defeats the point of using the middleware analysis package!

Luckily, there's an easy way to give a name to these anonymous methods if you wish, by setting a property on the IApplicationBuilder called "analysis.NextMiddlewareName". For example, we could rewrite our middleware pipeline as the following:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)  
{
    loggerFactory.AddConsole();

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.Properties["analysis.NextMiddlewareName"] = "HelloWorld";
    app.Run(async (context) =>
    {
        await context.Response.WriteAsync("Hello World!");
    });
}

If we ran the request again with the addition of our property, the logs would look like the following:

MiddlewareStarting: Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware; /  
MiddlewareStarting: HelloWorld; /  
MiddlewareFinished: HelloWorld; 200  
MiddlewareFinished: Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware; 200  

Much clearer!

Listening for errors

Previously I said that the analysis middleware generates three different events, one of which is MiddlewareException. To see this in action, the easiest approach is to view the demo for the middleware analysis package. This lets you test each of the different types of events you might get using simple links:

Understanding your middleware pipeline with the Middleware Analysis package

By clicking on the "throw" option, the sample app will throw an exception as part of the middleware pipeline execution, and you can inspect the logs. If you do, you'll see a number of "MiddlewareException" entries. Obviously the details shown in the sample are scant, but as I've already described, you have a huge amount of flexibility in your diagnostic adapter to log any details you need:

MiddlewareStarting: Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware; /throw  
MiddlewareStarting: Microsoft.AspNetCore.Builder.UseExtensions+<>c__DisplayClass0_0; /throw  
MiddlewareStarting: Microsoft.AspNetCore.Builder.Extensions.MapMiddleware; /throw  
MiddlewareStarting: MiddlewareAnaysisSample.Startup+<>c__DisplayClass1_0; /throw  
MiddlewareStarting: Microsoft.AspNetCore.Builder.Extensions.MapMiddleware; /throw  
MiddlewareStarting: MiddlewareAnaysisSample.Startup+<>c;  
MiddlewareException: MiddlewareAnaysisSample.Startup+<>c; Application Exception  
MiddlewareException: Microsoft.AspNetCore.Builder.Extensions.MapMiddleware; Application Exception  
MiddlewareException: MiddlewareAnaysisSample.Startup+<>c__DisplayClass1_0; Application Exception  
MiddlewareException: Microsoft.AspNetCore.Builder.Extensions.MapMiddleware; Application Exception  
MiddlewareException: Microsoft.AspNetCore.Builder.UseExtensions+<>c__DisplayClass0_0; Application Exception  
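
As a hedged sketch (not taken from the sample), your own exception handler could record the request path and exception type alongside the message, using the same named parameters the package provides (listed in the reference section below):

[DiagnosticName("Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareException")]
public virtual void OnMiddlewareException(Exception exception, HttpContext httpContext, string name, long duration)
{
    // Illustrative only: include whichever details help you diagnose the failure
    Console.WriteLine($"MiddlewareException: {name}; {httpContext.Request.Path}; " +
                      $"after {duration} ticks; {exception.GetType().Name}: {exception.Message}");
}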

Hopefully you now have a feel for the benefit of being able to get such detailed insight into exactly what your middleware pipeline is doing. I really encourage you to have a play with the sample app, and tweak it to see how powerful it could be for diagnosing issues in your own apps.

Event parameters reference

One of the slightly annoying things with the DiagnosticSource infrastructure is the lack of documentation around the events and parameters that a package exposes. You can always look through the source code but that's not exactly user friendly.

As of writing, the current version of Microsoft.AspNetCore.MiddlewareAnalysis is 1.1.0, which exposes three events, with the following parameters:

  • "Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareStarting"
    • string name: The name of the currently executing middleware
    • HttpContext httpContext: The HttpContext for the current request
    • Guid instanceId: A unique guid for the analysis middleware
    • long timestamp: The current ticks timestamp at which the middleware started to run, given by Stopwatch.GetTimestamp()
  • "Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareFinished"
    • string name: The name of the currently executing middleware
    • HttpContext httpContext: The HttpContext for the current request
    • Guid instanceId: A unique guid for the analysis middleware
    • long timestamp: The timestamp at which the middleware finished running
    • long duration: The duration in ticks that the middleware took to run, given by the finish timestamp - the start timestamp.
  • "Microsoft.AspNetCore.MiddlewareAnalysis.MiddlewareException"
    • string name: The name of the currently executing middleware
    • HttpContext httpContext: The HttpContext for the current request
    • Guid instanceId: A unique guid for the analysis middleware
    • long timestamp: The timestamp at which the middleware finished running
    • long duration: The duration in ticks that the middleware took to run, given by the finish timestamp - the start timestamp.
    • Exception exception: The exception that occurred during execution of the middleware

Given that the names of the events and the parameters must match these values, if you find your logger isn't working, it's worth checking back in the source code to see if things have changed.

Note: The event names must be an exact match, including case, but the parameter names are not case sensitive.

Under the hood

This post is already pretty long, so I'll save the details of how the middleware analysis filter works for a later post, but at its core it is using two pieces of infrastructure, IStartupFilter and DiagnosticSource. Of course, if you can't wait, you can always check out the source code on GitHub!


Andrew Lock: Exploring IStartupFilter in ASP.NET Core

Exploring IStartupFilter in ASP.NET Core

Note The MEAP preview of my book, ASP.NET Core in Action is now available from Manning! Use the discount code mllock to get 50% off, valid through February 13.

I was spelunking through the ASP.NET Core source code the other day, when I came across something I hadn't seen before - the IStartupFilter interface. This lives in the Hosting repository in ASP.NET Core and is generally used by a number of framework services rather than by ASP.NET Core applications themselves.

In this post, I'll take a look at what the IStartupFilter is and how it is used in the ASP.NET Core infrastructure. In the next post I'll take a look at an external middleware implementation that makes use of it.

The IStartupFilter interface

The IStartupFilter interface lives in the Microsoft.AspNetCore.Hosting.Abstractions package in the Hosting repository on GitHub. It is very simple, and implements just a single method:

namespace Microsoft.AspNetCore.Hosting  
{
    public interface IStartupFilter
    {
        Action<IApplicationBuilder> Configure(Action<IApplicationBuilder> next);
    }
}

The single Configure method that IStartupFilter defines takes a single parameter, an Action<IApplicationBuilder>, and returns one too. That's a pretty generic signature, and doesn't reveal a lot of intent, but we'll just go with it for now.

The IApplicationBuilder is what you use to configure a middleware pipeline when building an ASP.NET Core application. For example, a simple Startup.Configure method in an MVC app might look something like the following:

public void Configure(IApplicationBuilder app)  
{
    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

In this method, you are directly provided an instance of the IApplicationBuilder, and can add middleware to it. With the IStartupFilter, you both receive and return an Action<IApplicationBuilder>; that is, you are provided a method for configuring an IApplicationBuilder and you must return one too.

Consider this again for a second - the IStartupFilter.Configure method accepts a method for configuring an IApplicationBuilder. In other words, the IStartupFilter.Configure accepts a method such as Startup.Configure:

Startup _startup = new Startup();  
Action<IApplicationBuilder> startupConfigure = _startup.Configure;

IStartupFilter filter1 = new StartupFilter1(); //I'll show an example filter later on  
Action<IApplicationBuilder> filter1Configure = filter1.Configure(startupConfigure);

IStartupFilter filter2 = new StartupFilter2(); //I'll show an example filter later on  
Action<IApplicationBuilder> filter2Configure = filter2.Configure(filter1Configure);

This may or may not start seeming somewhat familiar… We are building up another pipeline; but instead of a middleware pipeline, we are building a pipeline of Configure methods. This is the purpose of the IStartupFilter, to allow creating a pipeline of Configure methods in your application.

When are IStartupFilters called?

Now that we better understand the signature of IStartupFilter, we can take a look at its usage in the ASP.NET Core framework.

To see IStartupFilter in action, you can take a look at the WebHost class in the Microsoft.AspNetCore.Hosting package, in the method BuildApplication. This method is called as part of the general initialisation that takes place when you call Build on a WebHostBuilder. This typically takes place in your program.cs file, e.g.:

public class Program  
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()    
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseStartup<Startup>()
            .Build();  // this will result in a call to BuildApplication()

        host.Run(); 
    }
}

Taking a look at BuildApplication in elided form (below), you can see that this method is responsible for instantiating the middleware pipeline. The RequestDelegate it returns represents a complete pipeline, and can be called by the server (Kestrel) when a request arrives.

private RequestDelegate BuildApplication()  
{
    //some additional setup not shown
    IApplicationBuilder builder = builderFactory.CreateBuilder(Server.Features);
    builder.ApplicationServices = _applicationServices;

    var startupFilters = _applicationServices.GetService<IEnumerable<IStartupFilter>>();
    Action<IApplicationBuilder> configure = _startup.Configure;
    foreach (var filter in startupFilters.Reverse())
    {
        configure = filter.Configure(configure);
    }

    configure(builder);

    return builder.Build();
}

First, this method creates an instance of an IApplicationBuilder, which will be used to build the middleware pipeline, and sets the ApplicationServices to a configured DI container.

The next block is the interesting part. First, an IEnumerable<IStartupFilter> is fetched from the DI container. As I've already hinted, we can configure multiple IStartupFilters to form a pipeline, so this method just fetches them all from the container. Also, the Startup.Configure method is captured into a local variable, configure. This is the Configure method that you typically write in your Startup class to configure your middleware pipeline.

Now we create the pipeline of Configure methods by looping through each IStartupFilter (in reverse order), passing in the Startup.Configure method, and then updating the local variable. This has the effect of creating a nested pipeline of Configure methods. For example, if we have three instances of IStartupFilter, you will end up with something a little like this, where the inner configure methods are passed as the parameter to the outer methods:

Exploring IStartupFilter in ASP.NET Core
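
In code form, the nesting produced by the loop above looks roughly like the following (filter1, filter2 and filter3 are hypothetical):

// Registered order: filter1, filter2, filter3. Because of Reverse(), filter3 wraps first,
// leaving filter1 as the outermost wrapper, so its code runs first when configure is invoked.
Action<IApplicationBuilder> configure = startup.Configure;
configure = filter3.Configure(configure);   // innermost
configure = filter2.Configure(configure);
configure = filter1.Configure(configure);   // outermost

configure(builder);  // runs filter1 -> filter2 -> filter3 -> Startup.Configure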

The final value of configure is then used to perform the actual middleware pipeline configuration by invoking it with the prepared IApplicationBuilder. Calling builder.Build() generates the RequestDelegate required for handling HTTP requests.

What does an implementation look like?

We've described in general what IStartupFilter is for, but it's always easier to have a concrete implementation to look at. By default, the WebHostBuilder registers a single IStartupFilter when it initialises - the AutoRequestServicesStartupFilter:

public class AutoRequestServicesStartupFilter : IStartupFilter  
{
    public Action<IApplicationBuilder> Configure(Action<IApplicationBuilder> next)
    {
        return builder =>
        {
            builder.UseMiddleware<RequestServicesContainerMiddleware>();
            next(builder);
        };
    }
}

Hopefully, the behaviour of this class is fairly obvious. Essentially it adds an additional piece of middleware, the RequestServicesContainerMiddleware, at the start of your middleware pipeline.

This is the only IStartupFilter registered by default, and so in that case the parameter next will be the Configure method of your Startup class.

And that is essentially all there is to IStartupFilter - it is a way to add additional middleware (or other configuration) at the beginning or end of the configured pipeline.
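
As a minimal sketch, a hypothetical filter of your own that guarantees its middleware runs at the very start of the pipeline would look much like the built-in one (registration is covered in the next section):

public class RequestTimingStartupFilter : IStartupFilter
{
    public Action<IApplicationBuilder> Configure(Action<IApplicationBuilder> next)
    {
        return builder =>
        {
            // Middleware added before next(builder) runs at the start of the pipeline;
            // anything added after the call would run at the end instead.
            builder.UseMiddleware<RequestTimingMiddleware>(); // RequestTimingMiddleware is hypothetical
            next(builder);
        };
    }
}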

How are they registered?

Registering an IStartupFilter is simple, just register it in your ConfigureServices call as usual. The AutoRequestServicesStartupFilter is registered by default in the WebHostBuilder as part of its initialisation:

private IServiceCollection BuildHostingServices()  
{
    ...
    services.AddTransient<IStartupFilter, AutoRequestServicesStartupFilter>();
    ...
}

The RequestServicesContainerMiddleware

On a slightly tangential point, but just for interest, the RequestServicesContainerMiddleware (that is registered by the AutoRequestServicesStartupFilter) is shown in reduced format below:

public class RequestServicesContainerMiddleware  
{
    private readonly RequestDelegate _next;
    private IServiceScopeFactory _scopeFactory;

    public RequestServicesContainerMiddleware(RequestDelegate next, IServiceScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory;
        _next = next;
    }

    public async Task Invoke(HttpContext httpContext)
    {
        var existingFeature = httpContext.Features.Get<IServiceProvidersFeature>();

        // All done if request services is set
        if (existingFeature?.RequestServices != null)
        {
            await _next.Invoke(httpContext);
            return;
        }

        using (var feature = new RequestServicesFeature(_scopeFactory))
        {
            try
            {
                httpContext.Features.Set<IServiceProvidersFeature>(feature);
                await _next.Invoke(httpContext);
            }
            finally
            {
                httpContext.Features.Set(existingFeature);
            }
        }
    }
}

This middleware is responsible for setting the IServiceProvidersFeature. When created, the RequestServicesFeature creates a new IServiceScope and IServiceProvider for the request. This handles the creation and disposal of dependencies added to the dependency injection container with a Scoped lifecycle.

Hopefully it's clear why it's important that this middleware is added at the beginning of the pipeline - subsequent middleware may need access to the scoped services it manages.
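
To make that concrete, here is a small hedged sketch of later middleware resolving a scoped service from the per-request provider that RequestServicesContainerMiddleware sets up (IGreetingService is a hypothetical scoped service registered in ConfigureServices):

public class GreetingMiddleware
{
    private readonly RequestDelegate _next;

    public GreetingMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        // Resolved from the request-scoped provider created earlier in the pipeline
        // (GetService<T> comes from Microsoft.Extensions.DependencyInjection)
        var greeter = context.RequestServices.GetService<IGreetingService>();
        context.Response.Headers["X-Greeting"] = greeter.Greet();
        await _next(context);
    }
}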

By using an IStartupFilter, the framework can be sure the middleware is added at the start of the pipeline, and does so in an extensible, self-contained way.

When should you use it?

Generally speaking, I would not imagine there will be much need for IStartupFilter in users' applications. By their nature, users can define the middleware pipeline as they like in the Configure method, so IStartupFilter is rather unnecessary there.

I can see a couple of situations in which IStartupFilter would be useful to implement:

  1. You are a library author, and you need to ensure your middleware runs at the beginning (or end) of the middleware pipeline.
  2. You are using a library which makes use of the IStartupFilter, and you need to make sure your middleware runs before its middleware does.

Considering the first point, you may have some middleware that absolutely needs to run at a particular point in the middleware pipeline. This is effectively the use case for the RequestServicesContainerMiddleware shown previously.

Currently, the order in which services T are registered with the DI container controls the order they will be returned when you fetch an IEnumerable<T> using GetServices(). As the AutoRequestServicesStartupFilter is added first, it will be returned first when fetched as part of an IEnumerable<IStartupFilter>. Thanks to the call to Reverse() in the WebHost.BuildApplication() method, its Configure method will be the last one called, and hence the outermost method.

If you register additional IStartupFilters in your ConfigureServices method, they will be run prior to the AutoRequestServicesStartupFilter, in the reverse order that you register them. The earlier they are registered with the container, the closer to the beginning of the pipeline any middleware they define will be.

This means you can control the order of middleware added by IStartupFilters in your application. If you use a library that registers an IStartupFilter in its 'Add' method, you can choose whether your own IStartupFilter should run before or after it by whether it is registered before or after in your ConfigureServices method.
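
A short hypothetical sketch of what that ordering means in practice:

public void ConfigureServices(IServiceCollection services)
{
    // EarlyFilter is registered first, so any middleware it adds sits closer to the
    // start of the pipeline than LateFilter's (both filters here are hypothetical).
    services.AddTransient<IStartupFilter, EarlyFilter>();
    services.AddTransient<IStartupFilter, LateFilter>();
}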

The whole concept of IStartupFilters is a little confusing and somewhat esoteric, but it's nice to know it's there as an option should it be required!

Summary

In this post I discussed the IStartupFilter and its use by the WebHost when building a middleware pipeline. In the next post I'll explore a specific usage of the IStartupFilter.


Damien Bowden: Hot Module Replacement with Angular and Webpack

This article shows how HMR, or Hot Module Replacement can be used together with Angular and Webpack.

Code: VS2017 angular 4.x | VS2017 angular 2.x

Blogs in this series:

2017.03.18: Updated to angular 4.0.0

See here for full history:
https://github.com/damienbod/AngularWebpackVisualStudio/blob/master/CHANGELOG.md

package.json npm file

The webpack-dev-server from Kees Kluskens is added to the devDependencies in the npm package.json file. The webpack-dev-server package implements and supports the HMR feature.

"devDependencies": {
  ...
  "webpack": "^2.2.1",
  "webpack-dev-server": "2.2.1"
},

In the scripts section of the package.json, the start command is configured to start the dotnet server and also the webpack-dev-server with the --hot and the --inline parameters.

See the webpack-dev-server documentation for more information about the possible parameters.

The dotnet server is only required because this demo application uses a Web API service implemented in ASP.NET Core.

"start": "concurrently \"webpack-dev-server --hot --inline --port 8080\" \"dotnet run\" "

webpack dev configuration

The devServer is added to the module.exports in the webpack.dev.js. This configures the webpack-dev-server as required. The webpack-dev-server configuration can be set here as well as via the command line options, so you as a developer can decide which is better for you.

devServer: {
	historyApiFallback: true,
	contentBase: path.join(__dirname, '/wwwroot/'),
	watchOptions: {
		aggregateTimeout: 300,
		poll: 1000
	}
},

The output in the module.exports also needs to be configured correctly for the webpack-dev-server to work correctly. If the ‘./’ path is used in the path option of the output section, the webpack-dev-server will not start.

output: {
	path: __dirname +  '/wwwroot/',
	filename: 'dist/[name].bundle.js',
	chunkFilename: 'dist/[id].chunk.js',
	publicPath: '/'
},

The module should be declared, and the module.hot check needs to be added to the main.ts.

// Entry point for JiT compilation.
declare var System: any;

import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app/app.module';

// Enables Hot Module Replacement.
declare var module: any;
if (module.hot) {
    module.hot.accept();
}

platformBrowserDynamic().bootstrapModule(AppModule);

Running the application

Build the application using the webpack dev build. This can be done in the command line. Before building, you need to install all the npm packages using npm install.

$ npm run build-dev

The npm script build-dev is defined in the package.json file and uses the webpack-dev script which does a development build.

"build-dev": "npm run webpack-dev",
"webpack-dev": "set NODE_ENV=development && webpack",

Now the server can be started using the start script.

$ npm start

hmr_angular_01

The application is now running on localhost with port 8080 as defined.

http://localhost:8080/home

If, for example, the color is changed in the app.scss, the bundles will be reloaded in the browser without refreshing.
hmr_angular2_03

Links

https://webpack.js.org/concepts/hot-module-replacement/

https://webpack.js.org/configuration/dev-server/#devserver

https://github.com/webpack/webpack-dev-server

https://www.sitepoint.com/beginners-guide-to-webpack-2-and-module-bundling/

View story at Medium.com



Dominick Baier: IdentityModel.OidcClient v2 & the OpenID RP Certification

A couple of weeks ago I started re-writing (and re-designing) my OpenID Connect & OAuth 2 client library for native applications. The library follows the guidance from the OpenID Connect and OAuth 2.0 for native Applications specification.

Main features are:

  • Support for OpenID Connect authorization code and hybrid flow
  • Support for PKCE
  • NetStandard 1.4 library, which makes it compatible with x-plat .NET Core, desktop .NET, Xamarin iOS & Android (and UWP soon)
  • Configurable policy to lock down security requirements (e.g. requiring at_hash or c_hash, policies around discovery etc.)
  • either stand-alone mode (request generation and response processing) or support for pluggable (system) browser implementations
  • support for pluggable logging via .NET ILogger

In addition, starting with v2 – OidcClient is also now certified by the OpenID Foundation for the basic and config profile.

oid-l-certification-mark-l-cmyk-150dpi-90mm

It also passes all conformance tests for the code id_token grant type (hybrid flow) – but since I don’t support the other hybrid flow combinations (e.g. code token or code id_token token), I couldn’t certify for the full hybrid profile.

For maximum transparency, I checked in my conformance test runner along with the source code. Feel free to try/verify yourself.

The latest version of OidcClient is the dalwhinnie release (courtesy of my whisky semver scheme). Source code is here.

I am waiting a couple more days for feedback – and then I will release the final 2.0.0 version. If you have some spare time, please give it a try (there’s a console client included and some more samples here <use the v2 branch for the time being>). Thanks!


Filed under: .NET Security, IdentityModel, OAuth, OpenID Connect, WebAPI


Andrew Lock: Logging using DiagnosticSource in ASP.NET Core

Logging using DiagnosticSource in ASP.NET Core

Logging in the ASP.NET Core framework is implemented as an extensible set of providers that allows you to easily plug in new providers without having to change your logging code itself. The docs give a great summary of how to use the ILogger and ILoggerFactory in your application and how to pipe the output to the console, to Serilog, to Azure etc. However, the ILogger isn't the only logging possibility in ASP.NET Core.

In this post, I'll show how to use the DiagnosticSource logging system in your ASP.NET Core application.

ASP.NET Core logging systems

There are actually three logging systems in ASP.NET Core:

  1. EventSource - Fast and strongly typed. Designed to interface with OS logging systems.
  2. ILogger - An extensible logging system designed to allow you to plug in additional consumers of logging events.
  3. DiagnosticSource - Similar in design to EventSource, but does not require the logged data be serialisable.

EventSource has been available since the .NET Framework 4.5 and is used extensively by the framework to instrument itself. The data that gets logged is strongly typed, but must be serialisable as the data is sent out of the process to be logged. Ultimately, EventSource is designed to interface with the underlying operating system's logging infrastructure, e.g. Event Tracing for Windows (ETW) or LTTng on Linux.
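
As a rough illustration of the EventSource style (the names here are made up), events are strongly typed methods on a class deriving from EventSource, and the payload must be serialisable because it can leave the process:

// Uses System.Diagnostics.Tracing
[EventSource(Name = "Demo-RequestEvents")]
public sealed class RequestEventSource : EventSource
{
    public static readonly RequestEventSource Log = new RequestEventSource();

    [Event(1, Level = EventLevel.Informational)]
    public void RequestStarted(string path) => WriteEvent(1, path);
}

// Usage: RequestEventSource.Log.RequestStarted("/home");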

The ILogger infrastructure is the most commonly used logging infrastructure in ASP.NET Core. You can log to the infrastructure by injecting an instance of ILogger into your classes, and calling, for example, ILogger.LogInformation(). The infrastructure is designed for logging strings only, but does allow you to pass objects as additional parameters which can be used for structured logging (such as that provided by Serilog). Generally speaking, the ILogger implementation will be the infrastructure you want to use in your applications, so check out the documentation if you are not familiar with it.
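
For comparison, a minimal ILogger sketch (the controller and message are made up; message template placeholders like {Path} become structured properties with providers that support them):

public class HomeController : Controller
{
    private readonly ILogger<HomeController> _logger;

    public HomeController(ILogger<HomeController> logger)
    {
        _logger = logger;
    }

    public IActionResult Index()
    {
        // Logged as a string by simple providers, and as structured data by e.g. Serilog
        _logger.LogInformation("Handling request for {Path}", HttpContext.Request.Path);
        return View();
    }
}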

The DiagnosticSource infrastructure is very similar to the EventSource infrastructure, but the data being logged does not leave the process, so it does not need to be serialisable. There is also an adapter to allow converting DiagnosticSource events to ETW events which can be useful in some cases. It is worth reading the users guide for DiagnosticSource on GitHub if you wish to use it in your code.

When to use DiagnosticSource vs ILogger?

The ASP.NET Core internals use both the ILogger and the DiagnosticSource infrastructure to instrument itself. Generally speaking, and unsurprisingly, DiagnosticSource is used strictly for diagnostics. It records events such as "Microsoft.AspNetCore.Mvc.BeforeViewComponent" and "Microsoft.AspNetCore.Mvc.ViewNotFound".

In contrast, the ILogger is used to log more specific information such as "Executing JsonResult, writing value {Value}." or when an error occurs, such as "JSON input formatter threw an exception.".

So in essence, you should only use DiagnosticSource for infrastructure related events, for tracing the flow of your application process. Generally, ILogger will be the appropriate interface in almost all cases.

An example project using DiagnosticSource

For the rest of this post I'll show an example of how to log events to DiagnosticSource, and how to write a listener to consume them. This example will simply log to the DiagnosticSource when some custom middleware executes, and the listener will write details about the current request to the console. You can find the example project here.

Adding the necessary dependencies

We'll start by adding the NuGet packages we're going to need for our DiagnosticSource to our project.json (I haven't moved to csproj based projects yet):

{
  "dependencies": {
    ...
    "Microsoft.Extensions.DiagnosticAdapter": "1.1.0",
    "System.Diagnostics.DiagnosticSource": "4.3.0"
  }
}

Strictly speaking, the System.Diagnostics.DiagnosticSource package is the only one required, but we will add the adapter to give us an easier way to write a listener later.

Logging to the DiagnosticSource from middleware

Next, we'll create the custom middleware. This middleware doesn't do anything other than log to the diagnostic source:

public class DemoMiddleware  
{
    private readonly RequestDelegate _next;
    private readonly DiagnosticSource _diagnostics;

    public DemoMiddleware(RequestDelegate next, DiagnosticSource diagnosticSource)
    {
        _next = next;
        _diagnostics = diagnosticSource;
    }

    public async Task Invoke(HttpContext context)
    {
        if (_diagnostics.IsEnabled("DiagnosticListenerExample.MiddlewareStarting"))
        {
            _diagnostics.Write("DiagnosticListenerExample.MiddlewareStarting",
                new
                {
                    httpContext = context
                });
        }

        await _next.Invoke(context);
    }
}

This shows the standard way to log using a DiagnosticSource. You inject the DiagnosticSource into the constructor of the middleware for use when the middleware executes.

When you intend to log an event, you first check that there is a listener for the specific event. This approach keeps the logger lightweight, as the code contained within the body of the if statement is only executed if a listener is attached.

In order to create the log, you use the Write method, providing the event name and the data that should be logged. The data to be logged is generally passed as an anonymous object. In this case, the HttpContext is passed to the attached listeners, which they can use to log the data in any way they see fit.

Creating a diagnostic listener

There are a number of ways to create a listener that consumes DiagnosticSource events, but one of the easiest approaches is to use the functionality provided by the Microsoft.Extensions.DiagnosticAdapter package.

To create a listener, you can create a POCO class that contains a method designed to accept parameters of the appropriate type. You then decorate the method with a [DiagnosticName] attribute, providing the event name to listen for:

public class DemoDiagnosticListener  
{
    [DiagnosticName("DiagnosticListenerExample.MiddlewareStarting")]
    public virtual void OnMiddlewareStarting(HttpContext httpContext)
    {
        Console.WriteLine($"Demo Middleware Starting, path: {httpContext.Request.Path}");
    }
}

In this example, the OnMiddlewareStarting() method is configured to handle the "DiagnosticListenerExample.MiddlewareStarting" diagnostic event. The HttpContext provided when the event is logged is passed to the method, because the parameter has the same name, httpContext, as the property on the anonymous object that was written.

Hopefully one of the advantages of the DiagnosticSource infrastructure is apparent: you can log anything provided as data. We have access to the full HttpContext object that was passed, so we can choose to log anything it contains (just the request path in this case).

Wiring up the DiagnosticListener

All that remains is to hook up our listener and middleware pipeline in our Startup.Configure method:

public class Startup  
{
    public void Configure(IApplicationBuilder app, DiagnosticListener diagnosticListener)
    {
        // Listen for middleware events and log them to the console.
        var listener = new DemoDiagnosticListener();
        diagnosticListener.SubscribeWithAdapter(listener);

        app.UseMiddleware<DemoMiddleware>();
        app.Run(async (context) =>
        {
            await context.Response.WriteAsync("Hello World!");
        });
    }
}

A DiagnosticListener is injected into the Configure method from the DI container. This is the actual class that is used to subscribe to diagnostic events. We use the SubscribeWithAdapter extension method from the Microsoft.Extensions.DiagnosticAdapter package to register our DemoDiagnosticListener. This hooks into the [DiagnosticName] attribute to register our events, so that the listener is invoked when the event is written.

Finally, we configure the middleware pipeline with our demo middleware and a simple 'Hello world' endpoint.

Running the example

At this point we're all set to run the example. If we hit any page, we just get the 'Hello world' output, no matter the path.

Logging using DiagnosticSource in ASP.NET Core

However, if we check the console, we can see the DemoMiddleware has been raising diagnostic events. These have been captured by the DemoDiagnosticListener which logs the path to the console:

Now listening on: http://localhost:5000  
Application started. Press Ctrl+C to shut down.  
Demo Middleware Starting, path: /  
Demo Middleware Starting, path: /a/path  
Demo Middleware Starting, path: /another/path  
Demo Middleware Starting, path: /one/more  

Summary

And that's it, we have successfully written and consumed a DiagnosticSource. As I stated earlier, you are more likely to use the ILogger in your applications than DiagnosticSource, but hopefully now you will be able to use it should you need to. Do let me know in the comments if there's anything I've missed or got wrong!


Damien Bowden: Docker compose with ASP.NET Core, EF Core and the PostgreSQL image

This article shows how an ASP.NET Core application with a PostgreSQL database can be set up together using Docker as the deployment containers for both the web and database parts of the application. docker-compose is used to connect the two containers, and the application is built using Visual Studio 2017.

Code: https://github.com/damienbod/AspNetCorePostgreSQLDocker

2017.02.03: Updated to VS2017 RC3 msbuild3

Setting up the PostgreSQL docker container from the command line

The PostgreSQL docker image can be started or set up from the command line simply by defining the required environment parameters and the port which can be used to connect to PostgreSQL. A named volume called pgdata is also defined in the following command. The container is called postgres-server.

$ docker run -d -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=damienbod 
 --name postgres-server -p 5432:5432 -v pgdata:/var/lib/postgresql/data 
 --restart=always postgres

You can check all your local volumes with the following docker command:

$ docker volume ls

The docker containers can be viewed by running docker ps -a:

$ docker ps -a

Then you can check the docker container for the postgres-server by using the logs command and the id of the container. Only the first few characters from the container id are required for docker to find the container.

$ docker logs <docker_id>

If you would like to view the docker container configuration and its properties, the inspect command can be used:

$ docker inspect <docker_id>

When developing docker applications, you will regularly need to clean up the images, containers and volumes. Here are some quick commands which are used regularly.

If you need to find the dangling volumes:

$ docker volume ls -qf dangling=true

A volume can be removed using the volume id:

$ docker volume rm <volume id>

Clean up container and volume (dangerous as you might not want to remove the data):

$ docker rm -fv <docker id>

Configure the database using pgAdmin

Open pgAdmin to configure a new user in PostgreSQL, which will be used for the application.

EF7_PostgreSQL_01

Right click your user and click properties to set the password

EF7_PostgreSQL_02

Now a PostgreSQL database using docker is ready to be used. This is not the only way to do this; a better way would be to use a Dockerfile and docker-compose.

Creating the PostgreSQL docker image using a Dockerfile

Usually you do not want to create the application by hand. You can do everything described above using a Dockerfile and docker-compose. The PostgreSQL docker image for this project is created using a Dockerfile and docker-compose. The Dockerfile uses the latest official postgres docker image and adds the required database scripts to the docker-entrypoint-initdb.d folder inside the container. When PostgreSQL initializes, it executes these scripts.

FROM postgres:latest
EXPOSE 5432
COPY dbscripts/10-init.sql /docker-entrypoint-initdb.d/10-init.sql
COPY dbscripts/20-damienbod.sql /docker-entrypoint-initdb.d/20-database.sql

The docker-compose file defines the image, ports and a named volume for this image. The POSTGRES_PASSWORD environment variable is required.

version: '2'

services:
  damienbodpostgres:
     image: damienbodpostgres
     restart: always
     build:
       context: .
       dockerfile: Dockerfile
     ports:
       - 5432:5432
     environment:
         POSTGRES_PASSWORD: damienbod
     volumes:
       - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:

Now switch to the directory where the docker-compose file is and build.

$ docker-compose build

If you want to deploy, you could create a new docker tag on the postgres container. Use your Docker Hub name if you have one.

$ docker ps -a
$ docker tag damienbodpostgres damienbod/postgres-server

You can check your images and should see the new tag in the list.

$ docker images

Creating the ASP.NET Core application

An ASP.NET Core application was created in VS2017. The EF Core and the PostgreSQL nuget packages were added as required. The Docker support was also added using the Visual Studio tooling.

<Project ToolsVersion="15.0" Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.1</TargetFramework>
    <PreserveCompilationContext>true</PreserveCompilationContext>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Routing" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Server.IISIntegration" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Server.Kestrel" Version="1.1.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore" Version="1.1.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Design" Version="1.1.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Relational" Version="1.1.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="1.0.0-msbuild3-final" />
    <PackageReference Include="Npgsql.EntityFrameworkCore.PostgreSQL" Version="1.1.0" />
    <PackageReference Include="Npgsql.EntityFrameworkCore.PostgreSQL.Design" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.EnvironmentVariables" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.FileExtensions" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Logging" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Logging.Console" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Options.ConfigurationExtensions" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="1.1.0" />
  </ItemGroup>
  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="1.0.0-msbuild3-final" />
    <DotNetCliToolReference Include="Microsoft.Extensions.SecretManager.Tools" Version="1.0.0-msbuild3-final" />
    <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="1.0.0-msbuild3-final" />
  </ItemGroup>
</Project>

The EF Core context is set up to access the 2 tables defined in PostgreSQL.

using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;

namespace AspNetCorePostgreSQLDocker
{
    // >dotnet ef migration add testMigration in AspNet5MultipleProject
    public class DomainModelPostgreSqlContext : DbContext
    {
        public DomainModelPostgreSqlContext(DbContextOptions<DomainModelPostgreSqlContext> options) :base(options)
        {
        }
        
        public DbSet<DataEventRecord> DataEventRecords { get; set; }

        public DbSet<SourceInfo> SourceInfos { get; set; }

        protected override void OnModelCreating(ModelBuilder builder)
        {
            builder.Entity<DataEventRecord>().HasKey(m => m.DataEventRecordId);
            builder.Entity<SourceInfo>().HasKey(m => m.SourceInfoId);

            // shadow properties
            builder.Entity<DataEventRecord>().Property<DateTime>("UpdatedTimestamp");
            builder.Entity<SourceInfo>().Property<DateTime>("UpdatedTimestamp");

            base.OnModelCreating(builder);
        }

        public override int SaveChanges()
        {
            ChangeTracker.DetectChanges();

            updateUpdatedProperty<SourceInfo>();
            updateUpdatedProperty<DataEventRecord>();

            return base.SaveChanges();
        }

        private void updateUpdatedProperty<T>() where T : class
        {
            var modifiedSourceInfo =
                ChangeTracker.Entries<T>()
                    .Where(e => e.State == EntityState.Added || e.State == EntityState.Modified);

            foreach (var entry in modifiedSourceInfo)
            {
                entry.Property("UpdatedTimestamp").CurrentValue = DateTime.UtcNow;
            }
        }
    }
}

The database was created by the Dockerfile scripts executed when the docker container initializes. This could also be done with EF Core migrations:

$ dotnet ef migrations add postgres-scripts

$ dotnet ef database update

The connection string used in the application must use the network name defined for the database in the docker-compose file. When debugging locally using IIS without docker, you would have to supply a way of switching the connection string host. The host postgresserver is defined in this demo, and so is used in the connection string.

 "DataAccessPostgreSqlProvider": "User ID=damienbod;Password=damienbod;Host=postgresserver;Port=5432;Database=damienbod;Pooling=true;"

Now the application can be built. You need to check that it can be published to the release bin folder, which is used by docker-compose.

Setup the docker-compose

The docker-compose file for the application defines the web tier, the database server and the network settings for docker. The postgresserver service uses the damienbodpostgres image built earlier. It exposes the standard PostgreSQL port as defined before. The aspnetcorepostgresqldocker web application runs on port 5001 and depends on postgresserver. This is the ASP.NET Core application built in Visual Studio 2017.

version: '2'

services:
  postgresserver:
     image: damienbodpostgres
     restart: always
     ports:
       - 5432:5432
     environment:
         POSTGRES_PASSWORD: damienbod
     volumes:
       - pgdata:/var/lib/postgresql/data
     networks:
       - mynetwork

  aspnetcorepostgresqldocker:
     image: aspnetcorepostgresqldocker
     ports:
       - 5001:80
     build:
       context: ./src/AspNetCorePostgreSQLDocker
       dockerfile: Dockerfile
     links:
       - postgresserver
     depends_on:
       - "postgresserver"
     networks:
       - mynetwork

volumes:
  pgdata:

networks:
  mynetwork:
     driver: bridge

Now the application can be started, deployed or tested. The following command will start the application in detached mode.

$ docker-compose up -d

Once the application is started, you can test it using:

http://localhost:5001/index.html

01_postgresqldocker

You can add some data using Postman
02_postgresqldocker

POST http://localhost:5001/api/dataeventrecords
{
  "DataEventRecordId":3,
  "Name":"Funny data",
  "Description":"yes",
  "Timestamp":"2015-12-27T08:31:35Z",
   "SourceInfo":
  { 
    "SourceInfoId":0,
    "Name":"Beauty",
    "Description":"second Source",
    "Timestamp":"2015-12-23T08:31:35+01:00",
    "DataEventRecords":[]
  },
 "SourceInfoId":0 
}

And the data can be viewed using

http://localhost:5001/api/dataeventrecords

03_postgresqldocker

Or you can view the data using pgAdmin

04_postgresqldocker

Links

https://hub.docker.com/_/postgres/

https://www.andreagrandi.it/2015/02/21/how-to-create-a-docker-image-for-postgresql-and-persist-data/

https://docs.docker.com/engine/examples/postgresql_service/

http://stackoverflow.com/questions/25540711/docker-postgres-pgadmin-local-connection

http://www.postgresql.org

http://www.pgadmin.org/

https://github.com/npgsql/npgsql

https://docs.docker.com/engine/tutorials/dockervolumes/



Andrew Lock: Reloading strongly typed options in ASP.NET Core 1.1.0

Reloading strongly typed options in ASP.NET Core 1.1.0

Back in June, when ASP.NET Core was still in RC2, I wrote a post about reloading strongly typed Options when the underlying configuration sources (e.g. a JSON file) change. As I noted in that post, this functionality was removed prior to the release of ASP.NET Core 1.0.0, as the experience was a little confusing. With ASP.NET Core 1.1.0, it's back, and much simpler to use.

In this post, I'll show how you can use the new IOptionsSnapshot<> interface to simplify reloading strongly typed options. I'll provide a very brief summary of using strongly typed configuration in ASP.NET Core, and touch on the approach that used to be required with RC2 to show how much simpler it is now!

tl;dr; To have your options reload when the underlying file / IConfigurationRoot changes, just replace any usages of IOptions<> with IOptionsSnapshot<>

The ASP.NET Core configuration system

The configuration system in ASP.NET Core is rather different to the approach taken in ASP.NET 4.X. Previously, you would typically store your configuration in the AppSettings section of the XML web.config file, and you would load these settings using a static helper class. Any changes to web.config would cause the app pool to recycle, so changing settings on the fly this way wasn't really feasible.

In ASP.NET Core, configuration of app settings is a more dynamic affair. App settings are still essentially key-value pairs, but they can be obtained from a wide array of sources. You can still load settings from XML files, but also JSON files, from the command line, from environment variables, and many others. Writing your own custom configuration provider is also possible if you have another source you wish to use to configure your application.

Configuration is typically performed in the constructor of Startup, loading from multiple sources:

public Startup(IHostingEnvironment env)  
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
        .AddEnvironmentVariables();
    Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }  

This constructor creates a configuration object, loading the configuration found in each of the sources (two JSON files and environment variables in this case). Each source supplies a set of key-value pairs, and each subsequent source overwrites values found in earlier sources. The final IConfigurationRoot is essentially a dictionary of all the final key-value pairs from all of your configuration sources.

It is perfectly possible to use this IConfigurationRoot directly in your application, but the suggested approach is to use strongly typed settings instead. Rather than injecting the whole dictionary of settings whenever you need to access a single value, you take a dependency on a strongly typed POCO C# class. This can be bound to your configuration values and used directly.
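As an aside, using the IConfigurationRoot directly just means indexing into it with a colon-separated key; the key below is purely a hypothetical example:

// Direct, loosely typed access - you get back a string (or null if the key is missing)
var value = Configuration["SomeSection:SomeKey"];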

For example, imagine I have the following values in appsettings.json:

{
  "MyValues": {
    "DefaultValue" : "first"
  }
}

This could be bound to the following class:

public class MyValues  
{
    public string DefaultValue { get; set; }
}

The binding is set up when you are configuring your application for dependency injection in the ConfigureServices method:

public void ConfigureServices(IServiceCollection services)  
{
    services.Configure<MyValues>(Configuration.GetSection("MyValues"));
}

With this approach, you can inject an instance of IOptions<MyValues> into your controllers and access the settings values using the strongly typed object. For example, a simple web API controller that just displays the setting value:

[Route("api/[controller]")]
public class ValuesController : Controller  
{
    private readonly MyValues _myValues;
    public ValuesController(IOptions<MyValues> values)
    {
        _myValues = values.Value;
    }

    // GET api/values
    [HttpGet]
    public string Get()
    {
        return _myValues.DefaultValue;
    }
}

would give the following output when the url /api/Values is hit:

Reloading strongly typed options in ASP.NET Core 1.1.0

Reloading strongly typed options in ASP.NET Core RC2

Now that you know how to read settings in ASP.NET Core, we get to the interesting bit - reloading options. You may have noticed that there is a reloadOnChange parameter on the AddJsonFile method when building your configuration object in Startup. Based on this parameter it would seem like any changes to the underlying file should propagate into your project.

Unfortunately, as I explored in a previous post, you can't just expect that functionality to happen magically. While it is possible to achieve, it takes a bit of work.

The problem lies in the fact that although the IConfigurationRoot is automatically updated whenever the underlying appsettings.json file changes, the strongly typed configuration IOptions<> is not. Instead, the IOptions<> is created as a singleton when first requested and is never updated again.

To get around this, RC2 provided the IOptionsMonitor<> interface. In principle, this could be used almost identically to the IOptions<> interface, but it would be updated when the underlying IConfigurationRoot changed. So, for example, you should be able to modify your constructor to take an instance of IOptionsMonitor<MyValues> instead, and to use the CurrentValue property:

public class ValuesController : Controller  
{
    private readonly MyValues _myValues;
    public ValuesController(IOptionsMonitor<MyValues> values)
    {
        _myValues = values.CurrentValue;
    }
}

Unfortunately, as written, this does not have quite the desired effect - there is an additional step required. As well as injecting an instance of IOptionsMonitor you must also configure an event handler for when the underlying configuration changes. This doesn't have to actually do anything, it just has to be set. So for example, you could set the monitor to just create a log whenever the underlying file changes:

public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory, IOptionsMonitor<MyValues> monitor)  
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();

    monitor.OnChange(
        vals =>
        {
            loggerFactory
                .CreateLogger<IOptionsMonitor<MyValues>>()
                .LogDebug($"Config changed: {string.Join(", ", vals)}");
        });

    app.UseMvc();
}

With this in place, changes to the underlying appsettings.json file will be reflected each time you request an instance of IOptionsMonitor<MyValues> from the dependency injection container.

The new way in ASP.NET Core 1.1.0

The approach required for RC2 felt a bit convoluted and was very easy to miss. Microsoft clearly thought the same, as they removed IOptionsMonitor<> from the public package when they went RTM with 1.0.0. Luckily, a new improved approach is back with version 1.1.0 of ASP.NET Core.

No additional setup is required to have your strongly typed options reload when the IConfigurationRoot changes. All you need to do is inject IOptionsSnapshot<> instead of IOptions<>:

public class ValuesController : Controller  
{
    private readonly MyValues _myValues;
    public ValuesController(IOptionsSnapshot<MyValues> values)
    {
        _myValues = values.Value;
    }
}

No additional faffing in the Configure method, no need to set up additional services to make use of IOptionsSnapshot - it is all set up and works out of the box once you configure your strongly typed class using:

public void ConfigureServices(IServiceCollection services)  
{
    services.Configure<MyValues>(Configuration.GetSection("MyValues"));
}

Trying it out

To make sure it really did work as expected, I created a simple project using the values described in this post, and injected both an IOptions<MyValues> object and an IOptionsSnapshot<MyValues> object into a web API controller:

[Route("api/[controller]")]
public class ValuesController : Controller  
{
    private readonly MyValues _myValues;
    private readonly MyValues _snapshot;
    public ValuesController(IOptions<MyValues> optionsValue, IOptionsSnapshot<MyValues> snapshotValue)
    {
        _myValues = optionsValue.Value;
        _snapshot = snapshotValue.Value;
    }

    // GET api/values
    [HttpGet]
    public string Get()
    {
        return $@"
IOptions<>:         {_myValues.DefaultValue}  
IOptionsSnapshot<>: {_snapshot.DefaultValue},  
Are same:           {_myValues == _snapshot}";  
    }
}

When you hit /api/Values this simply writes out the values stored in the current IOptions<> and IOptionsSnapshot<> objects as plaintext:

Reloading strongly typed options in ASP.NET Core 1.1.0

With the application still running, I edited the appsettings.json file:

{
  "MyValues": {
    "DefaultValue" : "The second value"
  }
}

I then reloaded the web page (without restarting the app), and voila, the value contained in IOptionsSnapshot<> has updated while the IOptions value remains the same:

Reloading strongly typed options in ASP.NET Core 1.1.0

One point of note here - although the initial values are the same for both IOptions<> and IOptionsSnapshot<>, they are not actually the same object. If I had injected two IOptions<> objects, they would have been the same object, but that is not the case when one is an IOptionsSnapshot<>. (This makes sense if you think about it - you couldn't have them both be the same object and have one change while the other stayed the same).

If you don't like to use IOptions

Some people don't like polluting their controllers by using the IOptions<> interface everywhere they want to inject settings. There are a number of ways around this, such as those described by Khalid here and Filip from StrathWeb here. You can easily extend those techniques to use the IOptionsSnapshot<> approach, so that all of your strongly typed options classes are reloaded when an underlying file changes.

A simple solution is to just delegate the request for the MyValues object to the Value property of IOptionsSnapshot<MyValues>, by registering a delegate in ConfigureServices:

public void ConfigureServices(IServiceCollection services)  
{
    services.Configure<MyValues>(Configuration.GetSection("MyValues"));
    services.AddScoped(cfg => cfg.GetService<IOptionsSnapshot<MyValues>>().Value);
}

With this approach, you can have reloading of the MyValues object in the ValuesController, without needing to explicitly specify the IOptionsSnapshot<> interface - just use MyValues directly:

public class ValuesController : Controller  
{
    private readonly MyValues _myValues;
    public ValuesController(MyValues values)
    {
        _myValues = values;
    }
}

Summary

Reloading strongly typed options in ASP.NET Core when the underlying configuration file changes is easy when you are using ASP.NET Core 1.1.0. Simply replace your usages of IOptions<> with IOptionsSnapshot<>.


Damien Bowden: Creating an ASP.NET Core Docker application and deploying to Azure

This blog is a simple step through, which creates an ASP.NET Core Docker image using Visual Studio 2017, deploys it to Docker Hub and then deploys the image to Azure.

Thanks to Malte Lantin for his fantastic posts on MSDN. See the links at the end of this post.

Code: https://github.com/damienbod/AspNetCoreDockerAzureDemo

2017.02.03: Updated to VS2017 RC3 msbuild3

Step 1: Create a Docker project in Visual Studio 2017 using ASP.NET Core

In the example, an ASP.NET Core Visual Studio 2017 project using msbuild is used as the demo application. Then the Docker support is added to the project using Visual Studio 2017.

Right click the project, Add/Docker Project Support
firstazuredocker_01

Update the docker files to ASP.NET Core and the correct docker version as required. More information can be found here:

http://www.jeffreyfritz.com/2017/01/docker-compose-api-too-old-for-windows/

https://damienbod.com/2016/12/24/creating-an-asp-net-core-1-1-vs2017-docker-application/

Now the application will be built in a layer on top of the microsoft/aspnetcore image.

Dockerfile:

FROM microsoft/aspnetcore:1.0.3
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-bin/Release/PublishOutput} .
ENTRYPOINT ["dotnet", "AngularClient.dll"]

docker-compose.yml

version: '2'

services:
  angularclient:
    image: angularclient
    build:
      context: .
      dockerfile: Dockerfile

Once the project is built and ready, it can be deployed to docker hub. Do a release build of the projects.

Step 2: Build a docker image and deploy to docker hub

Before you can deploy the docker image to Docker Hub, you need to have, or create, a Docker Hub account.

Then open up the console and create a docker tag for your application. Replace damienbod with your docker hub user name. The docker image angularclient, created from Visual Studio 2017, will be tagged to damienbod/aspnetcorethingsclient.

docker tag angularclient damienbod/aspnetcorethingsclient

Now login to docker hub in the command line:

docker login

Once logged in, the image can be pushed to docker hub. Again replace damienbod with your docker hub name.

docker push damienbod/aspnetcorethingsclient

Once deployed, you can view this on docker hub.

https://hub.docker.com/u/damienbod/

firstazuredocker_02

For more information on docker images and containers:

https://docs.docker.com/engine/getstarted/step_four/

Step 3: Deploy to Azure

Log in to https://portal.azure.com/, click the New button and search for Web App On Linux. We want to deploy a docker container to this, using our docker image.

Select the + New and search for Web App On Linux.
firstazuredocker_03

Then select it. Do not click Create until the docker container has been configured.

firstazuredocker_04

Now configure the docker container. Add the newly created Docker Hub image to the text field.

firstazuredocker_05

Click Create and the application will be deployed to Azure. Now the application can be used.

firstazuredocker_07

And the application runs as required.

http://thingsclient.azurewebsites.net/home

firstazuredocker_06

Notes:

The Visual Studio 2017 docker tooling is still rough and has problems when using newer versions of docker, or the msbuild 1.1 versions etc, but it is still in RC and will improve before the release. Next steps are now to use CI to automatically complete all these steps, add security, and use docker compose for multiple container deployments.

Links

https://blogs.msdn.microsoft.com/malte_lantin/2017/01/12/create-you-first-asp-net-core-app-and-host-it-in-a-linux-docker-container-on-microsoft-azure-part-13/

https://blogs.msdn.microsoft.com/malte_lantin/2017/01/13/create-you-first-asp-net-core-app-and-host-it-in-a-linux-docker-container-on-microsoft-azure-part-23/

https://blogs.msdn.microsoft.com/malte_lantin/2017/01/13/create-you-first-asp-net-core-app-and-host-it-in-a-linux-docker-container-on-microsoft-azure-part-33/

https://hub.docker.com/

Orchestrating multi service asp.net core application using docker-compose

Debugging Asp.Net core apps running in Docker Containers using VS 2017

https://docs.docker.com/engine/getstarted/step_four/

https://stefanprodan.com/2016/aspnetcore-cd-pipeline-docker-hub/



Andrew Lock: How to pass parameters to a view component

How to pass parameters to a view component

In my last post I showed how to create a custom view component to simplify my Razor views, and separate the logic of what to display from the UI concern.

View components are a good fit where you have some complex rendering logic which does not belong in the UI, but which is also not a good fit for an action endpoint - approximately equivalent to child actions from the previous version of ASP.NET.

In this post I will show how you can pass parameters to a view component when invoking it from your view, from a controller, or when used as a tag helper.

In the previous post I showed how to create a simple LoginStatusViewComponent that shows you the email of the user and a log out link when a user is logged in, and register or login links when the user is anonymous:

How to pass parameters to a view component

The view component itself was simple, but it separated out the logic of which template to display from the templates themselves. It was created with a simple InvokeAsync method that did not require any parameters:

public class LoginStatusViewComponent : ViewComponent  
{
    private readonly SignInManager<ApplicationUser> _signInManager;
    private readonly UserManager<ApplicationUser> _userManager;

    public LoginStatusViewComponent(SignInManager<ApplicationUser> signInManager, UserManager<ApplicationUser> userManager)
    {
        _signInManager = signInManager;
        _userManager = userManager;
    }

    public async Task<IViewComponentResult> InvokeAsync()
    {
        if (_signInManager.IsSignedIn(HttpContext.User))
        {
            var user = await _userManager.GetUserAsync(HttpContext.User);
            return View("LoggedIn", user);
        }
        else
        {
            return View("Anonymous");
        }
    }
}

Invoking the LoginStatus view component from the _layout.cshtml involves calling Component.InvokeAsync and awaiting the response:

 @await Component.InvokeAsync("LoginStatus")

Updating a view component to accept parameters

The example presented is pretty simple, in that it is self contained; the InvokeAsync method does not have any parameters to pass to it. But what if we want to control how the view component behaves when invoked? For example, imagine that you want to control whether to display the Register link for anonymous users. Maybe your site has an external registration system instead, so the "register" link is not valid in some cases.

First, let's create a simple view model to use in our "anonymous" view:

public class AnonymousViewModel  
{
    public bool IsRegisterLinkVisible { get; set; }
}

Next, we update the InvokeAsync method of our view component to take a boolean parameter. If the user is not logged in, we will pass this parameter down into the view model:

public async Task<IViewComponentResult> InvokeAsync(bool shouldShowRegisterLink)  
{
    if (_signInManager.IsSignedIn(HttpContext.User))
    {
        var user = await _userManager.GetUserAsync(HttpContext.User);
        return View("LoggedIn", user);
    }
    else
    {
        var viewModel = new AnonymousViewModel
        {
            IsRegisterLinkVisible = shouldShowRegisterLink
        };
        return View(viewModel);
    }
}

Finally, we update the anonymous default.cshtml template to honour this boolean:

@model LoginStatusViewComponent.AnonymousViewModel
<ul class="nav navbar-nav navbar-right">  
    @if(Model.IsRegisterLinkVisible)
    {
        <li><a asp-area="" asp-controller="Account" asp-action="Register">Register</a></li>
    }
    <li><a asp-area="" asp-controller="Account" asp-action="Login">Log in</a></li>
</ul>  

Passing parameters to view components using InvokeAsync

Our component is all set up to conditionally show or hide the register link; all that remains is to invoke it.

Passing parameters to a view component is achieved using anonymous types. In our layout, we specify the parameters in an optional parameter passed to InvokeAsync:

<div class="navbar-collapse collapse">  
    <ul class="nav navbar-nav">
        <li><a asp-area="" asp-controller="Home" asp-action="Index">Home</a></li>
        <li><a asp-area="" asp-controller="Home" asp-action="About">About</a></li>
        <li><a asp-area="" asp-controller="Home" asp-action="Contact">Contact</a></li>
    </ul>
    @await Component.InvokeAsync("LoginStatus", new { shouldShowRegisterLink = false })
</div>  

With this in place, the register link can be shown:

How to pass parameters to a view component

or hidden:

How to pass parameters to a view component

If you omit the anonymous type, then the parameters will all have their default values (false for our bool, but null for objects).

Passing parameters to view components when invoked from a controller

Passing parameters to a view component when invoked from a controller is very similar - just pass an anonymous type with the appropriate values when creating the ViewComponentResult:

public IActionResult IndexVC()  
{
    return ViewComponent("LoginStatus", new { shouldShowRegisterLink = false });
}

Passing parameters to view components when invoked as a tag helper in ASP.NET Core 1.1.0

In the previous post I showed how to invoke view components as tag helpers. The parameterless version of our invocation looks like this:

<div class="navbar-collapse collapse">  
    <ul class="nav navbar-nav">
        <li><a asp-area="" asp-controller="Home" asp-action="Index">Home</a></li>
        <li><a asp-area="" asp-controller="Home" asp-action="About">About</a></li>
        <li><a asp-area="" asp-controller="Home" asp-action="Contact">Contact</a></li>
    </ul>
    <vc:login-status></vc:login-status>
</div>  

Passing parameters to a view component tag helper is the same as for normal tag helpers. You convert the parameters to lower-kebab case and add them as attributes to the tag, e.g.:

<vc:login-status should-show-register-link="false"></vc:login-status>  

This gives a nice syntax for invoking our view components without having to drop into C# land and use @await Component.InvokeAsync(), and will almost certainly become the preferred way to use them in the future.

Summary

In this post I showed how you can pass parameters to a view component. When invoking from a view in ASP.NET Core 1.0.0 or from a controller, you can use an anonymous type to pass parameters, where the property names match the names of the parameters.

In ASP.NET Core 1.1.0 you can use the alternative tag helper invocation method to pass parameters as attributes. Just remember to use lower-kebab-case for your component name and parameters! You can find sample code for this approach on GitHub.


Dominick Baier: Platforms where you can run IdentityServer4

There is some confusion about where, and on which platform/OS you can run IdentityServer4 – or more generally speaking: ASP.NET Core.

IdentityServer4 is ASP.NET Core middleware – and ASP.NET Core (despite its name) runs on the full .NET Framework 4.5.x and upwards or .NET Core.

If you are using the full .NET Framework you are tied to Windows – but have the advantage of using a platform that you (and your devs, customers, support staff etc) already know well. It is just a .NET based web app at this point.

If you are using .NET Core, you get the benefits of the new stack including side-by-side versioning and cross-platform. But there is a learning curve involved getting to know .NET Core and its tooling.


Filed under: .NET Security, ASP.NET, IdentityServer, OpenID Connect, WebAPI


Damien Bowden: Angular Lazy Loading with Webpack 2

This article shows how Angular lazy loading can be supported using Webpack 2 for both JIT and AOT builds. The Webpack loader angular-router-loader from Brandon Roberts is used to implement this.

A big thanks to Roberto Simonetti for his help in this.

Code: VS2017 angular 4.x | VS2017 angular 2.x

Blogs in this series:

2017.03.18: Updated to angular 4.0.0

See here for full history:
https://github.com/damienbod/AngularWebpackVisualStudio/blob/master/CHANGELOG.md

First create an Angular module

In this example, the about module will be lazy loaded when the user clicks on the about tab. The about.module.ts is the entry point for this feature. The module has its own component and routing.
The app will now be set up to lazy load the AboutModule.

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';

import { AboutRoutes } from './about.routes';
import { AboutComponent } from './components/about.component';

@NgModule({
    imports: [
        CommonModule,
        AboutRoutes
    ],

    declarations: [
        AboutComponent
    ],

})

export class AboutModule { }

Add the angular-router-loader Webpack loader to the package.json file

To add lazy loading to the app, the angular-router-loader npm package needs to be added to the devDependencies in the package.json npm file.

"devDependencies": {
    "@types/node": "7.0.0",
    "angular2-template-loader": "^0.6.0",
    "angular-router-loader": "^0.5.0",

Configure the Angular 2 routing

The lazy loading routing can be added to the app.routes.ts file. The loadChildren property defines the module path and the class name of the module which can be lazy loaded. It is also possible to pre-load lazy loaded modules if required.

import { Routes, RouterModule } from '@angular/router';

export const routes: Routes = [
    { path: '', redirectTo: 'home', pathMatch: 'full' },
    {
        path: 'about', loadChildren: './about/about.module#AboutModule',
    }
];

export const AppRoutes = RouterModule.forRoot(routes);

Update the tsconfig-aot.json and tsconfig.json files

Now the tsconfig.json for development JIT builds and the tsconfig-aot.json for AOT production builds need to be configured to load the AboutModule module.

AOT production build

The files property contains all the module entry points as well as the app entry file. The angularCompilerOptions property defines the folder that the AOT output will be built into. This must match the configuration in the Webpack production config file.

{
  "compilerOptions": {
    "target": "es5",
    "module": "es2015",
    "moduleResolution": "node",
    "sourceMap": false,
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "removeComments": true,
    "noImplicitAny": true,
    "suppressImplicitAnyIndexErrors": true,
    "skipLibCheck": true,
    "lib": [
      "es2015",
      "dom"
    ]
  },
  "files": [
    "angularApp/app/app.module.ts",
    "angularApp/app/about/about.module.ts",
    "angularApp/main-aot.ts"
  ],
  "angularCompilerOptions": {
    "genDir": "aot",
    "skipMetadataEmit": true
  },
  "compileOnSave": false,
  "buildOnSave": false
}

JIT development build

The modules and entry points are also defined for the JIT build.

{
  "compilerOptions": {
    "target": "es5",
    "module": "es2015",
    "moduleResolution": "node",
    "sourceMap": true,
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "removeComments": true,
    "noImplicitAny": true,
    "skipLibCheck": true,
    "lib": [
      "es2015",
      "dom"
    ],
    "types": [
      "node"
    ]
  },
  "files": [
    "angularApp/app/app.module.ts",
    "angularApp/app/about/about.module.ts",
    "angularApp/main.ts"
  ],
  "awesomeTypescriptLoaderOptions": {
    "useWebpackText": true
  },
  "compileOnSave": false,
  "buildOnSave": false
}

Configure Webpack to chunk and use the router lazy loading

Now the webpack configuration needs to be updated for the lazy loading.

AOT production build

The webpack.prod.js file requires that the chunkFilename property is set in the output, so that webpack chunks the lazy load modules.

output: {
        path: './wwwroot/',
        filename: 'dist/[name].[hash].bundle.js',
        chunkFilename: 'dist/[id].[hash].chunk.js',
        publicPath: '/'
},

The angular-router-loader is added to the loaders. The genDir folder defined here must match the definition in tsconfig-aot.json.

 module: {
  rules: [
    {
        test: /\.ts$/,
        loaders: [
            'awesome-typescript-loader',
            'angular-router-loader?aot=true&genDir=aot/'
        ]
    },

JIT development build

The webpack.dev.js file requires that the chunkFilename property is set in the output, so that webpack chunks the lazy load modules.

output: {
        path: './wwwroot/',
        filename: 'dist/[name].bundle.js',
        chunkFilename: 'dist/[id].chunk.js',
        publicPath: '/'
},

The angular-router-loader is added to the loaders.

 module: {
  rules: [
    {
        test: /\.ts$/,
        loaders: [
            'awesome-typescript-loader',
            'angular-router-loader',
            'angular2-template-loader',        
            'source-map-loader',
            'tslint-loader'
        ]
    },

Build and run

Now the application can be built using the npm build scripts and the dotnet command tool.

Open a command line in the root of the src files. Install the npm packages:

npm install

Now run the production build. The build-production script does an ngc build, and then a webpack production build.

npm run build-production

You can see that Webpack creates an extra chunked file for the About Module.

lazyloadingwebpack_01

Then start the application. The server is implemented using ASP.NET Core 1.1.

dotnet run

When the application is started, the AboutModule is not loaded.

lazyloadingwebpack_02

When the about tab is clicked, the chunked AboutModule is loaded.

lazyloadingwebpack_03

Absolutely fantastic. You could also pre-load the modules if required. See this blog.

Links:

https://github.com/brandonroberts/angular-router-loader

https://www.npmjs.com/package/angular-router-loader

https://vsavkin.com/angular-router-preloading-modules-ba3c75e424cb

https://webpack.github.io/docs/



Henrik F. Nielsen: ASP.NET WebHooks V1 RTM (Link)

ASP.NET WebHooks V1 RTM was announced a little while back. WebHooks provide a simple pub/sub model for wiring together Web APIs and services with your code. A WebHook can be used to get notified when a file has changed in Dropbox, a code change has been committed to GitHub, a payment has been initiated in PayPal, a card has been created in Trello, and much more. When subscribing, you provide a callback URI where you want to be notified. When an event occurs, an HTTP POST request is sent to your callback URI with information about what happened so that your Web app can act accordingly. WebHooks happen without polling and with no need to hold open a network connection while waiting for notifications.
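To make the receiving side concrete: the callback URI is ultimately just an HTTP endpoint that accepts the POSTed payload. The sketch below uses a plain ASP.NET Core MVC controller purely to illustrate that pattern - it is not the ASP.NET WebHooks library's own receiver API, and the route and payload handling are hypothetical:

using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json.Linq;

[Route("api/webhooks/incoming/demo")]
public class DemoWebHookController : Controller
{
    // The sender POSTs a JSON payload to the callback URI registered when
    // subscribing; the receiver inspects it and returns a success status code
    [HttpPost]
    public IActionResult Receive([FromBody] JObject payload)
    {
        if (payload == null)
        {
            return BadRequest();
        }

        // e.g. queue work based on the event described in the payload
        return Ok();
    }
}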

Microsoft ASP.NET WebHooks makes it easier to both send and receive WebHooks as part of your ASP.NET application.

In addition to hosting your own WebHook server, ASP.NET WebHooks are part of Azure Functions where you can process WebHooks without hosting or managing your own server! You can even go further and host an Azure Bot Service using Microsoft Bot Framework for writing cool bots talking to your customers!

The WebHook code targets ASP.NET Web API 2 and ASP.NET MVC 5, and is available as Open Source on GitHub, and as Nuget packages. For feedback, fixes, and suggestions, you can use GitHub, StackOverflow using the tag asp.net-webhooks, or send me a tweet.

For the full announcement, please see the blog Announcing Microsoft ASP.NET WebHooks V1 RTM.

Have fun!

Henrik


Dominick Baier: Bootstrapping OpenID Connect: Discovery

OpenID Connect clients and APIs need certain configuration values to initiate the various protocol requests and to validate identity and access tokens. You can either hard-code these values (e.g. the URL to the authorize and token endpoint, key material etc..) – or get those values dynamically using discovery.

Using discovery has advantages in case one of the needed values changes over time. This will definitely be the case for the key material you use to sign your tokens. In that scenario you want your token consumers to be able to dynamically update their configuration without having to take them down or re-deploy.

The idea is simple: every OpenID Connect provider should offer a JSON document under the /.well-known/openid-configuration URL below its base-address (often also called the authority). This document has information about the issuer name, endpoint URLs, key material and capabilities of the provider, e.g. which scopes or response types it supports.

Try https://demo.identityserver.io/.well-known/openid-configuration as an example.

Our IdentityModel library has a little helper class that allows loading and parsing a discovery document, e.g.:

var disco = await DiscoveryClient.GetAsync("https://demo.identityserver.io");
Console.WriteLine(disco.Json);

It also provides strongly typed accessors for most elements, e.g.:

Console.WriteLine(disco.TokenEndpoint);

..or you can access the elements by name:

Console.WriteLine(disco.Json.TryGetString("introspection_endpoint"));

It also gives you access to the key material and the various properties of the JSON encoded key set – e.g. iterating over the key ids:

foreach (var key in disco.KeySet.Keys)
{
    Console.WriteLine(key.Kid);
}

Discovery and security
As you can imagine, the discovery document is a nice target for an attacker. Being able to manipulate the endpoint URLs or the key material would ultimately result in a compromise of a client or an API.

As opposed to e.g. WS-Federation/WS-Trust metadata, the discovery document is not signed. Instead OpenID Connect relies on transport security for authenticity and integrity of the configuration data.

Recently we’ve been involved in a penetration test against client libraries, and one technique the pen-testers used was compromising discovery. Based on their feedback, the following extra checks should be done when consuming a discovery document:

  • HTTPS must be used for the discovery endpoint and all protocol endpoints
  • The issuer name should match the authority specified when downloading the document (that’s actually a MUST in the discovery spec)
  • The protocol endpoints should be “beneath” the authority – and not on a different server or URL (this could be especially interesting for multi-tenant OPs)
  • A key set must be specified

Based on that feedback, we added a configurable validation policy to DiscoveryClient that defaults to the above recommendations. If for whatever reason (e.g. dev environments) you need to relax a setting, you can use the following code:

var client = new DiscoveryClient("http://dev.identityserver.internal");
client.Policy.RequireHttps = false;
 
var disco = await client.GetAsync();

Btw – you can always connect over HTTP to localhost and 127.0.0.1 (but this is also configurable).

Source code here, nuget here.


Filed under: OAuth, OpenID Connect, WebAPI


Dominick Baier: Trying IdentityServer4

We have a number of options for how you can experiment or get started with IdentityServer4.

Starting point
It all starts at https://identityserver.io – from here you can find all below links as well as our next workshop dates, consulting, production support etc.

Source code
You can find all the source code in our IdentityServer organization on github. Especially IdentityServer4 itself, the samples, and the access token validation middleware.

Nuget
Here’s a list of all our nugets – here’s IdentityServer4, here’s the validation middleware.

Documentation and tutorials
Documentation can be found here. Especially useful to get started are our tutorials.

Demo Site
We have a demo site at https://demo.identityserver.io that runs the latest version of IdentityServer4. We have also pre-configured a number of client types, e.g. hybrid and authorization code (with and without PKCE) as well as implicit and client credentials flow. You can use this site to try IdentityServer with your favourite OpenID Connect client library. There is also a test API that you can call with our access tokens.

Compatibility check
Here’s a repo that contains all permutations of IdentityServer3 and 4, Katana and ASP.NET Core Web APIs and JWTs and reference tokens. We use this test harness to ensure cross version compatibility. Feel free to try it yourself.

CI builds
Our CI feed can be found here.

HTH


Filed under: .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI


Damien Bowden: Building production ready Angular apps with Visual Studio and ASP.NET Core

This article shows how Angular SPA apps can be built using Visual Studio and ASP.NET Core, and used in production. Lots of articles, blogs and templates exist for ASP.NET Core and Angular, but very few support Angular production builds.

Although Angular is not so old, many different seeds and build templates already exist, so care should be taken when choosing the infrastructure for the Angular application. Any Angular template or seed which does not support AoT or treeshaking should NOT be used, and likewise any third party Angular component which does not support AoT should not be used.

This example uses webpack 2 to build and bundle the Angular application. In the package.json, npm scripts are used to configure the different builds and can be used inside Visual Studio using the npm task runner.

Code: VS2017 angular 4.x | VS2017 angular 2.x

Blogs in this series:

2017.03.18: Updated to angular 4.0.0

See here for full history:
https://github.com/damienbod/AngularWebpackVisualStudio/blob/master/CHANGELOG.md

Short introduction to AoT and treeshaking

AoT

AoT stands for Ahead of Time compilation. As per definition from the Angular docs:

“With AOT, the browser downloads a pre-compiled version of the application. The browser loads executable code so it can render the application immediately, without waiting to compile the app first.”

With AoT, you have smaller packages sizes, fewer asynchronous requests and better security. All is explained very well in the Angular Docs:

https://angular.io/docs/ts/latest/cookbook/aot-compiler.html

AoT uses platformBrowser to bootstrap, and not platformBrowserDynamic, which is used for JIT (Just in Time) compilation.

// Entry point for AoT compilation.
export * from './polyfills';

import { platformBrowser } from '@angular/platform-browser';
import { enableProdMode } from '@angular/core';
import { AppModuleNgFactory } from '../aot/angular2App/app/app.module.ngfactory';

enableProdMode();

platformBrowser().bootstrapModuleFactory(AppModuleNgFactory);

treeshaking

Treeshaking removes the unused portions of the libraries from the application, reducing the size of the application.

https://angular.io/docs/ts/latest/cookbook/aot-compiler.html

npm task runner

npm scripts can be used easily inside Visual Studio by using the npm task runner. Once installed, this needs to be configured correctly.

VS2015: Go to Tools –> Options –> Projects and Solutions –> External Web Tools and select all the checkboxes. More information can be found here.

In VS2017, this is slightly different:

Go to Tools –> Options –> Projects and Solutions –> Web Package Management –> External Web Tools and select all checkboxes:

vs_angular_build_01

npm scripts

ngc

ngc is the angular compiler which is used to do the AoT build using the tsconfig-aot.json configuration.

"ngc": "ngc -p ./tsconfig-aot.json",

The tsconfig-aot.json file builds to the aot folder.

{
  "compilerOptions": {
    "target": "es5",
    "module": "es2015",
    "moduleResolution": "node",
    "sourceMap": false,
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "removeComments": true,
    "noImplicitAny": true,
    "suppressImplicitAnyIndexErrors": true,
    "skipLibCheck": true,
    "lib": [
      "es2015",
      "dom"
    ]
  },
  "files": [
    "angular2App/app/app.module.ts",
    "angular2App/app/about/about.module.ts",
    "angular2App/main-aot.ts"
  ],
  "angularCompilerOptions": {
    "genDir": "aot",
    "skipMetadataEmit": true
  },
  "compileOnSave": false,
  "buildOnSave": false
}

build-production

The build-production npm script is used for the production build and can be used for publishing or CI as required. The script runs the ngc build and then the webpack-production build.

"build-production": "npm run ngc && npm run webpack-production",

webpack-production npm script:

"webpack-production": "set NODE_ENV=production&& webpack",

watch-webpack-dev

The watch build monitors the source files and builds if any file changes.

"watch-webpack-dev": "set NODE_ENV=development&& webpack --watch --color",

start (webpack-dev-server)

The start script runs the webpack-dev-server client application and also the ASP.NET Core server application.

"start": "concurrently \"webpack-dev-server --hot --inline --progress --port 8080\" \"dotnet run\" ",

Any of these npm scripts can be run from the npm task runner.

vs_angular_build_02

Deployment

When deploying the application to IIS, build-production needs to be run, then the dotnet publish command, and then the contents can be copied to the IIS server. The publish-for-iis npm script can be used to publish. The command can be run from a build server without problem.

"publish-for-iis": "npm run build-production && dotnet publish -c Release" 

vs_angular_build_02

https://docs.microsoft.com/en-us/dotnet/articles/core/tools/dotnet-publish

When deploying to IIS, you need to install the DotNetCore.1.1.0-WindowsHosting.exe on the IIS server. Docs for setting up the IIS server:

https://docs.microsoft.com/en-us/aspnet/core/publishing/iis

Why not webpack task runner?

The Webpack task runner cannot be used for Webpack Angular applications because it does not support the required commands for Angular Webpack builds, either dev or production. The webpack -d build causes map errors in IE and the ngc compiler cannot be used, hence no production builds can be started from the Webpack Task Runner. For Angular Webpack projects, do not use the Webpack Task Runner, use the npm task runner.

Full package.json

{
  "name": "angular-webpack-visualstudio",
  "version": "1.0.0",
  "description": "An Angular VS template",
  "main": "wwwroot/index.html",
  "author": "",
  "license": "ISC",
    "repository": {
    "type": "git",
    "url": "https://github.com/damienbod/Angular2WebpackVisualStudio.git"
  },
  "scripts": {
    "ngc": "ngc -p ./tsconfig-aot.json",
    "start": "concurrently \"webpack-dev-server --hot --inline --port 8080\" \"dotnet run\" ",
    "webpack-dev": "set NODE_ENV=development && webpack",
    "webpack-production": "set NODE_ENV=production && webpack",
    "build-dev": "npm run webpack-dev",
    "build-production": "npm run ngc && npm run webpack-production",
    "watch-webpack-dev": "set NODE_ENV=development && webpack --watch --color",
    "watch-webpack-production": "npm run build-production --watch --color",
    "publish-for-iis": "npm run build-production && dotnet publish -c Release"
  },
  "dependencies": {
    "@angular/common": "4.0.0",
    "@angular/compiler": "4.0.0",
    "@angular/core": "4.0.0",
    "@angular/forms": "4.0.0",
    "@angular/http": "4.0.0",
    "@angular/platform-browser": "4.0.0",
    "@angular/platform-browser-dynamic": "4.0.0",
    "@angular/router": "4.0.0",
    "@angular/upgrade": "4.0.0",
    "@angular/animations": "4.0.0",
    "angular-in-memory-web-api": "0.3.1",
    "core-js": "2.4.1",
    "reflect-metadata": "0.1.9",
    "rxjs": "5.0.3",
    "zone.js": "0.8.4",
    "@angular/compiler-cli": "4.0.0",
    "@angular/platform-server": "4.0.0",
    "bootstrap": "^3.3.7",
    "ie-shim": "~0.1.0"
  },
  "devDependencies": {
     "@types/node": "7.0.8",
    "angular2-template-loader": "0.6.2",
    "angular-router-loader": "^0.5.0",
    "awesome-typescript-loader": "3.1.2",
    "clean-webpack-plugin": "^0.1.15",
    "concurrently": "^3.4.0",
    "copy-webpack-plugin": "^4.0.1",
    "css-loader": "^0.27.1",
    "file-loader": "^0.10.1",
    "html-webpack-plugin": "^2.28.0",
    "jquery": "^3.1.1",
    "json-loader": "^0.5.4",
    "node-sass": "^4.5.0",
    "raw-loader": "^0.5.1",
    "rimraf": "^2.6.1",
    "sass-loader": "^6.0.3",
    "source-map-loader": "^0.2.0",
    "style-loader": "^0.13.2",
    "ts-helpers": "^1.1.2",
    "tslint": "^4.5.1",
    "tslint-loader": "^3.4.3",
    "typescript": "2.2.1",
    "url-loader": "^0.5.8",
    "webpack": "^2.2.1",
    "webpack-dev-server": "2.4.1"
  },
  "-vs-binding": {
    "ProjectOpened": [
      "watch-webpack-dev"
    ]
  }
}

Full webpack.prod.js

var path = require('path');

var webpack = require('webpack');

var HtmlWebpackPlugin = require('html-webpack-plugin');
var CopyWebpackPlugin = require('copy-webpack-plugin');
var CleanWebpackPlugin = require('clean-webpack-plugin');
var helpers = require('./webpack.helpers');

console.log('@@@@@@@@@ USING PRODUCTION @@@@@@@@@@@@@@@');

module.exports = {

    entry: {
        'vendor': './angularApp/vendor.ts',
        'polyfills': './angularApp/polyfills.ts',
        'app': './angularApp/main-aot.ts' // AoT compilation
    },

    output: {
        path: __dirname + '/wwwroot/',
        filename: 'dist/[name].[hash].bundle.js',
        chunkFilename: 'dist/[id].[hash].chunk.js',
        publicPath: '/'
    },

    resolve: {
        extensions: ['.ts', '.js', '.json', '.css', '.scss', '.html']
    },

    devServer: {
        historyApiFallback: true,
        stats: 'minimal',
        outputPath: path.join(__dirname, 'wwwroot/')
    },

    module: {
        rules: [
            {
                test: /\.ts$/,
                loaders: [
                    'awesome-typescript-loader',
                    'angular-router-loader?aot=true&genDir=aot/'
                ]
            },
            {
                test: /\.(png|jpg|gif|woff|woff2|ttf|svg|eot)$/,
                loader: 'file-loader?name=assets/[name]-[hash:6].[ext]'
            },
            {
                test: /favicon.ico$/,
                loader: 'file-loader?name=/[name].[ext]'
            },
            {
                test: /\.css$/,
                loader: 'style-loader!css-loader'
            },
            {
                test: /\.scss$/,
                exclude: /node_modules/,
                loaders: ['style-loader', 'css-loader', 'sass-loader']
            },
            {
                test: /\.html$/,
                loader: 'raw-loader'
            }
        ],
        exprContextCritical: false
    },

    plugins: [
        new CleanWebpackPlugin(
            [
                './wwwroot/dist',
                './wwwroot/assets'
            ]
        ),
        new webpack.NoEmitOnErrorsPlugin(),
        new webpack.optimize.UglifyJsPlugin({
            compress: {
                warnings: false
            },
            output: {
                comments: false
            },
            sourceMap: false
        }),
        new webpack.optimize.CommonsChunkPlugin(
            {
                name: ['vendor', 'polyfills']
            }),

        new HtmlWebpackPlugin({
            filename: 'index.html',
            inject: 'body',
            template: 'angularApp/index.html'
        }),

        new CopyWebpackPlugin([
            { from: './angularApp/images/*.*', to: 'assets/', flatten: true }
        ])
    ]
};


Links:

https://damienbod.com/2016/06/12/asp-net-core-angular2-with-webpack-and-visual-studio/

https://github.com/preboot/angular2-webpack

https://webpack.github.io/docs/

https://github.com/jtangelder/sass-loader

https://github.com/petehunt/webpack-howto/blob/master/README.md

https://blogs.msdn.microsoft.com/webdev/2015/03/19/customize-external-web-tools-in-visual-studio-2015/

https://marketplace.visualstudio.com/items?itemName=MadsKristensen.NPMTaskRunner

http://sass-lang.com/

http://blog.thoughtram.io/angular/2016/06/08/component-relative-paths-in-angular-2.html

https://angular.io/docs/ts/latest/guide/webpack.html

http://blog.mgechev.com/2016/06/26/tree-shaking-angular2-production-build-rollup-javascript/

https://angular.io/docs/ts/latest/tutorial/toh-pt5.html

http://angularjs.blogspot.ch/2016/06/improvements-coming-for-routing-in.html?platform=hootsuite

https://angular.io/docs/ts/latest/cookbook/aot-compiler.html

https://docs.microsoft.com/en-us/aspnet/core/publishing/iis

https://weblog.west-wind.com/posts/2016/Jun/06/Publishing-and-Running-ASPNET-Core-Applications-with-IIS

http://blog.mgechev.com/2017/01/17/angular-in-production/



Dominick Baier: IdentityServer4.1.0.0

It’s done.

Release notes here.

Nuget here.

Docs here.

I am off to holidays.

See you next year.


Filed under: .NET Security, ASP.NET, OAuth, OpenID Connect, WebAPI


Dominick Baier: IdentityServer4 is now OpenID Certified

As of today – IdentityServer4 is officially certified by the OpenID Foundation. Release of 1.0 will be this Friday!

More details here.



Filed under: .NET Security, OAuth, WebAPI


Dominick Baier: Identity vs Permissions

We often see people misusing IdentityServer as an authorization/permission management system. This is troublesome – here’s why.

IdentityServer (hence the name) is really good at providing a stable identity for your users across all applications in your system. And with identity I mean immutable identity (at least for the lifetime of the session) – typical examples would be a user id (aka the subject id), a name, department, email address, customer id etc…

IdentityServer is not so well suited for letting clients or APIs know what this user is allowed to do – e.g. create a customer record, delete a table, read a certain document etc…

And this is not inherently a weakness of IdentityServer – but IdentityServer is a token service, and it’s a fact that claims and especially tokens are not a particularly good medium for transporting such information. Here are a couple of reasons:

  • Claims are supposed to model the identity of a user, not permissions
  • Claims are typically simple strings – you often want something more sophisticated to model authorization information or permissions
  • Permissions of a user are often different depending on which client or API they are using – putting them all into a single identity or access token is confusing and leads to problems. The same permission might even have a different meaning depending on who is consuming it
  • Permissions can change over the lifetime of a session, but the only way to get a new token is to make a roundtrip to the token service. This often requires some UI interaction which is not preferable
  • Permissions and business logic often overlap – where do you want to draw the line?
  • The only party that knows exactly about the authorization requirements of the current operation is the actual code where it happens – the token service can only provide coarse grained information
  • You want to keep your tokens small. Browser URL length restrictions and bandwidth are often limiting factors
  • And last but not least – it is easy to add a claim to a token. It is very hard to remove one. You never know if somebody already took a hard dependency on it. Every single claim you add to a token should be scrutinized.

In other words – keep permissions and authorization data out of your tokens. Add the authorization information to your context once you get closer to the resource that actually needs the information. And even then, it is tempting to model permissions using claims (the Microsoft services and frameworks kind of push you into that direction) – keep in mind that a simple string is a very limiting data structure. Modern programming languages have much better constructs than that.
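
To make this concrete, here is a minimal sketch (not from the original post) of acquiring permissions close to the resource with ASP.NET Core policy-based authorization. Only the stable subject id travels in the token; the PermissionRequirement, PermissionHandler and IPermissionStore names are hypothetical:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;

// Hypothetical per-API permission store – exactly the data we keep out of the token.
public interface IPermissionStore
{
    Task<bool> HasPermissionAsync(string subjectId, string permission);
}

public class PermissionRequirement : IAuthorizationRequirement
{
    public PermissionRequirement(string permission)
    {
        Permission = permission;
    }

    public string Permission { get; private set; }
}

public class PermissionHandler : AuthorizationHandler<PermissionRequirement>
{
    private readonly IPermissionStore _store;

    public PermissionHandler(IPermissionStore store)
    {
        _store = store;
    }

    protected override async Task HandleRequirementAsync(
        AuthorizationHandlerContext context, PermissionRequirement requirement)
    {
        // The token only contributes the immutable subject id; the permission
        // lookup happens here, right next to the code that needs it.
        var subjectId = context.User.FindFirst("sub")?.Value;

        if (subjectId != null &&
            await _store.HasPermissionAsync(subjectId, requirement.Permission))
        {
            context.Succeed(requirement);
        }
    }
}

The handler and store would be registered in the API's own container and attached to a policy, so the token itself stays small and stable.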

What about roles?
That’s a very common question. Roles are a bit of a grey area between identity and authorization. My rule of thumb is that if a role is a fundamental part of the user identity that is of interest to every part of your system – and role membership does not change, or changes infrequently – it is a candidate for a claim in a token. Examples could be Customer vs Employee – or Patient vs Doctor vs Nurse.

Every other usage of roles – especially where role membership differs based on the client or API being used – is pure authorization data and should be avoided. If you realize that the number of roles of a user is high – or growing – avoid putting them into the token.

Conclusion
Design for a clean separation of identity and permissions (which is just a re-iteration of authentication vs authorization). Acquire authorization data as close as possible to the code that needs it – only there you can make an informed decision what you really need.

I also often get the question if we have a similar flexible solution to authorization as we have with IdentityServer for authentication – and the answer is – right now – no. But I have the feeling that 2017 will be our year to finally tackle the authorization problem. Stay tuned!


Filed under: .NET Security, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: Optimizing Identity Tokens for size

Generally speaking, you want to keep your (identity) tokens small. They often need to be transferred via length constrained transport mechanisms – especially the browser URL which might have limitations (e.g. 2 KB in IE). You also need to somehow store the identity token for the length of a session if you want to use the post logout redirect feature at logout time.

Therefore the OpenID Connect specification suggests the following (in section 5.4):

The Claims requested by the profile, email, address, and phone scope values are returned from the UserInfo Endpoint, as described in Section 5.3.2, when a response_type value is used that results in an Access Token being issued. However, when no Access Token is issued (which is the case for the response_type value id_token), the resulting Claims are returned in the ID Token.

IOW – if only an identity token is requested, put all claims into the token. If however an access token is requested as well (e.g. via id_token token or code id_token), it is OK to remove the claims from the identity token and rather let the client use the userinfo endpoint to retrieve them.

That’s how we always handled identity token generation in IdentityServer by default. You could then override our default behaviour by setting the AlwaysIncludeInIdToken flag on the ScopeClaim class.

When we did the configuration re-design in IdentityServer4, we asked ourselves if this override feature is still required. Times have changed a bit and the popular client libraries out there (e.g. the ASP.NET Core OpenID Connect middleware or Brock’s JS client) automatically use the userinfo endpoint anyways as part of the authentication process.

So we removed it.

Shortly after that, several people brought to our attention that they were actually relying on that feature and are now missing their claims in the identity token without a way to change configuration. Sorry about that.

Post RC5, we brought this feature back – it is now a client setting, and not a claims setting anymore. It will be included in RTM next week and documented in our docs.
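
For illustration only (the post predates RTM), a sketch of what that client-level opt-in could look like, assuming the setting ships as the AlwaysIncludeUserClaimsInIdToken flag on the Client class:

// Sketch: one client opts back into "all user claims in the identity token",
// while every other client keeps the small default token and uses userinfo.
var mvcClient = new Client
{
    ClientId = "mvc",
    AllowedGrantTypes = GrantTypes.Implicit,
    AllowedScopes = { "openid", "profile" },

    // assumed name of the re-introduced client setting
    AlwaysIncludeUserClaimsInIdToken = true
};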

I hope this post explains our motivation, and some background, why this behaviour existed in the first place.


Filed under: .NET Security, IdentityServer, OpenID Connect, WebAPI


Dominick Baier: IdentityServer4 and ASP.NET Core 1.1

aka RC5 – last RC – promised!

The update from ASP.NET Core 1.0 (aka LTS – long term support) to ASP.NET Core 1.1 (aka Current) didn’t go so well (at least IMHO).

There were a couple of breaking changes both on the APIs as well as in behaviour. Especially around challenge/response based authentication middleware and EF Core.

Long story short – it was not possible for us to make IdentityServer support both versions. That’s why we decided to move to 1.1, which includes a bunch of bug fixes, and will also most probably be the version that ships with the new Visual Studio.

To be more specific – we build against ASP.NET Core 1.1 and the 1.0.0-preview2-003131 SDK.

Here’s a guide that describes how to update your host to 1.1. Our docs and samples have been updated.


Filed under: ASP.NET, OAuth, OpenID Connect, WebAPI


Ben Foster: Bare metal APIs with ASP.NET Core MVC

ASP.NET Core MVC now provides a true "one asp.net" framework that can be used for building both APIs and websites. But what if you only want to build an API?

Most of the ASP.NET Core MVC tutorials I've seen advise using the Microsoft.AspNetCore.Mvc package. While this does indeed give you what you need to build APIs, it also gives you a lot more:

  • Microsoft.AspNetCore.Mvc.ApiExplorer
  • Microsoft.AspNetCore.Mvc.Cors
  • Microsoft.AspNetCore.Mvc.DataAnnotations
  • Microsoft.AspNetCore.Mvc.Formatters.Json
  • Microsoft.AspNetCore.Mvc.Localization
  • Microsoft.AspNetCore.Mvc.Razor
  • Microsoft.AspNetCore.Mvc.TagHelpers
  • Microsoft.AspNetCore.Mvc.ViewFeatures
  • Microsoft.Extensions.Caching.Memory
  • Microsoft.Extensions.DependencyInjection
  • NETStandard.Library

A few of these packages are still needed if you're building APIs but many are specific to building full websites.

After installing the above package we typically register MVC in Startup.ConfigureServices like so:

services.AddMvc();

This code is responsible for wiring up the necessary MVC services with the application container. Let's look at what this actually does:

public static IMvcBuilder AddMvc(this IServiceCollection services)
{
    var builder = services.AddMvcCore();

    builder.AddApiExplorer();
    builder.AddAuthorization();

    AddDefaultFrameworkParts(builder.PartManager);

    // Order added affects options setup order

    // Default framework order
    builder.AddFormatterMappings();
    builder.AddViews();
    builder.AddRazorViewEngine();
    builder.AddCacheTagHelper();

    // +1 order
    builder.AddDataAnnotations(); // +1 order

    // +10 order
    builder.AddJsonFormatters();

    builder.AddCors();

    return new MvcBuilder(builder.Services, builder.PartManager);
}

Again most of the service registration refers to the components used for rendering web pages.

Bare Metal APIs

It turns out that the ASP.NET team anticipated that developers may only want to build APIs and nothing else, so they gave us the ability to do just that.

First of all, rather than installing Microsoft.AspNetCore.Mvc, only install Microsoft.AspNetCore.Mvc.Core. This will give you the bare MVC middleware (routing, controllers, HTTP results) and not a lot else.

In order to process JSON requests and return JSON responses we also need the Microsoft.AspNetCore.Mvc.Formatters.Json package.

Then, to add both the core MVC middleware and JSON formatter, add the following code to ConfigureServices:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvcCore()
        .AddJsonFormatters();
}

The final thing to do is to change your controllers to derive from ControllerBase instead of Controller. This provides a base class for MVC controllers without any View support.
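
As a quick illustration (the controller name is mine, not from the post), a controller on this trimmed-down setup looks like any other MVC controller – it just derives from ControllerBase and sticks to API-style results:

using Microsoft.AspNetCore.Mvc;

[Route("api/values")]
public class ValuesController : ControllerBase
{
    // Returns JSON via the formatter registered by AddJsonFormatters();
    // no view-related services are required.
    [HttpGet]
    public IActionResult Get()
    {
        return Ok(new[] { "value1", "value2" });
    }
}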

Looking at the final list of packages in project.json, you can see we really don't need that much after all, especially given most of these are related to configuration and logging:

"Microsoft.AspNetCore.Mvc.Core": "1.1.0",
"Microsoft.AspNetCore.Mvc.Formatters.Json": "1.1.0",
"Microsoft.AspNetCore.Server.IISIntegration": "1.1.0",
"Microsoft.AspNetCore.Server.Kestrel": "1.1.0",
"Microsoft.Extensions.Configuration.EnvironmentVariables": "1.1.0",
"Microsoft.Extensions.Configuration.FileExtensions": "1.1.0",
"Microsoft.Extensions.Configuration.Json": "1.1.0",
"Microsoft.Extensions.Configuration.CommandLine": "1.1.0",
"Microsoft.Extensions.Logging": "1.1.0",
"Microsoft.Extensions.Logging.Console": "1.1.0",
"Microsoft.Extensions.Logging.Debug": "1.1.0"

You can find the complete code on GitHub.


Dominick Baier: New in IdentityServer4: Resource-based Configuration

For RC4 we decided to re-design our configuration object model for resources (formerly known as scopes).

I know, I know – we are not supposed to make fundamental breaking changes once reaching the RC status – but hey – we kind of had our “DNX” moment, and realized that we either change this now – or never.

Why did we do that?
We spent the last couple of years explaining OpenID Connect and OAuth 2.0 based architectures to hundreds of students in training classes, attendees at conferences, fellow developers, and customers from all types of industries.

While most concepts are pretty clear and make total sense – scopes were the most confusing part for most people. The abstract nature of a scope as well as the fact that the term scope has a somewhat different meaning in OpenID Connect and OAuth 2.0, made this concept really hard to grasp.

Maybe it’s also partly our fault that, by staying very close to the spec-speak with our object model and abstraction level, we forced that concept onto every user of IdentityServer.

Long story short – every time I needed to explain scope, I said something like "A scope is a resource a client wants to access…" and "there are two types of scopes: identity related and APIs…".

This got us thinking if it would make more sense to introduce the notion of resources in IdentityServer, and get rid of scopes.

What did we do?
Before RC4 – our configuration object model had three main parts: users, clients, and scopes (and there were two types of scopes – identity and resource – and some overlapping settings between them).

Starting with RC4 – the configuration model does not have scope anymore as a top-level concept, but rather identity resources and API resources.


We think this is a more natural way (and language) to model a typical token-based system.

From our new docs:

User
A user is a human that is using a registered client to access resources.

Client
A client is a piece of software that requests tokens from IdentityServer – either for authenticating a user (requesting an identity token)
or for accessing a resource (requesting an access token). A client must be first registered with IdentityServer before it can request tokens.

Resources
Resources are something you want to protect with IdentityServer – either identity data of your users (like user id, name, email..), or APIs.

Enough talk, show me the code!
Pre-RC4, you would have used a scope store to return a flat list of scopes. Now the new resource store deals with two different resource types: IdentityResource and ApiResource.

Let’s start with identity – standard scopes used to be defined like this:

public static IEnumerable<Scope> GetScopes()
{
    return new List<Scope>
    {
        StandardScopes.OpenId,
        StandardScopes.Profile
    };
}

..and now:

public static IEnumerable<IdentityResource> GetIdentityResources()
{
    return new List<IdentityResource>
    {
        new IdentityResources.OpenId(),
        new IdentityResources.Profile()
    };
}

Not very different. Now let’s define a custom identity resource with associated claims:

var customerProfile = new IdentityResource(
    name:        "profile.customer",
    displayName: "Customer profile",
    claimTypes:  new[] { "name", "status", "location" });

This is all that’s needed for 90% of all identity resources you will ever define. If you need to tweak details, you can set various properties on the IdentityResource class.

Let’s have a look at the API resources. You used to define a resource-scope like this:

public static IEnumerable<Scope> GetScopes()
{
    return new List<Scope>
    {
        new Scope
        {
            Name = "api1",
            DisplayName = "My API #1",
 
            Type = ScopeType.Resource
        }
    };
}

..and the new way:

public static IEnumerable<ApiResource> GetApis()
{
    return new[]
    {
        new ApiResource("api1""My API #1")
    };
}

Again – for the simple case there is not a huge difference. The ApiResource object model starts to become more powerful when you have advanced requirements like APIs with multiple scopes (and maybe different claims based on the scope) and support for introspection, e.g.:

public static IEnumerable<ApiResource> GetApis()
{
    return new[]
    {
        new ApiResource
        {
            Name = "calendar",
 
            // secret for introspection endpoint
            ApiSecrets =
            {
                new Secret("secret".Sha256())
            },
 
            // claims to include in access token
            UserClaims =
            {
                JwtClaimTypes.Name,
                JwtClaimTypes.Email
            },
 
            // API has multiple scopes
            Scopes =
            {
                new Scope
                {
                    Name = "calendar.read_only",
                    DisplayName = "Read only access to the calendar"
                },
                new Scope
                {
                    Name = "calendar.full_access",
                    DisplayName = "Full access to the calendar",
                    Emphasize = true,
 
                    // include additional claim for that scope
                    UserClaims =
                    {
                        "status"
                    }
                }
            }
        }
    };
}

IOW – we reversed the configuration approach, and you now model APIs (which might have scopes) – and not scopes (that happen to represent an API).

We like the new model much better as it reflects how you architect a token-based system much better. We hope you like it too – and sorry for moving the cheese ;)
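
For completeness – a minimal sketch (not from the post) of how these definitions get wired into the host, assuming the in-memory configuration extensions and hypothetical Resources/Clients holder classes for the methods shown above:

public void ConfigureServices(IServiceCollection services)
{
    services.AddIdentityServer()
        .AddTemporarySigningCredential()
        // identity resources and API resources replace the old flat scope store;
        // Resources and Clients are assumed holder classes for the snippets above
        .AddInMemoryIdentityResources(Resources.GetIdentityResources())
        .AddInMemoryApiResources(Resources.GetApis())
        .AddInMemoryClients(Clients.Get());
}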

As always – give us feedback on the issue tracker. RTM is very close.


Filed under: .NET Security, ASP.NET, OAuth, Uncategorized, WebAPI


Ben Foster: Using .NET Core Configuration with legacy projects

In .NET Core, configuration has been re-engineered, throwing away the System.Configuration model that relied on XML-based configuration files and introducing a number of new configuration components offering more flexibility and better extensibility.

At its lowest level, the new configuration system still provides access to key/value based settings. However, it also supports multiple configuration sources such as JSON files, and probably my favourite feature, strongly typed binding to configuration classes.

Whilst the new configuration system sits under the ASP.NET repository on GitHub, it doesn't actually have any dependency on any of the new ASP.NET components, meaning it can also be used in your non-.NET Core projects too.

In this post I'll cover how to use .NET Core Configuration in an ASP.NET Web API application.

Install the packages

The new .NET Core configuration components are published under Microsoft.Extensions.Configuration.* packages on NuGet. For this demo I've installed the following packages:

  • Microsoft.Extensions.Configuration
  • Microsoft.Extensions.Configuration.Json (support for JSON configuration files)
  • Microsoft.Extensions.Configuration.Binder (strongly-typed binding of configuration settings)

Initialising configuration

To initialise the configuration system we use ConfigurationBuilder. When you install additional configuration sources the builder will be extended with a number of new methods for adding those sources. Finally call Build() to create a configuration instance:

IConfigurationRoot configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json.config", optional: true)
    .Build();

Accessing configuration settings

Once you have the configuration instance, settings can be accessed using their key:

var applicationName = configuration["ApplicationName"];

If your configuration settings have a hierarchical structure (likely if you're using JSON or XML files) then each level in the hierarchy will be separated with a :.

To demonstrate I've added an appsettings.json.config file containing a few configuration settings:

{
  "connectionStrings": {
    "MyDb": "server=localhost;database=mydb;integrated security=true"
  },
  "apiSettings": {
    "url": "http://localhost/api",
    "apiKey": "sk_1234566",
    "useCache":  true
  }
}

Note: I'm using the .config extension as a simple way to prevent IIS serving these files directly. Alternatively you can set up IIS request filtering to prevent access to your JSON config files.

I've then wired up an endpoint in my controller to return the configuration, using keys to access my values:

public class ConfigurationController : ApiController
{
    public HttpResponseMessage Get()
    {
        var config = new
        {
            MyDbConnectionString = Startup.Config["ConnectionStrings:MyDb"],
            ApiSettings = new
            {
                Url = Startup.Config["ApiSettings:Url"],
                ApiKey = Startup.Config["ApiSettings:ApiKey"],
                UseCache = Startup.Config["ApiSettings:UseCache"],
            }
        };

        return Request.CreateResponse(config);
    }
}
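
The controller above reads the configuration through a static Startup.Config property. Here is a minimal sketch of exposing it – the property and initialisation method are my own naming; the original post presumably sets this up when the host starts:

using Microsoft.Extensions.Configuration;

public partial class Startup
{
    // Built once at application start and shared with the controllers.
    public static IConfigurationRoot Config { get; private set; }

    public static void InitConfiguration()
    {
        Config = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json.config", optional: true)
            .Build();
    }
}

InitConfiguration would be called once from wherever the self-hosted API bootstraps, e.g. the OWIN Configuration method.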

When I hit my /configuration endpoint I get the following JSON response:

{
    "MyDbConnectionString": "server=localhost;database=mydb;integrated security=true",
    "ApiSettings": {
        "Url": "http://localhost/api",
        "ApiKey": "sk_1234566",
        "UseCache": "True"
    }
}

Strongly-typed configuration

Of course accessing settings in this way isn't a vast improvement over using ConfigurationManager, and as you'll notice above, we're not getting the correct type for all of our settings.

Fortunately the new .NET Core configuration system supports strongly-typed binding of your configuration, using Microsoft.Extensions.Configuration.Binder.

I created the following class to bind my configuration to:

public class AppConfig
{
    public ConnectionStringsConfig ConnectionStrings { get; set; }
    public ApiSettingsConfig ApiSettings { get; set; }

    public class ConnectionStringsConfig
    {
        public string MyDb { get; set; }
    }   

    public class ApiSettingsConfig
    {
        public string Url { get; set; }
        public string ApiKey { get; set; }
        public bool UseCache { get; set; }
    }
}

To bind to this class directly, use the Get<T> extensions provided by the binder package. Here's my updated controller:

public HttpResponseMessage Get()
{
    var config = Startup.Config.Get<AppConfig>();
    return Request.CreateResponse(config);
}

The response:

{
   "ConnectionStrings":{
      "MyDb":"server=localhost;database=mydb;integrated security=true"
   },
   "ApiSettings":{
      "Url":"http://localhost/api",
      "ApiKey":"sk_1234566",
      "UseCache":true
   }
}

Now I can access my application configuration in a much nicer way:

if (config.ApiSettings.UseCache)
{

}

What about Web/App.config?

So far I've demonstrated how to use some of the new configuration features in a legacy application. But what if you still rely on traditional XML based configuration files like web.config or app.config?

In my application I still have a few settings in app.config (I'm self-hosting the API) that I require in my application. Ideally I'd like to use the .NET core configuration system to bind these to my AppConfig class too:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <connectionStrings>
    <add name="MyLegacyDb" connectionString="server=localhost;database=legacy" />
  </connectionStrings>
  <appSettings>
    <add key="ApplicationName" value="CoreConfigurationDemo"/>
  </appSettings>
</configuration>

There is an XML configuration source for .NET Core. However, if you try and use this for appSettings or connectionStrings elements you'll find the generated keys are not really ideal for strongly typed binding:

IConfigurationRoot configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json.config", optional: true)
    .AddXmlFile("app.config")
    .Build();

If we inspect the configuration after calling Build() we get the following key/value for the MyLegacyDb connection string:

[connectionStrings:add:MyLegacyDb:connectionString, server=localhost;database=legacy]

This is due to how the XML source binds XML attributes.

Given that we still have access to the older System.Configuration system it makes sense to use this to access our XML config files and then plug the values into the new .NET core configuration system. We can do this by creating a custom configuration provider.

Creating a custom configuration provider

To implement a custom configuration provider you implement the IConfigurationProvider and IConfigurationSource interfaces. You can also derive from the abstract class ConfigurationProvider which will save you writing some boilerplate code.

For a more advanced implementation that requires reading file contents and supports multiple files of the same type, check out Andrew Lock's write-up on how to add a YAML configuration provider.

Since I'm relying on System.Configuration.ConfigurationManager to read app.config and do not need to support multiple files, my implementation is quite simple:

public class LegacyConfigurationProvider : ConfigurationProvider, IConfigurationSource
{
    public override void Load()
    {
        foreach (ConnectionStringSettings connectionString in ConfigurationManager.ConnectionStrings)
        {
            Data.Add($"ConnectionStrings:{connectionString.Name}", connectionString.ConnectionString);
        }

        foreach (var settingKey in ConfigurationManager.AppSettings.AllKeys)
        {
            Data.Add(settingKey, ConfigurationManager.AppSettings[settingKey]);
        }
    }

    public IConfigurationProvider Build(IConfigurationBuilder builder)
    {
        return this;
    }
}

When ConfigurationBuilder.Build() is called the Build method of each configured source is executed, which returns an IConfigurationProvider used to get the configuration data. Since we're deriving from ConfigurationProvider we can override Load, adding each of the connection strings and application settings from app.config.

I updated my AppConfig to include the new setting and connection string as below:

public class AppConfig
{
    public string ApplicationName { get; set; }
    public ConnectionStringsConfig ConnectionStrings { get; set; }
    public ApiSettingsConfig ApiSettings { get; set; }

    public class ConnectionStringsConfig
    {
        public string MyDb { get; set; }
        public string MyLegacyDb { get; set; }
    }   

    public class ApiSettingsConfig
    {
        public string Url { get; set; }
        public string ApiKey { get; set; }
        public bool UseCache { get; set; }
    }
}

The only change I need to make to my application is to add the configuration provider to my configuration builder:

IConfigurationRoot configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json.config", optional: true)
    .Add(new LegacyConfigurationProvider())
    .Build();

Then, when I hit my /configuration endpoint I get my complete configuration, bound from both my JSON file and app.config:

{
   "ApplicationName":"CoreConfigurationDemo",
   "ConnectionStrings":{
      "MyDb":"server=localhost;integrated security=true",
      "MyLegacyDb":"server=localhost;database=legacy"
   },
   "ApiSettings":{
      "Url":"http://localhost/api",
      "ApiKey":"sk_1234566",
      "UseCache":true
   }
}


Pedro Félix: Accessing the HTTP Context on ASP.NET Core

TL;DR

On ASP.NET Core, the access to the request context can be done via the new IHttpContextAccessor interface, which provides a HttpContext property with this information. The IHttpContextAccessor is obtained via dependency injection or directly from the service locator. However, it requires an explicit service collection registration, mapping the IHttpContextAccessor interface to the HttpContextAccessor concrete class, with singleton scope.

Not so short version

System.Web

On classical ASP.NET, the current HTTP context, containing both request and response information, can be accessed anywhere via the omnipresent System.Web.HttpContext.Current static property. Internally, this property uses information stored in the CallContext object representing the current call flow. This CallContext is preserved even when the same flow crosses multiple threads, so it can handle async methods.

ASP.NET Web API

On ASP.NET Web API, obtaining the current HTTP context without having to flow it explicitly on every call is typically achieved with the help of the dependency injection container.
For instance, Autofac provides the RegisterHttpRequestMessage extension method on the ContainerBuilder, which allows classes to have HttpRequestMessage constructor dependencies.
This extension method configures a delegating handler that registers the input HttpRequestMessage instance into the current lifetime scope.

ASP.NET Core

ASP.NET Core uses a different approach. The access to the current context is provided via a IHttpContextAccessor service, containing a single HttpContext property with both a getter and a setter. So, instead of directly injecting the context, the solution is based on injecting an accessor to the context.
This apparently superfluous indirection provides one benefit: the accessor can have singleton scope, meaning that it can be injected into singleton components.
Notice that injecting a per HTTP request dependency, such as the request message, directly into another component is only possible if the component has the same lifetime scope.

In the current ASP.NET Core 1.0.0 implementation, the IHttpContextAccessor service is implemented by the HttpContextAccessor concrete class and must be configured as a singleton.
 

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
}

Notice that this registration is not done by default and must be explicitly performed. If not, any IHttpContextAccessor dependency will result in an activation exception.
On the other hand, no additional configuration is needed to capture the context at the beginning of each request, because this is done automatically.

The following implementation details shed some light on this behavior:

  • Each time a new request starts to be handled, a common IHttpContextFactory reference is used to create the HttpContext. This common reference is obtained by the WebHost during startup and used for all requests.

  • The HttpContextFactory implementation in use is initialized with an optional IHttpContextAccessor implementation. When available, this accessor is assigned with each created context. This means that if an accessor is registered on the services, then it will automatically be used to set all created contexts.

  • How can the same accessor instance hold different contexts, one for each call flow? The answer lies in the HttpContextAccessor concrete implementation and its use of AsyncLocal to store the context separately for each logical call flow. It is this characteristic that allows a singleton scoped accessor to provide request scoped contexts.
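
As a simplified sketch (not the framework source verbatim), an accessor along these lines shows how AsyncLocal lets a single instance serve per-flow contexts:

using System.Threading;
using Microsoft.AspNetCore.Http;

// Hypothetical, simplified accessor: the singleton instance stores the context
// in an AsyncLocal slot, so each logical call flow sees its own HttpContext.
public class FlowLocalHttpContextAccessor : IHttpContextAccessor
{
    private static readonly AsyncLocal<HttpContext> _current = new AsyncLocal<HttpContext>();

    public HttpContext HttpContext
    {
        get { return _current.Value; }
        set { _current.Value = value; }
    }
}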

To conclude:

  • Everywhere the HTTP context is needed, declare an IHttpContextAccessor dependency and use it to fetch the context.

  • Don’t forget to explicitly register the IHttpContextAccessor interface on the service collection, mapping it to the concrete HttpContextAccessor type.

  • Also, don’t forget to make this registration with singleton scope.
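
To make the first bullet concrete, a minimal sketch of a singleton-friendly component consuming the accessor (the AuditLogger class is hypothetical):

using Microsoft.AspNetCore.Http;

public class AuditLogger
{
    private readonly IHttpContextAccessor _accessor;

    public AuditLogger(IHttpContextAccessor accessor)
    {
        _accessor = accessor;
    }

    public void Log(string message)
    {
        // HttpContext is null outside of a request flow, so guard against it.
        var path = _accessor.HttpContext?.Request.Path.Value ?? "(no request)";
        System.Console.WriteLine(path + ": " + message);
    }
}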


