Andrew Lock: Creating strongly typed xUnit theory test data with TheoryData

In a recent post I described the various ways you can pass data to xUnit theory tests using attributes such as [InlineData], [ClassData], or [MemberData]. For the latter two, you create a property, method or class that returns IEnumerable<object[]>, where each object[] item contains the arguments for your theory test.

In this post, I'll show an alternative way to pass data to your theory tests, by using the strongly-typed TheoryData<> class. You can use it to create test data in the same way as in the previous post, but you get the advantage of compile-time type checking (as you should in C#!).

The problem with IEnumerable<object[]>

I'll assume you've already seen the previous post on how to use [ClassData] and [MemberData] attributes but just for context, this is what a typical theory test and data function might look like:

public class CalculatorTests  
{
    [Theory]
    [MemberData(nameof(Data))]
    public void CanAdd(int value1, int value2, int expected)
    {
        var calculator = new Calculator();
        var result = calculator.Add(value1, value2);
        Assert.Equal(expected, result);
    }

    public static IEnumerable<object[]> Data =>
        new List<object[]>
        {
            new object[] { 1, 2, 3 },
            new object[] { -4, -6, -10 },
            new object[] { -2, 2, 0 },
            new object[] { int.MinValue, -1, int.MaxValue },
        };
}

The test function CanAdd(value1, value2, expected) has three int parameters, and is decorated with a [MemberData] attribute that tells xUnit to load the parameters for the theory test from the Data property.

This works perfectly well, but if you're anything like me, returning an object[] just feels wrong. As we're using objects, there's nothing stopping you from returning something like this:

public static IEnumerable<object[]> Data =>  
    new List<object[]>
    {
        new object[] { 1.5, 2.3m, "The value" }
    };

This compiles without any warnings or errors, even from the xUnit analyzers. The CanAdd function requires three ints, but we're returning a double, a decimal, and a string. When the test executes, you'll get the following error:

Message: System.ArgumentException : Object of type 'System.String' cannot be converted to type 'System.Int32'.  

That's not ideal. Luckily, xUnit allows you to provide the same data as a strongly typed object, TheoryData<>.

Strongly typed data with TheoryData

The TheoryData<> types provide a series of abstractions around the IEnumerable<object[]> required by theory tests. They consist of a non-generic TheoryData base class, and a number of generic derived TheoryData<> classes. The basic abstraction looks like the following:

public abstract class TheoryData : IEnumerable<object[]>  
{
    readonly List<object[]> data = new List<object[]>();

    protected void AddRow(params object[] values)
    {
        data.Add(values);
    }

    public IEnumerator<object[]> GetEnumerator()
    {
        return data.GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}

This class implements IEnumerable<object[]> but it has no other public members. Instead, the generic derived TheoryData<> classes provide a public Add() method with typed parameters, to ensure you can only add rows of the correct type. For example, the derived class with three generic arguments looks like the following:

public class TheoryData<T1, T2, T3> : TheoryData  
{
    /// <summary>
    /// Adds data to the theory data set.
    /// </summary>
    /// <param name="p1">The first data value.</param>
    /// <param name="p2">The second data value.</param>
    /// <param name="p3">The third data value.</param>
    public void Add(T1 p1, T2 p2, T3 p3)
    {
        AddRow(p1, p2, p3);
    }
}

This type just passes the generic arguments to the protected AddRow() method, but it enforces that the types are correct, as the code won't compile if you try to pass an incorrect parameter to the Add() method.

Using TheoryData with the [ClassData] attribute

First, we'll look at how to use TheoryData<> with the [ClassData] attribute. You can apply the [ClassData] attribute to a theory test, and the referenced type will be used to load the data. In the previous post, the data class implemented IEnumerable<object[]>, but we can alternatively derive from TheoryData<T1, T2, T3> to ensure all the types are correct, for example:

public class CalculatorTestData : TheoryData<int, int, int>  
{
    public CalculatorTestData()
    {
        Add(1, 2, 3);
        Add(-4, -6, -10);
        Add(-2, 2, 0);
        Add(int.MinValue, -1, int.MaxValue);
        Add(1.5, 2.3m, "The value"); // will not compile!
    }
}

You can apply this to your theory test in exactly the same way as before, but this time you can be sure that every row will have the correct argument types:

[Theory]
[ClassData(typeof(CalculatorTestData))]
public void CanAdd(int value1, int value2, int expected)  
{
    var calculator = new Calculator();
    var result = calculator.Add(value1, value2);
    Assert.Equal(expected, result);
}

The main thing to watch out for here is that the CalculatorTestData class derives from the correct generic TheoryData<> - there's no compile-time checking that you're referencing a TheoryData<int, int, int> instead of a TheoryData<string>, for example.
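As a hedged illustration of that gap (the CalculatorStringTestData class and the CanAddWithWrongData test below are made up for this example, they aren't from the original post), the following compiles cleanly but fails when the test runs, because the attribute only ever sees a Type:

public class CalculatorStringTestData : TheoryData<string>
{
    public CalculatorStringTestData()
    {
        Add("not an int");
    }
}

[Theory]
[ClassData(typeof(CalculatorStringTestData))] // compiles, but fails at test time
public void CanAddWithWrongData(int value1, int value2, int expected)
{
    // xUnit only discovers the mismatch when it tries to turn the single
    // string row into the three int parameters, so you get a runtime error
    // rather than a compile-time one
}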

Using TheoryData with the [MemberData] attribute

You can use TheoryData<> with [MemberData] attributes as well as [ClassData] attributes. Instead of referencing a static property that returns an IEnumerable<object[]>, you reference a property or method that returns a TheoryData<> object with the correct parameters.

For example, we can rewrite the Data property from the start of this post to use a TheoryData<int, int, int> object:

public static TheoryData<int, int, int> Data  
{
    get
    {
        var data = new TheoryData<int, int, int>();
        data.Add(1, 2, 3);
        data.Add(-4, -6, -10);
        data.Add(-2, 2, 0);
        data.Add(int.MinValue, -1, int.MaxValue);
        data.Add(1.5, 2.3m, "The value"); // won't compile
        return data;
    }
}

This is effectively identical to the original example, but the strongly typed TheoryData<> won't let us add invalid data.

That's pretty much all there is to it, but if the verbosity of that example bugs you, you can make use of collection initialisers and expression bodied members to give:

public static TheoryData<int, int, int> Data =>  
    new TheoryData<int, int, int>
        {
            { 1, 2, 3 },
            { -4, -6, -10 },
            { -2, 2, 0 },
            { int.MinValue, -1, int.MaxValue }
        };

As with the [ClassData] attribute, you have to manually ensure that the TheoryData<> generic arguments match the theory test parameters they're used with, but at least you can be sure all of the rows in the IEnumerable<object[]> are consistent!

Summary

In this post I described how to create strongly-typed test data for xUnit theory tests using TheoryData<> classes. By creating instances of this class instead of IEnumerable<object[]> you can be sure that each row of data has the correct types for the theory test.


Anuraj Parameswaran: Using LESS CSS with ASP.NET Core

This post is about getting started with LESS CSS in ASP.NET Core. Less is a CSS pre-processor, meaning that it extends the CSS language, adding features that allow variables, mixins, functions and many other techniques that let you make CSS that is more maintainable, themeable and extendable. LESS also helps developers avoid code duplication.


Dominick Baier: Missing Claims in the ASP.NET Core 2 OpenID Connect Handler?

The new OpenID Connect handler in ASP.NET Core 2 has a different (aka breaking) behavior when it comes to mapping claims from an OIDC provider to the resulting ClaimsPrincipal.

This is especially confusing and hard to diagnose since there are a couple of moving parts that come together here. Let’s have a look.

You can use my sample OIDC client here to observe the same results.

Mapping of standard claim types to Microsoft proprietary ones
The first annoying thing is that Microsoft still thinks they know what's best for you by mapping the OIDC standard claims to their proprietary ones.

This can be fixed elegantly by clearing the inbound claim type map on the Microsoft JWT token handler:

JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();

A basic OpenID Connect authentication request
Next – let’s start with a barebones scenario where the client requests the openid scope only.

The first confusing thing is that Microsoft pre-populates the Scope collection on the OpenIdConnectOptions with the openid and the profile scope (don't get me started). This means if you only want to request openid, you first need to clear the Scope collection and then add openid manually.

services.AddAuthentication(options =>
{
    options.DefaultScheme = "Cookies";
    options.DefaultChallengeScheme = "oidc";
})
    .AddCookie("Cookies", options =>
    {
        options.AccessDeniedPath = "/account/denied";
    })
    .AddOpenIdConnect("oidc", options =>
    {
        options.Authority = "https://demo.identityserver.io";
        options.ClientId = "server.hybrid";
        options.ClientSecret = "secret";
        options.ResponseType = "code id_token";
 
        options.SaveTokens = true;
                    
        options.Scope.Clear();
        options.Scope.Add("openid");
                    
        options.TokenValidationParameters = new TokenValidationParameters
        {
            NameClaimType = "name", 
            RoleClaimType = "role"
        };
    });

With the ASP.NET Core v1 handler, this would have returned the following claims: nbf, exp, iss, aud, nonce, iat, c_hash, sid, sub, auth_time, idp, amr.

In V2 we only get sid, sub and idp. What happened?

Microsoft added a new concept to their OpenID Connect handler called ClaimActions. Claim actions allow modifying how claims from an external provider are mapped (or not) to a claim in your ClaimsPrincipal. Looking at the ctor of the OpenIdConnectOptions, you can see that the handler will now skip the following claims by default:

ClaimActions.DeleteClaim("nonce");
ClaimActions.DeleteClaim("aud");
ClaimActions.DeleteClaim("azp");
ClaimActions.DeleteClaim("acr");
ClaimActions.DeleteClaim("amr");
ClaimActions.DeleteClaim("iss");
ClaimActions.DeleteClaim("iat");
ClaimActions.DeleteClaim("nbf");
ClaimActions.DeleteClaim("exp");
ClaimActions.DeleteClaim("at_hash");
ClaimActions.DeleteClaim("c_hash");
ClaimActions.DeleteClaim("auth_time");
ClaimActions.DeleteClaim("ipaddr");
ClaimActions.DeleteClaim("platf");
ClaimActions.DeleteClaim("ver");

If you want to “un-skip” a claim, you need to delete a specific claim action when setting up the handler. The following is the very intuitive syntax to get the amr claim back:

options.ClaimActions.Remove("amr");

If you want to see the raw claims from the token in the principal, you need to clear the whole claims action collection.
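For completeness, here's a minimal sketch of what that looks like inside the handler configuration (this relies on the Clear() method of the ClaimActionCollection in ASP.NET Core 2.0):

.AddOpenIdConnect("oidc", options =>
{
    // ... other options as shown above ...

    // remove all of the default claim actions, so the claims from the
    // token are passed through to the ClaimsPrincipal untouched
    options.ClaimActions.Clear();
});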

Requesting more claims from the OIDC provider
When you are requesting more scopes, e.g. profile or custom scopes that result in more claims, there is another confusing detail to be aware of.

Depending on the response_type in the OIDC protocol, some claims are transferred via the id_token and some via the userinfo endpoint. I wrote about the details here.

So first of all, you need to enable support for the userinfo endpoint in the handler:

options.GetClaimsFromUserInfoEndpoint = true;

If the claims are being returned by userinfo, ClaimActions are used again to map the claims from the returned JSON document to the principal. The following default settings are used here:

ClaimActions.MapUniqueJsonKey("sub""sub");
ClaimActions.MapUniqueJsonKey("name""name");
ClaimActions.MapUniqueJsonKey("given_name""given_name");
ClaimActions.MapUniqueJsonKey("family_name""family_name");
ClaimActions.MapUniqueJsonKey("profile""profile");
ClaimActions.MapUniqueJsonKey("email""email");

IOW – if you are sending a claim to your client that is not part of the above list, it simply gets ignored, and you need to do an explicit mapping. Let’s say your client application receives the website claim via userinfo (one of the standard OIDC claims, but unfortunately not mapped by Microsoft) – you need to add the mapping yourself:

options.ClaimActions.MapUniqueJsonKey("website""website");

The same would apply for any other claims you return via userinfo.

I hope this helps. In short – you want to be explicit about your mappings, because I am sure that those default mappings will change at some point in the future, which will lead to unexpected behavior in your client applications.


Andrew Lock: Creating a custom xUnit theory test DataAttribute to load data from JSON files

In my last post, I described the various ways to pass data to an xUnit [Theory] test. These are:

  • [InlineData] - Pass the data for the theory test method parameters as arguments to the attribute
  • [ClassData] - Create a custom class that implements IEnumerable<object[]>, and use this to return the test data
  • [MemberData] - Create a static property or method that returns an IEnumerable<object[]> and use it to supply the test data.

All of these attributes derive from the base DataAttribute class that's part of the xUnit SDK namespace, Xunit.Sdk.

In this post I'll show how you can create your own implementation of DataAttribute. This allows you to load data from any source you choose. As an example I'll show how you can load data from a JSON file.

The DataAttribute base class

The base DataAttribute class is very simple. It's an abstract class that derives from Attribute, with a single method to implement, GetData(), which returns the test data:

public abstract class DataAttribute : Attribute  
{
    public virtual string Skip { get; set; }

    public abstract IEnumerable<object[]> GetData(MethodInfo testMethod);
}

The GetData() method returns an IEnumerable<object[]>, which should be familiar if you read my last post, or you've used either [ClassData] or [MemberData]. Each object[] item that's part of the IEnumerable<> contains the parameters for a single run of a [Theory] test.
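Before looking at the JSON version, here's a minimal sketch of a custom attribute (the HardCodedDataAttribute name is made up for illustration) that simply hard-codes its rows, just to show the shape of what GetData() has to return:

public class HardCodedDataAttribute : DataAttribute
{
    public override IEnumerable<object[]> GetData(MethodInfo testMethod)
    {
        // one object[] per theory run: value1, value2, expected
        yield return new object[] { 1, 2, 3 };
        yield return new object[] { -4, -6, -10 };
    }
}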

Using the custom JsonFileDataAttribute

To create a custom data source, you just need to derive from this class and implement GetData(). You can then use your new attribute to pass data to a theory test method. In this post we'll create an attribute that loads data from a JSON file, called JsonFileDataAttribute. We can add this to a theory test, and it will use all the data in the JSON file as data for test runs:

[Theory]
[JsonFileData("all_data.json")]
public void CanAddAll(int value1, int value2, int expected)  
{
    var calculator = new Calculator();

    var result = calculator.Add(value1, value2);

    Assert.Equal(expected, result);
}

With this usage, the entire file provides the data for the [Theory] test. Alternatively, you can specify a property value in addition to the file name. This lets you have a single JSON file containing data for multiple theory tests, e.g.

[Theory]
[JsonFileData("data.json", "AddData")]
public void CanAdd(int value1, int value2, int expected)  
{
    var calculator = new Calculator();

    var result = calculator.Add(value1, value2);

    Assert.Equal(expected, result);
}

[Theory]
[JsonFileData("data.json", "SubtractData")]
public void CanSubtract(int value1, int value2, int expected)  
{
    var calculator = new Calculator();

    var result = calculator.Subtract(value1, value2);

    Assert.Equal(expected, result);
}

That's how we'll use the attribute; now we'll look at how to create it.

Creating the custom JsonFileDataAttribute

The implementation of JsonFileDataAttribute uses the provided file path to load a JSON file. It then deserialises the file into an IEnumerable<object[]>, optionally selecting a sub-property first. I've not tried to optimise this at all, so it just loads the whole file into memory and then deserialises it. You could do a lot more in that respect if performance is an issue, but it does the job.

public class JsonFileDataAttribute : DataAttribute  
{
    private readonly string _filePath;
    private readonly string _propertyName;

    /// <summary>
    /// Load data from a JSON file as the data source for a theory
    /// </summary>
    /// <param name="filePath">The absolute or relative path to the JSON file to load</param>
    public JsonFileDataAttribute(string filePath)
        : this(filePath, null) { }

    /// <summary>
    /// Load data from a JSON file as the data source for a theory
    /// </summary>
    /// <param name="filePath">The absolute or relative path to the JSON file to load</param>
    /// <param name="propertyName">The name of the property on the JSON file that contains the data for the test</param>
    public JsonFileDataAttribute(string filePath, string propertyName)
    {
        _filePath = filePath;
        _propertyName = propertyName;
    }

    /// <inheritDoc />
    public override IEnumerable<object[]> GetData(MethodInfo testMethod)
    {
        if (testMethod == null) { throw new ArgumentNullException(nameof(testMethod)); }

        // Get the path to the JSON file (absolute, or relative to the test execution folder)
        var path = Path.IsPathRooted(_filePath)
            ? _filePath
            : Path.GetRelativePath(Directory.GetCurrentDirectory(), _filePath);

        if (!File.Exists(path))
        {
            throw new ArgumentException($"Could not find file at path: {path}");
        }

        // Load the file
        var fileData = File.ReadAllText(path);

        if (string.IsNullOrEmpty(_propertyName))
        {
            //whole file is the data
            return JsonConvert.DeserializeObject<List<object[]>>(fileData);
        }

        // Only use the specified property as the data
        var allData = JObject.Parse(fileData);
        var data = allData[_propertyName];
        return data.ToObject<List<object[]>>();
    }
}

The JsonFileDataAttribute supports relative or absolute file paths, just remember that the file path will be relative to the folder in which your tests execute.

You may have also noticed that the GetData() method is supplied a MethodInfo parameter. If you wanted, you could update the JsonFileDataAttribute to automatically use the theory test method name as the JSON sub-property, but I'll leave that as an exercise!
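As a rough sketch of that exercise (a variation on the attribute above, not the original implementation), you could fall back to the test method's name whenever no property name is supplied, at the cost of losing the "whole file is the data" behaviour:

// Replace the property handling at the end of GetData() with something like:
var propertyName = string.IsNullOrEmpty(_propertyName)
    ? testMethod.Name   // e.g. the CanAdd test looks for a "CanAdd" property in the JSON
    : _propertyName;

var allData = JObject.Parse(fileData);
var data = allData[propertyName];
return data.ToObject<List<object[]>>();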

Loading data from JSON files

Just to complete the picture, the following solution contains two JSON files, data.json and all_data.json, which provide the data for the tests shown earlier in this post.

[Screenshot: the data.json and all_data.json files in the test project, with the Copy to Output Directory setting in the file properties dialog]

You have to make sure the files are copied to the test output, which you can do from the properties dialog as shown above, or by setting the CopyToOutputDirectory attribute in your .csproj directly:

<ItemGroup>  
  <None Update="all_data.json" CopyToOutputDirectory="PreserveNewest" />
  <None Update="data.json" CopyToOutputDirectory="PreserveNewest" />
  <None Update="xunit.runner.json" CopyToOutputDirectory="PreserveNewest" />
</ItemGroup>  

The data.json file contains two properties, for two different theory tests. Notice that each property is an array of arrays, so that we can deserialize it into an IEnumerable<object[]>.

{
  "AddData": [
    [ 1, 2, 3 ],
    [ -4, -6, -10 ],
    [ -2, 2, 0 ]
  ],
  "SubtractData": [
    [ 1, 2, -1 ],
    [ -4, -6, 2 ],
    [ 2, 2, 0 ]
  ]
}

The all_data.json file, on the other hand, consists of a single array of arrays, as we're using the whole file as the source for the theory test.

[
  [ 1, 2, 3 ],
  [ -4, -6, -10 ],
  [ -2, 2, 0 ]
]

With everything in place, we can run all the theory tests, using the data from the files:

[Screenshot: the test runner showing the theory tests passing, using the data from the JSON files]

Summary

xUnit contains the concept of parameterised tests, so you can write tests using a range of data. Out of the box, you can use the [InlineData], [ClassData], and [MemberData] attributes to pass data to such a theory test. All of these attributes derive from DataAttribute, which you can also derive from to create your own custom data source.

In this post, I showed how you could create a custom DataAttribute called JsonFileDataAttribute to load data from a JSON file. This is a basic implementation, but you could easily extend it to meet your own needs. You can find the code for this and the last post on GitHub.


Damien Bowden: IdentityServer4 Localization using ui_locales and the query string

This post is part 2 of the previous post, IdentityServer4 Localization with the OIDC Implicit Flow, where the localization was implemented using a shared cookie between the applications. That approach has its restrictions due to the cookie domain constraints, so this post shows how the OIDC optional parameter ui_locales can be used instead to pass the localization between the client application and the STS server.

Code: https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow

The ui_locales parameter, which is optional and defined in the OpenID Connect standard, can be used to pass the localization from the client application to the server application in the authorize request. The parameter is passed as a query string value.

A custom RequestCultureProvider class is implemented to handle this. The culture provider checks for the ui_locales in the query string and sets the culture if it is found. If it is not found, it checks for the returnUrl parameter. This is the parameter returned by the IdentityServer4 middleware after a redirect from the /connect/authorize endpoint. The provider then searches for the ui_locales in the parameter and sets the culture if found.

Once the culture has been set, a localization cookie is set on the server and added to the response. This cookie will be used if the client application/user tries to logout. This is required because the culture cannot be set for the endsession endpoint.

using Microsoft.AspNetCore.Localization;
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Primitives;
using Microsoft.AspNetCore.WebUtilities;
using System.Linq;

namespace IdentityServerWithIdentitySQLite
{
    public class LocalizationQueryProvider : RequestCultureProvider
    {
        public static readonly string DefaultParameterName = "culture";

        public string QueryParameterName { get; set; } = DefaultParameterName;

        /// <inheritdoc />
        public override Task<ProviderCultureResult> DetermineProviderCultureResult(HttpContext httpContext)
        {
            if (httpContext == null)
            {
                throw new ArgumentNullException(nameof(httpContext));
            }

            var query = httpContext.Request.Query;
            var exists = query.TryGetValue("ui_locales", out StringValues culture);

            if (!exists)
            {
                exists = query.TryGetValue("returnUrl", out StringValues requesturl);
                // hack because Identityserver4 does some magic here...
                // Need to set the culture manually
                if (exists)
                {
                    var request = requesturl.ToArray()[0];
                    Uri uri = new Uri("http://faketopreventexception" + request);
                    var query1 = QueryHelpers.ParseQuery(uri.Query);
                    var requestCulture = query1.FirstOrDefault(t => t.Key == "ui_locales").Value;

                    var cultureFromReturnUrl = requestCulture.ToString();
                    if(string.IsNullOrEmpty(cultureFromReturnUrl))
                    {
                        return NullProviderCultureResult;
                    }

                    culture = cultureFromReturnUrl;
                }
            }

            var providerResultCulture = ParseDefaultParameterValue(culture);

            // Use this cookie for following requests, so that for example the logout request will work
            if (!string.IsNullOrEmpty(culture.ToString()))
            {
                var cookie = httpContext.Request.Cookies[".AspNetCore.Culture"];
                var newCookieValue = CookieRequestCultureProvider.MakeCookieValue(new RequestCulture(culture));

                if (string.IsNullOrEmpty(cookie) || cookie != newCookieValue)
                {
                    httpContext.Response.Cookies.Append(".AspNetCore.Culture", newCookieValue);
                }
            }

            return Task.FromResult(providerResultCulture);
        }

        public static ProviderCultureResult ParseDefaultParameterValue(string value)
        {
            if (string.IsNullOrWhiteSpace(value))
            {
                return null;
            }

            var cultureName = value;
            var uiCultureName = value;

            if (cultureName == null && uiCultureName == null)
            {
                // No values specified for either so no match
                return null;
            }

            if (cultureName != null && uiCultureName == null)
            {
                uiCultureName = cultureName;
            }

            if (cultureName == null && uiCultureName != null)
            {
                cultureName = uiCultureName;
            }

            return new ProviderCultureResult(cultureName, uiCultureName);
        }
    }
}

The LocalizationQueryProvider can then be added as part of the localization configuration.

services.Configure<RequestLocalizationOptions>(
options =>
{
	var supportedCultures = new List<CultureInfo>
		{
			new CultureInfo("en-US"),
			new CultureInfo("de-CH"),
			new CultureInfo("fr-CH"),
			new CultureInfo("it-CH")
		};

	options.DefaultRequestCulture = new RequestCulture(culture: "de-CH", uiCulture: "de-CH");
	options.SupportedCultures = supportedCultures;
	options.SupportedUICultures = supportedCultures;

	var providerQuery = new LocalizationQueryProvider
	{
		QureyParamterName = "ui_locales"
	};

	options.RequestCultureProviders.Insert(0, providerQuery);
});

The client application can add the ui_locales parameter to the authorize request.

let culture = 'de-CH';
if (this.locale.getCurrentCountry()) {
   culture = this.locale.getCurrentLanguage() + '-' + this.locale.getCurrentCountry();
}

this.oidcSecurityService.setCustomRequestParameters({ 'ui_locales': culture});

this.oidcSecurityService.authorize();

The localization will now be sent from the client application to the server.

https://localhost:44318/account/login?returnUrl=%2Fconnect%2Fauthorize? …ui_locales%3Dfr-CH


Links:

https://damienbod.com/2017/11/01/shared-localization-in-asp-net-core-mvc/

https://github.com/IdentityServer/IdentityServer4

https://docs.microsoft.com/en-us/aspnet/core/fundamentals/localization

https://github.com/robisim74/angular-l10n

https://damienbod.com/2017/11/06/identityserver4-localization-with-the-oidc-implicit-flow/

http://openid.net/specs/openid-connect-core-1_0.html



Andrew Lock: Creating parameterised tests in xUnit with [InlineData], [ClassData], and [MemberData]

In this post I provide an introduction to creating parameterised tests using xUnit's [Theory] tests, and how you can pass data into your test methods. I'll cover the common [InlineData] attribute, and also the [ClassData] and [MemberData] attributes. In the next post, I'll show how to load data in other ways by creating your own [DataAttribute].

If you're new to testing with xUnit, I suggest reading the getting started documentation. This shows how to get started testing .NET Core projects with xUnit, and provides an introduction to [Fact] and [Theory] tests.

Shortly after writing this post I discovered this very similar post by Hamid Mosalla. Kudos to him for beating me to it by 8 months 😉.

Basic tests using xUnit [Fact]

If we're going to write some unit tests, it's easiest to have something we want to test. I'm going to use the super-trivial and clichéd "calculator", shown below:

public class Calculator  
{
    public int Add(int value1, int value2)
    {
        return value1 + value2;
    }
}

The Add method takes two numbers, adds them together and returns the result.

We'll start by creating our first xUnit test for this class. In xUnit, the most basic test method is a public parameterless method decorated with the [Fact] attribute. The following example tests that when we pass the values 1 and 2 to the Add() function, it returns 3:

public class CalculatorTests  
{
    [Fact]
    public void CanAdd()
    {
        var calculator = new Calculator();

        int value1 = 1;
        int value2 = 2;

        var result = calculator.Add(value1, value2);

        Assert.Equal(3, result);
    }
}

If you run your test project using dotnet test (or Visual Studio's Test Explorer), then you'll see a single test listed, which shows the test was passed:

[Screenshot: the test output showing a single passing CanAdd test]

We know that the Calculator.Add() function is working correctly for these specific values, but we'll clearly need to test more values than just 1 and 2. The question is, what's the best way to achieve this? We could copy and paste the test and just change the specific values used for each one, but that's a bit messy. Instead, xUnit provides the [Theory] attribute for this situation.

Using the [Theory] attribute to create parameterised tests with [InlineData]

xUnit uses the [Fact] attribute to denote a parameterless unit test, which tests invariants in your code.

In contrast, the [Theory] attribute denotes a parameterised test that is true for a subset of data. That data can be supplied in a number of ways, but the most common is with an [InlineData] attribute.

The following example shows how you could rewrite the previous CanAdd test method to use the [Theory] attribute, and add some extra values to test:

[Theory]
[InlineData(1, 2, 3)]
[InlineData(-4, -6, -10)]
[InlineData(-2, 2, 0)]
[InlineData(int.MinValue, -1, int.MaxValue)]
public void CanAddTheory(int value1, int value2, int expected)  
{
    var calculator = new Calculator();

    var result = calculator.Add(value1, value2);

    Assert.Equal(expected, result);
}

Instead of specifying the values to add (value1 and value2) in the test body, we pass those values as parameters to the test. We also pass in the expected result of the calculation, to use in the Assert.Equal() call.

The data is provided by the [InlineData] attribute. Each instance of [InlineData] will create a separate execution of the CanAddTheory method. The values passed in the constructor of [InlineData] are used as the parameters for the method - the order of the parameters in the attribute matches the order in which they're supplied to the method.

Tip: The xUnit 2.3.0 NuGet package includes some Roslyn analyzers that can help ensure that your [InlineData] parameters match the method's parameters. The image below shows three errors: not enough parameters, too many parameters, and parameters of the wrong type.

[Screenshot: the Roslyn analyzer errors for [InlineData] attributes with not enough parameters, too many parameters, and parameters of the wrong type]

If you run the tests for this method, you'll see each [InlineData] creates a separate instance. xUnit handily adds the parameter names and values to the test description, so you can easily see which iteration failed.

[Screenshot: the test output listing a separate result for each [InlineData] row, with the parameter names and values in the test description]

As an aside, do you see what I did with that int.MinValue test? You're testing your edge cases work as expected right? 😉

The [InlineData] attribute is great when your method parameters are constants, and you don't have too many cases to test. If that's not the case, then you might want to look at one of the other ways to provide data to your [Theory] methods.

Using a dedicated data class with [ClassData]

If the values you need to pass to your [Theory] test aren't constants, then you can use an alternative attribute, [ClassData], to provide the parameters. This attribute takes a Type which xUnit will use to obtain the data:

[Theory]
[ClassData(typeof(CalculatorTestData))]
public void CanAddTheoryClassData(int value1, int value2, int expected)  
{
    var calculator = new Calculator();

    var result = calculator.Add(value1, value2);

    Assert.Equal(expected, result);
}

We've specified a type of CalculatorTestData in the [ClassData] attribute. This class must implement IEnumerable<object[]>, where each item returned is an array of objects to use as the method parameters. We could rewrite the data from the [InlineData] attribute using this approach:

public class CalculatorTestData : IEnumerable<object[]>  
{
    public IEnumerator<object[]> GetEnumerator()
    {
        yield return new object[] { 1, 2, 3 };
        yield return new object[] { -4, -6, -10 };
        yield return new object[] { -2, 2, 0 };
        yield return new object[] { int.MinValue, -1, int.MaxValue };
    }

    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}

Obviously you could write this enumerator in multiple ways, but I went for a simple iterator approach. xUnit will call .ToList() on your provided class before it runs any of the theory method instances, so it's important the data is all independent. You don't want shared objects between test runs causing weird bugs!
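To illustrate that last point (the MutableSettings class below is invented purely for this example), if your rows contain mutable reference types, yield a fresh instance per row rather than sharing one object across rows, so that one test run can't observe mutations made by another:

public class MutableSettings
{
    public int Value { get; set; }
}

public class SettingsTestData : IEnumerable<object[]>
{
    public IEnumerator<object[]> GetEnumerator()
    {
        // Good: each row gets its own instance
        yield return new object[] { new MutableSettings { Value = 1 } };
        yield return new object[] { new MutableSettings { Value = 2 } };

        // Risky: a single shared instance would be handed to every run,
        // so a test that mutates it could affect the other runs:
        // var shared = new MutableSettings();
        // yield return new object[] { shared };
        // yield return new object[] { shared };
    }

    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}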

The [ClassData] attribute is a convenient way of removing clutter from your test files, but what if you don't want to create an extra class? For these situations, you can use the [MemberData] attribute.

Using generator properties with the [MemberData] attribute

The [MemberData] attribute can be used to fetch data for a [Theory] from a static property or method of a type. This attribute has quite a lot of options, so I'll just run through some of them here.

Loading data from a property on the test class

The [MemberData] attribute can load data from an IEnumerable<object[]> property on the test class. The xUnit analyzers will pick up any issues with your configuration, such as missing properties, or using properties that return invalid types.

In the following example I've added a Data property which returns an IEnumerable<object[]>, just like for the [ClassData] example:

public class CalculatorTests  
{
    [Theory]
    [MemberData(nameof(Data))]
    public void CanAddTheoryMemberDataProperty(int value1, int value2, int expected)
    {
        var calculator = new Calculator();

        var result = calculator.Add(value1, value2);

        Assert.Equal(expected, result);
    }

    public static IEnumerable<object[]> Data =>
        new List<object[]>
        {
            new object[] { 1, 2, 3 },
            new object[] { -4, -6, -10 },
            new object[] { -2, 2, 0 },
            new object[] { int.MinValue, -1, int.MaxValue },
        };
}

Loading data from a method on the test class

As well as properties, you can obtain [MemberData] from a static method. These methods can even be parameterised themselves. If that's the case, you need to supply the parameters in the [MemberData], as shown below:

public class CalculatorTests  
{
    [Theory]
    [MemberData(nameof(GetData), parameters: 3)]
    public void CanAddTheoryMemberDataMethod(int value1, int value2, int expected)
    {
        var calculator = new Calculator();

        var result = calculator.Add(value1, value2);

        Assert.Equal(expected, result);
    }

    public static IEnumerable<object[]> GetData(int numTests)
    {
        var allData = new List<object[]>
        {
            new object[] { 1, 2, 3 },
            new object[] { -4, -6, -10 },
            new object[] { -2, 2, 0 },
            new object[] { int.MinValue, -1, int.MaxValue },
        };

        return allData.Take(numTests);
    }
}

In this case, xUnit first calls GetData(), passing in the parameter as numTests: 3. It then uses each object[] returned by the method to execute the [Theory] test.

Loading data from a property or method on a different class

This option is sort of a hybrid between the [ClassData] attribute and the [MemberData] attribute usage you've seen so far. Instead of loading data from a property or method on the test class, you load data from a property or method on some other specified type:

public class CalculatorTests  
{
    [Theory]
    [MemberData(nameof(CalculatorData.Data), MemberType = typeof(CalculatorData))]
    public void CanAddTheoryMemberDataMethod(int value1, int value2, int expected)
    {
        var calculator = new Calculator();

        var result = calculator.Add(value1, value2);

        Assert.Equal(expected, result);
    }
}

public class CalculatorData  
{
    public static IEnumerable<object[]> Data =>
        new List<object[]>
        {
            new object[] { 1, 2, 3 },
            new object[] { -4, -6, -10 },
            new object[] { -2, 2, 0 },
            new object[] { int.MinValue, -1, int.MaxValue },
        };
}

That pretty much covers your options for providing data to [Theory] tests. If these attributes don't let you provide data in the way you want, you can always create your own, as you'll see in my next post.


Damien Bowden: IdentityServer4 Localization with the OIDC Implicit Flow

This post shows how to implement localization in IdentityServer4 when using the Implicit Flow with an Angular client.

Code: https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow

The problem

When the OIDC implicit client calls the endpoint /connect/authorize to authenticate and authorize the client and the identity, the user is redirected to the AccountController login method using the IdentityServer4 package. If the culture and the ui-culture are set using the query string or using the default localization filter, they get ignored in the host. By using a localization cookie, which is set from the client SPA application, it is possible to use this culture in IdentityServer4 and its host.

Part 2: IdentityServer4 Localization using ui_locales and the query string

IdentityServer4 Localization

The ASP.NET Core localization is configured in the startup method of the IdentityServer4 host. The localization service, the resource paths and the RequestCultureProviders are configured here. A custom LocalizationCookieProvider is added to handle the localization cookie. The MVC middleware is then configured to use the localization.

public void ConfigureServices(IServiceCollection services)
{
	...

	services.AddSingleton<LocService>();
	services.AddLocalization(options => options.ResourcesPath = "Resources");

	services.AddAuthentication();

	services.AddIdentity<ApplicationUser, IdentityRole>()
	.AddEntityFrameworkStores<ApplicationDbContext>();

	services.Configure<RequestLocalizationOptions>(
		options =>
		{
			var supportedCultures = new List<CultureInfo>
				{
					new CultureInfo("en-US"),
					new CultureInfo("de-CH"),
					new CultureInfo("fr-CH"),
					new CultureInfo("it-CH")
				};

			options.DefaultRequestCulture = new RequestCulture(culture: "de-CH", uiCulture: "de-CH");
			options.SupportedCultures = supportedCultures;
			options.SupportedUICultures = supportedCultures;

			options.RequestCultureProviders.Clear();
			var provider = new LocalizationCookieProvider
			{
				CookieName = "defaultLocale"
			};
			options.RequestCultureProviders.Insert(0, provider);
		});

	services.AddMvc()
	 .AddViewLocalization()
	 .AddDataAnnotationsLocalization(options =>
	 {
		 options.DataAnnotationLocalizerProvider = (type, factory) =>
		 {
			 var assemblyName = new AssemblyName(typeof(SharedResource).GetTypeInfo().Assembly.FullName);
			 return factory.Create("SharedResource", assemblyName.Name);
		 };
	 });

	...

	services.AddIdentityServer()
		.AddSigningCredential(cert)
		.AddInMemoryIdentityResources(Config.GetIdentityResources())
		.AddInMemoryApiResources(Config.GetApiResources())
		.AddInMemoryClients(Config.GetClients())
		.AddAspNetIdentity<ApplicationUser>()
		.AddProfileService<IdentityWithAdditionalClaimsProfileService>();
}

The localization is added to the pipe in the Configure method.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	...

	var locOptions = app.ApplicationServices.GetService<IOptions<RequestLocalizationOptions>>();
	app.UseRequestLocalization(locOptions.Value);

	app.UseStaticFiles();

	app.UseIdentityServer();

	app.UseMvc(routes =>
	{
		routes.MapRoute(
			name: "default",
			template: "{controller=Home}/{action=Index}/{id?}");
	});
}

The LocalizationCookieProvider class implements the RequestCultureProvider to handle the localization sent from the Angular client as a cookie. The class uses the defaultLocale cookie to set the culture. This was configured in the startup class previously.

using Microsoft.AspNetCore.Localization;
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

namespace IdentityServerWithIdentitySQLite
{
    public class LocalizationCookieProvider : RequestCultureProvider
    {
        public static readonly string DefaultCookieName = ".AspNetCore.Culture";

        public string CookieName { get; set; } = DefaultCookieName;

        /// <inheritdoc />
        public override Task<ProviderCultureResult> DetermineProviderCultureResult(HttpContext httpContext)
        {
            if (httpContext == null)
            {
                throw new ArgumentNullException(nameof(httpContext));
            }

            var cookie = httpContext.Request.Cookies[CookieName];

            if (string.IsNullOrEmpty(cookie))
            {
                return NullProviderCultureResult;
            }

            var providerResultCulture = ParseCookieValue(cookie);

            return Task.FromResult(providerResultCulture);
        }

        public static ProviderCultureResult ParseCookieValue(string value)
        {
            if (string.IsNullOrWhiteSpace(value))
            {
                return null;
            }

            var cultureName = value;
            var uiCultureName = value;

            if (cultureName == null && uiCultureName == null)
            {
                // No values specified for either so no match
                return null;
            }

            if (cultureName != null && uiCultureName == null)
            {
                uiCultureName = cultureName;
            }

            if (cultureName == null && uiCultureName != null)
            {
                cultureName = uiCultureName;
            }

            return new ProviderCultureResult(cultureName, uiCultureName);
        }
    }
}

The Account login view uses the localization to translate the different texts into one of the supported cultures.

@using System.Globalization
@using IdentityServerWithAspNetIdentity.Resources
@model IdentityServer4.Quickstart.UI.Models.LoginViewModel
@inject SignInManager<ApplicationUser> SignInManager

@inject LocService SharedLocalizer

@{
    ViewData["Title"] = @SharedLocalizer.GetLocalizedHtmlString("login");
}

<h2>@ViewData["Title"]</h2>
<div class="row">
    <div class="col-md-8">
        <section>
            <form asp-controller="Account" asp-action="Login" asp-route-returnurl="@Model.ReturnUrl" method="post" class="form-horizontal">
                <h4>@CultureInfo.CurrentCulture</h4>
                <hr />
                <div asp-validation-summary="All" class="text-danger"></div>
                <div class="form-group">
                    <label class="col-md-4 control-label">@SharedLocalizer.GetLocalizedHtmlString("email")</label>
                    <div class="col-md-8">
                        <input asp-for="Email" class="form-control" />
                        <span asp-validation-for="Email" class="text-danger"></span>
                    </div>
                </div>
                <div class="form-group">
                    <label class="col-md-4 control-label">@SharedLocalizer.GetLocalizedHtmlString("password")</label>
                    <div class="col-md-8">
                        <input asp-for="Password" class="form-control" type="password" />
                        <span asp-validation-for="Password" class="text-danger"></span>
                    </div>
                </div>
                <div class="form-group">
                    <label class="col-md-4 control-label">@SharedLocalizer.GetLocalizedHtmlString("rememberMe")</label>
                    <div class="checkbox col-md-8">
                        <input asp-for="RememberLogin" />
                    </div>
                </div>
                <div class="form-group">
                    <div class="col-md-offset-4 col-md-8">
                        <button type="submit" class="btn btn-default">@SharedLocalizer.GetLocalizedHtmlString("login")</button>
                    </div>
                </div>
                <p>
                    <a asp-action="Register" asp-route-returnurl="@Model.ReturnUrl">@SharedLocalizer.GetLocalizedHtmlString("registerAsNewUser")</a>
                </p>
                <p>
                    <a asp-action="ForgotPassword">@SharedLocalizer.GetLocalizedHtmlString("forgotYourPassword")</a>
                </p>
            </form>
        </section>
    </div>
</div>

@section Scripts {
    @{ await Html.RenderPartialAsync("_ValidationScriptsPartial"); }
}

The LocService uses the IStringLocalizerFactory interface to configure a shared resource for the resources.

using Microsoft.Extensions.Localization;
using System.Reflection;

namespace IdentityServerWithAspNetIdentity.Resources
{
    public class LocService
    {
        private readonly IStringLocalizer _localizer;

        public LocService(IStringLocalizerFactory factory)
        {
            var type = typeof(SharedResource);
            var assemblyName = new AssemblyName(type.GetTypeInfo().Assembly.FullName);
            _localizer = factory.Create("SharedResource", assemblyName.Name);
        }

        public LocalizedString GetLocalizedHtmlString(string key)
        {
            return _localizer[key];
        }
    }
}

Client Localization

The Angular SPA client uses the angular-l10n library to localize the application.

 "dependencies": {
    "angular-l10n": "^4.0.0",

The angular-l10n library is configured in the app module to save the current culture in a cookie called defaultLocale. This cookie matches what was configured on the server.

...

import { L10nConfig, L10nLoader, TranslationModule, StorageStrategy, ProviderType } from 'angular-l10n';

const l10nConfig: L10nConfig = {
    locale: {
        languages: [
            { code: 'en', dir: 'ltr' },
            { code: 'it', dir: 'ltr' },
            { code: 'fr', dir: 'ltr' },
            { code: 'de', dir: 'ltr' }
        ],
        language: 'en',
        storage: StorageStrategy.Cookie
    },
    translation: {
        providers: [
            { type: ProviderType.Static, prefix: './i18n/locale-' }
        ],
        caching: true,
        missingValue: 'No key'
    }
};

@NgModule({
    imports: [
        BrowserModule,
        FormsModule,
        routing,
        HttpClientModule,
        TranslationModule.forRoot(l10nConfig),
		DataEventRecordsModule,
        AuthModule.forRoot(),
    ],
    declarations: [
        AppComponent,
        ForbiddenComponent,
        HomeComponent,
        UnauthorizedComponent,
        SecureFilesComponent
    ],
    providers: [
        OidcSecurityService,
        SecureFileService,
        Configuration
    ],
    bootstrap:    [AppComponent],
})

export class AppModule {

    clientConfiguration: any;

    constructor(
        public oidcSecurityService: OidcSecurityService,
        private http: HttpClient,
        configuration: Configuration,
        public l10nLoader: L10nLoader
    ) {
        this.l10nLoader.load();

        console.log('APP STARTING');
        this.configClient().subscribe((config: any) => {
            this.clientConfiguration = config;

            let openIDImplicitFlowConfiguration = new OpenIDImplicitFlowConfiguration();
            openIDImplicitFlowConfiguration.stsServer = this.clientConfiguration.stsServer;
            openIDImplicitFlowConfiguration.redirect_url = this.clientConfiguration.redirect_url;
            // The Client MUST validate that the aud (audience) Claim contains its client_id value registered at the Issuer identified by the iss (issuer) Claim as an audience.
            // The ID Token MUST be rejected if the ID Token does not list the Client as a valid audience, or if it contains additional audiences not trusted by the Client.
            openIDImplicitFlowConfiguration.client_id = this.clientConfiguration.client_id;
            openIDImplicitFlowConfiguration.response_type = this.clientConfiguration.response_type;
            openIDImplicitFlowConfiguration.scope = this.clientConfiguration.scope;
            openIDImplicitFlowConfiguration.post_logout_redirect_uri = this.clientConfiguration.post_logout_redirect_uri;
            openIDImplicitFlowConfiguration.start_checksession = this.clientConfiguration.start_checksession;
            openIDImplicitFlowConfiguration.silent_renew = this.clientConfiguration.silent_renew;
            openIDImplicitFlowConfiguration.post_login_route = this.clientConfiguration.startup_route;
            // HTTP 403
            openIDImplicitFlowConfiguration.forbidden_route = this.clientConfiguration.forbidden_route;
            // HTTP 401
            openIDImplicitFlowConfiguration.unauthorized_route = this.clientConfiguration.unauthorized_route;
            openIDImplicitFlowConfiguration.log_console_warning_active = this.clientConfiguration.log_console_warning_active;
            openIDImplicitFlowConfiguration.log_console_debug_active = this.clientConfiguration.log_console_debug_active;
            // id_token C8: The iat Claim can be used to reject tokens that were issued too far away from the current time,
            // limiting the amount of time that nonces need to be stored to prevent attacks.The acceptable range is Client specific.
            openIDImplicitFlowConfiguration.max_id_token_iat_offset_allowed_in_seconds = this.clientConfiguration.max_id_token_iat_offset_allowed_in_seconds;

            configuration.FileServer = this.clientConfiguration.apiFileServer;
            configuration.Server = this.clientConfiguration.apiServer;

            this.oidcSecurityService.setupModule(openIDImplicitFlowConfiguration);

            // if you need custom parameters
            // this.oidcSecurityService.setCustomRequestParameters({ 'culture': 'fr-CH', 'ui-culture': 'fr-CH', 'ui_locales': 'fr-CH' });
        });
    }

    configClient() {

        console.log('window.location', window.location);
        console.log('window.location.href', window.location.href);
        console.log('window.location.origin', window.location.origin);
        console.log(`${window.location.origin}/api/ClientAppSettings`);

        return this.http.get(`${window.location.origin}/api/ClientAppSettings`);
    }
}

When the applications are started, the user can select a culture and log in.

And the login view is localized correctly in de-CH.

Or in French, if the culture is fr-CH.

Links:

https://damienbod.com/2017/11/11/identityserver4-localization-using-ui_locales-and-the-query-string/

https://damienbod.com/2017/11/01/shared-localization-in-asp-net-core-mvc/

https://github.com/IdentityServer/IdentityServer4

https://docs.microsoft.com/en-us/aspnet/core/fundamentals/localization

https://github.com/robisim74/angular-l10n



Anuraj Parameswaran: Getting started with OData in ASP.NET Core

This post is about getting started with OData in ASP.NET Core. OData (Open Data Protocol) is an ISO/IEC approved, OASIS standard that defines a set of best practices for building and consuming RESTful APIs. OData helps you focus on your business logic while building RESTful APIs without having to worry about the various approaches to define request and response headers, status codes, HTTP methods, URL conventions, media types, payload formats, query options, etc. OData also provides guidance for tracking changes, defining functions/actions for reusable procedures, and sending asynchronous/batch requests.


Damien Bowden: Shared Localization in ASP.NET Core MVC

This article shows how ASP.NET Core MVC razor views and view models can use localized strings from a shared resource. This saves you from creating many different files and duplicating translations for the different views and models. It makes it much easier to manage your translations, and also reduces the effort required to export and import the translations.

Code: https://github.com/damienbod/AspNetCoreMvcSharedLocalization

A default ASP.NET Core MVC application with Individual user accounts authentication is used to create the application.

A LocService class is used, which takes the IStringLocalizerFactory interface as a dependency using construction injection. The factory is then used, to create an IStringLocalizer instance using the type from the SharedResource class.

using Microsoft.Extensions.Localization;
using System.Reflection;

namespace AspNetCoreMvcSharedLocalization.Resources
{
    public class LocService
    {
        private readonly IStringLocalizer _localizer;

        public LocService(IStringLocalizerFactory factory)
        {
            var type = typeof(SharedResource);
            var assemblyName = new AssemblyName(type.GetTypeInfo().Assembly.FullName);
            _localizer = factory.Create("SharedResource", assemblyName.Name);
        }

        public LocalizedString GetLocalizedHtmlString(string key)
        {
            return _localizer[key];
        }
    }
}

The dummy SharedResource is required to create the IStringLocalizer instance using the type from the class.

namespace AspNetCoreMvcSharedLocalization.Resources
{
    /// <summary>
    /// Dummy class to group shared resources
    /// </summary>
    public class SharedResource
    {
    }
}

The resx resource files are added with a name that matches the IStringLocalizer definition. This example uses SharedResource.de-CH.resx and the other localizations as required. One of the biggest problems with ASP.NET Core localization is that if the name of the resx file does not match the name/type of the class or view using the resource, it will not be found and so not localized. The default string, which is the name of the resource key, is used instead. This is also a problem because we program in English, but the default language is German or French. Some programmers don't understand German, and it is bad to have German strings throughout an English code base.
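As a rough sketch of the resulting layout (only SharedResource.de-CH.resx is named explicitly in this post; the other file names below are assumed, following the same convention and the supported cultures configured later):

Resources/
    SharedResource.cs           // the dummy marker class shown above
    SharedResource.de-CH.resx
    SharedResource.fr-CH.resx
    SharedResource.it-CH.resx
    SharedResource.en-US.resx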

The localization setup is then added to the startup class. This application uses de-CH, it-CH, fr-CH and en-US. The QueryStringRequestCultureProvider is used to set the request localization.

public void ConfigureServices(IServiceCollection services)
{
	...

	services.AddSingleton<LocService>();
	services.AddLocalization(options => options.ResourcesPath = "Resources");

	services.AddMvc()
		.AddViewLocalization()
		.AddDataAnnotationsLocalization(options =>
		{
			options.DataAnnotationLocalizerProvider = (type, factory) =>
			{
				var assemblyName = new AssemblyName(typeof(SharedResource).GetTypeInfo().Assembly.FullName);
				return factory.Create("SharedResource", assemblyName.Name);
			};
		});

	services.Configure<RequestLocalizationOptions>(
		options =>
		{
			var supportedCultures = new List<CultureInfo>
				{
					new CultureInfo("en-US"),
					new CultureInfo("de-CH"),
					new CultureInfo("fr-CH"),
					new CultureInfo("it-CH")
				};

			options.DefaultRequestCulture = new RequestCulture(culture: "de-CH", uiCulture: "de-CH");
			options.SupportedCultures = supportedCultures;
			options.SupportedUICultures = supportedCultures;

			options.RequestCultureProviders.Insert(0, new QueryStringRequestCultureProvider());
		});

	services.AddMvc();
}

The localization is then added as a middleware.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
	...

	var locOptions = app.ApplicationServices.GetService<IOptions<RequestLocalizationOptions>>();
	app.UseRequestLocalization(locOptions.Value);

	app.UseStaticFiles();

	app.UseAuthentication();

	app.UseMvc(routes =>
	{
		routes.MapRoute(
			name: "default",
			template: "{controller=Home}/{action=Index}/{id?}");
	});
}

Razor Views

The razor views use the shared resource localization by injecting the LocService. This was registered in the IoC in the startup class. The localized strings can then be used as required.

@model RegisterViewModel
@using AspNetCoreMvcSharedLocalization.Resources

@inject LocService SharedLocalizer

@{
    ViewData["Title"] = @SharedLocalizer.GetLocalizedHtmlString("register");
}
<h2>@ViewData["Title"]</h2>
<form asp-controller="Account" asp-action="Register" asp-route-returnurl="@ViewData["ReturnUrl"]" method="post" class="form-horizontal">
    <h4>@SharedLocalizer.GetLocalizedHtmlString("createNewAccount")</h4>
    <hr />
    <div asp-validation-summary="All" class="text-danger"></div>
    <div class="form-group">
        <label class="col-md-2 control-label">@SharedLocalizer.GetLocalizedHtmlString("email")</label>
        <div class="col-md-10">
            <input asp-for="Email" class="form-control" />
            <span asp-validation-for="Email" class="text-danger"></span>
        </div>
    </div>
    <div class="form-group">
        <label class="col-md-2 control-label">@SharedLocalizer.GetLocalizedHtmlString("password")</label>
        <div class="col-md-10">
            <input asp-for="Password" class="form-control" />
            <span asp-validation-for="Password" class="text-danger"></span>
        </div>
    </div>
    <div class="form-group">
        <label class="col-md-2 control-label">@SharedLocalizer.GetLocalizedHtmlString("confirmPassword")</label>
        <div class="col-md-10">
            <input asp-for="ConfirmPassword" class="form-control" />
            <span asp-validation-for="ConfirmPassword" class="text-danger"></span>
        </div>
    </div>
    <div class="form-group">
        <div class="col-md-offset-2 col-md-10">
            <button type="submit" class="btn btn-default">@SharedLocalizer.GetLocalizedHtmlString("register")</button>
        </div>
    </div>
</form>
@section Scripts {
    @{ await Html.RenderPartialAsync("_ValidationScriptsPartial"); }
}

View Model

The models validation messages are also localized. The ErrorMessage of the attributes are used to get the localized strings.

using System.ComponentModel.DataAnnotations;

namespace AspNetCoreMvcSharedLocalization.Models.AccountViewModels
{
    public class RegisterViewModel
    {
        [Required(ErrorMessage = "emailRequired")]
        [EmailAddress]
        [Display(Name = "Email")]
        public string Email { get; set; }

        [Required(ErrorMessage = "passwordRequired")]
        [StringLength(100, ErrorMessage = "passwordStringLength", MinimumLength = 8)]
        [DataType(DataType.Password)]
        [Display(Name = "Password")]
        public string Password { get; set; }

        [DataType(DataType.Password)]
        [Display(Name = "Confirm password")]
        [Compare("Password", ErrorMessage = "confirmPasswordNotMatching")]
        public string ConfirmPassword { get; set; }
    }
}

The AddDataAnnotationsLocalization DataAnnotationLocalizerProvider is setup to always use the SharedResource resx files for all of the models. This prevents duplicating the localizations for each of the different models.

.AddDataAnnotationsLocalization(options =>
{
	options.DataAnnotationLocalizerProvider = (type, factory) =>
	{
		var assemblyName = new AssemblyName(typeof(SharedResource).GetTypeInfo().Assembly.FullName);
		return factory.Create("SharedResource", assemblyName.Name);
	};
});

The localization can be tested using the following requests:

https://localhost:44371/Account/Register?culture=de-CH&ui-culture=de-CH
https://localhost:44371/Account/Register?culture=it-CH&ui-culture=it-CH
https://localhost:44371/Account/Register?culture=fr-CH&ui-culture=fr-CH
https://localhost:44371/Account/Register?culture=en-US&ui-culture=en-US

The QueryStringRequestCultureProvider reads the culture and the ui-culture from the parameters. You could also use headers or cookies to send the required localization in the request, but this needs to be configured in the Startup class.
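
If you want to use the cookie approach instead, the CookieRequestCultureProvider (one of the default providers) reads the culture from a cookie. The following is a minimal sketch of a controller action which sets that cookie; the controller, action and parameter names are illustrative and not part of the sample project above.

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Localization;
using Microsoft.AspNetCore.Mvc;
using System;

public class CultureController : Controller
{
    [HttpPost]
    public IActionResult SetCulture(string culture, string returnUrl)
    {
        // CookieRequestCultureProvider is part of the default RequestCultureProviders,
        // so writing this cookie is enough for subsequent requests to pick up the culture.
        Response.Cookies.Append(
            CookieRequestCultureProvider.DefaultCookieName,
            CookieRequestCultureProvider.MakeCookieValue(new RequestCulture(culture)),
            new CookieOptions { Expires = DateTimeOffset.UtcNow.AddYears(1) });

        return LocalRedirect(returnUrl);
    }
}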

Links:

https://docs.microsoft.com/en-us/aspnet/core/fundamentals/localization



Andrew Lock: Fixing the error "Program has more than one entry point defined" for console apps containing xUnit tests

This post describes a problem I ran into when converting a test project from .NET Framework to .NET Core. The test project was a console app, so that specific tests could easily be run from the command line, as well as using the normal xUnit console runner. Unfortunately, after converting the project to .NET Core, the project would no longer compile, giving the error:

CS0017 Program has more than one entry point defined. Compile with /main to specify the type that contains the entry point.

This post digs into the root cause of the error, why it manifests, and how to fix it. The issue and solution are described in this GitHub issue.

tl;dr; Add <GenerateProgramFile>false</GenerateProgramFile> inside a <PropertyGroup> element in your test project's .csproj file.

The problematic test project

The configuration I ran into probably isn't that common, but I have seen it used in a few places. Essentially, you have a console application that contains xUnit (or some other testing framework like MSTest) tests. You can then easily run certain tests from the command line, without having to use the specific xUnit or MSTest test runner/harness.

For example, you might have some key integration tests that you want to be able to run on a machine that doesn't have the required unit testing runners installed. By using the console-app approach, you can simply call the test methods in the program's static void Main method. Alternatively, you may want to include xUnit tests as part of your real app.

Consider the following example project. It consists of a single "integration" test in the CriticalTests class, and a Program.cs that runs the test on startup.

The test might look something like:

public class CriticalTests  
{
    [Fact]
    public void MyIntegrationTest()
    {
        // Do something
        Console.WriteLine("Testing complete");
    }
}

The Program.cs file simply creates an instance of this class, and invokes the MyIntegrationTest method directly:

public class Program  
{
    public static void Main(string[] args)
    {
        new CriticalTests().MyIntegrationTest();
    }
}

Unfortunately, this project won't compile. Instead, you'll get this error:

CS0017 Program has more than one entry point defined. Compile with /main to specify the type that contains the entry point.

Why is there a problem?

On the face of it, this doesn't make sense. The dotnet test documentation states:

Unit tests are console application projects…

so if they're console applications, surely they should have a static void Main, right?

Why are test projects console projects?

The detail of why a test project is a console application is a little subtle; it's not immediately obvious from looking at the .csproj project file.

For example, consider the following project file. This is for a .NET Core class library project, created using the "SDK style" .csproj file. There's not much to it, just the Sdk attribute and the TargetFramework (which is .NET Core 2.0):

<Project Sdk="Microsoft.NET.Sdk">  
  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
</Project>  

Now let's look at a .NET Core console project's .csproj.

<Project Sdk="Microsoft.NET.Sdk">  
  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
    <OutputType>Exe</OutputType>
  </PropertyGroup>
</Project>  

This is almost identical; the only difference is the <OutputType> element, which tells MSBuild we're producing a console app instead of a library project.

Finally, let's look at a .NET Core (xUnit) test project's .csproj:

<Project Sdk="Microsoft.NET.Sdk">  
  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
    <IsPackable>false</IsPackable>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.3.0" />
    <PackageReference Include="xunit" Version="2.3.1" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.3.1" />
  </ItemGroup>
</Project>  

This project has a bit more to it, but the notable features are:

  • The Sdk is the same as the library and console projects
  • It has an <IsPackable> property set to false, so calling dotnet pack on the solution won't try to create a NuGet package for this project
  • It has three NuGet packages for the .NET Test SDK, xUnit, and the xUnit adapter for dotnet test
  • It doesn't have an <OutputType> of Exe

The interesting point is that last bullet. I (and the documentation) stated that a test project is a console app, so why doesn't it have an <OutputType>?

The secret is that the Microsoft.NET.Test.Sdk NuGet package injects the <OutputType> element when you build your project. It does this by including a .targets file in the package, which runs automatically when your project builds.

If you want to see for yourself, open the Microsoft.Net.Test.Sdk.targets file from the NuGet package (e.g. at %USERPROFILE%\.nuget\packages\microsoft.net.test.sdk\15.3.0\build\netcoreapp1.0). Alternatively, you can view the file on NuGet. The important part is:

<PropertyGroup Condition="'$(TargetFrameworkIdentifier)' == '.NETCoreApp'">  
  <OutputType>Exe</OutputType>
</PropertyGroup>  

So if the project is a .NET Core project, it adds the <OutputType>. That explains why the project is a console app, but it doesn't explain why we're getting a build error…

It also doesn't explain why the test SDK needs to do this in the first place, but I don't have the answer to that one. This comment by Brad Wilson suggests it's not actually required, and is there for legacy reasons more than anything else.

Why is there a build error?

The build error is actually a consequence of forcing the project to a console app. If you take a library project and simply add the <OutputType>Exe</OutputType> element to it, you'll get the following error instead:

CS5001 Program does not contain a static 'Main' method suitable for an entry point  

A console app needs an "entry point", i.e. a method to run when the app starts. So to convert a library project to a console app, you must also add a Program class with a static void Main method.

Can you see the problem with that, given the Microsoft.Net.Test.Sdk <OutputType> behaviour?

If adding the Microsoft.Net.Test.Sdk package to a library project silently converted it to a console app, then you'd get a build error by default. You'd be forced to add a static void Main, even if you only ever wanted to run the app using dotnet test or Visual Studio's Test Explorer.

To get round this, the SDK package automatically generates a Program file for you if you're running on .NET Core. This ensures the build doesn't break when you add the package to a class library. You can see the MSBuild target that does this in the .targets file for the package. It creates a Program file using the correct language (VB or C#) and compiles it into your test project.
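
The exact contents of the generated file depend on the SDK version, but conceptually it is just an empty entry point, roughly equivalent to the following sketch (not the literal generated source):

// Rough approximation of the Program file generated by Microsoft.NET.Test.Sdk
// when GenerateProgramFile is true - the real file is auto-generated and may differ.
public class AutoGeneratedProgram
{
    public static void Main(string[] args)
    {
        // Intentionally empty: the test host drives execution, not this entry point.
    }
}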

Which leads us back to the original error. If you already have a Program file in your project, the compiler doesn't know which file to choose as the entry point for your app, hence the message:

CS0017 Program has more than one entry point defined. Compile with /main to specify the type that contains the entry point.

So now we know exactly what's happening, we can fix it.

The solution

The error message gives you a hint as to how to fix it (Compile with /main), but the compiler is assuming you actually want both Program classes. In reality, we don't need the auto-generated one at all, as we have our own.

Luckily, the Microsoft.Net.Test.Sdk.targets file uses a property to determine whether it should generate the file:

 <GenerateProgramFile Condition="'$(GenerateProgramFile)' == ''">true</GenerateProgramFile>

This defines a property called $(GenerateProgramFile), and sets its value to true as long as it doesn't already have a value.

We can use that condition to override the property's value to false in our test csproj file, by adding <GenerateProgramFile>false</GenerateProgramFile> to a PropertyGroup. For example:

<Project Sdk="Microsoft.NET.Sdk">  
  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
    <IsPackable>false</IsPackable>
    <GenerateProgramFile>false</GenerateProgramFile>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.3.0" />
    <PackageReference Include="xunit" Version="2.3.1" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.3.1" />
  </ItemGroup>
</Project>  

With the property added, we can build and run our application either by using dotnet test, or by simply running the console app directly.

Summary

The Microsoft.Net.Test.Sdk NuGet package required for testing with the dotnet test framework includes an MSBuild .targets file that adds an <OutputType>Exe</OutputType> property to your test project, and automatically generates a Program file.

If your test project is already a console application, or includes a Program class with a static void Main method, then you must disable the auto-generation of the program file. Add the following element to your test project's .csproj, inside a <PropertyGroup> element:

<GenerateProgramFile>false</GenerateProgramFile>  


Dominick Baier: Using iOS11 SFAuthenticationSession with IdentityModel.OidcClient

Starting with iOS 11, there’s a special system service for browser-based authentication called SFAuthenticationSession. This is the recommended approach for OpenID Connect and OAuth 2 native iOS clients (see RFC8252).

If you are using our OidcClient library – this is how you would wrap that in an IBrowser:

using Foundation;
using System.Threading.Tasks;
using IdentityModel.OidcClient.Browser;
using SafariServices;
 
namespace iOS11Client
{
    public class SystemBrowser : IBrowser
    {
        SFAuthenticationSession _sf;
 
        public Task<BrowserResult> InvokeAsync(BrowserOptions options)
        {
            var wait = new TaskCompletionSource<BrowserResult>();
 
            _sf = new SFAuthenticationSession(
                new NSUrl(options.StartUrl),
                options.EndUrl,
                (callbackUrl, error) =>
                {
                    if (error != null)
                    {
                        var errorResult = new BrowserResult
                        {
                            ResultType = BrowserResultType.UserCancel,
                            Error = error.ToString()
                        };
 
                        wait.SetResult(errorResult);
                    }
                    else
                    {
                        var result = new BrowserResult
                        {
                            ResultType = BrowserResultType.Success,
                            Response = callbackUrl.AbsoluteString
                        };
 
                        wait.SetResult(result);
                    }
                });
 
            _sf.Start();
            return wait.Task;
        }
    }
}
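
To use this browser with OidcClient, you would typically assign it to the Browser property of OidcClientOptions. A minimal sketch follows; the authority, client id, scope and redirect URI are placeholders rather than values from this post.

using System.Threading.Tasks;
using IdentityModel.OidcClient;

public static class LoginExample
{
    public static async Task<LoginResult> LoginAsync()
    {
        var options = new OidcClientOptions
        {
            Authority = "https://demo.identityserver.io",  // placeholder
            ClientId = "native.client",                    // placeholder
            Scope = "openid profile api",                  // placeholder
            RedirectUri = "com.example.myapp:/callback",   // placeholder
            Browser = new SystemBrowser()
        };

        // OidcClient drives SFAuthenticationSession via the IBrowser implementation above
        var client = new OidcClient(options);
        return await client.LoginAsync();
    }
}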

Filed under: .NET Security, IdentityModel, OAuth, OpenID Connect, Uncategorized, WebAPI


Dominick Baier: Templates for IdentityServer4 v2

I finally found the time to update the templates for IdentityServer4 to version 2. You can find the source code and instructions here.

To be honest, I didn't have time to research more advanced features like post-actions (I wanted to do an automatic restore, but it didn't work for me) or VSIX for Visual Studio integration. If anyone has experience in this area, feel free to contact me on GitHub.

Also – more advanced templates are coming soon (e.g. ASP.NET Identity, EF etc…)

Filed under: IdentityServer, OAuth, OpenID Connect, Uncategorized, WebAPI


Andrew Lock: Migrating passwords in ASP.NET Core Identity with a custom PasswordHasher

In my last post I provided an overview of the ASP.NET Core Identity PasswordHasher<> implementation, and how it enables backwards compatibility between password hashing algorithms. In this post, I'll create a custom implementation of IPasswordHasher<> that we can use to support other password formats. We'll use this to migrate existing password hashes created using BCrypt to the default ASP.NET Core Identity hashing format.

Disclaimer: You should always think carefully before replacing security-related components, as a lot of effort goes into making the default components secure by default. This article solves a specific problem, but you should only use it if you need it!

As in the last post, the code I'm going to show is based on the ASP.NET Core 2.0 release. You can view the full source code for ASP.NET Core Identity on GitHub here.

Background

As I discussed in my last post, the IPasswordHasher<> interface has two responsibilities:

  • Hash a password so it can be stored in a database
  • Verify a provided plain-text password matches a previously stored hash

In this post I'm focusing on the scenario where you want to add ASP.NET Core Identity to an existing app, or you already have a database that contains usernames and password hashes.

The problem is that your password hashes are stored using a hash format that isn't compatible with ASP.NET Core Identity. In this example, I'm going to assume your passwords are hashed using BCrypt, using the excellent BCrypt.Net library, but you could easily apply it to any other hashing algorithm. The BCryptPasswordHasher<> we create will allow you to verify existing password hashes created using BCrypt, as well as hashes created by ASP.NET Core Identity.

Note that I won't be implementing the algorithms to hash new passwords with BCrypt. Instead, the hasher will create new hashes using the default ASP.NET Core Identity hash function, by deriving from the default PasswordHasher<> implementation. Also, when a user logs in and verifies their password, the hasher will optionally re-hash the password using the ASP.NET Core Identity default hash function. That way, hashes will slowly migrate from the legacy hash function to the default hash function.

As a reminder, the ASP.NET Core Identity (v3) password hasher mode uses the PBKDF2 algorithm with HMAC-SHA256, 128-bit salt, 256-bit subkey, and 10,000 iterations.

If you want to keep all your passwords using BCrypt, then you could implement IPasswordHasher<> directly. That would actually make the code simpler, as you won't need to handle multiple hash formats, but I specifically wanted to migrate our passwords to the NIST-recommended PBKDF2 algorithm, hence this hybrid-solution.

The implementation

As discussed in my last post, the default PasswordHasher<> implementation already handles multiple hashing formats, namely two different versions of PBKDF2. It does this by storing a single-byte "format-marker" along with the password hash. The whole combination is then base64 encoded and stored in the database as a string.

When a password needs to be verified compared to a stored hash, the hash is read from the database, decoded from base64 to bytes, and the first byte is inspected. If the byte is a 0x00, the password hash was created using v2 of the hashing algorithm. If the byte is a 0x01, then v3 was used.

We maintain compatibility with the PasswordHasher algorithm by storing our own custom format marker in the first byte of the password hash, in a similar fashion. 0x00 and 0x01 are already taken, so I chose 0xFF as it seems like it should be safe for a while!

When a password hash and plain-text password are provided for verification, we follow a similar approach to the default PasswordHasher<>. We convert the password from Base64 into bytes, and examine the first byte. If the hash starts with 0xFF then we have a BCrypt hash. If it starts with something else, then we just delegate the call to the base PasswordHasher<> implementation we derive from.

using System;
using System.Text;
using Microsoft.AspNetCore.Identity;
using BCrypt = BCrypt.Net.BCrypt; // alias so the BCrypt.Verify call below resolves

/// <summary>
/// A drop-in replacement for the standard Identity hasher to be backwards compatible with existing bcrypt hashes
/// New passwords will be hashed with Identity V3
/// </summary>
public class BCryptPasswordHasher<TUser> : PasswordHasher<TUser> where TUser : class  
{
    readonly BCryptPasswordSettings _settings;
    public BCryptPasswordHasher(BCryptPasswordSettings settings)
    {
        _settings = settings;
    }

    public override PasswordVerificationResult VerifyHashedPassword(TUser user, string hashedPassword, string providedPassword)
    {
        if (hashedPassword == null) { throw new ArgumentNullException(nameof(hashedPassword)); }
        if (providedPassword == null) { throw new ArgumentNullException(nameof(providedPassword)); }

        byte[] decodedHashedPassword = Convert.FromBase64String(hashedPassword);

        // read the format marker from the hashed password
        if (decodedHashedPassword.Length == 0)
        {
            return PasswordVerificationResult.Failed;
        }

        // ASP.NET Core uses 0x00 and 0x01, so we start at the other end
        if (decodedHashedPassword[0] == 0xFF)
        {
            if (VerifyHashedPasswordBcrypt(decodedHashedPassword, providedPassword))
            {
                // This is an old password hash format - the caller needs to rehash if we're not running in an older compat mode.
                return _settings.RehashPasswords
                    ? PasswordVerificationResult.SuccessRehashNeeded
                    : PasswordVerificationResult.Success;
            }
            else
            {
                return PasswordVerificationResult.Failed;
            }
        }

        return base.VerifyHashedPassword(user, hashedPassword, providedPassword);
    }

    private static bool VerifyHashedPasswordBcrypt(byte[] hashedPassword, string password)
    {
        if (hashedPassword.Length < 2)
        {
            return false; // bad size
        }

        //convert back to string for BCrypt, ignoring first byte
        var storedHash = Encoding.UTF8.GetString(hashedPassword, 1, hashedPassword.Length - 1);

        return BCrypt.Verify(password, storedHash);
    }
}

Note that the PasswordHasher<> we derive from takes an optional IOptions<PasswordHasherOptions> object in its constructor. If you want to provide a custom PasswordHasherOptions object to the base implementation then you could add that to the BCryptPasswordHasher<> constructor. If you don't, the default options will be used instead.
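
For example, an alternative constructor that forwards the options to the base class could look like the following sketch (only needed if you want non-default PasswordHasherOptions):

// Hypothetical alternative constructor for BCryptPasswordHasher<TUser> that forwards
// the standard hasher options to the base PasswordHasher<TUser> implementation
public BCryptPasswordHasher(BCryptPasswordSettings settings, IOptions<PasswordHasherOptions> optionsAccessor)
    : base(optionsAccessor)
{
    _settings = settings;
}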

Instead, I provide a BCryptPasswordSettings parameter in the constructor. This controls whether existing BCrypt passwords should be re-hashed with the ASP.NET Core Identity hashing algorithm, or whether they should be left as BCrypt passwords, based on the RehashPasswords property:

public class BCryptPasswordSettings  
{
    public bool RehashPasswords { get; set; }
}

Note, even if RehashPasswords is false, new passwords will be created using the identity v3 PBKDF2 format. If you want to ensure all your passwords are kept in the BCrypt format, you will need to override the HashPassword method too.

You can replace the default PasswordHasher<> implementation by registering the BCryptPasswordHasher in Startup.ConfigureServices(). Just make sure you register it before the call to AddIdentity, e.g.:

public void ConfigureServices(IServiceCollection services)  
{
    // must be added before AddIdentity()
    services.AddScoped<IPasswordHasher<ApplicationUser>, BCryptPasswordHasher<ApplicationUser>>();

    services.AddIdentity<ApplicationUser, IdentityRole>()
        .AddEntityFrameworkStores<ApplicationDbContext>()
        .AddDefaultTokenProviders();

    services.AddMvc();
}

By registering your custom implementation of IPasswordHasher<> first, the default implementation PasswordHasher<> will be skipped in the call to AddIdentity(), avoiding any issues due to multiple registered instances.
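
One thing to watch: BCryptPasswordHasher<> takes a BCryptPasswordSettings in its constructor, so that object also needs to be resolvable from the container. A minimal sketch of one way to do that:

// Register the settings instance so the container can construct the hasher
services.AddSingleton(new BCryptPasswordSettings { RehashPasswords = true });
services.AddScoped<IPasswordHasher<ApplicationUser>, BCryptPasswordHasher<ApplicationUser>>();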

Converting stored BCrypt passwords to support the BCryptPasswordHasher

The mechanism of extending the default PasswordHasher<> implementation I've shown in this post hinges on the ability to detect the hashing algorithm by checking the first byte of the stored password hash. That means you'll need to update your stored BCrypt hashes to include the first-byte format marker and to be Base64 encoded.

Exactly how you choose to do this is highly dependent on how and where your passwords are stored, but I've provided a basic function below that takes an existing password hash stored as a string, a byte format-marker, and produces a string in a format compatible with the BCryptPasswordHasher.

public static class HashHelper  
{
    public static string ConvertPasswordFormat(string passwordHash, byte formatMarker)
    {
        var bytes = Encoding.UTF8.GetBytes(passwordHash);
        var bytesWithMarker = new byte[bytes.Length + 1];
        bytesWithMarker[0] = formatMarker;
        bytes.CopyTo(bytesWithMarker, 1);
        return Convert.ToBase64String(bytesWithMarker);
    }
}

For our BCryptPasswordHasher, we would add the format marker to an existing BCrypt hash using:

var newHash = HashHelper.ConvertPasswordFormat(bcryptHash, 0xFF);  

Summary

In this post I showed how you could extend the default ASP.NET Core Identity PasswordHasher<> implementation to support additional password formats. This lets you verify hashes created using a legacy format (BCrypt in this example), and update them to use the default Identity password hashing algorithm.


Pedro Félix: RFC 8252 and OAuth 2.0 for Native Apps

Introduction

RFC 8252 – OAuth 2.0 for Native Apps, published this month by IETF as a Best Current Practice, contains much needed guidance on how to use the OAuth 2.0 framework in native applications.
In this post I present a brief summary of the defined best practices.

OAuth for native applications

When the OAuth 1.0 protocol was introduced 10 years ago, its main goal was delegated authorization for client applications accessing services on behalf of users.
It was focused on the model “du jour”, where client applications were mostly server-side rendered Web sites, interacting with end users via browsers.
For instance, it was assumed that client applications were capable of holding long-term secrets, which is rather easy for servers but not for browser-side applications or native applications running on the user’s device.

OAuth 2.0, published in 2012, introduced a framework with multiple different flows, including support for public clients, that is, clients that don't need to hold secrets.
However, it was still pretty much focused on classical Web sites, and using this framework in the context of native applications was mostly left as an exercise for the reader.
Some of the questions that didn't have a clear or straightforward answer were:

  • What is the adequate flow for a native application?
  • Should a native application be considered a confidential client or a public client?
  • Assuming an authorization code flow or implicit flow, how should the authorization request be performed: via an embedded web view or via the system browser?
  • How is the authorization response redirected back into the client application, since it isn’t a server any more? Via listening on a loopback port or using platform specific mechanisms (e.g. Android intents and custom URI schemes)?
  • What’s the proper way for avoiding code or token leakage into malicious applications also installed in the user’s device?

The first major guidance to these questions came with RFC 7636 – Proof Key for Code Exchange by OAuth Public Clients, published in 2015.
This document defines a way to use the authorization code flow with public clients, i.e. adequate to native applications, protected against the interception of the authorization code by another application (e.g. malicious applications installed in the same user device).
The problem that it addresses as well as the proposed solutions are described on a previous post: OAuth 2.0 and PKCE.

The recently published RFC 8252 – OAuth 2.0 for Native Apps (October 2017) builds upon RFC 7636 and defines a set of best practices for when using OAuth 2.0 on native applications, with emphasis on the user-agent integration aspects.

In summary, it defines the following best practices:

  • A native client application must be a public client, except if using dynamic client registration (RFC 7591) to provision per-device unique clients, where each application installation has its own set of secret credentials – section 8.4.

  • The client application should use the authorization code grant flow with PKCE (RFC 7636 – Proof Key for Code Exchange by OAuth Public Clients), instead of the implicit flow, namely because the latter does not support the protection provided by PKCE – section 8.2.

  • The application should use an external user-agent, such as the system browser, instead of an embedded user-agent such as a web view – section 4.

    • An application using a web view can control everything that happens inside it, namely access the user's credentials when they are entered into it.
      Using an external user-agent isolates the user credentials from the client application, which is one of the OAuth 2.0 original goals.

    • Using the system browser can also provide a kind of Single Sign-On – users delegating access to multiple applications using the same authorization server (or delegated identity provider) only have to authenticate once, because the session artifacts (e.g. cookies) will still be available.

    • To avoid switching out of the application into the external user-agent, which may not provide a good user experience, some platforms support “in-app browser tabs” where the user agent seems to be embedded into the application, while supporting full data isolation – iOS SFAuthenticationSession or Android’s Chrome Custom Tabs.

  • The authorization request should use one of the chosen user-agent mechanisms, by providing it with the URI for the authorization endpoint with the embedded request on it.

  • The redirect back to the application can use one of multiple techniques.

    • Use a redirect endpoint (e.g. com.example.myapp:/oauth2/redirect) with a private scheme (e.g. com.example.myapp) that points to the application.
      Android’s implicit intents are an example of a mechanism allowing this.
      When using this technique, the custom URI scheme must be the reversal of a domain name under the application’s control (e.g. com.example.myapp if the myapp.example.com name is controlled by the application’s author) – section 7.1.

    • Another option is to use a claimed HTTPS redirect URI, which is a feature provided by some platforms (e.g. Android’s App Links) where a request to a claimed URI triggers a call into the application instead of a regular HTTP request. This is considered to be the preferred method – section 7.2.

    • As a final option, the redirect can be performed by having the application listening on the loopback interface (127.0.0.1 or ::1).

To illustrate these best practices, the following diagram represents an end-to-end OAuth 2.0 flow on a native application

(Diagram: end-to-end OAuth 2.0 flow for a native application)

  • On step 1, the application invokes the external user-agent, using a platform specific mechanism, and passing in the authorization URI with the embedded authorization request.

  • As a consequence, on step 2, the external user-agent is activated and makes an HTTP request to the authorization endpoint using the provided URI.

  • The response to step 2 depends on the concrete authorization endpoint and is not defined by the OAuth specifications.
    A common pattern is for the response to be a redirect to a login page, followed by a consent page.

  • On step 3, after ending the direct user interaction, the authorization endpoint produces an HTTP response with the authorization response embedded inside (e.g. a 302 status code with the authorization response URI in the Location header).

  • On step 4, the user-agent reacts to this response by processing the redirect URI.
    If using a private scheme or a claimed redirect URI, the user-agent uses a platform specific inter-process communication mechanism to deliver the authorization response to the application (e.g. Android's intents).
    If using localhost as the redirect URI host, the user-agent makes a regular HTTP request to the loopback interface, on which the application is listening, thereby providing the authorization response to it.

  • On steps 5 and 6, the application exchanges the authorization code for the access and refresh tokens, using a straightforward token request.
    This interaction is done directly between the client and the token endpoint, without going through the user-agent, since no user interaction is needed (back-channel vs. front-channel).
    Since the client is public, this interaction is not authenticated (i.e. does not include any client application credentials).
    Due to this anonymous characteristic and to protect against code hijacking, the code_verifier parameter from the PKCE extension must be added to the token request (a small sketch of how the PKCE values can be generated is shown after this list).

  • Finally, on step 7, the application can use the resource server on the user’s behalf, by adding the access token to the Authorization header.
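
As referenced above, the following is a minimal sketch of generating a code_verifier and its corresponding S256 code_challenge; the helper names are illustrative, and libraries such as AppAuth already provide equivalents, so you would not normally write this yourself.

using System;
using System.Security.Cryptography;
using System.Text;

public static class PkceHelper
{
    // Create a high-entropy code_verifier (RFC 7636 requires 43-128 characters)
    public static string CreateCodeVerifier()
    {
        var bytes = new byte[32];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(bytes);
        }
        return Base64UrlEncode(bytes); // 43 characters
    }

    // Derive the S256 code_challenge sent in the authorization request
    public static string CreateCodeChallenge(string codeVerifier)
    {
        using (var sha256 = SHA256.Create())
        {
            var hash = sha256.ComputeHash(Encoding.ASCII.GetBytes(codeVerifier));
            return Base64UrlEncode(hash);
        }
    }

    static string Base64UrlEncode(byte[] bytes) =>
        Convert.ToBase64String(bytes).TrimEnd('=').Replace('+', '-').Replace('/', '_');
}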

The AppAuth libraries for iOS and Android already follow these best practices.

I hope this brief summary helps.
As always, questions, comments, and suggestions are highly appreciated.



Andrew Lock: Exploring the ASP.NET Core Identity PasswordHasher

In this post I'll look at some of the source code that makes up the ASP.NET Core Identity framework. In particular, I'm going to look at the PasswordHasher<T> implementation, and how it handles hashing user passwords for verification and storage. You'll also see how it handles updating the hashing algorithm used by your app, while maintaining backwards compatibility with existing hash functions.

I'll start by describing where password hashing fits into ASP.NET Core Identity overall, and the functionality provided by the IPasswordHasher<TUser> interface. Then I'll provide a high-level overview of the PasswordHasher<T> implementation, before finally digging into a few details.

In the next post, I'll show how to create a custom IPasswordHasher<TUser> implementation, so you can integrate an existing user database into ASP.NET Core Identity. This will let you use your existing password hashes without having to reset every user's password, and optionally allow you to migrate them to the suggested ASP.NET Core Identity hash format.

ASP.NET Core Identity and password hashing

You're no doubt familiar with the "username and password" authentication flow used by the vast majority of web apps. ASP.NET Core Identity uses this flow by default (I'm going to ignore third-party login providers for the purposes of this article).

When a user registers with the app, they provide a username and password (and any other required information). The app will create a hash of the password, and store it in the database along with the user's details.

A hash is a one way function, so given the password you can work out the hash, but given the hash you can't get the original password back. For security reasons, the characteristics of the hash function are important; in particular, the hash function should be relatively costly to compute, so that if your database of password hashes were to be compromised, it would take a long time to crack them.

Important You should never store a user's password directly in a database (or anywhere else). Also, you should never store the password in an encrypted format, in which you can recover the password. Instead, passwords should only ever be stored as a hash of the original, using a strong cryptographic hash function designed for this purpose.

When it comes to logging in, users POST their username and password to the app. The app will take the identifier and attempt to find an existing account in its database. If it finds the account, it retrieves the stored password hash associated with the account.

The app then hashes the password that was submitted, and compares the two hashes. If the hashes match, then the password is correct, and the user can be authenticated. If the hashes don't match, the user provided the wrong password, and should be rejected.

The IPasswordHasher<TUser> interface

With this typical flow, there are two different scenarios in which we need to hash a password:

  • When the user registers - to create the password hash that will be stored in the database
  • When the user logs in - to hash the provided password and compare it to the stored hash

These two scenarios are closely related, and are encapsulated in the IPasswordHasher<TUser> interface in ASP.NET Core Identity. The Identity framework is designed to be highly extensible, so most of the key parts of infrastructure are exposed as interfaces, with default implementations that are registered by default.

The IPasswordHasher<TUser> is one such component. It's used in the two scenarios described above and exposes a method for each, as shown below.

Note: In this post I'm going to show the source code as it exists in the ASP.NET Core 2.0 release, by using the rel/2.0.0 tag in the Identity Github repo. You can view the full source for the IPasswordHasher<TUser> here.

public interface IPasswordHasher<TUser> where TUser : class  
{
    string HashPassword(TUser user, string password);

    PasswordVerificationResult VerifyHashedPassword(
        TUser user, string hashedPassword, string providedPassword);
}

The IPasswordHasher<TUser> interface is a generic interface, where the generic parameter is the type representing a User in the system - often a class deriving from IdentityUser.

When a new user registers, the Identity framework calls HashPassword() to hash the provided password, before storing it in the database. When a user logs in, the framework calls VerifyHashedPassword() with the user account, the stored password hash, and the password provided by the user.

Pretty self explanatory right? Let's take a look at the default implementation of this interface.

The default PasswordHasher<TUser> implementation

The default implementation in the Identity framework is the PasswordHasher<TUser> class (source code). This class is designed to work with two different hashing formats:

  • ASP.NET Identity Version 2: PBKDF2 with HMAC-SHA1, 128-bit salt, 256-bit subkey, 1000 iterations
  • ASP.NET Core Identity Version 3: PBKDF2 with HMAC-SHA256, 128-bit salt, 256-bit subkey, 10000 iterations

The PasswordHasher<TUser> class can hash passwords in both of these formats, as well as verify passwords stored in either one.

Verifying hashed passwords

When a password is provided that you need to compare against a hashed version, the PasswordHasher<TUser> needs to know which format was used to hash the password. To do this, it prepends a single byte to the hash before storing it in the database (Base64 encoded).

When a password needs to be verified, the hasher checks the first byte, and uses the appropriate algorithm to hash the provided password.

public virtual PasswordVerificationResult VerifyHashedPassword(TUser user, string hashedPassword, string providedPassword)  
{
    // Convert the stored Base64 password to bytes
    byte[] decodedHashedPassword = Convert.FromBase64String(hashedPassword);

    // The first byte indicates the format of the stored hash
    switch (decodedHashedPassword[0])
    {
        case 0x00:
            if (VerifyHashedPasswordV2(decodedHashedPassword, providedPassword))
            {
                // This is an old password hash format - the caller needs to rehash if we're not running in an older compat mode.
                return (_compatibilityMode == PasswordHasherCompatibilityMode.IdentityV3)
                    ? PasswordVerificationResult.SuccessRehashNeeded
                    : PasswordVerificationResult.Success;
            }
            else
            {
                return PasswordVerificationResult.Failed;
            }

        case 0x01:
            if (VerifyHashedPasswordV3(decodedHashedPassword, providedPassword))
            {
                return PasswordVerificationResult.Success;
            }
            else
            {
                return PasswordVerificationResult.Failed;
            }

        default:
            return PasswordVerificationResult.Failed; // unknown format marker
    }
}

When the password is verified, the hasher returns one of three results:

  • PasswordVerificationResult.Failed - the provided password was incorrect
  • PasswordVerificationResult.Success - the provided password was correct
  • PasswordVerificationResult.SuccessRehashNeeded - the provided password was correct, but the stored hash should be updated

The switch statement in VerifyHashedPassword() has two main cases - one for Identity v2 hashing, and one for Identity v3 hashing. If the password has been stored using the older v2 hashing algorithm, and the provided password is correct, then the hasher will either return Success or SuccessRehashNeeded.

Which result it chooses is based on the PasswordHasherCompatibilityMode which is passed in via an IOptions<PasswordHasherOptions> object. This lets you choose whether or not to rehash the older passwords; if you need the password hashes to remain compatible with Identity v2, then you might want to keep the older hash format.
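
For example, to keep producing (and accepting) hashes in the older v2 format, you could configure the options in ConfigureServices; a small sketch:

// Keep hashes compatible with ASP.NET Identity v2 instead of upgrading them to the v3 format
services.Configure<PasswordHasherOptions>(options =>
{
    options.CompatibilityMode = PasswordHasherCompatibilityMode.IdentityV2;
});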

As well as verifying hashed passwords, the PasswordHasher<TUser> is used to create new hashes.

Hashing new passwords

The HashPassword() function is called when a new user registers, and the password needs hashing before it's stored in the database. It's also called after an old v2 format password hash is verified, and needs rehashing.

private readonly RandomNumberGenerator _rng;

public virtual string HashPassword(TUser user, string password)  
{
    if (_compatibilityMode == PasswordHasherCompatibilityMode.IdentityV2)
    {
        return Convert.ToBase64String(HashPasswordV2(password, _rng));
    }
    else
    {
        return Convert.ToBase64String(HashPasswordV3(password, _rng));
    }
}

The hashes are generated in the correct format, depending on the PasswordHasherCompatibilityMode set in the options, and are then Base64 encoded before being stored in the database.

I won't dwell on the hashing algorithms themselves too much, but as an example, the HashPasswordV2 function is shown below. Of particular note is the fourth line from the bottom, where the "first byte" format marker is set to 0x00:

private static byte[] HashPasswordV2(string password, RandomNumberGenerator rng)  
{
    const KeyDerivationPrf Pbkdf2Prf = KeyDerivationPrf.HMACSHA1; // default for Rfc2898DeriveBytes
    const int Pbkdf2IterCount = 1000; // default for Rfc2898DeriveBytes
    const int Pbkdf2SubkeyLength = 256 / 8; // 256 bits
    const int SaltSize = 128 / 8; // 128 bits

    // Produce a version 2 text hash.
    byte[] salt = new byte[SaltSize];
    rng.GetBytes(salt);
    byte[] subkey = KeyDerivation.Pbkdf2(password, salt, Pbkdf2Prf, Pbkdf2IterCount, Pbkdf2SubkeyLength);

    var outputBytes = new byte[1 + SaltSize + Pbkdf2SubkeyLength];
    outputBytes[0] = 0x00; // format marker
    Buffer.BlockCopy(salt, 0, outputBytes, 1, SaltSize);
    Buffer.BlockCopy(subkey, 0, outputBytes, 1 + SaltSize, Pbkdf2SubkeyLength);
    return outputBytes;
}

The "first byte" format marker is the byte that is used by the VerifyHashedPassword() function to identify the format of the stored password. A format marker of 0x00 indicates that the password is stored in the v2 format; a value of 0x01 indicates the password is stored in the v3 format. In the next post, we'll use this to extend the class and support other formats too.

That's pretty much all there is to the PasswordHasher<TUser> class. If you'd like to see more details of the hashing algorithms themselves, I suggest checking out the source code.

Summary

The IPasswordHasher<TUser> is used by the ASP.NET Core Identity framework to both hash passwords for storage, and to verify that a provided password matches a stored hash. The default implementation PasswordHasher<TUser> supports two different formats of hash function: one used by Identity v2, and a stronger version used by ASP.NET Core Identity v3.

If you need to keep the passwords in the v2 format you can set the PasswordHasherCompatibilityMode on the IOptions<PasswordHasherOptions> object in the constructor to IdentityV2. If you use IdentityV3 instead, new passwords will be hashed with the stronger algorithm, and when old passwords are verified, they will be rehashed with the newer, stronger algorithm.


Damien Bowden: Implementing custom policies in ASP.NET Core using the HttpContext

This article shows how to implement a custom ASP.NET Core policy using the AuthorizationHandler class. The handler validates, that the identity from the HttpContext has the authorization to update the object in the database.

Code: https://github.com/damienbod/AspNetCoreAngularSignalRSecurity

Scenario

In the example, each admin user of the client application can create DataEventRecord entities, which can only be accessed by the corresponding identity. If an identity belonging to a different user sends a PUT request to update the object, a 401 response is returned. Because the Username from the identity is saved in the database for each entity, the custom policy can validate the identity against the entity to be updated.

Creating the Requirement for the Policy

A simple requirement is created for the policy implementation. The AuthorizationHandler implementation requires this, and the requirement class is also used to add the policy to the application.

using Microsoft.AspNetCore.Authorization;

namespace ApiServer.Policies
{
    public class CorrectUserRequirement : IAuthorizationRequirement
    {
    }
}

Creating the custom Handler for the Policy

If a method protected by this policy is called, the HandleRequirementAsync method of the CorrectUserHandler is executed. In this method, the id of the object to be updated is extracted from the URL path. The id is then used to select the Username from the database, which is compared to the Username from the HttpContext identity. If the values are not equal, the requirement is not marked as succeeded and the request is rejected.

using ApiServer.Repositories;
using Microsoft.AspNetCore.Authorization;
using System;
using System.Threading.Tasks;

namespace ApiServer.Policies 
{
    public class CorrectUserHandler : AuthorizationHandler<CorrectUserRequirement>
    {
        private readonly IDataEventRecordRepository _dataEventRecordRepository;

        public CorrectUserHandler(IDataEventRecordRepository dataEventRecordRepository)
        {
            _dataEventRecordRepository = dataEventRecordRepository;
        }

        protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, CorrectUserRequirement requirement)
        {
            if (context == null)
                throw new ArgumentNullException(nameof(context));
            if (requirement == null)
                throw new ArgumentNullException(nameof(requirement));

            var authFilterCtx = (Microsoft.AspNetCore.Mvc.Filters.AuthorizationFilterContext)context.Resource;
            var httpContext = authFilterCtx.HttpContext;
            var pathData = httpContext.Request.Path.Value.Split("/");
            long id = long.Parse(pathData[pathData.Length -1]);

            var username = _dataEventRecordRepository.GetUsername(id);
            if (username == httpContext.User.Identity.Name)
            {
                context.Succeed(requirement);
            }

            return Task.CompletedTask;
        }
    }
}

Adding the policy to the application

The custom policy is added to the ASP.NET Core application in the AddAuthorization extension method, using the requirement.

services.AddAuthorization(options =>
{
	...
	options.AddPolicy("correctUser", policyCorrectUser =>
	{
		policyCorrectUser.Requirements.Add(new CorrectUserRequirement());
	});
});
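
The handler itself also needs to be registered with the dependency injection container so the authorization service can resolve it, along with its repository dependency. A minimal sketch:

// Register the handler so the authorization service can resolve CorrectUserHandler
// together with its IDataEventRecordRepository dependency
services.AddTransient<IAuthorizationHandler, CorrectUserHandler>();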

Using the policy in the ASP.NET Core controller

The PUT method uses the correctUser policy to authorize the request.

[Authorize("dataEventRecordsAdmin")]
[Authorize("correctUser")]
[HttpPut("{id}")]
public IActionResult Put(long id, [FromBody]DataEventRecordDto dataEventRecordDto)
{
	_dataEventRecordRepository.Put(id, dataEventRecordDto);
	return NoContent();
}

If a user logs in and tries to update an entity belonging to a different user, the request is rejected.

Links:

https://docs.microsoft.com/en-us/aspnet/core/security/authorization/policies



Andrew Lock: Free .NET Core eBook, including ASP.NET Core and EF Core

Manning have recently released a free eBook, put together by Dustin Metzgar, called Exploring .NET Core with Microservices, ASP.NET Core, and Entity Framework Core. This eBook features five hand-picked chapters from upcoming books on .NET Core. It provides an introduction to modern software development practices and how to apply them to .NET Core projects:

  • Refactoring
  • Identifying and Scoping Microservices
  • Creating and Communicating with Web Services
  • Creating Web Pages with MVC Controllers
  • Querying the Database

All you need to get your hands on the free eBook is an email address. Just click here, add the eBook to your cart, and checkout with your email address.

If you like the book and want to learn more, you might want to take a look at the books that went into this compilation.

Some of the books are currently available in a Manning Early Access Program (MEAP) version. These are available to buy now, and you get access to the chapters as they are written. When the final version is finished, you'll receive a copy of the PDF and optionally the paperback.

Thanks!


Anuraj Parameswaran: Dockerize an existing ASP.NET MVC 5 application

This post describes the process of migrating an existing ASP.NET MVC 5 or ASP.NET Web Forms application to Windows Containers. Running an existing .NET Framework-based application in a Windows container doesn't require any changes to your app. To run your app in a Windows container you create a Docker image containing your app and start the container.


Andrew Lock: Running tests with dotnet xunit using Cake

In this post I show how you can run tests using the xUnit .NET CLI Tool dotnet xunit when building projects using Cake. Cake includes first class support for running test using dotnet test via the DotNetCoreTest alias, but if you want access to the additional configuration provided by the dotnet-xunit tool, you'll currently need to run the tool using DotNetCoreTool instead.

dotnet test vs dotnet xunit

Typically, .NET Core unit tests are run using the dotnet test command. This runs unit tests for a project regardless of which unit test framework was used - MSTest, NUnit, or xUnit. As long as the test framework has an appropriate adapter, the dotnet test command will hook into it, and provide a standard set of features.

However, the suggested approach to run .NET Core tests with xUnit is to use the dotnet-xunit framework tool. This provides more advanced access to xUnit settings, so you can set a whole variety of properties like how test names are listed, whether diagnostics are enabled, or parallelisation options for the test runs.

You can install the dotnet-xunit tool into a project by adding a DotNetCliToolReference element to your .csproj file. If you add the xunit.runner.visualstudio and Microsoft.NET.Test.Sdk packages too, then you'll still be able to run your tests using dotnet test and Visual Studio:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFrameworks>netcoreapp2.0</TargetFrameworks>
    <IsPackable>false</IsPackable>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.3.0" />
    <PackageReference Include="xunit" Version="2.3.0" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.3.0" />
    <DotNetCliToolReference Include="dotnet-xunit" Version="2.3.0" />
  </ItemGroup>

</Project>  

With these packages installed and restored, you'll be able to run your tests using either dotnet test or dotnet xunit, but if you use the latter option, you'll have a whole host of additional arguments available to you. To see all your options, run dotnet xunit --help. The following is just a small selection of the help screen:

> dotnet xunit --help
xUnit.net .NET CLI Console Runner  
Copyright (C) .NET Foundation.

usage: dotnet xunit [configFile] [options] [reporter] [resultFormat filename [...]]

Note: Configuration files must end in .json (for JSON) or .config (for XML)  
      XML configuration files are only supported on net4x frameworks

Valid options (all frameworks):  
  -framework name        : set the framework (default: all targeted frameworks)
  -configuration name    : set the build configuration (default: 'Debug')
  -nobuild               : do not build the test assembly before running
  -nologo                : do not show the copyright message
  -nocolor               : do not output results with colors
  -failskips             : convert skipped tests into failures
  -stoponfail            : stop on first test failure
  -parallel option       : set parallelization based on option
                         :   none        - turn off parallelization
                         :   collections - parallelize test collections
  -maxthreads count      : maximum thread count for collection parallelization
                         :   default   - run with default (1 thread per CPU thread)
                         :   unlimited - run with unbounded thread count
                         :   (number)  - limit task thread pool size to 'count'
  -wait                  : wait for input after completion
  -diagnostics           : enable diagnostics messages for all test assemblies
[Output truncated]

Running tests with Cake

Cake is my preferred build scripting system for .NET Core projects. In their own words:

Cake (C# Make) is a cross platform build automation system with a C# DSL to do things like compiling code, copy files/folders, running unit tests, compress files and build NuGet packages.

I use cake to build my open source projects. A very simple Cake script consisting of a restore, build, and test, might look something like the following:

var configuration = Argument("Configuration", "Release");  
var solution = "./test.sln";

// Run dotnet restore to restore all package references.
Task("Restore")  
    .Does(() =>
    {
        DotNetCoreRestore();
    });

Task("Build")  
    .IsDependentOn("Restore")
    .Does(() =>
    {
        DotNetCoreBuild(solution,
           new DotNetCoreBuildSettings()
                {
                    Configuration = configuration
                });
    });

Task("Test")  
    .IsDependentOn("Build")
    .Does(() =>
    {
        var projects = GetFiles("./test/**/*.csproj");
        foreach(var project in projects)
        {
            DotNetCoreTest(
                project.FullPath,
                new DotNetCoreTestSettings()
                {
                    Configuration = configuration,
                    NoBuild = true
                });
        }
    });

This will run a restore in the solution directory, build the solution, and then run dotnet test for every project in the test sub-directory. Each of the steps uses a C# alias which calls the dotnet SDK commands:

  • DotNetCoreRestore - restores NuGet packages using dotnet restore
  • DotNetCoreBuild - builds the solution using dotnet build, using the settings provided in the DotNetCoreBuildSettings object
  • DotNetCoreTest - runs the tests in the project using dotnet test and the settings provided in the DotNetCoreTestSettings object.

Customizing the arguments passed to the dotnet tool

In the previous example, we ran tests with dotnet test and were able to set a number of additional options using the strongly typed DotNetCoreTestSettings object. If you want to pass additional options down to the dotnet test call, you can customise the arguments using the ArgumentCustomization property.

For example, dotnet build implicitly calls dotnet restore, even though we are specifically restoring the solution. You can forgo this second call by passing --no-restore to the dotnet build call using the ArgumentCustomization property:

Task("Build")  
    .IsDependentOn("Restore")
    .Does(() =>
    {
        DotNetCoreBuild(solution,
           new DotNetCoreBuildSettings()
                {
                    Configuration = configuration,
                    ArgumentCustomization = args => args.Append($"--no-restore")
                });
    });

With this approach, you can customise the arguments that are passed to the dotnet test command. However, you can't customise the command itself to call dotnet xunit. For that, you need a different Cake alias: DotNetCoreTool.

Running dotnet xunit tests with Cake

Using the strongly typed *Settings objects makes invoking many of the dotnet tools easy with Cake, but it doesn't include first-class support for all of them. The dotnet-xunit tool is one such tool.

If you need to run a dotnet tool that's not directly supported by a Cake alias, you can use the general purpose DotNetCoreTool alias. You can use this to execute any dotnet tool, by providing the tool name, and the command arguments.

For example, imagine you want to run dotnet xunit with diagnostics enabled, and stop on the first failure. If you were running the tool directly from the command line you'd use:

dotnet xunit -diagnostics -stoponfail  

In Cake, we can use the DotNetCoreTool alias and pass in the command line arguments manually. If we update the previous Cake "Test" target to use DotNetCoreTool, we have:

Task("Test")  
    .IsDependentOn("Build")
    .Does(() =>
    {
        var projects = GetFiles("./test/**/*.csproj");
        foreach(var project in projects)
        {
            DotNetCoreTool(
                projectPath: project.FullPath, 
                command: "xunit", 
                arguments: $"-configuration {configuration} -diagnostics -stoponfail"
            );
        }
    });

The DotNetCoreTool alias isn't as convenient as being able to set strongly typed properties on a dedicated settings object, but it does give a great deal of flexibility, and effectively lets you drop down to the command line inside your Cake build script.

If you build your scripts using Cake, and need or want to use some of the extra features afforded by dotnet xunit, then DotNetCoreTool is currently the best approach, but it shouldn't be hard to create a wrapper alias for dotnet xunit that makes these arguments strongly typed. Assuming no one else has already done it and made this post obsolete, I'll look at sending a PR as soon as I find the time!


Damien Bowden: Securing an Angular SignalR client using JWT tokens with ASP.NET Core and IdentityServer4

This post shows how an Angular SignalR client can send secure messages using JWT bearer tokens with an API and an STS server. The STS server is implemented using IdentityServer4 and the API is implemented using ASP.NET Core.

Code: https://github.com/damienbod/AspNetCoreAngularSignalRSecurity

Posts in this series:

History

2017-11-05 Updated to Angular 5 and Typescript 2.6.1, SignalR 1.0.0-alpha2-final

SignalR and SPAs

At present there are 3 ways in which SignalR could be secured:

Comment from Damien Edwards:
The debate over HTTPS URLs (including query strings) is long and on-going. Yes, it’s not ideal to send sensitive data in the URL even when over HTTPS. But the fact remains that when using the browser WebSocket APIs there is no other way. You only have 3 options:

  • Use cookies
  • Send tokens in query string
  • Send tokens over the WebSocket itself after onconnect

A usable sample of the last would be interesting in my mind, but I’m not expecting it to be trivial.

For an SPA client, cookies are not an option and should not be used. It is unknown whether the 3rd option will work, so at present the only way to do this is to send the access token in the query string over HTTPS. Sending tokens in the query string has its problems, which you will need to accept, and/or set up your deployment and logging to protect against the increased risks compared with sending the access token in the header.

Setup

The demo app is set up using 3 different projects: the API, which hosts the SignalR Hub and the APIs; the STS server, using ASP.NET Core and IdentityServer4; and the client application, using Angular hosted in ASP.NET Core.

The client is secured using the OpenID Connect Implicit Flow with the “id_token token” response type. The access token is then used to access the API, both for the SignalR messages and for the API calls.
All three applications run over HTTPS.
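
The IdentityServer4 client configuration for the SPA isn't shown in this excerpt. As a rough sketch, an implicit flow client for the “id_token token” response type might be registered something like this; the client id and URLs are illustrative assumptions, only the "dataEventRecords" scope comes from the post:

using System.Collections.Generic;
using IdentityServer4.Models;

public static class Clients
{
    public static IEnumerable<Client> Get()
    {
        return new List<Client>
        {
            new Client
            {
                // All values except the "dataEventRecords" scope are illustrative assumptions
                ClientId = "angularclient",
                AllowedGrantTypes = GrantTypes.Implicit,
                // Required so the access token is returned to the browser-based client
                AllowAccessTokensViaBrowser = true,
                RedirectUris = { "https://localhost:44311/" },
                PostLogoutRedirectUris = { "https://localhost:44311/" },
                AllowedCorsOrigins = { "https://localhost:44311" },
                AllowedScopes = { "openid", "profile", "dataEventRecords" }
            }
        };
    }
}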

Securing the SignalR Hub on the API

The SignalR Hub uses the Authorize attribute like any ASP.NET Core MVC controller. Policies and the scheme can be defined here. The Hub uses the Bearer authentication scheme.

using ApiServer.Providers;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.SignalR;
using System.Threading.Tasks;

namespace ApiServer.SignalRHubs
{
    [Authorize(AuthenticationSchemes = "Bearer")]
    public class NewsHub : Hub
    {
        private NewsStore _newsStore;

        public NewsHub(NewsStore newsStore)
        {
            _newsStore = newsStore;
        }

        ...
    }
}

The API project configures the API security in the ConfigureServices method of the Startup class. First, CORS is configured; incorrectly in this example, as it allows everything, whereas only the required URLs should be allowed. Then the TokenValidationParameters and the JwtSecurityTokenHandler options are configured. The NameClaimType is configured so that the Name property of the HTTP context identity is set from the corresponding claim in the token; this claim is added to the access token on the STS server.

AddAuthentication is added with the JwtBearer options. These are configured to accept the token in the query string as well as in the header: if the request path matches one of the SignalR Hubs, the token is read from the query string and used to validate the request.

AddAuthorization is added and the policies are defined as required. Then the SignalR services are added.

public void ConfigureServices(IServiceCollection services)
{
	var sqliteConnectionString = Configuration.GetConnectionString("SqliteConnectionString");
	var defaultConnection = Configuration.GetConnectionString("DefaultConnection");

	var cert = new X509Certificate2(Path.Combine(_env.ContentRootPath, "damienbodserver.pfx"), "");

	services.AddDbContext<DataEventRecordContext>(options =>
		options.UseSqlite(sqliteConnectionString)
	);

	// used for the new items which belong to the signalr hub
	services.AddDbContext<NewsContext>(options =>
		options.UseSqlite(
			defaultConnection
		), ServiceLifetime.Singleton
	);

	services.AddSingleton<IAuthorizationHandler, CorrectUserHandler>();
	services.AddSingleton<NewsStore>();

	var policy = new Microsoft.AspNetCore.Cors.Infrastructure.CorsPolicy();
	policy.Headers.Add("*");
	policy.Methods.Add("*");
	policy.Origins.Add("*");
	policy.SupportsCredentials = true;

	services.AddCors(x => x.AddPolicy("corsGlobalPolicy", policy));

	var guestPolicy = new AuthorizationPolicyBuilder()
		.RequireClaim("scope", "dataEventRecords")
		.Build();

	var tokenValidationParameters = new TokenValidationParameters()
	{
		ValidIssuer = "https://localhost:44318/",
		ValidAudience = "dataEventRecords",
		IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes("dataEventRecordsSecret")),
		NameClaimType = "name",
		RoleClaimType = "role", 
	};

	var jwtSecurityTokenHandler = new JwtSecurityTokenHandler
	{
		InboundClaimTypeMap = new Dictionary<string, string>()
	};

	services.AddAuthentication(IdentityServerAuthenticationDefaults.AuthenticationScheme)
	.AddJwtBearer(options =>
	{
		options.Authority = "https://localhost:44318/";
		options.Audience = "dataEventRecords";
		options.IncludeErrorDetails = true;
		options.SaveToken = true;
		options.SecurityTokenValidators.Clear();
		options.SecurityTokenValidators.Add(jwtSecurityTokenHandler);
		options.TokenValidationParameters = tokenValidationParameters;
		options.Events = new JwtBearerEvents
		{
			OnMessageReceived = context =>
			{
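				// "/loo" matches both hub paths mapped below ("/loopy" and "/looney"),
				// so only requests to the SignalR hubs read the token from the query string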
				if (context.Request.Path.Value.StartsWith("/loo") &&
					context.Request.Query.TryGetValue("token", out StringValues token)
				)
				{
					context.Token = token;
				}

				return Task.CompletedTask;
			},
			OnAuthenticationFailed = context =>
			{
				var te = context.Exception;
				return Task.CompletedTask;
			}
		};
	});

	services.AddAuthorization(options =>
	{
		options.AddPolicy("dataEventRecordsAdmin", policyAdmin =>
		{
			policyAdmin.RequireClaim("role", "dataEventRecords.admin");
		});
		options.AddPolicy("dataEventRecordsUser", policyUser =>
		{
			policyUser.RequireClaim("role", "dataEventRecords.user");
		});
		options.AddPolicy("dataEventRecords", policyUser =>
		{
			policyUser.RequireClaim("scope", "dataEventRecords");
		});
		options.AddPolicy("correctUser", policyCorrectUser =>
		{
			policyCorrectUser.Requirements.Add(new CorrectUserRequirement());
		});
	});

	services.AddSignalR();

	services.AddMvc(options =>
	{
	   //options.Filters.Add(new AuthorizeFilter(guestPolicy));
	}).AddJsonOptions(options =>
	{
		options.SerializerSettings.ContractResolver = new DefaultContractResolver();
	});

	services.AddScoped<IDataEventRecordRepository, DataEventRecordRepository>();
}

The Configure method in the Startup class of the API adds the authentication middleware and maps the SignalR Hubs.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	loggerFactory.AddConsole();
	loggerFactory.AddDebug();

	loggerFactory.AddSerilog();

	app.UseExceptionHandler("/Home/Error");
	app.UseCors("corsGlobalPolicy");
	app.UseStaticFiles();

	app.UseAuthentication();

	app.UseSignalR(routes =>
	{
		routes.MapHub<LoopyHub>("loopy");
		routes.MapHub<NewsHub>("looney");
	});

	app.UseMvc(routes =>
	{
		routes.MapRoute(
			name: "default",
			template: "{controller=Home}/{action=Index}/{id?}");
	});
}

Securing the SignalR client in Angular

The Angular SPA application is secured using the OIDC Implicit Flow. After a successful client and identity login, the access token can be used to access the Hub or the API. The Hub is initialized once the client has received an access token. The Hub connection is then set up using the same query string parameter expected by the API server (“token=…”). Each message is then sent using the access token.

import 'rxjs/add/operator/map';
import { Subscription } from 'rxjs/Subscription';

import { HttpClient, HttpHeaders } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs/Observable';

import { HubConnection } from '@aspnet/signalr-client';
import { NewsItem } from './models/news-item';
import { Store } from '@ngrx/store';
import * as NewsActions from './store/news.action';
import { Configuration } from '../app.constants';
import { OidcSecurityService } from 'angular-auth-oidc-client';

@Injectable()
export class NewsService {

    private _hubConnection: HubConnection;
    private actionUrl: string;
    private headers: HttpHeaders;

    isAuthorizedSubscription: Subscription;
    isAuthorized: boolean;

    constructor(private http: HttpClient,
        private store: Store<any>,
        private configuration: Configuration,
        private oidcSecurityService: OidcSecurityService
    ) {
        this.actionUrl = `${this.configuration.Server}api/news/`;

        this.headers = new HttpHeaders();
        this.headers = this.headers.set('Content-Type', 'application/json');
        this.headers = this.headers.set('Accept', 'application/json');

        this.init();
    }

    send(newsItem: NewsItem): NewsItem {
        this._hubConnection.invoke('Send', newsItem);
        return newsItem;
    }

    joinGroup(group: string): void {
        this._hubConnection.invoke('JoinGroup', group);
    }

    leaveGroup(group: string): void {
        this._hubConnection.invoke('LeaveGroup', group);
    }

    getAllGroups(): Observable<string[]> {

        const token = this.oidcSecurityService.getToken();
        if (token !== '') {
            const tokenValue = 'Bearer ' + token;
            this.headers = this.headers.append('Authorization', tokenValue);
        }

        return this.http.get<string[]>(this.actionUrl, { headers: this.headers });
    }

    private init() {
        this.isAuthorizedSubscription = this.oidcSecurityService.getIsAuthorized().subscribe(
            (isAuthorized: boolean) => {
                this.isAuthorized = isAuthorized;
                if (this.isAuthorized) {
                    this.initHub();
                }
            });
        console.log('IsAuthorized:' + this.isAuthorized);
    }

    private initHub() {
        console.log('initHub');
        const token = this.oidcSecurityService.getToken();
        let tokenValue = '';
        if (token !== '') {
            tokenValue = '?token=' + token;
        }

        this._hubConnection = new HubConnection(`${this.configuration.Server}looney${tokenValue}`);

        this._hubConnection.on('Send', (newsItem: NewsItem) => {
            this.store.dispatch(new NewsActions.ReceivedItemAction(newsItem));
        });

        this._hubConnection.on('JoinGroup', (data: string) => {
            this.store.dispatch(new NewsActions.ReceivedGroupJoinedAction(data));
        });

        this._hubConnection.on('LeaveGroup', (data: string) => {
            this.store.dispatch(new NewsActions.ReceivedGroupLeftAction(data));
        });

        this._hubConnection.on('History', (newsItems: NewsItem[]) => {
            this.store.dispatch(new NewsActions.ReceivedGroupHistoryAction(newsItems));
        });

        this._hubConnection.start()
            .then(() => {
                console.log('Hub connection started')
            })
            .catch(() => {
                console.log('Error while establishing connection')
            });
    }

}

Or here’s a simpler example with everything in the Angular component.

import { Component, OnInit, OnDestroy } from '@angular/core';
import { Subscription } from 'rxjs/Subscription';
import { Observable } from 'rxjs/Observable';
import { HubConnection } from '@aspnet/signalr-client';
import { Configuration } from '../../app.constants';
import { OidcSecurityService } from 'angular-auth-oidc-client';

@Component({
    selector: 'app-home-component',
    templateUrl: './home.component.html'
})

export class HomeComponent implements OnInit, OnDestroy {
    private _hubConnection: HubConnection;
    async: any;
    message = '';
    messages: string[] = [];

    isAuthorizedSubscription: Subscription;
    isAuthorized: boolean;

    constructor(
        private configuration: Configuration,
        private oidcSecurityService: OidcSecurityService
    ) {
    }

    ngOnInit() {
        this.isAuthorizedSubscription = this.oidcSecurityService.getIsAuthorized().subscribe(
            (isAuthorized: boolean) => {
                this.isAuthorized = isAuthorized;
                if (this.isAuthorized) {
                    this.init();
                }
            });
        console.log('IsAuthorized:' + this.isAuthorized);
    }

    ngOnDestroy(): void {
        this.isAuthorizedSubscription.unsubscribe();
    }

    sendMessage(): void {
        const data = `Sent: ${this.message}`;

        this._hubConnection.invoke('Send', data);
        this.messages.push(data);
    }

    private init() {

        const token = this.oidcSecurityService.getToken();
        let tokenValue = '';
        if (token !== '') {
            tokenValue = '?token=' + token;
        }

        this._hubConnection = new HubConnection(`${this.configuration.Server}loopy${tokenValue}`);

        this._hubConnection.on('Send', (data: any) => {
            const received = `Received: ${data}`;
            this.messages.push(received);
        });

        this._hubConnection.start()
            .then(() => {
                console.log('Hub connection started')
            })
            .catch(err => {
                console.log('Error while establishing connection')
            });
    }
}

Logs on the server

Because the application sends the token in the query string, the token ends up in standard server logs. The 3 applications are configured using Serilog, and the logs are saved to a Seq server and also to log files. If you open the Seq server, the access_token can be viewed and copied.

You can then use jwt.io to view the details of the token.
https://jwt.io/

You can also use Postman to make API calls for which you might not otherwise have the authentication or authorization rights. This token will work as long as it is valid: all you need to do is add it to the Authorization header, and you have the same rights as the identity for which the access token was created.

You can also view the token in the log files:

2017-10-15 17:11:33.790 +02:00 [Information] Request starting HTTP/1.1 OPTIONS http://localhost:44390/looney?token=eyJhbGciOiJSUzI1NiIsIm... 
2017-10-15 17:11:33.795 +02:00 [Information] Request starting HTTP/1.1 OPTIONS http://localhost:44390/loopy?token=eyJhbGciOiJSUzI1NiIsImtpZCI6IjA2RDNFNDZFO...
2017-10-15 17:11:33.803 +02:00 [Information] Policy execution successful.

Because of this, you need to make sure that the deployment admins, developers, and DevOps people can be trusted, or reduce access to the production logs. This also has implications for GDPR.

Links

https://github.com/aspnet/SignalR

https://github.com/aspnet/SignalR#readme

https://github.com/aspnet/SignalR/issues/888

https://github.com/ngrx

https://www.npmjs.com/package/@aspnet/signalr-client

https://dotnet.myget.org/F/aspnetcore-ci-dev/api/v3/index.json

https://dotnet.myget.org/F/aspnetcore-ci-dev/npm/

https://dotnet.myget.org/feed/aspnetcore-ci-dev/package/npm/@aspnet/signalr-client

https://www.npmjs.com/package/msgpack5



Dominick Baier: SAML2p Identity Provider Support for IdentityServer4

One very common feature request is support for acting as a SAML2p identity provider.

This is not a trivial task, but our friends at Rock Solid Knowledge were working hard, and now published a beta version. Give it a try!

 


Filed under: .NET Security, IdentityServer, OpenID Connect, WebAPI


Andrew Lock: Debugging JWT validation problems between an OWIN app and IdentityServer4

Debugging JWT validation problems between an OWIN app and IdentityServer4

This post describes an issue I ran into at work recently, as part of an effort to migrate our identity application from IdentityServer3 to IdentityServer4. One of our services was unable to validate the JWT sent as a bearer token, even though other services were able to validate it. I'll provide some background to the migration, a more detailed description of the problem, and the solution.

tl;dr; The problematic service was attempting to call a "validation endpoint" to validate the JWT, instead of using local validation. This works fine with IdentityServer3, but the custom access token validation endpoint has been removed in IdentityServer4. Forcing local validation by setting ValidationMode = ValidationMode.Local when adding the middleware with app.UseIdentityServerBearerTokenAuthentication() fixed the issue.

An overview of the system

The system I'm working on, as is common these days, consists of a variety of different services working together. On the front end, we have a JavaScript SPA. This makes HTTP calls to several different server-side apps to fetch the data that it displays to the user. In this case, the services are ASP.NET (not Core) apps using OWIN to provide a Web API. These services may in turn make requests to other back-end APIs, but we'll ignore those for now. In this post I'm going to focus on two services, called the AdminAPI service and the DataAPI service:

(Image: Debugging JWT validation problems between an OWIN app and IdentityServer4)

For authentication, we have an application running IdentityServer3 which is again non-core ASP.NET (sidenote: I still haven't worked out what we're calling non-core ASP.NET these days!). Users authenticate with the IdentityServer3 app, which returns a JSON Web Token (JWT). The client app sends the JWT in the Authorization header when making requests to the AdminAPI and the DataAPI.

(Image: Debugging JWT validation problems between an OWIN app and IdentityServer4)

Before the AdminAPI or the DataAPI accept the JWT sent in the Authorization header, they must first validate the JWT. This ensures the token hasn't been tampered with and can be trusted.

A brief background on JWT tokens and Identity

In order to understand the problem, we need to have an understanding of how JWT tokens work, so I'll provide a brief outline here. Feel free to skip this section if this is old news, or check out https://jwt.io/introduction for a more in-depth description.

A JWT provides a mechanism for the IdentityServer app to transfer information to another app (e.g. the AdminAPI or DataAPI) through an insecure medium (the JavaScript app in the browser) in such a way that the data can't be tampered with. It's important they can't be tampered with as they're often used for authentication and authorisation - you don't want users to be able to impersonate other users, or grant themselves additional privileges.

A JWT consists of three parts:

  • Header - A description of the type of token (JWT) and the algorithms used to secure the token
  • Payload - The information to be transferred. This typically includes a set of claims, which describe the entity (i.e. the user), and potentially other details such as the expiry date of the token, who issued it etc.
  • Signature - A cryptographic signature that describes the header and the payload. If either the header or payload are modified, the signature will no longer be correct, so the JWT can be discarded as fraudulent.

In our case, the signature for the JWT is created using an X.509 certificate using asymmetric cryptography. The signature is generated using the private key of the certificate, which is only known to IdentityServer and is not exposed. However, anyone can validate the signature using the public certificate, which IdentityServer makes available at well-known URLs.
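
To make that concrete, here's a minimal sketch (not the middleware's actual code) of validating a JWT signature with the public certificate using JwtSecurityTokenHandler. The issuer and audience values below are placeholders based on the example later in this post:

using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Security.Cryptography.X509Certificates;
using Microsoft.IdentityModel.Tokens;

public static class JwtValidationExample
{
    // Verifies the signature with the public certificate and checks issuer/audience
    public static ClaimsPrincipal Validate(string jwt, X509Certificate2 publicCert)
    {
        var parameters = new TokenValidationParameters
        {
            ValidIssuer = "https://www.test.domain/identity", // placeholder values
            ValidAudience = "DataAPI",
            IssuerSigningKey = new X509SecurityKey(publicCert)
        };

        var handler = new JwtSecurityTokenHandler();
        // Throws if the signature, issuer, audience or lifetime checks fail
        return handler.ValidateToken(jwt, parameters, out SecurityToken _);
    }
}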

Upgrading IdentityServer

I had been tasked with porting the existing ASP.NET IdentityServer3 app to an ASP.NET Core IdentityServer4 app. IdentityServer3 and IdentityServer4 both use the OpenID Connect and OAuth 2 protocols, so from the point of view of the consumers of the app, upgrading IdentityServer in this way should be seamless.

The good news is that for the most part, the upgrade really was painless. IdentityServer 4 has a few different abstractions to IdentityServer3, so you may have to tweak some things and implement some different interfaces, but the changes are relatively minor and make sense. Scott Brady has a great post on IdentityServer 4, or you could watch Dominick Baier explain some of the changes himself on Channel 9.

Trouble in paradise

With the IdentiyServer app ported to .NET Core, all that remained was to test the integration with the AdminAPI and DataAPI. Initial impressions were very positive - the client app would authenticate with the IdentityServer app to retrieve a JWT, and would send this in requests to the AdminAPI. The AdminAPI validated the signature in the JWT token, and used the claims it contained to execute the action. All looking good.

The problem was the DataAPI. When the client app navigated to a given page, it would send a request to the DataAPI with the same JWT as it sent to the AdminAPI. However, the DataAPI failed to validate the signature.

How could that be? Both APIs were configured with the same IdentityServer as the authority, and the same JWT was being sent to both APIs. I tweaked the DataAPI configuration to make sure it was identical to the AdminAPI and logged as much as I could, but try as I might, I couldn't find any differences between the two APIs. Both APIs were even running on the same server, in the same web site, using the same IIS app pool.

Yet the AdminAPI could validate the token, and the DataAPI could not.

IdentityServer3.AccessTokenValidation

At this point, I'll back up slightly, and describe exactly how the AdminAPI and DataAPI validate the JWTs.

We are using the IdentityServer3.AccessTokenValidation library to validate the JWTs. This extracts the identity contained in the JWT to authenticate the incoming request, and assigns it to the IPrincipal of the request. This library includes an OWIN middleware that you can add to your IAppBuilder pipeline something like the following (from the DataAPI):

var options = new IdentityServerBearerTokenAuthenticationOptions  
{
    Authority = "https://www.test.domain/identity",
    AuthenticationType = "Bearer",
    RequiredScopes = new []{ "DataAPI" }
};
app.UseIdentityServerBearerTokenAuthentication(options); // add the middleware  

This snippet shows all the configuration required to validate incoming tokens, extract the identity in the JWT payload, and assign the principal for the current thread. In this example, the IdentityServer app is hosted at https://www.test.domain/identity, and incoming JWTs must have the "DataAPI" scope to be considered valid.

If you're not familiar with IdentityServer, it might surprise you that no other configuration is required. No client IDs, no secrets, no certificates. Instead, thanks to the use of open standards (OpenID Connect), the validation middleware can contact your IdentityServer app to obtain all the information it needs.

When the validation middleware needs to validate an incoming JWT, it calls a well-known URL on IdentityServer (literally well-known; the URL path is /.well-known/openid-configuration). This returns a JSON document indicating the capabilities of the server, and the location of a variety of useful links. The following is a fragment of a discovery document as an example:

{
    "issuer": "https://www.test.domain/identity",
    "jwks_uri": "https://www.test.domain/identity/.well-known/openid-configuration/jwks",
    "authorization_endpoint": "https://www.test.domain/identity/connect/authorize",
    "token_endpoint": "https://www.test.domain/identity/connect/token",
    "userinfo_endpoint": "https://www.test.domain/identity/connect/userinfo",
    "end_session_endpoint": "https://www.test.domain/identity/connect/endsession",
    "check_session_iframe": "https://www.test.domain/identity/connect/checksession",
    "revocation_endpoint": "https://www.test.domain/identity/connect/revocation",
    "introspection_endpoint": "https://www.test.domain/identity/connect/introspect",
    "scopes_supported": [
        "openid",
        "profile",
        "email",
        "AdminAPI",
        "DataAPI"
    ],
    "claims_supported": [
        "sub",
        "name",
        "family_name",
        "given_name",
        "email",
        "id"
    ],
   ...
}
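
If you want to see this document for yourself, a minimal sketch of fetching it with HttpClient looks something like the following (Newtonsoft.Json is assumed here purely for parsing; the URL is the example authority from above):

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public static class DiscoveryExample
{
    // Downloads the OpenID Connect discovery document and prints the jwks_uri,
    // which is where the signing keys used for local validation live
    public static async Task PrintJwksUriAsync()
    {
        using (var client = new HttpClient())
        {
            var json = await client.GetStringAsync(
                "https://www.test.domain/identity/.well-known/openid-configuration");
            var doc = JObject.Parse(json);
            Console.WriteLine((string)doc["jwks_uri"]);
        }
    }
}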

To actually validate an incoming token, the middleware uses one of two approaches:

  • Local - The middleware uses the discovery document and the jwks_uri link to dynamically download the public certificate required to validate the JWTs.
  • ValidationEndpoint - The middleware sends the JWT to IdentityServer and asks it to validate the token.

You can explicitly choose which validation mode the middleware should use, but it defaults to Both. As stated by Brock Allen on Stack Overflow:

"both" will dynamically determine which of the two approaches described above [to use] based upon some heuristics on the incoming access token presented to the Web API.

That gives us the background, so let's get back to the problem at hand: why one of our apps could validate the JWT, and the other could not.

Back to the problem, and the solution

After much trial and error, I finally discovered that the problem I was having was due to the "both" heuristic, and the validation mode it was choosing. In the AdminAPI (which was able to validate the JWTs issued by IdentityServer4) the middleware was choosing the Local validation mode. It would retrieve the public certificate of the X.509 cert used to sign the token by using the OpenID Connect discovery document, and could verify the signature.

The DataAPI on the other hand, was trying to use ValidationEndpoint validation of the JWT. For some reason, the heuristic decided that local validation wasn't possible, and so was trying to send the JWT to IdentityServer4 for validation.

Unfortunately, the custom access token validation endpoint available in IdentityServer3 was removed in IdentityServer4. Every time the DataAPI attempted to validate the JWT, it was getting a 404 from the IdentityServer4 app, so the validation was failing.

The simple solution was to force the middleware to always use Local validation, by updating the ValidationMode in the middleware options:

var options = new IdentityServerBearerTokenAuthenticationOptions  
{
    Authority = "https://www.test.domain/identity",
    AuthenticationType = "Bearer",
    RequiredScopes = new []{ "DataAPI" },
    ValidationMode = ValidationMode.Local // <- add this
};
app.UseIdentityServerBearerTokenAuthentication(options);  

As soon as the DataAPI was updated with the above change, it was able to validate the JWTs created using IdentityServer4, and the app started working again.

An obvious follow-up to this issue would be to figure out why the DataAPI was choosing ValidationEndpoint validation instead of Local validation. I'm sure the answer lies somewhere in this source code file, but for the life of me I can't figure it out; given it was the same token and the same middleware configuration in both cases, it should have been the same validation type as far as I can see!

Ultimately, it just needs to work, so I've moved on.

No, it doesn't irritate me not knowing why it happens.

Honest.

Summary

Upgrading from IdentityServer3 to IdentityServer4, and in the process switching from an ASP.NET app to ASP.NET Core, is not something that should be taken lightly, but overall the process went smoothly. In particular, .NET Core 2.0 made the port much easier.

The only issue was that a consumer of IdentityServer4 was attempting to use ValidationEndpoint to validate tokens, when using the IdentityServer3.AccessTokenValidation library for authentication. IdentityServer4 has removed the custom access token validation endpoint used by this method, so attempts to validate JWTs will fail when it's used.

Instead, you can force the middleware to use Local validation instead. This downloads the public certificate from IdentityServer4, and validates the signature locally, without having to call custom endpoints.

References


Anuraj Parameswaran: How to handle Ajax requests in ASP.NET Core Razor Pages?

This post is about handling Ajax requests in ASP.NET Core Razor Pages. Razor Pages is a new feature of ASP.NET Core MVC that makes coding page-focused scenarios easier and more productive.
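
The full post has the details, but as a rough sketch (handler and property names here are hypothetical), a Razor Pages page model can serve an Ajax call by returning JSON from a named handler using the OnGet{Handler} convention:

using System;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class IndexModel : PageModel
{
    public void OnGet()
    {
    }

    // Handler naming convention: an Ajax GET to /Index?handler=CurrentTime hits this method
    public JsonResult OnGetCurrentTime()
    {
        return new JsonResult(new { time = DateTime.UtcNow });
    }
}

POST handlers additionally need the antiforgery token, as Razor Pages validates it for unsafe HTTP methods by default.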


Dominick Baier: New in IdentityServer4 v2: Simplified Configuration behind Load-balancers or Reverse-Proxies

Many people struggle with setting up ASP.NET Core behind load-balancers and reverse-proxies. This is due to the fact that Kestrel is often used just for serving up the application, whereas the “real HTTP traffic” is happening one hop earlier. IOW the ASP.NET Core app is actually running on e.g. http://localhost:5000 – but the incoming traffic is directed at e.g. https://myapp.com.

This is an issue when the application needs to generate links (e.g. in the IdentityServer4 discovery endpoint).

Microsoft hides the problem when running in IIS (this is handled in the IIS integration), and for other cases recommends the forwarded headers middleware. This middleware requires some understanding of how the underlying traffic forwarding works, and its default configuration often does not work for more advanced scenarios.

Long story short – we added a shortcut (mostly due to popular demand) to IdentityServer that allows hard-coding the public origin – simply set the PublicOrigin property on the IdentityServerOptions. See the following screenshot where I configured the value https://login.foo.com – but note that Kestrel still runs on localhost.

(Screenshot: publicOrigin.png)
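
For reference, a minimal sketch of what this looks like in ConfigureServices; PublicOrigin is the documented option, while the signing credential and empty client list are placeholders for your real configuration:

using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddIdentityServer(options =>
        {
            // Hard-code the public origin used for generated links (e.g. the discovery document)
            options.PublicOrigin = "https://login.foo.com";
        })
        .AddDeveloperSigningCredential()
        // Placeholder registration; use your real clients and resources here
        .AddInMemoryClients(new IdentityServer4.Models.Client[0]);
    }
}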

HTH


Filed under: ASP.NET Core, IdentityServer, Uncategorized, WebAPI


Anuraj Parameswaran: How to create a self contained .Net core application?

There are two ways to deploy a .NET Core application. FDD (Framework-dependent deployments) and SCD (Self-contained deployments), a self-contained deployment (SCD) doesn’t rely on the presence of shared components on the target system. All components, including both the .NET Core libraries and the .NET Core runtime, are included with the application and are isolated from other .NET Core applications. This post is about deploying .NET Core application in Self-contained way.
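
As a rough illustration (not from the post), a self-contained publish boils down to specifying a runtime identifier when publishing, for example for 64-bit Linux:

dotnet publish -c Release -r linux-x64

The publish output for that runtime then contains the application together with the .NET Core runtime and libraries for the target platform.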


Dominick Baier: IdentityServer4 v2

Wow – this was probably our biggest update ever! Version 2.0 of IdentityServer4 is not only incorporating all the feedback we got over the last year, it also includes the necessary updates for ASP.NET Core 2 – and also has a couple of brand new features. See the release notes for a complete list as well as links to issues and PRs.

The highlights (from my POV) are:

ASP.NET Core 2 support
The authentication system in ASP.NET Core 1.x was a left-over from Katana and was designed around the fact that no DI system exists. We suggested to Microsoft that this should be updated the next time they have the “luxury” of breaking changes. That’s what happened (see more details here).

This was by far the biggest change in IdentityServer (both from a config and internal plumbing point of view). The new system is superior, but this was a lot of work!

Support for the back-channel logout specification
In addition to the JS/session management spec and front-channel logout spec – we also implemented the back-channel spec. This is for situations where the iframe logout approach for server-side apps is either too brittle or just not possible.

Making federation scenarios more robust
Federation with external providers is a complex topic – both sign-in and sign-out require a lot of state management and attention to detail.

The main issue was the state keeping when making round-trips to upstream providers. The way the Microsoft handlers implement that is by adding the protected state on the URL. This led to problems with URL length (either because Azure services default to 2KB of allowed URL length, e.g. Azure AD, or because of IE, which has the same restriction). We fixed that by including a state cache that you can selectively enable on the external handlers. This way the temporary state is kept in a cache and the URLs stay short.

Internal cleanup and refactoring
We did a lot of cleanup internally – some are breaking changes. Generally speaking we opened up more classes (especially around response generation) for derivation or replacement. One of the most popular requests was e.g. to customize the response of the introspection endpoint and redirect handling in the authorize endpoint. Oh btw – endpoints are now extensible/replaceable as well.

Support for the ASP.NET Core config system
Clients and resources can now be loaded from the ASP.NET config system, which in itself is an extensible system. The main use case is probably JSON-based config files and overriding certain settings (e.g. secrets) using environment variables.
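
As a sketch of what this enables (assuming "IdentityServer:Clients" and "IdentityServer:ApiResources" sections exist in appsettings.json, and using the configuration-section overloads added in v2):

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddIdentityServer()
            .AddDeveloperSigningCredential()
            // Assumes these sections exist in the configuration (e.g. appsettings.json);
            // environment variables can then override individual settings such as secrets
            .AddInMemoryClients(Configuration.GetSection("IdentityServer:Clients"))
            .AddInMemoryApiResources(Configuration.GetSection("IdentityServer:ApiResources"));
    }
}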

Misc
We also updated our docs and the satellite repos like samples, EF, ASP.NET Identity and the quickstart UI. We're going to work on new templates and VS integration next.

Support
If you need help migrating to v2 – or just in general implementing IdentityServer – let us know. We provide consulting, support and software development services.

Last but not least – we’d like to thank our 89 contributors and everyone who opened/reported an issue and gave us feedback – keep it coming! We already have some nice additions for 2.x lined up. Stay tuned.


Filed under: .NET Security, ASP.NET Core, IdentityServer, OpenID Connect, WebAPI


Anuraj Parameswaran: Token based authentication in ASP.NET Core

This post is about token based authentication in ASP.NET Core. The general concept behind a token-based authentication system is simple. Allow users to enter their username and password in order to obtain a token which allows them to fetch a specific resource - without using their username and password. Once their token has been obtained, the user can offer the token - which offers access to a specific resource for a time period - to the remote site.


Andrew Lock: Creating and trusting a self-signed certificate on Linux for use in Kestrel and ASP.NET Core

Creating and trusting a self-signed certificate on Linux for use in Kestrel and ASP.NET Core

These days, running your apps over HTTPS is pretty much required, so you need an SSL certificate to encrypt the connection between your app and a user's browser.

I was recently trying to create a self-signed certificate for use in a Linux development environment, to serve requests with ASP.NET Core over SSL when developing locally. Playing with certs is always harder than I think it's going to be, so this post describes the process I took to create and trust a self-signed cert.

Disclaimer I'm very much a Windows user at heart, so I can't give any guarantees as to whether this process is correct. It's just what I found worked for me!

Using Open SSL to create a self-signed certificate

On Windows, creating a self-signed certificate for development is often not necessary - Visual Studio automatically creates a development certificate for use with IIS Express, so if you run your apps this way you shouldn't have to deal with certificates directly.

On the other hand, if you want to host Kestrel directly over HTTPS, then you'll need to work with certificates directly one way or another. On Linux, you'll either need to create a cert for Kestrel to use, or for a reverse-proxy like Nginx or HAProxy. After much googling, I took the approach described in this post.

Creating a basic certificate using openssl

Creating a self-signed cert with the openssl library on Linux is theoretically pretty simple. My first attempt was to use a script something like the following:

openssl req -new -x509 -newkey rsa:2048 -keyout localhost.key -out localhost.cer -days 365 -subj /CN=localhost  
openssl pkcs12 -export -out localhost.pfx -inkey localhost.key -in localhost.cer  

This creates 3 files:

  • localhost.cer - The public key for the SSL certificate
  • localhost.key - The private key for the SSL certificate
  • localhost.pfx - An X509 certificate containing both the public and private key. This is the file that will be used by our ASP.NET Core app to serve over HTTPS.

The script creates a certificate with a "Common Name" for the localhost domain (the -subj /CN=localhost part of the script). That means we can use it to secure connections to the localhost domain when developing locally.

The problem with this certificate is that it only includes a common name, so the latest Chrome versions will not trust it. Instead, we need to create a certificate with a Subject Alternative Name (SAN) for the DNS record (i.e. localhost).

The easiest way I found to do this was to use a .conf file containing all our settings, and to pass it to openssl.

Creating a certificate with DNS SAN

The following file shows the .conf config file that specifies the particulars of the certificate that we're going to create. I've included all of the details that you must specify when creating a certificate, such as the company, email address, location etc.

If you're creating your own self signed certificate, be sure to change these details, and to add any extra DNS records you need.

[ req ]
prompt              = no  
default_bits        = 2048  
default_keyfile     = localhost.pem  
distinguished_name  = subject  
req_extensions      = req_ext  
x509_extensions     = x509_ext  
string_mask         = utf8only

# The Subject DN can be formed using X501 or RFC 4514 (see RFC 4519 for a description).
#   Its sort of a mashup. For example, RFC 4514 does not provide emailAddress.
[ subject ]
countryName     = GB  
stateOrProvinceName = London  
localityName            = London  
organizationName         = .NET Escapades


# Use a friendly name here because its presented to the user. The server's DNS
#   names are placed in Subject Alternate Names. Plus, DNS names here is deprecated
#   by both IETF and CA/Browser Forums. If you place a DNS name here, then you 
#   must include the DNS name in the SAN too (otherwise, Chrome and others that
#   strictly follow the CA/Browser Baseline Requirements will fail).
commonName          = Localhost dev cert  
emailAddress            = test@test.com

# Section x509_ext is used when generating a self-signed certificate. I.e., openssl req -x509 ...
[ x509_ext ]

subjectKeyIdentifier        = hash  
authorityKeyIdentifier  = keyid,issuer

# You only need digitalSignature below. *If* you don't allow
#   RSA Key transport (i.e., you use ephemeral cipher suites), then
#   omit keyEncipherment because that's key transport.
basicConstraints        = CA:FALSE  
keyUsage            = digitalSignature, keyEncipherment  
subjectAltName          = @alternate_names  
nsComment           = "OpenSSL Generated Certificate"

# RFC 5280, Section 4.2.1.12 makes EKU optional
#   CA/Browser Baseline Requirements, Appendix (B)(3)(G) makes me confused
#   In either case, you probably only need serverAuth.
# extendedKeyUsage  = serverAuth, clientAuth

# Section req_ext is used when generating a certificate signing request. I.e., openssl req ...
[ req_ext ]

subjectKeyIdentifier        = hash

basicConstraints        = CA:FALSE  
keyUsage            = digitalSignature, keyEncipherment  
subjectAltName          = @alternate_names  
nsComment           = "OpenSSL Generated Certificate"

# RFC 5280, Section 4.2.1.12 makes EKU optional
#   CA/Browser Baseline Requirements, Appendix (B)(3)(G) makes me confused
#   In either case, you probably only need serverAuth.
# extendedKeyUsage  = serverAuth, clientAuth

[ alternate_names ]

DNS.1       = localhost

# Add these if you need them. But usually you don't want them or
#   need them in production. You may need them for development.
# DNS.5       = localhost
# DNS.6       = localhost.localdomain
# DNS.7       = 127.0.0.1

# IPv6 localhost
# DNS.8     = ::1

We save this config to a file called localhost.conf, and use it to create the certificate using a similar script as before. Just run this script in the same folder as the localhost.conf file.

openssl req -config localhost.conf -new -x509 -sha256 -newkey rsa:2048 -nodes \  
    -keyout localhost.key -days 3650 -out localhost.crt
openssl pkcs12 -export -out localhost.pfx -inkey localhost.key -in localhost.crt  

This will ask you for an export password for your pfx file. Be sure that you provide a password and keep it safe - ASP.NET Core requires that you don't leave the password blank. You should now have an X509 certificate called localhost.pfx that you can use to add HTTPS to your app.

Trusting the certificate

Before we use the certificate in our apps, we need to trust it on our local machine. Exactly how you go about this varies depending on which flavour of Linux you're using. On top of that, some apps seem to use their own certificate stores, so trusting the cert globally won't necessarily mean it's trusted in all of your apps.

The following example worked for me on Ubuntu 16.04, and kept Chrome happy, but I had to explicitly add an exception to Firefox when I first used the cert.

#Install the cert utils
sudo apt install libnss3-tools  
# Trust the certificate for SSL 
pk12util -d sql:$HOME/.pki/nssdb -i localhost.pfx  
# Trust a self-signed server certificate
certutil -d sql:$HOME/.pki/nssdb -A -t "P,," -n 'dev cert' -i localhost.crt  

As I said before, I'm not a Linux guy, so I'm not entirely sure if you need to run both of the trust commands, but I did just in case! If anyone knows a better approach I'm all ears :)

We've now created a self-signed certificate with a DNS SAN name for localhost, and we trust it on the development machine. The last thing remaining is to use it in our app.

Configuring Kestrel to use your self-signed certificate

For simplicity, I'm just going to show how to load the localhost.pfx certificate in your app from the .pfx file, and how to configure Kestrel to use it to serve requests over HTTPS. I've hard-coded the .pfx password in this example for simplicity, but you should load it from configuration instead.

Warning You should never include the password directly like this in a production app.

The following example is for ASP.NET Core 2.0 - Shawn Wildermuth has an example of how to add SSL in ASP.NET Core 1.X (as well as how to create a self-signed cert on Windows).

public class Program  
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder()
            .UseKestrel(options =>
            {
                // Configure the Url and ports to bind to
                // This overrides calls to UseUrls and the ASPNETCORE_URLS environment variable, but will be 
                // overridden if you call UseIisIntegration() and host behind IIS/IIS Express
                options.Listen(IPAddress.Loopback, 5001);
                options.Listen(IPAddress.Loopback, 5002, listenOptions =>
                {
                    listenOptions.UseHttps("localhost.pfx", "testpassword");
                });
            })
            .UseStartup<Startup>()
            .Build();
}

Although CreateDefaultBuilder() adds Kestrel to the app anyway, you can call UseKestrel() again and specify additional options. Here we are defining two URLs and ports to listen on (the IPAddress.Loopback address corresponds to localhost, i.e. 127.0.0.1).

We add HTTPS to the second Listen() call with the UseHttps() extension method. There are several overloads of the method, which allow you to provide an X509Certificate2 object directly, or, as in this case, a filename and password for a certificate.
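
For example, a small sketch of the certificate-object overload, slotting into the same UseKestrel callback shown above (the password should still come from configuration in a real app):

// Requires: using System.Security.Cryptography.X509Certificates;
// Load the pfx manually and pass the certificate object to UseHttps
var certificate = new X509Certificate2("localhost.pfx", "testpassword");

options.Listen(IPAddress.Loopback, 5002, listenOptions =>
{
    listenOptions.UseHttps(certificate);
});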

If everything is configured correctly, you should be able to view the app in Chrome, and see a nice, green, Secure padlock:

(Image: Creating and trusting a self-signed certificate on Linux for use in Kestrel and ASP.NET Core)

As I said at the start of this post, I'm not 100% on all of this, so if anyone has any suggestions or improvements, please let me know in the comments.

Resources


Damien Bowden: Using EF Core and SQLite to persist SignalR Group messages in ASP.NET Core

The article shows how SignalR messages can be saved to a database using EF Core and SQLite. The post uses the SignalR Hub created in the blog post SignalR Group messages with ngrx and Angular, and extends it so that users can only join an existing SignalR group. The group history is then sent to the client that joined.

Code: https://github.com/damienbod/AspNetCoreAngularSignalR

Posts in this series:

History

2017-11-05 Updated to Angular 5 and Typescript 2.6.1, SignalR 1.0.0-alpha2-final

Creating the Database Store

To create a store for the SignalR Hub, an EF Core context is created, along with the store logic which is responsible for accessing the database. The NewsContext class is really simple and just provides 2 DbSets: NewsItemEntities, which is used to save the SignalR messages, and NewsGroups, which is used to validate and create the groups in the SignalR Hub.

using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;

namespace AspNetCoreAngularSignalR.Providers
{
    public class NewsContext : DbContext
    {
        public NewsContext(DbContextOptions<NewsContext> options) :base(options)
        { }

        public DbSet<NewsItemEntity> NewsItemEntities { get; set; }

        public DbSet<NewsGroup> NewsGroups { get; set; }
    }
}
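
The NewsItemEntity and NewsGroup entity classes aren't shown in this excerpt; based on the properties used in the store below, a minimal sketch might look like this (the Id keys are my assumption):

namespace AspNetCoreAngularSignalR.Providers
{
    // Property names follow the store code below; the Id primary keys are assumed
    public class NewsItemEntity
    {
        public long Id { get; set; }
        public string Header { get; set; }
        public string Author { get; set; }
        public string NewsGroup { get; set; }
        public string NewsText { get; set; }
    }

    public class NewsGroup
    {
        public long Id { get; set; }
        public string Name { get; set; }
    }
}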

The NewsStore provides the methods which will be used in the Hub, and also in an ASP.NET Core controller, to create and select the groups as required. The NewsStore uses the NewsContext class.

using AspNetCoreAngularSignalR.SignalRHubs;
using System;
using System.Collections.Generic;
using System.Linq;

namespace AspNetCoreAngularSignalR.Providers
{
    public class NewsStore
    {
        public NewsStore(NewsContext newsContext)
        {
            _newsContext = newsContext;
        }

        private readonly NewsContext _newsContext;

        public void AddGroup(string group)
        {
            _newsContext.NewsGroups.Add(new NewsGroup
            {
                Name = group
            });
            _newsContext.SaveChanges();
        }

        public bool GroupExists(string group)
        {
            var item = _newsContext.NewsGroups.FirstOrDefault(t => t.Name == group);
            if(item == null)
            {
                return false;
            }

            return true;
        }

        public void CreateNewItem(NewsItem item)
        {
            if (GroupExists(item.NewsGroup))
            {
                _newsContext.NewsItemEntities.Add(new NewsItemEntity
                {
                    Header = item.Header,
                    Author = item.Author,
                    NewsGroup = item.NewsGroup,
                    NewsText = item.NewsText
                });
                _newsContext.SaveChanges();
            }
            else
            {
                throw new System.Exception("group does not exist");
            }
        }

        public IEnumerable<NewsItem> GetAllNewsItems(string group)
        {
            return _newsContext.NewsItemEntities.Where(item => item.NewsGroup == group).Select(z => 
                new NewsItem {
                    Author = z.Author,
                    Header = z.Header,
                    NewsGroup = z.NewsGroup,
                    NewsText = z.NewsText
                });
        }

        public List<string> GetAllGroups()
        {
            return _newsContext.NewsGroups.Select(t =>  t.Name ).ToList();
        }
    }
}

The NewsStore and the NewsContext are registered in the ConfigureServices method in the Startup class. The SignalR Hub is a singleton, and so the NewsContext and the NewsStore classes are added as singletons. The AddDbContext call requires the ServiceLifetime.Singleton parameter, as this is not the default. This is not optimal when using the NewsContext in the ASP.NET Core controller, as you need to consider possible concurrent client requests.

public void ConfigureServices(IServiceCollection services)
{
	var sqlConnectionString = Configuration.GetConnectionString("DefaultConnection");

	services.AddDbContext<NewsContext>(options =>
		options.UseSqlite(
			sqlConnectionString
		), ServiceLifetime.Singleton
	);

	services.AddCors(options =>
	{
		options.AddPolicy("AllowAllOrigins",
			builder =>
			{
				builder
					.AllowAnyOrigin()
					.AllowAnyHeader()
					.AllowAnyMethod();
			});
	});

	services.AddSingleton<NewsStore>();
	services.AddSignalR();
	services.AddMvc();
}

Updating the SignalR Hub

The SignalR NewsHub uses the NewsStore, which is injected using constructor injection. When a message is sent, it is persisted using the CreateNewItem method from the store. When a new user joins an existing group, the history is sent to that client by invoking the “History” message.

using AspNetCoreAngularSignalR.Providers;
using Microsoft.AspNetCore.SignalR;
using System.Threading.Tasks;

namespace AspNetCoreAngularSignalR.SignalRHubs
{
    public class NewsHub : Hub
    {
        private NewsStore _newsStore;

        public NewsHub(NewsStore newsStore)
        {
            _newsStore = newsStore;
        }

        public Task Send(NewsItem newsItem)
        {
            if(!_newsStore.GroupExists(newsItem.NewsGroup))
            {
                throw new System.Exception("cannot send a news item to a group which does not exist.");
            }

            _newsStore.CreateNewItem(newsItem);
            return Clients.Group(newsItem.NewsGroup).InvokeAsync("Send", newsItem);
        }

        public async Task JoinGroup(string groupName)
        {
            if (!_newsStore.GroupExists(groupName))
            {
                throw new System.Exception("cannot join a group which does not exist.");
            }

            await Groups.AddAsync(Context.ConnectionId, groupName);
            await Clients.Group(groupName).InvokeAsync("JoinGroup", groupName);

            var history = _newsStore.GetAllNewsItems(groupName);
            await Clients.Client(Context.ConnectionId).InvokeAsync("History", history);
        }

        public async Task LeaveGroup(string groupName)
        {
            if (!_newsStore.GroupExists(groupName))
            {
                throw new System.Exception("cannot leave a group which does not exist.");
            }

            await Clients.Group(groupName).InvokeAsync("LeaveGroup", groupName);
            await Groups.RemoveAsync(Context.ConnectionId, groupName);
        }
    }
}

A NewsController is used to select all the existing groups, or to add a new group; these groups are then used by the SignalR Hub.

using AspNetCoreAngularSignalR.SignalRHubs;
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.SignalR;
using AspNetCoreAngularSignalR.Providers;

namespace AspNetCoreAngularSignalR.Controllers
{
    [Route("api/[controller]")]
    public class NewsController : Controller
    {
        private NewsStore _newsStore;

        public NewsController(NewsStore newsStore)
        {
            _newsStore = newsStore;
        }

        [HttpPost]
        public IActionResult AddGroup([FromQuery] string group)
        {
            if (string.IsNullOrEmpty(group))
            {
                return BadRequest();
            }
            _newsStore.AddGroup(group);
            return Created("AddGroup", group);
        }

        public List<string> GetAllGroups()
        {
            return _newsStore.GetAllGroups();
        }
    }
}

Using the SignalR Hub

The NewsService Angular service listens for SignalR events and handles them using the ngrx store.

import 'rxjs/add/operator/map';

import { HttpClient, HttpHeaders } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs/Observable';

import { HubConnection } from '@aspnet/signalr-client';
import { NewsItem } from './models/news-item';
import { Store } from '@ngrx/store';
import { NewsState } from './store/news.state';
import * as NewsActions from './store/news.action';

@Injectable()
export class NewsService {

    private _hubConnection: HubConnection;
    private actionUrl: string;
    private headers: HttpHeaders;

    constructor(private http: HttpClient,
        private store: Store<any>
    ) {
        this.init();
        this.actionUrl = 'http://localhost:5000/api/news/';

        this.headers = new HttpHeaders();
        this.headers = this.headers.set('Content-Type', 'application/json');
        this.headers = this.headers.set('Accept', 'application/json');
    }

    send(newsItem: NewsItem): NewsItem {
        this._hubConnection.invoke('Send', newsItem);
        return newsItem;
    }

    joinGroup(group: string): void {
        this._hubConnection.invoke('JoinGroup', group);
    }

    leaveGroup(group: string): void {
        this._hubConnection.invoke('LeaveGroup', group);
    }

    getAllGroups(): Observable<string[]> {
        return this.http.get<string[]>(this.actionUrl, { headers: this.headers });
    }

    private init() {

        this._hubConnection = new HubConnection('/looney');

        this._hubConnection.on('Send', (newsItem: NewsItem) => {
            this.store.dispatch(new NewsActions.ReceivedItemAction(newsItem));
        });

        this._hubConnection.on('JoinGroup', (data: string) => {
            this.store.dispatch(new NewsActions.ReceivedGroupJoinedAction(data));
        });

        this._hubConnection.on('LeaveGroup', (data: string) => {
            this.store.dispatch(new NewsActions.ReceivedGroupLeftAction(data));
        });

        this._hubConnection.on('History', (newsItems: NewsItem[]) => {
            this.store.dispatch(new NewsActions.ReceivedGroupHistoryAction(newsItems));
        });

        this._hubConnection.start()
            .then(() => {
                console.log('Hub connection started')
            })
            .catch(err => {
                console.log('Error while establishing connection')
            });
    }

}

In the Angular application, when the user joins a group, he/she receives all the existing messages.

original pic: https://damienbod.files.wordpress.com/2017/09/signlargroups.gif

Links

https://github.com/aspnet/SignalR

https://github.com/aspnet/SignalR#readme

https://github.com/ngrx

https://www.npmjs.com/package/@aspnet/signalr-client

https://dotnet.myget.org/F/aspnetcore-ci-dev/api/v3/index.json

https://dotnet.myget.org/F/aspnetcore-ci-dev/npm/

https://dotnet.myget.org/feed/aspnetcore-ci-dev/package/npm/@aspnet/signalr-client

https://www.npmjs.com/package/msgpack5



Damien Bowden: Auto redirect to an STS server in an Angular app using oidc Implicit Flow

This article shows how to implement an auto redirect in an Angular application, if using the OIDC Implicit Flow with an STS server. When a user opens the application, it is sometimes required that the user is automatically redirected to the login page on the STS server. This can be tricky to implement, as you need to know when to redirect and when not. The OIDC client is implemented using the angular-auth-oidc-client npm package.

Code: https://github.com/damienbod/angular-auth-oidc-sample-google-openid

The angular-auth-oidc-client npm package provides an event when the OIDC module is ready to use, and it can also be configured to emit an event to inform the consuming component when the callback from the STS server has been processed. These two events can be used to implement the auto redirect to the STS server when the user is not authorized.

The app.component can subscribe to these two events in the constructor.

constructor(public oidcSecurityService: OidcSecurityService,
	private router: Router
) {
	if (this.oidcSecurityService.moduleSetup) {
		this.onOidcModuleSetup();
	} else {
		this.oidcSecurityService.onModuleSetup.subscribe(() => {
			this.onOidcModuleSetup();
		});
	}

	this.oidcSecurityService.onAuthorizationResult.subscribe(
		(authorizationResult: AuthorizationResult) => {
			this.onAuthorizationResultComplete(authorizationResult);
		});
}

The onOidcModuleSetup function handles the onModuleSetup event. The Angular app is configured not to use hash (#) URLs, so that the STS callback is the only redirect which uses the hash. Due to this, all URLs with a hash can be passed on to be processed by the OIDC module. If any other path is called, apart from the auto-login route, the path is saved to local storage. This is done so that after a successful token validation, the user is redirected back to the correct route. If the user is not authorized, the auto-login component is called.

private onOidcModuleSetup() {
	if (window.location.hash) {
		this.oidcSecurityService.authorizedCallback();
	} else {
		if ('/autologin' !== window.location.pathname) {
			this.write('redirect', window.location.pathname);
		}
		console.log('AppComponent:onModuleSetup');
		this.oidcSecurityService.getIsAuthorized().subscribe((authorized: boolean) => {
			if (!authorized) {
				this.router.navigate(['/autologin']);
			}
		});
	}
}

The onAuthorizationResultComplete function handles the onAuthorizationResult event. If the response from the server is valid, the user of the application is redirected using the saved path from the local storage.

private onAuthorizationResultComplete(authorizationResult: AuthorizationResult) {
	console.log('AppComponent:onAuthorizationResultComplete');
	const path = this.read('redirect');
	if (authorizationResult === AuthorizationResult.authorized) {
		this.router.navigate([path]);
	} else {
		this.router.navigate(['/Unauthorized']);
	}
}
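The write and read helpers used in the two functions above are not shown in the post; a minimal sketch, assuming they are simple wrappers around localStorage, might look like this:

private write(key: string, value: string): void {
    localStorage.setItem(key, value);
}

private read(key: string): string | null {
    return localStorage.getItem(key);
}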

The onAuthorizationResult event is only emitted if the trigger_authorization_result_event configuration property is set to true.

constructor(public oidcSecurityService: OidcSecurityService) {

 let openIDImplicitFlowConfiguration = new OpenIDImplicitFlowConfiguration();
 openIDImplicitFlowConfiguration.stsServer = 'https://accounts.google.com';
 ...
 openIDImplicitFlowConfiguration.trigger_authorization_result_event = true;

 this.oidcSecurityService.setupModule(openIDImplicitFlowConfiguration);
}

The auto-login component redirects to the STS server with the correct parameters for the application, without any user interaction.

import { Component, OnInit, OnDestroy } from '@angular/core';
import { Router } from '@angular/router';
import { Subscription } from 'rxjs/Subscription';
import { OidcSecurityService, AuthorizationResult } from 'angular-auth-oidc-client';

@Component({
    selector: 'app-auto-component',
    templateUrl: './auto-login.component.html'
})

export class AutoLoginComponent implements OnInit, OnDestroy {
    lang: any;

    constructor(public oidcSecurityService: OidcSecurityService
    ) {
        this.oidcSecurityService.onModuleSetup.subscribe(() => { this.onModuleSetup(); });
    }

    ngOnInit() {
        if (this.oidcSecurityService.moduleSetup) {
            this.onModuleSetup();
        }
    }

    ngOnDestroy(): void {
        this.oidcSecurityService.onModuleSetup.unsubscribe();
    }

    private onModuleSetup() {
        this.oidcSecurityService.authorize();
    }
}

The auto-login component is also added to the routes.

import { Routes, RouterModule } from '@angular/router';
import { AutoLoginComponent } from './auto-login/auto-login.component';
...

const appRoutes: Routes = [
    { path: '', component: HomeComponent },
    { path: 'home', component: HomeComponent },
    { path: 'autologin', component: AutoLoginComponent },
    ...
];

export const routing = RouterModule.forRoot(appRoutes);

Now the user is automatically redirected to the login page of the STS server when opening the Angular SPA.

Links:

https://www.npmjs.com/package/angular-auth-oidc-client

https://github.com/damienbod/angular-auth-oidc-client



Andrew Lock: Using anonymous types and tuples to attach correlation IDs to scope state with Serilog and Seq in ASP.NET Core

Using anonymous types and tuples to attach correlation IDs to scope state with Serilog and Seq in ASP.NET Core

In my last post I gave an introduction to structured logging, and described how you can use scopes to add additional key-value pairs to log messages. Unfortunately, the syntax for key-value pairs using ILogger.BeginScope() can be a bit verbose, so I showed an extension method you can use to achieve the same result with a terser syntax.

In this post, I'll show some extension methods you can use to add multiple key-value pairs to a log message. To be honest, I'm not entirely convinced by any of them, so this post is more a record of my attempt than a recommendation! Any suggestions on ways to simplify / improve them are gratefully received.

Background: the code to optimise

In the last post, I provided some sample code that we're looking to optimise. Namely, the calls to _logger.BeginScope() where we provide a dictionary of key-value pairs:

public void Add(int productId, int basketId)  
{
    using (_logger.BeginScope(new Dictionary<string, object> {
        { nameof(productId), productId }, { nameof(basketId), basketId} }))
    {
        _logger.LogDebug("Adding product to basket");
        var product = _service.GetProduct();
        var basket = _service.GetBasket();

        using (var transaction = factory.Create())
        using (_logger.BeginScope(new Dictionary<string, object> {{ "transactionId", transaction.Id }}))
        {
            basket.Add(product);
            transaction.Submit();
            _logger.LogDebug("Product added to basket");
        }
    }
}

I showed how we could optimise the second call, with a simple extension method that takes a string key and an object value as two separate parameters:

public static class LoggerExtensions  
{
    public static IDisposable BeginScope(this ILogger logger, string key, object value)
    {
        return logger.BeginScope(new Dictionary<string, object> { { key, value } });
    }
}

This overload simply wraps the creation of the Dictionary<string, object> required by Serilog / Seq to attach key-value pairs to a log message. With this overload, the second BeginScope() call is reduced to:

using (_logger.BeginScope("transactionId", transaction.Id))  

This post describes my attempts to generalise this, so you can pass in multiple key-value pairs. All of these extensions will wrap the creation of Dictionary<> so that Serilog (or any other structured logging provider) can attach the key-value pairs to the log message.

Attempt 1: BeginScope extension method using Tuples

It's worth noting that if you're initialising a dictionary with several KeyValuePair<>s, the syntax isn't actually very verbose, apart from the new Dictionary<> definition. You can use the dictionary initialisation syntax {key, value} to add multiple keys:

 using (_logger.BeginScope(new Dictionary<string, object> {
        { nameof(productId), productId }, { nameof(basketId), basketId} }))

I was hoping to create an overload for BeginScope() that had a similar terseness for the key creation, but without the need to explicitly create a Dictionary<> in the calling code.

My first thought was C# 7 tuples. KeyValuePairs are essentially tuples, so it seemed like a good fit. The following extension method accepts a params array of tuples, where the key is a string and the value is an object:

public static class LoggerExtensions  
{
    public static IDisposable BeginScope(this ILogger logger, params (string key, object value)[] keys)
    {
        return logger.BeginScope(keys.ToDictionary(x => x.key, x => x.value));
    }
}

With this extension method, we could write our sample BeginScope() call as the following:

 using (_logger.BeginScope((key: nameof(productId), value: productId ), ( key: nameof(basketId), value: basketId)))

or even simpler, by omitting the tuple names entirely:

 using (_logger.BeginScope((nameof(productId), productId ), ( nameof(basketId), basketId)))

Initially, I was pretty happy with this. It's nice and concise and it achieves what I was aiming for. Unfortunately, it has some flaws if you try and use it with only a single tuple.

Overload resolutions with a single tuple

The BeginScope overload works well when you have multiple key-value pairs, but you would expect the behaviour to be the same no matter how many tuples you pass to the method. Unfortunately, if you try and call it with just a single tuple you'll be out of luck:

using (_logger.BeginScope((key: "transactionId", value: transaction.Id)))  

We're clearly passing a tuple here, so you might hope our overload would be used. Unfortunately, the main ILogger.BeginScope<T>(T state) method is generic, so it tends to be quite greedy when it comes to overload selection. Our tuple parameter (string key, object value) state is less specific than the generic T state, so our extension is never called. The transactionId value isn't added to the log as a correlation ID; instead it's serialised and added to the Scope property.

Changing our extension method to be generic ((string key, T value) state) doesn't work either; the main generic overload is always selected. How annoying.

Attempt 2: Avoiding overload resolution conflicts with Tuples

There's a simple solution to this problem: don't try and overload the ILogger.BeginScope<T>() method, just call it something else. For example, if we rename the extension to BeginScopeWith, then there won't be any overload resolution issues, and we can happily use single tuples without any issues:

public static class LoggerExtensions  
{
    public static IDisposable BeginScopeWith(this ILogger logger, params (string key, object value)[] keys)
    {
        return logger.BeginScope(keys.ToDictionary(x => x.key, x => x.value));
    }
}

By renaming the extension, we can avoid any ambiguity in method selection. That also gives us some compile-time safety, as only tuples that actually are (string key, object value) can be used:

using (_logger.BeginScopeWith((key: "transactionId", value: transactionId))) // <- explicit names  
using (_logger.BeginScopeWith(("transactionId", transactionId))) // <- names inferred  
using (_logger.BeginScopeWith((oops: "transactionId", wrong: transactionId))) // <- incorrect names ignored, only order and types matter  
using (_logger.BeginScopeWith((key: transactionId, value: "transactionId"))) // <- wrong order, won't compile  
using (_logger.BeginScopeWith(("transactionId", transactionId, basketId))) // <- too many items, wrong tuple type, won't compile  

That's about as far as I could get with tuples. It's not a bad compromise, but I wanted to try something else.

Attempt 3: Anonymous types as dictionaries of key value pairs

I only actually started exploring tuples after I went down this next avenue - using anonymous types as a Dictionary<string, object>. My motivation came from when you would specify additional HTML attributes using the HtmlHelper classes in ASP.NET Razor templates, for example:

@Html.TextBoxFor(m => m.Name, new { placeholder = "Enter your name" })

Note You can still use Html Helpers in ASP.NET Core, but they have been largely superseded by the (basically superior in every way) Tag Helpers.

This syntax was what I had in mind when initially thinking about the problem. It's concise, it clearly contains key-value pairs, and it's familiar. The only problem is, converting an anonymous type to Dictionary<string, object> is harder than the Html Helpers make it seem!

Converting an anonymous type to a Dictionary<string, object>

Given I was trying to replicate the behaviour of the ASP.NET Html Helpers, checking out the source code seemed like a good place to start. I actually used the original ASP.NET source code, rather than the ASP.NET Core source code, but you can find similar code there too.

The code I found initially was for the HttpRouteValueDictionary. This uses a similar behaviour to the Html Helpers, in that it converts an object into a dictionary. It reads an anonymous type's properties, and uses the property name and values as key-value pairs. It also handles the case where the provided object is already a dictionary.

I used this class as the basis for a helper method that converts an object into a dictionary (actually an IEnumerable<KeyValuePair<string, object>>):

public static IEnumerable<KeyValuePair<string, object>> GetValuesAsDictionary(object values)  
{
    var valuesAsDictionary = values as IEnumerable<KeyValuePair<string, object>>;
    if (valuesAsDictionary != null)
    {
        return valuesAsDictionary;
    }
    var dictionary = new Dictionary<string, object>();
    if (values != null)
    {
        foreach (PropertyHelper property in PropertyHelper.GetProperties(values))
        {
            // Extract the property values from the property helper.
            // The advantage here is that the property helper caches fast accessors.
            dictionary.Add(property.Name, property.GetValue(values));
        }
    }
    return dictionary;
}

If the values object provided is already the correct type, we can just return it directly. Otherwise, we create a new dictionary and populate it with the properties from our object. This uses a helper class called PropertyHelper.

This is an internal helper class that's used both in the ASP.NET stack and in ASP.NET Core to extract property names and values from an object. It basically uses reflection to loop over an object's properties, but it's heavily optimised and cached. There's over 500 lines in the class, so I won't list it here, but at the heart of the class is a method that reflects over the provided object type's properties:

IEnumerable<PropertyInfo> properties = type  
    .GetProperties(BindingFlags.Public | BindingFlags.Instance)
    .Where(prop => prop.GetIndexParameters().Length == 0 && prop.GetMethod != null);

This is the magic that allows us to get all the properties as key-value pairs.

Using anonymous types to add scope state

With the GetValuesAsDictionary() helper method, we can now build an extension method that lets us use anonymous types to pass key-value pairs to the BeginScope method:

public static class LoggerExtensions  
{
    public static IDisposable BeginScopeWith(this ILogger logger, object values)
    {
        var dictionary = DictionaryHelper.GetValuesAsDictionary(values);
        return logger.BeginScope(dictionary);
    }
}

Unfortunately, we have the same method overload problem as we saw with the tuples. If we call the method BeginScope, then the "native" generic method overload will always win (unless you explicitly cast to object in the calling code - yuk), and our extension would never be called. The anonymous type object would end up serialised to the Scope property instead of being dissected into key-value pairs. We can avoid the problem by giving our extension a different name, but it does feel a bit clumsy.

Still, this extension means our using statements are a lot shorter and easier to read, even more so than the tuple syntax I think:

using (_logger.BeginScopeWith(new { productId = productId, basketId = basketId }))  
using (_logger.BeginScopeWith(new { transactionId = transaction.Id }))  

A downside is that we can't use nameof() expressions to avoid typos in the key names any more. Instead, we could use inference, which also gets rid of the duplication:

using (_logger.BeginScopeWith(new { productId, basketId }))  
using (_logger.BeginScopeWith(new { transactionId = transaction.Id }))  

Personally, I think that's a pretty nice syntax, which is much easier to read compared to the original:

 using (_logger.BeginScope(new Dictionary<string, object> {
        { nameof(productId), productId }, { nameof(basketId), basketId} }))
 using (_logger.BeginScope(new Dictionary<string, object> {{ "transactionId", transaction.Id }}))

Other possibilities

The main other possibility that came to mind for this was to use dynamic and ExpandoObject, given that these are somewhat equivalent to a Dictionary<string,object> anyway. After a bit of playing I couldn't piece together anything that really worked, but if somebody else can come up with something, I'm all ears!

Summary

It bugs me that there's no way of writing either the tuple-based or object-based extension methods as overloads of BeginScope, rather than having to create a new name. It's maybe a bit silly, but I suspect it means that in practice I just won't end up using them, even though I think the anonymous type version in particular is much easier to read than the original dictionary-based version.

Even so, it was interesting to try and tackle this problem, and to look through the code that the ASP.NET team used to solve it previously. Even if I / you don't use these extension methods, it's another tool in the tool belt should it be useful in a different situation. As always, let me know if you have any comments / suggestions / issues, and thanks for reading.


Anuraj Parameswaran: jQuery Unobtrusive Ajax Helpers in ASP.NET Core

This post is about using jQuery Unobtrusive Ajax helpers in ASP.NET Core. The AjaxHelper class represents support for rendering HTML in AJAX scenarios within a view. If you're migrating an existing ASP.NET MVC project to ASP.NET Core MVC, there are no Ajax helpers or tag helpers available out of the box as a replacement. Instead, the ASP.NET Core team recommends data-* attributes. All the existing @Ajax.Form options are available as data-* attributes.
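As a rough illustration (the URL and element id below are placeholders, and the jquery-ajax-unobtrusive script must be referenced on the page), an @Ajax.BeginForm call from classic MVC maps to a plain form decorated with data-ajax-* attributes:

<form action="/Comments/Create" method="post"
      data-ajax="true"
      data-ajax-method="POST"
      data-ajax-mode="replace"
      data-ajax-update="#commentList">
    <input type="text" name="text" />
    <button type="submit">Add comment</button>
</form>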


Darrel Miller: OpenAPI is not what I thought

Sometimes I do my best thinking in the car, and today was an excellent example of this. I had a phone call today with the digital agency Authentic, who have been hired to help you stop saying Swagger when you mean OpenAPI. I'm only partially kidding. They asked me some hard questions about why I got involved in the OpenAPI Initiative, and about experiences I have had with OpenAPI delivering value. Apparently this started a chain reaction of noodling in my subconscious, because while driving my daughter to ballet, it hit me. I've been thinking about OpenAPI all wrong.



Let me be categorically clear. In the beginning, I was not a fan of Swagger.  Or WADL, or RAML, or API Blueprint or RADL.  I was, and largely still am, a card carrying Restafarian.  I wear that slur with pride.  I like to eat REST Pragmatists for breakfast.   Out of band coupling is the scourge of the Internet.  We’ve tried interface definition languages before. Remember WSDL?  Been there, done that. Please, not again.

An Inflection Point

The first chink in my armour of objections appeared at the API Strategy Conference in Austin in November 2015 (there's another one coming up soon in Portland: http://apistrat.com/). I watched Tony Tam do a workshop on Swagger. Truth be told, I only attended to see what trouble I could cause. It turns out he showed a tool called Swagger-Inflector, and I was captivated. Inflector used Swagger for a different purpose. It became a DSL for driving the routing of a Java-based HTTP API.

An Inside Job

It wasn't too long after that when I was asked if I would be interested in joining the OpenAPI Initiative. After fighting the hypermedia fight for more than 10 years, it was clear we were losing the war. The Swagger tooling provided value that developers building APIs wanted. Hypermedia wasn't solving problems they were facing; it was a promise to solve problems they might face in a few years by doing extra work up front. I understood the problems that Swagger/OpenAPI could cause, but I had a higher chance of convincing developers to stop drinking Mountain Dew than to pry a documentation generator from the hands of a dev with a deadline. If I were going to have any chance of having an impact, I was going to have to work from the inside.

No Escape

A big part of my day job involves dealing with OpenAPI descriptions. Our customers import them and export them. My team uses them to describe our management API, as do all the other Azure teams. As the de-facto "OpenAPI Guy" at Microsoft, I end up having a fair number of interactions with other teams about what works and what doesn't work in OpenAPI. A recurring theme is that people keep wanting to put stuff in OpenAPI that has no business in an OpenAPI description. At least that's how I perceived it until today.

Scope Creep?

OpenAPI descriptions are primarily used to drive consumer-facing artifacts. HTML documentation and client libraries are the most prominent examples. Interface descriptions should not contain implementation details. But I keep running into scenarios where people want to add stuff that seems like it would be useful. Credentials, rate limiting details, transformations, caching, CORS… the list continues to grow. I've considered the need for a second description document that contains those details and augments the OpenAPI description. I've considered adding an "x-implementation" object to the OpenAPI description. I've considered "x-impl-" prefixes to distinguish between implementation details and the actual interface description. But nothing has felt right. I didn't know why. Now I do. It's all implementation, with some subset being the interface. Which subset depends on your perspective.

Pivot And Think Bigger

Remember Swagger-inflector?  It didn’t use OpenAPI to describe the interface at all.  It was purely an implementation artifact.  You know why Restafarians get all uppity about OpenAPI?  Because as an interface description language it encourages out of band coupling that makes independent evolvability hard.  That thing that micro-services need so badly.

What if OpenAPI isn’t an interface definition language at all?  What if it is purely a declarative description of an HTTP implementation?  What if tooling allowed you to project some subset of the OpenAPI description to drive HTML documentation? And another subset could be used for client code generation? And a different subset for driving server routing, middleware, validation and language bindings. OpenAPI descriptions become a platform independent definition of all aspects of an HTTP API.

Common Goals

One of my original objections to OpenAPI descriptions is that they contained information that I didn’t think belonged in an interface description.  Declaring a fixed subset of status codes an operation returned seemed unnecessary and restrictive.  But for scaffolding servers, generating mock responses and ensuring consistency across APIs, having the needed status codes identified is definitely valuable.

For the hypermedia folks, their generated documentation would only be based on a projection of the link relations, media types, entry points and possibly schemas. For those who want more traditional operation-based documentation, that is fine too. It is the recognition of the projection step that is important. It allows us to ensure that private implementation details are not leaked to interface-driven artifacts.

Back to Work

Now I’m sure many people already perceive OpenAPI descriptions this way.  Well, where have you been my friends?  We need you contributing to the Github repo.  Me, I’m a bit slower and this only dawned on me today.  But hopefully, this will help me find even more ways to deliver value to developers via OpenAPI descriptions.

The other possibility of course is that people think I’m just plain wrong and that OpenAPI really is the description of an interface.


Anuraj Parameswaran: Getting started with SignalR using ASP.NET Core

This post is about getting started with SignalR in ASP.NET Core. SignalR is a framework for ASP.NET developers that makes developing real-time web functionality easy. SignalR allows bi-directional communication between server and client. Servers can now push content to connected clients instantly as it becomes available.


Andrew Lock: Creating an extension method for attaching key-value pairs to scope state using ASP.NET Core

Creating an extension method for attaching key-value pairs to scope state using ASP.NET Core

This is the first in a short series of posts exploring the process I went through to make working with scopes a little nicer in ASP.NET Core (and Serilog / Seq). In this post I'll create an extension method for logging a single key-value pair as a scope. In the next post, I'll extend this to multiple key-value pairs.

I'll start by presenting an overview of structured logging and why you should be using it in your applications. This is largely the same introduction as in my last post, so feel free to skip ahead if I'm preaching to the choir!

Next, I'll show how scopes are typically recorded in ASP.NET Core, with a particular focus on Serilog and Seq. This will largely demonstrate the semantics described by Nicholas Blumhardt in his post on the semantics of ILogger.BeginScope(), but it will also set the scene for the meat of this post. In particular, we'll take a look at the syntax needed to record scope state as a series of key-value pairs.

Finally, I'll show an extension method you can add to your application to make recording key-value scope state that little bit easier.

Introduction to structured logging

Structured logging involves associating key-value pairs with each log entry, instead of just outputting an unstructured string "message". For example, an unstructured log message, something that might be output to the console, might look something like:

info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]  
      Request starting HTTP/1.1 GET http://localhost:51919/

This message contains a lot of information, but if it's just stored as a string like this, then it's not easy to search or filter the messages. For example, what if you wanted to find all of the error messages generated by the WebHost class? You're limited to what you can achieve in a text editor - doable to an extent, but a lot of work.

The same method stored as a structured log would essentially be stored as a JSON object making it easily searchable, as something like:

{
    "eventLevel" : "Information",
    "category" : "Microsoft.AspNetCore.Hosting.Internal.WebHost",
    "eventId" : 1,
    "message" : "Request starting HTTP/1.1 GET http://localhost:51919/",
    "protocol" : "HTTP/1.1",
    "method" : "GET",
    "url" : "http://localhost:51919/"
}

The complete message is still there, but you also have each of the associated properties available for filtering without having to do any messy string processing.

Some of the most popular options for storing and searching structured logs are Elastic Search with a Kibana front end, or to use Seq. The Serilog logging provider also supports structured logging, and is typically used to write to both of these destinations.

Nicholas Blumhardt is behind both the Serilog provider and Seq, so I highly recommend checking out his blog if you're interested in structured logging. In particular, he recently wrote a post on how to easily integrate Serilog into ASP.NET Core 2.0 applications.

Adding additional properties using scopes

Once you're storing logs in a structured manner, it becomes far easier to query and analyse your log files. Structured logging can extract parameters from the format string passed in the log message, and attach these to the log itself.

For example, the log message Request starting {protocol} {method} {url} contains three parameters, protocol, method, and url, which can all be extracted as properties on the log.

The ASP.NET Core logging framework also includes the concept of scopes which lets you attach arbitrary additional data to all log messages inside the scope. For example, the following log entry has a format string parameter, {ActionName}, which would be attached to the log message, but it also contains four scopes:

using (_logger.BeginScope("Some name"))  
using (_logger.BeginScope(42))  
using (_logger.BeginScope("Formatted {WithValue}", 12345))  
using (_logger.BeginScope(new Dictionary<string, object> { ["ViaDictionary"] = 100 }))  
{
    _logger.LogInformation("Hello from the {ActionName}!", name);
}

The state passed in the call to ILogger.BeginScope(state) can be anything, as shown in this example. The problem is, how this state should be logged is not clearly defined by the ILogger interface, so it's up to the logger implementation to decide.

Luckily Nicholas Blumhardt has thought hard about this problem, and has baked his rules into the Serilog / Seq implementation. There are effectively three different rules:

  1. If the state is an IEnumerable<KeyValuePair<string, object>>, attach each KeyValuePair as a property to the log.
  2. If the state is a format string / message template, add the parameters as properties to the log, and the formatted string to a Scope property.
  3. For everything else, add it to the Scope property.

For the LogInformation call shown previously, these rules result in the WithValue, ViaDictionary, and Scope values being attached to the log:

(Screenshot: the resulting log entry in Seq, with the WithValue, ViaDictionary, and Scope properties attached.)

Adding correlation IDs using scope

Of all these rules, the most interesting to me is the IEnumerable<KeyValuePair<string, object>> rule, which allows attaching arbitrary key-values to the log as properties. A common problem when looking through logs is looking for relationships. For example, I want to see all logs related to a particular product ID, a particular user ID, or a transaction ID. These are commonly referred to as correlation IDs as they allow you to easily determine the relationship between different log messages.

My one bugbear is the somewhat lengthy syntax required to attach these correlation IDs to the log messages. Let's start with the following, highly contrived, code. We're simply adding a product to a basket, but I've added correlation IDs in scopes for the productId, the basketId and the transactionId:

public void Add(int productId, int basketId)  
{
    using (_logger.BeginScope(new Dictionary<string, object> {
        { nameof(productId), productId }, { nameof(basketId), basketId} }))
    {
        _logger.LogDebug("Adding product to basket");
        var product = _service.GetProduct();
        var basket = _service.GetBasket();

        using (var transaction = factory.Create())
        using (_logger.BeginScope(new Dictionary<string, object> {{ "transactionId", transaction.Id }}))
        {
            basket.Add(product);
            transaction.Submit();
            _logger.LogDebug("Product added to basket");
        }
    }
}

This code does exactly what I want, but it's a bit of an eye-sore. All those dictionaries flying around and nameof() to avoid typos is a bit ugly, so I wanted to see if I could tidy it up. I didn't want to go messing with the framework code, so I thought I would create a couple of extension methods to tidy up these common patterns.

Creating a single key-value pair scope state extension

In this post we'll start with the inner-most call to BeginScope<T>, in which we create a dictionary with a single key, transactionId. For this case I created a simple extension method that takes two parameters, the key name as a string, and the value as an object. These are used to initialise a Dictionary<string, object> which is passed to the underlying ILogger.BeginScope<T> method:

public static class LoggerExtensions  
{
    public static IDisposable BeginScope(this ILogger logger, string key, object value)
    {
        return logger.BeginScope(new Dictionary<string, object> { { key, value } });
    }
}

The underlying ILogger.BeginScope<T>(T state) method only has a single argument, so there's no issue with overload resolution here. With this small addition, our second using call has gone from this:

using (_logger.BeginScope(new Dictionary<string, object> {{ "transactionId", transaction.Id }}))  

to this:

using (_logger.BeginScope("transactionId", transaction.Id))  

Much nicer, I think you'll agree!

This was the most common use case that I was trying to tidy up, so stopping at this point would be perfectly reasonable. In fact, I could already use this to tidy up the first using method too, if I was happy to change the semantics somewhat. For example

using (_logger.BeginScope(new Dictionary<string, object> {{ nameof(productId), productId }, { nameof(basketId), basketId} }))  

could become

using (_logger.BeginScope(nameof(productId), productId))  
using (_logger.BeginScope(nameof(basketId), basketId))  

Not strictly the same, but not too bad. Still, I wanted to do better. In the next post I'll show some of the avenues I explored, their pros and cons, and the final extension method I settled on.

Summary

I consider structured logging to be a no-brainer when it comes to running apps in production, and key to that are correlation IDs applied to logs wherever possible. Serilog, Seq, and the ASP.NET Core logging framework make it possible to add arbitrary properties to a log message using ILogger.BeginScope(state), but the semantics of the method call are somewhat ill-defined. Consequently, in order for scope state to be used as correlation ID properties on the log message, the state must be an IEnumerable<KeyValuePair<string, object>>.

Manually creating a Dictionary<string,object> every time I wanted to add a correlation ID was a bit cumbersome, so I wrote a simple extension overload of the BeginScope method that takes a string key and an object value. This extension simply initialises a Dictionary<string, object> behind the scenes and calls the underlying BeginScope<T> method. This makes the call site easier to read when you are adding a single key-value pair.


Anuraj Parameswaran: Building ASP.NET Core web apps with VB.NET

This post is about developing ASP.NET Core applications with VB.NET. I started my career with VB 6.0, and my .NET programming with VB.NET. When Microsoft introduced ASP.NET Core, people were concerned about Web Pages and VB.NET. Even though no one liked it, everyone was using it. In ASP.NET Core 2.0, Microsoft introduced Razor Pages and support for developing .NET Core apps with VB.NET. Today I found a question about an ASP.NET Core web application template in VB.NET, so I thought of creating an ASP.NET Core Hello World app in VB.NET.


Damien Bowden: SignalR Group messages with ngrx and Angular

This article shows how SignalR can be used to send grouped messages to an Angular SignalR client, which uses ngrx to handle the SignalR events.

Code: https://github.com/damienbod/AspNetCoreAngularSignalR

Other posts in this series:

History

2017-11-05 Updated to Angular 5 and Typescript 2.6.1, SignalR 1.0.0-alpha2-final

SignalR Groups

SignalR allows messages to be sent to specific groups if required. You can read about this here:

https://docs.microsoft.com/en-us/aspnet/signalr/overview/guide-to-the-api/working-with-groups

The documentation is for the old SignalR, but most is still relevant.

To get started, add the SignalR Nuget package to the csproj file where the Hub(s) are to be implemented.

<PackageReference Include="Microsoft.AspNetCore.SignalR" Version="1.0.0-alpha2-final" />

In this application, the NewsItem class is used to send the messages between the SignalR clients and server.

namespace AspNetCoreAngularSignalR.SignalRHubs
{
    public class NewsItem
    {
        public string Header { get; set; }
        public string NewsText { get; set; }
        public string Author { get; set; }
        public string NewsGroup { get; set; }
    }
}

The NewsHub class implements the SignalR Hub, which can send messages containing NewsItem objects, and lets clients join or leave a SignalR group. When the Send method is called, the hub uses the NewsGroup property to send the message only to clients in that group. If a client is not a member of the group, it will receive no message.

using Microsoft.AspNetCore.SignalR;
using System.Threading.Tasks;

namespace AspNetCoreAngularSignalR.SignalRHubs
{
    public class NewsHub : Hub
    {
        public Task Send(NewsItem newsItem)
        {
            return Clients.Group(newsItem.NewsGroup).InvokeAsync("Send", newsItem);
        }

        public async Task JoinGroup(string groupName)
        {
            await Groups.AddAsync(Context.ConnectionId, groupName);
            await Clients.Group(groupName).InvokeAsync("JoinGroup", groupName);
        }

        public async Task LeaveGroup(string groupName)
        {
            await Clients.Group(groupName).InvokeAsync("LeaveGroup", groupName);
            await Groups.RemoveAsync(Context.ConnectionId, groupName);
        }
    }
}

The SignalR hub is configured in the Startup class. The path defined when mapping the hub must match the configuration in the SignalR client.

app.UseSignalR(routes =>
{
	routes.MapHub<NewsHub>("looney");
});
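The SignalR services also need to be registered in ConfigureServices; a minimal sketch (the same registration is shown in the getting-started post further down):

public void ConfigureServices(IServiceCollection services)
{
	...
	services.AddSignalR();
	...
}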

Angular Service for the SignalR client

To use SignalR in the Angular application, the npm package @aspnet/signalr-client needs to be added to the package.json file.

"@aspnet/signalr-client": "1.0.0-alpha2-final"

The Angular NewsService is used to send SignalR events to the ASP.NET Core server and also to handle the messages received from the server. The send, joinGroup and leaveGroup functions are used in the ngrx store effects and the init method adds event handlers for SignalR events and dispatches ngrx actions when a message is received.

import 'rxjs/add/operator/map';

import { HttpClient, HttpHeaders } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs/Observable';

import { HubConnection } from '@aspnet/signalr-client';
import { NewsItem } from './models/news-item';
import { Store } from '@ngrx/store';
import * as NewsActions from './store/news.action';

@Injectable()
export class NewsService {

    private _hubConnection: HubConnection;
    private actionUrl: string;
    private headers: HttpHeaders;

    constructor(private http: HttpClient,
        private store: Store<any>
    ) {
        this.init();
        this.actionUrl = 'http://localhost:5000/api/news/';

        this.headers = new HttpHeaders();
        this.headers = this.headers.set('Content-Type', 'application/json');
        this.headers = this.headers.set('Accept', 'application/json');
    }

    send(newsItem: NewsItem): NewsItem {
        this._hubConnection.invoke('Send', newsItem);
        return newsItem;
    }

    joinGroup(group: string): void {
        this._hubConnection.invoke('JoinGroup', group);
    }

    leaveGroup(group: string): void {
        this._hubConnection.invoke('LeaveGroup', group);
    }

    getAllGroups(): Observable<string[]> {
        return this.http.get<string[]>(this.actionUrl, { headers: this.headers });
    }

    private init() {

        this._hubConnection = new HubConnection('/looney');

        this._hubConnection.on('Send', (newsItem: NewsItem) => {
            this.store.dispatch(new NewsActions.ReceivedItemAction(newsItem));
        });

        this._hubConnection.on('JoinGroup', (data: string) => {
            console.log('received data from the hub');
            console.log(data);
            this.store.dispatch(new NewsActions.ReceivedGroupJoinedAction(data));
        });

        this._hubConnection.on('LeaveGroup', (data: string) => {
            this.store.dispatch(new NewsActions.ReceivedGroupLeftAction(data));
        });

        this._hubConnection.on('History', (newsItems: NewsItem[]) => {
            console.log('received history from the hub');
            console.log(newsItems);
            this.store.dispatch(new NewsActions.ReceivedGroupHistoryAction(newsItems));
        });

        this._hubConnection.start()
            .then(() => {
                console.log('Hub connection started')
            })
            .catch(() => {
                console.log('Error while establishing connection')
            });
    }

}

Using ngrx to manage SignalR events

The NewsState interface is used to save the application state created from the SignalR events, and the user interactions.

import { NewsItem } from '../models/news-item';

export interface NewsState {
    newsItems: NewsItem[],
    groups: string[]
};

The news.action classes define the actions for the events which are dispatched from the Angular components, the SignalR Angular service, or the ngrx effects. These actions are used in the hubConnection.on event handlers, which receive the SignalR messages and dispatch the proper action.

import { Action } from '@ngrx/store';
import { NewsItem } from '../models/news-item';

export const JOIN_GROUP = '[news] JOIN_GROUP';
export const LEAVE_GROUP = '[news] LEAVE_GROUP';
export const JOIN_GROUP_COMPLETE = '[news] JOIN_GROUP_COMPLETE';
export const LEAVE_GROUP_COMPLETE = '[news] LEAVE_GROUP_COMPLETE';
export const SEND_NEWS_ITEM = '[news] SEND_NEWS_ITEM';
export const SEND_NEWS_ITEM_COMPLETE = '[news] SEND_NEWS_ITEM_COMPLETE';
export const RECEIVED_NEWS_ITEM = '[news] RECEIVED_NEWS_ITEM';
export const RECEIVED_GROUP_JOINED = '[news] RECEIVED_GROUP_JOINED';
export const RECEIVED_GROUP_LEFT = '[news] RECEIVED_GROUP_LEFT';

export class JoinGroupAction implements Action {
    readonly type = JOIN_GROUP;

    constructor(public group: string) { }
}

export class LeaveGroupAction implements Action {
    readonly type = LEAVE_GROUP;

    constructor(public group: string) { }
}


export class JoinGroupActionComplete implements Action {
    readonly type = JOIN_GROUP_COMPLETE;

    constructor(public group: string) { }
}

export class LeaveGroupActionComplete implements Action {
    readonly type = LEAVE_GROUP_COMPLETE;

    constructor(public group: string) { }
}
export class SendNewsItemAction implements Action {
    readonly type = SEND_NEWS_ITEM;

    constructor(public newsItem: NewsItem) { }
}

export class SendNewsItemActionComplete implements Action {
    readonly type = SEND_NEWS_ITEM_COMPLETE;

    constructor(public newsItem: NewsItem) { }
}

export class ReceivedItemAction implements Action {
    readonly type = RECEIVED_NEWS_ITEM;

    constructor(public newsItem: NewsItem) { }
}

export class ReceivedGroupJoinedAction implements Action {
    readonly type = RECEIVED_GROUP_JOINED;

    constructor(public group: string) { }
}

export class ReceivedGroupLeftAction implements Action {
    readonly type = RECEIVED_GROUP_LEFT;

    constructor(public group: string) { }
}

export type Actions
    = JoinGroupAction
    | LeaveGroupAction
    | JoinGroupActionComplete
    | LeaveGroupActionComplete
    | SendNewsItemAction
    | SendNewsItemActionComplete
    | ReceivedItemAction
    | ReceivedGroupJoinedAction
    | ReceivedGroupLeftAction;


The newsReducer ngrx reducer class receives the actions and changes the state as required. For example, when a RECEIVED_NEWS_ITEM event is sent from the Angular SignalR service, it creates a new state with the new message appended to the existing items.

import { NewsState } from './news.state';
import { NewsItem } from '../models/news-item';
import { Action } from '@ngrx/store';
import * as newsAction from './news.action';

export const initialState: NewsState = {
    newsItems: [],
    groups: ['group']
};

export function newsReducer(state = initialState, action: newsAction.Actions): NewsState {
    switch (action.type) {

        case newsAction.RECEIVED_GROUP_JOINED:
            return Object.assign({}, state, {
                newsItems: state.newsItems,
                groups: (state.groups.indexOf(action.group) > -1) ? state.groups : state.groups.concat(action.group)
            });

        case newsAction.RECEIVED_NEWS_ITEM:
            return Object.assign({}, state, {
                newsItems: state.newsItems.concat(action.newsItem),
                groups: state.groups
            });

        case newsAction.RECEIVED_GROUP_LEFT:
            const data = [];
            for (const entry of state.groups) {
                if (entry !== action.group) {
                    data.push(entry);
                }
            }
            console.log(data);
            return Object.assign({}, state, {
                newsItems: state.newsItems,
                groups: data
            });
        default:
            return state;

    }
}

The ngrx store is configured in the module class.

StoreModule.forFeature('news', {
     newsitems: newsReducer
}),
 EffectsModule.forFeature([NewsEffects])
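The NewsEffects class itself is not listed in this post. A minimal sketch of what one of its effects might look like, assuming ngrx 4 and the NewsService shown above (the import paths and the exact effects in the repository may differ):

import { Injectable } from '@angular/core';
import { Actions, Effect } from '@ngrx/effects';
import 'rxjs/add/operator/map';

import { NewsService } from '../news.service';
import * as NewsActions from './news.action';

@Injectable()
export class NewsEffects {

    // When a JOIN_GROUP action is dispatched, invoke the hub method via the
    // NewsService and emit the corresponding *_COMPLETE action for the reducer.
    @Effect() joinGroup$ = this.actions$
        .ofType(NewsActions.JOIN_GROUP)
        .map((action: NewsActions.JoinGroupAction) => {
            this.newsService.joinGroup(action.group);
            return new NewsActions.JoinGroupActionComplete(action.group);
        });

    constructor(private actions$: Actions, private newsService: NewsService) { }
}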

The store is then used in the different Angular components. The components only use the ngrx store to send and receive the SignalR data.

import { Component, OnInit } from '@angular/core';
import { Observable } from 'rxjs/Observable';
import { Store } from '@ngrx/store';
import { NewsState } from '../store/news.state';
import * as NewsActions from '../store/news.action';
import { NewsItem } from '../models/news-item';

@Component({
    selector: 'app-news-component',
    templateUrl: './news.component.html'
})

export class NewsComponent implements OnInit {
    public async: any;
    newsItem: NewsItem;
    group = 'group';
    newsState$: Observable<NewsState>;

    constructor(private store: Store<any>) {
        this.newsState$ = this.store.select<NewsState>(state => state.news.newsitems);
        this.newsItem = new NewsItem();
        this.newsItem.AddData('', '', 'me', this.group);
    }

    public sendNewsItem(): void {
        this.newsItem.NewsGroup = this.group;
        this.store.dispatch(new NewsActions.SendNewsItemAction(this.newsItem));
    }

    public join(): void {
        this.store.dispatch(new NewsActions.JoinGroupAction(this.group));
    }

    public leave(): void {
        this.store.dispatch(new NewsActions.LeaveGroupAction(this.group));
    }

    ngOnInit() {
    }
}

The component template then displays the data as required.

<div class="container-fluid">

    <h1>Send some basic news messages</h1>

    <div class="row">
        <form class="form-inline" >
            <div class="form-group">
                <label for="group">Group</label>
                <input type="text" class="form-control" id="group" placeholder="your group..." name="group" [(ngModel)]="group" required>
            </div>
            <button class="btn btn-primary" (click)="join()">Join</button>
            <button class="btn btn-primary" (click)="leave()">Leave</button>
        </form>
    </div>
    <hr />
    <div class="row">
        <form class="form" (ngSubmit)="sendNewsItem()" #newsItemForm="ngForm">
            <div class="form-group">
                <label for="header">Header</label>
                <input type="text" class="form-control" id="header" placeholder="your header..." name="header" [(ngModel)]="newsItem.header" required>
            </div>
            <div class="form-group">
                <label for="newsText">Text</label>
                <input type="text" class="form-control" id="newsText" placeholder="your newsText..." name="newsText" [(ngModel)]="newsItem.newsText" required>
            </div>
            <div class="form-group">
                <label for="author">Author</label>
                <input type="text" class="form-control" id="author" placeholder="your author..." name="author" [(ngModel)]="newsItem.author" required>
            </div>
            <button type="submit" class="btn btn-primary" [disabled]="!newsItemForm.valid">Send News to: {{group}}</button>
        </form>
    </div>

    <div class="row" *ngIf="(newsState$|async)?.newsItems.length > 0">
        <div class="table-responsive">
            <table class="table table-striped">
                <thead>
                    <tr>
                        <th>#</th>
                        <th>header</th>
                        <th>Text</th>
                        <th>Author</th>
                        <th>Group</th>
                    </tr>
                </thead>
                <tbody>
                    <tr *ngFor="let item of (newsState$|async)?.newsItems; let i = index">
                        <td>{{i + 1}}</td>
                        <td>{{item.header}}</td>
                        <td>{{item.newsText}}</td>
                        <td>{{item.author}}</td>
                        <td>{{item.newsGroup}}</td>
                    </tr>
                </tbody>
            </table>
        </div>
    </div>
 
    <div class="row" *ngIf="(newsState$|async)?.newsItems.length <= 0">
        <span>No news items</span>
    </div>
</div>

When the application is started, SignalR messages can be sent, received and displayed in the running instances of the Angular application.


Links

https://github.com/aspnet/SignalR

https://github.com/aspnet/SignalR#readme

https://github.com/ngrx

https://www.npmjs.com/package/@aspnet/signalr-client

https://dotnet.myget.org/F/aspnetcore-ci-dev/api/v3/index.json

https://dotnet.myget.org/F/aspnetcore-ci-dev/npm/

https://dotnet.myget.org/feed/aspnetcore-ci-dev/package/npm/@aspnet/signalr-client

https://www.npmjs.com/package/msgpack5



Damien Bowden: Getting started with SignalR using ASP.NET Core and Angular

This article shows how to set up a first SignalR Hub in ASP.NET Core 2.0 and use it with an Angular client. SignalR will be released with dotnet 2.1. Thanks to Dennis Alberti for his help in setting up the code example.

Code: https://github.com/damienbod/AspNetCoreAngularSignalR

Other posts in this series:

History

2017-11-05 Updated to Angular 5 and Typescript 2.6.1, SignalR 1.0.0-alpha2-final
2017-09-15: Updated @aspnet/signalr-client to use npm feed, and 1.0.0-alpha1-final

The required SignalR NuGet and npm packages are at present hosted on MyGet. You need to add the SignalR packages to the csproj file. To use the MyGet feed, add https://dotnet.myget.org/F/aspnetcore-ci-dev/api/v3/index.json to your package sources.

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
    <PackageReference Include="Microsoft.AspNetCore.SignalR" Version="1.0.0-alpha2-final" />
    <PackageReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Design" Version="2.0.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite" Version="2.0.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="2.0.0" PrivateAssets="All" />
  </ItemGroup>

  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="2.0.0" />
    <DotNetCliToolReference Include="Microsoft.Extensions.SecretManager.Tools" Version="2.0.0" />
    <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="2.0.0" />
  </ItemGroup>
</Project>

Now create a simple default hub.

using Microsoft.AspNetCore.SignalR;
using System.Threading.Tasks;

namespace AspNetCoreSignalr.SignalRHubs
{
    public class LoopyHub : Hub
    {
        public Task Send(string data)
        {
            return Clients.All.InvokeAsync("Send", data);
        }
    }
}

Add the SignalR configuration in the startup class. The hub which was created before needs to be added in the UseSignalR extension method.

public void ConfigureServices(IServiceCollection services)
{
	...
	services.AddSignalR();
	...
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	...

	app.UseSignalR(routes =>
	{
		routes.MapHub<LoopyHub>("loopy");
	});

	...
}

Set up the Angular application. The Angular application is built using webpack, and all dependencies are added to the package.json file.

You can use the MyGet npm feed if you want to use the aspnetcore-ci-dev builds. You can do this using a .npmrc file in the project root and adding the registry path. If you are using the released npm package, do not add this.

@aspnet:registry=https://dotnet.myget.org/f/aspnetcore-ci-dev/npm/

Now add the required SignalR npm packages to the package.json file. Using the npm package from the npm registry:

 "dependencies": {
    "@angular/animations": "5.0.0",
    "@angular/common": "5.0.0",
    "@angular/compiler": "5.0.0",
    "@angular/compiler-cli": "5.0.0",
    "@angular/core": "5.0.0",
    "@angular/forms": "5.0.0",
    "@angular/http": "5.0.0",
    "@angular/platform-browser": "5.0.0",
    "@angular/platform-browser-dynamic": "5.0.0",
    "@angular/platform-server": "5.0.0",
    "@angular/router": "5.0.0",
    "@angular/upgrade": "5.0.0",
    "@ngrx/effects": "^4.1.0",
    "@ngrx/store": "^4.1.0",
    "@ngrx/store-devtools": "^4.0.0",
    "bootstrap": "3.3.7",
    "core-js": "2.5.1",
    "ie-shim": "0.1.0",

    "msgpack5": "^3.5.1",
    "@aspnet/signalr-client": "1.0.0-alpha2-final",

    "rxjs": "5.5.2",
    "zone.js": "0.8.18"
  },

Add the SignalR client code. In this basic example, it is just added directly in a component. The sendMessage function sends messages, and the hubConnection.on function receives all messages, including its own.

import { Component, OnInit } from '@angular/core';
import { Observable } from 'rxjs/Observable';
import { HubConnection } from '@aspnet/signalr-client';

@Component({
    selector: 'app-home-component',
    templateUrl: './home.component.html'
})

export class HomeComponent implements OnInit {
    private _hubConnection: HubConnection;
    public async: any;
    message = '';
    messages: string[] = [];

    constructor() {
    }

    public sendMessage(): void {
        const data = `Sent: ${this.message}`;

        this._hubConnection.invoke('Send', data);
        this.messages.push(data);
    }

    ngOnInit() {
        this._hubConnection = new HubConnection('/loopy');

        this._hubConnection.on('Send', (data: any) => {
            const received = `Received: ${data}`;
            this.messages.push(received);
        });

        this._hubConnection.start()
            .then(() => {
                console.log('Hub connection started')
            })
            .catch(err => {
                console.log('Error while establishing connection')
            });
    }

}

The messages are then displayed in the component template.

<div class="container-fluid">

    <h1>Send some basic messages</h1>


    <div class="row">
        <form class="form-inline" (ngSubmit)="sendMessage()" #messageForm="ngForm">
            <div class="form-group">
                <label class="sr-only" for="message">Message</label>
                <input type="text" class="form-control" id="message" placeholder="your message..." name="message" [(ngModel)]="message" required>
            </div>
            <button type="submit" class="btn btn-primary" [disabled]="!messageForm.valid">Send SignalR Message</button>
        </form>
    </div>
    <div class="row" *ngIf="messages.length > 0">
        <div class="table-responsive">
            <table class="table table-striped">
                <thead>
                    <tr>
                        <th>#</th>
                        <th>Messages</th>
                    </tr>
                </thead>
                <tbody>
                    <tr *ngFor="let message of messages; let i = index">
                        <td>{{i + 1}}</td>
                        <td>{{message}}</td>
                    </tr>
                </tbody>
            </table>
        </div>
    </div>
    <div class="row" *ngIf="messages.length <= 0">
        <span>No messages</span>
    </div>
</div>

Now the first really simple SignalR Hub is set up, and an Angular client can send and receive messages.

Links:

https://github.com/aspnet/SignalR#readme

https://www.npmjs.com/package/@aspnet/signalr-client

https://dotnet.myget.org/F/aspnetcore-ci-dev/api/v3/index.json

https://dotnet.myget.org/F/aspnetcore-ci-dev/npm/

https://dotnet.myget.org/feed/aspnetcore-ci-dev/package/npm/@aspnet/signalr-client

https://www.npmjs.com/package/msgpack5



Andrew Lock: How to include scopes when logging exceptions in ASP.NET Core

How to include scopes when logging exceptions in ASP.NET Core

This post describes how to work around an issue I ran into when logging exceptions that occur inside a scope block in ASP.NET Core. I'll provide a brief background on logging in ASP.NET Core, structured logging, and the concept of scopes. Then I'll show how exceptions can cause you to lose an associated scope, and how to get round this using a neat trick with exception filters.

tl;dr; Exception filters are executed in the same scope as the original exception, so you can use them to write logs in the original context, before the using scope blocks are disposed.

Logging in ASP.NET Core

ASP.NET Core includes logging infrastructure that makes it easy to write logs to a variety of different outputs, such as the console, a file, or the Windows EventLog. The logging abstractions are used throughout the ASP.NET Core framework libraries, so you can even get log messages from deep inside infrastructure libraries like Kestrel and EF Core if you like.

The logging abstractions include common features like different event levels, applying unique ids to specific logs, and event categories for tracking which class created the log message, as well as the ability to use structured logging for easier parsing of logs.

Structured logging is especially useful, as it makes finding and diagnosing issues so much easier in production. I'd go as far as to say that it should be absolutely required if you're running an app in production.

Introduction to structured logging

Structured logging basically involves associating key-value pairs with each log entry, instead of a simple string "message". For example, a non-structured log message might look something like:

info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]  
      Request starting HTTP/1.1 GET http://localhost:51919/

This message contains a lot of information, but if it's just stored as a string like this, then it's not easy to search or filter the messages. For example, what if you wanted to find all of the error messages generated by the WebHost class? You could probably put together a regex to extract all the information, but that's a lot of work.

The same message stored as a structured log would essentially be a JSON object, making it easily searchable, something like:

{
    "eventLevel" : "Information",
    "category" : "Microsoft.AspNetCore.Hosting.Internal.WebHost",
    "eventId" : 1,
    "message" : "Request starting HTTP/1.1 GET http://localhost:51919/",
    "protocol" : "HTTP/1.1",
    "method" : "GET",
    "url" : "http://localhost:51919/"
}

The complete message is still there, but you also have each of the associated properties available without having to do any messy string processing. Nicholas Blumhardt has a great explanation of the benefits in this stack overflow answer.
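From the application's point of view, you get this behaviour by writing the log with a message template rather than a pre-formatted string. A minimal sketch (the placeholder names here are illustrative, not the ones the framework uses internally):

// Named placeholders in the message template are captured as key-value
// properties on the log entry by structured logging providers
_logger.LogInformation(
    "Request starting {Protocol} {Method} {Url}",
    "HTTP/1.1", "GET", "http://localhost:51919/");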

Now, as these logs are no longer simple strings, they can't just be written to the console, or stored in a file - they need dedicated storage. Some of the most popular options are to store the logs in Elastic Search with a Kibana front end, or to use Seq. The Serilog logging provider also supports structured logging, and is typically used to write to both of these destinations.

Nicholas Blumhardt is behind both the Serilog provider and Seq, so I highly recommend checking out his blog if you're interested in structured logging. In particular, he recently wrote a post on how to easily integrate Serilog into ASP.NET Core 2.0 applications.

Adding additional properties using scopes

In some situations, you might like to add the same values to every log message that you write. For example, you might want to add a database transaction id to every log message until that transaction is committed.

You could manually add the id to every relevant message, but ASP.NET Core also provides the
concept of scopes. You can create a new scope in a using block, passing in some state you want to log, and it will be written to each log message inside the using block.

You don't have to be using structured logging to use scopes - you can add them to the console logger for example - but they make the most sense in terms of structured logging.

For example, the following sample taken from the serilog-aspnetcore package (the recommended package for easily adding Serilog to ASP.NET Core 2.0 apps) demonstrates multiple nested scopes in the Get() method. Calling _logger.BeginScope<T>(T state) creates a new scope with the provided state.

[Route("api/[controller]")]
public class ScopesController : Controller  
{
    ILogger<ScopesController> _logger;

    public ScopesController(ILogger<ScopesController> logger)
    {
        _logger = logger;
    }

    [HttpGet]
    public IEnumerable<string> Get()
    {
        _logger.LogInformation("Before");

        using (_logger.BeginScope("Some name"))
        using (_logger.BeginScope(42))
        using (_logger.BeginScope("Formatted {WithValue}", 12345))
        using (_logger.BeginScope(new Dictionary<string, object> { ["ViaDictionary"] = 100 }))
        {
            _logger.LogInformation("Hello from the Index!");
            _logger.LogDebug("Hello is done");
        }

        _logger.LogInformation("After");

        return new string[] { "value1", "value2" };
    }
}

Running this application and hitting the action method produces logs similar to the following in Seq:

[Image: the log entries in Seq, showing the scope values attached to each entry]

As you can see, you can store anything as the state parameter T - a string, an integer, or a Dictionary<string, object> of values. Seq handles these scope state values in two different ways:

  • integers, strings and formatted strings are added to an array of objects on the Scope property
  • Parameters and values from formatted strings, and Dictionary<string, object> are added directly to the log entry as key-value pairs.

Surprise surprise, Nicholas Blumhardt also has a post on what to make of these values, how logging providers should handle them, and how to use them!

Exceptions inside scope blocks lose the scope

Scopes work well for this situation when you want to attach additional values to every log message, but there's a problem. What if an exception occurs inside the scope using block? The scope probably contains some very useful information for debugging the problem, so naturally you'd like to include it in the error logs.

If you include a try-catch block inside the scope block, then you're fine - you can log the errors there and the scope will be included as you'd expect.
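For example, something like the following minimal sketch (reusing the logger and scope style from the earlier sample) logs the error while the scope is still active:

using (_logger.BeginScope(new Dictionary<string, object> { ["ViaDictionary"] = 100 }))
{
    try
    {
        // ... work that might throw an exception ...
    }
    catch (Exception ex)
    {
        // The using block has not been disposed yet, so the ViaDictionary scope
        // value is still attached to this log entry
        _logger.LogError(ex, "An unexpected exception occurred");
    }
}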

But what if the try-catch block surrounds the using blocks? For example, imagine the previous example, but this time we have a try-catch block in the method, and an exception is thrown inside the using blocks:

[Route("api/[controller]")]
public class ScopesController : Controller  
{
    ILogger<ScopesController> _logger;

    public ScopesController(ILogger<ScopesController> logger)
    {
        _logger = logger;
    }

    // GET api/scopes
    [HttpGet]
    public IEnumerable<string> Get()
    {
        _logger.LogInformation("Before");
        try
        {
            using (_logger.BeginScope("Some name"))
            using (_logger.BeginScope(42))
            using (_logger.BeginScope("Formatted {WithValue}", 12345))
            using (_logger.BeginScope(new Dictionary<string, object> { ["ViaDictionary"] = 100 }))
            {
                // An unexpected problem!
                throw new Exception("Oops, something went wrong!");
                _logger.LogInformation("Hello from the Index!");
                _logger.LogDebug("Hello is done");
            }

            _logger.LogInformation("After");

            return new string[] { "value1", "value2" };
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "An unexpected exception occured");
            return new string[] { };
        }
    }
}

Obviously this is a trivial example, you could easily put the try-catch block inside the using blocks, but in reality the scope blocks and exception could occur several layers deep inside some service.

Unfortunately, if you look at the error logged in Seq, you can see that the scopes have all been lost. There's no Scope, WithValue, or ViaDictionary properties:

[Image: the error log entry in Seq, with no Scope, WithValue or ViaDictionary properties]

At the point the exception is logged, the using blocks have all been disposed, and so the scopes have been lost. Far from ideal, especially if the scopes contained information that would help debug why the exception occurred!

Using exception filters to capture scopes

So how can we get the best of both worlds, and record the scope both for successful logs and errors? The answer was buried in an issue in the Serilog repo, and uses a "common and accepted form of 'abuse'" by using an exception filter for side effects.

Exception filters are a C# 6 feature that lets you conditionally catch exceptions in a try-catch block:

try  
{
  // Something throws an exception
}
catch(MyException ex) when (ex.MyValue == 3)  
{
  // Only caught if the expression filter evaluates
  // to true, i.e. if ex.MyValue == 3
}

If the filter evaluates to true, the catch block executes; if it evaluates to false, the catch block is ignored, and the exception continues to bubble up the call stack until it is handled.

There is a lesser-known "feature" of exception filters that we can make use of here - the code in an exception filter runs in the same context in which the original exception occurred - the stack is unharmed, and is only unwound if the exception filter evaluates to true.

We can use this feature to allow recording the scopes at the location the exception occurs. The helper method LogError(exception) simply writes the exception to the logger when it is called as part of an exception filter using when (LogError(ex)). Returning true means the catch block is executed too, but only after the exception has been logged with its scopes.

[Route("api/[controller]")]
public class ScopesController : Controller  
{
    ILogger<ScopesController> _logger;

    public ScopesController(ILogger<ScopesController> logger)
    {
        _logger = logger;
    }

    // GET api/scopes
    [HttpGet]
    public IEnumerable<string> Get()
    {
        _logger.LogInformation("Before");
        try
        {
            using (_logger.BeginScope("Some name"))
            using (_logger.BeginScope(42))
            using (_logger.BeginScope("Formatted {WithValue}", 12345))
            using (_logger.BeginScope(new Dictionary<string, object> { ["ViaDictionary"] = 100 }))
            {
                throw new Exception("Oops, something went wrong!");
                _logger.LogInformation("Hello from the Index!");
                _logger.LogDebug("Hello is done");
            }

            _logger.LogInformation("After");

            return new string[] { "value1", "value2" };
        }
        catch (Exception ex) when (LogError(ex))
        {
            return new string[] { };
        }
    }

    bool LogError(Exception ex)
    {
        _logger.LogError(ex, "An unexpected exception occured");
        return true;
    }
}

Now when the exception occurs, it's logged with all the active scopes at the point the exception occurred (Scope, WithValue, or ViaDictionary), instead of the active scopes inside the catch block.

[Image: the error log entry in Seq, this time including the Scope, WithValue and ViaDictionary properties]

Summary

Structured logging is a great approach that makes filtering and searching logs after the fact much easier by storing key-value pairs of properties. You can add extra properties to each log by using scopes inside a using block. Every log written inside the using block will include the scope properties, but if an exception occurs, those scope values will be lost.

To work around this, you can use the C# 6 exception filters feature. Exception filters are executed in the same context as the original exception, so you can use them to capture the logging scope at the point the exception occurred, instead of the logging scope inside the catch block.


Damien Bowden: Getting started with Angular and Redux

This article shows how you could setup Redux in an Angular application using ngrx. Redux provides a really great way of managing state in an Angular application. State Management is hard, and usually ends up a mess when you invent it yourself. At present, Angular provides no recommendations or solution for this.

Thanks to Fabian Gosebrink for his help in learning ngrx and Redux. Thanks also to Philip Steinebrunner for his feedback.

Code: https://github.com/damienbod/AngularRedux

History

2017-11-05 Updated to Angular 5 and Typescript 2.6.1

The demo app uses an Angular component for displaying countries using the public API https://restcountries.eu/. The view displays regions and the countries per region. The data and the state of the component is implemented using ngrx.

Note: Read the Redux documentation to learn how it works. Here’s a quick summary of the redux store in this application:

  • There is just one store per application, although you can register additional reducers for your feature modules with StoreModule.forFeature() per module
  • The store has a state, actions, effects, and reducers
  • The actions define what can be done in the store. Components or effects dispatch these
  • effects are used to do API calls, etc., and are attached to actions
  • reducers are attached to actions and are used to change the state

The following steps explain what is required to set up the state management in the Angular application, which uses an Angular service to request the data from the public API.

Step 1: Add the ngrx packages

Add the latest ngrx npm packages to the package.json file in your project.

    "@ngrx/effects": "^4.1.0",
    "@ngrx/store": "^4.1.0",
    "@ngrx/store-devtools": "^4.0.0",

Step 2: Add the ngrx setup configuration to the app module.

In this app, there is a single Redux store, with feature state registered per module. The ngrx configuration needs to be added to the app.module and also to each child module as required. The StoreModule, EffectsModule and StoreDevtoolsModule are added to the imports array of the NgModule.

...

import { EffectsModule } from '@ngrx/effects';
import { StoreModule } from '@ngrx/store';
import { StoreDevtoolsModule } from '@ngrx/store-devtools';

@NgModule({
    imports: [
        ...
        StoreModule.forRoot({}),
        StoreDevtoolsModule.instrument({
            maxAge: 25 //  Retains last 25 states
        }),
        EffectsModule.forRoot([])
    ],

    declarations: [
        AppComponent
    ],

    bootstrap: [AppComponent],
})

export class AppModule { }

Step 3: Create the interface for the state.

This can be any type of object or array.

import { Region } from './../../models/region';

export interface CountryState {
    regions: Region[],
};

Step 4: Create the actions

Create the actions required by the components or the effects. The constructor params must match the params sent from the components or returned from the API calls.

import { Action } from '@ngrx/store';
import { Country } from './../../models/country';
import { Region } from './../../models/region';

export const SELECTALL = '[countries] Select All';
export const SELECTALL_COMPLETE = '[countries] Select All Complete';
export const SELECTREGION = '[countries] Select Region';
export const SELECTREGION_COMPLETE = '[countries] Select Region Complete';

export const COLLAPSEREGION = '[countries] COLLAPSE Region';

export class SelectAllAction implements Action {
    readonly type = SELECTALL;

    constructor() { }
}

export class SelectAllCompleteAction implements Action {
    readonly type = SELECTALL_COMPLETE;

    constructor(public countries: Country[]) { }
}

export class SelectRegionAction implements Action {
    readonly type = SELECTREGION;

    constructor(public region: Region) { }
}

export class SelectRegionCompleteAction implements Action {
    readonly type = SELECTREGION_COMPLETE;

    constructor(public region: Region) { }
}

export class CollapseRegionAction implements Action {
    readonly type = COLLAPSEREGION;

    constructor(public region: Region) { }
}

export type Actions
    = SelectAllAction
    | SelectAllCompleteAction
    | SelectRegionAction
    | SelectRegionCompleteAction
    | CollapseRegionAction;


Step 5: Create the effects

Create the effects to do the API calls. The effects are mapped to actions and when finished call another action.

import 'rxjs/add/operator/map';
import 'rxjs/add/operator/switchMap';

import { Injectable } from '@angular/core';
import { Actions, Effect } from '@ngrx/effects';
import { Action } from '@ngrx/store';
import { of } from 'rxjs/observable/of';
import { Observable } from 'rxjs/Rx';

import * as countryAction from './country.action';
import { Country } from './../../models/country';
import { CountryService } from '../../core/services/country.service';

@Injectable()
export class CountryEffects {

    @Effect() getAllPerRegion$: Observable<Action> = this.actions$.ofType(countryAction.SELECTREGION)
        .switchMap((action: Action) =>
            this.countryService.getAllPerRegion((action as countryAction.SelectRegionAction).region.name)
                .map((data: Country[]) => {
                    const region = { name: (action as countryAction.SelectRegionAction).region.name, expanded: true, countries: data};
                    return new countryAction.SelectRegionCompleteAction(region);
                })
                .catch(() => {
                    return of({ type: 'getAllPerRegion$' })
                })
        );
    constructor(
        private countryService: CountryService,
        private actions$: Actions
    ) { }
}

Step 6: Implement the reducers

Implement the reducer to change the state when required. The reducer takes an initial state and executes methods matching the defined actions which were dispatched from the components or the effects.

import { CountryState } from './country.state';
import { Region } from './../../models/region';
import * as countryAction from './country.action';

export const initialState: CountryState = {
    regions: [
        { name: 'Africa', expanded:  false, countries: [] },
        { name: 'Americas', expanded: false, countries: [] },
        { name: 'Asia', expanded: false, countries: [] },
        { name: 'Europe', expanded: false, countries: [] },
        { name: 'Oceania', expanded: false, countries: [] }
    ]
};

export function countryReducer(state = initialState, action: countryAction.Actions): CountryState {
    switch (action.type) {

        case countryAction.SELECTREGION_COMPLETE:
            return Object.assign({}, state, {
                regions: state.regions.map((item: Region) => {
                    return item.name === action.region.name ? Object.assign({}, item, action.region ) : item;
                })
            });

        case countryAction.COLLAPSEREGION:
            action.region.expanded = false;
            return Object.assign({}, state, {
                regions: state.regions.map((item: Region) => {
                    return item.name === action.region.name ? Object.assign({}, item, action.region ) : item;
                })
            });

        default:
            return state;

    }
}

Step 7: Configure the module.

The important thing here is how StoreModule.forFeature is configured. The configuration must match the definitions in the components which use the store.

import { CommonModule } from '@angular/common';
import { HttpClientModule } from '@angular/common/http';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';

import { CountryComponent } from './components/country.component';
import { CountryRoutes } from './country.routes';

import { EffectsModule } from '@ngrx/effects';
import { StoreModule } from '@ngrx/store';
import { CountryEffects } from './store/country.effects';
import { countryReducer } from './store/country.reducer';
import * as countryAction from './store/country.action';

@NgModule({
    imports: [
        CommonModule,
        FormsModule,
        HttpClientModule,
        CountryRoutes,
        StoreModule.forFeature('world', {
            regions: countryReducer, countryAction
        }),
        EffectsModule.forFeature([CountryEffects])
    ],

    declarations: [
        CountryComponent
    ],

    exports: [
        CountryComponent
    ]
})

export class CountryModule { }

Step 8: Create the component

Create the component which uses the store. The constructor configures the store to match the module configuration from forFeature and the state as required. User interactions dispatch the actions; where required, an action is handled by an effect (for example, to call the API), which then dispatches a further action, and a reducer function changes the state.

import { Component, OnInit } from '@angular/core';
import { Store } from '@ngrx/store';
import { Observable } from 'rxjs/Observable';

import { CountryState } from '../store/country.state';
import * as CountryActions from '../store/country.action';
import { Country } from './../../models/country';
import { Region } from './../../models/region';

@Component({
    selector: 'app-country-component',
    templateUrl: './country.component.html',
    styleUrls: ['./country.component.scss']
})

export class CountryComponent implements OnInit {

    public async: any;

    regionsState$: Observable<CountryState>;

    constructor(private store: Store<any>) {
        this.regionsState$ = this.store.select<CountryState>(state => state.world.regions);
    }

    ngOnInit() {
        this.store.dispatch(new CountryActions.SelectAllAction());
    }

    public getCountries(region: Region) {
        this.store.dispatch(new CountryActions.SelectRegionAction(region));
    }

    public collapse(region: Region) {
         this.store.dispatch(new CountryActions.CollapseRegionAction(region));
    }
}

Step 9: Use the state objects in the HTML template.

It is important not to forget to use the async pipe when using the state from ngrx. Now the view is independent of the API calls; when the state changes, the view is automatically updated, as are any other components which use the same state.

<div class="container-fluid">
    <div class="row" *ngIf="(regionsState$|async)?.regions?.length > 0">
        <div class="table-responsive">
            <table class="table">
                <thead>
                    <tr>
                        <th>#</th>
                        <th>Name</th>
                        <th>Population</th>
                        <th>Capital</th>
                        <th>Flag</th>
                    </tr>
                </thead>
                <tbody>
                    <ng-container *ngFor="let region of (regionsState$|async)?.regions; let i = index">
                        <tr>
                            <td class="text-left td-table-region" *ngIf="!region.expanded">
                                <span (click)="getCountries(region)">►</span>
                            </td>
                            <td class="text-left td-table-region" *ngIf="region.expanded">
                                <span type="button" (click)="collapse(region)">▼</span>
                            </td>
                            <td class="td-table-region">{{region.name}}</td>
                            <td class="td-table-region"> </td>
                            <td class="td-table-region"> </td>
                            <td class="td-table-region"> </td>
                        </tr>
                        <ng-container *ngIf="region.expanded">
                            <tr *ngFor="let country of region.countries; let i = index">
                                <td class="td-table-country">    {{i + 1}}</td>
                                <td class="td-table-country">{{country.name}}</td>
                                <td class="td-table-country" >{{country.population}}</td>
                                <td>{{country.capital}}</td>
                                <td><img width="100" [src]="country.flag"></td>
                            </tr>
                        </ng-container>
                    </ng-container>                                         
                </tbody>
            </table>
        </div>
    </div>

    <!--▼ ►   <span class="glyphicon glyphicon-ok" aria-hidden="true" style="color: darkgreen;"></span>-->
    <div class="row" *ngIf="(regionsState$|async)?.regions?.length <= 0">
        <span>No items found</span>
    </div>
</div>

Redux DEV Tools

The redux-devtools chrome extension is really excellent. Add this to Chrome and start the application.

When you start the application and open it in Chrome, the Redux state can be viewed, explored, changed and tested. This gives you an easy way to view the state and also to see what happened inside the application. You can even remove state changes using this tool to see a different history, or change the value of the actual state.

The actual state can be viewed:

Links:

https://github.com/ngrx

https://egghead.io/courses/getting-started-with-redux

http://redux.js.org/

https://github.com/ngrx/platform/blob/master/docs/store-devtools/README.md

https://chrome.google.com/webstore/detail/redux-devtools/lmhkpmbekcpmknklioeibfkpmmfibljd?hl=en

https://restcountries.eu/

http://onehungrymind.com/build-better-angular-2-application-redux-ngrx/

https://egghead.io/courses/building-a-time-machine-with-angular-2-and-rxjs



Andrew Lock: Creating a rolling file logging provider for ASP.NET Core 2.0

Creating a rolling file logging provider for ASP.NET Core 2.0

ASP.NET Core includes a logging abstraction that makes writing logs to multiple locations easy. All of the first-party libraries that make up ASP.NET Core and EF Core use this abstraction, and the vast majority of libraries written for ASP.NET Core will too. That means it's easy to aggregate the logs from your entire app, including the framework and your own code, into one place.

In this post I'll show how to create a logging provider that writes logs to the file system. In production, I'd recommend using a more fully-featured system like Serilog instead of this library, but I wanted to see what was involved to get a better idea of the process myself.

The code for the file logging provider is available on GitHub, or as the NetEscapades.Extensions.Logging.RollingFile package on NuGet.

The ASP.NET Core logging infrastructure

The ASP.NET Core logging infrastructure consists of three main components:

  • ILogger - Used by your app to create log messages.
  • ILoggerFactory - Creates instances of ILogger
  • ILoggerProvider - Controls where log messages are output. You can have multiple logging providers - every log message you write to an ILogger is written to the output locations for every configured logging provider in your app.


[Image: diagram of the relationship between ILogger, ILoggerFactory and ILoggerProvider]

When you want to write a log message in your application you typically use DI to inject an ILogger<T> into the class, where T is the name of the class. The T is used to control the category associated with the class.

For example, to write a log message in an ASP.NET Core controller, HomeController, you would inject the ILogger<HomeController> and call one of the logging extension methods on ILogger:

public class HomeController: Controller  
{
    private readonly ILogger<HomeController> _logger;
    public HomeController(ILogger<HomeController> logger)
    {
         _logger = logger;
    }

    public IActionResult Get()
    {
        _logger.LogInformation("Calling home controller action");
        return View();
    }
}

This will write a log message to each output of the configured logging providers, something like this (for the console logger):

info: ExampleApplication.Controllers.HomeController[0]  
      Calling home controller action

ASP.NET Core includes several logging providers out of the box, which you can use to write your log messages to various locations:

  • Console provider - writes messages to the Console
  • Debug provider - writes messages to the Debug window (e.g. when debugging in Visual Studio)
  • EventSource provider - writes messages using Event Tracing for Windows (ETW)
  • EventLog provider - writes messages to the Windows Event Log
  • TraceSource provider - writes messages using System.Diagnostics.TraceSource libraries
  • Azure App Service provider - writes messages to blob storage or files when running your app in Azure.

In ASP.NET Core 2.0, the console and Debug loggers are configured by default, but in production you'll probably want to write your logs somewhere more durable. In modern applications, you'll likely want to write to a centralised location, such as an Elastic Search cluster, Seq, elmah.io, or Loggr.

You can write your logs to most of these locations by adding logging providers for them directly to your application, but one provider is particularly conspicuous by its absence - a file provider. In this post I'll show how to implement a logging provider that writes your application logs to rolling files.

The logging library Serilog includes support for logging to files, as well as a multitude of other sinks. Rather than implementing your own logging provider as I have here, I strongly recommend you check it out. Nicholas Blumhardt has a post on adding Serilog to your ASP.NET Core 2.0 application here.

Creating A rolling file based logging provider

In actual fact, the ASP.NET Core framework does include a file logging provider, but it's wrapped up behind the Azure App Service provider. To create the file provider I mostly used files already part of the Microsoft.Extensions.Logging.AzureAppServices package, and exposed it as a logging provider in its own right. A bit of a cheat, but hey, "shoulders of giants" and all that.

Implementing a logging provider basically involves implementing two interfaces:

  • ILogger
  • ILoggerProvider

The AzureAppServices library includes some base classes for batching log messages up, and writing them on a background thread. That's important as logging should inherently be a quick and synchronous operation. Your app shouldn't know or care where the logs are being written, and it certainly shouldn't be waiting on file IO!
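For reference, the two interfaces are small; abridged from Microsoft.Extensions.Logging, they look roughly like this:

public interface ILoggerProvider : IDisposable
{
    // Called by the logger factory to create a logger for a given category
    ILogger CreateLogger(string categoryName);
}

public interface ILogger
{
    // Writes a single log entry
    void Log<TState>(LogLevel logLevel, EventId eventId, TState state,
        Exception exception, Func<TState, Exception, string> formatter);

    // Allows a provider to cheaply skip disabled levels
    bool IsEnabled(LogLevel logLevel);

    // Starts a logical logging scope
    IDisposable BeginScope<TState>(TState state);
}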

The batching logger provider

The BatchingLoggerProvider is an abstract class that encapsulates the process of writing logs to a concurrent collection and writing them on a background thread. The full source is here but the abridged version looks something like this:

public abstract class BatchingLoggerProvider : ILoggerProvider  
{
    protected BatchingLoggerProvider(IOptions<BatchingLoggerOptions> options)
    {
        // save options etc
        _interval = options.Value.Interval;
        // start the background task
        _outputTask = Task.Factory.StartNew<Task>(
            ProcessLogQueue,
            null,
            TaskCreationOptions.LongRunning);
    }

    // Implemented in derived classes to actually write the messages out
    protected abstract Task WriteMessagesAsync(IEnumerable<LogMessage> messages, CancellationToken token);

    // Take messages from concurrent queue and write them out
    private async Task ProcessLogQueue(object state)
    {
        while (!_cancellationTokenSource.IsCancellationRequested)
        {
            // Add pending messages to the current batch
            while (_messageQueue.TryTake(out var message))
            {
                _currentBatch.Add(message);
            }

            // Write the current batch out
            await WriteMessagesAsync(_currentBatch, _cancellationTokenSource.Token);
            _currentBatch.Clear();

            // Wait before writing the next batch
            await Task.Delay(_interval, _cancellationTokenSource.Token);
        }
    }

    // Add a message to the concurrent queue
    internal void AddMessage(DateTimeOffset timestamp, string message)
    {
        if (!_messageQueue.IsAddingCompleted)
        {
            _messageQueue.Add(new LogMessage { Message = message, Timestamp = timestamp }, _cancellationTokenSource.Token);
        }
    }

    public void Dispose()
    {
        // Finish writing messages out etc
    }

    // Create an instance of an ILogger, which is used to actually write the logs
    public ILogger CreateLogger(string categoryName)
    {
        return new BatchingLogger(this, categoryName);
    }

    private readonly List<LogMessage> _currentBatch = new List<LogMessage>();
    private readonly TimeSpan _interval;
    private BlockingCollection<LogMessage> _messageQueue = new BlockingCollection<LogMessage>(new ConcurrentQueue<LogMessage>());
    private Task _outputTask;
    private CancellationTokenSource _cancellationTokenSource = new CancellationTokenSource();
}

The BatchingLoggerProvider starts by creating a Task on a background thread that runs the ProcessLogQueue method. This method sits in a loop until the provider is disposed and the CancellationTokenSource is cancelled. It takes log messages off the concurrent (thread safe) queue, and adds them to a temporary list, _currentBatch. This list is passed to the abstract WriteMessagesAsync method, implemented by derived classes, which writes the actual logs to the destination.

The other most important method is CreateLogger(categoryName), which creates an instance of an ILogger that is injected into your classes. Our actual non-abstract provider implementation, the FileLoggerProvider, derives from the BatchingLoggerProvider:

[ProviderAlias("File")]
public class FileLoggerProvider : BatchingLoggerProvider  
{
    private readonly string _path;
    private readonly string _fileName;
    private readonly int? _maxFileSize;
    private readonly int? _maxRetainedFiles;

    public FileLoggerProvider(IOptions<FileLoggerOptions> options) : base(options)
    {
        var loggerOptions = options.Value;
        _path = loggerOptions.LogDirectory;
        _fileName = loggerOptions.FileName;
        _maxFileSize = loggerOptions.FileSizeLimit;
        _maxRetainedFiles = loggerOptions.RetainedFileCountLimit;
    }

    // Write the provided messages to the file system
    protected override async Task WriteMessagesAsync(IEnumerable<LogMessage> messages, CancellationToken cancellationToken)
    {
        Directory.CreateDirectory(_path);

        // Group messages by log date
        foreach (var group in messages.GroupBy(GetGrouping))
        {
            var fullName = GetFullName(group.Key);
            var fileInfo = new FileInfo(fullName);
            // If we've exceeded the max file size, don't write any logs
            if (_maxFileSize > 0 && fileInfo.Exists && fileInfo.Length > _maxFileSize)
            {
                return;
            }

            // Write the log messages to the file
            using (var streamWriter = File.AppendText(fullName))
            {
                foreach (var item in group)
                {
                    await streamWriter.WriteAsync(item.Message);
                }
            }
        }

        RollFiles();
    }

    // Get the file name
    private string GetFullName((int Year, int Month, int Day) group)
    {
        return Path.Combine(_path, $"{_fileName}{group.Year:0000}{group.Month:00}{group.Day:00}.txt");
    }

    private (int Year, int Month, int Day) GetGrouping(LogMessage message)
    {
        return (message.Timestamp.Year, message.Timestamp.Month, message.Timestamp.Day);
    }

    // Delete files if we have too many
    protected void RollFiles()
    {
        if (_maxRetainedFiles > 0)
        {
            var files = new DirectoryInfo(_path)
                .GetFiles(_fileName + "*")
                .OrderByDescending(f => f.Name)
                .Skip(_maxRetainedFiles.Value);

            foreach (var item in files)
            {
                item.Delete();
            }
        }
    }
}

The FileLoggerProvider implements the WriteMessagesAsync method by writing the log messages to the file system. Files are created with a standard format, so a new file is created every day. Only the last _maxRetainedFiles files are retained, as defined by the FileLoggerOptions.RetainedFileCountLimit property set on the IOptions<> object provided in the constructor.

Note In this implementation, once files exceed a maximum size, no further logs are written for that day. The default is set to 10MB, but you can change this on the FileLoggerOptions object.
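For context, this is a sketch of the options class, inferred from the properties used above; the property names come from the code, but the defaults shown here are assumptions, apart from the 10MB limit mentioned in the note:

public class FileLoggerOptions : BatchingLoggerOptions
{
    // Directory the log files are written to (default is an assumption)
    public string LogDirectory { get; set; } = "Logs";

    // Prefix for the log file names, e.g. logs-20171108.txt (default is an assumption)
    public string FileName { get; set; } = "logs-";

    // Once a day's file exceeds this size, no further logs are written for that day
    public int? FileSizeLimit { get; set; } = 10 * 1024 * 1024;

    // Maximum number of files to keep on disk (default is an assumption)
    public int? RetainedFileCountLimit { get; set; } = 2;
}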

The [ProviderAlias("File")] attribute defines the alias for the logger that you can use to configure log filtering. You can read more about log filtering in the docs.

The FileLoggerProvider is used by the ILoggerFactory to create an instance of the BatchingLogger, which implements ILogger, and is used to actually write the log messages.

The batching logger

The BatchingLogger is pretty simple. The main method, Log, passes messages to the provider by calling AddMessage. The methods you typically use in your app, such as LogError and LogInformation are actually just extension methods that call down to this underlying Log method.

public class BatchingLogger : ILogger  
{
    private readonly BatchingLoggerProvider _provider;
    private readonly string _category;

    public BatchingLogger(BatchingLoggerProvider loggerProvider, string categoryName)
    {
        _provider = loggerProvider;
        _category = categoryName;
    }

    public IDisposable BeginScope<TState>(TState state)
    {
        return null;
    }

    public bool IsEnabled(LogLevel logLevel)
    {
        return logLevel != LogLevel.None;
    }

    // Write a log message
    public void Log<TState>(DateTimeOffset timestamp, LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
    {
        if (!IsEnabled(logLevel))
        {
            return;
        }

        var builder = new StringBuilder();
        builder.Append(timestamp.ToString("yyyy-MM-dd HH:mm:ss.fff zzz"));
        builder.Append(" [");
        builder.Append(logLevel.ToString());
        builder.Append("] ");
        builder.Append(_category);
        builder.Append(": ");
        builder.AppendLine(formatter(state, exception));

        if (exception != null)
        {
            builder.AppendLine(exception.ToString());
        }

        _provider.AddMessage(timestamp, builder.ToString());
    }

    public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
    {
        Log(DateTimeOffset.Now, logLevel, eventId, state, exception, formatter);
    }
}

Hopefully this class is pretty self explanatory - most of the work is done in the logger provider.

The remaining piece of the puzzle is to provide the extension methods that let you easily configure the provider for your own app.

Extension methods to add the provider to your application

In ASP.NET Core 2.0, logging providers are added to your application by adding them directly to the WebHostBuilder in Program.cs. This is typically done using extension methods on the ILoggingBuilder. We can create a simple extension method, and even add an override to allow configuring the logging provider's options (filenames, intervals, file size limits etc).

public static class FileLoggerFactoryExtensions  
{
    public static ILoggingBuilder AddFile(this ILoggingBuilder builder)
    {
        builder.Services.AddSingleton<ILoggerProvider, FileLoggerProvider>();
        return builder;
    }

    public static ILoggingBuilder AddFile(this ILoggingBuilder builder, Action<FileLoggerOptions> configure)
    {
        builder.AddFile();
        builder.Services.Configure(configure);

        return builder;
    }
}

In ASP.NET Core 2.0, logging providers are added using DI, so adding our new logging provider just requires adding the FileLoggerProvider to DI, as in the AddFile() method above.

With the provider complete, we can add it to our application:

public class Program  
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .ConfigureLogging(builder => builder.AddFile()) // <- add this line
            .UseStartup<Startup>()
            .Build();
}

This adds the FileLoggerProvider to the application, in addition to the Console and Debug provider. Now when we write logs to our application, logs will also be written to a file:

[Image: example log file output written by the file logging provider]
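If you want to configure the provider at the same time, the second AddFile() overload shown earlier takes a configuration delegate; a sketch, with purely illustrative values:

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .ConfigureLogging(builder => builder.AddFile(options =>
        {
            options.LogDirectory = "Logs";       // folder to write the log files to
            options.FileName = "app-";           // prefix for the log file names
            options.RetainedFileCountLimit = 5;  // keep at most 5 daily files
        }))
        .UseStartup<Startup>()
        .Build();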

Summary

Creating an ILoggerProvider will rarely be necessary, especially thanks to established frameworks like Serilog and NLog that integrate with ASP.NET Core. Wherever possible, I suggest looking at one of these, but if you don't want to use a replacement framework like this, then using a dedicated ILoggerProvider is an option.

Implementing a new logging provider requires creating an ILogger implementation and an ILoggerProvider implementation. In this post I showed an example of a rolling file provider. For the full details and source code, check out the project on GitHub, or the NuGet package. All comments, bugs and suggestions welcome, and credit to the ASP.NET team for creating the code I based this on!


Andrew Lock: Aligning strings within string.Format and interpolated strings

Aligning strings within string.Format and interpolated strings

I was browsing through the MSDN docs the other day, trying to remind myself of the various standard ToString() format strings, when I spotted something I have somehow missed in all my years of .NET - alignment components.

This post is for those of you who have also managed to miss this feature, looking at how you can use alignment components both with string.Format and when you are using string interpolation.

Right-aligning currencies in format strings

I'm sure the vast majority of people already know how format strings work in general, so I won't dwell on it much here. In this post I'm going to focus on formatting numbers, as formatting currencies seems like the canonical use case for alignment components.

The following example shows a simple console program that formats three decimals as currencies:

class Program  
{
    readonly static decimal val1 = 1;
    readonly static decimal val2 = 12;
    readonly static decimal val3 = 1234.12m;

    static void Main(string[] args)
    {
        Console.OutputEncoding = System.Text.Encoding.Unicode;

        Console.WriteLine($"Number 1 {val1:C}");
        Console.WriteLine($"Number 2 {val2:C}");
        Console.WriteLine($"Number 3 {val3:C}");
    }
}

As you can see, we are using the standard C currency format specifier in an interpolated string. Even though we are using interpolated strings, the output is identical to the output you get if you use string.Format or pass arguments to Console.WriteLine directly. All of the following are the same:

Console.WriteLine($"Number 1 {val1:C}");  
Console.WriteLine("Number 1 {0:C}", val1);  
Console.WriteLine(string.Format("Number 1 {0:C}", val1));  

When you run the original console app, you'll get something like the following (depending on your current culture):

Number 1 £1.00  
Number 2 £12.00  
Number 3 £1,234.12  

Note that the numbers are slightly hard to read - the following is much clearer:

Number 1      £1.00  
Number 2     £12.00  
Number 3  £1,234.12  

This format is much easier to scan - you can easily see that Number 3 is significantly larger than the other numbers.

To right-align formatted strings as we have here, you can use an alignment component in your string.Format format specifiers. An alignment component specifies the total number of characters to use to format the value.

The formatter formats the number as usual, and then adds the necessary number of whitespace characters to bring the total up to the specified alignment value. You specify the alignment component after the value to format, separated by a comma. For example, the format string "{value,5}" when value=1 would give the string "    1": 1 formatted character padded with 4 spaces, 5 characters in total.
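A quick sketch of just the alignment component in action:

Console.WriteLine($"[{1,5}]");                  // prints [    1]
Console.WriteLine(string.Format("[{0,5}]", 1)); // equivalent, prints [    1]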

You can use a formatting string (such as standard values like C or custom values like dd-mmm-yyyy and ###) in combination with an alignment component. Simply place the format component after the alignment component, separated by a colon, for example "{value,10:###}". The integer after the comma is the alignment component, and the string after the colon is the format component.

So, going back to our original requirement of right aligning three currency strings, the following would do the trick, with the values previously presented:

decimal val1 = 1;  
decimal val2 = 12;  
decimal val3 = 1234.12m;

Console.WriteLine($"Number 1 {val1,10:C}");  
Console.WriteLine($"Number 2 {val2,10:C}");  
Console.WriteLine($"Number 3 {val3,10:C}");

// Number 1      £1.00
// Number 2     £12.00
// Number 3  £1,234.12

Oversized strings

Now, you may have spotted a slight issue with this alignment example. I specified that the total width of the formatted string should be 10 characters - what happens if the number is bigger than that?

In the following example, I'm formatting a long in the same way as the previous, smaller, numbers:

class Program  
{
    readonly static decimal val1 = 1;
    readonly static decimal val2 = 12;
    readonly static decimal val3 = 1234.12m;
    readonly static long _long = 999_999_999_999;

    static void Main(string[] args)
    {
        Console.OutputEncoding = System.Text.Encoding.Unicode;

        Console.WriteLine($"Number 1 {val1,10:C}");
        Console.WriteLine($"Number 2 {val2,10:C}");
        Console.WriteLine($"Number 3 {val3,10:C}");
        Console.WriteLine($"Number 3 {_long,10:C}");
    }
}

You can see the effect of this 'oversized' number below:

Number 1      £1.00  
Number 2     £12.00  
Number 3  £1,234.12  
Number 3 £999,999,999,999.00  

As you can see, when a formatted number doesn't fit in the requested alignment characters, it spills out to the right. Essentially the alignment component indicates the minimum number of characters the formatted value should occupy.

Padding left-aligned strings

You've seen how to right-align currencies, but what if the labels associated with these values were not all the same length, as in the following example:

Console.WriteLine($"A small number {val1,10:C}");  
Console.WriteLine($"A bit bigger {val2,10:C}");  
Console.WriteLine($"A bit bigger again {val3,10:C}");  

Written like this, our good work aligning the currencies is completely undone by the unequal length of our labels:

A small number      £1.00  
A bit bigger     £12.00  
A bit bigger again  £1,234.12  

Now, there's an easy way to fix the problem in this case, just manually pad with whitespace:

Console.WriteLine($"A small number     {val1,10:C}");  
Console.WriteLine($"A bit bigger       {val2,10:C}");  
Console.WriteLine($"A bit bigger again {val3,10:C}");  

But what if these labels were dynamic? In that case, we could use the same alignment component trick. Again, the integer passed to the alignment component indicates the minimum number of characters, but this time we use a negative value to indicate the values should be left aligned:

var label1 = "A small number";  
var label2 = "A bit bigger";  
var label3 = "A bit bigger again";

Console.WriteLine($"{label1,-18} {val1,10:C}");  
Console.WriteLine($"{label2,-18} {val2,10:C}");  
Console.WriteLine($"{label3,-18} {val3,10:C}");  

With this technique, when the strings are formatted, we get nicely formatted currencies and labels.

A small number          £1.00  
A bit bigger           £12.00  
A bit bigger again  £1,234.12  

Limitations

Now, there's one big limitation when it comes to using alignment components. In the previous example, we had to explicitly set the alignment component to a length of 18 characters. That feels a bit clunky.

Ideally, we'd probably prefer to do something like the following:

var maxLength = Math.Max(label1.Length, label2.Length);  
Console.WriteLine($"{label1,-maxLength} {val1,10:C}");  
Console.WriteLine($"{label2,-maxLength} {val2,10:C}");  

Unfortunately, this doesn't compile - maxLength has to be a constant. Ah well.
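One workaround (not from the original post) is to do the padding yourself before interpolating, since string.PadRight happily accepts a runtime value. Continuing with the labels and values from the example above:

var maxLength = Math.Max(label1.Length, Math.Max(label2.Length, label3.Length));

// PadRight pads each label with spaces up to maxLength, giving the same left-alignment
Console.WriteLine($"{label1.PadRight(maxLength)} {val1,10:C}");
Console.WriteLine($"{label2.PadRight(maxLength)} {val2,10:C}");
Console.WriteLine($"{label3.PadRight(maxLength)} {val3,10:C}");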

Summary

You can use alignment components in your format strings to both right-align and left-align your formatted values. This pads the formatted values with whitespace to either right-align (positive values) or left-align (negative values) the formatted value. This is particularly useful for right-aligning currencies in strings.


Darrel Miller: HTTP Pattern Index

When building HTTP based applications we are limited to a small set of HTTP methods in order to achieve the goals of our application. Once our needs go beyond simple CRUD style manipulation of resource representations, we need to be a little more creative in the way we manipulate resources in order to achieve more complex goals.

The following patterns are based on scenarios that I myself have used in production applications, or I have seen others implement. These patterns are language agnostic, domain agnostic and to my knowledge, exist within the limitations of the REST constraints.


  • Alias: A resource designed to provide a logical identifier but without being responsible for incurring the costs of transferring the representation bytes.
  • Action (coming soon): A processing resource used to convey a client's intent to invoke some kind of unsafe action on a secondary resource.
  • Bouncer: A resource designed to accept a request body containing complex query parameters and redirect to a new location, to enable the results of complex and expensive queries to be cached.
  • Builder (coming soon): A builder resource is much like a factory resource in that it is used to create another resource; however, a builder is a transient resource that enables idempotent creation and allows the client to specify values that cannot change over the lifetime of the created resource.
  • Bucket: A resource used to indicate the status of a "child" resource.
  • Discovery: A resource used to provide a client with the information it needs to be able to access other resources.
  • Factory: A factory resource is one that is used to create another resource.
  • Miniput: A resource designed to enable partial updates to another resource.
  • Progress: A progress resource is usually a temporary resource created automatically by the server to provide status on some long-running process that has been initiated by a client.
  • Sandbox (coming soon): A processing resource that is paired with a regular resource to enable making "what if" style updates and seeing what the results would have been if applied against the regular resource.
  • Toggle (coming soon): A resource that has two distinct states and can easily be switched between those states.
  • Whackamole: A type of resource that, when deleted, re-appears as a different resource.
  • Window (coming soon): A resource that provides access to a subset of a larger set of information through the use of parameters that filter, project and zoom information from the complete set.


Anuraj Parameswaran: Introduction to Razor Pages in ASP.NET Core

This post is about Razor Pages in ASP.NET Core. Razor Pages is a new feature of ASP.NET Core MVC, released with ASP.NET Core 2.0, that makes coding page-focused scenarios easier and more productive. It is another way of building applications, built on top of ASP.NET Core MVC. Razor Pages will be helpful for beginners as well as for developers coming from other web development backgrounds like PHP or classic ASP, and it fits well in small scenarios where building an application in MVC would be overkill.
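For a flavour of what this looks like, here is a minimal sketch of a Razor Page model; the file names and the Message property are illustrative:

using Microsoft.AspNetCore.Mvc.RazorPages;

// Pages/Index.cshtml.cs - the page model backing Pages/Index.cshtml
public class IndexModel : PageModel
{
    public string Message { get; private set; }

    // Handles GET requests for the page
    public void OnGet()
    {
        Message = "Hello from a Razor Page";
    }
}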


Dominick Baier: Authorization is hard! Slides and Video from NDC Oslo 2017

A while ago I wrote a controversial article about the problems that can arise when mixing authentication and authorization systems – especially when using identity/access tokens to transmit authorization data – you can read it here.

In the meanwhile Brock and I sat down to prototype a possible solution (or at least an improvement) to the problem and presented it to various customers and at conferences.

Also many people asked me for a more detailed version of my blog post – and finally there is now a recording of our talk from NDC – video here – and slides here. HTH!

 


Filed under: .NET Security, ASP.NET Core, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: Techorama 2017

Again Techorama was an awesome conference – kudos to the organizers!

Seth and Channel9 recorded my talk and also did an interview – so if you couldn’t be there in person, there are some updates about IdentityServer4 and identity in general.


Filed under: .NET Security, ASP.NET Core, IdentityServer, OAuth, OpenID Connect, WebAPI


Ben Foster: Applying IP Address restrictions in AWS API Gateway

Recently I've been exploring the features of the AWS API Gateway to see if it's a viable routing solution for some of our microservices hosted in ECS.

One of these services is a new onboarding API that we wish to make available to a trusted third party. To keep the integration as simple as possible we opted for API key based authentication.

In addition to supporting API Key authentication, API Gateway also allows you to configure plans with usage policies, which met our second requirement, to provide rate limits on this API.

As an additional level of security, we decided to whitelist the IP Addresses that could hit the API. The way you configure this is not quite what I expected since it's not a setting directly within API Gateway but instead done using IAM policies.

Below is an example API within API Gateway. I want to apply an IP Address restriction to the webhooks resource:

The first step is to configure your resource Authorization settings to use IAM. Select the resource method (in my case, ANY) and then AWS_IAM in the Authorization select list:

Next go to IAM and create a new Policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "execute-api:Invoke"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "xxx.xx.xx.xx/32"
                }
            },
            "Resource": "arn:aws:execute-api:*:*:*"
        }
    ]
}

Note that this policy allows invocation of all resources within all APIs in API Gateway from the specified IP Address. You'll want to restrict this to a specific API or resource, using the format:

arn:aws:execute-api:region:account-id:api-id/stage/METHOD_HTTP_VERB/Resource-path

It was my assumption that I would attach this policy to my API Gateway role and, hey presto, I'd have my IP restriction in place. However, the policy is instead applied to a user, who then needs to sign the request using their access keys.

This can be tested using Postman:

With this done you should now be able to test your IP address restrictions. One thing I did notice is that policy changes do not seem to take effect immediately - instead I had to disable and re-enable IAM authorization on the resource after changing my policy.

Final thoughts

AWS API Gateway is a great service but I find it odd that it doesn't support what I would class as a standard feature of API Gateways. Given that the API I was testing is only going to be used by a single client, creating an IAM user isn't the end of the world, however, I wouldn't want to do this for APIs with a large number of clients.

Finally in order to make use of usage plans you need to require an API key. This means to achieve IP restrictions and rate limiting, clients will need to send two authentication tokens which isn't an ideal integration experience.

When I first started my investigation it was based on achieving the following architecture:

Unfortunately, running API Gateway in front of ELB still requires your load balancers to be publicly accessible, which makes the security features void if a client can figure out your ELB address. It seems API Gateway is geared more towards Lambda than ELB, so it looks like we'll need to consider other options for now.


Dominick Baier: Financial APIs and IdentityServer

Right now there is quite some movement in the financial sector towards APIs and “collaboration” scenarios. The OpenID Foundation started a dedicated working group on securing Financial APIs (FAPIs) and the upcoming Revised Payment Service EU Directive (PSD2 – official document, vendor-based article) will bring quite some change to how technology is used at banks as well as to banking itself.

Googling for PSD2 shows quite a lot of ads and sponsored search results, which tells me that there is money to be made (pun intended).

We have a couple of customers that asked me about FAPIs and how IdentityServer can help them in this new world. In short, the answer is that both FAPIs in the OIDF sense and PSD2 are based on tokens and are either inspired by OpenID Connect/OAuth 2 or even tightly coupled with them. So moving to these technologies is definitely the first step.

The purpose of the OIDF “Financial API Part 1: Read-only API security profile” is to select a subset of the possible OpenID Connect options for clients and providers that have suitable security for the financial sector. Let’s have a look at some of those for OIDC providers (edited):

  • shall support both public and confidential clients;
  • shall authenticate the confidential client at the Token Endpoint using one of the following methods:
    • TLS mutual authentication [TLSM];
    • JWS Client Assertion using the client_secret or a private key as specified in section 9 of [OIDC];
  • shall require a key of size 2048 bits or larger if RSA algorithms are used for the client authentication;
  • shall require a key of size 160 bits or larger if elliptic curve algorithms are used for the client authentication;
  • shall support PKCE [RFC7636]
  • shall require Redirect URIs to be pre-registered;
  • shall require the redirect_uri parameter in the authorization request;
  • shall require the value of redirect_uri to exactly match one of the pre-registered redirect URIs;
  • shall require user authentication at LoA 2 as defined in [X.1254] or more;
  • shall require explicit consent by the user to authorize the requested scope if it has not been previously authorized;
  • shall return the token response as defined in 4.1.4 of [RFC6749];
  • shall return the list of allowed scopes with the issued access token;
  • shall provide opaque non-guessable access tokens with a minimum of 128 bits as defined in section 5.1.4.2.2 of [RFC6819].
  • should provide a mechanism for the end-user to revoke access tokens and refresh tokens granted to a Client as in 16.18 of [OIDC].
  • shall support the authentication request as in Section 3.1.2.1 of [OIDC];
  • shall issue an ID Token in the token response when openid was included in the requested scope as in Section 3.1.3.3 of [OIDC] with its sub value corresponding to the authenticated user and optional acr value in ID Token.

So to summarize, these are mostly best practices for implementing OIDC and OAuth 2 – just formalized. I am sure there will also be a certification process around that at some point.

It is interesting to note the requirement for PKCE and the removal of plain client secrets in favour of mutual TLS and client JWT assertions. IdentityServer supports all of the above requirements.
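
As a rough sketch of how some of these requirements map onto IdentityServer4 configuration – the client id, redirect URI and scope name below are made up for illustration, and this is not an official FAPI profile – a client definition along these lines covers the pre-registered redirect URI, PKCE and explicit consent points:

using System.Collections.Generic;
using IdentityServer4;
using IdentityServer4.Models;

public static class Clients
{
    public static IEnumerable<Client> Get() =>
        new List<Client>
        {
            new Client
            {
                // hypothetical client id, for illustration only
                ClientId = "fapi.readonly.sample",

                // authorization code flow with PKCE required
                AllowedGrantTypes = GrantTypes.Code,
                RequirePkce = true,

                // redirect URIs are pre-registered and matched exactly
                RedirectUris = { "https://client.example.com/signin-oidc" },

                // explicit consent for the requested scopes
                RequireConsent = true,

                AllowedScopes =
                {
                    IdentityServerConstants.StandardScopes.OpenId,
                    "accounts.read" // hypothetical API scope
                }
            }
        };
}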

In contrast, the “Read and Write Profile” (currently a working draft) steps up security significantly by demanding proof of possession tokens via token binding, requiring signed authentication requests and encrypted identity tokens, and limiting the authentication flow to hybrid only. The current list from the draft:

  • shall require the request or request_uri parameter to be passed as a JWS signed JWT as in clause 6 of OIDC;
  • shall require the response_type values code id_token or code id_token token;
  • shall return ID Token as a detached signature to the authorization response;
  • shall include state hash, s_hash, in the ID Token to protect the state value;
  • shall only issue holder of key authorization code, access token, and refresh token for write operations;
  • shall support OAUTB or MTLS as a holder of key mechanism;
  • shall support user authentication at LoA 3 or greater as defined in X.1254;
  • shall support signed and encrypted ID Tokens

Both profiles also have increased security requirements for clients – which is the subject of a future post.

In short – exciting times ahead and we are constantly improving IdentityServer to make it ready for these new scenarios. Feel free to get in touch if you are interested.


Filed under: .NET Security, ASP.NET Core, IdentityServer, OAuth, OpenID Connect, Uncategorized, WebAPI


Dominick Baier: dotnet new Templates for IdentityServer4

The dotnet CLI includes a templating engine that makes it pretty straightforward to create your own project templates (see this blog post for a good intro).

This new repo is the home for all IdentityServer4 templates to come – right now they are pretty basic, but good enough to get you started.

The repo includes three templates right now:

dotnet new is4

Creates a minimal IdentityServer4 project without a UI – just one API and one client.

dotnet new is4ui

Adds the quickstart UI to the current project (can be combined with is4).

dotnet new is4inmem

Adds a boilerplate IdentityServer with UI, test users, and sample clients and resources.

See the readme for installation instructions.



Filed under: .NET Security, ASP.NET Core, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: New in IdentityServer4: Events

Well – not really new – but redesigned.

IdentityServer4 has two diagnostic facilities – logging and events. While logging is more low-level, “printf”-style output, events represent higher-level information about certain logical operations in IdentityServer (think Windows security event log).

Events are structured data and include event IDs, success/failure information, activity IDs, IP addresses, categories and event-specific details. This makes it easy to query and analyze them and extract useful information that can be used for further processing.

Events work great with event stores like ELK, Seq or Splunk.
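
As a minimal sketch of what the wiring looks like in code – assuming the IEventSink extension point and the events options, with a console sink standing in for a real Seq/ELK/Splunk shipper – you opt in to the event types you want raised and register a custom sink:

using System;
using System.Threading.Tasks;
using IdentityServer4.Events;
using IdentityServer4.Services;
using Microsoft.Extensions.DependencyInjection;

// Receives every raised event; a real implementation would forward the
// structured event to your event store of choice.
public class ConsoleEventSink : IEventSink
{
    public Task PersistAsync(Event evt)
    {
        Console.WriteLine($"{evt.TimeStamp:o} [{evt.Category}] {evt.Name} ({evt.EventType}, id {evt.Id}, activity {evt.ActivityId})");
        return Task.CompletedTask;
    }
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddIdentityServer(options =>
        {
            // opt in to the event types you care about
            options.Events.RaiseSuccessEvents = true;
            options.Events.RaiseFailureEvents = true;
            options.Events.RaiseErrorEvents = true;
        });
        // clients, resources, signing credential etc. omitted

        // registered after AddIdentityServer so it takes precedence over the default sink
        services.AddTransient<IEventSink, ConsoleEventSink>();
    }
}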


Find more details in our docs.


Filed under: ASP.NET Core, IdentityServer, OAuth, OpenID Connect, Uncategorized, WebAPI

