Anuraj Parameswaran: Yammer external login setup with ASP.NET Core

This post shows you how to enable your users to sign in with their Yammer account. Similar to the other social networks, the authentication is an OAuth 2 flow, beginning with the user authenticating with their Yammer credentials. The user then authorizes your app to connect to their Yammer network. The end result is a token that your app will use to write events to Yammer and retrieve Yammer data.
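
A minimal sketch of what that wiring might look like with ASP.NET Core's generic OAuth handler is shown below. The configuration keys, callback path and Yammer endpoint URLs are assumptions for illustration, not values taken from the original post - check the post and the Yammer developer documentation for the real details.

// Sketch only: registering Yammer as a generic OAuth 2.0 provider.
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddAuthentication(options =>
            {
                options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
                options.DefaultChallengeScheme = "Yammer";
            })
            .AddCookie()
            .AddOAuth("Yammer", options =>
            {
                options.ClientId = Configuration["Yammer:ClientId"];       // from your Yammer app registration
                options.ClientSecret = Configuration["Yammer:ClientSecret"];
                options.CallbackPath = "/signin-yammer";                    // local callback path registered with Yammer
                options.AuthorizationEndpoint = "https://www.yammer.com/dialog/oauth";   // assumed endpoint
                options.TokenEndpoint = "https://www.yammer.com/oauth2/access_token";    // assumed endpoint
                options.SaveTokens = true;                                  // keep the access token for later Yammer API calls
            });

        services.AddMvc();
    }
}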


Anuraj Parameswaran: Working with Entity Framework Core - Hybrid Approach

Recently I started working on a Dictionary Web API, which translates English to Malayalam (my native language). I found the word definitions database as a CSV file and, by running the Import Data wizard in SQL Server, created a SQL Server database with the definitions. The definitions table contains thousands of rows, so I don't want to create it and insert the data by hand; instead, I want to use the Database First approach for creating the entity. So here is the command which builds the DbContext and POCO classes from the existing database.
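
The command itself isn't included in this excerpt, but the EF Core database-first scaffolding command that generates a DbContext and POCO classes from an existing database generally looks like the following. The connection string and output folder here are placeholders, not the values from the post.

dotnet ef dbcontext scaffold "Server=(localdb)\mssqllocaldb;Database=Dictionary;Trusted_Connection=True;" Microsoft.EntityFrameworkCore.SqlServer -o Models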


Andrew Lock: Setting ASP.NET Core version numbers for a Docker ONBUILD builder image

Setting ASP.NET Core version numbers for a Docker ONBUILD builder image

In a previous post, I showed how you can create NuGet packages when you build your app in Docker using the .NET Core CLI. As part of that, I showed how to set the version number for the package using MSBuild command-line switches.

That works well when you're directly calling dotnet build and dotnet pack yourself, but what if you want to perform those tasks in a "builder" Dockerfile, like I showed previously. In those cases you need to use a slightly different approach, which I'll describe in this post.

I'll start with a quick recap on using an ONBUILD builder, and how to set the version number of an app, and then I'll show the solution for how to combine the two. In particular, I'll show how to create a builder and a "downstream" app's Dockerfile where

  • Calling docker build with --build-arg Version=0.1.0 on your app's Dockerfile, will set the version number for your app in the builder image
  • You can provide a default version number in your app's Dockerfile, which is used if you don't provide a --build-arg
  • If the downstream image does not set the version, the builder Dockerfile uses a default version number.

Previous posts in this series:

Using ONBUILD to create builder images

The ONBUILD command allows you to specify a command that should be run when a "downstream" image is built. This can be used to create "builder" images that specify all the steps to build an application or library, reducing the boilerplate in your application's Dockerfile.

For example, in a previous post I showed how you could use ONBUILD to create a generic ASP.NET Core builder Dockerfile, reproduced below:

# Build image
FROM microsoft/aspnetcore-build:2.0.7-2.1.105 AS builder  
WORKDIR /sln

ONBUILD COPY ./*.sln ./NuGet.config  ./

# Copy the main source project files
ONBUILD COPY src/*/*.csproj ./  
ONBUILD RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done

# Copy the test project files
ONBUILD COPY test/*/*.csproj ./  
ONBUILD RUN for file in $(ls *.csproj); do mkdir -p test/${file%.*}/ && mv $file test/${file%.*}/; done 

ONBUILD RUN dotnet restore

ONBUILD COPY ./test ./test  
ONBUILD COPY ./src ./src  
ONBUILD RUN dotnet build -c Release --no-restore

ONBUILD RUN find ./test -name '*.csproj' -print0 | xargs -L1 -0 dotnet test -c Release --no-build --no-restore  

By basing your app Dockerfile on this image (in the FROM statement), your application would be automatically restored, built and tested, without you having to include those steps yourself. Instead, your app image could be very simple, for example:

# Build image
FROM andrewlock/aspnetcore-build:2.0.7-2.1.105 as builder

# Publish
RUN dotnet publish "./AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj" -c Release -o "../dist" --no-restore

#App image
FROM microsoft/aspnetcore:2.0.7  
WORKDIR /app  
ENV ASPNETCORE_ENVIRONMENT Local  
ENTRYPOINT ["dotnet", "AspNetCoreInDocker.Web.dll"]  
COPY --from=builder /sln/dist .  

Setting the version number when building your application

You often want to set the version number of a library or application when you build it - you might want to record the app version in log files when it runs for example. Also, when building NuGet packages you need to be able to set the package version number. There are a variety of different version numbers available to you (as I discussed in a previous post), all of which can be set from the command line when building your application.

In my last post I described how to set version numbers using MSBuild switches. For example, to set the Version MSBuild property when building (which, when set, updates all the other version numbers of the assembly) you could use the following command:

dotnet build /p:Version=0.1.2-beta -c Release --no-restore  

Setting the version in this way is the same whether you're running it from the command line, or in Docker. However, in your Dockerfile, you will typically want to pass the version to set as a build argument. For example, the following command:

docker build --build-arg Version="0.1.0" .  

could be used to set the Version property to 0.1.0 by using the ARG command, as shown in the following Dockerfile:

FROM microsoft/dotnet:2.0.3-sdk AS builder

ARG Version  
WORKDIR /sln

COPY . .

RUN dotnet restore  
RUN dotnet build /p:Version=$Version -c Release --no-restore  
RUN dotnet pack /p:Version=$Version -c Release --no-restore --no-build  

Using ARGs in a parent Docker image that uses ONBUILD

The two techniques described so far work well in isolation, but getting them to play nicely together requires a little bit more work. The initial problem is to do with the way Docker treats builder images that use ONBUILD.

To explore this, imagine you have the following, simple, builder image, tagged as andrewlock/testbuild:

FROM microsoft/aspnetcore-build:2.0.7-2.1.105 AS builder  
WORKDIR /sln

ONBUILD COPY ./test ./test  
ONBUILD COPY ./src ./src

ONBUILD RUN dotnet build -c Release  

Warning: This Dockerfile has no optimisations, don't use it for production!

As a first attempt, you might try just adding the ARG command to your downstream image, and passing the --build-arg in. The following is a very simple Dockerfile that uses the builder, and accepts an argument.

# Build image
FROM andrewlock/testbuild as builder

ARG Version

# Publish
RUN dotnet publish "./AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj" -c Release -o "../dist" --no-restore  

Calling docker build --build-arg Version="0.1.0" . will build the image and set the $Version parameter in the downstream Dockerfile to 0.1.0, but that value won't be used in the builder Dockerfile at all, so it would only be useful if, for example, you're running dotnet pack in your downstream image.

Instead, you can use a couple of different characteristics about Dockerfiles to pass values up from your downstream app's Dockerfile to the builder Dockerfile.

  • Any ARG defined before the first FROM is "global", so it's not tied to a builder stage. Any stage that wants to use it still needs to declare its own ARG command
  • You can provide default values to ARG commands using the format ARG value=default
  • You can combine ONBUILD with ARG

Let's combine all these features and create our new builder image.

A builder image that supports setting the version number

I've cut to the chase a bit here - needless to say I spent a while fumbling around, trying to get the Dockerfiles doing what I wanted. The solution shown in this post is based on the excellent description in this issue.

The annotated builder image is as follows. I've included comments in the file itself, rather than breaking it down afterwards. As before, this is a basic builder image, just to demonstrate the concept. For a Dockerfile with all the optimisations see my builder image on Dockerhub.

FROM microsoft/aspnetcore-build:2.0.7-2.1.105 AS builder  

# This defines the `ARG` inside the build-stage (it will be executed after `FROM`
# in the child image, so it's a new build-stage). Don't set a default value so that
# the value is set to what's currently set for `BUILD_VERSION`
ONBUILD ARG BUILD_VERSION

# If BUILD_VERSION is set/non-empty, use it, otherwise use a default value
ONBUILD ARG VERSION=${BUILD_VERSION:-1.0.0}

WORKDIR /sln

ONBUILD COPY ./test ./test  
ONBUILD COPY ./src ./src

ONBUILD RUN dotnet build -c Release /p:Version=$VERSION  

I've actually defined two arguments here, BUILD_VERSION and VERSION. We do this to ensure that we can set a default version in the builder image, while also allowing you to override it from the downstream image or by using --build-arg.

Those two additional ONBUILD ARG lines are all you need in your builder Dockerfile. You need to either update your downstream app's Dockerfile as shown below, or use --build-arg to set the BUILD_VERSION argument for the builder to use.

If you want to set the version number with --build-arg

If you just want to provide the version number as a --build-arg value, then you don't need to change your downstream image. You could use the following:

FROM andrewlock/testbuild as builder  
RUN dotnet publish "./AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj" -c Release -o "../dist" --no-restore  

And then set the version number when you build:

docker build --build-arg BUILD_VERSION="0.3.4-beta" .  

That would pass the BUILD_VERSION value up to the builder image, which would in turn pass it to the dotnet build command, setting the Version property to 0.3.4-beta.

If you don't provide the --build-arg argument, the builder image will use its default value (1.0.0) as the build number.

Note that this will overwrite any version number you've set in your csproj files, so this approach is only useful if you're relying on a CI process to set your version numbers.

If you want to set a default version number in your downstream Dockerfile

If you want to have the version number of your app checked in to source, then you can set a version number in your downstream Dockerfile. Set the BUILD_VERSION argument before the first FROM command in your app's Dockerfile:

ARG BUILD_VERSION=0.2.3  
FROM andrewlock/testbuild as builder  
RUN dotnet publish "./AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj" -c Release -o "../dist" --no-restore  

Running docker build . on this file will ensure that the libraries built in the builder file have a version of 0.2.3.

If you wish to override this at build time, you can simply pass in the build argument as before:

docker build --build-arg BUILD_VERSION="0.3.4-beta" .  

And there you have it! ONBUILD playing nicely with ARG. If you decide to adopt this pattern in your builder images, just be aware that you will no longer be able to change the version number by setting it in your csproj files.

Summary

In this post I described how you can use ONBUILD and ARG to dynamically set version numbers for your .NET libraries when you're using a generalised builder image. For an alternative description (and the source of this solution), see this issue on GitHub and the provided examples.


Dominick Baier: The State of HttpClient and .NET Multi-Targeting

IdentityModel is a library that uses HttpClient internally – it should also run on all recent versions of the .NET Framework and .NET Core.

HttpClient is sometimes “built-in”, e.g. in the .NET Framework, and sometimes not, e.g. in .NET Core 1.x. So fundamentally there is a “GAC version” and a “NuGet version” of the same type.

We had lots of issues with this because it seemed that regardless of which combination of HttpClient flavours you used, it would lead to a problem one way or another (see the GitHub issues). Additional confusion came from the fact that the .NET tooling had certain bugs in the past that needed workarounds, which led to other problems when those bugs were fixed in later tooling.

Long story short – every time I had to change the csproj file, it broke someone. The latest issue was related to Powershell and .NET 4.7.x (see here).

I wanted an official statement, once and for all, on how to deal with HttpClient – so I reached out to Immo (@terrajobst) over various channels. It turns out I was not alone with this problem.


Despite him being on holidays during that time, he gave a really elaborate answer that contains both excellent background information and guidance.

I thought I should copy it here, so it becomes more search engine friendly and hopefully helps out other people that are in the same situation (original thread here).

“Alright, let me try to answer your question. It will probably have more detail than you need/asked for, but it might be helpful to start with intention/goals and then the status quo. HttpClient started out as a NuGet package (out-of-band) and was added to the .NET Framework in 4.5 as well (in-box).

With .NET Core/.NET Standard we originally tried to model the .NET platform as a set of packages where being in-box vs. out-of-band no longer mattered. However, this was messier and more complicated than we anticipated.

As a result, we largely abandoned the idea of modeling the .NET platform as a NuGet graph with Core/Standard 2.0.

With .NET Core 2.0 and .NET Standard 2.0 you shouldn’t need to reference the System.Net.Http NuGet package at all. It might get pulled in from 1.x dependencies though.

Same goes for .NET Framework: if you target 4.5 and up, you should generally use the in-box version instead of the NuGet package. Again, you might end up pulling it in for .NET Standard 1.x and PCL dependencies, but code written directly against .NET Framework shouldn’t use it.

So why does the package still exist/why do we still update it? Simply because we want to make existing code work that took a dependency on it. However, as you discovered that isn’t smooth sailing on .NET Framework.

The intended model for the legacy package is: if you consume the package from .NET Framework 4.5+, .NET Core 2+, or .NET Standard 2+, the package only forwards to the platform-provided implementation as opposed to bringing its own version.

That’s not what actually happens in all cases though: the HttpClient package will (partially) replace in-box components on .NET Framework, which happens to work for some customers and fails for others. Thus, we cannot easily fix the issue now.

On top of that we have the usual binding issues with the .NET Framework so this only really works well if you add binding redirects. Yay!

So, as a library author my recommendation is to avoid taking a dependency on this package and prefer the in-box versions in .NET Framework 4.5, .NET Core 2.0 and .NET Standard 2.0.

Thanks Immo!


Andrew Lock: Creating NuGet packages in Docker using the .NET Core CLI

Creating NuGet packages in Docker using the .NET Core CLI

This is the next post in a series on building ASP.NET Core apps in Docker. In this post, I discuss how you can create NuGet packages when you build your app in Docker using the .NET Core CLI.

There's nothing particularly different about doing this in Docker compared to another system, but there are a couple of gotchas with versioning you can run into if you're not careful.

Previous posts in this series:

Creating NuGet packages with the .NET CLI

The .NET Core SDK and new "SDK style" .csproj format makes it easy to create NuGet packages from your projects, without having to use NuGet.exe, or mess around with .nuspec files. You can use the dotnet pack command to create a NuGet package by providing the path to a project file.

For example, imagine you have a library in your solution you want to package:

Creating NuGet packages in Docker using the .NET Core CLI

You can pack this project by running the following command from the solution directory - the .csproj file is found and a NuGet package is created. I've used the -c switch to ensure we're building in Release mode:

dotnet pack ./src/AspNetCoreInDocker -c Release  

By default, this command runs dotnet restore and dotnet build before producing the final NuGet package, in the bin folder of your project:

Creating NuGet packages in Docker using the .NET Core CLI

If you've been following along with my previous posts, you'll know that when you build apps in Docker, you should think carefully about the layers that are created in your image. In previous posts I described how to structure your projects so as to take advantage of this layer caching. In particular, you should ensure the dotnet restore happens early in the Docker layers, so that it is cached for subsequent builds.

You will typically run dotnet pack at the end of a build process, after you've confirmed all the tests for the solution pass. At that point, you will have already run dotnet restore and dotnet build, so running them again is unnecessary. Luckily, dotnet pack includes switches to skip those steps:

dotnet pack ./src/AspNetCoreInDocker -c Release --no-build --no-restore  

If your solution has multiple projects that you want to package, you can pass in the path to a solution file, or just call dotnet pack in the solution directory:

dotnet pack -c Release --no-build --no-restore  

This will attempt to package all projects in your solution. If you don't want to package a particular project, you can add <IsPackable>false</IsPackable> to the project's .csproj file. For example:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <IsPackable>false</IsPackable>
  </PropertyGroup>

</Project>  

That's pretty much all there is to it. You can add this command to the end of your Dockerfile, and NuGet packages will be created for all your packable projects. There's one major point I've left out with regard to creating packages - setting the version number.

Setting the version number for your NuGet packages

Version numbers seem to be a continual bugbear of .NET; ASP.NET Core has gone through so many numbering iterations and mis-aligned versions that it can be hard for newcomers to figure out what's going on.

Sadly, the same is almost true when it comes to versioning of your .NET Project dlls. There are no less than seven different version properties you can apply to your project. Each of these has slightly different rules, and meaning, as I discussed in a previous post.

Luckily, you can typically get away with only worrying about one: Version.

As I discussed in my previous post, the MSBuild Version property is used as the default value for the various version numbers that are embedded in your assembly: AssemblyVersion, FileVersion, and InformationalVersion, as well as the NuGet PackageVersion when you pack your library. When you're building NuGet packages to share with other applications, you will probably want to ensure that these values are all updated.

Creating NuGet packages in Docker using the .NET Core CLI

There are two primary ways you can set the Version property for your project:

  • Set it in your .csproj file
  • Provide it at the command line when you dotnet build your app.

Which you choose is somewhat a matter of preference - if you set it in your .csproj, then the version number is checked into source code and will be picked up automatically by the .NET CLI. However, be aware that if you're building in Docker (and have been following my optimisation series), then updating the .csproj will break your layer cache, so you'll get a slower build immediately after bumping the version number. For example:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <Version>0.1.0</Version>
  </PropertyGroup>

</Project>  

One reason to provide the Version number on the command line is if your app version comes from a CI build. If you create a NuGet package in AppVeyor/Travis/Jenkins with every checkin, then you might want your version numbers to be provided by the CI system. In that case, the easiest approach is to set the version at runtime.

In principle, setting the Version just requires passing the correct argument to set the MSBuild property when you call dotnet:

RUN dotnet build /p:Version=0.1.0 -c Release --no-restore  
RUN dotnet pack /p:Version=0.1.0 -c Release --no-restore --no-build  

However, if you're using a CI system to build your NuGet packages, you need some way of updating the version number in the Dockerfile dynamically. There's several ways you could do this, but one way is to use a Docker build argument.

Build arguments are values passed in when you call docker build. For example, I could pass in a build argument called Version when building my Dockerfile using:

docker build --build-arg Version="0.1.0" .  

Note that as you're providing the version number on the command line when you call docker build you can pass in a dynamic value, for example an Environment Variable set by your CI system.

In order for your Dockerfile to use the provided build argument, you need to declare it using the ARG instruction:

ARG Version  

To put that into context, the following is a very basic Dockerfile that uses a version provided via --build-arg when building the app:

FROM microsoft/dotnet:2.0.3-sdk AS builder

ARG Version  
WORKDIR /sln

COPY . .

RUN dotnet restore  
RUN dotnet build /p:Version=$Version -c Release --no-restore  
RUN dotnet pack /p:Version=$Version -c Release --no-restore --no-build  

Warning: This Dockerfile is VERY basic - don't use it for anything other than as an example of using ARG!

After building this Dockerfile you'll have an image that contains the NuGet packages for your application. It's then just a case of using dotnet nuget push to publish your package to a NuGet server. I won't go into details on how to do that in this post, so check the documentation for details.
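
As a rough illustration (the package path, API key and feed URL below are placeholders), pushing the generated package might look something like this:

dotnet nuget push ./src/AspNetCoreInDocker/bin/Release/AspNetCoreInDocker.0.1.0.nupkg --api-key <your-api-key> --source https://api.nuget.org/v3/index.json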

Summary

Building NuGet packages in Docker is much like building them anywhere else with dotnet pack. The main things you need to take into account are optimising your Dockerfile to take advantage of layer caching, and how to set the version number for the generated packages. In this post I described how to use the --build-arg argument to update the Version property at build time, to give the smallest possible effect on your build cache.


Anuraj Parameswaran: Code coverage in .NET Core with Coverlet

A few days back I wrote a post about code coverage in ASP.NET Core. In that post I was using Visual Studio 2017 Enterprise, which doesn't support Linux or Mac and is costly. Later I found an alternative: Coverlet. Coverlet is a cross-platform code coverage library for .NET Core, with support for line, branch and method coverage. Coverlet integrates with the MSBuild system, so it doesn't require any additional setup other than including the NuGet package in the unit test project. It integrates with the dotnet test infrastructure built into the .NET Core CLI and, when enabled, will automatically generate coverage results after tests are run.
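
As a quick illustration of that MSBuild integration (the test project path below is a placeholder), enabling coverage is typically just a case of adding the coverlet.msbuild package and passing the CollectCoverage property to dotnet test:

dotnet add ./test/MyApp.Tests/MyApp.Tests.csproj package coverlet.msbuild
dotnet test ./test/MyApp.Tests/MyApp.Tests.csproj /p:CollectCoverage=true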


Ali Kheyrollahi: CacheCow 2.0 is here - now supporting .NET Standard and ASP.NET Core MVC


CacheCow 2.0 Series:

  • Part 1 - CacheCow 2.0 is here - supporting .NET Standard and ASP.NET Core MVC [This post]
  • Part 2 - CacheCow.Client 2.0: HTTP Caching for your API calls
  • Part 3 - CacheCow.Server for ASP.NET Core MVC [Coming Soon]
  • Part 4 - CacheCow.Server for ASP.NET Web API [Coming Soon]
  • Epilogue: side-learnings from supporting Core [Coming]

    So, no CacheCore in the end!

    Yeah. I did announce last year that the new updated CacheCow will live under the name CacheCore. The more I worked on it, the more it became evident that only a tiny amount of CacheCow will ever be Core-related. And frankly trends come and go, while HTTP Caching is pretty much unchanged for the last 20 years.

    So the name CacheCow lives on, although in the end what matters for a library is whether it can solve any of your problems. I hope it will, and carries on doing so. Now you can use CacheCow.Client with .NET 4.5.2+ and .NET Standard 2.0+. CacheCow.Server also supports both Web API and ASP.NET Core MVC - and possibly Nancy soon!

    CacheCow 2.0 has lots of documentation and the project now has 3 sample projects covering both client and server sides in the same project.

    CacheCow.Server has changed radically

    The design of the server-side of CacheCow 0.x and 1.x was based on the assumption that your API is a pure RESTful API and the data only changes through calls to its endpoints, so the API layer gets to see all changes to its underlying resources. The more I explored over the years, the more this turned out to be a pretty big assumption, realistic only in REST La La Land - a big learning for me. And even if the assumption holds, the relationships between resources made server-side cache directive management a mammoth task. For example, in the familiar scenario of customer-product-orders, if an order changes, the cache for the collection of orders is now invalidated - hence the API needs to understand which resource is a collection of which. What is more, a change in a customer could change the order data (depending on implementation of course, but take it for the sake of argument). So the API now has to know a lot more: single vs collection resources, relationships between resources... it was a slippery slope to a very bad place.

    With that assumption removed, the responsibility now lies with the back-end stores which provide data for the resources - they will be queried by a couple of constructs added to CacheCow.Server. If you opt to implement that part for your API, then you have a super-efficient API. If not, there are some defaults there to do the work for you - although not super-optimal. All of this will be explained in the CacheCow.Server posts, but the point is that CacheCow.Server is now a clean abstraction for HTTP Caching, as clean as I could make it. Judge for yourself.

    What is HTTP Caching?

    Caching is a very familiar notion in programming and pretty much every developer uses it on a regular basis. This familiarity has a downside, since HTTP Caching is more complex and in many ways different from the routine caching we do in code - hence it is very common to see misunderstandings even amongst senior developers. If you ask an average developer this question: "In HTTP Caching, where does the cache data get stored?" it is probably more likely you will hear the wrong answer "server" than the correct answer "client". In fact, many developers look to improve their server-side code's performance by turning on caching on the server, while if the callers ignore the caching directives it will not result in any benefit.

    This reminds me of a blog post I wrote 6 years ago where I used HTTP Caching as an example of a mixed-concern (as opposed to a server-concern or client-concern) where "For HTTP caching to work, client and server need to work in tandem". This is a key difference from the usual caching scenarios seen every day. What makes HTTP Caching even more complex is the concurrency primitives, built in starting with HTTP 1.1 - we will look into those below.

    I know HTTP Caching is hardly new and has been explained many times before. But considering the number of times I have seen it completely misunderstood, I think it deserves your 5-10 minutes - even if only as a refresher.


    Resources vs. Representations

    REST advocates exposing services through a uniform API (where HTTP is one such implementation) allowing resources to be created, modified and queried using the API. A resource is addressed by its location identifier or URL (e.g. /api/car/123). When a client requests a resource, only a representation of the resource is sent back. This means that the client receives only one representation out of many possible representations. It also means that when the client caches a representation, that cache entry is only valid if the representation requested matches the one cached. And finally, a client might cache different representations of the same resource. But what does all of this mean?

    HTTP GET - the server serves a representation of the resource, and also sends cache directives.
    A resource could be represented differently in terms of format, encoding, language and other presentation concerns. HTTP provides semantic for the client to express its preferences in such concerns with headers such as Accept, Accept-Language and Accept-Encoding. There could be other headers that can result in alternative representations. The server is responsible for returning the definitive list of such headers in the Vary header.

    Cache Directives

    The server is responsible for returning cache directives along with the representation. The Cache-Control header is the de facto cache directive, defining whether the representation can be cached, for how long, whether by the end client or also by the HTTP intermediaries/proxies, etc. HTTP 1.0 had the simple Expires header which only defined the absolute expiry time of the representation.

    You could also think of other cache-related headers as cache directives (although strictly speaking they are not), such as ETag, Last-Modified and Vary.
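
    For example, a response might carry directives like the following (the values here are purely illustrative):

    Cache-Control: private, max-age=60
    ETag: "abc123"
    Vary: Accept, Accept-Encoding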

    Resource Version Identifiers (Validators)

    HTTP 1.1 defines ETag as an opaque identifier which defines the version of the resource. An ETag (or EntityTag) can be strong or weak. Normally a strong ETag identifies the version of the representation while a weak ETag is only at the resource level.

    Last-Modified header was the main validator in HTTP 1.0 but since it is based on a date with up-to-a-second precision, it is not suitable for achieving high consistency since a resource could change multiple times in a second.

    CacheCow supports both validators (ETag and Last-Modified) and combines these two notions in the construct TimedETag.

    Validating (conditional) HTTP Calls

    A GET call can ask the server for the resource on the condition that the resource has been modified with respect to its validator. In this case, the client sends ETag(s) in the If-None-Match header or the Last-Modified date in the If-Modified-Since header. If the validation matches and no change was made, the server returns status 304; otherwise the resource is sent back.

    For a PUT (or DELETE) call, the client sends validators in If-Match or If-Unmodified-Since. The server performs the action if validation succeeds; otherwise status 412 is sent back.
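
    As a rough sketch of what a validating GET looks like from client code (the URL and ETag value are placeholders, and in practice a library such as CacheCow.Client manages this for you):

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class ConditionalGetExample
    {
        static async Task Main()
        {
            var client = new HttpClient();

            var request = new HttpRequestMessage(HttpMethod.Get, "https://example.org/api/car/123");
            // Send the previously cached ETag; the server compares it against the current version
            request.Headers.IfNoneMatch.Add(new EntityTagHeaderValue("\"abc123\""));

            var response = await client.SendAsync(request);

            if (response.StatusCode == HttpStatusCode.NotModified)
            {
                // 304 - the cached representation is still valid, keep using it
                Console.WriteLine("Not modified, use the cached copy");
            }
            else
            {
                // A new representation (and usually a new ETag) was returned - update the cache
                Console.WriteLine($"New representation, status {response.StatusCode}");
            }
        }
    }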

    Consistency

    The client normally caches representations for longer than the expiry, and after the expiry it resorts to validating calls; if they succeed, it can carry on using the representations.

    In fact the server can return representations with immediate expiry, forcing the client to validate every time before using the cached resource. This scenario can be called High-Consistency caching since it ensures the client always uses the most recent version.

    Is HTTP Caching suitable for my scenario?

    Consider using HTTP Caching if:
    • Both your client and server are cache-aware. The client is either a browser, which is the ultimate HTTP machine well capable of handling cache directives, or a client that understands caching, such as HttpClient + CacheCow.Client.
    • You need High-Consistency caching and cannot afford for clients to use outdated data
    • Saving on network bandwidth is important

    HTTP Caching is unsuitable for you if:
    • Your client does not understand/implement HTTP caching
    • The server is unable to provide cache directives


    In the next post, we will look into CacheCow.Client.


    Damien Bowden: Uploading and sending image messages with ASP.NET Core SignalR

    This article shows how images can be uploaded using a file upload with an HTML form in an ASP.NET Core MVC view, and then sent to application clients using SignalR. The images are uploaded as an ICollection of IFormFile objects, and sent to the SignalR clients as a base64 string. Angular is used to implement the SignalR clients.

    Code https://github.com/damienbod/AspNetCoreAngularSignalR

    Posts in this series

    SignalR Server

    The SignalR Hub is really simple. This implements a single method which takes an ImageMessage type object as a parameter.

    using System.Threading.Tasks;
    using AspNetCoreAngularSignalR.Model;
    using Microsoft.AspNetCore.SignalR;
    
    namespace AspNetCoreAngularSignalR.SignalRHubs
    {
        public class ImagesMessageHub : Hub
        {
            public Task ImageMessage(ImageMessage file)
            {
                return Clients.All.SendAsync("ImageMessage", file);
            }
        }
    }
    

    The ImageMessage class has two properties, one for the image byte array, and a second for the image information, which is required so that the client application can display the image.

    public class ImageMessage
    {
    	public byte[] ImageBinary { get; set; }
    	public string ImageHeaders { get; set; }
    }
    

    In this example, SignalR is added to the ASP.NET Core application in the Startup class, but this could also be done directly on the Kestrel server. The AddSignalR middleware is added in ConfigureServices, and then each Hub is mapped explicitly to a URL in the Configure method.

    public void ConfigureServices(IServiceCollection services)
    {
    	...
    	
    	services.AddTransient<ValidateMimeMultipartContentFilter>();
    
    	services.AddSignalR()
    	  .AddMessagePackProtocol();
    
    	services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
    }
    
    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
    	...
    
    	app.UseSignalR(routes =>
    	{
    		routes.MapHub<ImagesMessageHub>("/zub");
    	});
    
    	app.UseMvc(routes =>
    	{
    		routes.MapRoute(
    			name: "default",
    			template: "{controller=FileClient}/{action=Index}/{id?}");
    	});
    }
    
    

    A File Upload ASP.NET Core MVC controller is implemented to support the file upload. The SignalR IHubContext interface for the ImagesMessageHub type is added per dependency injection. When files are uploaded, the IFormFile collection which contains the images is read into memory and sent as a byte array to the SignalR clients. Maybe this could be optimized.

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Threading.Tasks;
    using AspNetCoreAngularSignalR.Model;
    using AspNetCoreAngularSignalR.SignalRHubs;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.AspNetCore.SignalR;
    using Microsoft.Net.Http.Headers;
    
    namespace AspNetCoreAngularSignalR.Controllers
    {
        [Route("api/[controller]")]
        public class FileUploadController : Controller
        {
            private readonly IHubContext<ImagesMessageHub> _hubContext;
    
            public FileUploadController(IHubContext<ImagesMessageHub> hubContext)
            {
                _hubContext = hubContext;
            }
    
            [Route("files")]
            [HttpPost]
            [ServiceFilter(typeof(ValidateMimeMultipartContentFilter))]
            public async Task<IActionResult> UploadFiles(FileDescriptionShort fileDescriptionShort)
            {
                if (ModelState.IsValid)
                {
                    foreach (var file in fileDescriptionShort.File)
                    {
                        if (file.Length > 0)
                        {
                            using (var memoryStream = new MemoryStream())
                            {
                                await file.CopyToAsync(memoryStream);
    
                                var imageMessage = new ImageMessage
                                {
                                    ImageHeaders = "data:" + file.ContentType + ";base64,",
                                    ImageBinary = memoryStream.ToArray()
                                };
    
                                await _hubContext.Clients.All.SendAsync("ImageMessage", imageMessage);
                            }
                        }
                    }
                }
    
                return Redirect("/FileClient/Index");
            }
        }
    }
    
    
    

    SignalR Angular Client

    The Angular SignalR client uses the HubConnection to receive ImageMessage messages. Each message is pushed to the client array which is used to display the images. The @aspnet/signalr npm package is required to use the HubConnection.

    import { Component, OnInit } from '@angular/core';
    import { HubConnection } from '@aspnet/signalr';
    import * as signalR from '@aspnet/signalr';
    
    import { ImageMessage } from '../imagemessage';
    
    @Component({
        selector: 'app-images-component',
        templateUrl: './images.component.html'
    })
    
    export class ImagesComponent implements OnInit {
        private _hubConnection: HubConnection | undefined;
        public async: any;
        message = '';
        messages: string[] = [];
    
        images: ImageMessage[] = [];
    
        constructor() {
        }
    
        ngOnInit() {
            this._hubConnection = new signalR.HubConnectionBuilder()
                .withUrl('https://localhost:44324/zub')
                .configureLogging(signalR.LogLevel.Trace)
                .build();
    
            this._hubConnection.stop();
    
            this._hubConnection.start().catch(err => {
                console.error(err.toString())
            });
    
            this._hubConnection.on('ImageMessage', (data: any) => {
                console.log(data);
                this.images.push(data);
            });
        }
    }
    

    The Angular template displays the images using the header and the binary data properties.

    <div class="container-fluid">
    
        <h1>Images</h1>
    
       <a href="https://localhost:44324/FileClient/Index" target="_blank">Upload Images</a> 
    
        <div class="row" *ngIf="images.length > 0">
            <img *ngFor="let image of images;" 
            width="150" style="margin-right:5px" 
            [src]="image.imageHeaders + image.imageBinary">
        </div>
    </div>
    

    File Upload

    The images are uploaded using an ASP.NET Core MVC View which uses a multiple file input HTML control. This sends the files to the MVC Controller as a multipart/form-data request.

    <form enctype="multipart/form-data" method="post" action="https://localhost:44324/api/FileUpload/files" id="ajaxUploadForm" novalidate="novalidate">
    
        <fieldset>
            <legend style="padding-top: 10px; padding-bottom: 10px;">Upload Images</legend>
    
            <div class="col-xs-12" style="padding: 10px;">
                <div class="col-xs-4">
                    <label>Upload</label>
                </div>
                <div class="col-xs-7">
                    <input type="file" id="fileInput" name="file" multiple>
                </div>
            </div>
    
            <div class="col-xs-12" style="padding: 10px;">
                <div class="col-xs-4">
                    <input type="submit" value="Upload" id="ajaxUploadButton" class="btn">
                </div>
                <div class="col-xs-7">
    
                </div>
            </div>
    
        </fieldset>
    
    </form>
    

    When the application is run, n instances of the clients can be opened. Then one can be used to upload images to all the other SignalR clients.

    This solution works well, but there are many areas which could be optimized for performance.

    Links

    https://github.com/aspnet/SignalR

    https://github.com/aspnet/SignalR#readme

    https://radu-matei.com/blog/signalr-core/

    https://www.npmjs.com/package/@aspnet/signalr-client

    https://msgpack.org/

    https://stackoverflow.com/questions/40214772/file-upload-in-angular?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa

    https://stackoverflow.com/questions/39272970/angular-2-encode-image-to-base64?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa


    Andrew Lock: Version vs VersionSuffix vs PackageVersion: What do they all mean?

    Version vs VersionSuffix vs PackageVersion: What do they all mean?

    In this post I look at the various different version numbers you can set for a .NET Core project, such as Version, VersionSuffix, and PackageVersion. For each one I'll describe the format it can take, provide some examples, and what it's for.

    This post is very heavily inspired by Nate McMaster's question (which he also answered) on Stack Overflow. I'm mostly just reproducing it here so I can more easily find it again later!

    Version numbers in .NET

    .NET loves version numbers - they're sprinkled around everywhere, so figuring out what version of a tool you have is sometimes easier said than done.

    Leaving aside the tooling versioning, .NET also contains a plethora of version numbers for you to add to your assemblies and NuGet packages. There are at least seven different version numbers you can set when you build your assemblies. In this post I'll describe what they're for, how you can set them, and how you can read/use them.

    The version numbers available to you break logically into two different groups. The first group - VersionPrefix, VersionSuffix, and Version - exists only as MSBuild properties. You can set them in your csproj file, or pass them as command line arguments when you build your app, but their values are only used to control other properties; as far as I can tell, they're not visible directly anywhere in the final build output.

    So what are they for then? They control the default values for the version numbers which are visible in the final build output.

    I'll explain each number in turn, then I'll explain how you can set the version numbers when you build your app.

    VersionPrefix

    • Format: major.minor.patch[.build]
    • Examples: 0.1.0, 1.2.3, 100.4.222, 1.0.0.3
    • Default: 1.0.0
    • Typically used to set the overall SemVer version number for your app/library

    You can use VersionPrefix to set the "base" version number for your library/app. It indirectly controls all of the other version numbers generated by your app (though you can override it for other specific versions). Typically, you would use a SemVer 1.0 version number with three numbers, but technically you can use between 1 and 4 numbers. If you don't explicitly set it, VersionPrefix defaults to 1.0.0.

    VersionSuffix

    • Format: Alphanumeric (+ hyphen) string: [0-9A-Za-z-]*
    • Examples: alpha, beta, rc-preview-2-final
    • Default: (blank)
    • Sets the pre-release label of the version number

    VersionSuffix is used to set the pre-release label of the version number, if there is one, such as alpha or beta. If you don't set VersionSuffix, then you won't have any pre-release labels. VersionSuffix is used to control the Version property, and will appear in PackageVersion and InformationalVersion.

    Version

    • Format: major.minor.patch[.build][-prerelease]
    • Examples: 0.1.0, 1.2.3.5, 99.0.3-rc-preview-2-final
    • Default: VersionPrefix-VersionSuffix (or just VersionPrefix if VersionSuffix is empty)
    • The most common property set in a project, used to generate versions embedded in assembly.

    The Version property is the value most commonly set when building .NET Core applications. It controls the default values of all the version numbers embedded in the build output, such as PackageVersion and AssemblyVersion so it's often used as the single source of the app/library version.

    By default, Version is formed from the combination of VersionPrefix and VersionSuffix, or if VersionSuffix is blank, VersionPrefix only. For example,

    • If VersionPrefix = 0.1.0 and VersionSuffix = beta, then Version = 0.1.0-beta
    • If VersionPrefix = 1.2.3 and VersionSuffix is empty, then Version = 1.2.3

    Alternatively, you can explicitly overwrite the value of Version. If you do that, then the values of VersionPrefix and VersionSuffix are effectively unused.

    The format of Version, as you might expect, is a combination of the VersionPrefix and VersionSuffix formats. The first part is typically a SemVer three-digit string, but it can be up to four digits. The second part, the pre-release label, is an alphanumeric-plus-hyphen string, as for VersionSuffix.

    AssemblyVersion

    • Format: major.minor.patch.build
    • Examples: 0.1.0.0, 1.2.3.4, 99.0.3.99
    • Default: Version without pre-release label
    • The main value embedded into the generated .dll. An important part of assembly identity.

    Every assembly you produce as part of your build process has a version number embedded in it, which forms an important part of the assembly's identity. It's stored in the assembly manifest and is used by the runtime to ensure correct versions are loaded etc.

    The AssemblyVersion is used along with name, public key token and culture information only if the assemblies are strong-named signed. If assemblies are not strong-named signed, only file names are used for loading. You can read more about assembly versioning in the docs.

    The value of AssemblyVersion defaults to the value of Version, but without the pre-release label, and expanded to 4 digits. For example:

    • If Version = 0.1.2, AssemblyVersion = 0.1.2.0
    • If Version = 4.3.2.1-beta, AssemblyVersion = 4.3.2.1
    • If Version = 0.2-alpha, AssemblyVersion = 0.2.0.0

    The AssemblyVersion is embedded in the output assembly as an attribute, System.Reflection.AssemblyVersionAttribute. You can read this value by inspecting the executing Assembly object:

    using System;  
    using System.Reflection;
    
    class Program  
    {
        static void Main(string[] args)
        {
            var assembly = Assembly.GetExecutingAssembly();
            var assemblyVersion = assembly.GetName().Version;
            Console.WriteLine($"AssemblyVersion {assemblyVersion}");
        }
    }
    

    FileVersion

    • Format: major.minor.patch.build
    • Examples: 0.1.0.0, 1.2.3.100
    • Default: AssemblyVersion
    • The file-system version number of the .dll file, that doesn't have to match the AssemblyVersion, but usually does.

    The file version is literally the version number exposed by the DLL to the file system. It's the number displayed in Windows explorer, which often matches the AssemblyVersion, but it doesn't have to. The FileVersion number isn't part of the assembly identity as far as the .NET Framework or runtime are concerned.

    Version vs VersionSuffix vs PackageVersion: What do they all mean?

    When strong naming was more heavily used, it was common to keep the same AssemblyVersion between different builds and increment FileVersion instead, to avoid apps having to update references to the library so often.

    The FileVersion is embedded in the System.Reflection.AssemblyFileVersionAttribute in the assembly. You can read this attribute from the assembly at runtime, or you can use the FileVersionInfo class by passing the full path of the assembly (Assembly.Location) to the FileVersionInfo.GetVersionInfo() method:

    using System;  
    using System.Diagnostics;  
    using System.Reflection;
    
    class Program  
    {
        static void Main(string[] args)
        {
            var assembly = Assembly.GetExecutingAssembly();
            var fileVersionInfo = FileVersionInfo.GetVersionInfo(assembly.Location);
            var fileVersion = fileVersionInfo.FileVersion;
            Console.WriteLine($"FileVersion {fileVersion}");
        }
    }
    

    InformationalVersion

    • Format: anything
    • Examples: 0.1.0.0, 1.2.3.100-beta, So many numbers!
    • Default: Version
    • Another information number embedded into the DLL, can contain any text.

    The InformationalVersion is a bit of an odd one out, in that it doesn't need to contain a "traditional" version number per se - it can contain any text you like, though by default it's set to Version. That makes it generally less useful for programmatic uses, though the value is still displayed in Windows Explorer:

    Version vs VersionSuffix vs PackageVersion: What do they all mean?

    The InformationalVersion is embedded into the assembly as a System.Reflection.AssemblyInformationalVersionAttribute, so you can read it at runtime using the following:

    using System;  
    using System.Reflection;
    
    class Program  
    {
        static void Main(string[] args)
        {
            var assembly = Assembly.GetExecutingAssembly();
            var informationVersion = assembly.GetCustomAttribute<AssemblyInformationalVersionAttribute>().InformationalVersion;
            Console.WriteLine($"InformationalVersion  {informationVersion}");
        }
    }
    

    PackageVersion

    • Format: major.minor.patch[.build][-prerelease]
    • Examples: 0.1.0, 1.2.3.5, 99.0.3-rc-preview-2-final
    • Default: Version
    • Used to generate the NuGet package version when building a package using dotnet pack

    PackageVersion is the only version number that isn't embedded in the output dll directly. Instead, it's used to control the version number of the NuGet package that's generated when you call dotnet pack.

    By default, PackageVersion takes the same value as Version, so it's typically a three value SemVer version number, with or without a pre-release label. As with all the other version numbers, it can be overridden at build time, so it can differ from all the other assembly version numbers.

    How to set the version number when you build your app/library

    That's a lot of numbers, and you can technically set every one to a different value! But if you're a bit overwhelmed, don't worry. It's likely that you'll only want to set one or two values: either VersionPrefix and VersionSuffix, or Version directly.

    You can set the value of any of these numbers in several ways. I'll walk through them below.

    Setting an MSBuild property in your csproj file

    With .NET Core, and the simplification of the .csproj project file format, adding properties to your project file is no longer an arduous task. You can set any of the version numbers I've described in this post by setting a property in your .csproj file.

    For example, the following .csproj file sets the Version number of a console app to 1.2.3-beta, and adds a custom InformationalVersion:

    <Project Sdk="Microsoft.NET.Sdk">
    
      <PropertyGroup>
        <OutputType>Exe</OutputType>
        <TargetFramework>netcoreapp2.0</TargetFramework>
        <Version>1.2.3-beta</Version>
        <InformationalVersion>This is a prerelease package</InformationalVersion>
      </PropertyGroup>
    
    </Project>  
    

    Overriding values when calling dotnet build

    As well as hard-coding the version numbers into your project file, you can also pass them as arguments when you build your app using dotnet build.

    If you just want to override the VersionSuffix, you can use the --version-suffix argument for dotnet build. For example:

    dotnet build --configuration Release --version-suffix preview2-final  
    

    If you want to override any other values, you'll need to use the MSBuild property format instead. For example, to set the Version number:

    dotnet build --configuration Release /p:Version=1.2.3-preview2-final  
    

    Similarly, if you're creating a NuGet package with dotnet pack, and you want to override the PackageVersion, you'll need to use MSBuild property overrides

    dotnet pack --no-build /p:PackageVersion=9.9.9-beta  
    

    Using assembly attributes

    Before .NET Core, the standard way to set the AssemblyVersion, FileVersion, and InformationalVersion was through assembly attributes, for example:

    [assembly: AssemblyVersion("1.2.3.4")]
    [assembly: AssemblyFileVersion("6.6.6.6")]
    [assembly: AssemblyInformationalVersion("So many numbers!")]
    

    However, if you try to do that with a .NET Core project you'll be presented with errors!

    > Error CS0579: Duplicate 'System.Reflection.AssemblyFileVersionAttribute' attribute
    > Error CS0579: Duplicate 'System.Reflection.AssemblyInformationalVersionAttribute' attribute
    > Error CS0579: Duplicate 'System.Reflection.AssemblyVersionAttribute' attribute
    

    As the SDK sets these attributes automatically as part of the build, you'll get build time errors. Simply delete the assembly attributes, and use the MSBuild properties instead.

    Alternatively, as James Gregory points out on Twitter, you can still use the Assembly attributes in your code if you turn off the auto-generated assembly attributes. You can do this by setting the following property in your csproj file:

    <PropertyGroup>  
       <GenerateAssemblyInfo>false</GenerateAssemblyInfo>
    </PropertyGroup>  
    

    This could be useful if you already have tooling or a CI process to update the values in the files, but otherwise I'd encourage you to embrace the new approach to setting your project's version numbers.

    Summary

    In this post I described the difference between the various version numbers you can set for your apps and libraries in .NET Core. There's an overwhelming number of versions to choose from, but generally it's best to just set the Version and use it for all of the version numbers.


    Anuraj Parameswaran: Getting started with SignalR service in Azure

    This post is about how to work with the SignalR service in Azure. At Build 2018, Microsoft introduced SignalR in Azure as a service.
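
    As a hedged sketch of the basic wiring (the Microsoft.Azure.SignalR package, the ChatHub, and the Azure:SignalR:ConnectionString setting below are assumptions based on the early preview, not details from the post):

    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.SignalR;
    using Microsoft.Extensions.DependencyInjection;

    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            // Routes SignalR traffic through the Azure SignalR Service; the connection
            // string is read from the Azure:SignalR:ConnectionString configuration value.
            services.AddSignalR().AddAzureSignalR();
        }

        public void Configure(IApplicationBuilder app)
        {
            app.UseAzureSignalR(routes =>
            {
                routes.MapHub<ChatHub>("/chat"); // ChatHub is a placeholder hub
            });
        }
    }

    public class ChatHub : Hub { }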


    Anuraj Parameswaran: Static Code Analysis of .NET Core Projects with SonarCloud

    This post is about how to use the SonarCloud application for running static code analysis on .NET Core projects. Static analysis is a way of automatically analysing code without executing it. SonarCloud is the cloud offering of the SonarQube app. It is free for open source projects.
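
    For context, a SonarCloud analysis from the command line usually follows the begin/build/end pattern of the SonarScanner for MSBuild; a rough example (the project key, organisation and token are placeholders) looks like this:

    dotnet tool install --global dotnet-sonarscanner
    dotnet sonarscanner begin /k:"my-project-key" /o:"my-organisation" /d:sonar.host.url="https://sonarcloud.io" /d:sonar.login="<sonar-token>"
    dotnet build
    dotnet sonarscanner end /d:sonar.login="<sonar-token>"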


    Andrew Lock: Creating a generalised Docker image for building ASP.NET Core apps using ONBUILD

    Creating a generalised Docker image for building ASP.NET Core apps using ONBUILD

    This is a follow-up to my recent posts on building ASP.NET Core apps in Docker:

    In this post I'll show how to create a generalised Docker image that can be used to build multiple ASP.NET Core apps. If your app conforms to a standard format (e.g. projects in the src directory, test projects in a test directory) then you can use it as the base image of a Dockerfile to create very simple Docker images for building your own apps.

    As an example, if you use the Docker image described in this post (andrewlock/aspnetcore-build:2.0.7-2.1.105), you can build your ASP.NET Core application using the following Dockerfile:

    # Build image
    FROM andrewlock/aspnetcore-build:2.0.7-2.1.105 as builder
    
    # Publish
    RUN dotnet publish "./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj" -c Release -o "../../dist" --no-restore
    
    #App image
    FROM microsoft/aspnetcore:2.0.7  
    WORKDIR /app  
    ENV ASPNETCORE_ENVIRONMENT Local  
    ENTRYPOINT ["dotnet", "AspNetCoreInDocker.Web.dll"]  
    COPY --from=builder /sln/dist .  
    

    This multi-stage build image can build a complete app - the builder stage has only two commands: a FROM statement and a single RUN statement to publish the app. The runtime image build itself is the same as it would be without the generalised build image. If you wish to use the builder image yourself, it's available as the andrewlock/aspnetcore-build repository on Docker Hub.

    In this post I'll describe the motivation for creating the generalised image, how to use Docker's ONBUILD command, and how the generalised image itself works.

    The Docker build image to generalise

    When you build an ASP.NET Core application (whether "natively" or in Docker), you typically move through the following steps:

    • Restore the NuGet packages
    • Build the libraries, test projects, and app
    • Test the test projects
    • Publish the app

    In Docker, these steps are codified in a Dockerfile by the layers you add to your image. A basic, non-general, Dockerfile to build your app could look something like the following:

    Note, this doesn't include the optimisation described in my earlier post or the follow up:

    FROM microsoft/aspnetcore-build:2.0.7-2.1.105 AS builder  
    WORKDIR /sln  
    
    # Copy solution folders and NuGet config
    COPY ./*.sln ./NuGet.config  ./
    
    # Copy the main source project files
    COPY ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj  
    COPY ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj
    
    # Copy the test project files
    COPY test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj
    
    # Restore to cache the layers
    RUN dotnet restore
    
    # Copy all the source code and build
    COPY ./test ./test  
    COPY ./src ./src  
    RUN dotnet build -c Release --no-restore
    
    # Run dotnet test on the solution
    RUN dotnet test "./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj" -c Release --no-build --no-restore
    
    RUN dotnet publish "./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj" -c Release -o "../../dist" --no-restore
    
    #App image
    FROM microsoft/aspnetcore:2.0.7  
    WORKDIR /app  
    ENV ASPNETCORE_ENVIRONMENT Local  
    ENTRYPOINT ["dotnet", "AspNetCoreInDocker.Web.dll"]  
    COPY --from=builder /sln/dist .  
    

    This Dockerfile will build and test a specific ASP.NET Core app, but there are a lot of hard-coded paths in there. When you create a new app, you can copy and paste this Dockerfile, but you'll need to tweak all the commands to use the correct paths.

    By the time you get to your third copy-and-paste (and your n-th inevitable typo), you'll be wondering if there's a better, more general way to achieve the same result. That's where Docker's ONBUILD command comes in. We can use it to create a generalised "builder" image for building our apps, and remove a lot of the repetition in the process.

    The ONBUILD Docker command

    In the Dockerfile shown above, the COPY and RUN commands are all executed in the context of your app. For normal builds, that's fine - the files that you want to copy are in the current directory. You're defining the commands to be run when you call docker build ..

    But we're trying to build a generalised "builder" image that we can use as the base for building other ASP.NET Core apps. Instead of defining the commands we want to execute when building our "builder" file, the commands should be run when an image that uses our "builder" as a base is built.

    The Docker documentation describes it as a "trigger" - you're defining a command to be triggered when the downstream build runs. I think of ONBUILD as effectively automating copy-and-paste; the ONBUILD command is copy-and-pasted into the downstream build.

    For example, consider this simple builder Dockerfile which uses ONBUILD:

    FROM microsoft/aspnetcore-build:2.0.7-2.1.105 AS builder  
    WORKDIR /sln
    
    ONBUILD COPY ./test ./test  
    ONBUILD COPY ./src ./src
    
    ONBUILD RUN dotnet build -c Release  
    

    This simple Dockerfile doesn't have any optimisations, but it uses ONBUILD to register triggers for downstream builds. Imagine you build this image using docker build . -t andrewlock/testbuild. That creates a builder image called andrewlock/testbuild.

    The ONBUILD commands don't actually run when you build the "builder" image; they only run when you build the downstream image.

    You can then use this image as a basic "builder" image for your ASP.NET Core apps. For example, you could use the following Dockerfile to build your ASP.NET Core app:

    FROM andrewlock/testbuild
    
    ENTRYPOINT ["dotnet", "./src/MyApp/MyApp.dll"]  
    

    Note, for simplicity this example doesn't publish the app, or use multi-stage builds to optimise the runtime container size. Be sure to use those optimisations in production.

    That's a very small Dockerfile for building and running a whole app! The use of ONBUILD means that our downstream Dockerfile is equivalent to:

    FROM microsoft/aspnetcore-build:2.0.7-2.1.105 AS builder  
    WORKDIR /sln
    
    COPY ./test ./test  
    COPY ./src ./src
    
    RUN dotnet build -c Release
    
    ENTRYPOINT ["dotnet", "./src/MyApp/MyApp.dll"]  
    

    When you build this Dockerfile, the ONBUILD commands will be triggered in the current directory, and the app will be built. You only had to include the "builder" base image, and you got all that for free.

    That's the goal I want to achieve with a generalised builder image. You should be able to include the base image, and it'll handle all your app building for you. In the next section, I'll show the solution I came up with, and walk through the layers it contains.

    The generalised Docker builder image

    The image I've come up with is very close to the example shown at the start of this post. It uses the dotnet restore optimisation I described in my previous post, along with a workaround to allow running all the test projects in a solution:

    # Build image
    FROM microsoft/aspnetcore-build:2.0.7-2.1.105 AS builder  
    WORKDIR /sln
    
    ONBUILD COPY ./*.sln ./NuGet.config  ./
    
    # Copy the main source project files
    ONBUILD COPY src/*/*.csproj ./  
    ONBUILD RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done
    
    # Copy the test project files
    ONBUILD COPY test/*/*.csproj ./  
    ONBUILD RUN for file in $(ls *.csproj); do mkdir -p test/${file%.*}/ && mv $file test/${file%.*}/; done 
    
    ONBUILD RUN dotnet restore
    
    ONBUILD COPY ./test ./test  
    ONBUILD COPY ./src ./src  
    ONBUILD RUN dotnet build -c Release --no-restore
    
    ONBUILD RUN find ./test -name '*.csproj' -print0 | xargs -L1 -0 dotnet test -c Release --no-build --no-restore  
    

    If you've read my previous posts, then much of this should look familiar (with extra ONBUILD prefixes), but I'll walk through each layer below.

    FROM microsoft/aspnetcore-build:2.0.7-2.1.105 AS builder  
    WORKDIR /sln  
    

    This defines the base image and working directory for our builder, and hence for the downstream apps. I've used the microsoft/aspnetcore-build image, as we're going to build ASP.NET Core apps.

    Note, the microsoft/aspnetcore-build image is being retired in .NET Core 2.1 - you will need to switch to the microsoft/dotnet image instead.

    The next line shows our first use of ONBUILD:

    ONBUILD COPY ./*.sln ./NuGet.config ./*.props ./*.targets  ./  
    

    This will copy the .sln file, NuGet.config, and any .props or .targets files in the root folder of the downstream build.

    # Copy the main source project files
    ONBUILD COPY src/*/*.csproj ./  
    ONBUILD RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done
    
    # Copy the test project files
    ONBUILD COPY test/*/*.csproj ./  
    ONBUILD RUN for file in $(ls *.csproj); do mkdir -p test/${file%.*}/ && mv $file test/${file%.*}/; done  
    

    The Dockerfile uses the optimisation described in my previous post to copy the .csproj files from the src and test directories. As we're creating a generalised builder, we have to use an approach like this in which we don't explicitly specify the filenames.

    ONBUILD RUN dotnet restore
    
    ONBUILD COPY ./test ./test  
    ONBUILD COPY ./src ./src  
    ONBUILD RUN dotnet build -c Release --no-restore  
    

    The next section is the meat of the Dockerfile - we restore the NuGet packages, copy the source code across, and then build the app (using the release configuration).

    ONBUILD RUN find ./test -name '*.csproj' -print0 | xargs -L1 -0 dotnet test -c Release --no-build --no-restore  
    

    Which brings us to the final statement in the Dockerfile, in which we run all the test projects in the test directory. Unfortunately, due to limitations with dotnet test, this line is a bit of a hack.

    Ideally, we'd be able to call dotnet test on the solution file, and it would test all the projects that are test projects. However, this won't give you the result you want - it will try to test non-test projects, which will give you errors. There are several open issues discussing this problem, along with some workarounds, but most of them require changes to the app itself or the addition of extra files. I decided to use a simple scripting approach based on this comment instead.

    Using find with xargs is a common approach in Linux to execute a command against a number of different files.

    The find command lists all the .csproj files in the test sub-directory, i.e. our test project files. The -print0 argument means that each filename is suffixed with a null character.

    The xargs command takes each filename produced by find and passes it as an argument to dotnet test -c Release --no-build --no-restore. The additional -0 argument indicates that the filenames are null-delimited, and the -L1 argument ensures only a single filename is passed to each dotnet test invocation.

    This approach isn't especially elegant, but it does the job, and it means we can avoid having to explicitly specify the paths to the test project.

    That's as much as we can do in the builder image - the publishing step is very specific to each app, so it's not feasible to include that in the builder. Instead, you have to specify that step in your own downstream Dockerfile, as shown in the next section.

    Using the generalised build image

    You can use the generalised Docker image to create much simpler Dockerfiles for your downstream apps. With andrewlock/aspnetcore-build as your base image, all you need to do is publish your app and copy it to the runtime image. The following shows an example of what this might look like for a simple ASP.NET Core app.

    # Build image
    FROM andrewlock/aspnetcore-build:2.0.7-2.1.105 as builder
    
    # Publish
    RUN dotnet publish "./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj" -c Release -o "../../dist" --no-restore
    
    #App image
    FROM microsoft/aspnetcore:2.0.7  
    WORKDIR /app  
    ENV ASPNETCORE_ENVIRONMENT Local  
    ENTRYPOINT ["dotnet", "AspNetCoreInDocker.Web.dll"]  
    COPY --from=builder /sln/dist .  
    

    This obviously only works if your apps use the same conventions as the builder image assumes, namely:

    • Your app and library projects are in a src subdirectory
    • Your test projects are in a test subdirectory
    • All project files have the same name as their containing folders
    • There is only a single solution file

    If these conventions don't match your requirements, then my builder image won't work for you. But now you know how to create your own builder images using the ONBUILD command.

    Summary

    In this post I showed how you could use the Docker ONBUILD command to create custom app-builder Docker images. I showed an example image that uses a number of optimisations to create a generalised ASP.NET Core builder image which will restore, build, and test your ASP.NET Core app, as long as it conforms to a number of standard conventions.


    Anuraj Parameswaran: Working with Microsoft Library Manager for ASP.NET Core

    This post is about working with Microsoft Library Manager for ASP.NET Core. Recently the ASP.NET team posted a blog about Bower deprecation, and in it they mentioned a new tool called Library Manager - Library Manager (“LibMan” for short) is Visual Studio’s experimental client-side library acquisition tool. It provides a lightweight, simple mechanism that helps users find and fetch library files from an external source (such as CDNJS) and place them in your project. LibMan is not a package management system. If you’re using npm / yarn / (or something else), you can continue to use it. LibMan was not developed as a replacement for these tools.
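
    To give a flavour of what LibMan configuration looks like (the library version and destination folder here are only examples), a libman.json file in the project root lists the files to fetch:

    {
      "version": "1.0",
      "defaultProvider": "cdnjs",
      "libraries": [
        {
          "library": "jquery@3.3.1",
          "destination": "wwwroot/lib/jquery"
        }
      ]
    }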


    Damien Bowden: OAuth using OIDC Authentication with PKCE for a .NET Core Console Native Application

    This article shows how to use a .NET Core console application securely with an API using the RFC 7636 specification. The app logs into IdentityServer4 using the OIDC authorization code flow with PKCE (Proof Key for Code Exchange). The app can then use the access token to consume data from a secure API. This would be useful for PowerShell script clients or .NET Core console apps. The IdentityModel.OidcClient samples provide a whole range of native client examples, and this code was built using the .NET Core native code example.

    Code: https://github.com/damienbod/AspNetCoreWindowsAuth

    History

    2018-05-15 Updated title because it is confusing, OAuth Authentication replaced with OAuth using OIDC Authentication

    Native App PKCE Authorization Code Flow

    The RFC 7636 specification provides a safe way for native applications to get access tokens to use with secure APIs. Native applications face similar problems to web applications: single sign-on is sometimes required, the native app should not handle passwords, the server requires a way of validating the identity, the client app requires a way of validating the token, and so on.

    RFC 7636 provides one of the best ways of doing this, and because it is an RFC standard, tested libraries that implement it can be used. There is no need to re-invent the wheel.

    [Image: protocol flow overview - source: RFC 7636, section 1.1 Protocol Flow]

    The Proof Key for Code Exchange by OAuth Public Clients extension was designed so that the authorization code cannot be intercepted in the Authorization Code Flow and used to get an access token. This can help, for example, when the code is leaked to shared logs on a mobile device and a malicious application uses it to get an access token.

    The extra protection is added to this flow by using a code_verifier, a code_challenge and a code_challenge_method. The code_challenge and the code_challenge_method are sent to the server with the authorization request. The code_challenge is derived from the code_verifier. When requesting the access token, the code_verifier is sent to the server, and this is then validated on the OIDC server using the values sent in the original authorization request.
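
    To make that relationship concrete, here is a minimal C# sketch of how a code_verifier and an S256 code_challenge are typically derived per RFC 7636 (the client below doesn't need this code, because IdentityModel.OidcClient handles it internally):

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class Pkce
    {
        // code_verifier: a high-entropy random string, base64url-encoded without padding
        public static string CreateVerifier()
        {
            var bytes = new byte[32];
            using (var rng = RandomNumberGenerator.Create())
            {
                rng.GetBytes(bytes);
            }
            return Base64Url(bytes);
        }

        // code_challenge = BASE64URL(SHA256(ASCII(code_verifier))) when code_challenge_method = S256
        public static string CreateChallenge(string codeVerifier)
        {
            using (var sha256 = SHA256.Create())
            {
                return Base64Url(sha256.ComputeHash(Encoding.ASCII.GetBytes(codeVerifier)));
            }
        }

        private static string Base64Url(byte[] input) =>
            Convert.ToBase64String(input).TrimEnd('=').Replace('+', '-').Replace('/', '_');
    }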

    STS Server Configuration

    On IdentityServer4, the Proof Key for Code Exchange by OAuth can be configured as follows:

    new Client
    {
    	ClientId = "native.code",
    	ClientName = "Native Client (Code with PKCE)",
    
    	RedirectUris = { "http://127.0.0.1:45656" },
    	PostLogoutRedirectUris = { "http://127.0.0.1:45656" },
    
    	RequireClientSecret = false,
    
    	AllowedGrantTypes = GrantTypes.Code,
    	RequirePkce = true,
    	AllowedScopes = { "openid", "profile", "email", "native_api" },
    
    	AllowOfflineAccess = true,
    	RefreshTokenUsage = TokenUsage.ReUse
     }
    

    RequirePkce is set to true, and no client secret is used - unlike the Authorization Code flow for web applications - because a secret cannot be kept safe on a public native client. Depending on the native device, the RedirectUris can be configured as required.

    Native client using .NET Core

    Implementing the client for a .NET Core application is really easy thanks to the IdentityModel.OidcClient NuGet package and the examples provided on GitHub. The samples repository provides reference examples for lots of different native client types.

    This example was built using the following project: .NET Core Native Code

    IdentityModel.OidcClient takes care of the PKCE handling and the flow.

    The login can be implemented as follows:

    private static async Task Login()
    {
    	var browser = new SystemBrowser(45656);
    	string redirectUri = "http://127.0.0.1:45656";
    
    	var options = new OidcClientOptions
    	{
    		Authority = _authority,
    		ClientId = "native.code",
    		RedirectUri = redirectUri,
    		Scope = "openid profile native_api",
    		FilterClaims = false,
    		Browser = browser,
    		Flow = OidcClientOptions.AuthenticationFlow.AuthorizationCode,
    		ResponseMode = OidcClientOptions.AuthorizeResponseMode.Redirect,
    		LoadProfile = true
    	};
    
    	_oidcClient = new OidcClient(options); 
    	 var result = await _oidcClient.LoginAsync(new LoginRequest());
    	 ShowResult(result);
    }
    

    The SystemBrowser class uses this implementation from the IdentityModel.OidcClient samples. The results can be displayed as follows:

    private static void ShowResult(LoginResult result)
    {
    	if (result.IsError)
    	{
    		Console.WriteLine("\n\nError:\n{0}", result.Error);
    		return;
    	}
    
    	Console.WriteLine("\n\nClaims:");
    	foreach (var claim in result.User.Claims)
    	{
    		Console.WriteLine("{0}: {1}", claim.Type, claim.Value);
    	}
    
    	Console.WriteLine($"\nidentity token: {result.IdentityToken}");
    	Console.WriteLine($"access token:   {result.AccessToken}");
    	Console.WriteLine($"refresh token:  {result?.RefreshToken ?? "none"}");
    }
    

    The API can then be called using the access token.

    private static async Task CallApi(string currentAccessToken)
    {
    	_apiClient.SetBearerToken(currentAccessToken);
    	var response = await _apiClient.GetAsync("");
    
    	if (response.IsSuccessStatusCode)
    	{
    		var json = JArray.Parse(await response.Content.ReadAsStringAsync());
    		Console.WriteLine(json);
    	}
    	else
    	{
    		Console.WriteLine($"Error: {response.ReasonPhrase}");
    	}
    }
    

    IdentityModel.OidcClient can be used on almost any native device that needs to, or should, implement Proof Key for Code Exchange by OAuth for authorization. There is no need to handle passwords in your native application.

    Links:

    http://openid.net/2015/05/26/enhancing-oauth-security-for-mobile-applications-with-pkse/

    https://tools.ietf.org/html/rfc7636

    https://github.com/IdentityModel/IdentityModel.OidcClient.Samples

    https://connect2id.com/blog/connect2id-server-3.9

    https://www.davidbritch.com/2017/08/using-pkce-with-identityserver-from_9.html

    https://developer.okta.com/authentication-guide/implementing-authentication/auth-code-pkce

    OAuth 2.0 and PKCE

    https://community.apigee.com/questions/47397/why-do-we-need-pkce-specification-rfc-7636-in-oaut.html

    https://oauth.net/articles/authentication/

    https://www.scottbrady91.com/OAuth/OAuth-is-Not-Authentication


    Andrew Lock: Optimising ASP.NET Core apps in Docker - avoiding manually copying csproj files (Part 2)

    Optimising ASP.NET Core apps in Docker - avoiding manually copying csproj files (Part 2)

    This is a follow-up to my recent posts on building ASP.NET Core apps in Docker:

    In this post I expand on a comment Aidan made on my last post:

    Something that we do instead of the pre-build tarball step is the following, which relies on the pattern of naming the csproj the same as the directory it lives in. This appears to match the structure of your project, so it should work for you too.

    I'll walk through the code he provides to show how it works, and how to use it to build a standard ASP.NET Core application with Docker. The technique in this post can be used instead of the tar-based approach from my previous post, as long as your solution conforms to some standard conventions.

    I'll start by providing some background to why it's important to optimise the order of your Dockerfile, the options I've already covered, and the solution provided by Aidan in his comment.

    Background - optimising your Dockerfile for dotnet restore

    When building ASP.NET Core apps using Docker, it's important to consider the way Docker caches layers to build your app. I discussed this process in a previous post on building ASP.NET Core apps using Cake in Docker, so if that's new to you, I suggest checking it out.

    A common way to take advantage of the build cache when building your ASP.NET Core app is to copy across only the .csproj, .sln and nuget.config files for your app before doing dotnet restore, instead of copying the entire source code. The NuGet package restore can be one of the slowest parts of the build, and it only depends on these files. By copying them first, Docker can cache the result of the restore, so it doesn't need to run again if all you do is change a .cs file, for example.

    Due to the nature of Docker, there are many ways to achieve this, and I've discussed two of them previously, as summarised below.

    Option 1 - Manually copying the files across

    The easiest, and most obvious, way to copy all the .csproj files from the Docker context into the image is to do it manually using the Docker COPY command. For example:

    # Build image
    FROM microsoft/aspnetcore-build:2.0.6-2.1.101 AS builder  
    WORKDIR /sln
    
    COPY ./aspnetcore-in-docker.sln ./NuGet.config  ./  
    COPY ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj  ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj  
    COPY ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj  ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj  
    COPY ./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj  ./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj
    
    RUN dotnet restore  
    

    Unfortunately, this has one major downside: You have to manually reference every .csproj (and .sln) file in the Dockerfile.

    Ideally, you'd be able to do something like the following, but the wildcard expansion doesn't work like you might expect:

    # Copy all csproj files (WARNING, this doesn't work!)
    COPY ./**/*.csproj ./  
    

    That led to my alternative solution: creating a tar-ball of the .csproj files and expanding them inside the image.

    Option 2 - Creating a tar-ball of the project files

    In order to create a general solution, I settled on an approach that required scripting steps outside of the Dockerfile. For details, see my previous post, but in summary:

    1. Create a tarball of the project files using

    find . -name "*.csproj" -print0 \  
        | tar -cvf projectfiles.tar --null -T -
    

    2. Expand the tarball in the Dockerfile

    FROM microsoft/aspnetcore-build:2.0.6-2.1.101 AS builder  
    WORKDIR /sln
    
    COPY ./aspnetcore-in-docker.sln ./NuGet.config  ./  
    COPY projectfiles.tar .  
    RUN tar -xvf projectfiles.tar
    
    RUN dotnet restore  
    

    3. Delete the tarball once build is complete

    rm projectfiles.tar  
    

    This process works, but it's messy. It involves running bash scripts both before and after docker build, which means you can't do things like build automatically using DockerHub. This brings us to the hybrid alternative, proposed by Aidan.

    The new-improved solution

    The alternative solution actually uses the wildcard technique I previously dismissed, but with some assumptions about your project structure, a two-stage approach, and a bit of clever bash-work to work around the wildcard limitations.

    I'll start by presenting the complete solution, and I'll walk through and explain the steps later.

    FROM microsoft/aspnetcore-build:2.0.6-2.1.101 AS builder  
    WORKDIR /sln
    
    COPY ./*.sln ./NuGet.config  ./
    
    # Copy the main source project files
    COPY src/*/*.csproj ./  
    RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done
    
    # Copy the test project files
    COPY test/*/*.csproj ./  
    RUN for file in $(ls *.csproj); do mkdir -p test/${file%.*}/ && mv $file test/${file%.*}/; done
    
    RUN dotnet restore
    
    # Remainder of build process
    

    This solution is much cleaner than my previous tar-based effort, as it doesn't require any external scripting, just standard docker COPY and RUN commands. It gets around the wildcard issue by copying across csproj files in the src directory first, moving them to their correct location, and then copying across the test project files.

    This requires a project layout similar to the following, where your project files have the same name as their folders. For the Dockerfile in this post, it also requires your projects to all be located in either the src or test sub-directory:

    [Image: example solution layout, with projects under src and test folders named after their .csproj files]

    Step-by-step breakdown of the new solution

    Just to be thorough, I'll walk through each stage of the Dockerfile below.

    1. Set the base image

    The first steps of the Dockerfile are the same for all solutions: it sets the base image, and copies across the .sln and NuGet.config file.

    FROM microsoft/aspnetcore-build:2.0.6-2.1.101 AS builder  
    WORKDIR /sln
    
    COPY ./*.sln ./NuGet.config  ./  
    

    After this stage, your image will contain 2 files:

    [Image: the image file system after this stage - just the .sln and NuGet.config files in /sln]

    2. Copy src .csproj files to root

    In the next step, we copy all the .csproj files from the src folder, and dump them in the root directory.

    COPY src/*/*.csproj ./  
    

    The wildcard expands to match any .csproj files that are one directory down, in the src folder. After it runs, your image contains the following file structure:

    [Image: the .csproj files from src copied flat into the /sln root]

    3. Restore src folder hierarchy

    The next stage is where the magic happens. We take the flat list of csproj files, and move them back to their correct location, nested inside sub-folders of src.

    RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done  
    

    I'll break this command down so we can see what it's doing:

    1. for file in $(ls *.csproj); do ...; done - List all the .csproj files in the root directory. Loop over them, and assign the file variable to the filename. In our case, the loop will run twice, once with AspNetCoreInDocker.Lib.csproj and once with AspNetCoreInDocker.Web.csproj.

    2. ${file%.*} - use bash parameter expansion to strip the extension from the filename, giving AspNetCoreInDocker.Lib and AspNetCoreInDocker.Web (see the short example after this list).

    3. mkdir -p src/${file%.*}/ - Create the sub-folders based on the file names. The -p parameter ensures the src parent folder is created if it doesn't already exist.

    4. mv $file src/${file%.*} - Move the csproj file into the newly created sub-folder.
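
    As a quick illustration of the ${file%.*} expansion mentioned in step 2 (runnable in any bash shell):

    file=AspNetCoreInDocker.Web.csproj
    echo "${file%.*}"   # prints AspNetCoreInDocker.Web - the shortest ".*" suffix is removed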

    After this stage executes, your image will contain a file system like the following:

    [Image: the .csproj files moved back into their sub-folders under src]

    4. Copy test .csproj files to root

    Now that the src project files are in place, we can work on the test folder. The first step is to copy its .csproj files into the root directory again:

    COPY test/*/*.csproj ./  
    

    Which gives a hierarchy like the following:

    [Image: the test .csproj files copied flat into the /sln root]

    5. Restore test folder hierarchy

    The final step is to restore the test folder as we did in step 3. We can use pretty much the same code as in step 3, but with src replaced by test:

    RUN for file in $(ls *.csproj); do mkdir -p test/${file%.*}/ && mv $file test/${file%.*}/; done  
    

    After this stage we have our complete skeleton project, consisting of just our sln, NuGet.config, and .csproj files, all in their correct place.

    [Image: the complete skeleton solution - .sln, NuGet.config, and all .csproj files in their correct locations]

    That leaves us free to build and restore the project while taking advantage of Docker's layer-caching optimisations, without having to litter our Dockerfile with specific project names, or use outside scripting to create a tar-ball.

    Summary

    For performance purposes, it's important to take advantage of Docker's caching mechanisms when building your ASP.NET Core applications. Some of the biggest gains can be had by caching the restore phase of the build process.

    In this post I showed an improved way to achieve this without having to resort to external scripting using tar, or having to list every .csproj file in your Dockerfile. This solution was based on a comment by Aidan on my previous post, so a big thanks to him!


    Damien Bowden: ASP.NET Core Authorization for Windows, Local accounts

    This article shows how authorization could be implemented for an ASP.NET Core MVC application. The authorization logic is extracted into a separate project, which some software certification processes require. This could also be deployed as a separate service.

    Code: https://github.com/damienbod/AspNetCoreWindowsAuth

    Blogs in this series:

    Application Authorization Service

    The authorization service uses the claims returned for the identity of the MVC application. The claims are returned from the ASP.NET Core MVC client app which authenticates using the OpenID Connect Hybrid flow. The values are then used to create or define the authorization logic.

    The authorization service supports a single API method, IsAdmin. This method checks if the username is a defined admin, and that the person/client used a Windows account to login.

    using System;
    
    namespace AppAuthorizationService
    {
        public class AppAuthorizationService : IAppAuthorizationService
        {
            public bool IsAdmin(string username, string providerClaimValue)
            {
                return RulesAdmin.IsAdmin(username, providerClaimValue);
            }
        }
    }
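
    The IAppAuthorizationService interface itself isn't shown above; a minimal definition consistent with this implementation (the actual file in the repository may differ slightly) is:

    namespace AppAuthorizationService
    {
        public interface IAppAuthorizationService
        {
            bool IsAdmin(string username, string providerClaimValue);
        }
    }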
    
    

    The rules define the authorization process. This is just a simple static configuration class, but any database, configuration file, or authorization API could be used to define and check the rules.

    In this example, the administrators are defined in the class, and the Windows value is checked for the claim parameter.

    using System;
    using System.Collections.Generic;
    using System.Text;
    
    namespace AppAuthorizationService
    {
        public static class RulesAdmin
        {
    
            private static List<string> adminUsers = new List<string>();
    
            private static List<string> adminProviders = new List<string>();
    
            public static bool IsAdmin(string username, string providerClaimValue)
            {
                if(adminUsers.Count == 0)
                {
                    AddAllowedUsers();
                    AddAllowedProviders();
                }
    
                if (adminUsers.Contains(username) && adminProviders.Contains(providerClaimValue))
                {
                    return true;
                }
    
                return false;
            }
    
            private static void AddAllowedUsers()
            {
                adminUsers.Add("SWISSANGULAR\\Damien");
            }
    
            private static void AddAllowedProviders()
            {
                adminProviders.Add("Windows");
            }
        }
    }
    
    

    ASP.NET Core Policies

    The application authorization service also defines the ASP.NET Core policies which can be used by the client application. An IAuthorizationRequirement is implemented.

    using Microsoft.AspNetCore.Authorization;
     
    namespace AppAuthorizationService
    {
        public class IsAdminRequirement : IAuthorizationRequirement{}
    }
    

    The IAuthorizationRequirement implementation is then used in the AuthorizationHandler implementation, IsAdminHandler. This handler validates the claims using the IAppAuthorizationService service.

    using Microsoft.AspNetCore.Authorization;
    using System;
    using System.Linq;
    using System.Threading.Tasks;
    
    namespace AppAuthorizationService
    {
        public class IsAdminHandler : AuthorizationHandler<IsAdminRequirement>
        {
            private IAppAuthorizationService _appAuthorizationService;
    
            public IsAdminHandler(IAppAuthorizationService appAuthorizationService)
            {
                _appAuthorizationService = appAuthorizationService;
            }
    
            protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, IsAdminRequirement requirement)
            {
                if (context == null)
                    throw new ArgumentNullException(nameof(context));
                if (requirement == null)
                    throw new ArgumentNullException(nameof(requirement));
    
                var claimIdentityprovider = context.User.Claims.FirstOrDefault(t => t.Type == "http://schemas.microsoft.com/identity/claims/identityprovider");
    
                if (claimIdentityprovider != null && _appAuthorizationService.IsAdmin(context.User.Identity.Name, claimIdentityprovider.Value))
                {
                    context.Succeed(requirement);
                }
    
                return Task.CompletedTask;
            }
        }
    }
    

    As an example, a second policy is also defined, which checks that the http://schemas.microsoft.com/identity/claims/identityprovider claim has a Windows value.

    using Microsoft.AspNetCore.Authorization;
    
    namespace AppAuthorizationService
    {
        public static class MyPolicies
        {
            private static AuthorizationPolicy requireWindowsProviderPolicy;
    
            public static AuthorizationPolicy GetRequireWindowsProviderPolicy()
            {
                if (requireWindowsProviderPolicy != null) return requireWindowsProviderPolicy;
    
                requireWindowsProviderPolicy = new AuthorizationPolicyBuilder()
                      .RequireClaim("http://schemas.microsoft.com/identity/claims/identityprovider", "Windows")
                      .Build();
    
                return requireWindowsProviderPolicy;
            }
        }
    }
    
    

    Using the Authorization Service and Policies

    The authorization can then be used by registering the services in the Startup of the client application.

    services.AddSingleton<IAppAuthorizationService, AppAuthorizationService.AppAuthorizationService>();
    services.AddSingleton<IAuthorizationHandler, IsAdminHandler>();
    
    services.AddAuthorization(options =>
    {
    	options.AddPolicy("RequireWindowsProviderPolicy", MyPolicies.GetRequireWindowsProviderPolicy());
    	options.AddPolicy("IsAdminRequirementPolicy", policyIsAdminRequirement =>
    	{
    		policyIsAdminRequirement.Requirements.Add(new IsAdminRequirement());
    	});
    });
    

    The policies can then be used on a controller, for example to require that the IsAdminRequirementPolicy is fulfilled.

    using Microsoft.AspNetCore.Authorization;
    using Microsoft.AspNetCore.Mvc;
    
    namespace MvcHybridClient.Controllers
    {
        [Authorize(Policy = "IsAdminRequirementPolicy")]
        public class AdminController : Controller
        {
            public IActionResult Index()
            {
                return View();
            }
        }
    }
    

    Or the IAppAuthorizationService can be used directly if you wish to mix authorization within a controller.

    private IAppAuthorizationService _appAuthorizationService;
    
    public HomeController(IAppAuthorizationService appAuthorizationService)
    {
    	_appAuthorizationService = appAuthorizationService;
    }
    
    public IActionResult Index()
    {
    	// Windows or local => claim http://schemas.microsoft.com/identity/claims/identityprovider
    	var claimIdentityprovider = 
    	  User.Claims.FirstOrDefault(t => 
    	    t.Type == "http://schemas.microsoft.com/identity/claims/identityprovider");
    
    	if (claimIdentityprovider != null && 
    	  _appAuthorizationService.IsAdmin(
    	     User.Identity.Name, 
    		 claimIdentityprovider.Value)
    	)
    	{
    		// yes, this is an admin
    		Console.WriteLine("This is an admin, we can do some specific admin logic!");
    	}
    
    	return View();
    }
    

    If an admin user from Windows logged in, the admin view can be accessed.

    Or the local guest user only sees the home view.


    Notes:

    This is a good way of separating the authorization logic from the business application in your software. Some software certification processes require that the application's authorization and authentication are audited before each release, or for each new deployment if anything changed.
    By separating the logic, you can deploy and update the business application without a full security audit. The authorization logic could also be deployed as a separate process if required.

    Links:

    https://docs.microsoft.com/en-us/aspnet/core/security/authorization/views?view=aspnetcore-2.1&tabs=aspnetcore2x

    https://docs.microsoft.com/en-us/aspnet/core/security/authentication/?view=aspnetcore-2.1

    https://mva.microsoft.com/en-US/training-courses/introduction-to-identityserver-for-aspnet-core-17945

    https://stackoverflow.com/questions/34951713/aspnet5-windows-authentication-get-group-name-from-claims/34955119

    https://github.com/IdentityServer/IdentityServer4.Templates

    https://docs.microsoft.com/en-us/iis/configuration/system.webserver/security/authentication/windowsauthentication/


    Anuraj Parameswaran: How to reuse HTML snippets inside a Razor view in ASP.NET Core

    This post is a small tip about reusing HTML snippets inside a Razor view in ASP.NET Core. In earlier versions of ASP.NET MVC this could be achieved with the help of a helper - a helper is a reusable component that includes code and markup to perform a task that might be tedious or complex. There is no equivalent implementation available in ASP.NET Core MVC, so in this post I explain how we can achieve similar functionality in ASP.NET Core. Unlike the ASP.NET MVC helpers, with this implementation you can't reuse the snippet across multiple pages, but it is very helpful if you want to do some complex logic in a single view.
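
    One way to get a @helper-like, view-local snippet is a templated Razor delegate; this sketch is only illustrative and may not match the exact technique used in the post:

    @{
        // A reusable markup template scoped to this view; @item is the delegate's parameter
        Func<dynamic, object> price = @<span class="price">@item.ToString("C")</span>;
    }

    <p>Standard plan: @price(9.99m)</p>
    <p>Premium plan: @price(19.99m)</p>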


    Andrew Lock: Using an IActionFilter to read action method parameter values in ASP.NET Core MVC

    Using an IActionFilter to read action method parameter values in ASP.NET Core MVC

    In this post I show how you can use an IActionFilter in ASP.NET Core MVC to read the method parameters for an action method before it executes. I'll show two different approaches to solve the problem, depending on your requirements.

    In the first approach, you know that the parameter you're interested in (a string parameter called returnUrl for this post) is always passed as a top level argument to the action, e.g.

    public class AccountController  
    {
        public IActionResult Login(string returnUrl)
        {
            return View();
        }
    }
    

    In the second approach, you know that the returnUrl parameter will be in the request, but you don't know that it will be passed as a top-level parameter to a method. For example:

    public class AccountController  
    {
        public IActionResult Login(string returnUrl)
        {
            return View();
        }
    
        public IActionResult Login(LoginInputModel model)
        {
            var returnUrl = model.ReturnUrl;
            return View();
        }
    }
    

    The action filters I describe in this post can be used for lots of different scenarios. To give a concrete example, I'll describe the original use case that made me investigate the options. If you're just interested in the implementation, feel free to jump ahead.

    Background: why would you want to do this?

    I was recently working on an IdentityServer 4 application, in which we wanted to display a slightly different view depending on which tenant a user was logging in to. OpenID Connect allows you to pass additional information as part of an authentication request as acr_values in the querystring. One of the common acr_values is tenant - it's so common that IdentityServer provides specific methods for pulling the tenant from the request URL.
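
    For illustration only (the client id, redirect URI, and tenant name are made up), the authorize request from the client carries the tenant in the acr_values parameter, along the lines of:

    /connect/authorize
        ?client_id=mvc.client
        &response_type=code%20id_token
        &scope=openid%20profile
        &redirect_uri=https%3A%2F%2Flocalhost%3A5002%2Fsignin-oidc
        &acr_values=tenant%3Aacme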

    When an unauthenticated user attempts to use a client application that relies on IdentityServer for authentication, the client app calls the Authorize endpoint, which is part of the IdentityServer middleware. As the user is not yet authenticated, they are redirected to the login page for the application, with the returnUrl parameter pointing back to the middleware authorize endpoint:

    [Image: the client app redirecting the unauthenticated user to the login page, with returnUrl pointing back to the authorize endpoint]

    After the user has logged in, they'll be redirected to the IdentityServer Authorize endpoint, which will return an access/id token back to the original client.

    In my scenario, I needed to determine the tenant that the original client provided in the request to the Authorize endpoint. That information is available in the returnUrl parameter passed to the login page. You can use the IdentityServer Interaction Service (IIdentityServerInteractionService) to decode the returnUrl parameter and extract the tenant with code similar to the following:

    public class AccountController  
    {
        private readonly IIdentityServerInteractionService _service;
        public AccountController(IIdentityServerInteractionService  service)
        {
            _service = service;
        }
    
        public async Task<IActionResult> Login(string returnUrl)
        {
            var context = await _service.GetAuthorizationContextAsync(returnUrl);
            ViewData["Tenant"] = context?.Tenant;
            return View();
        }
    }
    

    You could then use the ViewData in a Razor view to customise the display. For example, in the following _Layout.cshtml, the Tenant name is added to the page as a class on the <body> tag.

    @{
        var tenant = ViewData["Tenant"] as string;
        var tenantClass = "tenant-" + (string.IsNullOrEmpty(tenant) ? "unknown" : tenant);
    }
    <!DOCTYPE html>  
    <html>  
      <head></head>
      <body class="@tenantClass">
        @RenderBody()
      </body>
    </html>  
    

    This works fine, but unfortunately it means you need to duplicate the code to extract the tenant in every action method that has a returnUrl - for example the GET and POST version of the login methods, all the 2FA action methods, the external login methods etc.

    var context = await _service.GetAuthorizationContextAsync(returnUrl);  
    ViewData["Tenant"] = context?.Tenant;  
    

    Whenever you have a lot of duplication in your action methods, it's worth thinking whether you can extract that work into a filter (or alternatively, push it down into a command handler using a mediator).

    Now that we have the background, let's look at creating an IActionFilter to handle this for us.

    Creating an IActionFilter that reads action method parameters

    One of the good things about using an IActionFilter (as opposed to some other MVC Filter) is that it executes after model binding, but before the action method has been executed. That gives you a ton of context to work with.

    The IActionFilter below reads an action method's parameters, looks for one called returnUrl and sets it as an item in ViewData. There's a bunch of assumptions in this code, so I'll walk through it below.

    public class SetViewDataFilter : IActionFilter  
    {
        public void OnActionExecuting(ActionExecutingContext context)
        {
            if (context.ActionArguments.TryGetValue("returnUrl", out object value))
            {
                // NOTE: this assumes all your controllers derive from Controller.
                // If they don't, you'll need to set the value in OnActionExecuted instead
                // or use an IAsyncActionFilter
                if (context.Controller is Controller controller)
                {
                    controller.ViewData["ReturnUrl"] = value.ToString();
                }
            }
        }
    
        public void OnActionExecuted(ActionExecutedContext context) { }
    }
    

    The ActionExecutingContext object contains details about the action method that's about to be executed, model binding details, the ModelState - just about anything you could want! In this filter, I'm calling ActionArguments and looking for a parameter named returnUrl. This is a case-insensitive lookup, so any method parameters called returnUrl, returnURL, or RETURNURL would all be a match. If the action method has a match, we extract the value (as an object) into the value variable.

    Note that we are getting the value after it's been model bound to the action method's parameter. We didn't need to inspect the querystring, form data, or route values; however the MVC framework obtained it, we get the final bound value.

    We've extracted the value of the returnUrl parameter, but now we need to store it somewhere. ASP.NET Core doesn't have any base-class requirements for your MVC controllers, so unfortunately you can't easily get a reference to the ViewData collection. Having said that, if all your controllers derive from the Controller base class, then you could cast to the type and access ViewData as I have in this simple example. This may work for you, it depends on the conventions you follow, but if not, I show an alternative later.

    You can register your action filter as a global filter when you call AddMvc in Startup.ConfigureServices. Be sure to also register the filter as a service with the DI container:

    public void ConfigureServices(IServiceCollection services)  
    {
        services.AddTransient<SetViewDataFilter>();
        services.AddMvc(options =>
        {
            options.Filters.AddService<SetViewDataFilter>();
        });
    }
    

    In this example, I chose to not make the filter an attribute. If you want to use SetViewDataFilter to decorate specific action methods, you should derive from ActionFilterAttribute instead.
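
    For reference, an attribute-based version might look something like the sketch below (with the same caveat about controllers deriving from Controller); note that if the filter needed constructor-injected services, you'd typically apply it via [ServiceFilter] or [TypeFilter] instead:

    using Microsoft.AspNetCore.Mvc;
    using Microsoft.AspNetCore.Mvc.Filters;

    public class SetViewDataAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext context)
        {
            // Same lookup as SetViewDataFilter, but usable as [SetViewData] on specific actions
            if (context.ActionArguments.TryGetValue("returnUrl", out var value)
                && context.Controller is Controller controller)
            {
                controller.ViewData["ReturnUrl"] = value?.ToString();
            }
        }
    }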

    In this example, SetViewDataFilter implements the synchronous version of IActionFilter, so unfortunately it's not possible to use IdentityServer's interaction service to obtain the Tenant from the returnUrl (as it requires an async call). We can get round that by implementing IAsyncActionFilter instead.

    Converting to an asynchronous filter with IAsyncActionFilter

    If you need to make async calls in your action filters, you'll need to implement the asynchronous interface, IAsyncActionFilter. Conceptually, this combines the two action filter methods (OnActionExecuting() and OnActionExecuted()) into a single OnActionExecutionAsync().

    When your filter executes, you're provided the ActionExecutingContext as before, but also an ActionExecutionDelegate delegate, which represents the rest of the MVC filter pipeline. This lets you control exactly when the rest of the pipeline executes, as well as allowing you to make async calls.

    Let's rewrite the action filter, and extend it to actually look up the tenant with IdentityServer:

    public class SetViewDataFilter : IAsyncActionFilter  
    {
        readonly IIdentityServerInteractionService _service;
        public SetViewDataFilter(IIdentityServerInteractionService service)
        {
            _service = service;
        }
    
        public async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next)
        {
            var tenant = await GetTenant(context);
    
            // Execute the rest of the MVC filter pipeline
            var resultContext = await next();
    
            if (resultContext.Result is ViewResult view)
            {
                view.ViewData["Tenant"] = tenant;
            }
        }
    
        async Task<string> GetTenant(ActionExecutingContext context)
        {
        if (context.ActionArguments.TryGetValue("returnUrl", out object value)
                && value is string returnUrl)
            {
                var authContext = await _service.GetAuthorizationContextAsync(returnUrl);
                return authContext?.Tenant;
            }
    
            // no string parameter called returnUrl
            return null;
        }
    }
    

    I've moved the code to extract the returnUrl parameter from the action context into its own method, in which we also use the IIdentityServerInteractionService to check that the returnUrl is valid, and to fetch the provided tenant (if any).

    I've also used a slightly different construct to pass the value in the ViewData. Instead of putting requirements on the base class of the controller, I'm checking that the result of the action method was a ViewResult, and setting the ViewData that way. This seems like a better option - if we're not returning a ViewResult then ViewData is a bit pointless anyway!

    This action filter is very close to what I used to meet my requirements, but it makes one glaring assumption: that action methods always have a string parameter called returnUrl. Unfortunately, that may not be the case, for example:

    public class AccountController  
    {
        public IActionResult Login(LoginInputModel model)
        {
            var returnUrl = model.ReturnUrl;
            return View();
        }
    }
    

    Even though the LoginInputModel has a ReturnUrl parameter that would happily bind to a returnUrl parameter in the querystring, our action filter will fail to retrieve it. That's because we're looking specifically at the action arguments for a parameter called returnUrl, but we only have model. We're going to need a different approach to satisfy both action methods.

    Using the ModelState to build an action filter

    It took me a little while to think of a solution to this issue. I toyed with the idea of introducing an interface IReturnUrl, and ensuring all the binding models implemented it, but that felt very messy to me, and didn't feel like it should be necessary. Alternatively, I could have looked for a parameter called model and used reflection to check for a ReturnUrl property. That didn't feel right either.

    I knew the model binder would treat string returnUrl and LoginInputModel.ReturnUrl the same way: they would both be bound correctly if I passed a querystring parameter of ?returnUrl=/the/value. I just needed a way of hooking into the model binding directly, instead of working with the final method parameters.

    The answer was to use context.ModelState. ModelState contains a list of all the values that MVC attempted to bind to the request. You typically use it at the top of an MVC action to check that model binding and validation was successful using ModelState.IsValid, but it's also perfect for my use case.

    Based on the async version of our attribute you saw previously, I can update the GetTenant method to retrieve values from the ModelState instead of the action arguments:

    async Task<string> GetTenantFromAuthContext(ActionExecutingContext context)  
    {
        if (context.ModelState.TryGetValue("returnUrl", out var modelState)
            && modelState.RawValue is string returnUrl
            && !string.IsNullOrEmpty(returnUrl))
        {
            var authContext = await _interaction.GetAuthorizationContextAsync(returnUrl);
            return authContext?.Tenant;
        }
    
        // returnUrl wasn't in the request
        return null;
    }
    

    And that's it! With this quick change, I can retrieve the tenant both for action methods that have a string returnUrl parameter, and those that have a model with a ReturnUrl property.

    Summary

    In this post I showed how you can create an action filter to read the values of an action method before it executes. I then showed how to create an asynchronous version of an action filter using IAsyncActionFilter, and how to access the ViewData after an action method has executed. Finally, I showed how you can use the ModelState collection to access all model-bound values, instead of only the top-level parameters passed to the action method.


    Damien Bowden: Supporting both Local and Windows Authentication in ASP.NET Core MVC using IdentityServer4

    This article shows how to set up an ASP.NET Core MVC application to support both users who log in with a local, solution-specific account and users who log in with Windows authentication. The identity created from the Windows authentication could then be allowed to do different tasks, for example administration, while a user from the local authentication could be used for guest accounts, etc. To do this, IdentityServer4 is used to handle the authentication. The ASP.NET Core MVC application uses the OpenID Connect Hybrid Flow.

    Code: https://github.com/damienbod/AspNetCoreWindowsAuth

    Posts in this series:

    Setting up the STS using IdentityServer4

    The STS is setup using the IdentityServer4 dotnet templates. Once installed, the is4aspid template was used to create the application from the command line.

    Windows authentication is activated in the launchSettings.json. To set up Windows authentication for the deployment, refer to the Microsoft Docs.

    {
      "iisSettings": {
        "windowsAuthentication": true,
        "anonymousAuthentication": true,
        "iisExpress": {
          "applicationUrl": "https://localhost:44364/",
          "sslPort": 44364
        }
      },
    

    The OpenID Connect Hybrid Flow was then configured for the client application.

    new Client
    {
    	ClientId = "hybridclient",
    	ClientName = "MVC Client",
    
    	AllowedGrantTypes = GrantTypes.HybridAndClientCredentials,
    	ClientSecrets = { new Secret("hybrid_flow_secret".Sha256()) },
    
    	RedirectUris = { "https://localhost:44381/signin-oidc" },
    	FrontChannelLogoutUri = "https://localhost:44381/signout-oidc",
    	PostLogoutRedirectUris = { "https://localhost:44381/signout-callback-oidc" },
    
    	AllowOfflineAccess = true,
    	AllowedScopes = { "openid", "profile", "offline_access",  "scope_used_for_hybrid_flow" }
    }
    

    ASP.NET Core MVC Hybrid Client

    The ASP.NET Core MVC application is configured to authenticate using the STS server, and to save the tokens in a cookie. The AddOpenIdConnect method configures the OIDC Hybrid client, which must match the settings in the IdentityServer4 application.

    The TokenValidationParameters must be used to set the NameClaimType property, otherwise the User.Identity.Name property will be null. This value is returned in the ‘name’ claim, which is not the default.

    services.AddAuthentication(options =>
    {
    	options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    	options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    })
    .AddCookie()
    .AddOpenIdConnect(options =>
    {
    	options.SignInScheme = "Cookies";
    	options.Authority = stsServer;
    	options.RequireHttpsMetadata = true;
    	options.ClientId = "hybridclient";
    	options.ClientSecret = "hybrid_flow_secret";
    	options.ResponseType = "code id_token";
    	options.GetClaimsFromUserInfoEndpoint = true;
    	options.Scope.Add("scope_used_for_hybrid_flow");
    	options.Scope.Add("profile");
    	options.Scope.Add("offline_access");
    	options.SaveTokens = true;
    	// Set the correct name claim type
    	options.TokenValidationParameters = new TokenValidationParameters
    	{
    		NameClaimType = "name"
    	};
    });
    

    All controllers can then be secured using the Authorize attribute. The anti-forgery cookie should also be used, because the application uses cookies to store the tokens.

    [Authorize]
    public class HomeController : Controller
    {
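
    One way to enforce the anti-forgery check globally, rather than attribute-by-attribute, is to register the auto-validation filter when adding MVC (a sketch - the sample repository may wire this up differently):

    services.AddMvc(options =>
    {
        // Validates the anti-forgery token on all unsafe (POST/PUT/DELETE) requests
        options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute());
    });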
    

    Displaying the login type in the ASP.NET Core Client

    The application then displays the authentication type in the home view. To do this, a requireWindowsProviderPolicy policy is defined, which requires that the identityprovider claim has the value Windows. The policy is added using the AddAuthorization options.

    var requireWindowsProviderPolicy = new AuthorizationPolicyBuilder()
     .RequireClaim("http://schemas.microsoft.com/identity/claims/identityprovider", "Windows")
     .Build();
    
    services.AddAuthorization(options =>
    {
    	options.AddPolicy(
    	  "RequireWindowsProviderPolicy", 
    	  requireWindowsProviderPolicy
    	);
    });
    

    The policy can then be used in the cshtml view.

    @using Microsoft.AspNetCore.Authorization
    @inject IAuthorizationService AuthorizationService
    @{
        ViewData["Title"] = "Home Page";
    }
    
    <br />
    
    @if ((await AuthorizationService.AuthorizeAsync(User, "RequireWindowsProviderPolicy")).Succeeded)
    {
        <p>Hi Admin, you logged in with an internal Windows account</p>
    }
    else
    {
        <p>Hi local user</p>
    
    }
    

    Both applications can then be started. The client application is redirected to the STS server and the user can log in with either Windows authentication or a local account.

    The text in the client application is displayed depending on the Identity returned.

    Identity created for the Windows Authentication:

    Local Identity:

    Next Steps

The application now works for Windows authentication, or a local account authentication. The authorization now needs to be set up, so that the different types have different claims. The identities returned from the Windows authentication will have different claims from the identities returned from the local logon, which will be used for guest accounts.

    Links:

    https://docs.microsoft.com/en-us/aspnet/core/security/authorization/views?view=aspnetcore-2.1&tabs=aspnetcore2x

    https://docs.microsoft.com/en-us/aspnet/core/security/authentication/?view=aspnetcore-2.1

    https://mva.microsoft.com/en-US/training-courses/introduction-to-identityserver-for-aspnet-core-17945

    https://stackoverflow.com/questions/34951713/aspnet5-windows-authentication-get-group-name-from-claims/34955119

    https://github.com/IdentityServer/IdentityServer4.Templates

    https://docs.microsoft.com/en-us/iis/configuration/system.webserver/security/authentication/windowsauthentication/


    Andrew Lock: Implementing custom token providers for passwordless authentication in ASP.NET Core Identity

    Implementing custom token providers for passwordless authentication in ASP.NET Core Identity

This post was inspired by Scott Brady's recent post on implementing "passwordless authentication" using ASP.NET Core Identity. In this post I show how to implement his "optimisation" suggestions to reduce the lifetime of "magic link" tokens.

I start by providing some background on the use case, but I strongly suggest reading Scott's post first if you haven't already, as mine builds strongly on his. I'll then show three different ways you can reduce the lifetime of the tokens used in the magic links.

    I'll start with the scenario: passwordless authentication.

    Passwordless authentication using ASP.NET Core Identity

Scott's post describes how to recreate a login workflow similar to that of Slack's mobile app or Medium. Instead of providing a password, you enter your email address and they send you a magic link.

Clicking the link automatically logs you into the app. In his post, Scott shows how you can recreate the "magic link" login workflow using ASP.NET Core Identity. In this post, I want to address the very final section of his post, titled Optimisations: Existing Token Lifetime.

Scott points out that the implementation he provided uses the default token provider, the DataProtectorTokenProvider, to generate tokens. This provider generates large, long-lived tokens, something like the following:

    CfDJ8GbuL4IlniBKrsiKWFEX/Ne7v/fPz9VKnIryTPWIpNVsWE5hgu6NSnpKZiHTGZsScBYCBDKx/  
    oswum28dUis3rVwQsuJd4qvQweyvg6vxTImtXSSBWC45sP1cQthzXodrIza8MVrgnJSVzFYOJvw/V  
    ZBKQl80hsUpgZG0kqpfGeeYSoCQIVhm4LdDeVA7vJ+Fn7rci3hZsdfeZydUExnX88xIOJ0KYW6UW+  
    mZiaAG+Vd4lR+Dwhfm/mv4cZZEJSoEw==  
    

    By default, these tokens last for 24 hours. For a passwordless authentication workflow, that's quite a lot longer than we'd like. Medium uses a 15 minute expiry for example.

    Scott describes several options you could use to solve this:

    • Change the default lifetime for all tokens that use the default token provider
    • Use a different token provider, for example one of the TOTP-based providers
    • Create a custom data-protection base token provider with a different token lifetime

    All three of these approaches work, so I'll discuss each of them in turn.

    Changing the default token lifetime

    When you generate a token in ASP.NET Core Identity, by default you will use the DataProtectorTokenProvider. We'll take a closer look at this class shortly, but for now it's sufficient to know it's used by workflows such as password reset (when you click the "forgot your password?" link) and for email confirmation.

    The DataProtectorTokenProvider depends on a DataProtectionTokenProviderOptions object which has a TokenLifespan property:

    public class DataProtectionTokenProviderOptions  
    {
        public string Name { get; set; } = "DataProtectorTokenProvider";
        public TimeSpan TokenLifespan { get; set; } = TimeSpan.FromDays(1);
    }
    

    This property defines how long tokens generated by the provider are valid for. You can change this value using the standard ASP.NET Core Options framework inside your Startup.ConfigureServices method:

    public class Startup  
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.Configure<DataProtectionTokenProviderOptions>(
                x => x.TokenLifespan = TimeSpan.FromMinutes(15));
    
            // other services configuration
        }
        public void Configure() { /* pipeline config */ }
    }
    

    In this example, I've configured the token lifespan to be 15 minutes using a lambda, but you could also configure it by binding to IConfiguration etc.

The downside to this approach is that you've now reduced the token lifetime for all workflows. 15 minutes might be fine for password reset and passwordless login, but it's potentially too short for email confirmation, so you might run into issues with lots of rejected tokens if you choose to go this route.

    Using a different provider

    As well as the default DataProtectorTokenProvider, ASP.NET Core Identity uses a variety of TOTP-based providers for generating short multi-factor authentication codes. For example, it includes providers for sending codes via email or via SMS. These providers both use the base TotpSecurityStampBasedTokenProvider to generate their tokens. TOTP codes are typically very short-lived, so seem like they would be a good fit for the passwordless login scenario.

Given we're emailing the user a short-lived token for signing in, the EmailTokenProvider might seem like a good choice for our passwordless login. But the EmailTokenProvider is designed for providing 2FA tokens, and you probably shouldn't reuse providers for multiple purposes. Instead, you can create your own custom TOTP provider based on the built-in types, and use that to generate tokens.

    Creating a custom TOTP token provider for passwordless login

    Creating your own token provider sounds like a scary (and silly) thing to do, but thankfully all of the hard work is already available in the ASP.NET Core Identity libraries. All you need to do is derive from the abstract TotpSecurityStampBasedTokenProvider<> base class, and override a couple of simple methods:

    public class PasswordlessLoginTotpTokenProvider<TUser> : TotpSecurityStampBasedTokenProvider<TUser>  
        where TUser : class
    {
        public override Task<bool> CanGenerateTwoFactorTokenAsync(UserManager<TUser> manager, TUser user)
        {
            return Task.FromResult(false);
        }
    
        public override async Task<string> GetUserModifierAsync(string purpose, UserManager<TUser> manager, TUser user)
        {
            var email = await manager.GetEmailAsync(user);
            return "PasswordlessLogin:" + purpose + ":" + email;
        }
    }
    

    I've set CanGenerateTwoFactorTokenAsync() to always return false, so that the ASP.NET Core Identity system doesn't try to use the PasswordlessLoginTotpTokenProvider to generate 2FA codes. Unlike the SMS or Authenticator providers, we only want to use this provider for generating tokens as part of our passwordless login workflow.

    The GetUserModifierAsync() method should return a string consisting of

    ... a constant, provider and user unique modifier used for entropy in generated tokens from user information.

    I've used the user's email as the modifier in this case, but you could also use their ID for example.
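
If you'd rather use the user's ID, the override might look something like the following. This is just a sketch of the alternative, not what Scott's sample uses:

public override async Task<string> GetUserModifierAsync(string purpose, UserManager<TUser> manager, TUser user)
{
    // Use the user's ID rather than their email address as the per-user entropy
    var userId = await manager.GetUserIdAsync(user);
    return "PasswordlessLogin:" + purpose + ":" + userId;
}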

    You still need to register the provider with ASP.NET Core Identity. In traditional ASP.NET Core fashion, we can create an extension method to do this (mirroring the approach taken in the framework libraries):

    public static class CustomIdentityBuilderExtensions  
    {
        public static IdentityBuilder AddPasswordlessLoginTotpTokenProvider(this IdentityBuilder builder)
        {
            var userType = builder.UserType;
            var totpProvider = typeof(PasswordlessLoginTotpTokenProvider<>).MakeGenericType(userType);
            return builder.AddTokenProvider("PasswordlessLoginTotpProvider", totpProvider);
        }
    }
    

    and then we can add our provider as part of the Identity setup in Startup:

    public class Startup  
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddIdentity<IdentityUser, IdentityRole>()
                .AddEntityFrameworkStores<IdentityDbContext>() 
                .AddDefaultTokenProviders()
                .AddPasswordlessLoginTotpTokenProvider(); // Add the custom token provider
        }
    }
    

    To use the token provider in your workflow, you need to provide the key "PasswordlessLoginTotpProvider" (that we used when registering the provider) to the UserManager.GenerateUserTokenAsync() call.

    var token = await userManager.GenerateUserTokenAsync(  
                    user, "PasswordlessLoginTotpProvider", "passwordless-auth");
    

    If you compare that line to Scott's post, you'll see that we're passing "PasswordlessLoginTotpProvider" as the provider name instead of "Default".

    Similarly, you'll need to pass the new provider key in the call to VerifyUserTokenAsync:

    var isValid = await userManager.VerifyUserTokenAsync(  
                      user, "PasswordlessLoginTotpProvider", "passwordless-auth", token);
    

If you're following along with Scott's post, you will now be using tokens with a much shorter lifetime than the 1 day default!

    Creating a data-protection based token provider with a different token lifetime

    TOTP tokens are good for tokens with very short lifetimes (nominally 30 seconds), but if you want your link to be valid for 15 minutes, then you'll need to use a different provider. The default DataProtectorTokenProvider uses the ASP.NET Core Data Protection system to generate tokens, so they can be much more long lived.

    If you want to use the DataProtectorTokenProvider for your own tokens, and you don't want to change the default token lifetime for all other uses (email confirmation etc), you'll need to create a custom token provider again, this time based on DataProtectorTokenProvider.

    Given that all you're trying to do here is change the passwordless login token lifetime, your implementation can be very simple. First, create a custom Options object, that derives from DataProtectionTokenProviderOptions, and overrides the default values:

    public class PasswordlessLoginTokenProviderOptions : DataProtectionTokenProviderOptions  
    {
        public PasswordlessLoginTokenProviderOptions()
        {
            // update the defaults
            Name = "PasswordlessLoginTokenProvider";
            TokenLifespan = TimeSpan.FromMinutes(15);
        }
    }
    

    Next, create a custom token provider, that derives from DataProtectorTokenProvider, and takes your new Options object as a parameter:

    public class PasswordlessLoginTokenProvider<TUser> : DataProtectorTokenProvider<TUser>  
    where TUser: class  
    {
        public PasswordlessLoginTokenProvider(
            IDataProtectionProvider dataProtectionProvider,
            IOptions<PasswordlessLoginTokenProviderOptions> options) 
            : base(dataProtectionProvider, options)
        {
        }
    }
    

    As you can see, this class is very simple! Its token generating code is completely encapsulated in the base DataProtectorTokenProvider<>; all you're doing is ensuring the PasswordlessLoginTokenProviderOptions token lifetime is used instead of the default.

    You can again create an extension method to make it easier to register the provider with ASP.NET Core Identity:

    public static class CustomIdentityBuilderExtensions  
    {
        public static IdentityBuilder AddPasswordlessLoginTokenProvider(this IdentityBuilder builder)
        {
            var userType = builder.UserType;
            var provider= typeof(PasswordlessLoginTokenProvider<>).MakeGenericType(userType);
            return builder.AddTokenProvider("PasswordlessLoginProvider", provider);
        }
    }
    

    and add it to the IdentityBuilder instance:

    public class Startup  
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddIdentity<IdentityUser, IdentityRole>()
                .AddEntityFrameworkStores<IdentityDbContext>() 
                .AddDefaultTokenProviders()
                .AddPasswordlessLoginTokenProvider(); // Add the token provider
        }
    }
    

    Again, be sure you update the GenerateUserTokenAsync and VerifyUserTokenAsync calls in your authentication workflow to use the correct provider name ("PasswordlessLoginProvider" in this case). This will give you almost exactly the same tokens as in Scott's original example, but with the TokenLifespan reduced to 15 minutes.
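
For example, with the same "passwordless-auth" purpose string as before, the calls become:

var token = await userManager.GenerateUserTokenAsync(
                user, "PasswordlessLoginProvider", "passwordless-auth");

// ... later, when the user clicks the magic link ...
var isValid = await userManager.VerifyUserTokenAsync(
                  user, "PasswordlessLoginProvider", "passwordless-auth", token);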

    Summary

    You can implement passwordless authentication in ASP.NET Core Identity using the approach described in Scott Brady's post, but this will result in tokens and magic-links that are valid for a long time period: 1 day by default. In this post I showed three different ways you can reduce the token lifetime: you can change the default lifetime for all tokens; use very short-lived tokens by creating a TOTP provider; or use the ASP.NET Core Data Protection system to create medium-length lifetime tokens.


    Anuraj Parameswaran: Getting started with Blazor

    This post is about how to get started with Blazor. Blazor is an experimental .NET web framework using C#/Razor and HTML that runs in the browser with WebAssembly. Blazor enables full stack web development with the stability, consistency, and productivity of .NET. While this release is alpha quality and should not be used in production, the code for this release was written from the ground up with an eye towards building a production quality web UI framework.


    Anuraj Parameswaran: Dockerize an ASP.NET MVC 5 Angular application with Docker for Windows

A few days back I wrote a post about working with Angular 4 in ASP.NET MVC. I received multiple queries on deployment aspects - how to set up the development environment, how to deploy it in IIS, or in Azure, etc. In this post I explain how to deploy an ASP.NET MVC - Angular application to a Docker environment.


    Andrew Lock: Creating a .NET Core global CLI tool for squashing images with the TinyPNG API

    Creating a .NET Core global CLI tool for squashing images with the TinyPNG API

    In this post I describe a .NET Core CLI global tool I created that can be used to compress images using the TinyPNG developer API. I'll give some background on .NET Core CLI tools, describe the changes to tooling in .NET Core 2.1, and show some of the code required to build your own global tools. You can find the code for the tool in this post at https://github.com/andrewlock/dotnet-tinify.

    The code for my global tool was heavily based on the dotnet-serve tool by Nate McMaster. If you're interested in global tools, I strongly suggest reading his post on them, as it provides background, instructions, and an explanation of what's happening under the hood. He's also created a CLI template you can install to get started.

    .NET CLI tools prior to .NET Core 2.1

    The .NET CLI (which can be used for .NET Core and ASP.NET Core development) includes the concept of "tools" that you can install into your project. This includes things like the EF Core migration tool, the user-secrets tool, and the dotnet watch tool.

    Prior to .NET Core 2.1, you need to specifically install these tools in every project where you want to use them. Unfortunately, there's no tooling for doing this either in the CLI or in Visual Studio. Instead, you have to manually edit your .csproj file and add a DotNetCliToolReference:

    <ItemGroup>  
        <DotNetCliToolReference Include="Microsoft.DotNet.Watcher.Tools" Version="2.0.0" />
    </ItemGroup>  
    

    The tools themselves are distributed as NuGet packages, so when you run a dotnet restore on the project, it will restore the tool at the same time.

    Adding tool references like this to every project has both upsides and downsides. On the one hand, adding them to the project file means that everyone who clones your repository from source control will automatically have the correct tools installed. Unfortunately, having to manually add this line to every project means that I rarely bother installing non-essential-but-useful tools like dotnet watch anymore.

    .NET Core 2.1 global tools

    In .NET Core 2.1, a feature was introduced that allows you to globally install a .NET Core CLI tool. Rather than having to install the tool manually in every project, you install it once globally on your machine, and then you can run the tool from any project.

You can think of this as analogous to npm -g global packages

    The intention is to expose all the first-party CLI tools (such as dotnet-user-secrets and dotnet-watch) as global tools, so you don't have to remember to explicitly install them into your projects. Obviously this has the downside that all your team have to have the same tools (and potentially the same version of the tools) installed already.

    You can install a global tool using the .NET Core 2.1 SDK (preview 1). For example, to install Nate's dotnet serve tool, you just need to run:

    dotnet install tool --global dotnet-serve  
    

    You can then run dotnet serve from any folder.

    In the next section I'll describe how I built my own global tool dotnet-tinify that uses the TinyPNG api to compress images in a folder.

    Compressing images using the TinyPNG API

Images make up a huge proportion of the size of a website - a quick test on the Amazon home page shows that 94% of the page's size is due to images. That means it's important to make sure your images aren't using more data than they need to, as it will slow down your page load times.

    Page load times are important when you're running an ecommerce site, but they're important everywhere else too. I'm much more likely to abandon a blog if it takes 10 seconds to load the page, than if it pops in instantly.

Before I publish images on my blog, I always make sure they're as small as they can be. That means resizing them as necessary, using the correct format (.png for charts etc, .jpeg for photos), but also squashing them further.

Different programs will save images with different quality, different algorithms, and different metadata. You can often get smaller images without a loss in quality by just stripping the metadata and using a different compression algorithm. When I was using a Mac, I typically used ImageOptim; now I typically use the TinyPNG website.


    To improve my workflow, rather than manually uploading and downloading images, I decided a global tool would be perfect. I could install it once, and run dotnet tinify . to squash all the images in the current folder.

    Creating a .NET Core global tool

    Creating a .NET CLI global tool is easy - it's essentially just a console app with a few additions to the .csproj file. Create a .NET Core Console app, for example using dotnet new console, and update your .csproj to add the IsPackable and PackAsTool elements:

    <Project Sdk="Microsoft.NET.Sdk">
    
      <PropertyGroup>
        <OutputType>Exe</OutputType>
        <IsPackable>true</IsPackable>
        <PackAsTool>true</PackAsTool>
        <TargetFramework>netcoreapp2.1</TargetFramework>
      </PropertyGroup>
    
    </Project>
    

    It's as easy as that!

You can add NuGet packages to your project, reference other projects, anything you like; it's just a .NET Core console app! In the final section of this post I'll talk briefly about the dotnet-tinify tool I created.

    dotnet-tinify: a global tool for squashing images

    To be honest, creating the tool for dotnet-tinify really didn't take long. Most of the hard work had already been done for me, I just plugged the bits together.

TinyPNG provides a developer API you can use to access their service. It has an impressive array of client libraries to choose from (e.g. HTTP, Ruby, PHP, Node.js, Python, Java and .NET), and is even free to use for the first 500 compressions per month. To get started, head to https://tinypng.com/developers and sign up (no credit card required) to get an API key.


    Given there's already an official client library (and it's .NET Standard 1.3 too!) I decided to just use that in dotnet-tinify. Compressing an image is essentially a 4 step process:

    1. Set the API key on the static Tinify object:

    Tinify.Key = apiKey;  
    

    2. Validate the API key

    await Tinify.Validate();  
    

    3. Load a file

    var source = Tinify.FromFile(file);  
    

    4. Compress the file and save it to disk

    await source.ToFile(file);  
    

There's loads more you can do with the API: resizing images, loading and saving to buffers, saving directly to S3. For details, take a look at the documentation.
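
Putting those four steps together, a minimal helper might look something like the sketch below. It assumes the official TinifyAPI client package is referenced; the ImageSquasher class and SquashFileAsync method names are just illustrative, not part of dotnet-tinify itself:

using System.Threading.Tasks;
using TinifyAPI;

public static class ImageSquasher
{
    // Compress a single image in place using the TinyPNG API
    public static async Task SquashFileAsync(string apiKey, string file)
    {
        Tinify.Key = apiKey;                 // 1. Set the API key
        await Tinify.Validate();             // 2. Validate the API key
        var source = Tinify.FromFile(file);  // 3. Upload the file to TinyPNG
        await source.ToFile(file);           // 4. Save the compressed result, overwriting the original
    }
}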

    With the functionality aspect of the tool sorted, I needed a way to pass the API key and path to the files to compress to the tool. I chose to use Nate McMaster's CommandLineUtils fork, McMaster.Extensions.CommandLineUtils, which is one of many similar libraries you can use to handle command-line parsing and help message generation.

    You can choose to use either the builder API or an attribute API with the CommandLineUtils package, so you can choose whichever makes you happy. With a small amount of setup I was able to get easy command line parsing into strongly typed objects, along with friendly help messages on how to use the tool with the --help argument:

    > dotnet tinify --help
    Usage: dotnet tinify [arguments] [options]
    
    Arguments:  
      path  Path to the file or directory to squash
    
    Options:  
      -?|-h|--help            Show help information
      -a|--api-key <API_KEY>  Your TinyPNG API key
    
    You must provide your TinyPNG API key to use this tool  
    (see https://tinypng.com/developers for details). This
    can be provided either as an argument, or by setting the  
    TINYPNG_APIKEY environment variable. Only png, jpeg, and  
    jpg, extensions are supported  
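
If you go down the attribute route, the setup might look something like the sketch below. This mirrors the general CommandLineUtils attribute API rather than the exact dotnet-tinify implementation, so treat the property and method names as illustrative:

using System;
using McMaster.Extensions.CommandLineUtils;

public class Program
{
    public static int Main(string[] args)
        => CommandLineApplication.Execute<Program>(args);

    [Argument(0, Description = "Path to the file or directory to squash")]
    public string Path { get; set; }

    [Option("-a|--api-key", Description = "Your TinyPNG API key")]
    public string ApiKey { get; set; }

    private void OnExecute()
    {
        // Fall back to the environment variable if no API key argument was provided
        var apiKey = ApiKey ?? Environment.GetEnvironmentVariable("TINYPNG_APIKEY");
        Console.WriteLine($"Squashing '{Path}' using the provided API key");
    }
}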
    

    And that's it, the tool is finished. It's very basic at the moment (no tests 😱!), but currently that's all I need. I've pushed an early package to NuGet and the code is on GitHub so feel free to comment / send issues / send PRs.

    You can install the tool using

    dotnet install tool --global dotnet-tinify  
    

You need to set your TinyPNG API key in the TINYPNG_APIKEY environment variable for your machine (e.g. by executing setx TINYPNG_APIKEY abc123 in a command prompt), or you can pass the key as an argument to the dotnet tinify command (see below).

    Typical usage might be

    • dotnet tinify image.png - compress image.png in the current directory
    • dotnet tinify . - compress all the png and jpeg images in the current directory
    • dotnet tinify "C:\content" - compress all the png and jpeg images in the "C:\content" path
• dotnet tinify image.png -a abc123 - compress image.png, providing your API key as an argument

    So give it a try, and have a go at writing your own global tool, it's probably easier than you think!

    Summary

    In this post I described the upcoming .NET Core global tools, and how they differ from the existing .NET Core CLI tools. I then described how I created a .NET Core global tool to compress my images using the TinyPNG developer API. Creating a global tool is as easy as setting a couple of properties in your .csproj file, so I strongly suggest you give it a try. You can find the dotnet-tinify tool I created on NuGet or on GitHub. Thanks to Nate McMaster for (heavily) inspiring this post!


    Damien Bowden: Comparing the HTTPS Security Headers of Swiss banks

    This post compares the security HTTP Headers used by different banks in Switzerland. securityheaders.io is used to test each of the websites. The website of each bank as well as the e-banking login was tested. securityheaders.io views the headers like any browser.

    The tested security headers help protect against some of the possible attacks, especially during the protected session. I would have expected all the banks to reach at least a grade of A, but was surprised to find, even on the login pages, many websites are missing some of the basic ways of protecting the application.

    Credit Suisse provide the best protection for the e-banking login, and Raiffeisen have the best usage of the security headers on the website. Strange that the Raiffeisen webpage is better protected than the Raiffeisen e-banking login.

    Scott Helme explains each of the different headers here, and why you should use them:

    TEST RESULTS

    Best A+, Worst F

    e-banking

    1. Grade A Credit Suisse
    1. Grade A Basler Kantonalbank
    3. Grade B Post Finance
    3. Grade B Julius Bär
    3. Grade B WIR Bank
    3. Grade B DC Bank
    3. Grade B Berner Kantonalbank
    3. Grade B St. Galler Kantonalbank
    3. Grade B Thurgauer Kantonalbank
    3. Grade B J. Safra Sarasin
    11. Grade C Raiffeisen
    12. Grade D Zürcher Kantonalbank
    13. Grade D UBS
    14. Grade D Valiant

    web

    1. Grade A Raiffeisen
    2. Grade A Credit Suisse
    2. Grade A WIR Bank
    2. Grade A J. Safra Sarasin
    5. Grade A St. Galler Kantonalbank
    6. Grade B Post Finance
    6. Grade B Valiant
    8. Grade C Julius Bär
    9. Grade C Migros Bank
    10. Grade D UBS
    11. Grade D Zürcher Kantonalbank
    12. Grade D Berner Kantonalbank
    13. Grade F DC Bank
    14. Grade F Thurgauer Kantonalbank
    15. Grade F Basler Kantonalbank

    TEST RESULTS DETAILS

    UBS

    https://www.ubs.com

    This is one of the worst protected of all the bank e-banking logins tested. It is missing most of the security headers. The website is also missing most of the security headers.

    https://ebanking-ch.ubs.com

The headers returned from the e-banking login are even worse than the D rating suggests, as it is also missing the X-Frame-Options protection.

    cache-control →no-store, no-cache, must-revalidate, private
    connection →Keep-Alive
    content-encoding →gzip
    content-type →text/html;charset=UTF-8
    date →Tue, 27 Mar 2018 11:46:15 GMT
    expires →Thu, 1 Jan 1970 00:00:00 GMT
    keep-alive →timeout=5, max=10
    p3p →CP="OTI DSP CURa OUR LEG COM NAV INT"
    server →Apache
    strict-transport-security →max-age=31536000
    transfer-encoding →chunked
    

    No CSP is present here…

    Credit Suisse

    The Credit Suisse website and login are protected with most of the headers and have a good CSP. The no-referrer header is missing from the e-banking login and could be added.

    https://www.credit-suisse.com/ch/en.html

    CSP

    default-src 'self' 'unsafe-inline' 'unsafe-eval' data: *.credit-suisse.com 
    *.credit-suisse.cspta.ch *.doubleclick.net *.decibelinsight.net 
    *.mookie1.com *.demdex.net *.adnxs.com *.facebook.net *.google.com 
    *.google-analytics.com *.googletagmanager.com *.google.ch *.googleapis.com 
    *.youtube.com *.ytimg.com *.gstatic.com *.googlevideo.com *.twitter.com 
    *.twimg.com *.qq.com *.omtrdc.net *.everesttech.net *.facebook.com 
    *.adobedtm.com *.ads-twitter.com t.co *.licdn.com *.linkedin.com 
    *.credit-suisse.wesit.rowini.net *.zemanta.com *.inbenta.com 
    *.adobetag.com sc-static.net
    

The CORS header is present, but it allows all origins, which is a bit lax. CORS is not really a security feature, but I think it should still be more strict.

    https://direct.credit-suisse.com/dn/c/cls/auth?language=en

    CSP

    default-src dnmb: 'self' *.credit-suisse.com *.directnet.com *.nab.ch; 
    script-src dnmb: 'self' 'unsafe-inline' 'unsafe-eval' *.credit-suisse.com 
    *.directnet.com *.nab.ch ; style-src 'self' 'unsafe-inline' *.credit-suisse.com *.directnet.com *.nab.ch; img-src 'self' http://img.youtube.com data: 
    *.credit-suisse.com *.directnet.com *.nab.ch; connect-src 'self' wss: ; 
    font-src 'self' data:
    

    Raiffeisen

    The Raiffeisen website is the best protected of all the tested banks. The e-banking could be improved.

    https://www.raiffeisen.ch/rch/de.html

    CSP

    This is pretty good, but it allows unsafe-eval, probably due to the javascript lib used to implement the UI. This could be improved.

Content-Security-Policy: default-src 'self' ; script-src 'self' 'unsafe-inline' 
    'unsafe-eval' assets.adobedtm.com maps.googleapis.com login.raiffeisen.ch ;
     style-src 'self' 'unsafe-inline' fonts.googleapis.com ; img-src 'self' 
    statistics.raiffeisen.ch dmp.adform.net maps.googleapis.com maps.gstatic.com 
    csi.gstatic.com khms0.googleapis.com khms1.googleapis.com www.homegate.ch 
    dpm.demdex.net raiffeisen.demdex.net ; font-src 'self' fonts.googleapis.com 
    fonts.gstatic.com ; connect-src 'self' api.raiffeisen.ch statistics.raiffeisen.ch 
    www.homegate.ch prod1.solid.rolotec.ch dpm.demdex.net login.raiffeisen.ch ;
     media-src 'self' ruz.ch ; child-src * ; frame-src * ;
    

    https://ebanking.raiffeisen.ch/

    Zürcher Kantonalbank

    https://www.zkb.ch/

    The website is pretty bad. It has a mis-configuration in the X-Frame-Options. The e-banking login is missing most of the headers.

    https://onba.zkb.ch/page/logon/logon.page

    Post Finance

    Post Finance is missing the CSP header and the no-referrer header in both the website and the login. This could be improved.

    https://www.postfinance.ch/de/privat.html

    https://www.postfinance.ch/ap/ba/fp/html/e-finance/home?login

    Julius Bär

    Julius Bär is missing the CSP header and the no-referrer header for the e-banking login, and the X-Frame-Options is also missing from the website.

    https://www.juliusbaer.com/global/en/home/

    https://ebanking.juliusbaer.com/bjbLogin/login?lang=en

    Migros Bank

    The website is missing a lot of headers as well.

    https://www.migrosbank.ch/de/privatpersonen.html

Migros Bank provided no login link from the browser.

    WIR Bank

The WIR Bank has one of the best websites, and is only missing the no-referrer header. Its e-banking solution is missing both a CSP header and a referrer policy. Here the website is more secure than the e-banking, which is strange.

    https://www.wir.ch/

    CSP

    frame-ancestors 'self' https://www.jobs.ch;
    

    https://wwwsec.wir.ch/authen/login?lang=de

    DC Bank

    The DC Bank is missing all the security headers on the website. This could really be improved! The e-banking is better, but missing the CSP and the referrer policies.

    https://www.dcbank.ch/

    https://banking.dcbank.ch/login/login.jsf?bank=74&lang=de&path=layout/dcb

    Basler Kantonalbank

This is an interesting test. Basler Kantonalbank has no security headers on the website, and even an incorrect X-Frame-Options. The e-banking is good, but missing the no-referrer policy. So it has both the best and the worst of the banks tested.

    https://www.bkb.ch/en

    https://login.bkb.ch/auth/login

    CSP

    default-src https://*.bkb.ch https://*.mybkb.ch; 
    img-src data: https://*.bkb.ch https://*.mybkb.ch; 
    script-src 'unsafe-inline' 'unsafe-eval' 
    https://*.bkb.ch https://*.mybkb.ch; style-src 
    https://*.bkb.ch https://*.mybkb.ch 'unsafe-inline';
    

    Berner Kantonalbank

    https://www.bekb.ch/

The Berner Kantonalbank has implemented 2 security headers on the website, but is missing the HSTS header. The e-banking is missing 2 of the security headers, the no-referrer policy and the CSP.

    CSP

    frame-ancestors 'self'
    

    https://banking.bekb.ch/login/login.jsf?bank=5&lang=de&path=layout/bekb

    Valiant

Valiant has one of the better websites, but the worst e-banking concerning the security headers. Only the X-Frame-Options header is supported.

    https://www.valiant.ch/privatkunden

    https://wwwsec.valiant.ch/authen/login

    St. Galler Kantonalbank

The website is an A grade, but is missing 2 headers, the X-Frame-Options and the no-referrer header. The e-banking is less protected compared to the website, with a grade B. It is missing the CSP and the referrer policy.

    https://www.sgkb.ch/

    CSP

    default-src 'self' 'unsafe-inline' 'unsafe-eval' recruitingapp-1154.umantis.com 
    *.googleapis.com *.gstatic.com prod1.solid.rolotec.ch beta.idisign.ch 
    test.idisign.ch dis.swisscom.ch www.newhome.ch www.wuestpartner.com; 
    img-src * data: android-webview-video-poster:; font-src * data:
    

    https://www.onba.ch/login/login

    Thurgauer Kantonalbank

The Thurgauer website is missing all the security headers, not even HSTS is supported, and the e-banking is missing the CSP and the no-referrer headers.

    https://www.tkb.ch/

    https://banking.tkb.ch/login/login

    J. Safra Sarasin

The J. Safra Sarasin website uses most security headers; it is only missing the no-referrer header. The e-banking website is missing the CSP and the referrer headers.

    https://www.jsafrasarasin.ch

    CSP

    frame-ancestors 'self'
    

    https://ebanking-ch.jsafrasarasin.com/ebankingLogin/login

It would be nice if this part of the security could be improved for all of these websites.
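
For ASP.NET Core applications, some of the missing headers discussed above could be added with a small piece of middleware. The sketch below is only illustrative; the header values, and especially any CSP, need to be tailored to the individual application:

app.Use(async (context, next) =>
{
    var headers = context.Response.Headers;
    headers["X-Content-Type-Options"] = "nosniff";
    headers["X-Frame-Options"] = "DENY";
    headers["Referrer-Policy"] = "no-referrer";
    headers["Content-Security-Policy"] = "default-src 'self'";
    await next();
});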


    Andrew Lock: How to create a Helm chart repository using Amazon S3

    How to create a Helm chart repository using Amazon S3

    Helm is a package manager for Kubernetes. You can bundle Kubernetes resources together as charts that define all the necessary resources and dependencies of an application. You can then use the Helm CLI to install all the pods, services, and ingresses for an application in one simple command.

    Just like Docker or NuGet, there's a common public repository for Helm charts that the helm CLI uses by default. And just like Docker and NuGet, you can host your own Helm repository for your charts.

    In this post, I'll show how you can use an AWS S3 bucket to host a Helm chart repository, how to push custom charts to it, and how to install charts from the chart repository. I won't be going into Helm or Kubernetes in depth, I suggest you check the Helm quick start guide if they're new to you.

    If you're not using AWS, and you'd like to store your charts on Azure, Michal Cwienczek has a post on how to create a Helm chart repository using Blob Storage instead.

    Installing the prerequisites

Before you start working with Helm properly, you need to do some setup. The Helm S3 plugin you'll be using later requires that you have the AWS CLI installed and configured on your machine. You'll also need an S3 bucket to use as your repository.

    Installing the AWS CLI

    I'm using an Ubuntu 16.04 virtual machine for this post, so all the instructions assume you have the same setup.

    The suggested approach to install the AWS CLI is to use pip, the Python package index. This obviously requires Python, which you can confirm is installed using:

    $ python -V
    Python 2.7.12  
    

    According to the pip website:

    pip is already installed if you are using Python 2 >=2.7.9 or Python 3 >=3.4

    However, running which pip returned nothing for me, so I installed it anyway using

    $ sudo apt-get install python-pip
    

    Finally, we can install the AWS CLI using:

    $ pip install awscli
    

The last thing to do is to configure your environment to access your AWS account. Add the ~/.aws/config and ~/.aws/credentials files to your home directory with the appropriate access keys, as described in the docs.

    Creating the repository S3 bucket

You're going to need an S3 bucket to store your charts. You can create the bucket any way you like, either using the AWS CLI, or using the AWS Management Console. I used the Management Console to create a bucket called my-helm-charts.


    Whenever you create a new bucket, it's a good idea to think about who is able to access it, and what they're able to do. You can control this using IAM policies or S3 policies, whatever works for you. Just make sure you've looked into it!

    The policy below, for example, grants read and write access to the IAM user andrew.

    Once your repository is working correctly, you might want to update this so that only your CI/CD pipeline can push charts to your repository, but that any of your users can list and fetch charts. It may also be wise to remove the delete action completely.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowListObjects",
          "Effect": "Allow",
          "Principal": {
            "AWS": ["arn:aws:iam::111122223333:user/andrew"]
          },
          "Action": [
            "s3:ListBucket"
          ],
          "Resource": "arn:aws:s3:::my-helm-charts"
        },
        {
          "Sid": "AllowObjectsFetchAndCreate",
          "Effect": "Allow",
          "Principal": {
            "AWS": ["arn:aws:iam::111122223333:user/andrew"]
          },
          "Action": [
            "s3:DeleteObject",
            "s3:GetObject",
            "s3:PutObject"
          ],
          "Resource": "arn:aws:s3:::my-helm-charts/*"
        }
      ]
    }
    

    Installing the Helm S3 plugin

You're almost set now. If you haven't already, install Helm using the instructions in the quick start guide.

    The final prerequisite is the Helm S3 plugin. This acts as an intermediary between Helm and your S3 bucket. It's not the only way to create a custom repository, but it simplifies a lot of things.

    You can install the plugin from the GitHub repo by running:

    $ helm plugin install https://github.com/hypnoglow/helm-s3.git
    Downloading and installing helm-s3 v0.5.2 ...  
    Installed plugin: s3  
    

    This downloads the latest version of the plugin from GitHub, and registers it with Helm.

    Creating your Helm chart repository

    You're finally ready to start playing with charts properly!

    The first thing to do is to turn the my-helm-charts bucket into a valid chart repository. This requires adding an index.yaml to it. The Helm S3 plugin has a helper method to do that for you, which generates a valid index.yaml and uploads it to your S3 bucket:

$ helm s3 init s3://my-helm-charts/charts
    Initialized empty repository at s3://my-helm-charts/charts  
    

If you fetch the contents of the bucket now, you'll find an index.yaml file under the /charts key.


    Note, the /charts prefix is entirely optional. If you omit the prefix, the Helm chart repository will be in the root of the bucket. I just included it for demonstration purposes here.

    The contents of the index.yaml file is very basic at the moment:

    apiVersion: v1  
    entries: {}  
    generated: 2018-02-10T15:27:15.948188154-08:00  
    

    To work with the chart repository by name instead of needing the whole URL, you can add an alias. For example, to create a my-charts alias:

    $ helm repo add my-charts s3://my-helm-charts/charts
    "my-charts" has been added to your repositories
    

If you run helm repo list now, you'll see your repo listed (along with the standard stable and local repos):

    $ helm repo list
    NAME            URL  
    stable          https://kubernetes-charts.storage.googleapis.com  
    local           http://127.0.0.1:8879/charts  
    my-charts       s3://my-helm-charts/charts  
    

    You now have a functioning chart repository, but it doesn't have any charts yet! In the next section I'll show how to push charts to, and install charts from, your S3 repository.

    Uploading a chart to the repository

Before you can push a chart to the repository, you need to create one. If you already have one, you could use that, or you could copy one of the standard charts from the stable repository. For the sake of completeness, I'll create a basic chart, and use that for the rest of the post.

    Creating a simple test Helm chart

    I used the example from the Helm docs for this test, which creates one of the simplest templates, a ConfigMap, and adds it at the path test-chart/templates/configmap.yaml:

    $ helm create test-chart
    Creating test-chart  
    # Remove the initial cruft
    $ rm -rf test-chart/templates/*.*
    # Create a ConfigMap template at test-chart/templates/configmap.yaml
    $ cat >test-chart/templates/configmap.yaml <<EOL
    apiVersion: v1  
    kind: ConfigMap  
    metadata:  
      name: test-chart-configmap
    data:  
      myvalue: "Hello World"
    EOL  
    

    You can install this chart into your kubernetes cluster using:

    $ helm install ./test-chart
    NAME:   zeroed-armadillo  
    LAST DEPLOYED: Fri Feb  9 17:10:38 2018  
    NAMESPACE: default  
    STATUS: DEPLOYED
    
    RESOURCES:  
    ==> v1/ConfigMap
    NAME               DATA  AGE  
    test-chart-configmap  1     0s  
    

and remove it again completely using the release name presented when you installed it (zeroed-armadillo):

    # --purge removes the release from the "store" completely
    $ helm delete --purge zeroed-armadillo
    release "zeroed-armadillo" deleted  
    

Now you have a chart to work with, it's time to push it to your repository.

    Uploading the test chart to the chart repository

To push the test chart to your repository you must first package it. This takes all the files in your ./test-chart directory and bundles them into a single .tgz file:

    $ helm package ./test-chart
    Successfully packaged chart and saved it to: ~/test-chart-0.1.0.tgz  
    

    Once the file is packaged, you can push it to your repository using the S3 plugin, by specifying the packaged file name, and the my-charts alias you specified earlier.

    $ helm s3 push ./test-chart-0.1.0.tgz my-charts
    

    Note that without the plugin you would normally have to "manually" sync your local and remote repos, merging the remote repository with your locally added charts. The S3 plugin handles all that for you.

If you check your S3 bucket after pushing the chart, you'll see that the tgz file has been uploaded.


    That's it, you've pushed a chart to an S3 repository!

    Searching and installing from the repository

If you do a search for the test chart using helm search, you can see your chart listed:

    $ helm search test-chart
    NAME                    CHART VERSION   APP VERSION     DESCRIPTION  
    my-charts/test-chart    0.1.0           1.0             A Helm chart for Kubernetes  
    

    You can fetch and/or unpack the chart locally using helm fetch my-charts/test-chart or you can jump straight to installing it using:

    $ helm install my-charts/test-chart
    NAME:   rafting-crab  
    LAST DEPLOYED: Sat Feb 10 15:53:34 2018  
    NAMESPACE: default  
    STATUS: DEPLOYED
    
    RESOURCES:  
    ==> v1/ConfigMap
    NAME               DATA  AGE  
    mychart-configmap  1     0s  
    

    To remove the test chart from the repository, you provide the chart name and version you wish to delete:

    $ helm s3 delete test-chart --version 0.1.0 my-charts
    

    That's basically all there is to it! You now have a central repository on S3 for storing your charts. You can fetch, search, and install charts from your repository, just as you would any other.

    A warning - make sure you version your charts correctly

    Helm charts should be versioned using Semantic versioning, so if you make a change to a chart, you should be sure to bump the version before pushing it to your repository. You should treat the chart name + version as immutable.

    Unfortunately, there's currently nothing in the tooling to enforce this, and prevent you overwriting an existing chart with a chart with the same name and version number. There's an open issue to address this in the S3 plugin, but in the mean time, just be careful, and potentially enable versioning of files in S3 to catch any issues.
    As of version 0.6.0, the plugin will block overwriting a chart if it already exists.

    In a similar vein, you may want to disable the ability to delete charts from a repository. I feel like it falls under the same umbrella as immutability of charts in general - you don't want to break downstream charts that have taken a dependency on your chart.

    Summary

    In this post I showed how to create a Helm chart repository in S3 using the Helm S3 plugin. I showed how to prepare an S3 bucket as a Helm repository, and how to push a chart to it. Finally, I showed how to search and install charts from the S3 repository.


    Andrew Lock: Exploring the Microsoft.AspNetCore.App shared framework in ASP.NET Core 2.1 (preview 1)

    Exploring the Microsoft.AspNetCore.App shared framework in ASP.NET Core 2.1 (preview 1)

In ASP.NET Core 2.1 (currently in preview 1) Microsoft have changed the way the ASP.NET Core framework is deployed for .NET Core apps, by moving to a system of shared frameworks instead of using the runtime store.

    In this post, I look at some of the history and motivation for this change, the changes that you'll see when you install the ASP.NET Core 2.1 SDK or runtime on your machine, and what it all means for you as an ASP.NET Core developer.

    If you're not interested in the history side, feel free to skip ahead to the impact on you as an ASP.NET Core developer:

    The Microsoft.AspNetCore.All metapackage and the runtime store

In this section, I'll recap over some of the problems that the Microsoft.AspNetCore.All metapackage was introduced to solve, as well as some of the issues it introduces. This is entirely based on my own understanding of the situation (primarily gleaned from these GitHub issues), so do let me know in the comments if I've got anything wrong or misrepresented the situation!

    In the beginning, there were packages. So many packages.

With ASP.NET Core 1.0, Microsoft set out to create a highly modular, layered, framework. Instead of the monolithic .NET Framework that you had to install in its entirety in a central location, you could reference individual packages that each provide a small, discrete piece of functionality. Want to configure your app using JSON files? Add the Microsoft.Extensions.Configuration.Json package. Need environment variables? That's a different package (Microsoft.Extensions.Configuration.EnvironmentVariables).

    This approach has many benefits, for example:

    • You get a clear "layering" of dependencies
    • You can update packages independently of others
    • You only have to include the packages that you actually need, reducing the published size of your app.

    Unfortunately, these benefits diminished as the framework evolved.

Initially, all the framework packages started at version 1.0.0, and it was simply a case of adding or removing packages as necessary for the required functionality. But bug fixes arrived shortly after release, and individual packages evolved at different rates. Suddenly .csproj files were awash with different version numbers, 1.0.1, 1.0.3, 1.0.2. It was no longer easy to tell at a glance whether you were on the latest version of a package, and version management became a significant chore. The same was true when ASP.NET Core 1.1 was released - a brief consolidation was followed by diverging package versions.

    On top of that, the combinatorial problem of testing every version of a package with every other version, meant that there was only one "correct" combination of versions that Microsoft would support. For example, using the 1.1.0 version of the StaticFiles middleware with the 1.0.0 MVC middleware was easy to do, and would likely work without issue, but was not a configuration Microsoft could support.

    It's worth noting that the Microsoft.AspNetCore metapackage partially solved this issue, but it only included a limited number of packages, so you would often still be left with a degree of external consolidation required.

    Add to that the discoverability problem of finding the specific package that contains a given API, slow NuGet restore times due to the sheer number of packages, and a large published output size (as all packages are copied to the bin folder) and it was clear a different approach was required.

    Unifying package versions with a metapackage

    In ASP.NET Core 2.0, Microsoft introduced the Microsoft.AspNetCore.All metapackage and the .NET Core runtime store. These two pieces were designed to workaround many of the problems that we've touched on, without sacrificing the ability to have distinct package dependency layers and a well factored framework.

    I discussed this metapackage and the runtime store in a previous post, but I'll recap here for convenience.

The Microsoft.AspNetCore.All metapackage solves the issue of discoverability and inconsistent version numbers by including a reference to every package that is part of ASP.NET Core 2.0, as well as third-party packages referenced by ASP.NET Core. This includes both integral packages like Newtonsoft.Json, and packages like StackExchange.Redis that are used by somewhat-peripheral packages like Microsoft.Extensions.Caching.Redis.

    On the face of it, you might expect shipping a larger metapackage to cause everything to get even slower - there would be more packages to restore, and a huge number of packages in your app's published output.

However, .NET Core 2.0 includes a new feature called the runtime store. This essentially lets you pre-install packages on a machine, in a central location, so you don't have to include them in the publish output of your individual apps. When you install the .NET Core 2.0 runtime, all the packages required by the Microsoft.AspNetCore.All metapackage are installed globally (at C:\Program Files\dotnet\store on Windows).

    When you publish your app, the Microsoft.AspNetCore.All metapackage trims out all the dependencies that it knows will be in the runtime store, significantly reducing the number of dlls in your published app's folder.

The runtime store has some additional benefits. It can use "ngen-ed" libraries that are already optimised for the target machine, improving start up time. You can also use the store to "light-up" features at runtime such as Application Insights, but you can create your own manifests too.

    Unfortunately, there are a few downsides to the store...

    The ever-growing runtime stores

By design, if your app is built using the Microsoft.AspNetCore.All metapackage, and hence uses the runtime store output-trimming, you can only run your app on a machine that has the correct version of the runtime store installed (via the .NET Core runtime installer).

    For example, if you use the Microsoft.AspNetCore.All metapackage for version 2.0.1, you must have the runtime store for 2.0.1 installed, version 2.0.0 and 2.0.2 are no good. That means if you need to fix a critical bug in production, you would need to install the next version of the runtime store, and you would need to update, recompile, and republish all of your apps to use it. This generally leads to runtime stores growing, as you can't easily delete old versions.

    This problem is a particular issue if you're running a platform like Azure, so Microsoft are acutely aware of the issue. If you deploy your apps using Docker for example, this doesn't seem like as big of a problem.

    The solution Microsoft have settled on is somewhat conceptually similar to the runtime store, but it actually goes deeper than that.

    Introducing Shared Frameworks in ASP.NET Core 2.1

In ASP.NET Core 2.1 (currently at preview 1), ASP.NET Core is now a Shared Framework, very similar to the existing Microsoft.NETCore.App shared framework that effectively "is" .NET Core. When you install the .NET Core runtime you can also install the ASP.NET Core runtime.

After you install the preview, you'll find you have three folders in C:\Program Files\dotnet\shared (on Windows).

    These are the three Shared frameworks for ASP.NET Core 2.1:

    • Microsoft.NETCore.App - the .NET Core framework that previously was the only framework installed
    • Microsoft.AspNetCore.App - all the dlls from packages that make up the "core" of ASP.NET Core, with as many packages that have third-party dependencies removed
    • Microsoft.AspNetCore.All - all the packages that were previously referenced by the Microsoft.AspNetCore.All metapackage, including all their dependencies.

Each of these frameworks "inherits" from the last, so there's no duplication of libraries between them, but the folder layout is much simpler - just a flat list of libraries.

    So why should I care?

    That's all nice and interesting, but how does it affect how we develop ASP.NET Core applications? Well for the most part, things are much the same, but there's a few points to take note of.

    Reference Microsoft.AspNetCore.App in your apps

    As described in this issue, Microsoft have introduced another metapackage called Microsoft.AspNetCore.App with ASP.NET Core 2.1. This contains all of the libraries that make up the core of ASP.NET Core that are shipped by the .NET and ASP.NET team themselves. Microsoft recommend using this package instead of the All metapackage, as that way they can provide direct support, instead of potentially having to rely on third-party libraries (like StackExchange.Redis or SQLite).

    In terms of behaviour, you'll still effectively get the same publish output dependency-trimming that you do currently (though the mechanism is slightly different), so there's no need to worry about that. If you need some of the extra packages that aren't part of the new Microsoft.AspNetCore.App metapackage, then you can just reference them individually.

Note that you are still free to reference the Microsoft.AspNetCore.All metapackage; it's just not recommended, as it locks you into specific versions of third-party dependencies. As you saw previously, the All shared framework inherits from the App shared framework, so it should be easy enough to switch between them.

    Framework version mismatches

    By moving away from the runtime store, and instead moving to a shared-framework approach, it's easier for the .NET Core runtime to handle mis-matches between the requested runtime and the installed runtimes.

    With ASP.NET Core prior to 2.1, the runtime would automatically roll-forward patch versions if a newer version of the runtime was installed on the machine, but it would never roll forward minor versions. For example, if versions 2.0.2 and 2.0.3 were installed, then an app targeting 2.0.2 would use 2.0.3 automatically. However if only version 2.1.0 was installed and the app targeted version 2.0.0, the app would fail to start.

    With ASP.NET Core 2.1, the runtime can roll-forward by using a newer minor version of the framework than requested. So in the previous example, an app targeting 2.0.0 would be able to run on a machine that only has 2.1.0 or 2.2.1 installed for example.

    An exact minor match is always chosen preferentially; the minor version only rolls-forward when your app would otherwise be unable to run.

    Exact dependency ranges

The final major change introduced in Microsoft.AspNetCore.App is the use of exact-version requirements for referenced NuGet packages. Typically, most NuGet packages specify their dependencies using "at least" ranges, where any version equal to or higher than the one specified will satisfy the requirement.

    For example, the image below shows some of the dependencies of the Microsoft.AspNetCore.All (version 2.0.6) package.

    Exploring the Microsoft.AspNetCore.App shared framework in ASP.NET Core 2.1 (preview 1)

    Due to the way these dependencies are specified, it would be possible to silently "lift" a dependency to a higher version than that specified. For example, if you added a package which depended on a newer version, say 2.1.0 of Microsoft.AspNetCore.Authentication, to an app using version 2.0.0 of the All package, then NuGet would select 2.1.0 as it satisfies all the requirements. That could result in you trying to use untested combinations of the ASP.NET Core framework libraries.

    Consequently, the Microsoft.AspNetCore.App package specifies exact versions for its dependencies (note the = instead of >=).

    Exploring the Microsoft.AspNetCore.App shared framework in ASP.NET Core 2.1 (preview 1)

    Now if you attempt to pull in a higher version of a framework library transitively, you'll get an error from NuGet when it tries to restore, warning you about the issue. So if you attempt to use version 2.2.0 of Microsoft.AspNetCore.Antiforgery with version 2.1.0 of the App metapackage for example, you'll get an error.

    It's still possible to pull in a higher version of a framework package if you need to, by referencing it directly and overriding the error, but at that point you're making a conscious decision to head into uncharted waters!

    Summary

    ASP.NET Core 2.1 brings a surprising number of changes under the hood for a minor release, and fundamentally re-architects the way ASP.NET Core apps are delivered. However, as a developer you don't have much to worry about. Other than switching to the Microsoft.AspNetCore.App metapackage and making some minor adjustments, the upgrade from 2.0 to 2.1 should be very smooth. If you're interested in digging further into the under-the-hood changes, I recommend checking out the links below:


    Damien Bowden: Using Message Pack with ASP.NET Core SignalR

    This post shows how SignalR could be used to send messages between different C# console clients using Message Pack as the protocol. An ASP.NET Core web application is used to host the SignalR Hub.

    Code: https://github.com/damienbod/AspNetCoreAngularSignalR

    Posts in this series

    History

    2018-05-08 Updated Microsoft.AspNetCore.SignalR 2.1 rc1

    Setting up the Message Pack SignalR server

    Add the Microsoft.AspNetCore.SignalR and the Microsoft.AspNetCore.SignalR.Protocols.MessagePack NuGet packages to the ASP.NET Core server application where the SignalR Hub will be hosted. The Visual Studio NuGet Package Manager can be used for this.

    Or just add it directly to the .csproj project file.

    <PackageReference 
      Include="Microsoft.AspNetCore.SignalR" 
      Version="1.0.0-rc1-final" />
    <PackageReference 
      Include="Microsoft.AspNetCore.SignalR.Protocols.MessagePack" 
      Version="1.0.0-rc1-final" />
    

    Set up a SignalR Hub as required. This is done by implementing the Hub class.

    using Dtos;
    using Microsoft.AspNetCore.SignalR;
    using System.Threading.Tasks;
    
    namespace AspNetCoreAngularSignalR.SignalRHubs
    {
        // Send messages using Message Pack binary formatter
        public class LoopyMessageHub : Hub
        {
            public Task Send(MessageDto data)
            {
                return Clients.All.SendAsync("Send", data);
            }
        }
    }
    
    

    A DTO class is created to send the Message Pack messages. Notice that the class is a plain C# class with no Message Pack specific attributes.

    using System;
    
    namespace Dtos
    {
        public class MessageDto
        {
            public Guid Id { get; set; }
    
            public string Name { get; set; }
    
            public int Amount { get; set; }
        }
    }
    
    

    Then add the Message Pack protocol to the SignalR service.

    services.AddSignalR()
        .AddMessagePackProtocol();
    

    And configure the SignalR Hub in the Startup class Configure method of the ASP.NET Core server application.

    app.UseSignalR(routes =>
    {
    	routes.MapHub<LoopyMessageHub>("/loopymessage");
    });
    

    Setting up the Message Pack SignalR client

    Add the Microsoft.AspNetCore.SignalR.Client and the Microsoft.AspNetCore.SignalR.Protocols.MessagePack NuGet packages to the SignalR client console application.

    The packages are added to the project file.

    <PackageReference 
      Include="Microsoft.AspNetCore.SignalR.Client" 
      Version="1.0.0-rc1-final" />
    <PackageReference 
      Include="Microsoft.AspNetCore.SignalR.Protocols.MessagePack" 
      Version="1.0.0-rc1-final" />
    

    Create a Hub client connection using the Message Pack protocol. The URL must match the URL configuration on the server.

    public static async Task SetupSignalRHubAsync()
    {
    	_hubConnection = new HubConnectionBuilder()
    		 .WithUrl("https://localhost:44324/loopymessage")
    		 .AddMessagePackProtocol()
    		 .ConfigureLogging(factory =>
    		 {
    			 factory.AddConsole();
    			 factory.AddFilter("Console", level => level >= LogLevel.Trace);
    		 }).Build();
    
    	 await _hubConnection.StartAsync();
    }
    

    The Hub can then be used to send or receive SignalR messages using Message Pack as the binary serializer.

    using Dtos;
    using System;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.SignalR.Client;
    using Microsoft.Extensions.Logging;
    using Microsoft.AspNetCore.SignalR.Protocol;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.DependencyInjection.Extensions;
    
    namespace ConsoleSignalRMessagePack
    {
        class Program
        {
            private static HubConnection _hubConnection;
    
            public static void Main(string[] args) => MainAsync().GetAwaiter().GetResult();
    
            static async Task MainAsync()
            {
                await SetupSignalRHubAsync();
                _hubConnection.On<MessageDto>("Send", (message) =>
                {
                    Console.WriteLine($"Received Message: {message.Name}");
                });
                Console.WriteLine("Connected to Hub");
                Console.WriteLine("Press ESC to stop");
                do
                {
                    while (!Console.KeyAvailable)
                    {
                        var message = Console.ReadLine();
                        await _hubConnection.SendAsync("Send", new MessageDto() { Id = Guid.NewGuid(), Name = message, Amount = 7 });
                        Console.WriteLine("SendAsync to Hub");
                    }
                }
                while (Console.ReadKey(true).Key != ConsoleKey.Escape);
    
                await _hubConnection.DisposeAsync();
            }
    
            public static async Task SetupSignalRHubAsync()
            {
                _hubConnection = new HubConnectionBuilder()
                     .WithUrl("https://localhost:44324/loopymessage")
                     .AddMessagePackProtocol()
                     .ConfigureLogging(factory =>
                     {
                         factory.AddConsole();
                         factory.AddFilter("Console", level => level >= LogLevel.Trace);
                     }).Build();
    
                 await _hubConnection.StartAsync();
            }
        }
    }
    
    

    Testing

    Start the server application and two console applications. Then you can send and receive SignalR messages, which use Message Pack as the protocol.


    Links:

    https://msgpack.org/

    https://github.com/aspnet/SignalR

    https://github.com/aspnet/SignalR#readme

    https://radu-matei.com/blog/signalr-core/


    Anuraj Parameswaran: Exploring Global Tools in .NET Core

    This post is about Global Tools in .NET Core, Global Tools is new feature in .NET Core. Global Tools helps you to write .NET Core console apps that can be packaged and delivered as NuGet packages. It is similar to npm global tools.


    Damien Bowden: First experiments with makecode and micro:bit

    At the MVP Global Summit, I heard about MakeCode for the first time. The project makes it really easy for people to get a first introduction to code and computer science. I got the chance to play around with the Micro:bit, which has a whole range of sensors and can easily be programmed from MakeCode.

    I decided to experiment and tried it out with two 12 year olds and a 10 year old.

    MakeCode

    The https://makecode.com/ website provides a whole range of links, getting started lessons, and great ideas on how people can use this. We experimented first with MakeCode for the Micro:bit. The editor runs in any browser, and https://makecode.microbit.org/ can be used to experiment with, or program, the Micro:bit.

    Micro:bit

    The Micro:bit is a 32-bit ARM computer with all types of sensors, inputs and outputs. Here's a link with the features: http://microbit.org/guide/features/

    The Micro:bit can be purchased almost anywhere in the world. Links for your country can be found here: https://www.microbit.org/resellers/

    The Micro:bit can be connected to your computer using a USB cable.

    Testing

    Once set up, I gave the kids a simple introduction, explained the different blocks, and very quickly they started to experiment themselves. The first result looked like this, which they called a magic show. I kind of like the name.

    I also explained the code, and they could understand how to change the code and how it mapped back to the block code. The mapping between the block code and the text code is a fantastic feature of MakeCode. Their first question was why bother with the text code at all, which showed they understood the relationship, so I could then explain its advantages.

    input.onButtonPressed(Button.A, () => {
        basic.showString("HELLO!")
        basic.pause(2000)
    })
    input.onGesture(Gesture.FreeFall, () => {
        basic.showIcon(IconNames.No)
        basic.pause(4000)
    })
    input.onButtonPressed(Button.AB, () => {
        basic.showString("DAS WARS")
    })
    input.onButtonPressed(Button.B, () => {
        basic.showIcon(IconNames.Angry)
    })
    input.onGesture(Gesture.Shake, () => {
        basic.showLeds(`
            # # # # #
            # # # # #
            # # # # #
            # # # # #
            # # # # #
            `)
    })
    basic.forever(() => {
        led.plot(2, 2)
        basic.pause(30)
        led.unplot(2, 2)
        basic.pause(300)
    })
    

    Then it was downloaded to the Micro:bit. The MakeCode Micro:bit software provides a download button. If you use this from the browser, it creates a hex file, which can be copied to the hardware via drag and drop. If you use the MakeCode Micro:bit Windows Store application, it will download it directly for you.

    Once downloaded, the magic show could begin.

    The following was produced by the 10 year old, who needed a bit more help. He discovered the sound.

    Notes

    This is a super project, and I would highly recommend it to schools, or as a present for kids. There are so many ways to try out new things, or code with different hardware, or even Minecraft. The kids have started to introduce it to other kids already. It would be great if they could do this in school. If you have questions or queries, the MakeCode team are really helpful and can be reached on Twitter at @MsMakeCode, or you can create a GitHub issue. The docs are really excellent if you require help with programming, and provide some really cool examples and ideas.

    Links:

    http://makecode.com/

    https://makecode.microbit.org/

    https://www.microbit.org/resellers/

    http://microbit.org/guide/features/


    Damien Bowden: Securing the CDN links in the ASP.NET Core 2.1 templates

    This article uses the ASP.NET Core 2.1 MVC template and shows how to secure the CDN links using the integrity parameter.

    A new ASP.NET Core MVC application was created using the 2.1 template in Visual Studio.

    This template uses HTTPS by default and adds some of the required security headers like HSTS, which is required for any application. The template adds the integrity parameter to the JavaScript CDN links, but it is missing on the CSS CDN links.

    <script src="https://ajax.aspnetcdn.com/ajax/jquery/jquery-2.2.0.min.js"
     asp-fallback-src="~/lib/jquery/dist/jquery.min.js"
     asp-fallback-test="window.jQuery"  
     crossorigin="anonymous"
     integrity="sha384-K+ctZQ+LL8q6tP7I94W+qzQsfRV2a+AfHIi9k8z8l9ggpc8X+Ytst4yBo/hH+8Fk">
    </script>
    

    If the value of the integrity parameter does not match the script, for example because the script on the CDN was changed, or a bitcoin miner was added to it, the MVC application will not load the script.

    To test this, you can change the value of the integrity parameter on the script; in the production environment the CDN script will then not load, and the application falls back to the locally deployed script. Changing the value of the integrity parameter simulates a changed script on the CDN. The following snapshot shows an example of the possible errors sent to the browser:

    Adding the integrity parameter to the CSS link

    The template creates a bootstrap link in the _Layout.cshtml as follows:

    <link rel="stylesheet" href="https://ajax.aspnetcdn.com/ajax/bootstrap/3.3.7/css/bootstrap.min.css"
                  asp-fallback-href="~/lib/bootstrap/dist/css/bootstrap.min.css"
                  asp-fallback-test-class="sr-only" asp-fallback-test-property="position" asp-fallback-test-value="absolute" />
    

    This is missing the integrity parameter. To fix this, the integrity parameter can be added to the link.

    <link rel="stylesheet" 
              integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" 
              crossorigin="anonymous"
              href="https://ajax.aspnetcdn.com/ajax/bootstrap/3.3.7/css/bootstrap.min.css"
              asp-fallback-href="~/lib/bootstrap/dist/css/bootstrap.min.css"
              asp-fallback-test-class="sr-only"
              asp-fallback-test-property="position" 
              asp-fallback-test-value="absolute" />
    

    The value of the integrity parameter was created using the SRI Hash Generator. When creating this, you have to be sure that the link is safe; by using this CDN, your application trusts the CDN links.
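
    If you prefer not to rely on an online generator, the integrity value is just the base64 encoded SHA-384 hash of the file contents, prefixed with the algorithm name. A minimal C# sketch (the file path in the usage comment is only a placeholder) could look like this:

    using System;
    using System.IO;
    using System.Security.Cryptography;
    
    public static class SriHash
    {
        // Returns an SRI value such as "sha384-..." for a local copy of the file
        public static string ComputeSha384(string filePath)
        {
            using (var sha384 = SHA384.Create())
            using (var stream = File.OpenRead(filePath))
            {
                var hash = sha384.ComputeHash(stream);
                return "sha384-" + Convert.ToBase64String(hash);
            }
        }
    }
    
    // Example usage (hypothetical path):
    // var integrity = SriHash.ComputeSha384("wwwroot/lib/bootstrap/dist/css/bootstrap.min.css");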

    Now, if the CSS file is changed on the CDN server, the application will not load it.

    The CSP header of the application can also be improved. The application should only load content from the required CDNs and nowhere else. This can be enforced by adding the following CSP configuration:

    content-security-policy: 
    script-src 'self' https://ajax.aspnetcdn.com;
    style-src 'self' https://ajax.aspnetcdn.com;
    img-src 'self';
    font-src 'self' https://ajax.aspnetcdn.com;
    form-action 'self';
    frame-ancestors 'self';
    block-all-mixed-content
    

    Or you can use NWebSec and add it to the Startup.cs:

    app.UseCsp(opts => opts
    	.BlockAllMixedContent()
    	.FontSources(s => s.Self()
    		.CustomSources("https://ajax.aspnetcdn.com"))
    	.FormActions(s => s.Self())
    	.FrameAncestors(s => s.Self())
    	.ImageSources(s => s.Self())
    	.StyleSources(s => s.Self()
    		.CustomSources("https://ajax.aspnetcdn.com"))
    	.ScriptSources(s => s.Self()
    		.UnsafeInline()
    		.CustomSources("https://ajax.aspnetcdn.com"))
    );
    

    Links:

    https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity

    https://www.srihash.org/

    https://www.troyhunt.com/protecting-your-embedded-content-with-subresource-integrity-sri/

    https://scotthelme.co.uk/tag/cdn/

    https://rehansaeed.com/tag/subresource-integrity-sri/

    https://rehansaeed.com/subresource-integrity-taghelper-using-asp-net-core/


    Andrew Lock: Fixing Nginx "upstream sent too big header" error when running an ingress controller in Kubernetes

    Fixing Nginx

    In this post I describe a problem I had running IdentityServer 4 behind an Nginx reverse proxy. In my case, I was running Nginx as an ingress controller for a Kubernetes cluster, but the issue is actually not specific to Kubernetes, or IdentityServer - it's an Nginx configuration issue.

    The error: "upstream sent too big header while reading response header from upstream"

    Initially, the Nginx ingress controller appeared to be configured correctly. I could view the IdentityServer home page, and could click login, but when I was redirected to the authorize endpoint (as part of the standard IdentityServer flow), I would get a 502 Bad Gateway error and a blank page.

    Looking through the logs, IdentityServer showed no errors - as far as it was concerned there were no problems with the authorize request. However, looking through the Nginx logs revealed this gem (formatted slightly for legibility):

    2018/02/05 04:55:21 [error] 193#193:  
        *25 upstream sent too big header while reading response header from upstream, 
    client:  
        192.168.1.121, 
    server:  
        example.com, 
    request:  
      "GET /idsrv/connect/authorize/callback?state=14379610753351226&amp;nonce=9227284121831921&amp;client_id=test.client&amp;redirect_uri=https%3A%2F%2Fexample.com%2Fclient%2F%23%2Fcallback%3F&amp;response_type=id_token%20token&amp;scope=profile%20openid%20email&amp;acr_values=tenant%3Atenant1 HTTP/1.1",
    upstream:  
      "http://10.32.0.9:80/idsrv/connect/authorize/callback?state=14379610753351226&amp;nonce=9227284121831921&amp;client_id=test.client&amp;redirect_uri=https%3A%2F%2Fexample.com%2F.client%2F%23%
    

    Apparently, this is a common problem with Nginx, and is essentially exactly what the error says. Nginx sometimes chokes on responses with large headers, because its buffer size is smaller than some other web servers. When it gets a response with large headers, as was the case for my IdentityServer OpenID Connect callback, it falls over and sends a 502 response.

    The solution is to simply increase Nginx's buffer size. If you're running Nginx on bare metal you could do this by increasing the buffer size in the config file, something like:

    proxy_buffers         8 16k;  # Buffer pool = 8 buffers of 16k  
    proxy_buffer_size     16k;    # 16k of buffers from pool used for headers  
    

    However, in this case, I was working with Nginx as an ingress controller to a Kubernetes cluster. The question was, how do you configure Nginx when it's running in a container?

    How to configure the Nginx ingress controller

    Luckily, the Nginx ingress controller is designed for exactly this situation. It uses a ConfigMap of values that are mapped to internal Nginx configuration values. By changing the ConfigMap, you can configure the underlying Nginx Pod.

    The Nginx ingress controller only supports changing a subset of options via the ConfigMap approach, but luckily proxy-buffer-size is one such option! There are two things you need to do to customise the ingress:

    1. Deploy the ConfigMap containing your customisations
    2. Point the Nginx ingress controller Deployment to your ConfigMap

    I'm just going to show the template changes in this post, assuming you have a cluster created using kubeadm and kubectl.

    Creating the ConfigMap

    The ConfigMap is one of the simplest resources in Kubernetes; it's essentially just a collection of key-value pairs. The following manifest creates a ConfigMap called nginx-configuration and sets the proxy-buffer-size to "16k", to solve the 502 errors I was seeing previously.

    kind: ConfigMap  
    apiVersion: v1  
    metadata:  
      name: nginx-configuration
      namespace: kube-system
      labels:
        k8s-app: nginx-ingress-controller
    data:  
      proxy-buffer-size: "16k"
    

    If you save this to a file called nginx-configuration.yaml, you can apply it to your cluster using:

    kubectl apply -f nginx-configuration.yaml  
    

    However, you can't just apply the ConfigMap and have the ingress controller pick it up automatically - you have to update your Nginx Deployment so it knows which ConfigMap to use.

    Configuring the Nginx ingress controller to use your ConfigMap

    In order for the ingress controller to use your ConfigMap, you must pass the ConfigMap name (nginx-configuration) as an argument in your deployment. For example:

    args:  
      - /nginx-ingress-controller
      - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
      - --configmap=$(POD_NAMESPACE)/nginx-configuration
    

    Without this argument, the ingress controller will ignore your ConfigMap. The complete deployment manifest will look something like the following (adapted from the Nginx ingress controller repo):

    apiVersion: extensions/v1beta1  
    kind: Deployment  
    metadata:  
      name: nginx-ingress-controller
      namespace: ingress-nginx 
    spec:  
      replicas: 1
      template:
        metadata:
          labels:
            app: ingress-nginx
          annotations:
            prometheus.io/port: '10254'
            prometheus.io/scrape: 'true' 
        spec:
          initContainers:
          - command:
            - sh
            - -c
            - sysctl -w net.core.somaxconn=32768; sysctl -w net.ipv4.ip_local_port_range="1024 65535"
            image: alpine:3.6
            imagePullPolicy: IfNotPresent
            name: sysctl
            securityContext:
              privileged: true
          containers:
            - name: nginx-ingress-controller
              image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.2
              args:
                - /nginx-ingress-controller
                - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
                - --configmap=$(POD_NAMESPACE)/nginx-configuration
              env:
                - name: POD_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
              ports:
              - name: http
                containerPort: 80
              - name: https
                containerPort: 443
              livenessProbe:
                failureThreshold: 3
                httpGet:
                  path: /healthz
                  port: 10254
                  scheme: HTTP
                initialDelaySeconds: 10
                periodSeconds: 10
                successThreshold: 1
                timeoutSeconds: 1
              readinessProbe:
                failureThreshold: 3
                httpGet:
                  path: /healthz
                  port: 10254
                  scheme: HTTP
                periodSeconds: 10
                successThreshold: 1
                timeoutSeconds: 1
    

    Summary

    While deploying a Kubernetes cluster locally, the Nginx ingress controller was returning 502 errors for some requests. This was due to the response headers being too large for Nginx to handle. Increasing the proxy_buffer_size configuration parameter solved the problem. To achieve this with the ingress controller, you must provide a ConfigMap and point your ingress controller to it by passing an additional arg in your Deployment.


    Ben Foster: Injecting UrlHelper in ASP.NET Core MVC

    One of our APIs has a dynamic routing system that invokes a different handler based on attributes of the incoming HTTP request.

    Each of these handlers is responsible for building the API response which includes generating hypermedia links that describe the state and capabilities of the resource, for example:

    {
      "total_count": 3,
      "limit": 10,
      "from": "2018-01-25T06:36:08Z",
      "to": "2018-03-10T07:13:24Z",
      "data": [
        {
          "event_id": "evt_b7ykb47ryaouznsbmbn7ul4uai",
          "event_type": "payment.declined",
          "created_on": "2018-03-10T07:13:24Z",
          "_links": {
            "self": {
              "href": "https://example.com/events/evt_b7ykb47ryaouznsbmbn7ul4uai"
            },
            "webhooks-retry": {
              "href": "https://example.com/events/evt_b7ykb47ryaouznsbmbn7ul4uai/webhooks/retry"
            }
          }
        },
        ...
      ]
    }
    

    To avoid hardcoding paths into these handlers we wanted to take advantage of UrlHelper to build the links. Unlike many components in ASP.NET Core, this is not something that is injectable by default.

    To register it with the built-in container, add the following to your Startup class:

    services.AddSingleton<IActionContextAccessor, ActionContextAccessor>();
    services.AddScoped<IUrlHelper>(x => {
        var actionContext = x.GetRequiredService<IActionContextAccessor>().ActionContext;
        var factory = x.GetRequiredService<IUrlHelperFactory>();
        return factory.GetUrlHelper(actionContext);
    });
    

    Both IActionContextAccessor and IUrlHelperFactory live in the Microsoft.AspNetCore.Mvc.Core package. If you're using the Microsoft.AspNetCore.All metapackage you should have this referenced already.

    Once done, you'll be able to use IUrlHelper in any of your components, assuming you're in the context of an HTTP request:

    if (authResponse.ThreeDsSessionId.HasValue)
    {
        return new PaymentAcceptedResponse
        {
            Id = id,
            Reference = paymentRequest.Reference,
            Status = authResponse.Status
        }
        .WithLink("self", _urlHelper.PaymentLink(id))
        .WithLink("redirect",
            _urlHelper.Link("AcsRedirect", new { id = authResponse.ThreeDsSessionId }));
    }
    

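    With that registration in place, a component can simply take IUrlHelper as a constructor dependency. The class below is only a hypothetical sketch, with "GetPayment" standing in for whatever named route your API exposes:

    using Microsoft.AspNetCore.Mvc;
    
    public class PaymentLinkBuilder
    {
        private readonly IUrlHelper _urlHelper;
    
        // Resolved via the IUrlHelper registration shown in Startup above
        public PaymentLinkBuilder(IUrlHelper urlHelper)
        {
            _urlHelper = urlHelper;
        }
    
        // Builds an absolute URL for a named route; "GetPayment" is a placeholder route name
        public string BuildSelfLink(string paymentId)
        {
            return _urlHelper.Link("GetPayment", new { id = paymentId });
        }
    }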

    Anuraj Parameswaran: Bulk Removing Azure Active Directory Users using PowerShell

    This post is about deleting an Azure Active Directory. Sometimes you can't remove your Azure Active Directory because of the users and/or applications created or synced on it, and removing those users one at a time from the Azure Portal isn't practical, so PowerShell can be used to remove them in bulk.


    Andrew Lock: ASP.NET Core in Action - Filters

    ASP.NET Core in Action - Filters

    In February 2017, the Manning Early Access Program (MEAP) started for the ASP.NET Core book I am currently writing - ASP.NET Core in Action. This post is a sample of what you can find in the book. If you like what you see, please take a look - for now you can even get a 37% discount with the code lockaspdotnet!

    The Manning Early Access Program provides you full access to books as they are written. You get the chapters as they are produced, plus the finished eBook as soon as it's ready, and the paper book long before it's in bookstores. You can also interact with the author (me!) on the forums to provide feedback as the book is being written.

    The book is now finished and completely available in the MEAP, so now is the time to act if you're interested! Thanks 🙂

    Understanding filters and when to use them

    The MVC filter pipeline is a relatively simple concept, in that it provides “hooks” into the normal MVC request as shown in figure 1. For example, say you wanted to ensure that users can only create or edit products on an ecommerce app if they’re logged in. The app would redirect anonymous users to a login page instead of executing the action.

    Without filters, you’d need to include the same code to check for a logged in user at the start of each specific action method. With this approach, the MvcMiddleware still executes the model binding and validation, even if the user wasn’t logged in.

    With filters, you can use the “hooks” into the MVC request to run common code across all, or a sub-set of requests. This way you can do a wide range of things, such as:

    • Ensure a user is logged in before an action method, model binding or validation runs
    • Customize the output format of particular action methods
    • Handle model validation failures before an action method is invoked
    • Catch exceptions from an action method and handle them in a special way

    ASP.NET Core in Action - Filters

    Figure 1 Filters run at multiple points in the MvcMiddleware in the normal handling of a request.

    In many ways, the MVC filter pipeline is like a middleware pipeline, but restricted to the MvcMiddleware only. Like middleware, filters are good for handling cross-cutting concerns for your application, and are a useful tool for reducing code duplication in many cases.

    The MVC filter pipeline

    As you saw in figure 1, MVC filters run at a number of different points in the MVC request. This "linear" view of an MVC request and the filter pipeline that we've used thus far doesn't quite match up with how these filters execute. Five different types of filter, each of which runs at a different "stage" in the MvcMiddleware, are shown in figure 2.

    Each stage lends itself to a particular use case, thanks to its specific location in the MvcMiddleware, with respect to model binding, action execution, and result execution.

    • Authorization filters – These run first in the pipeline, and are useful for protecting your APIs and action methods. If an authorization filter deems the request unauthorized, it short-circuits the request, preventing the rest of the filter pipeline from running.
    • Resource filters – After authorization, resource filters are the next filters to run in the pipeline. They can also execute at the end of the pipeline, in much the same way middleware components can handle both the incoming request and the outgoing response. Alternatively, they can completely short-circuit the request pipeline, and return a response directly. Thanks to their early position in the pipeline, resource filters can have a variety of uses. You could add metrics to an action method, prevent an action method from executing if an unsupported content type is requested, or, as they run before model binding, control the way model binding works for that request.
    • Action filters – Action filters run before and after an action is executed. As model binding has already happened, action filters let you manipulate the arguments to the method, before it executes, or they can short-circuit the action completely and return a different IActionResult. As they also run after the action executes, they can optionally customize the IActionResult before it’s executed.
    • Exception filters – Exception filters can catch exceptions that occur in the filter pipeline, and handle them appropriately. They let you write custom MVC-specific error handling code, which can be useful in some situations. For example, you could catch exceptions in Web API actions and format them differently to exceptions in your MVC actions.
    • Result filters – Result filters run before and after an action method’s IActionResult is executed. This lets you control the execution of the result, or even short-circuit the execution of the result.

    ASP.NET Core in Action - Filters

    Figure 2 The MVC filter pipeline, including the five different filters stages. Some filter stages (Resource, Action, Result filters) run twice, before and after the remainder of the pipeline.

    Exactly which filter you pick to implement depends on the functionality you’re trying to introduce. Want to short-circuit a request as early as possible? Resource filters are a good fit. Need access to the action method parameters? Use an action filter.
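
    As a rough illustration (my own sketch, not an excerpt from the book), an action filter that short-circuits a request when a particular bound argument is missing might look like the following; the argument name "model" is just an example:

    using Microsoft.AspNetCore.Mvc;
    using Microsoft.AspNetCore.Mvc.Filters;
    
    public class RequireModelActionFilter : IActionFilter
    {
        public void OnActionExecuting(ActionExecutingContext context)
        {
            // Model binding has already run, so the bound arguments are available here.
            // Setting a Result short-circuits the action and any later filters.
            if (!context.ActionArguments.ContainsKey("model"))
            {
                context.Result = new BadRequestResult();
            }
        }
    
        public void OnActionExecuted(ActionExecutedContext context)
        {
            // Runs after the action method; the IActionResult could be inspected or replaced here
        }
    }
    
    A filter like this can be registered globally via MvcOptions.Filters, or applied to specific controllers and actions using [TypeFilter] or [ServiceFilter].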

    You can think of the filter pipeline as a small middleware pipeline that lives by itself in the MvcMiddleware. Alternatively, you could think of them as “hooks” into the MVC action invocation process, which let you run code at a particular point in a request’s “lifecycle.”

    That’s all for this article. For more information, read the free first chapter of ASP.NET Core in Action and see this Slideshare presentation.


    Anuraj Parameswaran: WebHooks in ASP.NET Core

    This post is about consuming webhooks in ASP.NET Core. A WebHook is an HTTP callback: an HTTP POST that occurs when something happens; a simple event-notification via HTTP POST. From ASP.NET Core 2.1 preview onwards ASP.NET Core supports WebHooks. As usual, to use WebHooks, you need to install package for WebHook support. In this post I am consuming webhook from GitHub. So you need to install Microsoft.AspNetCore.WebHooks.Receivers.GitHub. You can do it like this.


    Andrew Lock: Coming in ASP.NET Core 2.1 - top-level MVC parameter validation

    Coming in ASP.NET Core 2.1 - top-level MVC parameter validation

    This post looks at a feature coming in ASP.NET Core 2.1 related to Model Binding in ASP.NET Core MVC/Web API Controllers. I say it's a feature, but from my point of view it feels more like a bug-fix!

    Note, ASP.NET Core 2.1 isn't actually in preview yet, so this post might not be accurate! I'm making a few assumptions from looking at the code and issues, I haven't tried it out myself yet.

    Model validation in ASP.NET Core 2.0

    Model validation is an important part of the MVC pipeline in ASP.NET Core. There are many ways you can hook into the validation layer (using FluentValidation for example), but probably the most common approach is to decorate your binding models with validation attributes from the System.ComponentModel.DataAnnotations namespace. For example:

    public class UserModel  
    {
        [Required, EmailAddress]
        public string Email { get; set; }
    
        [Required, StringLength(1000)]
        public string Name { get; set; }
    }
    

    If you use the UserModel in a controller's action method, the MvcMiddleware will automatically create a new instance of the object, bind the properties of the model, and validate it using three sources:

    1. Form values – Sent in the body of an HTTP request when a form is sent to the server using a POST
    2. Route values – Obtained from URL segments or through default values after matching a route
    3. Querystring values – Passed at the end of the URL, not used during routing.

    Note, currently, data sent as JSON isn't bound by default. If you wish to bind JSON data sent in the body of the request, you need to decorate the action parameter with the [FromBody] attribute, as described here.
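
    For example, a minimal sketch of an action binding the UserModel from a JSON body (the controller and action names here are just placeholders) would look something like this:

    public class UsersApiController : Controller
    {
        [HttpPost]
        public IActionResult SaveUserFromJson([FromBody] UserModel model)
        {
            // model is deserialized from the JSON request body; validation runs as usual
            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }
    
            return Ok();
        }
    }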

    In your controller action, you can simply check the ModelState property, and find out if the data provided was valid:

    public class CheckoutController : Controller  
    {
        public IActionResult SaveUser(UserModel model)
        {
            if(!ModelState.IsValid)
            {
                // Something wasn't valid on the model
                return View(model);
            }
    
            // The model passed validation, do something with it
        }
    }
    

    This is all pretty standard MVC stuff, but what if you don't want to create a whole binding model, but you still want to validate the incoming data?

    Top-level parameters in ASP.NET Core 2.0

    The DataAnnotation attributes used by the default MVC validation system don't have to be applied to the properties of a class, they can also be applied to parameters. That might lead you to think that you could completely replace the UserModel in the above example with the following:

    public class CheckoutController : Controller  
    {
        public IActionResult SaveUser(
            [Required, EmailAddress] string Email,
            [Required, StringLength(1000)] string Name)
        {
            if(!ModelState.IsValid)
            {
                // Something wasn't valid on the model
                return View();
            }
    
            // The model passed validation, do something with it
        }
    }
    

    Unfortunately, this doesn't work! While the properties are bound, the validation attributes are ignored, and ModelState.IsValid is always true!

    Top level parameters in ASP.NET Core 2.1

    Luckily, the ASP.NET Core team were aware of the issue, and a fix has been merged as part of ASP.NET Core 2.1. As a consequence, the code in the previous section behaves as you'd expect, with the parameters validated, and the ModelState.IsValid updated accordingly.

    As part of this work you will now also be able to use the [BindRequired] attribute on parameters. This attribute is important when you're binding non-nullable value types, as using the [Required] attribute with these doesn't give the expected behaviour.

    That means you can now do the following for example, and be sure that the testId parameter was bound correctly from the route parameters, and the qty parameter was bound from the querystring. Before ASP.NET Core 2.1 this won't even compile!

    [HttpGet("test/{testId}")]
    public IActionResult Get([BindRequired, FromRoute] Guid testId, [BindRequired, FromQuery] int qty)  
    {
        if(!ModelState.IsValid)
        {
            return BadRequest(ModelState);
        }
        // Valid and bound
    }
    

    For an excellent description of this problem and the difference between Required and BindRequired, see this article by Filip.

    Summary

    In ASP.NET Core 2.0 and below, validation attributes applied to top-level parameters are ignored, and the ModelState is not updated. Only validation attributes on complex model types are considered.

    In ASP.NET Core 2.1, validation attributes will now be respected on top-level parameters. What's more, you'll be able to apply the [BindRequired] attribute to parameters.

    ASP.NET Core 2.1 looks to be shaping up to have a ton of new features. This is one of those nice little improvements that just makes things a bit easier, a little bit more consistent - just the sort of changes I like 🙂


    Andrew Lock: Gotchas upgrading from IdentityServer 3 to IdentityServer 4

    Gotchas upgrading from IdentityServer 3 to IdentityServer 4

    This post covers a couple of gotchas I experienced upgrading an IdentityServer 3 implementation to IdentityServer 4. I've written about a previous issue I ran into with an OWIN app in this scenario - where JWTs could not be validated correctly after upgrading. In this post I'll discuss two other minor issues I ran into:

    1. The URL of the JSON Web Key Set (JWKS) has changed from /.well-known/jwks to /.well-known/openid-configuration/jwks.
    2. The KeyId of the X509 certificate signing material (used to validate the identity token) changes between IdentityServer 3 and IdentityServer 4. That means a token issued by IdentityServer 3 will not be validated using IdentityServer 4, leaving users stuck in a redirect loop.

    Both of these issues are actually quite minor, and weren't a problem for us to solve; they just caused a bit of confusion initially! This is just a quick post about these problems - if you're looking for more information on upgrading from IdentityServer 3 to 4 in general, I suggest checking out the docs, the announcement post, or this article by Scott Brady.

    1. The JWKS URL has changed

    OpenID Connect uses a "discovery document" to describe the capabilities and settings of the server - in this case, IdentityServer. This includes things like the Claims and Scopes that are available and the supported grants and response types. It also includes a number of URLs indicating other available endpoints. As a very compressed example, it might look like the following:

    {
        "issuer": "https://example.com",
        "jwks_uri": "https://example.com/.well-known/openid-configuration/jwks",
        "authorization_endpoint": "https://example.com/connect/authorize",
        "token_endpoint": "https://example.com/connect/token",
        "userinfo_endpoint": "https://example.com/connect/userinfo",
        "end_session_endpoint": "https://example.com/connect/endsession",
        "scopes_supported": [
            "openid",
            "profile",
            "email"
        ],
        "claims_supported": [
            "sub",
            "name",
            "family_name",
            "given_name"
        ],
        "grant_types_supported": [
            "authorization_code",
            "client_credentials",
            "refresh_token",
            "implicit"
        ],
        "response_types_supported": [
            "code",
            "token",
            "id_token",
            "id_token token",
        ],
        "id_token_signing_alg_values_supported": [
            "RS256"
        ],
        "code_challenge_methods_supported": [
            "plain",
            "S256"
        ]
    }
    

    The discovery document is always located at the URL /.well-known/openid-configuration, so a new client connecting to the server knows where to look, but the other endpoints are free to move, as long as the discovery document reflects that.

    In our move from IdentityServer 3 to IdentityServer 4, the JWKS URL did just that - it moved from /.well-known/jwks to /.well-known/openid-configuration/jwks. The discovery document obviously reflected that, and all of the IdentityServer .NET client libraries for doing token validation, both with .NET Core and for OWIN, switched to the correct URLs without any problems.

    What I didn't appreciate was that we had a Python app which was using IdentityServer for authentication, but which wasn't using the discovery document. Rather than go to the effort of calling the discovery document and parsing out the URL, and knowing that we controlled the IdentityServer implementation, the /.well-known/jwks URL was hard coded.

    Oops!

    Obviously it was a simple hack to update the hard coded URL to the new location, though a much better solution would be to properly parse the discovery document.
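
    For reference, reading the jwks_uri from the discovery document instead of hard coding it only takes a few lines. The C# sketch below assumes Json.NET is available and that the authority URL is supplied by the caller:

    using System.Net.Http;
    using System.Threading.Tasks;
    using Newtonsoft.Json.Linq;
    
    public static class DiscoveryHelper
    {
        // Fetches the discovery document and returns the advertised JWKS endpoint
        public static async Task<string> GetJwksUriAsync(HttpClient client, string authority)
        {
            var json = await client.GetStringAsync(
                authority.TrimEnd('/') + "/.well-known/openid-configuration");
    
            return (string)JObject.Parse(json)["jwks_uri"];
        }
    }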

    2. The KeyId of the signing material has changed

    This is a slightly complex issue, and I confess, this has been on my backlog to write up for so long that I can't remember all the details myself! I do, however, remember the symptom quite vividly - a crazy, endless, redirect loop on the client!

    The sequence of events looked something like this:

    1. The client side app authenticates with IdentityServer 3, obtaining an id and access token.
    2. Upgrade IdentityServer to IdentityServer 4.
    3. The client side app calls the API, which tries to validate the token using the public keys exposed by IdentityServer 4. However, it can't find the key that was used to sign the token, so the validation fails, causing a 401 redirect.
    4. The client side app handles the 401, and redirects to IdentityServer 4 to login.
    5. However, you're already logged in (the cookie persists across IdentityServer versions), so IdentityServer 4 redirects you back.
    6. Go to 4.

    Gotchas upgrading from IdentityServer 3 to IdentityServer 4

    It's possible that this issue manifested as it did due to something awry in the client side app, but the root cause of the issue was the fact a token issued by IdentityServer 3 could not be validated using the exposed public keys of IdentityServer 4, even though both implementations were using the same signing material - an X509 certificate.

    The same public and private keypair is used in both IdentityServer 3 and IdentityServer 4, but the key has a different identifier in each, so it is treated as a different key.

    In order to validate an access token, an app must obtain the public key material from IdentityServer, which it can use to confirm the token was signed with the associated private key. The public keys are exposed at the jwks endpoint (mentioned earlier), something like the following (truncated for brevity):

    {
      "keys": [
        {
          "kty": "RSA",
          "use": "sig",
          "kid": "E23F0643F144C997D6FEEB320F00773286C2FB09",
          "x5t": "4j8GQ_FEyZfW_usyDwB3MobC-wk",
          "e": "AQAB",
          "n": "rHRhPtwUwp-i3lA_CINLooJygpJwukbw",
          "x5c": [
            "MIIDLjCCAhagAwIBAgIQ9tul\/q5XHX10l7GMTDK3zCna+mQ="
          ],
          "alg": "RS256"
        }
      ]
    }
    

    As you can see, this JSON object contains a keys property which is an array of objects (though we only have one here). Therefore, when validating an access token, the API server needs to know which key to use for the validation.

    The JWT itself contains metadata indicating which signing material was used:

    {
      "alg": "RS256",
      "kid": "E23F0643F144C997D6FEEB320F00773286C2FB09",
      "typ": "JWT",
      "x5t": "4j8GQ_FEyZfW_usyDwB3MobC-wk"
    }
    

    As you can see, there's a kid property (KeyId) which matches in both the jwks response and the value in the JWT header. The API token validator uses the kid contained in the JWT to locate the appropriate signing material from the jwks endpoint, and can confirm the access token hasn't been tampered with.
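
    If you want to check which kid a particular access token was signed with, you can read the token header without validating it. A small sketch using the System.IdentityModel.Tokens.Jwt package:

    using System.IdentityModel.Tokens.Jwt;
    
    public static class TokenInspector
    {
        // Reads the JWT header without validating the signature
        public static string GetKeyId(string accessToken)
        {
            var token = new JwtSecurityTokenHandler().ReadJwtToken(accessToken);
            return token.Header.Kid;
        }
    }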

    Unfortunately, the kid was not consistent across IdentityServer 3 and IdentityServer 4. When trying to use a token issued by IdentityServer 3, a matching key could not be found in the IdentityServer 4 jwks response, and validation failed.

    For those interested, IdentityServer 3 uses the base 64 encoded certificate thumbprint as the KeyId - Base64Url.Encode(x509Key.Certificate.GetCertHash()). IdentityServer 4 uses X509SecurityKey.KeyId (https://github.com/IdentityServer/IdentityServer4/blob/993103d51bff929e4b0330f6c0ef9e3ffdcf8de3/src/IdentityServer4/ResponseHandling/DiscoveryResponseGenerator.cs#L316), which is slightly different - a base 16 encoded version of the hash.
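
    To make the difference concrete, both identifiers can be derived from the same certificate hash. The sketch below is only an approximation of what the two libraries do internally:

    using System;
    using System.Security.Cryptography.X509Certificates;
    
    public static class KeyIdFormats
    {
        // Roughly the IdentityServer 3 style: base64url encode the certificate hash
        public static string Base64UrlKeyId(X509Certificate2 cert)
        {
            return Convert.ToBase64String(cert.GetCertHash())
                .TrimEnd('=').Replace('+', '-').Replace('/', '_');
        }
    
        // Roughly the IdentityServer 4 style: a base 16 (hex) encoding of the same hash
        public static string HexKeyId(X509Certificate2 cert)
        {
            return BitConverter.ToString(cert.GetCertHash()).Replace("-", "");
        }
    }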

    Our simple solution to this was to do the upgrade of IdentityServer out of hours - in the morning, the IdentityServer cookies had expired and so everyone had to re-authenticate anyway. IdentityServer 4 issued new access tokens with a kid that matched its jwks values, so there were no issues 🙂

    In practice, this solution might not work for everyone, for example if you're not able to enforce a period of downtime. There are other options, like explicitly providing the kid material yourself as described in this issue if you need it. If the kid doesn't change between versions, you shouldn't have any issues validating old tokens in the upgrade.

    Alternatively, you could add the signing material to IdentityServer 4 using both the old and new kids. That way, IdentityServer 4 can validate tokens issued by IdentityServer 3 (using the old kid), while also issuing (and validating) new tokens using the new kid.

    Summary

    This post describes a couple of minor issues upgrading a deployment from IdentityServer 3 to IdentityServer 4. The first issue, the jwks URL changing, is not an issue I expect many people to run into - if you're using the discovery document you won't have this problem. The second issue is one you might run into when upgrading from IdentityServer 3 to IdentityServer 4 in production; even if you use the same X509 certificate in both implementations, tokens issued by IdentityServer 3 cannot be validated by IdentityServer 4 due to mismatching kids.


    Andrew Lock: Optimising ASP.NET Core apps in Docker - avoiding manually copying csproj files

    Optimising ASP.NET Core apps in Docker - avoiding manually copying csproj files

    This post builds on my previous posts on building ASP.NET Core apps in Docker and using Cake in Docker. In this post I show how you can optimise your Dockerfiles for dotnet restore, without having to manually specify all your app's .csproj files in the Dockerfile.

    Background - optimising your Dockerfile for dotnet restore

    When building ASP.NET Core apps using Docker, there are many best-practices to consider. One of the most important aspects is using the correct base image - in particular, a base image containing the .NET SDK to build your app, and a base image containing only the .NET runtime to run your app in production.

    In addition, there are a number of best practices which apply to Docker and the way it caches layers to build your app. I discussed this process in a previous post on building ASP.NET Core apps using Cake in Docker, so if that's new to you, I suggest checking it out.

    A common way to take advantage of the build cache when building your ASP.NET Core app in Docker is to copy across only the .csproj, .sln and NuGet.config files for your app before doing a restore, rather than the entire source code for your app. The NuGet package restore can be one of the slowest parts of the build, and it only depends on these files. By copying them first, Docker can cache the result of the restore, so it doesn't need to run twice, if all you do is change a .cs file.

    For example, in a previous post I used the following Docker file for building an ASP.NET Core app with three projects - a class library, an ASP.NET Core app, and a test project:

    # Build image
    FROM microsoft/dotnet:2.0.3-sdk AS builder  
    WORKDIR /sln  
    COPY ./aspnetcore-in-docker.sln ./NuGet.config  ./
    
    # Copy all the csproj files and restore to cache the layer for faster builds
    # The dotnet_build.sh script does this anyway, so superfluous, but docker can 
    # cache the intermediate images so _much_ faster
    COPY ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj  ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj  
    COPY ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj  ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj  
    COPY ./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj  ./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj  
    RUN dotnet restore
    
    COPY ./test ./test  
    COPY ./src ./src  
    RUN dotnet build -c Release --no-restore
    
    RUN dotnet test "./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj" -c Release --no-build --no-restore
    
    RUN dotnet publish "./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj" -c Release -o "../../dist" --no-restore
    
    #App image
    FROM microsoft/aspnetcore:2.0.3  
    WORKDIR /app  
    ENV ASPNETCORE_ENVIRONMENT Local  
    ENTRYPOINT ["dotnet", "AspNetCoreInDocker.Web.dll"]  
    COPY --from=builder /sln/dist .  
    

    As you can see, the first things we do are copy the .sln file and nuget.config files, followed by all the .csproj files. We can then run dotnet restore, before we copy the /src and /test folders.

    While this is great for optimising the dotnet restore step, it has a couple of minor downsides:

    1. You have to manually reference every .csproj (and .sln) file in the Dockerfile
    2. You create a new layer for every COPY command. (This is a very minor issue, as the layers don't take up much space, but it's a bit annoying)

    The ideal solution

    My first thought for optimising this process was to simply use wildcards to copy all the .csproj files at once. This would solve both of the issues outlined above. I'd hoped that all it would take would be the following:

    # Copy all csproj files (WARNING, this doesn't work!)
    COPY ./**/*.csproj ./  
    

    Unfortunately, while COPY does support wildcard expansion, the above snippet doesn't do what you'd like it to. Instead of copying each of the .csproj files into their respective folders in the Docker image, they're dumped into the root folder instead!

    Optimising ASP.NET Core apps in Docker - avoiding manually copying csproj files

    The problem is that the wildcard expansion happens before the files are copied, rather than by the COPY command itself. Consequently, you're effectively running:

    # Copy all csproj files (WARNING, this doesn't work!)
    # COPY ./**/*.csproj ./
    COPY ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj ./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj ./  
    

    i.e. copy the three .csproj files into the root folder. It sucks that this doesn't work, but you can read more in the issue on GitHub, including how there are no plans to fix it 🙁

    The solution - tarball up the csproj

    The solution I'm using to the problem is a bit hacky, and has some caveats, but it's the only one I could find that works. It goes like this:

    1. Create a tarball of the .csproj files before calling docker build.
    2. In the Dockerfile, expand the tarball into the root directory
    3. Run dotnet restore
    4. After the docker file is built, delete the tarball

    Essentially, we're using other tools for bundling up the .csproj files, rather than trying to use the capabilities of the Dockerfile format. The big disadvantage with this approach is that it makes running the build a bit more complicated. You'll likely want to use a build script file, rather than simply calling docker build . directly. Similarly, this means you won't be able to use the automated builds feature of DockerHub.

    For me, those are easy tradeoffs, as I typically use a build script anyway. The solution in this post just adds a few more lines to it.

    1. Create a tarball of your project files

    If you're not familiar with Linux, a tarball is simply a way of packaging up multiple files into a single file, just like a .zip file. You can package and unpackage files using the tar command, which has a daunting array of options.

    There's a plethora of different ways we could add all our .csproj files to a .tar file, but the following is what I used. I'm not a Linux guy, so any improvements would be gratefully received 🙂

    find . -name "*.csproj" -print0 \  
        | tar -cvf projectfiles.tar --null -T -
    

    Note: Don't use the -z parameter here to GZIP the file. Including it causes Docker to never cache the COPY command (shown below) which completely negates all the benefits of copying across the .csproj files first!

    This actually uses the find command to iterate through subdirectories, list out all the .csproj files, and pipe them to the tar command. The tar command writes them all to a file called projectfiles.tar in the root directory.

    2. Expand the tarball in the Dockerfile and call dotnet restore

    When we call docker build . from our build script, the projectfiles.tar file will be available to copy in our Dockerfile. Instead of having to individually copy across every .csproj file, we can copy across just our .tar file, and then expand it in the root directory.

    The first part of our Dockerfile then becomes:

    FROM microsoft/aspnetcore-build:2.0.3 AS builder  
    WORKDIR /sln  
    COPY ./aspnetcore-in-docker.sln ./NuGet.config  ./
    
    COPY projectfiles.tar .  
    RUN tar -xvf projectfiles.tar  
    RUN dotnet restore
    
    # The rest of the build process
    

    Now, it doesn't matter how many new projects we add or delete, we won't need to touch the Dockerfile.

    3. Delete the old projectfiles.tar

    The final step is to delete the old projectfiles.tar after the build has finished. This is sort of optional - if the file already exists the next time you run your build script, tar will just overwrite the existing file.

    If you want to delete the file, you can use

    rm projectfiles.tar  
    

    at the end of your build script. Either way, it's best to add projectfiles.tar as an ignored file in your .gitignore file, to avoid accidentally committing it to source control.

    Further optimisation - tar all the things!

    We've come this far, so why not go a step further! As we're already taking the hit of using tar to create and extract an archive, we may as well package up everything we need to run dotnet restore, i.e. the .sln and NuGet.config files. That lets us do a couple more optimisations in the Dockerfile.

    All we need to change is to add "OR" clauses to the find command in our build script (urgh, so ugly):

    find . \( -name "*.csproj" -o -name "*.sln" -o -name "NuGet.config" \) -print0 \  
        | tar -cvf projectfiles.tar --null -T -
    

    and then we can remove the COPY ./aspnetcore-in-docker.sln ./NuGet.config ./ line from our Dockerfile.

    The very last optimisation I want to make is to combine the layer that expands the .tar file with the line that runs dotnet restore by using the && operator. Given the latter is dependent on the first, there's no advantage to caching them separately, so we may as well inline it:

    RUN tar -xvf projectfiles.tar && dotnet restore  
    

    Putting it all together - the build script and Dockerfile

    And we're all done! For completeness, the final build script and Dockerfile are shown below. This is functionally identical to the Dockerfile we started with, but it's now optimised to better handle changes to our ASP.NET Core app. If we add or remove a project from our app, we won't have to touch the Dockerfile, which is great! 🙂

    The build script:

    #!/bin/bash
    set -eux
    
    # tarball csproj files, sln files, and NuGet.config
    find . \( -name "*.csproj" -o -name "*.sln" -o -name "NuGet.config" \) -print0 \  
        | tar -cvf projectfiles.tar --null -T -
    
    docker build  .
    
    rm projectfiles.tar  
    

    The Dockerfile

    # Build image
    FROM microsoft/aspnetcore-build:2.0.3 AS builder  
    WORKDIR /sln
    
    COPY projectfiles.tar .  
    RUN tar -xvf projectfiles.tar && dotnet restore
    
    COPY ./test ./test  
    COPY ./src ./src  
    RUN dotnet build -c Release --no-restore
    
    RUN dotnet test "./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj" -c Release --no-build --no-restore
    
    RUN dotnet publish "./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj" -c Release -o "../../dist" --no-restore
    
    #App image
    FROM microsoft/aspnetcore:2.0.3  
    WORKDIR /app  
    ENV ASPNETCORE_ENVIRONMENT Local  
    ENTRYPOINT ["dotnet", "AspNetCoreInDocker.Web.dll"]  
    COPY --from=builder /sln/dist .  
    

    Summary

    In this post I showed how you can use tar to package up your ASP.NET Core .csproj files to send to Docker. This lets you avoid having to specify every project file explicitly in your Dockerfile.


    Damien Bowden: Adding HTTP Headers to improve Security in an ASP.NET MVC Core application

    This article shows how to add headers to an HTTPS response for an ASP.NET Core MVC application. The HTTP headers help protect against some of the attacks which can be executed against a website. securityheaders.io, as well as the F12 developer tools in the browser, is used to test and validate the HTTP headers. NWebSec is used to add most of the HTTP headers which improve security for the MVC application. Thanks to Scott Helme for creating securityheaders.io, and André N. Klingsheim for creating NWebSec.

    Code: https://github.com/damienbod/AspNetCoreHybridFlowWithApi

    History

    2018-05-07 Updated to .NET Core 2.1 preview 2, new Identity Views, 2FA Authenticator, IHttpClientFactory, bootstrap 4.1.0

    2018-02-09: Updated, added feedback from different sources, removing extra headers, add form actions to the CSP configuration, adding info about CAA.

    A simple ASP.NET Core MVC application was created and deployed to Azure. securityheaders.io can be used to validate the headers in the application. The deployed application used in this post can be found here: https://webhybridclient20180206091626.azurewebsites.net/status/test

    Testing the default application using securityheaders.io shows that there is some room for improvement.

    Fixing this in ASP.NET Core is pretty easy due to NWebSec. Add the NuGet package to the project.

    <PackageReference Include="NWebsec.AspNetCore.Middleware" Version="2.0.0" />
    

    Or install it using the NuGet Package Manager in Visual Studio.

    Add the Strict-Transport-Security Header

    By using HSTS, you can force all communication to be done over HTTPS. If you want to force HTTPS even on the first request from the browser, you can use HSTS preload: https://hstspreload.appspot.com

    app.UseHsts(hsts => hsts.MaxAge(365).IncludeSubdomains());
    

    https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security
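
    If you intend to submit your domain to the HSTS preload list, NWebSec can also emit the preload directive. This is a minimal sketch, assuming the Preload() option in NWebsec.AspNetCore.Middleware; only add it once you are sure the whole site and all subdomains work over HTTPS.

    app.UseHsts(hsts => hsts.MaxAge(365).IncludeSubdomains().Preload());
    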

    Add the X-Content-Type-Options Header

    The X-Content-Type-Options header can be set to nosniff to prevent content sniffing.

    app.UseXContentTypeOptions();
    

    https://www.keycdn.com/support/what-is-mime-sniffing/

    https://en.wikipedia.org/wiki/Content_sniffing

    Add the Referrer Policy Header

    This allows us to restrict the amount of information passed on to other sites when linking to them. Here it is set to no-referrer.

    app.UseReferrerPolicy(opts => opts.NoReferrer());
    

    https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy

    Scott Helme wrote a really good post on this:
    https://scotthelme.co.uk/a-new-security-header-referrer-policy/

    Add the X-XSS-Protection Header

    The HTTP X-XSS-Protection response header is a feature of Internet Explorer, Chrome and Safari that stops pages from loading when they detect reflected cross-site scripting (XSS) attacks. (Text copied from the MDN page linked below.)

    app.UseXXssProtection(options => options.EnabledWithBlockMode());
    

    https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection

    Add the X-Frame-Options Header

    You can use the X-Frame-Options header to block framing of your pages and prevent clickjacking attacks.

    app.UseXfo(options => options.Deny());
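
    If your application needs to frame its own pages, you could allow same-origin framing instead of denying all framing. A minimal sketch, assuming the SameOrigin() option in NWebsec.AspNetCore.Middleware:

    app.UseXfo(options => options.SameOrigin());
    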
    

    Add the Content-Security-Policy Header

    Content Security Policy can be used to prevent all sorts of attacks: XSS, clickjacking, or mixed content (HTTPS and HTTP). The following configuration works for ASP.NET Core MVC applications: mixed content is blocked, styles can be read from unsafe inline (due to the Razor controls and tag helpers), and everything else can only be loaded from the same origin.

    app.UseCsp(opts => opts
    	.BlockAllMixedContent()
    	.StyleSources(s => s.Self())
    	.StyleSources(s => s.UnsafeInline())
    	.FontSources(s => s.Self())
    	.FormActions(s => s.Self())
    	.FrameAncestors(s => s.Self())
    	.ImageSources(s => s.Self())
    	.ScriptSources(s => s.Self())
    );
    

    Due to this CSP configuration, the public CDNs, which are included by default in the dotnet template for an ASP.NET Core MVC application, need to be removed from the MVC application.
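
    Alternatively, if you want to keep using a CDN, you could whitelist it explicitly in the policy instead of removing it. The following is only a sketch using NWebSec's CustomSources; the CDN host is an illustrative example:

    app.UseCsp(opts => opts
    	.BlockAllMixedContent()
    	.ScriptSources(s => s.Self().CustomSources("https://ajax.aspnetcdn.com"))
    	.StyleSources(s => s.Self().CustomSources("https://ajax.aspnetcdn.com"))
    );
    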

    https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP

    NWebSec configuration in the Startup

    //Registered before static files to always set header
    app.UseHsts(hsts => hsts.MaxAge(365).IncludeSubdomains());
    app.UseXContentTypeOptions();
    app.UseReferrerPolicy(opts => opts.NoReferrer());
    app.UseXXssProtection(options => options.EnabledWithBlockMode());
    app.UseXfo(options => options.Deny());
    
    app.UseCsp(opts => opts
    	.BlockAllMixedContent()
    	.StyleSources(s => s.Self())
    	.StyleSources(s => s.UnsafeInline())
    	.FontSources(s => s.Self())
    	.FormActions(s => s.Self())
    	.FrameAncestors(s => s.Self())
    	.ImageSources(s => s.Self())
    	.ScriptSources(s => s.Self())
    );
    
    app.UseStaticFiles();
    

    When the application is tested again, things look much better.

    Or view the headers in the browser, for example by pressing F12 in Chrome and opening the network view.

    Here are the securityheaders.io test results for this demo:

    https://securityheaders.io/?q=https%3A%2F%2Fwebhybridclient20180206091626.azurewebsites.net%2Fstatus%2Ftest&followRedirects=on

    Removing the extra information from the Headers

    You could also remove extra information from the HTTP response headers, for example X-Powered-By or Server, so that less information is sent to the client.

    Remove the Server header from Kestrel by using the UseKestrel extension method.

    .UseKestrel(c => c.AddServerHeader = false)
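
    For context, this option is set on the web host builder in Program.cs. A minimal sketch for an ASP.NET Core 2.x application, assuming the standard Startup class from the template:

    using Microsoft.AspNetCore;
    using Microsoft.AspNetCore.Hosting;
    
    public class Program
    {
    	public static void Main(string[] args)
    	{
    		BuildWebHost(args).Run();
    	}
    
    	public static IWebHost BuildWebHost(string[] args) =>
    		WebHost.CreateDefaultBuilder(args)
    			// don't add the "Server: Kestrel" header to responses
    			.UseKestrel(c => c.AddServerHeader = false)
    			.UseStartup<Startup>()
    			.Build();
    }
    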
    

    Add a web.config to your project with the following settings:

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <system.web>
        <httpRuntime enableVersionHeader="false"/>
      </system.web>
      <system.webServer>
        <security>
          <requestFiltering removeServerHeader="true" />
        </security>
        <httpProtocol>
          <customHeaders>
            <remove name="X-Powered-By"/>
          </customHeaders>
        </httpProtocol>
      </system.webServer>
    </configuration>
    
    

    Now, by viewing the response in the browser, you can see that some unneeded headers have been removed.


    Further steps in hardening the application:

    Use CAA

    You can restrict which certificate authorities are allowed to issue certificates for your domain. This reduces the risk that a different certificate authority issues a certificate for your domain to someone else. This can be checked here:

    https://toolbox.googleapps.com/apps/dig/

    Or configured here:
    https://sslmate.com/caa/

    Then add the CAA record with your DNS or hosting provider.

    Use a WAF

    You could also add a WAF, for example to only expose public URLs and not private ones, or to protect against DDoS attacks.

    Certificate testing

    The certificate should also be tested and validated.

    https://www.ssllabs.com is a good test tool.

    Here’s the result for the cert used in the demo project.

    https://www.ssllabs.com/ssltest/analyze.html?d=webhybridclient20180206091626.azurewebsites.net

    I would be grateful for feedback, or suggestions to improve this.

    Links:

    https://securityheaders.io

    https://docs.nwebsec.com/en/latest/

    https://github.com/NWebsec/NWebsec

    https://www.troyhunt.com/shhh-dont-let-your-response-headers/

    https://anthonychu.ca/post/aspnet-core-csp/

    https://rehansaeed.com/content-security-policy-for-asp-net-mvc/

    https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security

    https://www.troyhunt.com/the-6-step-happy-path-to-https/

    https://www.troyhunt.com/understanding-http-strict-transport/

    https://hstspreload.appspot.com

    https://geekflare.com/http-header-implementation/

    https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options

    https://docs.microsoft.com/en-us/aspnet/core/tutorials/publish-to-azure-webapp-using-vs

    https://developer.mozilla.org/en-US/docs/Web/HTTP/Public_Key_Pinning

    https://toolbox.googleapps.com/apps/dig/

    https://sslmate.com/caa/


    Dominick Baier: NDC London 2018 Artefacts

    “IdentityServer v2 on ASP.NET Core v2: An update” video

    “Authorization is hard! (aka the PolicyServer announcement)” video

    DotNetRocks interview audio

     


    Anuraj Parameswaran: Deploying Your Angular Application To Azure

    This post is about deploying your Angular application to Azure App Service. Unlike earlier versions of Angular JS, Angular CLI is the preferred way to develop and deploy Angular applications. In this post I will show you how to build a CI/CD pipeline with GitHub and Kudu, which will deploy your Angular application to an Azure Web App. I am using an ASP.NET Core Web API application as the backend, and an Angular application as the frontend.


    Anuraj Parameswaran: Anti forgery validation with ASP.NET MVC and Angular

    This post is about how to implement anti-forgery validation with ASP.NET MVC and Angular. The anti-forgery token can be used to help protect your application against cross-site request forgery. To use this feature, call the AntiForgeryToken method from a form and add the ValidateAntiForgeryTokenAttribute attribute to the action method that you want to protect.
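
    As a minimal sketch of the MVC side (the controller, action and parameter names here are only placeholders), the Razor form renders the hidden token with @Html.AntiForgeryToken(), and the attribute validates it on the POST:

    using System.Web.Mvc;
    
    public class ProfileController : Controller
    {
    	[HttpGet]
    	public ActionResult Edit()
    	{
    		// the Razor view calls @Html.AntiForgeryToken() inside the <form>
    		return View();
    	}
    
    	[HttpPost]
    	[ValidateAntiForgeryToken] // rejects requests without a valid anti-forgery token
    	public ActionResult Edit(string displayName)
    	{
    		// save the changes here
    		return RedirectToAction("Edit");
    	}
    }
    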


    Damien Bowden: Securing an ASP.NET Core MVC application which uses a secure API

    The article shows how an ASP.NET Core MVC application can implement security when using an API to retrieve data. The OpenID Connect Hybrid flow is used to secure the ASP.NET Core MVC application. The application uses tokens stored in a cookie. This cookie is not used to access the API. The API is protected using a bearer token.

    To access the API, the code running on the server of the ASP.NET Core MVC application implements the OAuth2 client credentials flow to get an access token for the API, and can then return the data to the Razor views.

    Code: https://github.com/damienbod/AspNetCoreHybridFlowWithApi

    History

    2018-05-07 Updated to .NET Core 2.1 preview 2, new Identity Views, 2FA Authenticator, IHttpClientFactory, bootstrap 4.1.0

    Setup

    IdentityServer4 and OpenID connect flow configuration

    Two client configurations are set up in the IdentityServer4 configuration class. The OpenID Connect Hybrid Flow client is used for the ASP.NET Core MVC application. This flow, after a successful login, will return a cookie to the client part of the application which contains the tokens. The second client is used for the API. This is a service-to-service communication between two trusted applications, which usually happens in a protected zone. The API client uses a secret to connect to the API. This secret should be kept confidential and be different for each deployment.

    public static IEnumerable<Client> GetClients()
    {
    	return new List<Client>
    	{
    		new Client
    		{
    			ClientName = "hybridclient",
    			ClientId = "hybridclient",
    			ClientSecrets = {new Secret("hybrid_flow_secret".Sha256()) },
    			AllowedGrantTypes = GrantTypes.Hybrid,
    			AllowOfflineAccess = true,
    			RedirectUris = { "https://localhost:44329/signin-oidc" },
    			PostLogoutRedirectUris = { "https://localhost:44329/signout-callback-oidc" },
    			AllowedCorsOrigins = new List<string>
    			{
    				"https://localhost:44329/"
    			},
    			AllowedScopes = new List<string>
    			{
    				IdentityServerConstants.StandardScopes.OpenId,
    				IdentityServerConstants.StandardScopes.Profile,
    				IdentityServerConstants.StandardScopes.OfflineAccess,
    				"scope_used_for_hybrid_flow",
    				"role"
    			}
    		},
    		new Client
    		{
    			ClientId = "ProtectedApi",
    			ClientName = "ProtectedApi",
    			ClientSecrets = new List<Secret> { new Secret { Value = "api_in_protected_zone_secret".Sha256() } },
    			AllowedGrantTypes = GrantTypes.ClientCredentials,
    			AllowedScopes = new List<string> { "scope_used_for_api_in_protected_zone" }
    		}
    	};
    }
    

    The GetApiResources method defines the scopes and the APIs for the different resources. I usually define one scope per API resource.

    public static IEnumerable<ApiResource> GetApiResources()
    {
    	return new List<ApiResource>
    	{
    		new ApiResource("scope_used_for_hybrid_flow")
    		{
    			ApiSecrets =
    			{
    				new Secret("hybrid_flow_secret".Sha256())
    			},
    			UserClaims = { "role", "admin", "user", "some_api" }
    		},
    		new ApiResource("ProtectedApi")
    		{
    			DisplayName = "API protected",
    			ApiSecrets =
    			{
    				new Secret("api_in_protected_zone_secret".Sha256())
    			},
    			Scopes =
    			{
    				new Scope
    				{
    					Name = "scope_used_for_api_in_protected_zone",
    					ShowInDiscoveryDocument = false
    				}
    			},
    			UserClaims = { "role", "admin", "user", "safe_zone_api" }
    		}
    	};
    }
    

    Securing the Resource API

    The protected API uses the IdentityServer4.AccessTokenValidation NuGet package to validate the access token. This uses the introspection endpoint to validate the token. The scope is also validated in this example using authorization policies from ASP.NET Core.

    public void ConfigureServices(IServiceCollection services)
    {
    	services.AddAuthentication(IdentityServerAuthenticationDefaults.AuthenticationScheme)
    	  .AddIdentityServerAuthentication(options =>
    	  {
    		  options.Authority = "https://localhost:44352";
    		  options.ApiName = "ProtectedApi";
    		  options.ApiSecret = "api_in_protected_zone_secret";
    		  options.RequireHttpsMetadata = true;
    	  });
    
    	services.AddAuthorization(options =>
    		options.AddPolicy("protectedScope", policy =>
    		{
    			policy.RequireClaim("scope", "scope_used_for_api_in_protected_zone");
    		})
    	);
    
    	services.AddMvc();
    }
    

    The API is protected using the Authorize attribute and checks the defined policy. If this is ok, the data can be returned to the server part of the MVC application.

    [Authorize(Policy = "protectedScope")]
    [Route("api/[controller]")]
    public class ValuesController : Controller
    {
    	[HttpGet]
    	public IEnumerable<string> Get()
    	{
    		return new string[] { "data 1 from the second api", "data 2 from the second api" };
    	}
    }
    

    Securing the ASP.NET Core MVC application

    The ASP.NET Core MVC application uses OpenID Connect to validate the user and the application, and saves the result in a cookie. If the identity is valid, the tokens are returned in the cookie from the server side of the application. See the OpenID Connect specification for more information concerning the OpenID Connect Hybrid flow.

    public void ConfigureServices(IServiceCollection services)
    {
    	services.AddAuthentication(options =>
    	{
    		options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    		options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    	})
    	.AddCookie()
    	.AddOpenIdConnect(options =>
    	{
    		options.SignInScheme = "Cookies";
    		options.Authority = "https://localhost:44352";
    		options.RequireHttpsMetadata = true;
    		options.ClientId = "hybridclient";
    		options.ClientSecret = "hybrid_flow_secret";
    		options.ResponseType = "code id_token";
    		options.Scope.Add("scope_used_for_hybrid_flow");
    		options.Scope.Add("profile");
    		options.SaveTokens = true;
    	});
    
    	services.AddAuthorization();
    
    	services.AddMvc();
    }
    

    The Configure method adds the authentication middleware to the pipeline before MVC, using the UseAuthentication extension method.

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
    	...
    
    	app.UseStaticFiles();
    
    	app.UseAuthentication();
    
    	app.UseMvc(routes =>
    	{
    		routes.MapRoute(
    			name: "default",
    			template: "{controller=Home}/{action=Index}/{id?}");
    	});
    }
    

    The home controller is protected using the Authorize attribute, and the Index method gets the data from the API using the API service.

    [Authorize]
    public class HomeController : Controller
    {
    	private readonly ApiService _apiService;
    
    	public HomeController(ApiService apiService)
    	{
    		_apiService = apiService;
    	}
    
    	public async System.Threading.Tasks.Task<IActionResult> Index()
    	{
    		var result = await _apiService.GetApiDataAsync();
    
    		ViewData["data"] = result.ToString();
    		return View();
    	}
    
    	public IActionResult Error()
    	{
    		return View(new ErrorViewModel { RequestId = Activity.Current?.Id ?? HttpContext.TraceIdentifier });
    	}
    }
    

    Calling the protected API from the ASP.NET Core MVC app

    The API service implements the HTTP request using the TokenClient from IdentityModel. This can be downloaded as a NuGet package. First the access token is acquired from the server, then the token is used to request the data from the API.

    Use IHttpClientFactory in the service via dependency injection. You also need to register it in the Startup services using AddHttpClient (see the sketch after the constructor below).

    private readonly IHttpClientFactory _clientFactory;
    
    public ApiService(
         IOptions<AuthConfigurations> authConfigurations, 
         IHttpClientFactory clientFactory)
    {
       _authConfigurations = authConfigurations;
       _clientFactory = clientFactory;
    }
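
    For completeness, the registration in the Startup of the MVC application might look like the following minimal sketch (the "AuthConfigurations" configuration section name is an assumption):

    public void ConfigureServices(IServiceCollection services)
    {
    	services.Configure<AuthConfigurations>(Configuration.GetSection("AuthConfigurations"));
    
    	// registers IHttpClientFactory, which ApiService receives via dependency injection
    	services.AddHttpClient();
    	services.AddTransient<ApiService>();
    
    	// ... authentication and MVC registrations as shown above ...
    }
    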
    

    And the HttpClient can be used to access the protected API.

    var discoClient = new DiscoveryClient(_authConfigurations.Value.StsServer);
    var disco = await discoClient.GetAsync();
    if (disco.IsError)
    {
    	throw new ApplicationException($"Status code: {disco.IsError}, Error: {disco.Error}");
    }
    
    var tokenClient = new TokenClient(disco.TokenEndpoint, "ProtectedApi", "api_in_protected_zone_secret");
    var tokenResponse = await tokenClient.RequestClientCredentialsAsync("scope_used_for_api_in_protected_zone");
    
    if (tokenResponse.IsError)
    {
    	throw new ApplicationException($"Status code: {tokenResponse.IsError}, Error: {tokenResponse.Error}");
    }
    
    var client = _clientFactory.CreateClient();
    
    client.BaseAddress = new Uri(_authConfigurations.Value.ProtectedApiUrl);
    client.SetBearerToken(tokenResponse.AccessToken);
    
    var response = await client.GetAsync("api/values");
    if (response.IsSuccessStatusCode)
    {
    	var responseContent = await response.Content.ReadAsStringAsync();
    	var data = JArray.Parse(responseContent);
    
    	return data;
    }
    
    throw new ApplicationException($"Status code: {response.StatusCode}, Error: {response.ReasonPhrase}");
    

    Authentication and Authorization in the API

    The ASP.NET Core MVC application calls the API using a service-to-service trusted association in the protected zone. Due to this, the identity which made the original request cannot be validated using the access token on the API. If authorization is required for the original identity, this should be sent in the URL of the API HTTP request, which can then be validated as required using an authorization filter. Maybe it is enough to validate that the service token is authenticated and authorized. Care should be taken when sending user data, due to GDPR requirements, or when sending user information which the IT admins should not have access to.

    Should I use the same token as the access token returned to the MVC client?

    This depends 🙂 If the API is a public API, then this is fine, provided you have no problem re-using the same token for different applications. If the API is in a protected zone, for example behind a WAF, then a separate token would be better. Only tokens issued for the trusted app can be used to access the protected API. This can be validated by using separate scopes, secrets, etc. The tokens issued for the MVC app and the user will not work; these were issued for a single purpose only, not for multiple applications. The token used for the protected API never leaves the trusted zone.

    Links

    https://docs.microsoft.com/en-gb/aspnet/core/mvc/overview

    https://docs.microsoft.com/en-gb/aspnet/core/security/anti-request-forgery

    https://docs.microsoft.com/en-gb/aspnet/core/security/

    http://openid.net/

    https://www.owasp.org/images/b/b0/Best_Practices_WAF_v105.en.pdf

    https://tools.ietf.org/html/rfc7662

    http://docs.identityserver.io/en/release/quickstarts/5_hybrid_and_api_access.html

    https://github.com/aspnet/Security

    https://elanderson.net/2017/07/identity-server-from-implicit-to-hybrid-flow/

    http://openid.net/specs/openid-connect-core-1_0.html#HybridFlowAuth


    Anuraj Parameswaran: Using Yarn with Angular CLI

    This post is about using Yarn in Angular CLI instead of NPM. Yarn is an alternative package manager for NPM packages with a focus on reliability and speed. It was released in October 2016 and has already gained a lot of traction and popularity in the JavaScript community.


    Anuraj Parameswaran: Measuring code coverage of .NET Core applications with Visual Studio 2017

    This post is about measuring code coverage of .NET Core applications with Visual Studio. Test coverage is a measure used to describe the degree to which the source code of a program is executed when a particular test suite runs. A program with high test coverage, measured as a percentage, has had more of its source code executed during testing, which suggests it has a lower chance of containing undetected software bugs compared to a program with low test coverage.


    Anuraj Parameswaran: Building Progressive Web apps with ASP.NET Core

    This post is about building Progressive Web Apps or PWAs with ASP.NET Core. Progressive Web Apps (PWAs) are web applications that are regular web pages or websites, but can appear to the user like traditional desktop or native mobile applications. The application type attempts to combine features offered by most modern browsers with the benefits of the mobile experience.


    Anuraj Parameswaran: How to launch different browsers from VS Code for debugging ASP.NET Core

    This post is about launching different browsers from VS Code while debugging ASP.NET Core. By default, when debugging an ASP.NET Core application, VS Code launches the default browser. There is a way to choose the browser you would like to use. Here is the code snippet which will add different debug configurations to VS Code.


    Anuraj Parameswaran: Connecting Localdb using Sql Server Management Studio

    This post is about connecting to and managing SQL Server LocalDB instances with SQL Server Management Studio. While working on an ASP.NET Core web application, I was using LocalDB, but when I tried to connect to it to modify the data, I couldn’t find it. Later, after exploring a little, I found one way of doing it.


    Anuraj Parameswaran: Runtime bundling and Minification in ASP.NET Core with Smidge

    This post is about enabling bundling and minification in ASP.NET Core with Smidge. A while back I wrote a post about bundling and minification in ASP.NET Core, but that happened at compile time or while publishing the app. Smidge helps you to enable bundling and minification at runtime, similar to earlier versions of ASP.NET MVC.


    Dominick Baier: Sponsoring IdentityServer

    Brock and I have been working on free identity & access control related libraries since 2009. This all started as a hobby project, and I can very well remember the day when I said to Brock that we can only really claim to understand the protocols if we implement them ourselves. That’s what we did.

    We are now at a point where the IdentityServer OSS project has reached enough significance and complexity that we need to find a sustainable way to manage it. This includes dealing with issues, questions and bug reports as well as feature and pull requests.

    That’s why we decided to set up a sponsorship page on Patreon. So if you like the project and want to support us, or even more importantly, if you work for a company that relies on IdentityServer, please consider supporting us. This will allow us to maintain this level of commitment.

    Thank you!

