Andrew Lock: Gotchas upgrading from IdentityServer 3 to IdentityServer 4

Gotchas upgrading from IdentityServer 3 to IdentityServer 4

This post covers a couple of gotchas I experienced upgrading an IdentityServer 3 implementation to IdentityServer 4. I've written about a previous issue I ran into with an OWIN app in this scenario - where JWTs could not be validated correctly after upgrading. In this post I'll discuss two other minor issues I ran into:

  1. The URL of the JSON Web Key Set (JWKS) has changed from /.well-known/jwks to /.well-known/openid-configuration/jwks.
  2. The KeyId of the X509 certificate signing material (used to validate the identity token) changes between IdentityServer 3 and IdentityServer 4. That means a token issued by IdentityServer 3 cannot be validated using the keys exposed by IdentityServer 4, leaving users stuck in a redirect loop.

Both of these issues are actually quite minor, and weren't difficult for us to solve - they just caused a bit of confusion initially! This is just a quick post about these problems - if you're looking for more information on upgrading from IdentityServer 3 to 4 in general, I suggest checking out the docs, the announcement post, or this article by Scott Brady.

1. The JWKS URL has changed

OpenID Connect uses a "discovery document" to describe the capabilities and settings of the server - in this case, IdentityServer. This includes things like the Claims and Scopes that are available and the supported grants and response types. It also includes a number of URLs indicating other available endpoints. As a very compressed example, it might look like the following:

{
    "issuer": "https://example.com",
    "jwks_uri": "https://example.com/.well-known/openid-configuration/jwks",
    "authorization_endpoint": "https://example.com/connect/authorize",
    "token_endpoint": "https://example.com/connect/token",
    "userinfo_endpoint": "https://example.com/connect/userinfo",
    "end_session_endpoint": "https://example.com/connect/endsession",
    "scopes_supported": [
        "openid",
        "profile",
        "email"
    ],
    "claims_supported": [
        "sub",
        "name",
        "family_name",
        "given_name"
    ],
    "grant_types_supported": [
        "authorization_code",
        "client_credentials",
        "refresh_token",
        "implicit"
    ],
    "response_types_supported": [
        "code",
        "token",
        "id_token",
        "id_token token",
    ],
    "id_token_signing_alg_values_supported": [
        "RS256"
    ],
    "code_challenge_methods_supported": [
        "plain",
        "S256"
    ]
}

The discovery document is always located at the URL /.well-known/openid-configuration, so a new client connecting to the server knows where to look, but the other endpoints are free to move, as long as the discovery document reflects that.

In our move from IdentityServer 3 to IdentityServer 4, the JWKS URL did just that - it moved from /.well-known/jwks to /.well-known/openid-configuration/jwks. The discovery document obviously reflected that, and all of the IdentityServer .NET client libraries for doing token validation, both with .NET Core and for OWIN, switched to the correct URLs without any problems.

What I didn't appreciate was that we had a Python app which was using IdentityServer for authentication, but which wasn't using the discovery document. Rather than go to the effort of calling the discovery document and parsing out the URL, and knowing that we controlled the IdentityServer implementation, the /.well-known/jwks URL had been hard-coded.

Oops!

Obviously it was a simple hack to update the hard-coded URL to the new location, though a much better solution would be to properly parse the discovery document.
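
For reference, resolving the URL via the discovery document only takes a few lines. Here's a minimal C# sketch of the idea (the Python app would do the equivalent with its own HTTP and JSON libraries - the JwksLocator class name and the authority URL are purely illustrative):

using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

// Hypothetical helper: resolve the jwks_uri from the discovery document
// instead of hard-coding the JWKS URL
public static class JwksLocator
{
    public static async Task<string> GetJwksUriAsync(string authority)
    {
        using (var client = new HttpClient())
        {
            // The discovery document always lives at this well-known path
            var discoveryUrl = authority.TrimEnd('/') + "/.well-known/openid-configuration";
            var json = await client.GetStringAsync(discoveryUrl);

            // Pull out the jwks_uri, wherever the server has decided to host it
            return (string)JObject.Parse(json)["jwks_uri"];
        }
    }
}

// Usage: var jwksUri = await JwksLocator.GetJwksUriAsync("https://example.com");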

2. The KeyId of the signing material has changed

This is a slightly complex issue, and I confess, this has been on my backlog to write up for so long that I can't remember all the details myself! I do, however, remember the symptom quite vividly - a crazy, endless redirect loop on the client!

The sequence of events looked something like this:

  1. The client side app authenticates with IdentityServer 3, obtaining an id token and an access token.
  2. Upgrade IdentityServer to IdentityServer 4.
  3. The client side app calls the API, which tries to validate the token using the public keys exposed by IdentityServer 4. However, the API can't find a key matching the one used to sign the token, so validation fails, causing a 401 redirect.
  4. The client side app handles the 401, and redirects to IdentityServer 4 to login.
  5. However, you're already logged in (the cookie persists across IdentityServer versions), so IdentityServer 4 redirects you back.
  6. Go to 4.


It's possible that this issue manifested as it did due to something awry in the client side app, but the root cause was that a token issued by IdentityServer 3 could not be validated using the exposed public keys of IdentityServer 4, even though both implementations were using the same signing material - an X509 certificate.

The same public and private key pair is used in both IdentityServer 3 and IdentityServer 4, but the key has a different identifier in each, so the token validator treats them as different keys.

In order to validate an access token, an app must obtain the public key material from IdentityServer, which it can use to confirm the token was signed with the associated private key. The public keys are exposed at the jwks endpoint (mentioned earlier), something like the following (truncated for brevity):

{
  "keys": [
    {
      "kty": "RSA",
      "use": "sig",
      "kid": "E23F0643F144C997D6FEEB320F00773286C2FB09",
      "x5t": "4j8GQ_FEyZfW_usyDwB3MobC-wk",
      "e": "AQAB",
      "n": "rHRhPtwUwp-i3lA_CINLooJygpJwukbw",
      "x5c": [
        "MIIDLjCCAhagAwIBAgIQ9tul\/q5XHX10l7GMTDK3zCna+mQ="
      ],
      "alg": "RS256"
    }
  ]
}

As you can see, this JSON object contains a keys property which is an array of objects (though we only have one here). Therefore, when validating an access token, the API server needs to know which key to use for the validation.

The JWT itself contains metadata indicating which signing material was used:

{
  "alg": "RS256",
  "kid": "E23F0643F144C997D6FEEB320F00773286C2FB09",
  "typ": "JWT",
  "x5t": "4j8GQ_FEyZfW_usyDwB3MobC-wk"
}

As you can see, there's a kid property (KeyId) which matches in both the jwks response and the value in the JWT header. The API token validator uses the kid contained in the JWT to locate the appropriate signing material from the jwks endpoint, and can confirm the access token hasn't been tampered with.

Unfortunately, the kid was not consistent across IdentityServer 3 and IdentityServer 4. When trying to validate a token issued by IdentityServer 3, the API was unable to find a matching key in the jwks document exposed by IdentityServer 4, and validation failed.

For those interested, IdentityServer 3 uses the base64url-encoded certificate thumbprint as the KeyId - Base64Url.Encode(x509Key.Certificate.GetCertHash()). IdentityServer 4 [uses X509SecurityKey.KeyId](https://github.com/IdentityServer/IdentityServer4/blob/993103d51bff929e4b0330f6c0ef9e3ffdcf8de3/src/IdentityServer4/ResponseHandling/DiscoveryResponseGenerator.cs#L316), which is slightly different - a base 16 (hex) encoded version of the hash.
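
To make the difference concrete, here's a minimal sketch of the two calculations for the same certificate. This is an illustration of the encoding difference rather than code from either IdentityServer; the .pfx path and password are placeholders, and Base64UrlEncoder comes from the Microsoft.IdentityModel.Tokens package:

using System;
using System.Security.Cryptography.X509Certificates;
using Microsoft.IdentityModel.Tokens;

public class Program
{
    public static void Main()
    {
        // Load the signing certificate (placeholder path and password)
        var cert = new X509Certificate2("signing-cert.pfx", "password");

        // IdentityServer 3 style: base64url-encoded SHA-1 certificate hash
        // e.g. "4j8GQ_FEyZfW_usyDwB3MobC-wk"
        var identityServer3Kid = Base64UrlEncoder.Encode(cert.GetCertHash());

        // IdentityServer 4 style: base 16 (hex) encoded hash, i.e. the certificate thumbprint
        // e.g. "E23F0643F144C997D6FEEB320F00773286C2FB09"
        var identityServer4Kid = cert.Thumbprint;

        Console.WriteLine($"IdentityServer 3 kid: {identityServer3Kid}");
        Console.WriteLine($"IdentityServer 4 kid: {identityServer4Kid}");
    }
}

Same certificate, two different kid values - which is exactly why old tokens can't be matched against the new jwks document.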

Our simple solution to this was to do the upgrade of IdentityServer out of hours - in the morning, the IdentityServer cookies had expired and so everyone had to re-authenticate anyway. IdentityServer 4 issued new access tokens with a kid that matched its jwks values, so there were no issues 🙂

In practice, this solution might not work for everyone, for example if you're not able to enforce a period of downtime. There are other options, like explicitly providing the kid yourself, as described in this issue, if you need them. If the kid doesn't change between versions, you shouldn't have any issues validating old tokens after the upgrade.

Alternatively, you could add the signing material to IdentityServer 4 using both the old and new kids. That way, IdentityServer 4 can validate tokens issued by IdentityServer 3 (using the old kid), while also issuing (and validating) new tokens using the new kid.

Summary

This post describes a couple of minor issues upgrading a deployment from IdentityServer 3 to IdentityServer 4. The first issue, the jwks URL changing, is not an issue I expect many people to run into - if you're using the discovery document you won't have this problem. The second issue is one you might run into when upgrading from IdentityServer 3 to IdentityServer 4 in production; even if you use the same X509 certificate in both implementations, tokens issued by IdentityServer 3 cannot be validated by IdentityServer 4 due to mismatched kids.


Andrew Lock: Optimising ASP.NET Core apps in Docker - avoiding manually copying csproj files

Optimising ASP.NET Core apps in Docker - avoiding manually copying csproj files

This post builds on my previous posts on building ASP.NET Core apps in Docker and using Cake in Docker. In this post I show how you can optimise your Dockerfiles for dotnet restore, without having to manually specify all your app's .csproj files in the Dockerfile.

Background - optimising your Dockerfile for dotnet restore

When building ASP.NET Core apps using Docker, there are many best-practices to consider. One of the most important aspects is using the correct base image - in particular, a base image containing the .NET SDK to build your app, and a base image containing only the .NET runtime to run your app in production.

In addition, there are a number of best practices which apply to Docker and the way it caches layers to build your app. I discussed this process in a previous post on building ASP.NET Core apps using Cake in Docker, so if that's new to you, I suggest checking it out.

A common way to take advantage of the build cache when building your ASP.NET Core app in Docker is to copy across only the .csproj, .sln and nuget.config files for your app before doing a restore, rather than the entire source code for your app. The NuGet package restore can be one of the slowest parts of the build, and it only depends on these files. By copying them first, Docker can cache the result of the restore, so it doesn't need to run again if all you do is change a .cs file.

For example, in a previous post I used the following Dockerfile for building an ASP.NET Core app with three projects - a class library, an ASP.NET Core app, and a test project:

# Build image
FROM microsoft/dotnet:2.0.3-sdk AS builder  
WORKDIR /sln  
COPY ./aspnetcore-in-docker.sln ./NuGet.config  ./

# Copy all the csproj files and restore to cache the layer for faster builds
# The dotnet_build.sh script does this anyway, so superfluous, but docker can 
# cache the intermediate images so _much_ faster
COPY ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj  ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj  
COPY ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj  ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj  
COPY ./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj  ./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj  
RUN dotnet restore

COPY ./test ./test  
COPY ./src ./src  
RUN dotnet build -c Release --no-restore

RUN dotnet test "./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj" -c Release --no-build --no-restore

RUN dotnet publish "./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj" -c Release -o "../../dist" --no-restore

#App image
FROM microsoft/aspnetcore:2.0.3  
WORKDIR /app  
ENV ASPNETCORE_ENVIRONMENT Local  
ENTRYPOINT ["dotnet", "AspNetCoreInDocker.Web.dll"]  
COPY --from=builder /sln/dist .  

As you can see, the first things we do are copy the .sln file and nuget.config files, followed by all the .csproj files. We can then run dotnet restore, before we copy the /src and /test folders.

While this is great for optimising the dotnet restore step, it has a couple of minor downsides:

  1. You have to manually reference every .csproj (and .sln) file in the Dockerfile
  2. You create a new layer for every COPY command. (This is a very minor issue, as the layers don't take up much space, but it's a bit annoying)

The ideal solution

My first thought for optimising this process was to simply use wildcards to copy all the .csproj files at once. This would solve both of the issues outlined above. I'd hoped that all it would take would be the following:

# Copy all csproj files (WARNING, this doesn't work!)
COPY ./**/*.csproj ./  

Unfortunately, while COPY does support wildcard expansion, the above snippet doesn't do what you'd like it to. Instead of copying each of the .csproj files into their respective folders in the Docker image, they're dumped into the root folder instead!


The problem is that the wildcard expansion happens before the files are copied, rather than by the COPY command itself. Consequently, you're effectively running:

# Copy all csproj files (WARNING, this doesn't work!)
# COPY ./**/*.csproj ./
COPY ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj ./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj ./  

i.e. copy the three .csproj files into the root folder. It sucks that this doesn't work, but you can read more in the issue on GitHub, including how there are no plans to fix it 🙁

The solution - tarball up the csproj

The solution I'm using to the problem is a bit hacky, and has some caveats, but it's the only one I could find that works. It goes like this:

  1. Create a tarball of the .csproj files before calling docker build.
  2. In the Dockerfile, expand the tarball into the root directory
  3. Run dotnet restore
  4. After the Docker image is built, delete the tarball

Essentially, we're using other tools for bundling up the .csproj files, rather than trying to use the capabilities of the Dockerfile format. The big disadvantage with this approach is that it makes running the build a bit more complicated. You'll likely want to use a build script file, rather than simply calling docker build .. Similarly, this means you won't be able to use the automated builds feature of DockerHub.

For me, those are easy tradeoffs, as I typically use a build script anyway. The solution in this post just adds a few more lines to it.

1. Create a tarball of your project files

If you're not familiar with Linux, a tarball is simply a way of packaging up multiple files into a single file, just like a .zip file. You can package and unpackage files using the tar command, which has a daunting array of options.

There's a plethora of different ways we could add all our .csproj files to a .tar file, but the following is what I used. I'm not a Linux guy, so any improvements would be gratefully received 🙂

find . -name "*.csproj" -print0 \  
    | tar -cvf projectfiles.tar --null -T -

Note: Don't use the -z parameter here to GZIP the file. Including it causes Docker to never cache the COPY command (shown below) which completely negates all the benefits of copying across the .csproj files first!

This actually uses the find command to iterate through subdirectories, list out all the .csproj files, and pipe them to the tar command. The tar command writes them all to a file called projectfiles.tar in the root directory.

2. Expand the tarball in the Dockerfile and call dotnet restore

When we call docker build . from our build script, the projectfiles.tar file will be available to copy in our Dockerfile. Instead of having to individually copy across every .csproj file, we can copy across just our .tar file, and then expand it in the root directory.

The first part of our Dockerfile then becomes:

FROM microsoft/aspnetcore-build:2.0.3 AS builder  
WORKDIR /sln  
COPY ./aspnetcore-in-docker.sln ./NuGet.config  ./

COPY projectfiles.tar .  
RUN tar -xvf projectfiles.tar  
RUN dotnet restore

# The rest of the build process

Now, it doesn't matter how many new projects we add or delete, we won't need to touch the Dockerfile.

3. Delete the old projectfiles.tar

The final step is to delete the old projectfiles.tar after the build has finished. This is sort of optional - if the file already exists the next time you run your build script, tar will just overwrite the existing file.

If you want to delete the file, you can use

rm projectfiles.tar  

at the end of your build script. Either way, it's best to add projectfiles.tar as an ignored file in your .gitignore file, to avoid accidentally committing it to source control.

Further optimisation - tar all the things!

We've come this far, why not go a step further! As we're already taking the hit of using tar to create and extract an archive, we may as well package everything we need to run dotnet restore, i.e. the .sln and NuGet.config files. That lets us do a couple more optimisations in the Dockerfile.

All we need to change, is to add "OR" clauses to the find command of our build script (urgh, so ugly):

find . \( -name "*.csproj" -o -name "*.sln" -o -name "NuGet.config" \) -print0 \  
    | tar -cvf projectfiles.tar --null -T -

and then we can remove the COPY ./aspnetcore-in-docker.sln ./NuGet.config ./ line from our Dockerfile.

The very last optimisation I want to make is to combine the layer that expands the .tar file with the line that runs dotnet restore by using the && operator. Given the latter is dependent on the first, there's no advantage to caching them separately, so we may as well inline it:

RUN tar -xvf projectfiles.tar && dotnet restore  

Putting it all together - the build script and Dockerfile

And we're all done! For completeness, the final build script and Dockerfile are shown below. This is functionally identical to the Dockerfile we started with, but it's now optimised to better handle changes to our ASP.NET Core app. If we add or remove a project from our app, we won't have to touch the Dockerfile, which is great! 🙂

The build script:

#!/bin/bash
set -eux

# tarball csproj files, sln files, and NuGet.config
find . \( -name "*.csproj" -o -name "*.sln" -o -name "NuGet.config" \) -print0 \  
    | tar -cvf projectfiles.tar --null -T -

docker build  .

rm projectfiles.tar  

The Dockerfile

# Build image
FROM microsoft/aspnetcore-build:2.0.3 AS builder  
WORKDIR /sln

COPY projectfiles.tar .  
RUN tar -xvf projectfiles.tar && dotnet restore

COPY ./test ./test  
COPY ./src ./src  
RUN dotnet build -c Release --no-restore

RUN dotnet test "./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj" -c Release --no-build --no-restore

RUN dotnet publish "./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj" -c Release -o "../../dist" --no-restore

#App image
FROM microsoft/aspnetcore:2.0.3  
WORKDIR /app  
ENV ASPNETCORE_ENVIRONMENT Local  
ENTRYPOINT ["dotnet", "AspNetCoreInDocker.Web.dll"]  
COPY --from=builder /sln/dist .  

Summary

In this post I showed how you can use tar to package up your ASP.NET Core .csproj files to send to Docker. This lets you avoid having to manually specify all the project files explicitly in your Dockerfile.


Damien Bowden: Adding HTTP Headers to improve Security in an ASP.NET MVC Core application

This article shows how to add headers to an HTTPS response for an ASP.NET Core MVC application. The HTTP headers help protect against some of the attacks which can be executed against a website. securityheaders.io is used to test and validate the HTTP headers, as well as F12 in the browser. NWebSec is used to add most of the HTTP headers which improve security for the MVC application. Thanks to Scott Helme for creating securityheaders.io, and André N. Klingsheim for creating NWebSec.

Code: https://github.com/damienbod/AspNetCoreHybridFlowWithApi

2018-02-09: Updated with feedback from different sources, removed extra headers, added form actions to the CSP configuration, added info about CAA.

A simple ASP.NET Core MVC application was created and deployed to Azure. securityheaders.io can be used to validate the headers in the application. The deployed application used in this post can be found here: https://webhybridclient20180206091626.azurewebsites.net/status/test

Testing the default application using securityheaders.io gives the following results with some room for improvement.

Fixing this in ASP.NET Core is pretty easy due to NWebSec. Add the NuGet package to the project.

<PackageReference Include="NWebsec.AspNetCore.Middleware" Version="1.1.0" />

Or using the NuGet Package Manager in Visual Studio

Add the Strict-Transport-Security Header

By using HSTS, you can force all communication to be done over HTTPS. If you want to force HTTPS on the first request from the browser, you can use the HSTS preload list: https://hstspreload.appspot.com

app.UseHsts(hsts => hsts.MaxAge(365).IncludeSubdomains());

https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security

Add the X-Content-Type-Options Header

The X-Content-Type-Options header can be set to nosniff to prevent content sniffing.

app.UseXContentTypeOptions();

https://www.keycdn.com/support/what-is-mime-sniffing/

https://en.wikipedia.org/wiki/Content_sniffing

Add the Referrer Policy Header

This allows us to restrict the amount of information being passed on to other sites when linking to them. Here it is set to no-referrer.

app.UseReferrerPolicy(opts => opts.NoReferrer());

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy

Scott Helme wrote a really good post on this:
https://scotthelme.co.uk/a-new-security-header-referrer-policy/

Add the X-XSS-Protection Header

The HTTP X-XSS-Protection response header is a feature of Internet Explorer, Chrome and Safari that stops pages from loading when they detect reflected cross-site scripting (XSS) attacks. (Text copied from here)

app.UseXXssProtection(options => options.EnabledWithBlockMode());

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection

Add the X-Frame-Options Header

You can use the X-Frame-Options header to block iframes and prevent clickjacking attacks.

app.UseXfo(options => options.Deny());

Add the Content-Security-Policy Header

Content Security Policy can be used to prevent all sorts of attacks: XSS, clickjacking, or mixed content (HTTPS and HTTP). The following configuration works for ASP.NET Core MVC applications: mixed content is blocked, styles can be read from unsafe inline (due to the Razor controls and tag helpers), and everything else can only be loaded from the same origin.

app.UseCsp(opts => opts
	.BlockAllMixedContent()
	.StyleSources(s => s.Self())
	.StyleSources(s => s.UnsafeInline())
	.FontSources(s => s.Self())
	.FormActions(s => s.Self())
	.FrameAncestors(s => s.Self())
	.ImageSources(s => s.Self())
	.ScriptSources(s => s.Self())
);

Due to this CSP configuration, the public CDN references, which are included by default in the dotnet template for an ASP.NET Core MVC application, need to be removed from the MVC application.

https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP

NWebSec configuration in the Startup

//Registered before static files to always set header
app.UseHsts(hsts => hsts.MaxAge(365).IncludeSubdomains());
app.UseXContentTypeOptions();
app.UseReferrerPolicy(opts => opts.NoReferrer());
app.UseXXssProtection(options => options.EnabledWithBlockMode());
app.UseXfo(options => options.Deny());

app.UseCsp(opts => opts
	.BlockAllMixedContent()
	.StyleSources(s => s.Self())
	.StyleSources(s => s.UnsafeInline())
	.FontSources(s => s.Self())
	.FormActions(s => s.Self())
	.FrameAncestors(s => s.Self())
	.ImageSources(s => s.Self())
	.ScriptSources(s => s.Self())
);

app.UseStaticFiles();

When the application is tested again, things look much better.

Or view the headers in the browser, for example F12 in Chrome, and then the network view:

Here’s the securityheaders.io test results for this demo.

https://securityheaders.io/?q=https%3A%2F%2Fwebhybridclient20180206091626.azurewebsites.net%2Fstatus%2Ftest&followRedirects=on

Removing the extra information from the headers

You could also remove the extra information from the HTTP response headers, for example X-Powered-By or Server, so that less information is sent to the client.

Remove the Server header from the Kestrel server by using the UseKestrel extension method.

.UseKestrel(c => c.AddServerHeader = false)

Add a web.config to your project with the following settings:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.web>
    <httpRuntime enableVersionHeader="false"/>
  </system.web>
  <system.webServer>
    <security>
      <requestFiltering removeServerHeader="true" />
    </security>
    <httpProtocol>
      <customHeaders>
        <remove name="X-Powered-By"/>
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>

Now by viewing the response in the browser, you can see some unrequired headers have been removed.


Further steps in hardening the application:

Use CAA

You can restrict your domain to a selected set of certificate authorities, i.e. you can control which authorities are allowed to issue the certs for your domain. This reduces the risk that another cert authority issues a cert for your domain to a different person. This can be checked here:

https://toolbox.googleapps.com/apps/dig/

Or configured here:
https://sslmate.com/caa/

Then add the CAA record at your hosting provider.

Use a WAF

You could also add a WAF, for example to only expose public URLs and not private ones, or protect against DDoS attacks.

Certificate testing

The certificate should also be tested and validated.

https://www.ssllabs.com is a good test tool.

Here’s the result for the cert used in the demo project.

https://www.ssllabs.com/ssltest/analyze.html?d=webhybridclient20180206091626.azurewebsites.net

I would be grateful for feedback, or suggestions to improve this.

Links:

https://securityheaders.io

https://docs.nwebsec.com/en/latest/

https://github.com/NWebsec/NWebsec

https://www.troyhunt.com/shhh-dont-let-your-response-headers/

https://anthonychu.ca/post/aspnet-core-csp/

https://rehansaeed.com/content-security-policy-for-asp-net-mvc/

https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security

https://www.troyhunt.com/the-6-step-happy-path-to-https/

https://www.troyhunt.com/understanding-http-strict-transport/

https://hstspreload.appspot.com

https://geekflare.com/http-header-implementation/

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options

https://docs.microsoft.com/en-us/aspnet/core/tutorials/publish-to-azure-webapp-using-vs

https://developer.mozilla.org/en-US/docs/Web/HTTP/Public_Key_Pinning

https://toolbox.googleapps.com/apps/dig/

https://sslmate.com/caa/


Dominick Baier: NDC London 2018 Artefacts

“IdentityServer v2 on ASP.NET Core v2: An update” video

“Authorization is hard! (aka the PolicyServer announcement)” video

DotNetRocks interview audio

 


Andrew Lock: Sharing appsettings.json configuration files between projects in ASP.NET Core

Sharing appsettings.json configuration files between projects in ASP.NET Core

A pattern that's common for some apps is the need to share settings across multiple projects. For example, imagine you have both an ASP.NET Core RazorPages app and an ASP.NET Core Web API app in the same solution:

Sharing appsettings.json configuration files between projects in ASP.NET Core

Each of the apps will have its own distinct configuration settings, but it's likely that there will also be settings common to both, like a connection string or logging settings for example.

Sensitive configuration settings like connection strings should only be stored outside the version control repository (for example in UserSecrets or Environment Variables) but hopefully you get the idea.

Rather than having to duplicate the same values in each app's appsettings.json, it can be useful to have a common shared .json file that all apps can use, in addition to their specific appsettings.json file.

In this post I show how you can extract common settings to a SharedSettings.json file, how to configure your projects to use them both when running locally with dotnet run, and how to handle the issues that arise after you publish your app!

The initial setup

If you create a new ASP.NET Core app from a template, it will use the WebHost.CreateDefaultBuilder(args) helper method to set up the web host. This uses a set of "sane" defaults to get you up and running quickly. While I often use this for quick demo apps, I prefer to use the long-hand approach to creating a WebHostBuilder in my production apps, as I think it's clearer to the next person what's going on.

As we're going to be modifying the ConfigureAppConfiguration call to add our shared configuration files, I'll start by modifying the apps to use the long-hand WebHostBuilder configuration. This looks something like the following (some details elided for brevity):

public class Program  
{
    public static void Main(string[] args) => BuildWebHost(args).Run();

    public static IWebHost BuildWebHost(string[] args) =>
        new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .ConfigureAppConfiguration((hostingContext, config) =>
            {
                // see below
            })
            .ConfigureLogging((ctx, log) => { /* elided for brevity */ })
            .UseDefaultServiceProvider((ctx, opts) => { /* elided for brevity */ })
            .UseStartup<Startup>()
            .Build();
}

We'll start by just using the standard appsettings.json file, and the environment-specific appsettings.{EnvironmentName}.json file, just as you would in a default ASP.NET Core app. I've included the environment variables in there as well for good measure, but it's the JSON files we're interested in for this post.

.ConfigureAppConfiguration((hostingContext, config) =>
{
    var env = hostingContext.HostingEnvironment;

    config.AddJsonFile("appsettings.json", optional: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

    config.AddEnvironmentVariables();
})

To give us something to test, I'll add some configuration values to the appsettings.json files for both apps. This will consist of a section with one value that should be the same for both apps, and one value that is app specific. So for the Web API app we have:

{
    "MySection": {
        "SharedValue": "This value is shared across both apps",
        "AppSpecificValue": "Value for Api"
    }
}

while for the Razor app we have:

{
    "MySection": {
        "SharedValue": "This value is shared across both apps",
        "AppSpecificValue": "Value for Razor app"
    }
}

Finally, so we can view the actual values received by the app, we'll just dump the configuration section to the screen in the Razor app with the following markup:

@page
@using Microsoft.Extensions.Configuration
@inject IConfiguration _configuration;

@foreach (var kvp in _configuration.GetSection("MySection").AsEnumerable())
{
    <p>@kvp.Key : @kvp.Value</p>
}

which, when run, gives

Sharing appsettings.json configuration files between projects in ASP.NET Core

With our apps primed and ready, we can start extracting the common settings to a shared file.

Extracting common settings to SharedSettings.json

The first question we need to ask is where are we going to actually put the shared file? Logically it doesn't belong to either app directly, so we'll move it outside of the two app folders. I created a folder called Shared at the same level as the project folders:

Sharing appsettings.json configuration files between projects in ASP.NET Core

Inside this folder I created a file called SharedSettings.json, and inside that I added the following JSON:

{
    "MySection": {
        "SharedValue": "This value is shared across both apps",
        "AppSpecificValue": "override me"
    }
}

Note, I added an AppSpecificValue setting here, just to show that the appsettings.json files will override it, but you could omit it completely from SharedSettings.json if there's no valid default value.

I also removed the SharedValue key from each app's appsettings.json file - the apps should use the value from SharedSettings.json instead. The appsettings.json file for the Razor app would be:

{
    "MySection": {
        "AppSpecificValue": "Value for Razor app"
    }
}

If we run the app now, we'll see that the shared value is no longer available, though the AppSpecificValue from appsettings.json is still there:

Sharing appsettings.json configuration files between projects in ASP.NET Core

Loading the SharedSettings.json in ConfigureAppConfiguration

At this point, we've extracted the common setting to SharedSettings.json, but we still need to configure our apps to load their configuration from that file as well. That's pretty straightforward - we just need to get the path to the file, and add it in our ConfigureAppConfiguration method, right before we add the appsettings.json files:

.ConfigureAppConfiguration((hostingContext, config) =>
{
    var env = hostingContext.HostingEnvironment;

    // find the shared folder in the parent folder
    var sharedFolder = Path.Combine(env.ContentRootPath, "..", "Shared");

    //load the SharedSettings first, so that appsettings.json overwrites it
    config
        .AddJsonFile(Path.Combine(sharedFolder, "SharedSettings.json"), optional: true)
        .AddJsonFile("appsettings.json", optional: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

    config.AddEnvironmentVariables();
})

Now if we run our app again, the setting's back:

Sharing appsettings.json configuration files between projects in ASP.NET Core

Great, it works!

Or does it?

While this works fine in development, we'll have a problem when we publish and deploy the app. The app is going to be looking for the SharedSettings.json file in a parent Shared folder, but that won't exist when we publish - the SharedSettings.json file isn't included in any project files, so as it stands you'd have to manually copy the Shared folder across when you publish. Yuk!

Publishing the SharedSettings.json file with your project.

There's a number of possible solutions to this problem. The one I've settled on isn't necessarily the best or the most elegant, but it works for me and is close to an approach I was using in ASP.NET.

To publish the SharedSettings.json file with each app, I create a link to the file in each app as described in this post, and set the CopyToPublishDirectory property to Always. That way, I can be sure that when the app is published, the SharedSettings.json file will be there in the output directory:

Sharing appsettings.json configuration files between projects in ASP.NET Core

However, that leaves us with a different problem. The SharedSettings.json file will be in a different place depending on whether you're running locally with dotnet run (in ../Shared) or running the published app with dotnet MyApp.Api.dll (in the working directory).

This is where things get a bit hacky.

For simplicity, rather than trying to work out in which context the app's running (I don't think that's directly possible), I simply try to load the file from both locations - one of them won't exist, but as long as we make the files "optional" that won't be an issue:

.ConfigureAppConfiguration((hostingContext, config) =>
{
    var env = hostingContext.HostingEnvironment;

    var sharedFolder = Path.Combine(env.ContentRootPath, "..", "Shared");

    config
        .AddJsonFile(Path.Combine(sharedFolder, "SharedSettings.json"), optional: true) // When running using dotnet run
        .AddJsonFile("SharedSettings.json", optional: true) // When app is published
        .AddJsonFile("appsettings.json", optional: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

    config.AddEnvironmentVariables();
})

It's not a particularly elegant solution, but it does the job for me. With the code in place we can now happily share settings across multiple apps, override them with app-specific values, and have the correct behaviour both when developing and after publishing.

Summary

This post showed how you can use a shared configuration file to share settings between multiple apps in a solution. By storing the configuration in a central JSON file accessible by both apps, you can avoid duplicating settings in appsettings.json.

Unfortunately this solution is a bit hacky due to the need to cater to the file being located at two different paths, depending on whether the app has been published or not. If anyone has a better solution, please let me know in the comments!

The sample code for this post can be found on GitHub.


Anuraj Parameswaran: Deploying Your Angular Application To Azure

This post is about deploying your Angular application to Azure App Service. Unlike earlier versions of AngularJS, Angular CLI is the preferred way to develop and deploy Angular applications. In this post I will show you how to build a CI/CD pipeline with GitHub and Kudu, which will deploy your Angular application to an Azure Web App. I am using an ASP.NET Core Web API application as the backend and an Angular application as the frontend.


Anuraj Parameswaran: Anti forgery validation with ASP.NET MVC and Angular

This post is about how to implement anti-forgery validation with ASP.NET MVC and Angular. The anti-forgery token can be used to help protect your application against cross-site request forgery. To use this feature, call the AntiForgeryToken method from a form and add the ValidateAntiForgeryTokenAttribute attribute to the action method that you want to protect.


Damien Bowden: Securing an ASP.NET Core MVC application which uses a secure API

The article shows how an ASP.NET Core MVC application can implement security when using an API to retrieve data. The OpenID Connect Hybrid flow is used to secure the ASP.NET Core MVC application. The application uses tokens stored in a cookie. This cookie is not used to access the API. The API is protected using a bearer token.

To access the API, the code running on the server of the ASP.NET Core MVC application implements the OAuth2 client credentials flow to get the access token for the API, and can then return the data to the Razor views.

Code: https://github.com/damienbod/AspNetCoreHybridFlowWithApi

Setup

IdentityServer4 and OpenID connect flow configuration

Two client configurations are set up in the IdentityServer4 configuration class. The OpenID Connect Hybrid Flow client is used for the ASP.NET Core MVC application. This flow, after a successful login, will return a cookie to the client part of the application which contains the tokens. The second client is used for the API. This is a service-to-service communication between two trusted applications. This usually happens in a protected zone. The API client uses a secret to connect to the API. The secret should be kept secret and should be different for each deployment.

public static IEnumerable<Client> GetClients()
{
	return new List<Client>
	{
		new Client
		{
			ClientName = "hybridclient",
			ClientId = "hybridclient",
			ClientSecrets = {new Secret("hybrid_flow_secret".Sha256()) },
			AllowedGrantTypes = GrantTypes.Hybrid,
			AllowOfflineAccess = true,
			RedirectUris = { "https://localhost:44329/signin-oidc" },
			PostLogoutRedirectUris = { "https://localhost:44329/signout-callback-oidc" },
			AllowedCorsOrigins = new List<string>
			{
				"https://localhost:44329/"
			},
			AllowedScopes = new List<string>
			{
				IdentityServerConstants.StandardScopes.OpenId,
				IdentityServerConstants.StandardScopes.Profile,
				IdentityServerConstants.StandardScopes.OfflineAccess,
				"scope_used_for_hybrid_flow",
				"role"
			}
		},
		new Client
		{
			ClientId = "ProtectedApi",
			ClientName = "ProtectedApi",
			ClientSecrets = new List<Secret> { new Secret { Value = "api_in_protected_zone_secret".Sha256() } },
			AllowedGrantTypes = GrantTypes.ClientCredentials,
			AllowedScopes = new List<string> { "scope_used_for_api_in_protected_zone" }
		}
	};
}

The GetApiResources method defines the scopes and the APIs for the different resources. I usually define one scope per API resource.

public static IEnumerable<ApiResource> GetApiResources()
{
	return new List<ApiResource>
	{
		new ApiResource("scope_used_for_hybrid_flow")
		{
			ApiSecrets =
			{
				new Secret("hybrid_flow_secret".Sha256())
			},
			Scopes =
			{
				new Scope
				{
					Name = "scope_used_for_hybrid_flow",
					DisplayName = "Scope for the scope_used_for_hybrid_flow ApiResource"
				}
			},
			UserClaims = { "role", "admin", "user", "some_api" }
		},
		new ApiResource("ProtectedApi")
		{
			DisplayName = "API protected",
			ApiSecrets =
			{
				new Secret("api_in_protected_zone_secret".Sha256())
			},
			Scopes =
			{
				new Scope
				{
					Name = "scope_used_for_api_in_protected_zone",
					ShowInDiscoveryDocument = false
				}
			},
			UserClaims = { "role", "admin", "user", "safe_zone_api" }
		}
	};
}

Securing the Resource API

The protected API uses the IdentityServer4.AccessTokenValidation NuGet package to validate the access token. This uses the introspection endpoint to validate the token. The scope is also validated in this example, using authorization policies from ASP.NET Core.

public void ConfigureServices(IServiceCollection services)
{
	services.AddAuthentication(IdentityServerAuthenticationDefaults.AuthenticationScheme)
	  .AddIdentityServerAuthentication(options =>
	  {
		  options.Authority = "https://localhost:44352";
		  options.ApiName = "ProtectedApi";
		  options.ApiSecret = "api_in_protected_zone_secret";
		  options.RequireHttpsMetadata = true;
	  });

	services.AddAuthorization(options =>
		options.AddPolicy("protectedScope", policy =>
		{
			policy.RequireClaim("scope", "scope_used_for_api_in_protected_zone");
		})
	);

	services.AddMvc();
}

The API is protected using the Authorize attribute, which checks the defined policy. If this succeeds, the data can be returned to the server part of the MVC application.

[Authorize(Policy = "protectedScope")]
[Route("api/[controller]")]
public class ValuesController : Controller
{
	[HttpGet]
	public IEnumerable<string> Get()
	{
		return new string[] { "data 1 from the second api", "data 2 from the second api" };
	}
}

Securing the ASP.NET Core MVC application

The ASP.NET Core MVC application uses OpenID Connect to validate the user and the application and saves the result in a cookie. If the identity is ok, the tokens are returned in the cookie from the server side of the application. See the OpenID Connect specification, for more information concerning the OpenID Connect Hybrid flow.

public void ConfigureServices(IServiceCollection services)
{
	services.AddAuthentication(options =>
	{
		options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
		options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
	})
	.AddCookie()
	.AddOpenIdConnect(options =>
	{
		options.SignInScheme = "Cookies";
		options.Authority = "https://localhost:44352";
		options.RequireHttpsMetadata = true;
		options.ClientId = "hybridclient";
		options.ClientSecret = "hybrid_flow_secret";
		options.ResponseType = "code id_token";
		options.Scope.Add("scope_used_for_hybrid_flow");
		options.Scope.Add("profile");
		options.SaveTokens = true;
	});

	services.AddAuthorization();

	services.AddMvc();
}

The Configure method adds the authentication middleware to the pipeline, before MVC, using the UseAuthentication extension method.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	...

	app.UseStaticFiles();

	app.UseAuthentication();

	app.UseMvc(routes =>
	{
		routes.MapRoute(
			name: "default",
			template: "{controller=Home}/{action=Index}/{id?}");
	});
}

The home controller is protected using the authorize attribute, and the index method gets the data from the API using the api service.

[Authorize]
public class HomeController : Controller
{
	private readonly ApiService _apiService;

	public HomeController(ApiService apiService)
	{
		_apiService = apiService;
	}

	public async System.Threading.Tasks.Task<IActionResult> Index()
	{
		var result = await _apiService.GetApiDataAsync();

		ViewData["data"] = result.ToString();
		return View();
	}

	public IActionResult Error()
	{
		return View(new ErrorViewModel { RequestId = Activity.Current?.Id ?? HttpContext.TraceIdentifier });
	}
}
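
One detail not shown above: for the HomeController to receive an ApiService, the service needs to be registered with the dependency injection container in ConfigureServices. The sample code linked above contains the full setup; the single line below is just a sketch, and the transient lifetime is an assumption:

public void ConfigureServices(IServiceCollection services)
{
	// Register the service which calls the protected API (lifetime chosen for illustration)
	services.AddTransient<ApiService>();

	// ... authentication and MVC registration as shown earlier
	services.AddMvc();
}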

Calling the protected API from the ASP.NET Core MVC app

The API service implements the HTTP request using the TokenClient from IdentityModel. This can be downloaded as a NuGet package. First the access token is acquired from the server, then the token is used to request the data from the API.

var discoClient = new DiscoveryClient("https://localhost:44352");
var disco = await discoClient.GetAsync();
if (disco.IsError)
{
	throw new ApplicationException($"Status code: {disco.IsError}, Error: {disco.Error}");
}

var tokenClient = new TokenClient(disco.TokenEndpoint, "ProtectedApi", "api_in_protected_zone_secret");
var tokenResponse = await tokenClient.RequestClientCredentialsAsync("scope_used_for_api_in_protected_zone");

if (tokenResponse.IsError)
{
	throw new ApplicationException($"Status code: {tokenResponse.IsError}, Error: {tokenResponse.Error}");
}

using (var client = new HttpClient())
{
	client.BaseAddress = new Uri("https://localhost:44342");
	client.SetBearerToken(tokenResponse.AccessToken);

	var response = await client.GetAsync("/api/values");
	if (response.IsSuccessStatusCode)
	{
		var responseContent = await response.Content.ReadAsStringAsync();
		var data = JArray.Parse(responseContent);

		return data;
	}

	throw new ApplicationException($"Status code: {response.StatusCode}, Error: {response.ReasonPhrase}");
}

Authentication and Authorization in the API

The ASP.NET Core MVC application calls the API using a service-to-service trusted association in the protected zone. Due to this, the identity which made the original request cannot be validated using the access token on the API. If authorization is required for the original identity, this should be sent in the URL of the API HTTP request, which can then be validated as required using an authorization filter. Maybe it is enough to validate that the service token is authenticated and authorized. Care should be taken when sending user data (GDPR requirements), or user information which the IT admins should not have access to.

Should I use the same token as the access token returned to the MVC client?

This depends 🙂 If the API is a public API, then this is fine, if you have no problem re-using the same token for different applications. If the API is in the protected zone, for example behind a WAF, then a separate token would be better. Only tokens issued for the trusted app can be used to access the protected API. This can be validated by using separate scopes, secrets, etc. The tokens issued for the MVC app and the user will not work; these were issued for a single purpose only, not for multiple applications. The token used for the protected API never leaves the trusted zone.

Links

https://docs.microsoft.com/en-gb/aspnet/core/mvc/overview

https://docs.microsoft.com/en-gb/aspnet/core/security/anti-request-forgery

https://docs.microsoft.com/en-gb/aspnet/core/security/

http://openid.net/

https://www.owasp.org/images/b/b0/Best_Practices_WAF_v105.en.pdf

https://tools.ietf.org/html/rfc7662

http://docs.identityserver.io/en/release/quickstarts/5_hybrid_and_api_access.html

https://github.com/aspnet/Security

https://elanderson.net/2017/07/identity-server-from-implicit-to-hybrid-flow/

http://openid.net/specs/openid-connect-core-1_0.html#HybridFlowAuth


Andrew Lock: ASP.NET Core in Action - MVC in ASP.NET Core

ASP.NET Core in Action - MVC in ASP.NET Core

In February 2017, the Manning Early Access Program (MEAP) started for the ASP.NET Core book I am currently writing - ASP.NET Core in Action. This post is a sample of what you can find in the book. If you like what you see, please take a look - for now you can even get a 37% discount with the code lockaspdotnet!

The Manning Early Access Program provides you full access to books as they are written. You get the chapters as they are produced, plus the finished eBook as soon as it’s ready, and the paper book long before it's in bookstores. You can also interact with the author (me!) on the forums to provide feedback as the book is being written.

The book is now finished and completely available in the MEAP, so now is the time to act if you're interested! Thanks 🙂

MVC in ASP.NET Core

As you may be aware, ASP.NET Core implements MVC using a single piece of middleware, which is normally placed at the end of the middleware pipeline, as shown in figure 1. Once a request has been processed by each middleware (and assuming none of them handle the request and short-circuit the pipeline), it is received by the MVC middleware.


Figure 1. The middleware pipeline. The MVC Middleware is typically configured as the last middleware in the pipeline.

Middleware often handles cross-cutting concerns or narrowly defined requests such as requests for files. For requirements that fall outside of these functions, or which have many external dependencies, a more robust framework is required. The MvcMiddleware in ASP.NET Core can provide this framework, allowing interaction with your application’s core business logic, and generation of a user interface. It handles everything from mapping the request to an appropriate controller, to generating the HTML or API response.

In the traditional description of the MVC design pattern, there is only a single type of model, which holds all the non-UI data and behavior. The controller updates this model as appropriate and then passes it to the view, which uses it to generate a UI. This simple, three-component pattern may be sufficient for some basic applications, but for more complex applications, it often doesn’t scale.

One of the problems when discussing MVC is the vague and overloaded terms that it uses, such as “controller” and “model.” Model, in particular, is such an overloaded term that it’s often difficult to be sure exactly what it refers to – is it an object, a collection of objects, an abstract concept? Even ASP.NET Core uses the word “model” to describe several related, but different, components, as you’ll see shortly.

Directing a request to a controller and building a binding model

The first step when the MvcMiddleware receives a request is the routing of the request to an appropriate controller. Let’s think about another page in our ToDo application. On this page, you’re displaying a list of items marked with a given category, assigned to a particular user. If you’re looking at the list of items assigned to the user “Andrew” with a category of “Simple,” you’d make a request to the URL /todo/list/Simple/Andrew.

Routing takes the path of the request, /todo/list/Simple/Andrew, and maps it against a preregistered list of patterns. These patterns match a path to a single controller class and action method.

DEFINITION An action (or action method) is a method that runs in response to a request. A controller is a class that contains a number of logically grouped action methods.

Once an action method is selected, the binding model (if applicable) is generated, based on the incoming request and the method parameters required by the action method, as shown in figure 2. A binding model is normally a standard class, with properties that map to the request data.

DEFINITION A binding model is an object that acts as a “container” for the data provided in a request which is required by an action method.


Figure 2. Routing a request to a controller, and building a binding model. A request to the URL /todo/list/Simple/Andrew results in the ListCategory action being executed, passing in a populated binding model

In this case, the binding model contains two properties: Category, which is “bound” to the value "Simple"; and the property User, which is bound to the value "Andrew". These values are provided in the request URL’s path and are used to populate a binding model of type TodoModel.

This binding model corresponds to the method parameter of the ListCategory action method. This binding model is passed to the action method when it executes, and it can be used to decide how to respond. For this example, the action method uses it to decide which ToDo items to display on the page.
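
As a concrete (and purely illustrative - this is not the book's actual listing) sketch, the binding model and action described above might look something like the following, assuming a route template of "todo/list/{category}/{user}":

using Microsoft.AspNetCore.Mvc;

// Binding model: properties map by name to the route values in the request path
public class TodoModel
{
    public string Category { get; set; }  // bound to "Simple"
    public string User { get; set; }      // bound to "Andrew"
}

public class TodoController : Controller
{
    [Route("todo/list/{category}/{user}")]
    public IActionResult ListCategory(TodoModel model)
    {
        // model.Category and model.User are populated from the URL /todo/list/Simple/Andrew
        return View(model);
    }
}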

Executing an action using the application model

The role of an action method in the controller is to coordinate the generation of a response to the request it’s handling. That means it should only perform a limited number of actions. In particular, it should:

  • Validate that the data contained in the binding model provided is valid for the request
  • Invoke the appropriate actions on the application model
  • Select an appropriate response to generate, based on the response from the application model


Figure 3. When executed, an action invokes the appropriate methods in the application model.

Figure 3 shows the action method invoking an appropriate method on the application model. Here you can see that the “application model” is a somewhat abstract concept, which encapsulates the remaining non-UI part of your application. It contains the domain model, a number of services, database interaction, and a few other things.

DEFINITION The domain model encapsulates complex business logic in a series of classes that don’t depend on any infrastructure and can be easily tested.

The action method typically calls into a single point in the application model. In our example of viewing a product page, the application model might use a variety of different services to check whether the user is allowed to view the product, to calculate the display price for the product, to load the details from the database, or to load a picture of the product from a file.

Assuming the request is valid, the application model returns the required details back to the action method. It’s then up to the action method to choose a response to generate.

Generating a response using a view model

Once the action method has called out to the application model that contains the application business logic, it’s time to generate a response. A view model captures the details necessary for the view to generate a response.

DEFINITION A view model is a simple object that contains data required by the view to render a UI. It’s typically some transformation of the data contained in the application model, plus extra information required to render the page, for example the page’s title.

The action method selects an appropriate view template and passes the view model to it. Each view is designed to work with a particular view model, which it uses to generate the final HTML response. Finally, this is sent back through the middleware pipeline and out to the user’s browser, as shown in figure 4.


Figure 4 The action method builds a view model, selects which view to use to generate the response, and passes it the view model. It is the view which generates the response itself.

It is important to note that although the action method selects which view to display, it doesn’t select what’s generated. It is the view itself that decides the content of the response.
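
Extending the earlier TodoModel sketch, the end-to-end shape of the action might look something like the following - again an illustration rather than the book's listing, with the items hard-coded to keep the sketch self-contained:

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

// View model: just the data the view needs to render the page
public class TodoListViewModel
{
    public string PageTitle { get; set; }
    public IReadOnlyList<string> Items { get; set; }
}

public class TodoController : Controller
{
    [Route("todo/list/{category}/{user}")]
    public IActionResult ListCategory(TodoModel model)
    {
        // In a real app the application model (services, database access) supplies the items
        var items = new List<string> { "Check emails", "Write blog post" };

        var viewModel = new TodoListViewModel
        {
            PageTitle = $"{model.Category} items for {model.User}",
            Items = items
        };

        // The action selects the view template; the view uses the view model to generate the HTML
        return View("ListCategory", viewModel);
    }
}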

Putting it all together: a complete MVC request

Now you’ve seen each of the steps that go into handling a request in ASP.NET Core using MVC, let’s put it all together from request to response. Figure 5 shows how each of the steps combine to handle the request to display the list of ToDos for user “Andrew” and category “Simple.” The traditional MVC pattern is still visible in ASP.NET Core, made up of the action/controller, the view, and the application model.


Figure 5 A complete MVC request for the list of ToDos in the “Simple” category for user “Andrew”

By now, you might be thinking this whole process seems rather convoluted – numerous steps to display some HTML! Why not allow the application model to create the view directly, rather than having to go on a dance back and forth with the controller/action method?

The key benefit throughout this process is the separation of concerns.

  • The view is responsible for taking some data and generating HTML.
  • The application model is responsible for executing the required business logic.
  • The controller is responsible for validating the incoming request and selecting the appropriate view to display, based on the output of the application model.

By having clearly-defined boundaries it’s easier to update and test each of the components without depending on any of the others. If your UI logic changes, you won’t necessarily need to modify any of your business logic classes, and you’re less likely to introduce errors in unexpected places.

That’s all for this article. For more information, read the free first chapter of ASP.NET Core in Action and see this Slideshare presentation.


Andrew Lock: Including linked files from outside the project directory in ASP.NET Core

Including linked files from outside the project directory in ASP.NET Core

This post is just a quick tip that I found myself using recently - including files in a project that are outside the project directory. I suspect this feature may have slipped under the radar for many people due to the slightly obscure UI hints you need to pick up on in Visual Studio.

Adding files from outside the project by copying

Sometimes, you might want to include an existing item in your ASP.NET Core apps that lives outside the project directory. You can easily do this from Visual Studio by right clicking the project you want to include it in, and selecting Add > Existing Item…


You're then presented with a file picker dialog, so you can navigate to the file, and choose Add. Visual Studio will spot that the file is outside the project directory and will copy it in.

Sometimes this is the behaviour you want, but often you want the original file to remain where it is and for the project to just point to it, not to create a copy.

Adding files from outside the project by linking

To add a file as a link, right click and choose Add > Existing Item… as before, but this time, don't click the Add button. Instead, click the little dropdown arrow next to the Add button and select Add as Link.


Instead of copying the file into the project directory, Visual Studio will create a link to the original. That way, if you modify the original file you'll immediately see the changes in your project.

Visual Studio shows linked items with a slightly different icon, as you can see below where SharedSettings.json is a linked file and appsettings.json is a normally added file:


Directly editing the csproj file

As you'd expect for ASP.NET Core projects, you don't need Visual Studio to get this behaviour. You can always directly edit the .csproj file yourself and add the necessary items by hand.

The exact code required depends on the type of file you're trying to link and the type of MSBuild action required. For example, if you want to include a .cs file, you would use the <Compile> element, nested in an <ItemGroup>:

<ItemGroup>  
  <Compile Include="..\OtherFolder\MySharedClass.cs" Link="MySharedClass.cs" />
</ItemGroup>  

Include gives the relative path to the file from the project folder, and the Link property tells MSBuild to add the file as a link, plus the name that should be used for it. If you change this file name, it will also change the filename as it's displayed in Visual Studio's Solution Explorer.
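
If you have several files to link, you can also combine a wildcard in the Include with MSBuild item metadata in the Link value. Something along these lines should work - the paths here are just an example:

<ItemGroup>
  <!-- Link every .cs file under ..\Shared, preserving the folder structure -->
  <Compile Include="..\Shared\**\*.cs" Link="Shared\%(RecursiveDir)%(Filename)%(Extension)" />
</ItemGroup>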

For content files like JSON configuration files, you would use the <Content> element, for example:

<ItemGroup>  
  <Content Include="..\Shared\SharedSettings.json" Link="SharedSettings.json" CopyToOutputDirectory="PreserveNewest" />
</ItemGroup>  

In this example, I also set the CopyToOutputDirectory to PreserveNewest, so that the file will be copied to the output directory when the project is built or published.

Summary

Using linked files can be handy when you want to share code or resources between multiple projects. Just be sure that the files are checked in to source control along with your project, otherwise you might get build errors when loading your projects!


Anuraj Parameswaran: Using Yarn with Angular CLI

This post is about using Yarn in Angular CLI instead of NPM. Yarn is an alternative package manager for NPM packages with a focus on reliability and speed. It was released in October 2016, and has already gained a lot of traction and enjoys great popularity in the JavaScript community.


Damien Bowden: Using the dotnet Angular template with Azure AD OIDC Implicit Flow

This article shows how to use Azure AD with an Angular application created using the Microsoft dotnet template, with the angular-auth-oidc-client npm package implementing the OpenID Connect Implicit Flow. The Angular app uses Bootstrap 4 and Angular CLI.

Code: https://github.com/damienbod/dotnet-template-angular

Setting up Azure AD

Log into https://portal.azure.com and click the Azure Active Directory button

Click App registrations and then the New application registration

Add an application name and set the URL to match the application URL. Click the create button.

Open the new application.

Click the Manifest button.

Set the oauth2AllowImplicitFlow to true.

Click the settings button and add the API Access required permissions as needed.

Now the Azure AD is ready to go. You will need to add the users which you want to log in with, and add them as admins if required. For example, I have added damien@damienbod.onmicrosoft.com as an owner.

dotnet Angular template from Microsoft

Install the latest version and create a new project.

Installation:
https://docs.microsoft.com/en-gb/aspnet/core/spa/index#installation

Docs:
https://docs.microsoft.com/en-gb/aspnet/core/spa/angular?tabs=visual-studio

The dotnet template uses Angular CLI and can be found in the ClientApp folder.

Update all the npm packages including the Angular CLI, and do an npm install, or use yarn to update the packages.

Add the angular-auth-oidc-client package, which implements the OIDC Implicit Flow for Angular applications.
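
For example, the package can be added with npm (or yarn); the version here just matches the package.json shown below:

npm install angular-auth-oidc-client@4.0.0 --save

The resulting package.json then looks something like this: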

{
  "name": "dotnet_angular",
  "version": "0.0.0",
  "license": "MIT",
  "scripts": {
    "ng": "ng",
    "start": "ng serve --extract-css",
    "build": "ng build --extract-css",
    "build:ssr": "npm run build -- --app=ssr --output-hashing=media",
    "test": "ng test",
    "lint": "ng lint",
    "e2e": "ng e2e"
  },
  "private": true,
  "dependencies": {
    "@angular-devkit/core": "0.0.28",
    "@angular/animations": "^5.2.1",
    "@angular/common": "^5.2.1",
    "@angular/compiler": "^5.2.1",
    "@angular/core": "^5.2.1",
    "@angular/forms": "^5.2.1",
    "@angular/http": "^5.2.1",
    "@angular/platform-browser": "^5.2.1",
    "@angular/platform-browser-dynamic": "^5.2.1",
    "@angular/platform-server": "^5.2.1",
    "@angular/router": "^5.2.1",
    "@nguniversal/module-map-ngfactory-loader": "^5.0.0-beta.5",
    "angular-auth-oidc-client": "4.0.0",
    "aspnet-prerendering": "^3.0.1",
    "bootstrap": "^4.0.0",
    "core-js": "^2.5.3",
    "es6-promise": "^4.2.2",
    "rxjs": "^5.5.6",
    "zone.js": "^0.8.20"
  },
  "devDependencies": {
    "@angular/cli": "1.6.5",
    "@angular/compiler-cli": "^5.2.1",
    "@angular/language-service": "^5.2.1",
    "@types/jasmine": "~2.8.4",
    "@types/jasminewd2": "~2.0.3",
    "@types/node": "~9.3.0",
    "codelyzer": "^4.1.0",
    "jasmine-core": "~2.9.1",
    "jasmine-spec-reporter": "~4.2.1",
    "karma": "~2.0.0",
    "karma-chrome-launcher": "~2.2.0",
    "karma-cli": "~1.0.1",
    "karma-coverage-istanbul-reporter": "^1.3.3",
    "karma-jasmine": "~1.1.1",
    "karma-jasmine-html-reporter": "^0.2.2",
    "protractor": "~5.2.2",
    "ts-node": "~4.1.0",
    "tslint": "~5.9.1",
    "typescript": "~2.6.2"
  }
}

Azure AD does not support CORS, so you have to GET the .well-known/openid-configuration for your tenant and add it to your application as a JSON file.

https://login.microsoftonline.com/damienbod.onmicrosoft.com/.well-known/openid-configuration

Do the same for the JWT keys:
https://login.microsoftonline.com/common/discovery/keys
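
For example, both documents could be downloaded with something like the following - the output file names just match the ones used later in this post:

curl https://login.microsoftonline.com/damienbod.onmicrosoft.com/.well-known/openid-configuration -o well-known-openid-configuration.json
curl https://login.microsoftonline.com/common/discovery/keys -o jwks.json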

Now change the jwks_uri in the downloaded openid-configuration JSON file to point to the downloaded version of the keys.

{
  "authorization_endpoint": "https://login.microsoftonline.com/a0958f45-195b-4036-9259-de2f7e594db6/oauth2/authorize",
  "token_endpoint": "https://login.microsoftonline.com/a0958f45-195b-4036-9259-de2f7e594db6/oauth2/token",
  "token_endpoint_auth_methods_supported": [ "client_secret_post", "private_key_jwt", "client_secret_basic" ],
  "jwks_uri": "https://localhost:44347/jwks.json",
  "response_modes_supported": [ "query", "fragment", "form_post" ],
  "subject_types_supported": [ "pairwise" ],
  "id_token_signing_alg_values_supported": [ "RS256" ],
  "http_logout_supported": true,
  "frontchannel_logout_supported": true,
  "end_session_endpoint": "https://login.microsoftonline.com/a0958f45-195b-4036-9259-de2f7e594db6/oauth2/logout",
  "response_types_supported": [ "code", "id_token", "code id_token", "token id_token", "token" ],
  "scopes_supported": [ "openid" ],
  "issuer": "https://sts.windows.net/a0958f45-195b-4036-9259-de2f7e594db6/",
  "claims_supported": [ "sub", "iss", "cloud_instance_name", "cloud_instance_host_name", "cloud_graph_host_name", "msgraph_host", "aud", "exp", "iat", "auth_time", "acr", "amr", "nonce", "email", "given_name", "family_name", "nickname" ],
  "microsoft_multi_refresh_token": true,
  "check_session_iframe": "https://login.microsoftonline.com/a0958f45-195b-4036-9259-de2f7e594db6/oauth2/checksession",
  "userinfo_endpoint": "https://login.microsoftonline.com/a0958f45-195b-4036-9259-de2f7e594db6/openid/userinfo",
  "tenant_region_scope": "NA",
  "cloud_instance_name": "microsoftonline.com",
  "cloud_graph_host_name": "graph.windows.net",
  "msgraph_host": "graph.microsoft.com"
}

This can now be used in the APP_INITIALIZER of the app.module. In the OIDC configuration, set the OpenIDImplicitFlowConfiguration object to match the Azure AD application which was configured before.

import { BrowserModule } from '@angular/platform-browser';
import { NgModule, APP_INITIALIZER } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpClientModule } from '@angular/common/http';
import { RouterModule } from '@angular/router';

import { AppComponent } from './app.component';
import { NavMenuComponent } from './nav-menu/nav-menu.component';
import { HomeComponent } from './home/home.component';

import {
  AuthModule,
  OidcSecurityService,
  OpenIDImplicitFlowConfiguration,
  OidcConfigService,
  AuthWellKnownEndpoints
} from 'angular-auth-oidc-client';
import { AutoLoginComponent } from './auto-login/auto-login.component';
import { routing } from './app.routes';
import { ForbiddenComponent } from './forbidden/forbidden.component';
import { UnauthorizedComponent } from './unauthorized/unauthorized.component';
import { ProtectedComponent } from './protected/protected.component';
import { AuthorizationGuard } from './authorization.guard';
import { environment } from '../environments/environment';

export function loadConfig(oidcConfigService: OidcConfigService) {
  console.log('APP_INITIALIZER STARTING');
  // https://login.microsoftonline.com/damienbod.onmicrosoft.com/.well-known/openid-configuration
  // jwt keys: https://login.microsoftonline.com/common/discovery/keys
  // Azure AD does not support CORS, so you need to download the OIDC configuration, and use these from the application.
  // The jwt keys needs to be configured in the well-known-openid-configuration.json
  return () => oidcConfigService.load_using_custom_stsServer('https://localhost:44347/well-known-openid-configuration.json');
}

@NgModule({
  declarations: [
    AppComponent,
    NavMenuComponent,
    HomeComponent,
    AutoLoginComponent,
    ForbiddenComponent,
    UnauthorizedComponent,
    ProtectedComponent
  ],
  imports: [
    BrowserModule.withServerTransition({ appId: 'ng-cli-universal' }),
    HttpClientModule,
    AuthModule.forRoot(),
    FormsModule,
    routing,
  ],
  providers: [
	  OidcSecurityService,
	  OidcConfigService,
	  {
		  provide: APP_INITIALIZER,
		  useFactory: loadConfig,
		  deps: [OidcConfigService],
		  multi: true
    },
    AuthorizationGuard
	],
  bootstrap: [AppComponent]
})

export class AppModule {

  constructor(
    private oidcSecurityService: OidcSecurityService,
    private oidcConfigService: OidcConfigService,
  ) {
    this.oidcConfigService.onConfigurationLoaded.subscribe(() => {

      const openIDImplicitFlowConfiguration = new OpenIDImplicitFlowConfiguration();
      openIDImplicitFlowConfiguration.stsServer = 'https://login.microsoftonline.com/damienbod.onmicrosoft.com';
      openIDImplicitFlowConfiguration.redirect_url = 'https://localhost:44347';
      openIDImplicitFlowConfiguration.client_id = 'fd87184a-00c2-4aee-bc72-c7c1dd468e8f';
      openIDImplicitFlowConfiguration.response_type = 'id_token token';
      openIDImplicitFlowConfiguration.scope = 'openid profile email ';
      openIDImplicitFlowConfiguration.post_logout_redirect_uri = 'https://localhost:44347';
      openIDImplicitFlowConfiguration.post_login_route = '/home';
      openIDImplicitFlowConfiguration.forbidden_route = '/home';
      openIDImplicitFlowConfiguration.unauthorized_route = '/home';
      openIDImplicitFlowConfiguration.auto_userinfo = false;
      openIDImplicitFlowConfiguration.log_console_warning_active = true;
      openIDImplicitFlowConfiguration.log_console_debug_active = !environment.production;
      openIDImplicitFlowConfiguration.max_id_token_iat_offset_allowed_in_seconds = 600;

      const authWellKnownEndpoints = new AuthWellKnownEndpoints();
      authWellKnownEndpoints.setWellKnownEndpoints(this.oidcConfigService.wellKnownEndpoints);

      this.oidcSecurityService.setupModule(openIDImplicitFlowConfiguration, authWellKnownEndpoints);
      this.oidcSecurityService.setCustomRequestParameters({ 'prompt': 'admin_consent', 'resource': 'https://graph.windows.net'});
    });

    console.log('APP STARTING');
  }
}

Now an Auth Guard can be added to protect the protected routes.

import { Injectable } from '@angular/core';
import { Router, CanActivate, ActivatedRouteSnapshot, RouterStateSnapshot } from '@angular/router';
import { Observable } from 'rxjs/Observable';
import { map } from 'rxjs/operators';
import { OidcSecurityService } from 'angular-auth-oidc-client';

@Injectable()
export class AuthorizationGuard implements CanActivate {

  constructor(
    private router: Router,
    private oidcSecurityService: OidcSecurityService
  ) { }

  public canActivate(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<boolean> | boolean {
    console.log(route + '' + state);
    console.log('AuthorizationGuard, canActivate');

    return this.oidcSecurityService.getIsAuthorized().pipe(
      map((isAuthorized: boolean) => {
        console.log('AuthorizationGuard, canActivate isAuthorized: ' + isAuthorized);

        if (isAuthorized) {
          return true;
        }

        this.router.navigate(['/unauthorized']);
        return false;
      })
    );
  }
}

You can then add an app.routes and protect what you require.

import { Routes, RouterModule } from '@angular/router';

import { ForbiddenComponent } from './forbidden/forbidden.component';
import { HomeComponent } from './home/home.component';
import { UnauthorizedComponent } from './unauthorized/unauthorized.component';
import { AutoLoginComponent } from './auto-login/auto-login.component';
import { ProtectedComponent } from './protected/protected.component';
import { AuthorizationGuard } from './authorization.guard';

const appRoutes: Routes = [
  { path: '', component: HomeComponent, pathMatch: 'full' },
  { path: 'home', component: HomeComponent },
  { path: 'autologin', component: AutoLoginComponent },
  { path: 'forbidden', component: ForbiddenComponent },
  { path: 'unauthorized', component: UnauthorizedComponent },
  { path: 'protected', component: ProtectedComponent, canActivate: [AuthorizationGuard] }
];

export const routing = RouterModule.forRoot(appRoutes);

The NavMenuComponent component is then updated to add the login and logout functionality.

import { Component } from '@angular/core';
import { Subscription } from 'rxjs/Subscription';
import { OidcSecurityService } from 'angular-auth-oidc-client';

@Component({
  selector: 'app-nav-menu',
  templateUrl: './nav-menu.component.html',
  styleUrls: ['./nav-menu.component.css']
})
export class NavMenuComponent {
  isExpanded = false;
  isAuthorizedSubscription: Subscription;
  isAuthorized: boolean;

  constructor(public oidcSecurityService: OidcSecurityService) {
  }

  ngOnInit() {
    this.isAuthorizedSubscription = this.oidcSecurityService.getIsAuthorized().subscribe(
      (isAuthorized: boolean) => {
        this.isAuthorized = isAuthorized;
      });
  }

  ngOnDestroy(): void {
    this.isAuthorizedSubscription.unsubscribe();
  }

  login() {
    this.oidcSecurityService.authorize();
  }

  refreshSession() {
    this.oidcSecurityService.authorize();
  }

  logout() {
    this.oidcSecurityService.logoff();
  }
  collapse() {
    this.isExpanded = false;
  }

  toggle() {
    this.isExpanded = !this.isExpanded;
  }
}

Start the application and click login.

Enter your user, which is defined in Azure AD.

Consent page:

And you are redirected back to the application.

Notes:

If you don't use any Microsoft APIs, use the id_token flow and not the id_token token flow. The resource of the API needs to be defined in both the request and also in the Azure AD app definitions.
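
For example, if you only need the id_token, the relevant lines from the app.module configuration above would change to something like the following sketch (only the affected settings are shown):

openIDImplicitFlowConfiguration.response_type = 'id_token';
// no access token is requested, so the 'resource' custom request parameter is not required
this.oidcSecurityService.setCustomRequestParameters({ 'prompt': 'admin_consent' });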

Links:

https://docs.microsoft.com/en-gb/aspnet/core/spa/angular?tabs=visual-studio

https://portal.azure.com


Andrew Lock: Creating GitHub pull requests from the command-line with Hub

Creating GitHub pull requests from the command-line with Hub

If you use GitHub much, you'll likely find yourself having to repeatedly use the web interface to raise pull requests. The web interface is great and all, but it can really take you out of your flow if you're used to creating branches, rebasing, pushing, and pulling from the command line!


Luckily GitHub has a REST API that you can use to create pull requests instead, and a nice command line wrapper to invoke it called Hub! Hub wraps the git command line tool - effectively adding extra commands you can invoke from the command line. Once it's installed (and aliased) you'll be able to call:

> git pull-request

and a new pull request will be created in your repository:


If you're someone who likes using the command line, this can really help streamline your workflow.

Installing Hub

Hub is available on GitHub so you can download binaries, or install it from source. As I use Chocolatey on my dev machine, I chose to install Hub using Chocolatey by running the following from an administrative PowerShell:

> choco install hub


Chocolatey will download and install hub into its standard installation folder (C:\ProgramData\chocolatey by default). As this folder should be in your PATH, you can type hub version from the command line and you should get back something similar to:

> hub version
git version 2.15.1.windows.2  
hub version 2.2.9  

That's it, you're good to go. The first time you use Hub to create a pull request (PR), it will prompt you for your GitHub username and password.

Creating a pull request with Hub

Hub is effectively an extension of the git command line, so it can do everything git does, and just adds some helper GitHub methods on top. Anything you can do with git, you can do with hub.

You can view all the commands available by simply typing hub into the command line. As hub is a wrapper for git it starts by displaying the git help message:

> hub
usage: git [--version] [--help] [-C <path>] [-c name=value]  
           [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
           [-p | --paginate | --no-pager] [--no-replace-objects] [--bare]
           [--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
           <command> [<args>]

These are common Git commands used in various situations:

start a working area (see also: git help tutorial)  
   clone      Clone a repository into a new directory
   init       Create an empty Git repository or reinitialize an existing one
...

At the bottom, hub lists the GitHub specific commands available to you:

These GitHub commands are provided by hub:

   pull-request   Open a pull request on GitHub
   fork           Make a fork of a remote repository on GitHub and add as remote
   create         Create this repository on GitHub and add GitHub as origin
   browse         Open a GitHub page in the default browser
   compare        Open a compare page on GitHub
   release        List or create releases (beta)
   issue          List or create issues (beta)
   ci-status      Show the CI status of a commit

As you can see, there's a whole bunch of useful commands there. The one I'm interested in is pull-request.

Let's imagine we have already checked out a repository we own, and we have created a branch to work on a feature, feature-37:


Before we can create a PR, we need to push our branch to the server:

> git push origin -u feature-37

To create the PR, we use hub pull-request. This will open up your configured text editor to enter a message for the PR (I use Notepad++). In the comments you can see the commit messages for the branch, or if your PR only has a single commit (as in this example), hub will handily fill the message in for you, just as it does in the web interface:


As you can see from the comments in the screenshot, the first line of your message forms the PR title, and the remainder forms the description of the PR. After saving your message, hub spits out the URL for your PR on GitHub. Follow that link, and you can see your shiny new PR ready and waiting approval:
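
If you'd rather stay out of the editor altogether, hub also accepts the message inline via the -m flag, with the first line becoming the PR title, for example:

> hub pull-request -m "Fix the rendering bug in the header"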


Hub can do lots more than just create pull requests, but for me that's the killer feature I use every day. If you use more features, then you may want to consider aliasing your hub command to git as it suggests in the docs.

Aliasing hub as git

As I mentioned earlier, hub is a wrapper around git that provides some handy extra tweaks. It even enhances some of the standard git commands: it can expand partial URLs in a git clone to be github.com addresses for example:

> hub clone andrewlock/test

# expands to
git clone git://github.com/andrewlock/test.git  

If you find yourself using the hub command a lot, then you might want to consider aliasing your git command to actually use Hub instead. That means you can just do

> git clone andrewlock/test

for example, without having to think about which commands are hub-specific, and which are available in git. Adding an alias is safe to do - you're not modifying the underlying git program or anything, so don't worry about that.

If you're using PowerShell, you can add the alias to your profile by running:

> Add-Content $PROFILE "`nSet-Alias git hub"

and then restarting your session. For troubleshooting and other scripts see https://github.com/github/hub#aliasing.
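
If you're using bash instead, a rough equivalent is to add the alias to your ~/.bashrc (the aliasing docs linked above describe the recommended approach for each shell):

echo 'alias git=hub' >> ~/.bashrc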

Streamlining PR creation with a git alias

I love how much time hub has saved me by keeping my hands on the keyboard, but there's one thing that was annoying me: having to run git push before opening the PR. I'm a big fan of Git aliases, so I decided to create an alias called pr that does two things: push, and create a pull request.

If you're new to git aliases, I highly recommend checking out this post from Phil Haack. He explains what aliases are, why you want them, and gives a bunch of really useful aliases to get started.

You can create aliases directly from the command line with git, but for all but the simplest ones I like to edit the .gitconfig file directly. To open your global .gitconfig for editing, use

> git config --global --edit

This will pop up your editor of choice, and allow you to edit to your heart's content. Locate the [alias] section of your config file (or if it doesn't exist, add it), and enter the following:

[alias]
    pr="!f() { \
        BRANCH_NAME=$(git rev-parse --abbrev-ref HEAD); \
        git push -u origin $BRANCH_NAME; \
        hub pull-request; \
    };f "

This alias uses the slightly more complex script format that creates a function and executes it immediately. In that function, we do three things:

  • BRANCH_NAME=$(git rev-parse --abbrev-ref HEAD); - Get the name of the current branch from git and store it in a variable, BRANCH_NAME
  • git push -u origin $BRANCH_NAME; - Push the current branch to the remote origin, and associate it with the remote branch of the same name
  • hub pull-request - Create the pull request using hub

To use the alias, simply check out the branch you wish to create a PR for and run:

> git pr

This will push the branch if necessary and create the pull request for you, all in one (prompting you for the PR title in your editor as usual).

> git pr
Counting objects: 11, done.  
Delta compression using up to 8 threads.  
Compressing objects: 100% (11/11), done.  
Writing objects: 100% (11/11), 1012 bytes | 1012.00 KiB/s, done.  
Total 11 (delta 9), reused 0 (delta 0)  
remote: Resolving deltas: 100% (9/9), completed with 7 local objects.  
To https://github.com/andrewlock/NetEscapades.AspNetCore.SecurityHeaders.git  
 * [new branch]      feature-37 -> feature-37
Branch 'feature-37' set up to track remote branch 'feature-37' from 'origin'.  
https://github.com/andrewlock/NetEscapades.AspNetCore.SecurityHeaders/pull/40  

Note, there is code in the Hub GitHub repository indicating that hub pr is going to be a feature that allows you to check-out a given PR. If that's the case this alias may break, so I'll keep an eye out!

Summary

Hub is a great little wrapper from GitHub that just simplifies some of the things I do many times a day. If you find it works for you, check it out on GitHub - it's written in Go and I've no doubt they'd love to have more contributors.


Anuraj Parameswaran: Measuring code coverage of .NET Core applications with Visual Studio 2017

This post is about Measuring code coverage of .NET Core applications with Visual Studio. Test coverage is a measure used to describe the degree to which the source code of a program is executed when a particular test suite runs. A program with high test coverage, measured as a percentage, has had more of its source code executed during testing which suggests it has a lower chance of containing undetected software bugs compared to a program with low test coverage.


Andrew Lock: Handy Docker commands for local development - Part 2

Handy Docker commands for local development - Part 2

This is a follow up to my previous post of handy Docker commands that I always find myself having to Google. The full list of commands discussed in this and the previous post are shown below. Hope you find them useful!

View the space used by Docker

One of the slightly insidious things about Docker is the way it can silently chew up your drive space. Even more than that, it's not obvious exactly how much space it's actually using!

Luckily Docker includes a handy command, which lets you know how much space you're using, in terms of images, containers, and local volumes (essentially virtual hard drives attached to containers):

$ docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE  
Images              47                  23                  9.919GB             6.812GB (68%)  
Containers          48                  18                  98.35MB             89.8MB (91%)  
Local Volumes       6                   6                   316.1MB             0B (0%)  

As well as the actual space used up, this table also shows how much you could reclaim by deleting old containers, images, and volumes. In the next section, I'll show you how.

Remove old Docker images and containers

Until recently, I was manually deleting my old images and containers using the scripts in this gist, but it turns out there's a native command in Docker to cleanup - docker system prune -a.

This command removes all unused containers, volumes (and networks), as well as any unused or dangling images. What's the difference between an unused and dangling image? I think it's described well in this stack overflow answer:

An unused image means that it has not been assigned or used in a container. For example, when running docker ps -a - it will list all of your exited and currently running containers. Any images shown being used inside any of containers are a "used image".

On the other hand, a dangling image just means that you've created the new build of the image, but it wasn't given a new name. So the old images you have become the "dangling images". Those old images are the ones that are untagged and display "<none>" as their name when you run docker images.

When running docker system prune -a, it will remove both unused and dangling images. Therefore any images being used in a container, whether they have been exited or currently running, will NOT be affected. Dangling images are layers that aren't used by any tagged images. They take up space.

When you run the prune command, Docker double checks that you really mean it, and then proceeds to clean up your space. It lists out all the IDs of removed objects, and gives a little summary of everything it reclaimed (truncated for brevity):

$ docker system prune -a
WARNING! This will remove:  
        - all stopped containers
        - all networks not used by at least one container
        - all images without at least one container associated to them
        - all build cache
Are you sure you want to continue? [y/N] y  
Deleted Containers:  
c4b642d3cdb67035278e3529e07d94574d62bce36a9330655c7b752695a54c2d  
91de184f79942877c334b20eb67d661ec569224aacf65071e52527230b92932b  
...
93d4a795a635ba0e614c0e0ba9855252d682e4e3290bed49a5825ca02e0b6e64  
4d7f75ec610cbb1fcd1070edb05b7864b3f4b4079eb01b77e9dde63d89319e43

Deleted Images:  
deleted: sha256:5bac995a88af19e91077af5221a991b901d42c1e26d58b2575e2eeb4a7d0150b  
deleted: sha256:82fd2b23a0665bd64d536e74607d9319107d86e67e36a044c19d77f98fc2dfa1  
...
untagged: microsoft/dotnet:2.0.3-runtime  
deleted: sha256:a75caa09eb1f7d732568c5d54de42819973958589702d415202469a550ffd0ea

Total reclaimed space: 6.679GB  

Be aware, if you are working on a new build using a Dockerfile, you may have dangling or unused images that you want to keep around. It's best to leave the pruning until you're at a sensible point.
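
As an aside, if you want a more conservative clean up, you can drop the -a flag. Running the following only removes stopped containers, unused networks, and dangling images, leaving tagged images that aren't currently used by a container untouched:

$ docker system prune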

Speeding up builds by minimising the Docker context

Docker is designed with two components: a client and a daemon/service. When you write docker commands, you're sending commands using the client to the Docker daemon which does all the work. The client and daemon can even be on two separate machines.

In order for the Docker daemon to build an image from a Dockerfile using docker build ., the client needs to send it the "context" in which the command should be executed. The context is essentially all the files in the directory passed to the docker build command (e.g., the current directory when you call docker build .). You can see the client sending this context when you build using a Dockerfile:

Sending build context to Docker daemon   38.5MB  

For big projects, the context can get very large. This slows down the building of your Dockerfiles as you have to wait for the client to send all the files to the daemon. In an ASP.NET Core app for example, the top level directory includes a whole bunch of files that just aren't required for most Dockerfile builds - Git files, Visual Studio / Visual Studio Code files, previous bin and obj folders. All these additional files slow down the build when they are sent as part of the context.


Luckily, you can exclude files by creating a .dockerignore file in your root directory. This works like a .gitignore file, listing the directories and files that Docker should ignore when creating the context, for example:

.git
.vs
.vscode
artifacts  
dist  
docs  
tools  
**/bin/*
**/obj/*

The syntax isn't quite the same as for Git, but it's the same general idea. Depending on the size of your project, and how many extra files you have, adding a .dockerignore file can make a big difference. For this very small project, it reduced the context from 38.5MB to 2.476MB, and instead of taking 3 seconds to send the context, it's practically instantaneous. Not bad!

Viewing (and minimising) the Docker context

As shown in the last section, reducing the context is well worth the effort to speed up your builds. Unfortunately, there's no easy way to actually view the files that are part of the context.

The easiest approach I've found is described in this Stack Overflow question. Essentially, you build a basic image, and just copy all the files from the context. You can then run the container and browse the file system, to see what you've got.

The following Dockerfile builds a simple image using the common BusyBox base image, copies all the context files into the /tmp directory, and runs find as the command when run as a container.

FROM busybox  
WORKDIR /tmp  
COPY . .  
ENTRYPOINT ["find"]  

If you create a new Dockerfile in the root directory called InspectContext.Dockerfile containing these layers, you can create an image from it using docker build and passing the -f argument. If you don't pass the -f argument, Docker will use the default Dockerfile.

$ docker build -f InspectContext.Dockerfile --tag inspect-context .
Sending build context to Docker daemon  2.462MB  
Step 1/4 : FROM busybox  
 ---> 6ad733544a63
Step 2/4 : WORKDIR /tmp  
 ---> 6b1d4fad3942
Step 3/4 : COPY . .  
 ---> c48db59b30d9
Step 4/4 : ENTRYPOINT find  
 ---> bffa3718c9f6
Successfully built bffa3718c9f6  
Successfully tagged inspect-context:latest  

Once the image is built (which only takes a second or two), you can run the container. The find entrypoint will then spit out a list of all the files and folders in the /tmp directory, i.e. all the files that were part of the context.

$ docker run --rm inspect-context
.
./LICENSE
./.dockerignore
./aspnetcore-in-docker.sln
./README.md
./build.sh
./build.cake
./.gitattributes
./docker_build.sh
./Dockerfile
./.gitignore
./src
./src/AspNetCoreInDocker.Lib
./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj
./src/AspNetCoreInDocker.Lib/Class1.cs
./src/AspNetCoreInDocker.Web
...

With this list of files you can tweak your .dockerignore file to keep your context as lean as possible. Alternatively, if you want to browse around the container a bit further you can override the entrypoint, for example: docker run --entrypoint sh -it --rm inspect-context

That's pretty much it for the Docker commands for this article. I'm going to finish off with a couple of commands that are somewhat related, in that they're Git commands I always find myself reaching for when working with Docker!

Bonus: Making a file executable in git

This has nothing to do with Docker specifically, but it's something I always forget when my Dockerfile uses an external build script (for example when using Cake with Docker). Even if the file itself has executable permissions, you have to tell Git to store it as executable too:

git update-index --chmod=+x build.sh  

Bonus 2: Forcing script files to keep Unix Line endings in git

If you're working on Windows, but also have scripts that will be run in Linux (for example via a mapped folder in the Linux VM), you need to be sure that the files are checked out with Linux line endings (LF instead of CRLF).

You can set the line endings Git uses when checking out files with a .gitattributes file. Typically, my file just contains * text=auto so that line endings are auto-normalised for all text files. That means my .sh files end up with CRLF line endings when I check out on Windows. You can add an extra line to the file that forces all .sh files to use LF endings, no matter which platform they're checked out on:

* text=auto
*.sh eol=lf

Summary

These are the commands I find myself using most often, but if you have any useful additions, please leave them in the comments! :)


Andrew Lock: Handy Docker commands for local development - Part 1

Handy Docker commands for local development - Part 1

This post includes a grab-bag of Docker commands that I always find myself having to Google. Now I can just bookmark my own post and have them all at my finger tips! I use Docker locally inside a Linux VM rather than using Docker for Windows, so they all apply to that setup. Your mileage may vary - I imagine they work in Docker for Windows but I haven't checked.

I've split the list over 2 posts, but this is what you can expect:

Don't require sudo to execute Docker commands

By default, when you first install Docker on a Linux machine, you might find that you get permission errors when you try and run any Docker commands:


To get round this, you need to run all your commands with sudo, e.g. sudo docker images. There are good security reasons for requiring sudo, but when I'm just running it locally in a VM, I'm not too concerned about them. To get round it, and avoid having to type sudo every time, you can add your current user to the docker group, which effectively gives it root access.

sudo usermod -aG docker $USER  

After running the command, just log out of your VM (or SSH session) and log back in, and you should be able to run your docker commands without needing to type sudo for everything.
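
Alternatively, if you don't want to log out of your current session, you should be able to pick up the new group membership immediately by starting a new shell with the docker group applied:

newgrp docker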

Examining the file system of a failed Docker build

When you're initially writing a Dockerfile to build your app, you may find that it fails to build for some reason. That could happen for lots of reasons - it could be an error in your code, it could be an error in your Dockerfile invoking the wrong commands, or you might not be copying all the required files into the image, for example.

Sometimes you can get obscure errors that leave you unsure what went wrong. When that happens, you might want to inspect the filesystem when the build failed, to see what went wrong. You can do this by running one of the previous image layers for your Dockerfile.

When Docker builds an image using a Dockerfile, it does so in layers. Each command in the Dockerfile creates a new layer when it executes. When a command fails, the layer is not created. To view the filesystem of the image when the command failed, we can just run the image that contains all the preceding layers.

Luckily, when you build a Dockerfile, Docker shows you the unique reference for each layer it creates - it's the random strings of numbers and letters like b1f30360c452 in the following example:

Step 1/13 : FROM microsoft/aspnetcore-build:2.0.3 AS builder  
 ---> b1f30360c452
Step 2/13 : WORKDIR /sln  
 ---> Using cache
 ---> 4dee75249988
Step 3/13 : COPY ./build.sh ./build.cake ./NuGet.config ./aspnetcore-in-docker.sln ./  
 ---> fee6f958bf9f
Step 4/13 : RUN ./build.sh -Target=Clean  
 ---> Running in ab207cd28503
/usr/bin/env: 'bash\r': No such file or directory

This build failed executing the ./build.sh -Target=Clean command. To view the filesystem we can create a container from the image created by the previous layer, fee6f958bf9f by calling docker run. We can execute a bash shell inside the container, and inspect the contents of the filesystem (or do anything else we like).

docker run --rm -it fee6f958bf9f /bin/bash  

The arguments are as follows:

  • --rm - when we exit the container, it will be deleted. This prevents the build-up of exited transient containers like this.
  • -it - create an "interactive" session. When the docker container starts up, you will be connected to it, rather than it running in the background.
  • fee6f958bf9f - the image layer to run in, taken from our failed output.
  • /bin/bash - The command to execute in the container when it starts. Using /bin/bash creates a shell, so you can execute commands and generally inspect the filesystem.

Note that --rm deletes the container, not the image. A container is basically a running image. Think of the image as a hard drive, it contains all the details on how to run a process, but you need a computer or a VM to actually do anything with the hard drive.

Once you've looked around and figured out the problem, you can quit the bash shell by running exit, which will also kill the container.

This docker run command works if you want to inspect an image during a failed build, but what if your build succeeded, and now you want to check the filesystem rather than running the app it contains? For that you'll need the command in the next section.

Examining the file system of an image with an ENTRYPOINT

Let's say your Docker build succeeded, but for some reason your app isn't starting correctly when you call docker run. You suspect there may be a missing file, and so you want to inspect the filesystem. Unfortunately, the command in the previous section won't work for you. If you try it, you'll probably be greeted with an error that looks like the following:

$ docker run -it --rm myImage /bin/bash
Unhandled Exception: System.FormatException: Value for switch '/bin/bash' is missing.  
   at Microsoft.Extensions.Configuration.CommandLine.CommandLineConfigurationProvider.Load(

The problem is that the Docker image you're running (myImage in this case) already includes an ENTRYPOINT command in the Dockerfile used to build it. The ENTRYPOINT defines the command to run when the container is started, which for ASP.NET Core apps typically looks something like the following:

ENTRYPOINT ["dotnet", "MyApp.dll"]  

With the previous example, the /bin/bash argument is actually passed as an extra command line argument to the previously-defined entrypoint, so you're actually running
dotnet MyApp.dll /bin/bash in the container when it starts up.

In order to override the entrypoint, and to just run the shell directly, you need to use the --entrypoint argument instead. Note that the argument order is different in this case - the image name goes at the end in this command, whereas the shell was at the end in the previous example:

$ docker run --rm -it --entrypoint /bin/bash  myImage

You'll now have a shell inside the container that you can use to inspect the contents or run other commands.

Copying files from a Docker container to the host

If you're not particularly au fait with Linux (🙋) then trying to diagnose a problem from inside a shell can be tricky. Sometimes, I'd rather copy a file back from a container and inspect it on the Windows side, using tools I'm familiar with. I typically have folders mapped between my Windows machine and my Linux VM, so I just need to get the file out of the Docker container and into the Linux host.

If you have a running (or stopped) container, you can copy a file from the container to the host with the docker cp command. For example, if you created a container called my-image-app from the myImage image using:

docker run -d --name my-image-app myImage  

then you could copy a file from the container to the host using something like the following:

docker cp my-image-app:/app/MyApp.dll /path/on/the/host/MyApp.dll  

Copying files from a Docker image to the host

The previous example shows how to copy files from a container, but you can't use this to copy files directly from an image. Unfortunately, to do that you have to create a container from the image first.

There are a number of ways you can do this, depending on exactly what state your image is in (e.g. does it have a defined ENTRYPOINT), but I tend to just use the following:

docker run --rm -it --entrypoint cat myImage /app/MyApp.dll > /path/on/the/host/MyApp.dll  

This gives me a single command I can run to grab the /app/MyApp.dll file from the image, and write it out to /path/on/the/host/MyApp.dll. It relies on the cat command being available, which it is for the ASP.NET Core base image. If that's not available, you'll essentially have to manually create a container from your image, copy the file across, and then kill the container. For example:

id=$(docker create myImage)  
docker cp $id:/app/MyApp.dll /path/on/the/host/MyApp.dll  
docker rm -v $id  

If the image has a defined ENTRYPOINT you may need to override it in the docker create call:

id=$(docker create --entrypoint / andrewlock/aspnetcore-in-docker)  
docker cp $id:/app/MyApp.dll /path/on/the/host/MyApp.dll  
docker rm -v $id  

That gives you three ways to copy files out of your containers - hopefully at least one of them works for you!

Summary

That's it for this first post of Docker commands. Hope you find them useful!


Anuraj Parameswaran: Building Progressive Web apps with ASP.NET Core

This post is about building Progressive Web Apps or PWA with ASP.NET Core. Progressive Web Apps (PWA) are web applications that are regular web pages or websites, but can appear to the user like traditional applications or native mobile applications. The application type attempts to combine features offered by most modern browsers with the benefits of a mobile experience.


Damien Bowden: Creating specific themes for OIDC clients using razor views with IdentityServer4

This post shows how to use specific themes in an ASP.NET Core STS application using IdentityServer4. For each OpenID Connect (OIDC) client, a separate theme is used. The themes are implemented using Razor, based on the examples and code from Ben Foster. Thanks for these. The themes can then be customized as required.

Code: https://github.com/damienbod/AspNetCoreIdentityServer4Persistence

Setup

The applications are set up using 2 OIDC Implicit Flow clients which get the tokens and log in using a single IdentityServer4 application. The client id is sent with each authorize request, and is used to select and switch the theme.

An instance of the ClientSelector class is used per request to set and save the selected client id. The class is registered as a scoped instance.

namespace IdentityServerWithIdentitySQLite
{
    public class ClientSelector
    {
        public string SelectedClient = "";
    }
}

The ClientIdFilter action filter is used to read the client id from the authorize request and save it to the ClientSelector instance for the request. If the client_id is not directly present in the query string, it is read from the returnUrl parameter.

using System;
using Microsoft.Extensions.Primitives;
using Microsoft.AspNetCore.WebUtilities;
using System.Linq;
using Microsoft.AspNetCore.Mvc.Filters;

namespace IdentityServerWithIdentitySQLite
{
    public class ClientIdFilter : IActionFilter
    {
        public ClientIdFilter(ClientSelector clientSelector)
        {
            _clientSelector = clientSelector;
        }

        public string Client_id = "none";
        private readonly ClientSelector _clientSelector;

        public void OnActionExecuted(ActionExecutedContext context)
        {
            var query = context.HttpContext.Request.Query;
            var exists = query.TryGetValue("client_id", out StringValues culture);

            if (!exists)
            {
                exists = query.TryGetValue("returnUrl", out StringValues requesturl);

                if (exists)
                {
                    var request = requesturl.ToArray()[0];
                    Uri uri = new Uri("http://faketopreventexception" + request);
                    var query1 = QueryHelpers.ParseQuery(uri.Query);
                    var client_id = query1.FirstOrDefault(t => t.Key == "client_id").Value;

                    _clientSelector.SelectedClient = client_id.ToString();
                }
            }
        }

        public void OnActionExecuting(ActionExecutingContext context)
        {
            
        }
    }
}

Now that we have a ClientSelector instance which can be injected into the different views as required, we also want to use different razor templates for each theme.

The IViewLocationExpander interface is implemented and sets the locations for the different themes. For a login request, the client_id is read from the authorize request. For a logout, the client_id is not available in the URL. The selectedClient is set in the logout action method, and this can then be read when rendering the views.

using Microsoft.AspNetCore.Mvc.Razor;
using Microsoft.AspNetCore.WebUtilities;
using Microsoft.Extensions.Primitives;
using System;
using System.Collections.Generic;
using System.Linq;

public class ClientViewLocationExpander : IViewLocationExpander
{
    private const string THEME_KEY = "theme";

    public void PopulateValues(ViewLocationExpanderContext context)
    {
        var query = context.ActionContext.HttpContext.Request.Query;
        var exists = query.TryGetValue("client_id", out StringValues culture);

        if (!exists)
        {
            exists = query.TryGetValue("returnUrl", out StringValues requesturl);

            if (exists)
            {
                var request = requesturl.ToArray()[0];
                Uri uri = new Uri("http://faketopreventexception" + request);
                var query1 = QueryHelpers.ParseQuery(uri.Query);
                var client_id = query1.FirstOrDefault(t => t.Key == "client_id").Value;

                context.Values[THEME_KEY] = client_id.ToString();
            }
        }
    }

    public IEnumerable<string> ExpandViewLocations(ViewLocationExpanderContext context, IEnumerable<string> viewLocations)
    {
        // add the themes to the view location if one of the theme layouts are required. 
        if (context.ViewName.Contains("_Layout") 
            && context.ActionContext.HttpContext.Request.Path.ToString().Contains("logout"))
        {
            string themeValue = context.ViewName.Replace("_Layout", "");
            context.Values[THEME_KEY] = themeValue;
        }

        string theme = null;
        if (context.Values.TryGetValue(THEME_KEY, out theme))
        {
            viewLocations = new[] {
                $"/Themes/{theme}/{{1}}/{{0}}.cshtml",
                $"/Themes/{theme}/Shared/{{0}}.cshtml",
            }
            .Concat(viewLocations);
        }

        return viewLocations;
    }
}

The logout method in the account controller sets the theme and opens the correct themed view.

public async Task<IActionResult> Logout(LogoutViewModel model)
{
	...
	
	// get context information (client name, post logout redirect URI and iframe for federated signout)
	var logout = await _interaction.GetLogoutContextAsync(model.LogoutId);

	var vm = new LoggedOutViewModel
	{
		PostLogoutRedirectUri = logout?.PostLogoutRedirectUri,
		ClientName = logout?.ClientId,
		SignOutIframeUrl = logout?.SignOutIFrameUrl
	};
	_clientSelector.SelectedClient = logout?.ClientId;
	await _persistedGrantService.RemoveAllGrantsAsync(subjectId, logout?.ClientId);
	return View($"~/Themes/{logout?.ClientId}/Account/LoggedOut.cshtml", vm);
}

In the startup class, the classes are registered with the IoC container, and the ClientViewLocationExpander is added.

public void ConfigureServices(IServiceCollection services)
{
	...
	
	services.AddScoped<ClientIdFilter>();
	services.AddScoped<ClientSelector>();
	services.AddAuthentication();

	services.Configure<RazorViewEngineOptions>(options =>
	{
		options.ViewLocationExpanders.Add(new ClientViewLocationExpander());
	});

In the Views folder, all the default views are implemented like before. The _ViewStart.cshtml was changed to select the correct layout using the injected service _clientSelector.

@using System.Globalization
@using IdentityServerWithAspNetIdentity.Resources
@inject LocService SharedLocalizer
@inject IdentityServerWithIdentitySQLite.ClientSelector _clientSelector
@{
    Layout = $"_Layout{_clientSelector.SelectedClient}";
}

Then the layout from the corresponding theme for the client is used, and can be styled and changed as required for each client. Each themed Razor template which uses other views should call the themed view. For example, the ClientOne theme _Layout Razor view uses the themed _LoginPartial cshtml and not the default one.

@await Html.PartialAsync("~/Themes/ClientOne/Shared/_LoginPartial.cshtml")

The themed views can then be implemented as required.

Client One themed view:

Client Two themed view:

Logout themed view for Client Two:

Links:

http://benfoster.io/blog/asp-net-core-themes-and-multi-tenancy

http://docs.identityserver.io/en/release/

https://docs.microsoft.com/en-us/ef/core/

https://docs.microsoft.com/en-us/aspnet/core/

https://getmdl.io/started/


Andrew Lock: Exploring the .NET Core Docker files: dotnet vs aspnetcore vs aspnetcore-build

Exploring the .NET Core Docker files: dotnet vs aspnetcore vs aspnetcore-build

When you build and deploy an application in Docker, you define how your image should be built using a Dockerfile. This file lists the steps required to create the image, for example: set an environment variable, copy a file, or run a script. Whenever a step is run, a new layer is created. Your final Docker image consists of all the changes introduced by these layers in your Dockerfile.

Typically, you don't start from an empty image where you need to install an operating system, but from a "base" image that contains an already configured OS. For .NET development, Microsoft provide a number of different images depending on what it is you're trying to achieve.

In this post, I look at the various Docker base images available for .NET Core development, how they differ, and when you should use each of them. I'm only going to look at the Linux amd64 images, but there are Windows container versions and even Linux arm32 images available too. At the time of writing the latest images available are 2.1.2 and 2.0.3 for the sdk-based and runtime-based images respectively.

Note: You should normally be specific about exactly which version of a Docker image you build on in your Dockerfiles (e.g. don't use latest). For that reason, all the images I mention in this post use the current latest version suffix, 2.0.3.

I'll start by briefly discussing the difference between the .NET Core SDK and the .NET Core Runtime, as it's an important factor when deciding which base image you need. I'll then walk through each of the images in turn, using the Dockerfiles for each to explain what they contain, and hence what you should use them for.

tl;dr; This is a pretty long post, so for convenience, here's some links to the relevant sections and a one-liner use case:

The .NET Core Runtime vs the .NET Core SDK

One of the most often lamented aspects of .NET Core and .NET Core development is around version numbers. There are so many different moving parts, and none of the version numbers match up, so it can be difficult to figure out what you need.

For example, on my dev machine I am building .NET Core 2.0 apps, so I installed the .NET Core 2.x SDK to allow me to do so. When I look at what I have installed using dotnet --info, I get the following:

> dotnet --info
.NET Command Line Tools (2.1.2)

Product Information:  
 Version:            2.1.2
 Commit SHA-1 hash:  5695315371

Runtime Environment:  
 OS Name:     Windows
 OS Version:  10.0.16299
 OS Platform: Windows
 RID:         win10-x64
 Base Path:   C:\Program Files\dotnet\sdk\2.1.2\

Microsoft .NET Core Shared Framework Host

  Version  : 2.0.3
  Build    : a9190d4a75f4a982ae4b4fa8d1a24526566c69df

There's a lot of numbers there, but the important ones are 2.1.2 which is the version of the command line tools or SDK I have installed, and 2.0.3 which is the version of the .NET Core runtime I have installed.

I genuinely have no idea why the SDK is version 2.1.2 - I thought it was 2.0.3 as well but apparently not. This is made all the more confusing by the fact the 2.1.2 version isn't mentioned anywhere in any of the Docker images. Welcome to the brave new world.

Whether you need the .NET Core SDK or the .NET Core runtime depends on what you're trying to do:

  • The .NET Core SDK - This is what you need to build .NET Core applications.
  • The .NET Core Runtime - This is what you need to run .NET Core applications.

When you install the SDK, you get the runtime as well, so on your dev machines you can just install the SDK. However, when it comes to deployment you need to give it a little more thought. The SDK contains everything you need to build a .NET Core app, so it's much larger than the runtime alone (122MB vs 22MB for the MSI files). If you're just going to be running the app on a machine (or in a Docker container) then you don't need the full SDK, the runtime will suffice, and will keep the image as small as possible.
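
To make that concrete, a common pattern (assuming a Docker version with multi-stage build support) is to build and publish with an SDK-based image, then copy the published output into a much smaller runtime-based image. The following is just a rough sketch - the tags and the MyApp project path are illustrative, and the individual images are covered in the rest of this post:

FROM microsoft/dotnet:2.0.3-sdk AS builder
WORKDIR /sln
COPY . .
RUN dotnet publish ./src/MyApp/MyApp.csproj -c Release -o /sln/publish

FROM microsoft/dotnet:2.0.3-runtime
WORKDIR /app
COPY --from=builder /sln/publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]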

For the rest of this post, I'll walk through the main Docker images available for .NET Core and ASP.NET Core. I assume you have a working knowledge of Docker - if you're new to Docker I suggest checking out Steve Gordon's excellent series on Docker for .NET developers.

1. microsoft/dotnet:2.0.3-runtime-deps

  • Contains native dependencies
  • No .NET Core runtime or .NET Core SDK installed
  • Use for running Self-Contained Deployment apps

The first image we'll look at forms the basis for most of the other .NET Core images. It actually doesn't even have .NET Core installed. Instead, it consists of the base debian:stretch image and has all the low-level native dependencies on which .NET Core depends.

The Dockerfile consists of a single RUN command that uses apt-get to install the required dependencies on top of the base image.

FROM debian:stretch

RUN apt-get update \  
    && apt-get install -y --no-install-recommends \
        ca-certificates \
        \
# .NET Core dependencies
        libc6 \
        libcurl3 \
        libgcc1 \
        libgssapi-krb5-2 \
        libicu57 \
        liblttng-ust0 \
        libssl1.0.2 \
        libstdc++6 \
        libunwind8 \
        libuuid1 \
        zlib1g \
    && rm -rf /var/lib/apt/lists/*

What should you use it for?

The microsoft/dotnet:2.0.3-runtime-deps image is the basis for subsequent .NET Core runtime installations. Its main use is for when you are building self-contained deployments (SCDs). SCDs are apps that are packaged with the .NET Core runtime for a specific host, so the host doesn't need the .NET Core runtime installed. You do still need the native dependencies though, so this is the image you need.

Note that you can't build SCDs with this image. For that, you'll need one of the SDK-based images described later in the post, such as microsoft/dotnet:2.0.3-sdk.
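
As a rough sketch, a Dockerfile for running an SCD on this image might look like the following (MyApp and the publish folder are hypothetical names, and the app is assumed to have been published with dotnet publish -c Release -r linux-x64):

FROM microsoft/dotnet:2.0.3-runtime-deps

WORKDIR /app
# Copy the output of the self-contained publish into the image
COPY ./publish .

# The SCD includes its own native host executable, so run it directly -
# there is no dotnet CLI in this image
ENTRYPOINT ["/app/MyApp"]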

2. microsoft/dotnet:2.0.3-runtime

  • Contains .NET Core runtime
  • Use for running .NET Core console apps

The next image is one you'll use a lot if you're running .NET Core console apps in production. microsoft/dotnet:2.0.3-runtime builds on the runtime-deps image, and installs the .NET Core Runtime. It downloads the tar ball using curl, verifies the hash, unpacks it, sets up a symlink, and removes the downloaded archive.

You can view the Dockerfile for the image here:

FROM microsoft/dotnet:2.0-runtime-deps

RUN apt-get update \  
    && apt-get install -y --no-install-recommends \
        curl \
    && rm -rf /var/lib/apt/lists/*

# Install .NET Core
ENV DOTNET_VERSION 2.0.3  
ENV DOTNET_DOWNLOAD_URL https://dotnetcli.blob.core.windows.net/dotnet/Runtime/$DOTNET_VERSION/dotnet-runtime-$DOTNET_VERSION-linux-x64.tar.gz  
ENV DOTNET_DOWNLOAD_SHA 4FB483CAE0C6147FBF13C278FE7FC23923B99CD84CF6E5F96F5C8E1971A733AB968B46B00D152F4C14521561387DD28E6E64D07CB7365D43A17430905DA6C1C0

RUN curl -SL $DOTNET_DOWNLOAD_URL --output dotnet.tar.gz \  
    && echo "$DOTNET_DOWNLOAD_SHA dotnet.tar.gz" | sha512sum -c - \
    && mkdir -p /usr/share/dotnet \
    && tar -zxf dotnet.tar.gz -C /usr/share/dotnet \
    && rm dotnet.tar.gz \
    && ln -s /usr/share/dotnet/dotnet /usr/bin/dotnet

What should you use it for?

The microsoft/dotnet:2.0.3-runtime image contains the .NET Core runtime, so you can use it to run any .NET Core 2.0 app such as a console app. You can't use this image to build your app, only to run it.

If you're running a self-contained app then you would be better served by the runtime-deps image. Similarly, if you're running an ASP.NET Core app, then you should use the microsoft/aspnetcore:2.0.3 image instead (up next), as it contains optimisations for running ASP.NET Core apps.
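
As a minimal sketch, running a framework-dependent console app on this image might look something like this (MyConsoleApp is a hypothetical project, published to a local publish folder with dotnet publish -c Release):

FROM microsoft/dotnet:2.0.3-runtime

WORKDIR /app
# Copy the framework-dependent publish output into the image
COPY ./publish .

# Use the shared .NET Core runtime installed in the image to run the app
ENTRYPOINT ["dotnet", "MyConsoleApp.dll"]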

3. microsoft/aspnetcore:2.0.3

  • Contains .NET Core runtime and the ASP.NET Core runtime store
  • Use for running ASP.NET Core apps
  • Sets the default URL for apps to http://+:80

.NET Core 2.0 introduced a new feature called the runtime store. This is conceptually similar to the Global Assembly Cache (GAC) from .NET Framework days, though without some of the issues.

Effectively, you can install certain NuGet packages globally by adding them to a Runtime Store. ASP.NET Core does this by registering all of the Microsoft NuGet packages that make up the Microsoft.AspNetCore.All metapackage with the runtime store (as described in this post). When your app is published, it doesn't need to include any of the dlls that are in the store. This makes your published output smaller, and improves layer caching for Docker images.

The microsoft/aspnetcore:2.0.3 image builds on the previous .NET Core runtime image, and simply installs the ASP.NET Core runtime store. It also sets the default listening URL for apps to port 80 by setting the ASPNETCORE_URLS environment variable.

You can view the Dockerfile for the image here:

FROM microsoft/dotnet:2.0.3-runtime-stretch

# set up network
ENV ASPNETCORE_URLS http://+:80  
ENV ASPNETCORE_PKG_VERSION 2.0.3

# set up the runtime store
RUN for version in '2.0.0' '2.0.3'; do \  
        curl -o /tmp/runtimestore.tar.gz https://dist.asp.net/runtimestore/$version/linux-x64/aspnetcore.runtimestore.tar.gz \
        && export DOTNET_HOME=$(dirname $(readlink $(which dotnet))) \
        && tar -x -C $DOTNET_HOME -f /tmp/runtimestore.tar.gz \
        && rm /tmp/runtimestore.tar.gz; \
    done

What should you use it for?

Fairly obviously, for running ASP.NET Core apps! This is the image to use if you've published an ASP.NET Core app and you need to run it in production. It has the smallest possible footprint (ignoring the Alpine-based images for now!) but all the necessary framework components and optimisations. You can't use it for building your app though, as it doesn't have the SDK installed. For that, you need one of the upcoming images.
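
For example, a minimal Dockerfile for running a published ASP.NET Core app on this image might look like the following (MyWebApp is a hypothetical project name, and the publish folder is assumed to contain the output of dotnet publish):

FROM microsoft/aspnetcore:2.0.3

WORKDIR /app
# Copy the published output of the ASP.NET Core app
COPY ./publish .

# The base image already sets ASPNETCORE_URLS to http://+:80,
# so the app listens on port 80 by default
EXPOSE 80
ENTRYPOINT ["dotnet", "MyWebApp.dll"]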

4. microsoft/dotnet:2.0.3-sdk

  • Contains .NET Core SDK
  • Use for building .NET Core apps
  • Can also be used for building ASP.NET Core apps

We're onto the first of the .NET Core SDK images now. These images can all be used for building your apps. Unlike all the runtime images, which use debian:stretch as the base, the microsoft/dotnet:2.0.3-sdk image (and those that build on it) uses the buildpack-deps:stretch-scm image. According to the Docker Hub description, the buildpack image:

…includes a large number of "development header" packages needed by various things like Ruby Gems, PyPI modules, etc.…a majority of arbitrary gem install / npm install / pip install should be successful without additional header/development packages…

The stretch-scm tag also ensures common tools like curl, git, and ca-certificates are installed.

The microsoft/dotnet:2.0.3-sdk image installs the native prerequisites (as you saw in the microsoft/dotnet:2.0.3-runtime-deps image), and then installs the .NET Core SDK. Finally, it warms up the NuGet cache by running dotnet new in an empty folder, which makes subsequent dotnet operations in derived images faster.

You can view the Dockerfile for the image here:

FROM buildpack-deps:stretch-scm

# Install .NET CLI dependencies
RUN apt-get update \  
    && apt-get install -y --no-install-recommends \
        libc6 \
        libcurl3 \
        libgcc1 \
        libgssapi-krb5-2 \
        libicu57 \
        liblttng-ust0 \
        libssl1.0.2 \
        libstdc++6 \
        libunwind8 \
        libuuid1 \
        zlib1g \
    && rm -rf /var/lib/apt/lists/*

# Install .NET Core SDK
ENV DOTNET_SDK_VERSION 2.0.3  
ENV DOTNET_SDK_DOWNLOAD_URL https://dotnetcli.blob.core.windows.net/dotnet/Sdk/$DOTNET_SDK_VERSION/dotnet-sdk-$DOTNET_SDK_VERSION-linux-x64.tar.gz  
ENV DOTNET_SDK_DOWNLOAD_SHA 74A0741D4261D6769F29A5F1BA3E8FF44C79F17BBFED5E240C59C0AA104F92E93F5E76B1A262BDFAB3769F3366E33EA47603D9D725617A75CAD839274EBC5F2B

RUN curl -SL $DOTNET_SDK_DOWNLOAD_URL --output dotnet.tar.gz \  
    && echo "$DOTNET_SDK_DOWNLOAD_SHA dotnet.tar.gz" | sha512sum -c - \
    && mkdir -p /usr/share/dotnet \
    && tar -zxf dotnet.tar.gz -C /usr/share/dotnet \
    && rm dotnet.tar.gz \
    && ln -s /usr/share/dotnet/dotnet /usr/bin/dotnet

# Trigger the population of the local package cache
ENV NUGET_XMLDOC_MODE skip  
RUN mkdir warmup \  
    && cd warmup \
    && dotnet new \
    && cd .. \
    && rm -rf warmup \
    && rm -rf /tmp/NuGetScratch

What should you use it for?

This image has the .NET Core SDK installed, so you can use it for building your .NET Core apps. You can build .NET Core console apps or ASP.NET Core apps, though in the latter case you may prefer one of the alternative images coming up in this post.

Technically you can also use this image for running your apps in production as the SDK includes the runtime, but you shouldn't do that in practice. As discussed at the beginning of this post, optimising your Docker images in production is important for performance reasons, but the microsoft/dotnet:2.0.3-sdk image weighs in at a hefty 1.68GB, compared to the 219MB for the microsoft/dotnet:2.0.3-runtime image.

To get the best of both worlds, you should use this image (or one of the later images) to build your app, and one of the runtime images to run your app in production. You can see how to do this using Docker multi-stage builds in Scott Hanselman's post here.
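
As a very rough sketch (assuming a single hypothetical project called MyApp), a multi-stage Dockerfile combining an SDK image and a runtime image might look something like this:

# Build stage: use the SDK image to restore, build, and publish the app
FROM microsoft/dotnet:2.0.3-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet publish ./MyApp.csproj -c Release -o /app

# Runtime stage: copy just the published output into the much smaller runtime image
FROM microsoft/dotnet:2.0.3-runtime
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApp.dll"]

For an ASP.NET Core app, you'd use microsoft/aspnetcore:2.0.3 as the base of the final stage instead.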

5. microsoft/aspnetcore-build:2.0.3

  • Contains .NET Core SDK
  • Has warmed-up package cache for Microsoft.AspNetCore.All package
  • Installs Node, Bower and Gulp
  • Use for building ASP.NET Core apps

You can happily build ASP.NET Core apps using the microsoft/dotnet:2.0.3-sdk image, but the microsoft/aspnetcore-build:2.0.3 image that builds on it includes a number of additional layers that are often required.

First, it installs Node, Bower, and Gulp into the image. These tools are (were?) commonly used for building client-side apps, so this image makes them available globally.

Finally, the image warms up the package cache for all the common ASP.NET Core packages found in the Microsoft.AspNetCore.All metapackage, so that dotnet restore will be faster for apps based on this image. It does this by copying a .csproj file into a temporary folder and running dotnet restore. The csproj simply references the metapackage, with a version passed via an environment variable in the Dockerfile:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
    <RuntimeIdentifiers>debian.8-x64</RuntimeIdentifiers>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.All" Version="$(ASPNETCORE_PKG_VERSION)" />
  </ItemGroup>

</Project>  

You can view the Dockerfile for the image here:

FROM microsoft/dotnet:2.0.3-sdk-stretch

# set up environment
ENV ASPNETCORE_URLS http://+:80  
ENV NODE_VERSION 6.11.3  
ENV ASPNETCORE_PKG_VERSION 2.0.3

RUN set -x \  
    && apt-get update && apt-get install -y gnupg dirmngr --no-install-recommends \
    && rm -rf /var/lib/apt/lists/*

# Install keys required for node
RUN set -ex \  
  && for key in \
    9554F04D7259F04124DE6B476D5A82AC7E37093B \
    94AE36675C464D64BAFA68DD7434390BDBE9B9C5 \
    0034A06D9D9B0064CE8ADF6BF1747F4AD2306D93 \
    FD3A5288F042B6850C66B31F09FE44734EB7990E \
    71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 \
    DD8F2338BAE7501E3DD5AC78C273792F7D83545D \
    B9AE9905FFD7803F25714661B63B535A4C206CA9 \
    C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 \
  ; do \
    gpg --keyserver pgp.mit.edu --recv-keys "$key" || \
    gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key" || \
    gpg --keyserver keyserver.pgp.com --recv-keys "$key" ; \
  done

# set up node
RUN buildDeps='xz-utils' \  
    && set -x \
    && apt-get update && apt-get install -y $buildDeps --no-install-recommends \
    && rm -rf /var/lib/apt/lists/* \
    && curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.xz" \
    && curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
    && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
    && grep " node-v$NODE_VERSION-linux-x64.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
    && tar -xJf "node-v$NODE_VERSION-linux-x64.tar.xz" -C /usr/local --strip-components=1 \
    && rm "node-v$NODE_VERSION-linux-x64.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt \
    && apt-get purge -y --auto-remove $buildDeps \
    && ln -s /usr/local/bin/node /usr/local/bin/nodejs \
    # set up bower and gulp
    && npm install -g bower gulp \
    && echo '{ "allow_root": true }' > /root/.bowerrc

# warmup NuGet package cache
COPY packagescache.csproj /tmp/warmup/  
RUN dotnet restore /tmp/warmup/packagescache.csproj \  
      --source https://api.nuget.org/v3/index.json \
      --verbosity quiet \
    && rm -rf /tmp/warmup/

WORKDIR /  

What should you use it for?

This image will likely be the main image you use to build ASP.NET Core apps. It contains the .NET Core SDK, the same as microsoft/dotnet:2.0.3-sdk, but it also includes the additional dependencies that are sometimes required to build traditional apps with ASP.NET Core, such as Bower and Gulp.

Even if you're not using those dependencies, the additional warming of the package cache is a nice optimisation. If you opt to use the microsoft/dotnet:2.0.3-sdk image instead for building your apps, I suggest you warm up the package cache in your own Dockerfile in a similar way.
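
As a sketch of that approach, you could copy the image's own packagescache.csproj trick into a Dockerfile based on the plain SDK image - the .csproj just needs to reference the Microsoft.AspNetCore.All metapackage, as shown earlier:

FROM microsoft/dotnet:2.0.3-sdk

# The version that packagescache.csproj references via $(ASPNETCORE_PKG_VERSION)
ENV ASPNETCORE_PKG_VERSION 2.0.3

# Restore the metapackage once, so the downloaded packages are cached in this layer
COPY ./packagescache.csproj /tmp/warmup/
RUN dotnet restore /tmp/warmup/packagescache.csproj \
      --source https://api.nuget.org/v3/index.json \
    && rm -rf /tmp/warmup/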

As before, the SDK image is much larger than the runtime image. You should only use this image for building your apps; use one of the runtime images to deploy your app to production.

6. microsoft/aspnetcore-build:1.0-2.0

  • Contains multiple .NET Core SDKs: 1.0, 1.1, and 2.0
  • Has warmed-up package cache for Microsoft.AspNetCore.All package
  • Installs Node, Bower and Gulp
  • Installs the Docker SDK for building solutions containing a Docker tools project
  • Use for building ASP.NET Core apps or anything really!

The final image is one I wasn't even aware of until I started digging around in the aspnet-docker GitHub repository. It's contained in the (aptly titled) kitchensink folder, and it really does have everything you could need to build your apps!

The microsoft/aspnetcore-build:1.0-2.0 image contains the .NET Core SDK for all current major and minor versions, namely .NET Core 1.0, 1.1, and 2.0. This has the advantage that you should be able to build any of your .NET Core apps, even if they are tied to a specific .NET Core version using a global.json file.

Just as for the microsoft/aspnetcore-build:2.0.3 image, Node, Bower, and Gulp are installed, and the package cache for the Microsoft.AspNetCore.All metapackage is warmed up. Additionally, the kitchensink image installs the Microsoft.Docker.Sdk SDK that is required when building a solution that has Docker tools enabled (through Visual Studio).

You can view the Dockerfile for the image here:

FROM microsoft/dotnet:2.0.3-sdk-stretch

# set up environment
ENV ASPNETCORE_URLS http://+:80  
ENV NODE_VERSION 6.11.3  
ENV NETCORE_1_0_VERSION 1.0.8  
ENV NETCORE_1_1_VERSION 1.1.5  
ENV ASPNETCORE_PKG_VERSION 2.0.3

RUN set -x \  
    && apt-get update && apt-get install -y gnupg dirmngr --no-install-recommends \
    && rm -rf /var/lib/apt/lists/*

RUN set -ex \  
  && for key in \
    9554F04D7259F04124DE6B476D5A82AC7E37093B \
    94AE36675C464D64BAFA68DD7434390BDBE9B9C5 \
    0034A06D9D9B0064CE8ADF6BF1747F4AD2306D93 \
    FD3A5288F042B6850C66B31F09FE44734EB7990E \
    71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 \
    DD8F2338BAE7501E3DD5AC78C273792F7D83545D \
    B9AE9905FFD7803F25714661B63B535A4C206CA9 \
    C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 \
  ; do \
    gpg --keyserver pgp.mit.edu --recv-keys "$key" || \
    gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key" || \
    gpg --keyserver keyserver.pgp.com --recv-keys "$key" ; \
  done

# set up node
RUN buildDeps='xz-utils' \  
    && set -x \
    && apt-get update && apt-get install -y $buildDeps --no-install-recommends \
    && rm -rf /var/lib/apt/lists/* \
    && curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.xz" \
    && curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
    && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
    && grep " node-v$NODE_VERSION-linux-x64.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
    && tar -xJf "node-v$NODE_VERSION-linux-x64.tar.xz" -C /usr/local --strip-components=1 \
    && rm "node-v$NODE_VERSION-linux-x64.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt \
    && apt-get purge -y --auto-remove $buildDeps \
    && ln -s /usr/local/bin/node /usr/local/bin/nodejs \
    # set up bower and gulp
    && npm install -g bower gulp \
    && echo '{ "allow_root": true }' > /root/.bowerrc

# Install the 1.x runtimes
RUN for url in \  
      "https://dotnetcli.blob.core.windows.net/dotnet/Runtime/${NETCORE_1_0_VERSION}/dotnet-debian-x64.${NETCORE_1_0_VERSION}.tar.gz" \
      "https://dotnetcli.blob.core.windows.net/dotnet/Runtime/${NETCORE_1_1_VERSION}/dotnet-debian-x64.${NETCORE_1_1_VERSION}.tar.gz"; \
    do \
      echo "Downloading and installing from $url" \
      && curl -SL $url --output /tmp/dotnet.tar.gz \
      && mkdir -p /usr/share/dotnet \
      && tar -zxf /tmp/dotnet.tar.gz -C /usr/share/dotnet \
      && rm /tmp/dotnet.tar.gz; \
    done

# Add Docker SDK for when building a solution that has the Docker tools project.
RUN curl -H 'Cache-Control: no-cache' -o /tmp/Microsoft.Docker.Sdk.tar.gz https://distaspnet.blob.core.windows.net/sdk/Microsoft.Docker.Sdk.tar.gz \  
    && cd /usr/share/dotnet/sdk/${DOTNET_SDK_VERSION}/Sdks \
    && tar xf /tmp/Microsoft.Docker.Sdk.tar.gz \
    && rm /tmp/Microsoft.Docker.Sdk.tar.gz

# copy the ASP.NET packages manifest
COPY packagescache.csproj /tmp/warmup/

# warm up package cache
RUN dotnet restore /tmp/warmup/packagescache.csproj \  
      --source https://api.nuget.org/v3/index.json \
      --verbosity quiet \
    && rm -rf /tmp/warmup/

WORKDIR /  

What should you use it for?

Use this image to build ASP.NET Core (or .NET Core) apps that require multiple .NET Core runtimes or that contain Docker tools projects.

Alternatively, you could use this image if you just want to have a single base image for building all of your .NET Core apps, regardless of the SDK version (instead of using microsoft/aspnetcore-build:2.0.3 for 2.0 projects and microsoft/aspnetcore-build:1.1.5 for 1.1 projects for example).

Summary

In this post I walked through some of the common Docker images used in .NET Core development. Each of the images has a set of specific use cases, and it's important you use the right one for your requirements.


Anuraj Parameswaran: How to launch different browsers from VS Code for debugging ASP.NET Core

This post is about launching different browsers from VS Code while debugging ASP.NET Core. By default, when debugging an ASP.NET Core app, VS Code will launch the default browser. There is a way to choose the browser you would like to use. Here is the code snippet which will add different debug configurations to VS Code.


Damien Bowden: Using an EF Core database for the IdentityServer4 configuration data

This article shows how to implement a database store for the IdentityServer4 configurations for the Client, ApiResource and IdentityResource settings using Entity Framework Core and SQLite. This could be used if you need to create clients or resources dynamically for the STS, or if you need to deploy the STS to multiple instances, for example using Service Fabric. To make it scalable, you need to remove all session data and configuration data from the STS instances and share these in a shared resource, otherwise it will only run smoothly as a single instance.

Information about IdentityServer4 deployment can be found here:
http://docs.identityserver.io/en/release/topics/deployment.html

Code: https://github.com/damienbod/AspNetCoreIdentityServer4Persistence

Implementing the IClientStore

By implementing the IClientStore, you can load your STS client data from anywhere you want. This example uses an Entity Framework Core Context, to load the data from a SQLite database.

using IdentityServer4.Models;
using IdentityServer4.Stores;
using Microsoft.Extensions.Logging;
using System;
using System.Linq;
using System.Threading.Tasks;

namespace AspNetCoreIdentityServer4Persistence.ConfigurationStore
{
    public class ClientStore : IClientStore
    {
        private readonly ConfigurationStoreContext _context;
        private readonly ILogger _logger;

        public ClientStore(ConfigurationStoreContext context, ILoggerFactory loggerFactory)
        {
            _context = context;
            _logger = loggerFactory.CreateLogger("ClientStore");
        }

        public Task<Client> FindClientByIdAsync(string clientId)
        {
            var client = _context.Clients.First(t => t.ClientId == clientId);
            client.MapDataFromEntity();
            return Task.FromResult(client.Client);
        }
    }
}

The ClientEntity is used to save or retrieve the data from the database. Because the IdentityServer4 Client class cannot be saved directly using Entity Framework Core, a wrapper class is used which saves the Client object as a JSON string. The entity class implements helper methods which parse the JSON string to/from the Client type used by IdentityServer4.

using IdentityServer4.Models;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
using System.Linq;
using System.Threading.Tasks;

namespace AspNetCoreIdentityServer4Persistence.ConfigurationStore
{
    public class ClientEntity
    {
        public string ClientData { get; set; }

        [Key]
        public string ClientId { get; set; }

        [NotMapped]
        public Client Client { get; set; }

        public void AddDataToEntity()
        {
            ClientData = JsonConvert.SerializeObject(Client);
            ClientId = Client.ClientId;
        }

        public void MapDataFromEntity()
        {
            Client = JsonConvert.DeserializeObject<Client>(ClientData);
            ClientId = Client.ClientId;
        }
    }
}

The ConfigurationStoreContext is the Entity Framework Core DbContext used to access the SQLite database. This could easily be changed to any other database supported by Entity Framework Core.

using IdentityServer4.Models;
using Microsoft.EntityFrameworkCore;

namespace AspNetCoreIdentityServer4Persistence.ConfigurationStore
{
    public class ConfigurationStoreContext : DbContext
    {
        public ConfigurationStoreContext(DbContextOptions<ConfigurationStoreContext> options) : base(options)
        { }

        public DbSet<ClientEntity> Clients { get; set; }
        public DbSet<ApiResourceEntity> ApiResources { get; set; }
        public DbSet<IdentityResourceEntity> IdentityResources { get; set; }
        

        protected override void OnModelCreating(ModelBuilder builder)
        {
            builder.Entity<ClientEntity>().HasKey(m => m.ClientId);
            builder.Entity<ApiResourceEntity>().HasKey(m => m.ApiResourceName);
            builder.Entity<IdentityResourceEntity>().HasKey(m => m.IdentityResourceName);
            base.OnModelCreating(builder);
        }
    }
}

Implementing the IResourceStore

The IResourceStore interface is used to save or access the ApiResource configurations and the IdentityResource data in the IdentityServer4 application. This is implemented in a similar way to the IClientStore.

using IdentityServer4.Models;
using IdentityServer4.Stores;
using Microsoft.Extensions.Logging;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace AspNetCoreIdentityServer4Persistence.ConfigurationStore
{
    public class ResourceStore : IResourceStore
    {
        private readonly ConfigurationStoreContext _context;
        private readonly ILogger _logger;

        public ResourceStore(ConfigurationStoreContext context, ILoggerFactory loggerFactory)
        {
            _context = context;
            _logger = loggerFactory.CreateLogger("ResourceStore");
        }

        public Task<ApiResource> FindApiResourceAsync(string name)
        {
            var apiResource = _context.ApiResources.First(t => t.ApiResourceName == name);
            apiResource.MapDataFromEntity();
            return Task.FromResult(apiResource.ApiResource);
        }

        public Task<IEnumerable<ApiResource>> FindApiResourcesByScopeAsync(IEnumerable<string> scopeNames)
        {
            if (scopeNames == null) throw new ArgumentNullException(nameof(scopeNames));


            var apiResources = new List<ApiResource>();
            var apiResourcesEntities = from i in _context.ApiResources
                                            where scopeNames.Contains(i.ApiResourceName)
                                            select i;

            foreach (var apiResourceEntity in apiResourcesEntities)
            {
                apiResourceEntity.MapDataFromEntity();

                apiResources.Add(apiResourceEntity.ApiResource);
            }

            return Task.FromResult(apiResources.AsEnumerable());
        }

        public Task<IEnumerable<IdentityResource>> FindIdentityResourcesByScopeAsync(IEnumerable<string> scopeNames)
        {
            if (scopeNames == null) throw new ArgumentNullException(nameof(scopeNames));

            var identityResources = new List<IdentityResource>();
            var identityResourcesEntities = from i in _context.IdentityResources
                                            where scopeNames.Contains(i.IdentityResourceName)
                                            select i;

            foreach (var identityResourceEntity in identityResourcesEntities)
            {
                identityResourceEntity.MapDataFromEntity();

                identityResources.Add(identityResourceEntity.IdentityResource);
            }

            return Task.FromResult(identityResources.AsEnumerable());
        }

        public Task<Resources> GetAllResourcesAsync()
        {
            var apiResourcesEntities = _context.ApiResources.ToList();
            var identityResourcesEntities = _context.IdentityResources.ToList();

            var apiResources = new List<ApiResource>();
            var identityResources= new List<IdentityResource>();

            foreach (var apiResourceEntity in apiResourcesEntities)
            {
                apiResourceEntity.MapDataFromEntity();

                apiResources.Add(apiResourceEntity.ApiResource);
            }

            foreach (var identityResourceEntity in identityResourcesEntities)
            {
                identityResourceEntity.MapDataFromEntity();

                identityResources.Add(identityResourceEntity.IdentityResource);
            }

            var result = new Resources(identityResources, apiResources);
            return Task.FromResult(result);
        }
    }
}

The IdentityResourceEntity class is used to persist the IdentityResource data.

using IdentityServer4.Models;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
using System.Linq;
using System.Threading.Tasks;

namespace AspNetCoreIdentityServer4Persistence.ConfigurationStore
{
    public class IdentityResourceEntity
    {
        public string IdentityResourceData { get; set; }

        [Key]
        public string IdentityResourceName { get; set; }

        [NotMapped]
        public IdentityResource IdentityResource { get; set; }

        public void AddDataToEntity()
        {
            IdentityResourceData = JsonConvert.SerializeObject(IdentityResource);
            IdentityResourceName = IdentityResource.Name;
        }

        public void MapDataFromEntity()
        {
            IdentityResource = JsonConvert.DeserializeObject<IdentityResource>(IdentityResourceData);
            IdentityResourceName = IdentityResource.Name;
        }
    }
}

The ApiResourceEntity is used to persist the ApiResource data.

using IdentityServer4.Models;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
using System.Linq;
using System.Threading.Tasks;

namespace AspNetCoreIdentityServer4Persistence.ConfigurationStore
{
    public class ApiResourceEntity
    {
        public string ApiResourceData { get; set; }

        [Key]
        public string ApiResourceName { get; set; }

        [NotMapped]
        public ApiResource ApiResource { get; set; }

        public void AddDataToEntity()
        {
            ApiResourceData = JsonConvert.SerializeObject(ApiResource);
            ApiResourceName = ApiResource.Name;
        }

        public void MapDataFromEntity()
        {
            ApiResource = JsonConvert.DeserializeObject<ApiResource>(ApiResourceData);
            ApiResourceName = ApiResource.Name;
        }
    }
}

Adding the stores to the IdentityServer4 MVC startup class

The created stores can now be added to the Startup class of the ASP.NET Core MVC host project for IdentityServer4. The AddDbContext method is used to set up the Entity Framework Core data access, and AddResourceStore as well as AddClientStore are used to add the configuration data stores to IdentityServer4. The two interfaces and their implementations also need to be registered with the IoC container.

The default AddInMemory… extension methods are removed.

public void ConfigureServices(IServiceCollection services)
{
	services.AddDbContext<ConfigurationStoreContext>(options =>
		options.UseSqlite(
			Configuration.GetConnectionString("ConfigurationStoreConnection"),
			b => b.MigrationsAssembly("AspNetCoreIdentityServer4")
		)
	);

	...

	services.AddTransient<IClientStore, ClientStore>();
	services.AddTransient<IResourceStore, ResourceStore>();

	services.AddIdentityServer()
		.AddSigningCredential(cert)
		.AddResourceStore<ResourceStore>()
		.AddClientStore<ClientStore>()
		.AddAspNetIdentity<ApplicationUser>()
		.AddProfileService<IdentityWithAdditionalClaimsProfileService>();

}

Seeding the database

A simple .NET Core console application is used to seed the STS server with data. This app creates the different Client, ApiResource and IdentityResource entries as required. The data is added directly to the database using Entity Framework Core. If this was a microservice, you would implement an API on the STS server which adds, removes, and updates the data as required.

static void Main(string[] args)
{
	try
	{
		var currentDirectory = Directory.GetCurrentDirectory();

		var configuration = new ConfigurationBuilder()
			.AddJsonFile($"{currentDirectory}\\..\\AspNetCoreIdentityServer4\\appsettings.json")
			.Build();

		var configurationStoreConnection = configuration.GetConnectionString("ConfigurationStoreConnection");

		var optionsBuilder = new DbContextOptionsBuilder<ConfigurationStoreContext>();
		optionsBuilder.UseSqlite(configurationStoreConnection);

		using (var configurationStoreContext = new ConfigurationStoreContext(optionsBuilder.Options))
		{
			configurationStoreContext.AddRange(Config.GetClients());
			configurationStoreContext.AddRange(Config.GetIdentityResources());
			configurationStoreContext.AddRange(Config.GetApiResources());
			configurationStoreContext.SaveChanges();
		}
	}
	catch (Exception e)
	{
		Console.WriteLine(e.Message);
	}

	Console.ReadLine();
}

The static Config class just defines the data in the same way as the IdentityServer4 examples.


Now the applications run using the configuration data stored in an Entity Framework Core supported database.

Note:

This post shows how just the configuration data can be set up for IdentityServer4. To make it scale, you also need to implement the IPersistedGrantStore and per-client CORS in the database. A cache solution might also be required.

IdentityServer4 provides a full solution and example: IdentityServer4.EntityFramework

Links:

http://docs.identityserver.io/en/release/topics/deployment.html

https://damienbod.com/2016/01/07/experiments-with-entity-framework-7-and-asp-net-5-mvc-6/

https://docs.microsoft.com/en-us/ef/core/get-started/netcore/new-db-sqlite

https://docs.microsoft.com/en-us/ef/core/

http://docs.identityserver.io/en/release/reference/ef.html

https://github.com/IdentityServer/IdentityServer4.EntityFramework

https://elanderson.net/2017/07/identity-server-using-entity-framework-core-for-configuration-data/

http://docs.identityserver.io/en/release/quickstarts/8_entity_framework.html


Andrew Lock: ASP.NET Core in Action - What is middleware?

ASP.NET Core in Action - What is middleware?

In February 2017, the Manning Early Access Program (MEAP) started for the ASP.NET Core book I am currently writing - ASP.NET Core in Action. This post gives you a sample of what you can find in the book. If you like what you see, please take a look - for now you can even get a 37% discount with the code lockaspdotnet!

The Manning Early Access Program provides you with full access to books as they are written. You get the chapters as they are produced, plus the finished eBook as soon as it's ready, and the paper book long before it's in bookstores. You can also interact with the author (me!) on the forums to provide feedback as the book is being written.

There are currently 18 of the 20 chapters available in the MEAP, so now is the time to act if you're interested! Thanks 🙂

What is middleware?

The word middleware is used in a variety of contexts in software development and IT, but it's not a particularly descriptive word - so, what is middleware? This article discusses the definition of middleware in ASP.NET Core and how they can be used.

In ASP.NET Core, middleware are C# classes that can handle an HTTP request or response. Middleware can either:

  • Handle an incoming HTTP request by generating an HTTP response.
  • Process an incoming HTTP request, modify it, and pass it on to another piece of middleware.
  • Process an outgoing HTTP response, modify it, and pass it on to either another piece of middleware, or the ASP.NET Core web server.

For example, a piece of logging middleware might note down when a request arrived and then pass it on to another middleware. Meanwhile, an image resizing middleware component might spot an incoming request for an image with a specified size, generate the requested image, and send it back to the user without passing it on.

The most important piece of middleware in most ASP.NET Core applications is the MvcMiddleware. This normally generates your HTML pages and API responses. Like the image resizing middleware, it typically receives a request, generates a response, and then sends it back to the user, as shown in figure 1.

Figure 1 Example of a middleware pipeline. Each middleware handles the request and passes it on to the next middleware in the pipeline. After a middleware generates a response, it passes it back through the pipeline. When it reaches the ASP.NET Core web server, the response is sent to the user's browser.

This arrangement, where a piece of middleware can call another piece of middleware, which in turn can call another, is referred to as a pipeline. You can think of each piece of middleware as a section of pipe - when you connect all the sections, requests flow through one piece into the next.

One of the most common use cases for middleware is for "crosscutting concerns" of your application. These aspects of your application need to occur with every request, regardless of the specific path in the request or the resource requested. These include things like:

  • Logging each request
  • Adding standard security headers to the response
  • Associating a request with the relevant user
  • Setting the language for the current request

In each of these examples, the middleware receives a request, modifies it, and then passes the request on to the next piece of middleware in the pipeline. Subsequent middleware could use the details added by the earlier middleware to handle the request. For example, in figure 2, the authentication middleware associates the request with a user. The authorization middleware uses this detail to verify whether the user has permission to make that specific request to the application.

Figure 2 Example of a middleware component modifying the request for use later in the pipeline. Middleware can also short-circuit the pipeline, returning a response before the request reaches later middleware.

If the user has permission, the authorization middleware passes the request on to the MVC middleware, to allow it to generate a response. If, on the other hand, the user doesn't have permission, the authorization middleware can short-circuit the pipeline, generating a response directly. It returns the response back to the previous middleware, before the MVC middleware has even seen the request.

In practice, you often don't have a dedicated authorization middleware, instead allowing the MvcMiddleware to handle the authorization requirements.

A key point to glean from this is that the pipeline is bi-directional. The request passes through the pipeline in one direction until a piece of middleware generates a response, at which point the response passes back through the pipeline in the other direction, passing through each piece of middleware for a second time, until it gets back to the first piece of middleware. Finally, this first/last piece of middleware passes the response back to the ASP.NET Core web server.

The HttpContext object

HttpContext sits behind the scenes. The ASP.NET Core web server constructs an HttpContext, which the ASP.NET Core application uses as a sort of "storage box" for a single request. Anything which is specific to this request and the subsequent response can be associated with and stored in it. This could include properties of the request, request-specific services, data which has been loaded, or errors which have occurred. The web server fills the initial HttpContext with details of the original HTTP request and other configuration details, and passes it on to the rest of the application.

All middleware has access to the HttpContext for a request. It can use this to determine, for example, if the request contained any user credentials, what page the request was attempting to access, and to fetch any posted data. It can then use these details to determine how to handle the request.

Once the application has finished processing the request, it'll update the HttpContext with an appropriate response and return it back through the middleware pipeline to the web server. The ASP.NET Core web server converts the representation into a raw HTTP response and sends it back to the reverse proxy, which will forward it to the user's browser.

You define the middleware pipeline in code as part of your initial application configuration in Startup. You can tailor the middleware pipeline specifically to your needs - simple apps may need only a short pipeline, and large apps with a variety of features may use many more middleware. Middleware is the fundamental source of behavior in your application - ultimately the middleware pipeline is responsible for responding to any HTTP request it receives.
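
As a minimal sketch (the specific middleware shown here are illustrative, not taken from the book's example app), a pipeline definition in Startup might look something like this:

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // MVC needs its services registered before UseMvc can be used below
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Each call adds a piece of middleware to the pipeline, in order.

        // Static file middleware: may short-circuit by serving a file directly
        app.UseStaticFiles();

        // A "pass-through" middleware: modify the response, then call the
        // next piece of middleware in the pipeline
        app.Use(async (context, next) =>
        {
            context.Response.Headers["X-Content-Type-Options"] = "nosniff";
            await next();
        });

        // The MVC middleware typically generates the response
        app.UseMvc();
    }
}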

The request is passed to the middleware pipeline as an HttpContext object. The ASP.NET Core web server builds an HttpContext object from the incoming request, which passes up and down the middleware pipeline. When you're using existing middleware to build a pipeline, this is a detail you'll rarely deal with. Its presence behind the scenes provides a route to exerting extra control over your middleware pipeline.

Middleware vs HTTP Modules and HTTP Handlers

In the previous version of ASP.NET, the concept of a middleware pipeline isn't used. Instead, there are HTTP modules and HTTP handlers.

An HTTP handler is a process that runs in response to a request and generates the response. For example, the ASP.NET page handler runs in response to requests for .aspx pages. Alternatively, you could, for example, write a custom handler that returns resized images when an image is requested.

HTTP modules handle cross cutting concerns of applications, such as security, logging, or session management. They run in response to life-cycle events that a request progresses through when it's received at the server. For example, there are events such as BeginRequest, AcquireRequestState, or PostAcquireRequestState.

This approach works, but it's sometimes tricky to reason about which modules will run at which points. Implementing a module requires a relatively detailed understanding of the state of the request at each individual life-cycle event.

The middleware pipeline makes understanding your application far simpler. The pipeline is completely defined in code, specifying which components should run, and in which order.

That's all for this article. For more information, download the free first chapter of ASP.NET Core in Action and see this Slideshare presentation.

Remember, you can save 37% off with code lockaspdotnet.


Anuraj Parameswaran: Connecting Localdb using Sql Server Management Studio

This post is about connecting to and managing SQL Server LocalDB instances with SQL Server Management Studio. While working on an ASP.NET Core web application, I was using LocalDB, but when I tried to connect to it to modify the data, I couldn't find it. After exploring a little, I found one way of doing it.


Anuraj Parameswaran: Runtime bundling and Minification in ASP.NET Core with Smidge

This post is about enabling bundling and minification in ASP.NET Core with Smidge. A while back I wrote a post about bundling and minification in ASP.NET Core, but that approach ran at compile time or when publishing the app. Smidge helps you enable bundling and minification at runtime, similar to earlier versions of ASP.NET MVC.


Andrew Lock: Building ASP.NET Core apps using Cake in Docker

Building ASP.NET Core apps using Cake in Docker

In a previous post, I showed how you can use Docker Hub to automatically build a Docker image for a project hosted on GitHub. To do that, I created a Dockerfile that contains the instructions for how to build the project by calling the dotnet CLI.

In this post, I show an alternative way to build your ASP.NET Core app, by using Cake to build your project inside the Docker container. We'll create a Cake build script that lets you build both outside and inside Docker, while taking advantage of the layer-caching optimization inherent to Docker.

tl;dr; You can optimise your Cake build scripts for running in Docker. To jump straight to the scripts themselves click here for the Cake script and here for the Dockerfile.

Background: why bother using Cake?

Building and publishing an ASP.NET Core project involves a series of steps that you have to undertake in order:

  • dotnet restore - Restore the NuGet packages for the solution
  • dotnet build - Build the solution
  • dotnet test - Run the unit tests in a project, don't publish if the tests fail.
  • dotnet publish - Publish a project, optimising it for production

Some of those steps can be implicitly run by later ones, for example dotnet test automatically calls dotnet build and dotnet restore, but fundamentally all those steps need to be run.

Whenever you have a standard set of commands to run, automation/scripting is the answer! Oftentimes people use Bash scripts when building in Docker containers, as that's a natural scripting language for Linux, and it's available without any additional dependencies.

However, my preferred approach is to use Cake so that I can write my scripts in C#. This is even better now as you can get Intellisense for your .Cake files in Visual Studio Code. Using Cake has the added benefit of being cross platform (unlike Bash scripts), so I can run Cake "natively" on my dev machine, and also as the build script in a Docker container.

The two versions of Cake

Cake is built on top of the Roslyn compiler, and is available cross platform (Windows, macOS, Linux). There's actually two different versions of Cake:

  • Cake - Runs on .NET Framework on Windows, or Mono on macOS and Linux
  • Cake.CoreClr - Runs on .NET Core, on all platforms

You'd think the Cake.CoreClr version would be perfect for this situation - we have the .NET Core SDK installed in our Docker container, so Cake should be able to use it, right?

The problem is that currently, Cake.CoreClr targets .NET Core 1.0 - you can't use it on a machine (or Docker container) that only has the .NET Core 2.0 SDK installed. This is a known issue, but it rather negates some of the benefits of Cake.CoreClr for our situation. We'll either have to install the .NET Core 1.0 SDK or Mono in order to run Cake in our Docker containers.

For that reason, I decided to go with the full Cake version. This is mostly so that I don't need to install any prerequisites (previous versions of the .NET Core SDK) on my dev Windows machine (.NET Framework is obviously already available). In the Docker container, we can install Mono for Cake.

Installing Cake into your project

If you're new to Cake I recommend following the getting started tutorial on the Cake website. On top of that, I strongly recommend the Cake extension for Visual Studio Code. This extension lets you easily add the necessary bootstrapping and build files to your project, as well install Intellisense!

Once you've installed a bootstrapper and you have a build script, you'll be able to run it using PowerShell in Windows:

> .\build.ps1
Preparing to run build script...  
Running build script...  
Running target Default in configuration Release  

or using Bash on Linux:

$ ./build.sh
Preparing to run build script...  
Running build script...  
Running target Default in configuration Release  

Optimising Cake build scripts for Docker

Normally, when I'm building on a dev (or CI) machine directly, I use a script very similar to the one described by Muhammad Rehan Saeed in this post. However, Docker has an important feature, layer caching, that it's worth optimising for.

I won't go into how layer caching works just yet. For now it's enough to know that we want to be able to perform the same individual steps that you can with the dotnet CLI, such as restore, build, and test. Normally, each of the higher level tasks in my cake build script is dependent on earlier tasks, for example:

Task("Build")  
    .IsDependentOn("Restore")
    .Does(() => { /* do the build */ });

Task("Restore")  
    .IsDependentOn("Clean")
    .Does(() => { /* do the restore */ });

You can invoke specific tasks by passing them to the -Target parameter when you call the build script. For example, the Build task would be invoked on Windows using:

> .\build.ps1 -Target=Build

Cake works out the tree of dependencies, and performs each necessary task in order. In this case, Cake would execute Clean, Restore, and finally Build.

To make it easier to optimise the Dockerfile, I remove the IsDependentOn() dependencies from the tasks, so they only perform the Does() action. I then create "meta" tasks that are purely chains of dependencies, for example:

Task("BuildAndTest")  
    .IsDependentOn("Clean")
    .IsDependentOn("Restore")
    .IsDependentOn("Build")
    .IsDependentOn("Test");

This configuration allows fine grained control over what's executed. If you only want to execute a specific task, without its dependencies you can do so. When you want to perform a series of tasks in sequence, you can use a "meta" task instead.

The Cake build script

With that in mind, here is the full Cake build script for the example ASP.NET Core app from my last post. You can see a similar script in the example GitHub repository in the cake-in-docker branch. For simplicity, I've ignored versioning your project with VersionSuffix etc. in this script (see Muhammad's post for more detail).

// Target - The task you want to start. Runs the Default task if not specified.
var target = Argument("Target", "Default");  
var configuration = Argument("Configuration", "Release");

Information($"Running target {target} in configuration {configuration}");

var distDirectory = Directory("./dist");

// Deletes the contents of the Artifacts folder if it contains anything from a previous build.
Task("Clean")  
    .Does(() =>
    {
        CleanDirectory(distDirectory);
    });

// Run dotnet restore to restore all package references.
Task("Restore")  
    .Does(() =>
    {
        DotNetCoreRestore();
    });

// Build using the build configuration specified as an argument.
 Task("Build")
    .Does(() =>
    {
        DotNetCoreBuild(".",
            new DotNetCoreBuildSettings()
            {
                Configuration = configuration,
                ArgumentCustomization = args => args.Append("--no-restore"),
            });
    });

// Look under a 'Tests' folder and run dotnet test against all of those projects.
// Then drop the XML test results file in the Artifacts folder at the root.
Task("Test")  
    .Does(() =>
    {
        var projects = GetFiles("./test/**/*.csproj");
        foreach(var project in projects)
        {
            Information("Testing project " + project);
            DotNetCoreTest(
                project.ToString(),
                new DotNetCoreTestSettings()
                {
                    Configuration = configuration,
                    NoBuild = true,
                    ArgumentCustomization = args => args.Append("--no-restore"),
                });
        }
    });

// Publish the app to the /dist folder
Task("PublishWeb")  
    .Does(() =>
    {
        DotNetCorePublish(
            "./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj",
            new DotNetCorePublishSettings()
            {
                Configuration = configuration,
                OutputDirectory = distDirectory,
                ArgumentCustomization = args => args.Append("--no-restore"),
            });
    });

// A meta-task that runs all the steps to Build and Test the app
Task("BuildAndTest")  
    .IsDependentOn("Clean")
    .IsDependentOn("Restore")
    .IsDependentOn("Build")
    .IsDependentOn("Test");

// The default task to run if none is explicitly specified. In this case, we want
// to run everything starting from Clean, all the way up to Publish.
Task("Default")  
    .IsDependentOn("BuildAndTest")
    .IsDependentOn("PublishWeb");

// Executes the task specified in the target argument.
RunTarget(target);  

As you can see, none of the main tasks have dependencies; so Build only builds, it doesn't restore (it explicitly doesn't try and restore in fact, by using the --no-restore argument). We'll use these tasks in the next section, when we create the Dockerfile that we'll use to build our app (on Docker Hub).

A brief introduction to Docker files and layer caching

A Dockerfile is effectively a "build script" for Docker images. It contains the series of steps, starting from a "base" image, that should be run to create your image. Each step can do something like set an environment variable, copy a file, or run a script. Whenever a step is run, a new layer is created. Your final Docker image consists of all the changes introduced by the layers in your Dockerfile.

Docker is quite clever about caching these layers. Multiple images can all share the same base image, and even multiple layers, as long as nothing has changed from when the image was created.

For example, say you have the following Dockerfile:

FROM microsoft/dotnet:2.0.3-sdk

COPY ./my-solution.sln  ./  
COPY ./src ./src

RUN dotnet build  

This Dockerfile contains 4 commands:

  • FROM - This defines the base image. All later steps add layers on top of this base image.
  • COPY - Copy a file from your filesystem to the Docker image. We have two separate COPY commands. The first one copies the solution file into the root folder, the second copies the whole src directory across.
  • RUN - Executes a command in the Docker image, in this case dotnet build.

When you build a Docker image, Docker pulls the base image from a public (or private) registry like Docker Hub, and applies the changes defined in the Dockerfile. In this case it pulls the microsoft/dotnet:2.0.3-sdk base image, copies across the solution file, then the src directory, and finally runs dotnet build in your image.

Docker "caches" each individual layer after it has applied the changes. If you build the Docker image a second time, and haven't made any changes to my-solution.sln, Docker can just reuse the layers it created last time, up to that point. Similarly, if you haven't changed any files in src, Docker can just reuse the layer it created previously, without having to do the work again.

Optimising for this layer caching is key to having performant Docker builds - if you can structure things such that Docker can reuse results from previous runs, then you can significantly reduce the time it takes to build an image.

This was a very brief introduction to how Docker builds images. If you're new to Docker, I strongly suggest reading Steve Gordon's post series on Docker for .NET developers, as he explains it all a lot more clearly and in greater detail than I just have!

The Dockerfile I will show shortly uses a feature called multi-stage builds. This lets you use multiple base images to build your Docker images, so your final image is as small as possible. Typically, applications require many more dependencies to build them than to run them. Multi-stage builds effectively allow you to build your image in a large image with many dependencies installed, and then copy your published app to a small, lightweight container to run. Scott Hanselman has a great post on this which is worth checking out for more details.

The Dockerfile - calling Cake inside Docker

In this section I show you what you've been waiting for: the actual Dockerfile that uses Cake to build an ASP.NET Core app. I'll start by showing the whole file to give you some context, then I'll walk through each command to explain why it's there and what it does.

# Build image
FROM microsoft/aspnetcore-build:2.0.3 AS builder

# Install mono for Cake
ENV MONO_VERSION 5.4.1.6

RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF

RUN echo "deb http://download.mono-project.com/repo/debian stretch/snapshots/$MONO_VERSION main" > /etc/apt/sources.list.d/mono-official.list \  
  && apt-get update \
  && apt-get install -y mono-runtime \
  && rm -rf /var/lib/apt/lists/* /tmp/*

RUN apt-get update \  
  && apt-get install -y binutils curl mono-devel ca-certificates-mono fsharp mono-vbnc nuget referenceassemblies-pcl \
  && rm -rf /var/lib/apt/lists/* /tmp/*

WORKDIR /sln

COPY ./build.sh ./build.cake ./NuGet.config   ./

# Install Cake, and compile the Cake build script
RUN ./build.sh -Target=Clean

# Copy all the csproj files and restore to cache the layer for faster builds
# The dotnet_build.sh script does this anyway, so superfluous, but docker can 
# cache the intermediate images so _much_ faster
COPY ./aspnetcore-in-docker.sln ./  
COPY ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj  ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj  
COPY ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj  ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj  
COPY ./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj  ./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj  
RUN sh ./build.sh -Target=Restore

COPY ./test ./test  
COPY ./src ./src

# Build, Test, and Publish
RUN ./build.sh -Target=Build && ./build.sh -Target=Test && ./build.sh -Target=PublishWeb

#App image
FROM microsoft/aspnetcore:2.0.3  
WORKDIR /app  
ENV ASPNETCORE_ENVIRONMENT Production  
ENTRYPOINT ["dotnet", "AspNetCoreInDocker.Web.dll"]  
COPY --from=builder ./sln/dist .  

This file is for the same solution I described in my previous post, which contains 3 projects:

  • AspNetCoreInDocker.Lib - A .NET Standard class library project
  • AspNetCoreInDocker.Web - A .NET Core app based on the default templates
  • AspNetCoreInDocker.Web.Tests - A .NET Core xUnit test project.

It fundamentally builds the app in the normal way - it installs the prerequisites, restores nuget packages, builds and tests the solution, and finally publishes the app. We just do all that inside of Docker, using Cake.

Dissecting the Dockerfile

This post is already pretty long, but I wanted to walk through the Dockerfile and explain why it's written the way it is.

FROM microsoft/aspnetcore-build:2.0.3 AS builder  

The first line in the Dockerfile defines the base image. I've used the microsoft/aspnetcore-build:2.0.3 base image, which has the prerequisites for .NET Core and the 2.0.3 SDK already installed. I also give it the name builder, which we can refer to later when we build our runtime image as part of the multi-stage build.

ENV MONO_VERSION 5.4.1.6

RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF

RUN echo "deb http://download.mono-project.com/repo/debian stretch/snapshots/$MONO_VERSION main" > /etc/apt/sources.list.d/mono-official.list \  
  && apt-get update \
  && apt-get install -y mono-runtime \
  && rm -rf /var/lib/apt/lists/* /tmp/*

RUN apt-get update \  
  && apt-get install -y binutils curl mono-devel ca-certificates-mono fsharp mono-vbnc nuget referenceassemblies-pcl \
  && rm -rf /var/lib/apt/lists/* /tmp/*

The next big chunk of the Dockerfile is installing Mono. As discussed previously, I'm using the "full" version of Cake, which runs on .NET Framework on Windows and Mono on Linux/macOS, so I need to install Mono into our build image.

The installation script shown above is pulled from the official Mono Dockerfiles for both the mono:5.4.1.6 image and the mono:5.4.1.6-slim image it's based on. By installing Mono before anything else, Docker can cache the output layer, and will not need to perform the (relatively slow) installation on my machine again, even if my ASP.NET Core app completely changes.

WORKDIR /sln

COPY ./build.sh ./build.cake ./

RUN ./build.sh -Target=Clean  

After installing Mono, I copy across my Cake bootstrapper (build.sh) and my Cake build script (build.cake), and run the first of the Cake tasks, Clean. This task just deletes anything in the output dist directory.

This probably seems superfluous - we're building a clean Docker image, so that directory won't even exist, let alone have anything in it.

Instead, I include this task here as it will cause the bootstrapper to install Cake and compile the build script. Given the bootstrapper and .cake file will rarely change, we can again take advantage of Docker layer caching to avoid taking the performance hit of installing Cake every time we change an unrelated file.
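
For reference, the Clean task in a Cake script is typically only a few lines. The sketch below is illustrative rather than the exact script from the sample repository - it assumes the published output lives in a dist directory, as described in this post:

var distDirectory = Directory("./dist");

Task("Clean")
    .Does(() =>
    {
        // Delete anything left over in the output directory from a previous build
        CleanDirectory(distDirectory);
    });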

COPY ./aspnetcore-in-docker.sln ./  
COPY ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj  ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj  
COPY ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj  ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj  
COPY ./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj  ./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj  
RUN ./build.sh -Target=Restore  

In this step, I copy across the solution file, and all of the project files into their respective folders. We can then run the Cake Restore task, which runs dotnet restore.

The project files will generally only change when you change a NuGet package, or perform a major revision like adding or removing a project. By specifically copying these across first, Docker can cache the "restored" solution layer, even though it doesn't have the solution source code in the image yet. That way, if we change a source code file, we don't need to go through the restore process again, we can just use the cached layer.

This is the one part of the file that frustrates me. In order to preserve the correct directory structure, you have to explicitly copy each file to its destination. Ideally, you could do something like COPY ./**/*.csproj ./ but unfortunately that doesn't work.

COPY ./test ./test  
COPY ./src ./src

RUN ./build.sh -Target=Build && ./build.sh -Target=Test && ./build.sh -Target=PublishWeb  

Now we're into the meat of the file. At this point I copy across all the remaining files in the src and test directories, and run the Build, Test, and PublishWeb tasks. Pretty much any changes we make are going to affect these layers, so there's not a lot of point in splitting them into cacheable layers. Instead, I just run them all in one go. If any of them fail, the whole build fails.

Once this layer is complete, we'll have built our app, tested it, and published it to the /sln/dist directory in our "builder" docker image. All we need to do now is copy the output to the runtime base image.
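
As a rough idea of what those tasks look like, the sketch below uses Cake's dotnet CLI aliases. It is not the exact script from the repository - the project paths are simply taken from the solution layout described above:

Task("Build")
    .Does(() =>
    {
        // Build the whole solution in Release mode
        DotNetCoreBuild("./aspnetcore-in-docker.sln",
            new DotNetCoreBuildSettings { Configuration = "Release" });
    });

Task("Test")
    .Does(() =>
    {
        // Run the xUnit tests in the test project
        DotNetCoreTest("./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj",
            new DotNetCoreTestSettings { Configuration = "Release" });
    });

Task("PublishWeb")
    .Does(() =>
    {
        // Publish the web app to the dist directory that the runtime image copies
        DotNetCorePublish("./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj",
            new DotNetCorePublishSettings { Configuration = "Release", OutputDirectory = "./dist" });
    });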

FROM microsoft/aspnetcore:2.0.3  
WORKDIR /app  
ENV ASPNETCORE_ENVIRONMENT Production  
ENTRYPOINT ["dotnet", "AspNetCoreInDocker.Web.dll"]  
COPY --from=builder ./sln/dist .  

I use the microsoft/aspnetcore:2.0.3 base image for the runtime image, set the default hosting environment to Production (not strictly necessary, but I like to be explicit), and define the ENTRYPOINT for the image. The ENTRYPOINT is the command that will be run by default when a container is created from the image - in this case, dotnet AspNetCoreInDocker.Web.dll.

Finally, I copy the publish output from the builder image into the runtime image, and we're done! To build the image, simply push to GitHub if you're using the automated builds from my previous post, or alternatively use docker build:

docker build .  

We now have 3 ways to build the project:

  • Using Cake on Windows with .\build.ps1
  • Using Cake on Linux (if Mono is installed) with ./build.sh
  • Using Cake in Docker with docker build .

You can see a similar example in the sample repository on GitHub, and the output Docker image on Docker Hub with the cake-in-docker tag.

Summary

In this post I described my motivation for using Cake in Docker to build ASP.NET Core apps, why I chose the Mono version of Cake over Cake.CoreClr, and provided an example build script. I discussed at length how both the Cake build script and the Dockerfile are optimised to take advantage of Docker's layer caching mechanism, and walked through an example Dockerfile that uses Cake inside Docker to build an ASP.NET Core app.


Dominick Baier: Sponsoring IdentityServer

Brock and I have been working on free identity & access control related libraries since 2009. This all started as a hobby project, and I can very well remember the day when I said to Brock that we can only really claim to understand the protocols if we implement them ourselves. That’s what we did.

We are now at a point where the IdentityServer OSS project has reached both enough significance and complexity that we need to find a sustainable way to manage it. This includes dealing with issues, questions and bug reports, as well as feature requests and pull requests.

That’s why we decided to set up a sponsorship page on Patreon. So if you like the project and want to support us – or even more importantly, if you work for a company that relies on IdentityServer – please consider supporting us. This will allow us to maintain this level of commitment.

Thank you!


Andrew Lock: Creating a .NET Standard Roslyn Analyzer in Visual Studio 2017

Creating a .NET Standard Roslyn Analyzer in Visual Studio 2017

In this post, I give a brief introduction to Roslyn analyzers, what they're for, and how to create a simple analyzer in Visual Studio 2017. I'll show how to create a code analyzer that targets .NET Standard using the new Visual Studio 2017 (15.5) templates, and show how you can debug and test your analyzer using Visual Studio. As the code in Roslyn analyzers can be a bit complex, I'll look at the actual code for the analyzer in a subsequent post - this post just focuses on getting up and running.

This is my post for the C# Advent Calendar. Be sure to check it out in the run up to Christmas for a new post every day!

Why create a Roslyn Analyzer?

I was recently investigating some strange bugs which would only sporadically manifest in an ASP.NET app. Long story short, the issue was eventually traced back to a Task not being awaited. This was causing concurrency issues that were hard to spot in the code, as everything compiled correctly. For example:

public class TestClass  
{
    public async Task DoSomethingAsync()
    {
        var theValue = await SomeLongTaskAsync();
        var aTask = SomeOtherTaskAsync(); // not awaited
    }
}

By default, you will get compiler warnings if you don't use await inside an async method. The problem was, we were using await for some of the calls in the offending method, just not all of them. By awaiting a single Task, the compiler was satisfied, and no warning was issued for the second, un-awaited call.
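
For completeness, the fix itself is trivial once the problem is spotted - await (or otherwise explicitly handle) the second task too:

public class TestClass  
{
    public async Task DoSomethingAsync()
    {
        var theValue = await SomeLongTaskAsync();
        await SomeOtherTaskAsync(); // now awaited, so completion and exceptions are observed
    }
}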

As an aside, this is one of the main arguments for preferring the Async suffix for async methods. Even with a rich IDE like Visual Studio, the issue in the above code was not picked up - when reviewing code statically (e.g. on this blog or in GitHub), the Async suffix is the only indication that there's anything awry in the second call.

Once we identified the problem, the question was how to prevent it happening again. Naming conventions and code-reviews can go some way towards mitigating the issue, but it seemed like there should be a more robust technical solution for detecting un-awaited tasks. That solution was a Roslyn Analyzer.

In this post I'll introduce analyzers in general, and show how to get started. In a later post, I'll show the solution we came up with for the above problem.

What are Roslyn analyzers?

Analyzers are effectively extensions to the C# Roslyn compiler, which let you add extra warnings and errors to your code, in addition to the standard compiler errors. You can use these to enforce naming styles and code conventions, or to flag particular code patterns, such as the missing await in the above code.
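
To give a feel for the shape of an analyzer before we look at any real code (the actual analyzer is the subject of the next post), the skeleton below shows the two pieces every analyzer has: a DiagnosticDescriptor describing the warning, and an Initialize method that registers a callback with the compiler. The diagnostic ID, messages, and the syntax node being inspected here are placeholders, not the values used by the template or by the await analyzer:

using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public class MyFirstAnalyzer : DiagnosticAnalyzer
{
    // Placeholder descriptor - the ID, title and message are illustrative only
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        id: "MY0001",
        title: "Example rule",
        messageFormat: "Example diagnostic for '{0}'",
        category: "Usage",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
        => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        // Ask Roslyn to call us back for every invocation expression it compiles
        context.RegisterSyntaxNodeAction(AnalyzeNode, SyntaxKind.InvocationExpression);
    }

    private static void AnalyzeNode(SyntaxNodeAnalysisContext context)
    {
        // Inspect context.Node / context.SemanticModel here, and report a diagnostic
        // when the rule is violated, e.g.:
        // context.ReportDiagnostic(Diagnostic.Create(Rule, context.Node.GetLocation(), "details"));
    }
}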

Analyzers can be distributed either as a NuGet package, or as a VSIX extension for Visual Studio. If you install the analyzer as a VSIX extension, it'll automatically be used in all of your projects, but other people building your projects won't use the analyzer. On the other hand, if you reference the analyzer as a NuGet package in a project, everyone who builds your project will see the same compiler warnings and errors - you just have to remember to install it.

In Visual Studio, analyzers installed as extensions or as NuGet packages hook into the UI. You'll see green/yellow/red squigglies depending on the severity associated with your analyzer, and you can even associate your analyzer with a Code Fix to perform automatic refactorings:

Creating a .NET Standard Roslyn Analyzer in Visual Studio 2017

If you're using an editor other than Visual Studio, you won't get these UI enhancements, but by referencing the NuGet package you'll still get the compiler warnings and errors when you build your project.

If you're writing cross-platform code (or even if you're not) I strongly suggest installing the API Analyzer. This will highlight framework API calls that are deprecated, or which might throw PlatformNotSupportedExceptions on certain platforms.

Creating a Roslyn analyzer

Up until recently, creating a Roslyn Analyzer that could be consumed anywhere was a bit of a chore. You had to install various extensions from the Visual Studio marketplace, and even then the project templates produced PCL projects, which require a different build chain from normal .NET Standard projects. When I created my first analyzer, half the battle was converting the project to be compatible with .NET Standard.

I was therefore very happy when writing up this post to see that the Analyzer projects in Visual Studio 2017 are now .NET Standard by default! I think that happened in update 15.5, but I'm not 100% sure. Either way, it makes the experience much smoother, so I'm going to assume you're already on Visual Studio 2017 15.5 for this post.

1. Install the Visual Studio Extension Development Workload

The first step is to install the necessary components for building Analyzers and VSIX extensions in Visual Studio. You don't have to install the VSIX components, but even if you're always going to distribute your analyzer as a NuGet package, they make the debugging experience much smoother, as you'll see later.

Open the Visual Studio Installer program from your start menu, and click the Modify button next to your installed version of Visual Studio:

Creating a .NET Standard Roslyn Analyzer in Visual Studio 2017

From the Workloads page, scroll to the bottom and select the Visual Studio extension development workload. This installs the .NET Compiler Platform SDK, the Visual Studio SDK, and other prerequisites. Depending on which other workloads you have installed it should use an additional 150-300MB of drive space.

Creating a .NET Standard Roslyn Analyzer in Visual Studio 2017

Once that's installed, open Visual Studio, and we'll create our first analyzer.

2. Create an Analyzer with Code Fix

There are a variety of new templates made available by installing the Visual Studio workload, but the one we're interested in here is the Analyzer with Code Fix (.NET Standard). You can find it under Visual C# > Extensibility:

Creating a .NET Standard Roslyn Analyzer in Visual Studio 2017

Give your project a name (the imaginative Analyzer1 in my case), and let Visual Studio do its thing. Once the template is created (much faster than usual in update 15.5, I have to admit) you'll have a solution with three projects:

  • Analyzer1 - This is the Roslyn analyzer and Code Fix project. It contains the code that you will package into a NuGet and deploy.
  • Analyzer1.Test - Tests for the analyzer. We'll look at these in more detail shortly - but effectively you pass some C# code stored in a string to the analyzer under test, and check you get the expected results.
  • Analyzer1.Vsix - A Visual Studio VSIX extension project that can be used to deploy your analyzer. More importantly (for me) it can also be used to Debug your analyzer in action inside an instance of Visual Studio.

The default project template creates a basic, but complete, Analyzer and Code Fix which requires that all class names be entirely uppercase (I said it's complete, not useful). The Code Fix lets you click the light bulb (or Ctrl+.) when the analyzer detects a class with lowercase letters, and replace the type name with its uppercase equivalent.

Creating a .NET Standard Roslyn Analyzer in Visual Studio 2017

At this point, rather than digging into the analyzer code itself, I'm going to show Visual Studio's party trick - debugging an analyzer while it's running in another instance of Visual Studio!

Debugging your analyzer inside Visual Studio

If you're new to working with the Roslyn compiler directly, then the code in an analyzer can be daunting. Lots of types with somewhat obscure names can make it difficult to get a foothold. For me, one of the best ways to get to grips with it was the ability to debug my code as it was running in another instance of Visual Studio. That sounds like it would be a pain to set up, but it actually works out-of-the-box - just press F5!

Make sure that the VSIX project is your solution's current startup project (it's shown in bold in Solution Explorer and it's listed in the Startup Projects box next to the Debug button):

Creating a .NET Standard Roslyn Analyzer in Visual Studio 2017

When you hit Start or press F5 to debug, your project is compiled, and a Visual Studio extension is created. Visual Studio then starts up a new copy of Visual Studio and installs the extension into it.

I haven't looked into the specifics, but when you debug a VSIX project in this way, I believe it uses a different profile to your normal Visual Studio profile. When you first debug, you'll see the Visual Studio startup screen asking you to set up your environment:

Creating a .NET Standard Roslyn Analyzer in Visual Studio 2017

Note the small black debugging bar at the top of the window - this is an easy way to tell whether you're looking at your main Visual Studio window or the debugging window! As this is a separate Visual Studio profile, your recent projects list will be empty:

Creating a .NET Standard Roslyn Analyzer in Visual Studio 2017

If you check in Tools > Extensions and Updates you may also find that some of your extensions are missing. However, importantly you'll see that our analyzer, Analyzer1, has been installed:

Creating a .NET Standard Roslyn Analyzer in Visual Studio 2017

Don't worry, all of these changes are only in the Debug session of Visual Studio - your existing Visual Studio instance and settings won't be affected. Any changes you make to the Debug instance are persisted across sessions though.

If you create a new Console project, you'll see that the analyzer immediately picks up the lowercase letters in the Program class name, and gives it a green squiggly - this is the analyzer at work. In the quick fix light bulb menu, you'll see an option for Make uppercase - this is the Code Fix in action.

Creating a .NET Standard Roslyn Analyzer in Visual Studio 2017

This is all very nice, and lets you test your analyzer in action, but you can also properly debug the analyzer code. If you set a breakpoint in your analyzer project, you can step through the code as it's executed in the other instance of Visual Studio. Pretty cool :)

Creating a .NET Standard Roslyn Analyzer in Visual Studio 2017

This approach is great for experimenting and exploring issues, but you can also unit test your analyzers, as shown in the Analyzer1.Test project.

Testing your Analyzer and CodeFix

Roslyn is effectively a "compiler as a service". You can pass it a string containing C# code, and it will compile it, allowing you to ask semantic questions about the contents, including running your analyzer.
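
As a rough illustration of that idea (this isn't code from the test project, just the underlying Roslyn API), you can parse and compile a string with a handful of calls:

using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

// Parse a string of C# into a syntax tree
var tree = CSharpSyntaxTree.ParseText("class typeName { }");

// Compile it in memory, referencing the core library so types like 'object' resolve
var compilation = CSharpCompilation.Create("InMemoryAssembly")
    .AddReferences(MetadataReference.CreateFromFile(typeof(object).Assembly.Location))
    .AddSyntaxTrees(tree);

// Ask semantic questions about the code, e.g. retrieve the compiler diagnostics
var diagnostics = compilation.GetDiagnostics();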

Creating a unit test for the sample project is simple, thanks to some helper classes added to the project by default, in particular the CodeFixVerifier base class. Simply create a string containing the C# code to test, define the expected analyzer results, and call VerifyCSharpDiagnostic() as shown below.

[TestClass]
public class UnitTest : CodeFixVerifier  
{
    [TestMethod]
    public void TestMethod()
    {
        var test = @"
using System;  
using System.Collections.Generic;  
using System.Linq;  
using System.Text;  
using System.Threading.Tasks;  
using System.Diagnostics;

namespace ConsoleApplication1  
{
    class TypeName
    {   
    }
}";
        var expected = new DiagnosticResult
        {
            Id = "Analyzer1",
            Message = String.Format("Type name '{0}' contains lowercase letters", "TypeName"),
            Severity = DiagnosticSeverity.Warning,
            Locations =
                new[] {
                        new DiagnosticResultLocation("Test0.cs", line: 11, column: 15)
                    }
        };

        VerifyCSharpDiagnostic(test, expected);
    }
}

This compiles the provided string, and runs your analyzer against the compilation result. As you can see, this test verifies that our analyzer flags the class TypeName as containing lowercase letters, and defines the position in the string (which is given the placeholder name "Test0.cs") at which the warning should be reported.

You can run similar tests for the Code Fix, in addition to the analyzer. Simply pass the expected string after the Code Fix has been applied to the VerifyCSharpFix() method. After the Code Fix has executed, the TypeName class has been renamed to TYPENAME:

[TestClass]
public class UnitTest : CodeFixVerifier  
{
    [TestMethod]
    public void TestMethod()
    {
        // var test = // defined as above.

        var fixtest = @"
using System;  
using System.Collections.Generic;  
using System.Linq;  
using System.Text;  
using System.Threading.Tasks;  
using System.Diagnostics;

namespace ConsoleApplication1  
{
    class TYPENAME
    {   
    }
}";
        VerifyCSharpFix(test, fixtest);
    }
}

Summary

In this post I showed how to install the necessary components to build Roslyn analyzers, why you might want to, and how you can debug and test your analyzers using the default project templates. In the next post, I'll take a look at the code in the default analyzer template, and look at building the await analyzer described at the beginning of this post.


Anuraj Parameswaran: Unit Testing ASP.NET Core Tag Helper

This post is about unit testing an ASP.NET Core tag helper. Tag Helpers enable server-side code to participate in creating and rendering HTML elements in Razor files. Unlike HTML helpers, Tag Helpers reduce the explicit transitions between HTML and C# in Razor views.


Anuraj Parameswaran: Implementing feature toggle in ASP.NET Core

This post is about implementing feature toggles in ASP.NET Core. A feature toggle (also feature switch, feature flag, feature flipper, conditional feature, etc.) is a technique in software development that attempts to provide an alternative to maintaining multiple source-code branches (known as feature branches), such that a feature can be tested even before it is completed and ready for release. A feature toggle is used to hide, enable or disable a feature at run time. For example, during the development process, a developer can enable the feature for testing and disable it for other users.
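
As a purely illustrative sketch (not the implementation from the linked post), the simplest possible feature toggle is just a configuration value checked at run time - the Features:NewDashboard key and controller below are made-up examples:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;

public class DashboardController : Controller
{
    private readonly IConfiguration _configuration;

    public DashboardController(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public IActionResult Index()
    {
        // Read a hypothetical "Features:NewDashboard" flag from configuration,
        // so the feature can be switched on or off without redeploying code
        var newDashboardEnabled = _configuration.GetValue<bool>("Features:NewDashboard");
        return newDashboardEnabled ? View("NewDashboard") : View();
    }
}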


Damien Bowden: Sending Direct Messages using SignalR with ASP.NET core and Angular

This article shows how SignalR can be used to send direct messages between different clients, using ASP.NET Core to host the SignalR Hub and Angular to implement the clients.

Code: https://github.com/damienbod/AspNetCoreAngularSignalRSecurity

Other posts in this series:

When the application is started, different clients can log in using an email, if already registered, and can send direct messages from one SignalR client to another using the email of the user which was used to sign in. All messages are sent over a connection authenticated with a JWT token, which is used to validate the identity.

The latest Microsoft.AspNetCore.SignalR NuGet package can be added to the ASP.NET Core project in the csproj file, or by using the Visual Studio NuGet package manager to add the package.

<PackageReference Include="Microsoft.AspNetCore.SignalR" Version="1.0.0-alpha2-final" />

A single SignalR Hub is used to add the logic to send the direct messages between the clients. The Hub is protected using the bearer token authentication scheme, which is defined in the Authorize attribute. A client can leave or join using the Context.User.Identity.Name, which is configured to use the email of the Identity. When a user joins, the connectionId is saved to the in-memory database, which can then be used to send the direct messages. All other online clients are sent a message with the new user's data. The joining client is sent the complete list of existing clients.

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.SignalR;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace ApiServer.SignalRHubs
{
    [Authorize(AuthenticationSchemes = "Bearer")]
    public class UsersDmHub : Hub
    {
        private UserInfoInMemory _userInfoInMemory;

        public UsersDmHub(UserInfoInMemory userInfoInMemory)
        {
            _userInfoInMemory = userInfoInMemory;
        }

        public async Task Leave()
        {
            _userInfoInMemory.Remove(Context.User.Identity.Name);
            await Clients.AllExcept(new List<string> { Context.ConnectionId }).InvokeAsync(
                   "UserLeft",
                   Context.User.Identity.Name
                   );
        }

        public async Task Join()
        {
            if (!_userInfoInMemory.AddUpdate(Context.User.Identity.Name, Context.ConnectionId))
            {
                // new user

                var list = _userInfoInMemory.GetAllUsersExceptThis(Context.User.Identity.Name).ToList();
                await Clients.AllExcept(new List<string> { Context.ConnectionId }).InvokeAsync(
                    "NewOnlineUser",
                    _userInfoInMemory.GetUserInfo(Context.User.Identity.Name)
                    );
            }
            else
            {
                // existing user joined again
                
            }

            await Clients.Client(Context.ConnectionId).InvokeAsync(
                "Joined",
                _userInfoInMemory.GetUserInfo(Context.User.Identity.Name)
                );

            await Clients.Client(Context.ConnectionId).InvokeAsync(
                "OnlineUsers",
                _userInfoInMemory.GetAllUsersExceptThis(Context.User.Identity.Name)
            );
        }

        public Task SendDirectMessage(string message, string targetUserName)
        {
            var userInfoSender = _userInfoInMemory.GetUserInfo(Context.User.Identity.Name);
            var userInfoReciever = _userInfoInMemory.GetUserInfo(targetUserName);
            return Clients.Client(userInfoReciever.ConnectionId).InvokeAsync("SendDM", message, userInfoSender);
        }
    }
}

The UserInfoInMemory is used as an in-memory database, which is nothing more than a ConcurrentDictionary to manage the online users.

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

namespace ApiServer.SignalRHubs
{
    public class UserInfoInMemory
    {
        private ConcurrentDictionary<string, UserInfo> _onlineUser { get; set; } = new ConcurrentDictionary<string, UserInfo>();

        public bool AddUpdate(string name, string connectionId)
        {
            var userAlreadyExists = _onlineUser.ContainsKey(name);

            var userInfo = new UserInfo
            {
                UserName = name,
                ConnectionId = connectionId
            };

            _onlineUser.AddOrUpdate(name, userInfo, (key, value) => userInfo);

            return userAlreadyExists;
        }

        public void Remove(string name)
        {
            UserInfo userInfo;
            _onlineUser.TryRemove(name, out userInfo);
        }

        public IEnumerable<UserInfo> GetAllUsersExceptThis(string username)
        {
            return _onlineUser.Values.Where(item => item.UserName != username);
        }

        public UserInfo GetUserInfo(string username)
        {
            UserInfo user;
            _onlineUser.TryGetValue(username, out user);
            return user;
        }
    }
}

The UserInfo class is used to save the ConnectionId from the SignalR Hub, and the user name.

namespace ApiServer.SignalRHubs
{
    public class UserInfo
    {
        public string ConnectionId { get; set; }
        public string UserName { get; set; }
    }
}
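
For completeness, the hub and the in-memory store also need to be registered in the Startup class. The exact extension methods moved around between the SignalR previews, but with the alpha2 package the registration looks roughly like the sketch below (the usersdm path matches the URL used by the Angular client later in this post):

public void ConfigureServices(IServiceCollection services)
{
    // The user/connection map is shared by all hub connections, so it must be a singleton
    services.AddSingleton<UserInfoInMemory>();
    services.AddSignalR();
    // ... authentication and MVC registration as shown below
}

public void Configure(IApplicationBuilder app)
{
    app.UseAuthentication();

    // Map the hub to the usersdm path that the Angular client connects to
    app.UseSignalR(routes =>
    {
        routes.MapHub<UsersDmHub>("usersdm");
    });

    app.UseMvc();
}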

The JWT Bearer token authentication is configured in the Startup class to read the token from the URL query parameters.

var tokenValidationParameters = new TokenValidationParameters()
{
	ValidIssuer = "https://localhost:44318/",
	ValidAudience = "dataEventRecords",
	IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes("dataEventRecordsSecret")),
	NameClaimType = "name",
	RoleClaimType = "role", 
};

var jwtSecurityTokenHandler = new JwtSecurityTokenHandler
{
	InboundClaimTypeMap = new Dictionary<string, string>()
};

services.AddAuthentication(IdentityServerAuthenticationDefaults.AuthenticationScheme)
.AddJwtBearer(options =>
{
	options.Authority = "https://localhost:44318/";
	options.Audience = "dataEventRecords";
	options.IncludeErrorDetails = true;
	options.SaveToken = true;
	options.SecurityTokenValidators.Clear();
	options.SecurityTokenValidators.Add(jwtSecurityTokenHandler);
	options.TokenValidationParameters = tokenValidationParameters;
	options.Events = new JwtBearerEvents
	{
		OnMessageReceived = context =>
		{
			if ( (context.Request.Path.Value.StartsWith("/loo")) || (context.Request.Path.Value.StartsWith("/usersdm")) 
				&& context.Request.Query.TryGetValue("token", out StringValues token)
			)
			{
				context.Token = token;
			}

			return Task.CompletedTask;
		},
		OnAuthenticationFailed = context =>
		{
			var te = context.Exception;
			return Task.CompletedTask;
		}
	};
});

Angular SignalR Client

The Angular SignalR client is implemented using the npm package “@aspnet/signalr-client”: “1.0.0-alpha2-final”

An ngrx store is used to manage the state sent to, and received from, the API. All SignalR messages are sent using the DirectMessagesService Angular service. This service is called from the ngrx effects, or sends the received information to the reducer of the ngrx store.

import 'rxjs/add/operator/map';
import { Subscription } from 'rxjs/Subscription';

import { HttpHeaders } from '@angular/common/http';
import { Injectable } from '@angular/core';

import { HubConnection } from '@aspnet/signalr-client';
import { Store } from '@ngrx/store';
import * as directMessagesActions from './store/directmessages.action';
import { OidcSecurityService } from 'angular-auth-oidc-client';
import { OnlineUser } from './models/online-user';

@Injectable()
export class DirectMessagesService {

    private _hubConnection: HubConnection;
    private headers: HttpHeaders;

    isAuthorizedSubscription: Subscription;
    isAuthorized: boolean;

    constructor(
        private store: Store<any>,
        private oidcSecurityService: OidcSecurityService
    ) {
        this.headers = new HttpHeaders();
        this.headers = this.headers.set('Content-Type', 'application/json');
        this.headers = this.headers.set('Accept', 'application/json');

        this.init();
    }

    sendDirectMessage(message: string, userId: string): string {

        this._hubConnection.invoke('SendDirectMessage', message, userId);
        return message;
    }

    leave(): void {
        this._hubConnection.invoke('Leave');
    }

    join(): void {
        this._hubConnection.invoke('Join');
    }

    private init() {
        this.isAuthorizedSubscription = this.oidcSecurityService.getIsAuthorized().subscribe(
            (isAuthorized: boolean) => {
                this.isAuthorized = isAuthorized;
                if (this.isAuthorized) {
                    this.initHub();
                }
            });
        console.log('IsAuthorized:' + this.isAuthorized);
    }

    private initHub() {
        console.log('initHub');
        const token = this.oidcSecurityService.getToken();
        let tokenValue = '';
        if (token !== '') {
            tokenValue = '?token=' + token;
        }
        const url = 'https://localhost:44390/';
        this._hubConnection = new HubConnection(`${url}usersdm${tokenValue}`);

        this._hubConnection.on('NewOnlineUser', (onlineUser: OnlineUser) => {
            console.log('NewOnlineUser received');
            console.log(onlineUser);
            this.store.dispatch(new directMessagesActions.ReceivedNewOnlineUser(onlineUser));
        });

        this._hubConnection.on('OnlineUsers', (onlineUsers: OnlineUser[]) => {
            console.log('OnlineUsers received');
            console.log(onlineUsers);
            this.store.dispatch(new directMessagesActions.ReceivedOnlineUsers(onlineUsers));
        });

        this._hubConnection.on('Joined', (onlineUser: OnlineUser) => {
            console.log('Joined received');
            this.store.dispatch(new directMessagesActions.JoinSent());
            console.log(onlineUser);
        });

        this._hubConnection.on('SendDM', (message: string, onlineUser: OnlineUser) => {
            console.log('SendDM received');
            this.store.dispatch(new directMessagesActions.ReceivedDirectMessage(message, onlineUser));
        });

        this._hubConnection.on('UserLeft', (name: string) => {
            console.log('UserLeft received');
            this.store.dispatch(new directMessagesActions.ReceivedUserLeft(name));
        });

        this._hubConnection.start()
            .then(() => {
                console.log('Hub connection started')
                this._hubConnection.invoke('Join');
            })
            .catch(() => {
                console.log('Error while establishing connection')
            });
    }

}

The DirectMessagesComponent is used to display the data, and to send events to the ngrx store, which in turn sends the data to the SignalR server.

import { Component, OnInit, OnDestroy } from '@angular/core';
import { Subscription } from 'rxjs/Subscription';
import { Store } from '@ngrx/store';
import { DirectMessagesState } from '../store/directmessages.state';
import * as directMessagesAction from '../store/directmessages.action';
import { OidcSecurityService } from 'angular-auth-oidc-client';
import { OnlineUser } from '../models/online-user';
import { DirectMessage } from '../models/direct-message';
import { Observable } from 'rxjs/Observable';

@Component({
    selector: 'app-direct-message-component',
    templateUrl: './direct-message.component.html'
})

export class DirectMessagesComponent implements OnInit, OnDestroy {
    public async: any;
    onlineUsers: OnlineUser[];
    onlineUser: OnlineUser;
    directMessages: DirectMessage[];
    selectedOnlineUserName = '';
    dmState$: Observable<DirectMessagesState>;
    dmStateSubscription: Subscription;
    isAuthorizedSubscription: Subscription;
    isAuthorized: boolean;
    connected: boolean;
    message = '';

    constructor(
        private store: Store<any>,
        private oidcSecurityService: OidcSecurityService
    ) {
        this.dmState$ = this.store.select<DirectMessagesState>(state => state.dm.dm);
        this.dmStateSubscription = this.store.select<DirectMessagesState>(state => state.dm.dm)
            .subscribe((o: DirectMessagesState) => {
                this.connected = o.connected;
            });

    }

    public sendDm(): void {
        this.store.dispatch(new directMessagesAction.SendDirectMessageAction(this.message, this.onlineUser.userName));
    }

    ngOnInit() {
        this.isAuthorizedSubscription = this.oidcSecurityService.getIsAuthorized().subscribe(
            (isAuthorized: boolean) => {
                this.isAuthorized = isAuthorized;
                if (this.isAuthorized) {
                }
            });
        console.log('IsAuthorized:' + this.isAuthorized);
    }

    ngOnDestroy(): void {
        this.isAuthorizedSubscription.unsubscribe();
        this.dmStateSubscription.unsubscribe();
    }

    selectChat(onlineuserUserName: string): void {
        this.selectedOnlineUserName = onlineuserUserName
    }

    sendMessage() {
        console.log('send message to:' + this.selectedOnlineUserName + ':' + this.message);
        this.store.dispatch(new directMessagesAction.SendDirectMessageAction(this.message, this.selectedOnlineUserName));
    }

    getUserInfoName(directMessage: DirectMessage) {
        if (directMessage.fromOnlineUser) {
            return directMessage.fromOnlineUser.userName;
        }

        return '';
    }

    disconnect() {
        this.store.dispatch(new directMessagesAction.Leave());
    }

    connect() {
        this.store.dispatch(new directMessagesAction.Join());
    }
}

The Angular HTML template displays the data using Angular material.

<div class="full-width" *ngIf="isAuthorized">
    <div class="left-navigation-container" >
        <nav>

            <mat-list>
                <mat-list-item *ngFor="let onlineuser of (dmState$|async)?.onlineUsers">
                    <a mat-button (click)="selectChat(onlineuser.userName)">{{onlineuser.userName}}</a>
                </mat-list-item>
            </mat-list>

        </nav>
    </div>
    <div class="column-container content-container">
        <div class="row-container info-bar">
            <h3 style="padding-left: 20px;">{{selectedOnlineUserName}}</h3>
            <a mat-button (click)="sendMessage()" *ngIf="connected && selectedOnlineUserName && selectedOnlineUserName !=='' && message !==''">SEND</a>
            <a mat-button (click)="disconnect()" *ngIf="connected">Disconnect</a>
            <a mat-button (click)="connect()" *ngIf="!connected">Connect</a>
        </div>

        <div class="content" *ngIf="selectedOnlineUserName && selectedOnlineUserName !==''">

            <mat-form-field  style="width:95%">
                <textarea matInput placeholder="your message" [(ngModel)]="message" matTextareaAutosize matAutosizeMinRows="2"
                          matAutosizeMaxRows="5"></textarea>
            </mat-form-field>
           
            <mat-chip-list class="mat-chip-list-stacked">
                <ng-container *ngFor="let directMessage of (dmState$|async)?.directMessages">

                    <ng-container *ngIf="getUserInfoName(directMessage) !== ''">
                        <mat-chip selected="true" style="width:95%">
                            {{getUserInfoName(directMessage)}} {{directMessage.message}}
                        </mat-chip>
                    </ng-container>
                       
                    <ng-container *ngIf="getUserInfoName(directMessage) === ''">
                        <mat-chip style="width:95%">
                            {{getUserInfoName(directMessage)}} {{directMessage.message}}
                        </mat-chip>
                    </ng-container>

                    </ng-container>
            </mat-chip-list>

        </div>
    </div>
</div>

Links

https://github.com/aspnet/SignalR

https://github.com/aspnet/SignalR#readme

https://github.com/ngrx

https://www.npmjs.com/package/@aspnet/signalr-client

https://dotnet.myget.org/F/aspnetcore-ci-dev/api/v3/index.json

https://dotnet.myget.org/F/aspnetcore-ci-dev/npm/

https://dotnet.myget.org/feed/aspnetcore-ci-dev/package/npm/@aspnet/signalr-client

https://www.npmjs.com/package/msgpack5


Andrew Lock: Using Docker Hub to automatically build a Docker image for ASP.NET Core apps

Using Docker Hub to automatically build a Docker image for ASP.NET Core apps

In this post I show how you can use Docker Hub's GitHub integration to automatically build a Docker image when you push to your GitHub repository. I show how to register with Docker Hub, how to setup the GitHub integration, and how to configure automated builds.

Prerequisites

I've made a number of assumptions for this post. The first, is that you're familiar with Docker in general. If you're completely new to Docker, I recommend checking out Steve Gordon's excellent series of posts on Docker, the problems it solves, and how to get started.

Secondly, I assume you have a GitHub account that you're using to host an open source/public project. The instructions in this post are geared towards this scenario, though you can also host private repositories, or use a BitBucket account if you prefer.

Finally, I assume you already have a project with a Dockerfile for building your app. You can view the sample repository I used on GitHub, but this post just focuses on the process of connecting Docker Hub and GitHub, and getting Docker Hub to build your images for you.

For this post I created a simple ASP.NET Core solution called AspNetCoreInDocker consisting of three projects:

  • AspNetCoreInDocker.Web - An ASP.NET Core 2.0 app, based on the basic template
  • AspNetCoreInDocker.Lib - A .NET Standard library project
  • AspNetCoreInDocker.Web.Tests - An xUnit test project

All of the projects were created with the default templates. The solution folder looks something like the following:

Using Docker Hub to automatically build a Docker image for ASP.NET Core apps

The Dockerfile for the solution uses a multi-stage build approach to optimise the size of the final image. The Dockerfile I'm currently using is as follows:

# Build image
FROM microsoft/dotnet:2.0.3-sdk AS builder  
WORKDIR /sln  
COPY ./aspnetcore-in-docker.sln ./NuGet.config  ./

# Copy all the csproj files and restore to cache the layer for faster builds
# The dotnet_build.sh script does this anyway, so superfluous, but docker can 
# cache the intermediate images so _much_ faster
COPY ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj  ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj  
COPY ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj  ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj  
COPY ./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj  ./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj  
RUN dotnet restore

COPY ./test ./test  
COPY ./src ./src  
RUN dotnet build -c Release --no-restore

RUN dotnet test "./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj" -c Release --no-build --no-restore

RUN dotnet publish "./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj" -c Release -o "../../dist" --no-restore

#Build the app image
FROM microsoft/aspnetcore:2.0.3  
WORKDIR /app  
ENV ASPNETCORE_ENVIRONMENT Local  
ENTRYPOINT ["dotnet", "AspNetCoreInDocker.Web.dll"]  
COPY --from=builder /sln/dist .  

This Dockerfile builds the solution, runs the tests, and publishes the app using the microsoft/dotnet:2.0.3-sdk base image. It then copies the published output to the microsoft/aspnetcore:2.0.3 base image, so the final image is much smaller, and optimised for running apps.

In the next section I'll walk through creating a Docker Hub account.

Create a Docker Hub account

Docker Hub lets you create private or public registries to host your Docker image repositories. These are the central storage location for your Docker images. If you think of your docker images like NuGet packages, then Docker Hub is the equivalent of https://www.nuget.org. You can create your own Docker registries, in the same way you can create your own NuGet feeds; Docker Hub is just the most common and public implementation.

Before you can create your own repositories on Docker Hub or use it to build Docker images, you must create a Docker Hub account. This is a simple process, which I'll walk through here.

1. Browse to https://hub.docker.com/ and sign up for a new account

Using Docker Hub to automatically build a Docker image for ASP.NET Core apps

2. You'll receive an email with an activation link. Click on it, and login with your new account

Using Docker Hub to automatically build a Docker image for ASP.NET Core apps

3. That's it, you now have a Docker Hub account! You'll find yourself dropped onto your Docker Hub dashboard. There's nothing there yet, but this will display details and the status of all your Docker repositories once you add them.

Using Docker Hub to automatically build a Docker image for ASP.NET Core apps

If you want Docker Hub to automatically build your projects from GitHub, you'll need to link your GitHub account, so Docker Hub can receive notifications when you push to your GitHub repository.

Connecting your Docker Hub account to your GitHub account

One of the services offered by Docker Hub is integration with GitHub. This lets you configure a Docker Hub repository to automatically build a new Docker image anytime you push to a source GitHub repository.

This has a number of limitations and restrictions in order to work seamlessly. Most notably, by default you must have a single Dockerfile (called Dockerfile) that contains the whole build definition. If this is the case for you, then the automated builds make creating Docker images easy. If you have more complex requirements, you can always push to your Docker Hub repository manually using docker push.

Integration with GitHub is configured using the standard OAuth mechanism.

1. Click on the Profile > Settings menu item from the top right of Docker Hub.

Using Docker Hub to automatically build a Docker image for ASP.NET Core apps

2. From the top menu, choose Linked Accounts & Services and select Link GitHub.

Using Docker Hub to automatically build a Docker image for ASP.NET Core apps

3. On the following page you're presented with two options. You can either provide read and write access to both your public and private repositories, or you can provide read-only access to your public repositories. If you go for read-only access, you'll have to do some extra work to setup automated builds (see point 6 in the following section).

Important Docker Hub needs write access to automatically configure your repositories with the required web-hooks to build your Docker images.

Using Docker Hub to automatically build a Docker image for ASP.NET Core apps

4. Authorise Docker Hub to access your GitHub repo

Using Docker Hub to automatically build a Docker image for ASP.NET Core apps

5. Once your account is linked, it will show up in Docker Hub.

Using Docker Hub to automatically build a Docker image for ASP.NET Core apps

Now that the account is linked, we can set up an automated build for our Docker images, so they are automatically published to Docker Hub.

Configuring an automated build

You can create an automated build in Docker Hub so that every push to your GitHub repository triggers a new build of a Docker image.

1. Click Create > Create Automated Build from the top menu

Using Docker Hub to automatically build a Docker image for ASP.NET Core apps

2. You're presented with a list of all the available repositories connected to your GitHub account. Choose the repository you wish to configure from the list, or search to narrow down the list:

Using Docker Hub to automatically build a Docker image for ASP.NET Core apps

3. Customise the build. You can provide a name for the Docker repository that images will be pushed to, along with a description. Docker Hub will automatically build Docker images and tag them based on the branch configuration. By default, pushing to the master branch will tag Docker images with the special latest tag, otherwise they'll be tagged with the branch name. I just left the defaults below.

Using Docker Hub to automatically build a Docker image for ASP.NET Core apps

4. View your new repository. By setting up an automated build, you've created a new Docker repository for your images, in this example at andrewlock/aspnetcore-in-docker. This page contains various details about the Docker images in the repository.

Using Docker Hub to automatically build a Docker image for ASP.NET Core apps

5. Navigate to Build Details to see which builds are currently running. When you push to GitHub, the Docker Hub integration will kick off a build of your Dockerfile, tag the image as appropriate, and push to your Docker Hub repository:

Using Docker Hub to automatically build a Docker image for ASP.NET Core apps

6. If you chose to only grant read-only access to your GitHub account, you'll need to add the Docker Hub integration to your repository manually. To do this go to the Settings page for your GitHub repository (not your profile) and choose Integrations and Services. You'll need to add the Docker service, as shown below so Docker Hub is notified of pushes to your GitHub repository.

Using Docker Hub to automatically build a Docker image for ASP.NET Core apps

We've just built and published our first Docker image on Docker Hub using an automated build process. Now we can take it for a spin!

Running the docker image

You can pull the latest version of your Docker image using docker pull, for example:

docker pull andrewlock/aspnetcore-in-docker  

This uses the latest tag by default, which always matches the most recently built image from the master branch:

$ docker pull andrewlock/aspnetcore-in-docker
Using default tag: latest  
latest: Pulling from andrewlock/aspnetcore-in-docker  
3e17c6eae66c: Already exists  
4041d8a28951: Already exists  
f8ad8f42d05d: Already exists  
55b6ebe9b140: Already exists  
83778bf3f266: Already exists  
830e558d106a: Pull complete  
cd471fda7e3f: Pull complete  
Digest: sha256:5fb4de0d2d30d424af8cf085e7dba5570f54b7ca353c1cbd6c82dbbe1dab334c  
Status: Downloaded newer image for andrewlock/aspnetcore-in-docker:latest  

We can run the container in the background and bind it to port 5000 using:

 docker run -d -p 5000:80 andrewlock/aspnetcore-in-docker

Et voilà, we have an ASP.NET Core application, with the source code in GitHub, automatically built using Docker Hub, running locally in a Docker container:

Using Docker Hub to automatically build a Docker image for ASP.NET Core apps

Summary

In this post I showed how to create a Docker Hub account, how to set up integration with GitHub, and how to add automated builds, so that pushing to a GitHub repository causes a new Docker image to be pushed to your repository in Docker Hub. The main point to be aware of is that you need to provide write access for Docker Hub, so that it can configure the integration for your app. If you only want to provide read-only access to your repositories, you'll need to configure the integration yourself.


Anuraj Parameswaran: Building multi-tenant applications with ASP.NET Core

This post is about developing multi-tenant applications with ASP.NET Core. Multi-tenancy is an architecture in which a single instance of a software application serves multiple customers. Each customer is called a tenant. Tenants may be given the ability to customize some parts of the application.


Anuraj Parameswaran: Seed database in ASP.NET Core

This post is about how to seed database in ASP.NET Core. You may want to seed the database with initial users for various reasons. You may want default users and roles added as part of the application. In this post, we will take a look at how to  seed the database with default data.


Anuraj Parameswaran: CI build for an ASP.NET Core app

This post is about setting up continuous integration (CI) process for an ASP.NET Core app using Visual Studio Team Services (VSTS) or Team Foundation Server (TFS).


Andrew Lock: Home, home on the range: Installing Kubernetes using Rancher 2.0

Home, home on the range: Installing Kubernetes using Rancher 2.0

In this post, I give a brief introduction to Rancher and what you can use it for. Then I show how you can install it onto a host (an Ubuntu virtual machine in this case) to get Kubernetes running locally in just a few minutes.

Rancher recently released an alpha version of 2.0, which goes all-in on Kubernetes, and I wanted to give it a try. I'm just experimenting at the moment, but Rancher seems like a really promising way for you to get a local Kubernetes cluster up and running.

Note: Rancher Labs has two separate products - Rancher, and RancherOS. RancherOS is an interesting take on an OS, in which pretty much everything is containerised, but that's not what I'm looking at in this post.

What is Rancher?

I wouldn't be surprised if most people reading this post have never heard of Rancher. I certainly hadn't until quite recently. But if you've just managed to get your head around Docker, and are thinking about introducing Kubernetes for orchestration, then Rancher may be worth a look.

So what is Rancher? In their own words:

Rancher is an open source software platform that enables organizations to run and manage Docker and Kubernetes in production. With Rancher, organizations no longer have to build a container services platform from scratch using a distinct set of open source technologies. Rancher supplies the entire software stack needed to manage containers in production

That's a lot of words without a lot of meaning if you ask me.

Essentially, Rancher provides an easy way to manage your container orchestrators (such as Kubernetes, or Docker Swarm) via a web interface. It's an orchestrator for your orchestrator. I know the mention of a GUI will make a lot of die-hard console fans baulk in disgust - I like a good command line as much as the next person, but sometimes it's just nice to have a friendly UI.

How does it work?

Rancher consists of a central management service, the server, running on a host in a Docker container, and one or more additional hosts running an agent service, again in a Docker container. A host can be any Linux host, and Rancher lets you easily connect to cloud hosting like EC2, DigitalOcean, or Azure. Alternatively, you can add any host that's running a supported version of Docker.

Home, home on the range: Installing Kubernetes using Rancher 2.0

Rancher adds a layer of dockerised infrastructure to connect all of your hosts, managing storage, networking, DNS, etc. Just follow the wizard to add a new host, and Rancher will take care of setting it up and monitoring it for you.

Rancher doesn't replace your "native" orchestrator. You can still use Kubernetes to manage your applications from the dashboard or the command line. Rancher even provides a handy in-browser shell for running kubectl commands on your Kubernetes cluster:

Home, home on the range: Installing Kubernetes using Rancher 2.0

What can Rancher do for you?

So why bother with Rancher? Personally, Rancher looks like it should take a lot of the hassle out of setting up a Kubernetes cluster. It only took me a few minutes (maybe half an hour, I should have paid more attention!) to go from scratch to having a cluster running and looking at the Kubernetes dashboard.

In a similar vein, Rancher includes a variety of "apps" that you can use to quickly spin up a whole array of services. These can be simple single container apps like Ghost or they could be whole multi-container systems like Openfaas. All of these are just a few clicks away to get them running on your hosts.

Home, home on the range: Installing Kubernetes using Rancher 2.0

From the point of view of organisations, Rancher looks to have a lot of enterprise-level integration and support for things like authenticating with Active Directory, and managing access to multiple environments. I haven't looked into this aspect personally, but it looks like it should tick the right boxes to keep people happy.

How do you get started?

The one thing you need to get started is a Linux host on which you can install the Rancher server. As long as it's running a supported version of Docker, you'll be good to go. It could be a local machine, a VM, or a cloud host, and it doesn't need to be very powerful.

I decided to give Rancher a go by running it locally in a virtual machine, and installing Kubernetes, just to see what the experience was like. This post describes pretty much all the steps I went through to get set up.

Running Rancher in a local VM

The rest of this post describes how I got Rancher 2.0 running locally in a virtual machine, and deployed Kubernetes. As I mentioned before, v2.0 is only in alpha, so things might well change, but everything went very smoothly for me.

A word of warning, I'm a Windows guy, so the rest of this post may feel a bit arduous if you're more used to playing with Linux and setting up VMs than I am. This is as much a record for myself of what I did to get running, so apologies if it's a bit slow!

First things first, we need a virtual machine to run the Rancher server. If you already have a Linux host running a compatible version of Docker you can skip ahead to installing Rancher.

Setup the VM

I won't go into installing a hypervisor here - I'm using VMware Player, but you could use VirtualBox or Hyper-V, I'm sure.

1. Install Ubuntu

Rancher server v1.6 will support any modern Linux distro, but for v2.0 it may need to be Ubuntu 16.04 at the moment. I'm sure that will expand later, but it suits me.

Download Ubuntu 16.04 from https://www.ubuntu.com/download/desktop and install it in your hypervisor. Once you're at the desktop, open a terminal (CTRL+ALT+T).

As always with a new OS, first things first: install the updates. You need to run two commands:

  • sudo apt-get update - Updates the list of available packages and their versions
  • sudo apt-get upgrade - Actually installs the updates

Home, home on the range: Installing Kubernetes using Rancher 2.0

Now we have an up-to-date OS, we can install Docker.

2. Install Docker

Rancher runs inside a Docker container, so you need to install one of the compatible versions of Docker on your host. For version v2.0, the compatible versions are:

  • Docker v1.12.6
  • Docker v1.13.1
  • Docker v17.03-ce
  • Docker v17.06-ce

I went with the most recent compatible version, v17.06-ce. I followed the instructions for installing Docker from here, but I give the highlights below.

1. Install packages to allow apt to fetch updates from a repository over HTTPS:

sudo apt-get install \  
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common

2. Add Docker’s official GPG key. This is used to ensure the packages downloaded can be trusted.

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -  

3. Setup the stable Docker apt repository, so we can download Docker packages

sudo add-apt-repository \  
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

4. Run another apt-get update, to populate the package list from the stable Docker apt repository

sudo apt-get update  

5. Check the versions of Docker available using apt-cache

apt-cache madison docker-ce  

This displays a list of the available packages, the version numbers are shown in the second column. I chose the second one down, 17.06.2~ce-0~ubuntu

6. Install the specific compatible Docker version

sudo apt-get install docker-ce=17.06.2~ce-0~ubuntu  

7. Check it's installed by running docker --version

docker --version  
Docker version 17.06.2-ce, build cec0b72  

Phew. Now we have a host running Docker, we can get down to business, installing Rancher!

3. Install Rancher Server

As I've mentioned ad nauseam, Rancher runs in Docker, so installing the Rancher server is as simple as running a Docker container:

sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:preview  

If you now navigate to http://localhost:8080 in your VM, you'll see the Rancher server welcome page - easy!

4. Add a Host and install Kubernetes

Once Rancher is running, the next thing to do is to give it something to manage. If you already have a Kubernetes cluster running on Azure Container Service or Google Container Engine for example, you can simply import this cluster into Rancher. If you do that, Rancher doesn't manage the host, but you can view the status of your nodes in one place.

Instead, we're going to add our own host, and install Kubernetes on it using Rancher to do the hard work for us.

1. Click Add Host. This presents you with a variety of options for where you wish to run your Linux host. You could spin up an EC2 instance, or a DO droplet, but in this case, I'm going to add the same VM as a host, so choose Custom. This probably wouldn't be a good idea in production, but it seems to work fine for me locally.

2. Enter the IP address of your host. You need all your hosts to be able to communicate with each other - I just entered the IP of the VM (found using hostname -I).

3. When you click Save, you'll be given a script to run on your Linux host.

sudo docker run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v2.0-alpha4 http://192.168.178.129:8080/v3/scripts/E3C8476FE2408EA7070A:1464462400000:4RkyHoXcbZmWNaKEvOJeZ6bE9A  

Run this on the host (the same VM in our case). This starts the Rancher agent on your host, and includes a key so it can authenticate with the Rancher server. Once it's started successfully, you should see a message appear at the bottom saying "One new host has registered".

4. Wait for everything to start up. Once the host is added, Rancher will install Kubernetes on it, and start monitoring the health of the host. This was the slowest part, waiting for the agent to spin up all the bells and whistles that go into a Kubernetes cluster!

You can view the progress by switching to the System environment (you start in Default) from the top-right menu.

This view shows all your containers, so you just need to wait for everything to turn green.

Once everything is green, you're good to go - you're running Kubernetes! If you ignore the cruft of setting up the VM and installing Docker, that was really just two Docker commands to run. Not bad in my book.

5. View the Kubernetes dashboard

Just to prove we really have a Kubernetes cluster, let's explore a little. From the Containers page, click on the Advanced tab, which gives you a few extra options, including Launch Dashboard.

If you click Launch Dashboard, Rancher opens the Kubernetes dashboard for your cluster - proof that you really are running your own Kubernetes cluster.

Summary

In this post I provided a quick introduction to Rancher, and where it fits in the Docker and Kubernetes ecosystem. I showed how to deploy Rancher 2.0 to a local VM, to quickly get Kubernetes running. I've only played with it a little so far, but I really like how little effort it was to get started. I'll have some follow-up posts soon with further thoughts no doubt!


Anuraj Parameswaran: How to use Angular 4 with ASP.NET MVC 5

This post is about how to use Angular 4 with ASP.NET MVC5. In one of my existing projects we were using Angular 1.x, due to some plugin compatibility issues, we had to migrate to latest version of Angular. We couldn’t find any good article which talks about development and deployment aspects of Angular 4 with ASP.NET MVC.


Dominick Baier: Updated Templates for IdentityServer4

We finally found the time to put more work into our templates.

dotnet new is4empty

Creates a minimal IdentityServer4 project without a UI.

dotnet new is4ui

Adds the quickstart UI to the current project (can be added on top of is4empty, for example)

dotnet new is4inmem

Adds a basic IdentityServer with UI, test users and sample clients and resources. Shows both in-memory code and JSON configuration.

dotnet new is4aspid

Adds a basic IdentityServer that uses ASP.NET Identity for user management

dotnet new is4ef

Adds a basic IdentityServer that uses Entity Framework for configuration and state management

Installation

Install with:

dotnet new -i identityserver4.templates

If you need to reset your dotnet new list to “factory defaults”, use this command:

dotnet new --debug:reinit


Andrew Lock: Creating strongly typed xUnit theory test data with TheoryData

Creating strongly typed xUnit theory test data with TheoryData

In a recent post I described the various ways you can pass data to xUnit theory tests using attributes such as [InlineData], [ClassData], or [MemberData]. For the latter two, you create a property, method or class that returns IEnumerable<object[]>, where each object[] item contains the arguments for your theory test.

In this post, I'll show an alternative way to pass data to your theory tests by using the strongly-typed TheoryData<> class. You can use it to create test data in the same way as the previous post, but you get the advantage of compile-time type checking (as you should in C#!)

The problem with IEnumerable<object[]>

I'll assume you've already seen the previous post on how to use [ClassData] and [MemberData] attributes but just for context, this is what a typical theory test and data function might look like:

public class CalculatorTests  
{
    [Theory]
    [MemberData(nameof(Data))]
    public void CanAdd(int value1, int value2, int expected)
    {
        var calculator = new Calculator();
        var result = calculator.Add(value1, value2);
        Assert.Equal(expected, result);
    }

    public static IEnumerable<object[]> Data =>
        new List<object[]>
        {
            new object[] { 1, 2, 3 },
            new object[] { -4, -6, -10 },
            new object[] { -2, 2, 0 },
            new object[] { int.MinValue, -1, int.MaxValue },
        };
}

The test function CanAdd(value1, value2, expected) has three int parameters, and is decorated with a [MemberData] attribute that tells xUnit to load the parameters for the theory test from the Data property.

This works perfectly well, but if you're anything like me, returning an object[] just feels wrong. As we're using objects, there's nothing stopping you returning something like this:

public static IEnumerable<object[]> Data =>  
    new List<object[]>
    {
        new object[] { 1.5, 2.3m, "The value" }
    };

This compiles without any warnings or errors, even from the xUnit analyzers. The CanAdd function requires three ints, but we're returning a double, a decimal, and a string. When the test executes, you'll get the following error:

Message: System.ArgumentException : Object of type 'System.String' cannot be converted to type 'System.Int32'.  

That's not ideal. Luckily, xUnit allows you to provide the same data as a strongly typed object, TheoryData<>.

Strongly typed data with TheoryData

The TheoryData<> types provide a series of abstractions around the IEnumerable<object[]> required by theory tests. It consists of a TheoryData base class, and a number of generic derived classes TheoryData<>. The basic abstraction looks like the following:

public abstract class TheoryData : IEnumerable<object[]>  
{
    readonly List<object[]> data = new List<object[]>();

    protected void AddRow(params object[] values)
    {
        data.Add(values);
    }

    public IEnumerator<object[]> GetEnumerator()
    {
        return data.GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}

This class implements IEnumerable<object[]> but it has no other public members. Instead, the generic derived classes TheoryData<> provide a public Add<T>() method, to ensure you can only add rows of the correct type. For example, the derived class with three generic arguments looks like the following:

public class TheoryData<T1, T2, T3> : TheoryData  
{
    /// <summary>
    /// Adds data to the theory data set.
    /// </summary>
    /// <param name="p1">The first data value.</param>
    /// <param name="p2">The second data value.</param>
    /// <param name="p3">The third data value.</param>
    public void Add(T1 p1, T2 p2, T3 p3)
    {
        AddRow(p1, p2, p3);
    }
}

This type just passes the generic arguments to the protected AddRow() method, but it enforces that the types are correct, as the code won't compile if you try to pass an incorrect parameter to the Add<T>() method.

Using TheoryData with the [ClassData] attribute

First, we'll look at how to use TheoryData<> with the [ClassData] attribute. You can apply the [ClassData] attribute to a theory test, and the referenced type will be used to load the data. In the previous post, the data class implemented IEnumerable<object[]>, but we can alternatively implement TheoryData<T1, T2, T3> to ensure all the types are correct, for example:

public class CalculatorTestData : TheoryData<int, int, int>  
{
    public CalculatorTestData()
    {
        Add(1, 2, 3);
        Add(-4, -6, -10);
        Add(-2, 2, 0);
        Add(int.MinValue, -1, int.MaxValue);
        Add(1.5, 2.3m, "The value"); // will not compile!
    }
}

You can apply this to your theory test in exactly the same way as before, but this time you can be sure that every row will have the correct argument types:

[Theory]
[ClassData(typeof(CalculatorTestData))]
public void CanAdd(int value1, int value2, int expected)  
{
    var calculator = new Calculator();
    var result = calculator.Add(value1, value2);
    Assert.Equal(expected, result);
}

The main thing to watch out for here is that the CalculatorTestData class implements the correct generic TheoryData<> - there's no compile-time checking that you're referencing a TheoryData<int, int, int> instead of a TheoryData<string>, for example.
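
As a hedged illustration of that caveat (this data class is hypothetical, not part of the post's sample project), the following compiles cleanly but fails at runtime, because the theory expects three ints and each row only supplies a single string:

public class MismatchedTestData : TheoryData<string>
{
    public MismatchedTestData()
    {
        Add("not an int"); // compiles, but doesn't match the theory's (int, int, int) parameters
    }
}

[Theory]
[ClassData(typeof(MismatchedTestData))]
public void CanAdd(int value1, int value2, int expected)
{
    // xUnit fails at runtime: a single string argument can't satisfy the three int parameters
}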

Using TheoryData with the [MemberData] attribute

You can use TheoryData<> with [MemberData] attributes as well as [ClassData] attributes. Instead of referencing a static property that returns an IEnumerable<object[]>, you reference a property or method that returns a TheoryData<> object with the correct parameters.

For example, we can rewrite the Data property from the start of this post to use a TheoryData<int, int, int> object:

public static TheoryData<int, int, int> Data  
{
    get
    {
        var data = new TheoryData<int, int, int>();
        data.Add(1, 2, 3);
        data.Add(-4, -6, -10);
        data.Add(-2, 2, 0);
        data.Add(int.MinValue, -1, int.MaxValue);
        data.Add(1.5, 2.3m, "The value"); // won't compile
        return data;
    }
}

This is effectively identical to the original example, but the strongly typed TheoryData<> won't let us add invalid data.

That's pretty much all there is to it, but if the verbosity of that example bugs you, you can make use of collection initialisers and expression bodied members to give:

public static TheoryData<int, int, int> Data =>  
    new TheoryData<int, int, int>
        {
            { 1, 2, 3 },
            { -4, -6, -10 },
            { -2, 2, 0 },
            { int.MinValue, -1, int.MaxValue }
        };

As with the [ClassData] attribute, you have to manually ensure that the TheoryData<> generic arguments match the theory test parameters they're used with, but at least you can be sure all of the rows in the IEnumerable<object[]> are consistent!

Summary

In this post I described how to create strongly-typed test data for xUnit theory tests using TheoryData<> classes. By creating instances of this class instead of IEnumerable<object[]> you can be sure that each row of data has the correct types for the theory test.


Anuraj Parameswaran: Using LESS CSS with ASP.NET Core

This post is about getting started with LESS CSS with ASP.NET. Less is a CSS pre-processor, meaning that it extends the CSS language, adding features that allow variables, mixins, functions and many other techniques that allow you to make CSS that is more maintainable, themeable and extendable. Less css helps developers to avoid code duplication.


Dominick Baier: Missing Claims in the ASP.NET Core 2 OpenID Connect Handler?

The new OpenID Connect handler in ASP.NET Core 2 has a different (aka breaking) behavior when it comes to mapping claims from an OIDC provider to the resulting ClaimsPrincipal.

This is especially confusing and hard to diagnose since there are a couple of moving parts that come together here. Let’s have a look.

You can use my sample OIDC client here to observe the same results.

Mapping of standard claim types to Microsoft proprietary ones
The first annoying thing is that Microsoft still thinks they know what’s best for you by mapping the OIDC standard claims to their proprietary ones.

This can be fixed elegantly by clearing the inbound claim type map on the Microsoft JWT token handler:

JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();

A basic OpenID Connect authentication request
Next – let’s start with a barebones scenario where the client requests the openid scope only.

The first confusing thing is that Microsoft pre-populates the Scope collection on the OpenIdConnectOptions with the openid and profile scopes (don’t get me started). This means if you only want to request openid, you first need to clear the Scope collection and then add openid manually.

services.AddAuthentication(options =>
{
    options.DefaultScheme = "Cookies";
    options.DefaultChallengeScheme = "oidc";
})
    .AddCookie("Cookies", options =>
    {
        options.AccessDeniedPath = "/account/denied";
    })
    .AddOpenIdConnect("oidc", options =>
    {
        options.Authority = "https://demo.identityserver.io";
        options.ClientId = "server.hybrid";
        options.ClientSecret = "secret";
        options.ResponseType = "code id_token";
 
        options.SaveTokens = true;
                    
        options.Scope.Clear();
        options.Scope.Add("openid");
                    
        options.TokenValidationParameters = new TokenValidationParameters
        {
            NameClaimType = "name", 
            RoleClaimType = "role"
        };
    });

With the ASP.NET Core v1 handler, this would have returned the following claims: nbf, exp, iss, aud, nonce, iat, c_hash, sid, sub, auth_time, idp, amr.

In V2 we only get sid, sub and idp. What happened?

Microsoft added a new concept to their OpenID Connect handler called ClaimActions. Claim actions allow modifying how claims from an external provider are mapped (or not) to a claim in your ClaimsPrincipal. Looking at the ctor of the OpenIdConnectOptions, you can see that the handler will now skip the following claims by default:

ClaimActions.DeleteClaim("nonce");
ClaimActions.DeleteClaim("aud");
ClaimActions.DeleteClaim("azp");
ClaimActions.DeleteClaim("acr");
ClaimActions.DeleteClaim("amr");
ClaimActions.DeleteClaim("iss");
ClaimActions.DeleteClaim("iat");
ClaimActions.DeleteClaim("nbf");
ClaimActions.DeleteClaim("exp");
ClaimActions.DeleteClaim("at_hash");
ClaimActions.DeleteClaim("c_hash");
ClaimActions.DeleteClaim("auth_time");
ClaimActions.DeleteClaim("ipaddr");
ClaimActions.DeleteClaim("platf");
ClaimActions.DeleteClaim("ver");

If you want to “un-skip” a claim, you need to delete a specific claim action when setting up the handler. The following is the very intuitive syntax to get the amr claim back:

options.ClaimActions.Remove("amr");

If you want to see the raw claims from the token in the principal, you need to clear the whole claims action collection.
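
For example, a minimal sketch of that, inside the same AddOpenIdConnect options callback shown above (Clear() removes all of the configured claim actions, so nothing gets deleted or remapped):

options.ClaimActions.Clear();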

Requesting more claims from the OIDC provider
When you are requesting more scopes, e.g. profile or custom scopes that result in more claims, there is another confusing detail to be aware of.

Depending on the response_type in the OIDC protocol, some claims are transferred via the id_token and some via the userinfo endpoint. I wrote about the details here.

So first of all, you need to enable support for the userinfo endpoint in the handler:

options.GetClaimsFromUserInfoEndpoint = true;

If the claims are being returned by userinfo, ClaimActions are used again to map the claims from the returned JSON document to the principal. The following default settings are used here:

ClaimActions.MapUniqueJsonKey("sub", "sub");
ClaimActions.MapUniqueJsonKey("name", "name");
ClaimActions.MapUniqueJsonKey("given_name", "given_name");
ClaimActions.MapUniqueJsonKey("family_name", "family_name");
ClaimActions.MapUniqueJsonKey("profile", "profile");
ClaimActions.MapUniqueJsonKey("email", "email");

IOW – if you are sending a claim to your client that is not part of the above list, it simply gets ignored, and you need to do an explicit mapping. Let’s say your client application receives the website claim via userinfo (one of the standard OIDC claims, but unfortunately not mapped by Microsoft) – you need to add the mapping yourself:

options.ClaimActions.MapUniqueJsonKey("website", "website");

The same would apply for any other claims you return via userinfo.

I hope this helps. In short – you want to be explicit about your mappings, because I am sure that those default mappings will change at some point in the future which will lead to unexpected behavior in your client applications.


Damien Bowden: IdentityServer4 Localization using ui_locales and the query string

This post is part 2, following on from IdentityServer4 Localization with the OIDC Implicit Flow, where the localization was implemented using a cookie shared between the applications. That approach has its restrictions, due to cookie domain constraints, so this post shows how the optional OIDC parameter ui_locales can be used instead to pass the localization between the client application and the STS server.

Code: https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow

The ui_locales parameter, which is optional and defined in the OpenID Connect standard, can be used to pass the localization from the client application to the server application in the authorize request. The parameter is passed as a query string value.

A custom RequestCultureProvider class is implemented to handle this. The culture provider checks for the ui_locales in the query string and sets the culture if it is found. If it is not found, it checks for the returnUrl parameter. This is the parameter returned by the IdentityServer4 middleware after a redirect from the /connect/authorize endpoint. The provider then searches for the ui_locales in the parameter and sets the culture if found.

Once the culture has been set, a localization cookie is set on the server and added to the response. This will be used if the client application/user tries to log out. This is required because the culture cannot be set for the endsession endpoint.

using Microsoft.AspNetCore.Localization;
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Primitives;
using Microsoft.AspNetCore.WebUtilities;
using System.Linq;

namespace IdentityServerWithIdentitySQLite
{
    public class LocalizationQueryProvider : RequestCultureProvider
    {
        public static readonly string DefaultParamterName = "culture";

        public string QureyParamterName { get; set; } = DefaultParamterName;

        /// <inheritdoc />
        public override Task<ProviderCultureResult> DetermineProviderCultureResult(HttpContext httpContext)
        {
            if (httpContext == null)
            {
                throw new ArgumentNullException(nameof(httpContext));
            }

            var query = httpContext.Request.Query;
            var exists = query.TryGetValue("ui_locales", out StringValues culture);

            if (!exists)
            {
                exists = query.TryGetValue("returnUrl", out StringValues requesturl);
                // hack because Identityserver4 does some magic here...
                // Need to set the culture manually
                if (exists)
                {
                    var request = requesturl.ToArray()[0];
                    Uri uri = new Uri("http://faketopreventexception" + request);
                    var query1 = QueryHelpers.ParseQuery(uri.Query);
                    var requestCulture = query1.FirstOrDefault(t => t.Key == "ui_locales").Value;

                    var cultureFromReturnUrl = requestCulture.ToString();
                    if(string.IsNullOrEmpty(cultureFromReturnUrl))
                    {
                        return NullProviderCultureResult;
                    }

                    culture = cultureFromReturnUrl;
                }
            }

            var providerResultCulture = ParseDefaultParamterValue(culture);

            // Use this cookie for following requests, so that for example the logout request will work
            if (!string.IsNullOrEmpty(culture.ToString()))
            {
                var cookie = httpContext.Request.Cookies[".AspNetCore.Culture"];
                var newCookieValue = CookieRequestCultureProvider.MakeCookieValue(new RequestCulture(culture));

                if (string.IsNullOrEmpty(cookie) || cookie != newCookieValue)
                {
                    httpContext.Response.Cookies.Append(".AspNetCore.Culture", newCookieValue);
                }
            }

            return Task.FromResult(providerResultCulture);
        }

        public static ProviderCultureResult ParseDefaultParamterValue(string value)
        {
            if (string.IsNullOrWhiteSpace(value))
            {
                return null;
            }

            var cultureName = value;
            var uiCultureName = value;

            if (cultureName == null && uiCultureName == null)
            {
                // No values specified for either so no match
                return null;
            }

            if (cultureName != null && uiCultureName == null)
            {
                uiCultureName = cultureName;
            }

            if (cultureName == null && uiCultureName != null)
            {
                cultureName = uiCultureName;
            }

            return new ProviderCultureResult(cultureName, uiCultureName);
        }
    }
}

The LocalizationQueryProvider can then be added as part of the localization configuration.

services.Configure<RequestLocalizationOptions>(
options =>
{
	var supportedCultures = new List<CultureInfo>
		{
			new CultureInfo("en-US"),
			new CultureInfo("de-CH"),
			new CultureInfo("fr-CH"),
			new CultureInfo("it-CH")
		};

	options.DefaultRequestCulture = new RequestCulture(culture: "de-CH", uiCulture: "de-CH");
	options.SupportedCultures = supportedCultures;
	options.SupportedUICultures = supportedCultures;

	var providerQuery = new LocalizationQueryProvider
	{
		QureyParamterName = "ui_locales"
	};

	options.RequestCultureProviders.Insert(0, providerQuery);
});

The client application can add the ui_locales parameter to the authorize request.

let culture = 'de-CH';
if (this.locale.getCurrentCountry()) {
   culture = this.locale.getCurrentLanguage() + '-' + this.locale.getCurrentCountry();
}

this.oidcSecurityService.setCustomRequestParameters({ 'ui_locales': culture});

this.oidcSecurityService.authorize();

The localization will now be sent from the client application to the server.

https://localhost:44318/account/login?returnUrl=%2Fconnect%2Fauthorize? …ui_locales%3Dfr-CH


Links:

https://damienbod.com/2017/11/01/shared-localization-in-asp-net-core-mvc/

https://github.com/IdentityServer/IdentityServer4

https://docs.microsoft.com/en-us/aspnet/core/fundamentals/localization

https://github.com/robisim74/angular-l10n

https://damienbod.com/2017/11/06/identityserver4-localization-with-the-oidc-implicit-flow/

http://openid.net/specs/openid-connect-core-1_0.html


Damien Bowden: IdentityServer4 Localization with the OIDC Implicit Flow

This post shows how to implement localization in IdentityServer4 when using the Implicit Flow with an Angular client.

Code: https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow

The problem

When the OIDC implicit client calls the /connect/authorize endpoint to authenticate and authorize the client and the identity, the user is redirected to the AccountController login method provided by the IdentityServer4 package. If the culture and the ui-culture are set using the query string or the default localization filter, they get ignored in the host. By using a localization cookie, which is set from the client SPA application, it is possible to use this culture in IdentityServer4 and its host.

Part 2 IdentityServer4 Localization using ui_locales and the query string

IdentityServer 4 Localization

The ASP.NET Core localization is configured in the startup method of the IdentityServer4 host. The localization service, the resource paths and the RequestCultureProviders are configured here. A custom LocalizationCookieProvider is added to handle the localization cookie. The MVC middleware is then configured to use the localization.

public void ConfigureServices(IServiceCollection services)
{
	...

	services.AddSingleton<LocService>();
	services.AddLocalization(options => options.ResourcesPath = "Resources");

	services.AddAuthentication();

	services.AddIdentity<ApplicationUser, IdentityRole>()
	.AddEntityFrameworkStores<ApplicationDbContext>();

	services.Configure<RequestLocalizationOptions>(
		options =>
		{
			var supportedCultures = new List<CultureInfo>
				{
					new CultureInfo("en-US"),
					new CultureInfo("de-CH"),
					new CultureInfo("fr-CH"),
					new CultureInfo("it-CH")
				};

			options.DefaultRequestCulture = new RequestCulture(culture: "de-CH", uiCulture: "de-CH");
			options.SupportedCultures = supportedCultures;
			options.SupportedUICultures = supportedCultures;

			options.RequestCultureProviders.Clear();
			var provider = new LocalizationCookieProvider
			{
				CookieName = "defaultLocale"
			};
			options.RequestCultureProviders.Insert(0, provider);
		});

	services.AddMvc()
	 .AddViewLocalization()
	 .AddDataAnnotationsLocalization(options =>
	 {
		 options.DataAnnotationLocalizerProvider = (type, factory) =>
		 {
			 var assemblyName = new AssemblyName(typeof(SharedResource).GetTypeInfo().Assembly.FullName);
			 return factory.Create("SharedResource", assemblyName.Name);
		 };
	 });

	...

	services.AddIdentityServer()
		.AddSigningCredential(cert)
		.AddInMemoryIdentityResources(Config.GetIdentityResources())
		.AddInMemoryApiResources(Config.GetApiResources())
		.AddInMemoryClients(Config.GetClients())
		.AddAspNetIdentity<ApplicationUser>()
		.AddProfileService<IdentityWithAdditionalClaimsProfileService>();
}

The localization is added to the pipe in the Configure method.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	...

	var locOptions = app.ApplicationServices.GetService<IOptions<RequestLocalizationOptions>>();
	app.UseRequestLocalization(locOptions.Value);

	app.UseStaticFiles();

	app.UseIdentityServer();

	app.UseMvc(routes =>
	{
		routes.MapRoute(
			name: "default",
			template: "{controller=Home}/{action=Index}/{id?}");
	});
}

The LocalizationCookieProvider class implements the RequestCultureProvider to handle the localization sent from the Angular client as a cookie. The class uses the defaultLocale cookie to set the culture. This was configured in the startup class previously.

using Microsoft.AspNetCore.Localization;
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

namespace IdentityServerWithIdentitySQLite
{
    public class LocalizationCookieProvider : RequestCultureProvider
    {
        public static readonly string DefaultCookieName = ".AspNetCore.Culture";

        public string CookieName { get; set; } = DefaultCookieName;

        /// <inheritdoc />
        public override Task<ProviderCultureResult> DetermineProviderCultureResult(HttpContext httpContext)
        {
            if (httpContext == null)
            {
                throw new ArgumentNullException(nameof(httpContext));
            }

            var cookie = httpContext.Request.Cookies[CookieName];

            if (string.IsNullOrEmpty(cookie))
            {
                return NullProviderCultureResult;
            }

            var providerResultCulture = ParseCookieValue(cookie);

            return Task.FromResult(providerResultCulture);
        }

        public static ProviderCultureResult ParseCookieValue(string value)
        {
            if (string.IsNullOrWhiteSpace(value))
            {
                return null;
            }

            var cultureName = value;
            var uiCultureName = value;

            if (cultureName == null && uiCultureName == null)
            {
                // No values specified for either so no match
                return null;
            }

            if (cultureName != null && uiCultureName == null)
            {
                uiCultureName = cultureName;
            }

            if (cultureName == null && uiCultureName != null)
            {
                cultureName = uiCultureName;
            }

            return new ProviderCultureResult(cultureName, uiCultureName);
        }
    }
}

The Account login view uses the localization to translate the different texts into one of the supported cultures.

@using System.Globalization
@using IdentityServerWithAspNetIdentity.Resources
@model IdentityServer4.Quickstart.UI.Models.LoginViewModel
@inject SignInManager<ApplicationUser> SignInManager

@inject LocService SharedLocalizer

@{
    ViewData["Title"] = @SharedLocalizer.GetLocalizedHtmlString("login");
}

<h2>@ViewData["Title"]</h2>
<div class="row">
    <div class="col-md-8">
        <section>
            <form asp-controller="Account" asp-action="Login" asp-route-returnurl="@Model.ReturnUrl" method="post" class="form-horizontal">
                <h4>@CultureInfo.CurrentCulture</h4>
                <hr />
                <div asp-validation-summary="All" class="text-danger"></div>
                <div class="form-group">
                    <label class="col-md-4 control-label">@SharedLocalizer.GetLocalizedHtmlString("email")</label>
                    <div class="col-md-8">
                        <input asp-for="Email" class="form-control" />
                        <span asp-validation-for="Email" class="text-danger"></span>
                    </div>
                </div>
                <div class="form-group">
                    <label class="col-md-4 control-label">@SharedLocalizer.GetLocalizedHtmlString("password")</label>
                    <div class="col-md-8">
                        <input asp-for="Password" class="form-control" type="password" />
                        <span asp-validation-for="Password" class="text-danger"></span>
                    </div>
                </div>
                <div class="form-group">
                    <label class="col-md-4 control-label">@SharedLocalizer.GetLocalizedHtmlString("rememberMe")</label>
                    <div class="checkbox col-md-8">
                        <input asp-for="RememberLogin" />
                    </div>
                </div>
                <div class="form-group">
                    <div class="col-md-offset-4 col-md-8">
                        <button type="submit" class="btn btn-default">@SharedLocalizer.GetLocalizedHtmlString("login")</button>
                    </div>
                </div>
                <p>
                    <a asp-action="Register" asp-route-returnurl="@Model.ReturnUrl">@SharedLocalizer.GetLocalizedHtmlString("registerAsNewUser")</a>
                </p>
                <p>
                    <a asp-action="ForgotPassword">@SharedLocalizer.GetLocalizedHtmlString("forgotYourPassword")</a>
                </p>
            </form>
        </section>
    </div>
</div>

@section Scripts {
    @{ await Html.RenderPartialAsync("_ValidationScriptsPartial"); }
}

The LocService uses the IStringLocalizerFactory interface to configure a shared resource for the resources.

using Microsoft.Extensions.Localization;
using System.Reflection;

namespace IdentityServerWithAspNetIdentity.Resources
{
    public class LocService
    {
        private readonly IStringLocalizer _localizer;

        public LocService(IStringLocalizerFactory factory)
        {
            var type = typeof(SharedResource);
            var assemblyName = new AssemblyName(type.GetTypeInfo().Assembly.FullName);
            _localizer = factory.Create("SharedResource", assemblyName.Name);
        }

        public LocalizedString GetLocalizedHtmlString(string key)
        {
            return _localizer[key];
        }
    }
}

Client Localization

The Angular SPA client uses angular-l10n to localize the application.

 "dependencies": {
    "angular-l10n": "^4.0.0",

angular-l10n is configured in the app module to save the current culture in a cookie called defaultLocale. This cookie name matches what was configured on the server.

...

import { L10nConfig, L10nLoader, TranslationModule, StorageStrategy, ProviderType } from 'angular-l10n';

const l10nConfig: L10nConfig = {
    locale: {
        languages: [
            { code: 'en', dir: 'ltr' },
            { code: 'it', dir: 'ltr' },
            { code: 'fr', dir: 'ltr' },
            { code: 'de', dir: 'ltr' }
        ],
        language: 'en',
        storage: StorageStrategy.Cookie
    },
    translation: {
        providers: [
            { type: ProviderType.Static, prefix: './i18n/locale-' }
        ],
        caching: true,
        missingValue: 'No key'
    }
};

@NgModule({
    imports: [
        BrowserModule,
        FormsModule,
        routing,
        HttpClientModule,
        TranslationModule.forRoot(l10nConfig),
		DataEventRecordsModule,
        AuthModule.forRoot(),
    ],
    declarations: [
        AppComponent,
        ForbiddenComponent,
        HomeComponent,
        UnauthorizedComponent,
        SecureFilesComponent
    ],
    providers: [
        OidcSecurityService,
        SecureFileService,
        Configuration
    ],
    bootstrap:    [AppComponent],
})

export class AppModule {

    clientConfiguration: any;

    constructor(
        public oidcSecurityService: OidcSecurityService,
        private http: HttpClient,
        configuration: Configuration,
        public l10nLoader: L10nLoader
    ) {
        this.l10nLoader.load();

        console.log('APP STARTING');
        this.configClient().subscribe((config: any) => {
            this.clientConfiguration = config;

            let openIDImplicitFlowConfiguration = new OpenIDImplicitFlowConfiguration();
            openIDImplicitFlowConfiguration.stsServer = this.clientConfiguration.stsServer;
            openIDImplicitFlowConfiguration.redirect_url = this.clientConfiguration.redirect_url;
            // The Client MUST validate that the aud (audience) Claim contains its client_id value registered at the Issuer identified by the iss (issuer) Claim as an audience.
            // The ID Token MUST be rejected if the ID Token does not list the Client as a valid audience, or if it contains additional audiences not trusted by the Client.
            openIDImplicitFlowConfiguration.client_id = this.clientConfiguration.client_id;
            openIDImplicitFlowConfiguration.response_type = this.clientConfiguration.response_type;
            openIDImplicitFlowConfiguration.scope = this.clientConfiguration.scope;
            openIDImplicitFlowConfiguration.post_logout_redirect_uri = this.clientConfiguration.post_logout_redirect_uri;
            openIDImplicitFlowConfiguration.start_checksession = this.clientConfiguration.start_checksession;
            openIDImplicitFlowConfiguration.silent_renew = this.clientConfiguration.silent_renew;
            openIDImplicitFlowConfiguration.post_login_route = this.clientConfiguration.startup_route;
            // HTTP 403
            openIDImplicitFlowConfiguration.forbidden_route = this.clientConfiguration.forbidden_route;
            // HTTP 401
            openIDImplicitFlowConfiguration.unauthorized_route = this.clientConfiguration.unauthorized_route;
            openIDImplicitFlowConfiguration.log_console_warning_active = this.clientConfiguration.log_console_warning_active;
            openIDImplicitFlowConfiguration.log_console_debug_active = this.clientConfiguration.log_console_debug_active;
            // id_token C8: The iat Claim can be used to reject tokens that were issued too far away from the current time,
            // limiting the amount of time that nonces need to be stored to prevent attacks.The acceptable range is Client specific.
            openIDImplicitFlowConfiguration.max_id_token_iat_offset_allowed_in_seconds = this.clientConfiguration.max_id_token_iat_offset_allowed_in_seconds;

            configuration.FileServer = this.clientConfiguration.apiFileServer;
            configuration.Server = this.clientConfiguration.apiServer;

            this.oidcSecurityService.setupModule(openIDImplicitFlowConfiguration);

            // if you need custom parameters
            // this.oidcSecurityService.setCustomRequestParameters({ 'culture': 'fr-CH', 'ui-culture': 'fr-CH', 'ui_locales': 'fr-CH' });
        });
    }

    configClient() {

        console.log('window.location', window.location);
        console.log('window.location.href', window.location.href);
        console.log('window.location.origin', window.location.origin);
        console.log(`${window.location.origin}/api/ClientAppSettings`);

        return this.http.get(`${window.location.origin}/api/ClientAppSettings`);
    }
}

When the applications are started, the user can select a culture and log in.

And the login view is localized correctly in de-CH

Or in French, if the culture is fr-CH

Links:

https://damienbod.com/2017/11/11/identityserver4-localization-using-ui_locales-and-the-query-string/

https://damienbod.com/2017/11/01/shared-localization-in-asp-net-core-mvc/

https://github.com/IdentityServer/IdentityServer4

https://docs.microsoft.com/en-us/aspnet/core/fundamentals/localization

https://github.com/robisim74/angular-l10n


Anuraj Parameswaran: Getting started with OData in ASP.NET Core

This post is about getting started with OData in ASP.NET Core. OData (Open Data Protocol) is an ISO/IEC approved, OASIS standard that defines a set of best practices for building and consuming RESTful APIs. OData helps you focus on your business logic while building RESTful APIs without having to worry about the various approaches to define request and response headers, status codes, HTTP methods, URL conventions, media types, payload formats, query options, etc. OData also provides guidance for tracking changes, defining functions/actions for reusable procedures, and sending asynchronous/batch requests.


Damien Bowden: Shared Localization in ASP.NET Core MVC

This article shows how ASP.NET Core MVC razor views and view models can use localized strings from a shared resource. This saves you creating many different files and duplicating translations for the different views and models. It makes it much easier to manage your translations, and also reduces the effort required to export and import the translations.

Code: https://github.com/damienbod/AspNetCoreMvcSharedLocalization

A default ASP.NET Core MVC application with Individual user accounts authentication is used to create the application.

A LocService class is used, which takes the IStringLocalizerFactory interface as a dependency using constructor injection. The factory is then used to create an IStringLocalizer instance using the type from the SharedResource class.

using Microsoft.Extensions.Localization;
using System.Reflection;

namespace AspNetCoreMvcSharedLocalization.Resources
{
    public class LocService
    {
        private readonly IStringLocalizer _localizer;

        public LocService(IStringLocalizerFactory factory)
        {
            var type = typeof(SharedResource);
            var assemblyName = new AssemblyName(type.GetTypeInfo().Assembly.FullName);
            _localizer = factory.Create("SharedResource", assemblyName.Name);
        }

        public LocalizedString GetLocalizedHtmlString(string key)
        {
            return _localizer[key];
        }
    }
}

The dummy SharedResource is required to create the IStringLocalizer instance using the type from the class.

namespace AspNetCoreMvcSharedLocalization.Resources
{
    /// <summary>
    /// Dummy class to group shared resources
    /// </summary>
    public class SharedResource
    {
    }
}

The resx resource files are added with a name which matches the IStringLocalizer definition. This example uses SharedResource.de-CH.resx and the other localizations as required. One of the biggest problems with ASP.NET Core localization is that if the name of the resx file does not match the name/type of the class or view using the resource, it will not be found and the string will not be localized; the default string is used instead, which is just the name of the resource key. This is also a problem because we program in English, but the default language is German or French. Some programmers don't understand German, and it is bad to have German strings throughout an English code base.

The localization setup is then added to the startup class. This application uses de-CH, it-CH, fr-CH and en-US. The QueryStringRequestCultureProvider is used to set the request localization.

public void ConfigureServices(IServiceCollection services)
{
	...

	services.AddSingleton<LocService>();
	services.AddLocalization(options => options.ResourcesPath = "Resources");

	services.AddMvc()
		.AddViewLocalization()
		.AddDataAnnotationsLocalization(options =>
		{
			options.DataAnnotationLocalizerProvider = (type, factory) =>
			{
				var assemblyName = new AssemblyName(typeof(SharedResource).GetTypeInfo().Assembly.FullName);
				return factory.Create("SharedResource", assemblyName.Name);
			};
		});

	services.Configure<RequestLocalizationOptions>(
		options =>
		{
			var supportedCultures = new List<CultureInfo>
				{
					new CultureInfo("en-US"),
					new CultureInfo("de-CH"),
					new CultureInfo("fr-CH"),
					new CultureInfo("it-CH")
				};

			options.DefaultRequestCulture = new RequestCulture(culture: "de-CH", uiCulture: "de-CH");
			options.SupportedCultures = supportedCultures;
			options.SupportedUICultures = supportedCultures;

			options.RequestCultureProviders.Insert(0, new QueryStringRequestCultureProvider());
		});

	services.AddMvc();
}

The localization is then added as a middleware.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
	...

	var locOptions = app.ApplicationServices.GetService<IOptions<RequestLocalizationOptions>>();
	app.UseRequestLocalization(locOptions.Value);

	app.UseStaticFiles();

	app.UseAuthentication();

	app.UseMvc(routes =>
	{
		routes.MapRoute(
			name: "default",
			template: "{controller=Home}/{action=Index}/{id?}");
	});
}

Razor Views

The razor views use the shared resource localization by injecting the LocService. This was registered in the IoC in the startup class. The localized strings can then be used as required.

@model RegisterViewModel
@using AspNetCoreMvcSharedLocalization.Resources

@inject LocService SharedLocalizer

@{
    ViewData["Title"] = @SharedLocalizer.GetLocalizedHtmlString("register");
}
<h2>@ViewData["Title"]</h2>
<form asp-controller="Account" asp-action="Register" asp-route-returnurl="@ViewData["ReturnUrl"]" method="post" class="form-horizontal">
    <h4>@SharedLocalizer.GetLocalizedHtmlString("createNewAccount")</h4>
    <hr />
    <div asp-validation-summary="All" class="text-danger"></div>
    <div class="form-group">
        <label class="col-md-2 control-label">@SharedLocalizer.GetLocalizedHtmlString("email")</label>
        <div class="col-md-10">
            <input asp-for="Email" class="form-control" />
            <span asp-validation-for="Email" class="text-danger"></span>
        </div>
    </div>
    <div class="form-group">
        <label class="col-md-2 control-label">@SharedLocalizer.GetLocalizedHtmlString("password")</label>
        <div class="col-md-10">
            <input asp-for="Password" class="form-control" />
            <span asp-validation-for="Password" class="text-danger"></span>
        </div>
    </div>
    <div class="form-group">
        <label class="col-md-2 control-label">@SharedLocalizer.GetLocalizedHtmlString("confirmPassword")</label>
        <div class="col-md-10">
            <input asp-for="ConfirmPassword" class="form-control" />
            <span asp-validation-for="ConfirmPassword" class="text-danger"></span>
        </div>
    </div>
    <div class="form-group">
        <div class="col-md-offset-2 col-md-10">
            <button type="submit" class="btn btn-default">@SharedLocalizer.GetLocalizedHtmlString("register")</button>
        </div>
    </div>
</form>
@section Scripts {
    @{ await Html.RenderPartialAsync("_ValidationScriptsPartial"); }
}
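
The same LocService can also be injected into a controller if you need localized strings outside of a view. The following is a hedged sketch (the controller and the ViewData key are hypothetical, not part of the sample project); the "email" resource key is one of the keys used in the views above:

using AspNetCoreMvcSharedLocalization.Resources;
using Microsoft.AspNetCore.Mvc;

public class HomeController : Controller
{
    private readonly LocService _loc;

    public HomeController(LocService loc)
    {
        _loc = loc;
    }

    public IActionResult Index()
    {
        // Resolve a shared resource string by key in the current request culture
        ViewData["EmailLabel"] = _loc.GetLocalizedHtmlString("email");
        return View();
    }
}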

View Model

The model's validation messages are also localized. The ErrorMessage values of the attributes are used as the keys for the localized strings.

using System.ComponentModel.DataAnnotations;

namespace AspNetCoreMvcSharedLocalization.Models.AccountViewModels
{
    public class RegisterViewModel
    {
        [Required(ErrorMessage = "emailRequired")]
        [EmailAddress]
        [Display(Name = "Email")]
        public string Email { get; set; }

        [Required(ErrorMessage = "passwordRequired")]
        [StringLength(100, ErrorMessage = "passwordStringLength", MinimumLength = 8)]
        [DataType(DataType.Password)]
        [Display(Name = "Password")]
        public string Password { get; set; }

        [DataType(DataType.Password)]
        [Display(Name = "Confirm password")]
        [Compare("Password", ErrorMessage = "confirmPasswordNotMatching")]
        public string ConfirmPassword { get; set; }
    }
}

The AddDataAnnotationsLocalization DataAnnotationLocalizerProvider is set up to always use the SharedResource resx files for all of the models. This prevents duplicating the localizations for each of the different models.

.AddDataAnnotationsLocalization(options =>
{
	options.DataAnnotationLocalizerProvider = (type, factory) =>
	{
		var assemblyName = new AssemblyName(typeof(SharedResource).GetTypeInfo().Assembly.FullName);
		return factory.Create("SharedResource", assemblyName.Name);
	};
});

The localization can be tested using the following requests:

https://localhost:44371/Account/Register?culture=de-CH&ui-culture=de-CH
https://localhost:44371/Account/Register?culture=it-CH&ui-culture=it-CH
https://localhost:44371/Account/Register?culture=fr-CH&ui-culture=fr-CH
https://localhost:44371/Account/Register?culture=en-US&ui-culture=en-US

The QueryStringRequestCultureProvider reads the culture and the ui-culture from the parameters. You could also use headers or cookies to send the required localization in the request, but this needs to be configured in the Startup class.
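
As a hedged sketch of that alternative (not from the original post), the built-in CookieRequestCultureProvider and AcceptLanguageHeaderRequestCultureProvider could be registered in ConfigureServices instead of, or alongside, the query string provider:

services.Configure<RequestLocalizationOptions>(options =>
{
    // ... DefaultRequestCulture and supported cultures as shown above ...

    // Read the culture from the ASP.NET Core culture cookie first,
    // then fall back to the browser's Accept-Language header.
    options.RequestCultureProviders.Insert(0, new CookieRequestCultureProvider());
    options.RequestCultureProviders.Insert(1, new AcceptLanguageHeaderRequestCultureProvider());
});

Note that the default RequestLocalizationOptions already contains these providers; the sketch only makes the ordering explicit.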

Links:

https://docs.microsoft.com/en-us/aspnet/core/fundamentals/localization


Dominick Baier: Using iOS11 SFAuthenticationSession with IdentityModel.OidcClient

Starting with iOS 11, there’s a special system service for browser-based authentication called SFAuthenticationSession. This is the recommended approach for OpenID Connect and OAuth 2 native iOS clients (see RFC8252).

If you are using our OidcClient library – this is how you would wrap that in an IBrowser:

using Foundation;
using System.Threading.Tasks;
using IdentityModel.OidcClient.Browser;
using SafariServices;
 
namespace iOS11Client
{
    public class SystemBrowser : IBrowser
    {
        SFAuthenticationSession _sf;
 
        public Task<BrowserResult> InvokeAsync(BrowserOptions options)
        {
            var wait = new TaskCompletionSource<BrowserResult>();
 
            _sf = new SFAuthenticationSession(
                new NSUrl(options.StartUrl),
                options.EndUrl,
                (callbackUrl, error) =>
                {
                    if (error != null)
                    {
                        var errorResult = new BrowserResult
                        {
                            ResultType = BrowserResultType.UserCancel,
                            Error = error.ToString()
                        };
 
                        wait.SetResult(errorResult);
                    }
                    else
                    {
                        var result = new BrowserResult
                        {
                            ResultType = BrowserResultType.Success,
                            Response = callbackUrl.AbsoluteString
                        };
 
                        wait.SetResult(result);
                    }
                });
 
            _sf.Start();
            return wait.Task;
        }
    }
}
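
For context, here is a rough sketch (my own, not from the post) of plugging this SystemBrowser into OidcClient. The authority, client id and redirect URI below are placeholders, and the exact LoginAsync signature may vary between OidcClient versions:

using System.Threading.Tasks;
using IdentityModel.OidcClient;

public class LoginService
{
    public async Task<LoginResult> LoginAsync()
    {
        var options = new OidcClientOptions
        {
            Authority = "https://demo.identityserver.io", // placeholder STS
            ClientId = "native.client",                   // placeholder client id
            Scope = "openid profile",
            RedirectUri = "io.myapp:/callback",           // placeholder custom-scheme redirect
            Browser = new SystemBrowser()                 // the SFAuthenticationSession wrapper above
        };

        var client = new OidcClient(options);
        return await client.LoginAsync();                 // drives SFAuthenticationSession via IBrowser
    }
}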


Dominick Baier: Templates for IdentityServer4 v2

I finally found the time to update the templates for IdentityServer4 to version 2. You can find the source code and instructions here.

To be honest, I didn’t have time to research more advanced features like post-actions (I wanted to do an automatic restore, but it didn’t work for me) and VSIX for Visual Studio integration. If anyone has experience in this area, feel free to contact me on GitHub.

Also – more advanced templates are coming soon (e.g. ASP.NET Identity, EF etc…)


Pedro Félix: RFC 8252 and OAuth 2.0 for Native Apps

Introduction

RFC 8252 – OAuth 2.0 for Native Apps, published this month by IETF as a Best Current Practice, contains much needed guidance on how to use the OAuth 2.0 framework in native applications.
In this post I present a brief summary of the defined best practices.

OAuth for native applications

When the OAuth 1.0 protocol was introduced 10 years ago, its main goal was delegated authorization for client applications accessing services on behalf of users.
It was focused on the model “du jour”, where client applications were mostly server-side rendered Web sites, interacting with end users via browsers.
For instance, it was assumed that client applications were capable of holding long-term secrets, which is rather easy for servers but not for browser-side applications or native applications running on the user’s device.

OAuth 2.0, published in 2012, introduced a framework with multiple different flows, including support for public clients, that is, clients that don’t need to hold secrets.
However, it was still pretty much focused on classical Web sites, and using this framework in the context of native applications was mostly left as an exercise for the reader.
Some of the questions that didn’t have a clear or straightforward answer were:

  • What is the adequate flow for a native application?
  • Should a native application be considered a confidential client or a public client?
  • Assuming an authorization code flow or implicit flow, how should the authorization request be performed: via an embedded web view or via the system browser?
  • How is the authorization response redirected back into the client application, since it isn’t a server any more? Via listening on a loopback port or using platform specific mechanisms (e.g. Android intents and custom URI schemes)?
  • What’s the proper way for avoiding code or token leakage into malicious applications also installed in the user’s device?

The first major guidance to these questions came with RFC 7636 – Proof Key for Code Exchange by OAuth Public Clients, published in 2015.
This document defines a way to use the authorization code flow with public clients, i.e. adequate to native applications, protected against the interception of the authorization code by another application (e.g. malicious applications installed in the same user device).
The problem that it addresses as well as the proposed solutions are described on a previous post: OAuth 2.0 and PKCE.

The recently published RFC 8252 – OAuth 2.0 for Native Apps (October 2017) builds upon RFC 7636 and defines a set of best practices for when using OAuth 2.0 on native applications, with emphasis on the user-agent integration aspects.

In summary, it defines the following best practices:

  • A native client application must be a public client, except if using dynamic client registration (RFC7591) to provision per-device unique clients, where each application installation has its own set of secret credentials – section 8.4.

  • The client application should use the authorization code grant flow with PKCE (RFC 7636 – Proof Key for Code Exchange by OAuth Public Clients), instead of the implicit flow, namely because the latter does not support the protection provided by PKCE – section 8.2.

  • The application should use an external user-agent, such as the system browser, instead of an embedded user-agent such as a web view – section 4.

    • An application using a web view can control everything that happens inside it, namely access the user’s credentials when they are inserted on it.
      Using an external user-agent isolates the user credentials from the client application, which is one of the OAuth 2.0 original goals.

    • Using the system-browser can also provide a kind of Single Sign-On – users delegating access to multiple applications using the same authorization server (or delegated identity provider) only have to authenticate once because the session artifacts (e.g. cookies) will still be available.

    • To avoid switching out of the application into the external user-agent, which may not provide a good user experience, some platforms support “in-app browser tabs” where the user agent seems to be embedded into the application, while supporting full data isolation – iOS SFAuthenticationSession or Android’s Chrome Custom Tabs.

  • The authorization request should use the chosen user-agent mechanism, providing it with the URI for the authorization endpoint with the authorization request embedded in it.

  • The redirect back to the application can use one of multiple techniques.

    • Use a redirect endpoint (e.g. com.example.myapp:/oauth2/redirect) with a private scheme (e.g. com.example.myapp) that points to the application.
      Android’s implicit intents are an example of a mechanism allowing this.
      When using this technique, the custom URI scheme must be the reversal of a domain name under the application’s control (e.g. com.example.myapp if the myapp.example.com name is controlled by the application’s author) – section 7.1.

    • Another option is to use a claimed HTTPS redirect URI, which is a feature provided by some platforms (e.g. Android’s App Links) where a request to a claimed URI triggers a call into the application instead of a regular HTTP request. This is considered to be the preferred method – section 7.2.

    • As a final option, the redirect can be performed by having the application listening on the loopback interface (127.0.0.1 or ::1).

To illustrate these best practices, the following diagram represents an end-to-end OAuth 2.0 flow on a native application

  • On step 1, the application invokes the external user-agent, using a platform specific mechanism, and passing in the authorization URI with the embedded authorization request.

  • As a consequence, on step 2, the external user-agent is activated and does a HTTP request to the authorization endpoint using the provided URI.

  • The response to step 2 depends on the concrete authorization endpoint and is not defined by the OAuth specifications.
    A common pattern is for the response to be a redirect to a login page, followed by a consent page.

  • On step 3, after ending the direct user interaction, the authorization endpoint produces a HTTP response with the authorization response embedded inside (e.g. 302 status code with the authorization response URI in the Location header).

  • On step 4, the user-agent reacts to this response by processing the redirect URI.
    If using a private scheme or a claimed redirect URI, the user-agent uses a platform-specific inter-process communication mechanism to deliver the authorization response to the application (e.g. Android’s intents).
    If using localhost as the redirect URI host, the user-agent makes a regular HTTP request to the loopback interface, on which the application is listening, thereby providing the authorization response to it.

  • On steps 5 and 6, the application exchanges the authorization code for the access and refresh tokens, using a straightforward token request.
    This interaction is done directly between the client and the token endpoint, without going through the user-agent, since no user interaction is needed (back-channel vs. front-channel).
    Since the client is public, this interaction is not authenticated (i.e. does not include any client application credentials).
    Due to this anonymous characteristic, and to protect against authorization code hijacking, the code_verifier parameter from the PKCE extension must be added to the token request (see the token request sketch after this list).

  • Finally, on step 7, the application can use the resource server on the user’s behalf, by adding the access token to the Authorization header.
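
As a minimal sketch of steps 5 and 6 (the endpoint URI, client_id and redirect URI below are placeholders, not values from any specific provider), the token request of a public client using C# and HttpClient could look like this:

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class TokenClientSketch
{
    // Exchanges the authorization code for tokens (steps 5 and 6).
    // A public client sends no client secret; the PKCE code_verifier
    // proves it is the same client instance that started the flow.
    public static async Task<string> ExchangeCodeAsync(
        HttpClient http, string code, string codeVerifier)
    {
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["grant_type"] = "authorization_code",
            ["code"] = code,
            ["redirect_uri"] = "com.example.myapp:/oauth2/redirect",
            ["client_id"] = "native-client",
            ["code_verifier"] = codeVerifier
        });

        var response = await http.PostAsync("https://auth.example.com/connect/token", form);
        response.EnsureSuccessStatusCode();

        // The JSON body contains the access token and, typically, a refresh token.
        return await response.Content.ReadAsStringAsync();
    }
}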

The AppAuth libraries for iOS and Android already follow these best practices.

I hope this brief summary helps.
As always, questions, comments, and suggestions are highly appreciated.



Damien Bowden: Implementing custom policies in ASP.NET Core using the HttpContext

This article shows how to implement a custom ASP.NET Core policy using the AuthorizationHandler class. The handler validates that the identity from the HttpContext is authorized to update the object in the database.

Code: https://github.com/damienbod/AspNetCoreAngularSignalRSecurity

Scenario

In the example, each admin user of the client application can create DataEventRecord entities, which can only be accessed by the corresponding identity. If a different user with a different identity sends a PUT request to update the object, a 401 response is returned. Because the Username from the identity is saved in the database for each entity, the custom policy can validate the identity against the entity to be updated.

Creating the Requirement for the Policy

A simple requirement is created for the policy implementation. The AuthorizationHandler implementation requires this, and the requirement class is also used to add the policy to the application.

using Microsoft.AspNetCore.Authorization;

namespace ApiServer.Policies
{
    public class CorrectUserRequirement : IAuthorizationRequirement
    {
    }
}

Creating the custom Handler for the Policy

When a method protected by the CorrectUserHandler is called, HandleRequirementAsync is executed. In this method, the id of the object to be updated is extracted from the URL path. The id is then used to select the Username from the database, which is compared to the identity name from the HttpContext. If the values are not equal, the requirement is not marked as succeeded and the request is rejected.

using ApiServer.Repositories;
using Microsoft.AspNetCore.Authorization;
using System;
using System.Threading.Tasks;

namespace ApiServer.Policies 
{
    public class CorrectUserHandler : AuthorizationHandler<CorrectUserRequirement>
    {
        private readonly IDataEventRecordRepository _dataEventRecordRepository;

        public CorrectUserHandler(IDataEventRecordRepository dataEventRecordRepository)
        {
            _dataEventRecordRepository = dataEventRecordRepository;
        }

        protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, CorrectUserRequirement requirement)
        {
            if (context == null)
                throw new ArgumentNullException(nameof(context));
            if (requirement == null)
                throw new ArgumentNullException(nameof(requirement));

            // The policy is evaluated by MVC, so the resource is the
            // AuthorizationFilterContext, which exposes the HttpContext.
            var authFilterCtx = (Microsoft.AspNetCore.Mvc.Filters.AuthorizationFilterContext)context.Resource;
            var httpContext = authFilterCtx.HttpContext;

            // Extract the entity id from the last segment of the URL path.
            var pathData = httpContext.Request.Path.Value.Split("/");
            long id = long.Parse(pathData[pathData.Length - 1]);

            // Succeed only if the entity belongs to the authenticated user.
            var username = _dataEventRecordRepository.GetUsername(id);
            if (username == httpContext.User.Identity.Name)
            {
                context.Succeed(requirement);
            }

            return Task.CompletedTask;
        }
    }
}

Adding the policy to the application

The custom policy is added to the ASP.NET Core application in the AddAuthorization extension method, using the requirement.

services.AddAuthorization(options =>
{
	...
	options.AddPolicy("correctUser", policyCorrectUser =>
	{
		policyCorrectUser.Requirements.Add(new CorrectUserRequirement());
	});
});
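
For the handler to actually run, it also needs to be registered with the dependency injection container as an IAuthorizationHandler; a minimal registration (the scoped lifetime here is just one reasonable choice, matching the scoped repository dependency) looks like this:

services.AddScoped<IAuthorizationHandler, CorrectUserHandler>();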

Using the policy in the ASP.NET Core controller

The PUT method uses the correctUser policy to authorize the request.

[Authorize("dataEventRecordsAdmin")]
[Authorize("correctUser")]
[HttpPut("{id}")]
public IActionResult Put(long id, [FromBody]DataEventRecordDto dataEventRecordDto)
{
	_dataEventRecordRepository.Put(id, dataEventRecordDto);
	return NoContent();
}

If a user logs in and tries to update an entity belonging to a different user, the request is rejected.

Links:

https://docs.microsoft.com/en-us/aspnet/core/security/authorization/policies


Anuraj Parameswaran: Dockerize an existing ASP.NET MVC 5 application

This post describes the process of migrating an existing ASP.NET MVC 5 or ASP.NET Web Forms application to Windows Containers. Running an existing .NET Framework-based application in a Windows container doesn’t require any changes to your app. To run your app in a Windows container, you create a Docker image containing your app and start the container.
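
As a rough sketch of what that can look like (the base image tag, folder layout and commands below are illustrative assumptions, not taken from the original post), a Dockerfile for a classic ASP.NET app copies the published output into the IIS site of a Windows .NET Framework base image:

FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8
# Copy the published ASP.NET MVC 5 output into the default IIS web site.
COPY ./publish/ /inetpub/wwwroot

Building the image with docker build -t mymvcapp . and starting it with docker run -d -p 8080:80 mymvcapp is then enough to have the unchanged application running in a Windows container.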


Dominick Baier: SAML2p Identity Provider Support for IdentityServer4

One very common feature request is support for acting as a SAML2p identity provider.

This is not a trivial task, but our friends at Rock Solid Knowledge have been working hard and have now published a beta version. Give it a try!

 

