Anuraj Parameswaran: Getting started with Blazor

This post is about how to get started with Blazor. Blazor is an experimental .NET web framework using C#/Razor and HTML that runs in the browser with WebAssembly. Blazor enables full stack web development with the stability, consistency, and productivity of .NET. While this release is alpha quality and should not be used in production, the code for this release was written from the ground up with an eye towards building a production quality web UI framework.


Anuraj Parameswaran: Dockerize an ASP.NET MVC 5 Angular application with Docker for Windows

A few days ago I wrote a post about working with Angular 4 in ASP.NET MVC. I received multiple queries on deployment aspects - how to set up the development environment, or how to deploy it to IIS, or to Azure, etc. In this post I explain how to deploy an ASP.NET MVC - Angular application to a Docker environment.


Andrew Lock: Creating a .NET Core global CLI tool for squashing images with the TinyPNG API


In this post I describe a .NET Core CLI global tool I created that can be used to compress images using the TinyPNG developer API. I'll give some background on .NET Core CLI tools, describe the changes to tooling in .NET Core 2.1, and show some of the code required to build your own global tools. You can find the code for the tool in this post at https://github.com/andrewlock/dotnet-tinify.

The code for my global tool was heavily based on the dotnet-serve tool by Nate McMaster. If you're interested in global tools, I strongly suggest reading his post on them, as it provides background, instructions, and an explanation of what's happening under the hood. He's also created a CLI template you can install to get started.

.NET CLI tools prior to .NET Core 2.1

The .NET CLI (which can be used for .NET Core and ASP.NET Core development) includes the concept of "tools" that you can install into your project. This includes things like the EF Core migration tool, the user-secrets tool, and the dotnet watch tool.

Prior to .NET Core 2.1, you need to specifically install these tools in every project where you want to use them. Unfortunately, there's no tooling for doing this either in the CLI or in Visual Studio. Instead, you have to manually edit your .csproj file and add a DotNetCliToolReference:

<ItemGroup>  
    <DotNetCliToolReference Include="Microsoft.DotNet.Watcher.Tools" Version="2.0.0" />
</ItemGroup>  

The tools themselves are distributed as NuGet packages, so when you run a dotnet restore on the project, it will restore the tool at the same time.

Adding tool references like this to every project has both upsides and downsides. On the one hand, adding them to the project file means that everyone who clones your repository from source control will automatically have the correct tools installed. Unfortunately, having to manually add this line to every project means that I rarely bother installing non-essential-but-useful tools like dotnet watch anymore.

.NET Core 2.1 global tools

In .NET Core 2.1, a feature was introduced that allows you to globally install a .NET Core CLI tool. Rather than having to install the tool manually in every project, you install it once globally on your machine, and then you can run the tool from any project.

You can think of these as analogous to npm's global (-g) packages.

The intention is to expose all the first-party CLI tools (such as dotnet-user-secrets and dotnet-watch) as global tools, so you don't have to remember to explicitly install them into your projects. Obviously this has the downside that all your team have to have the same tools (and potentially the same version of the tools) installed already.

You can install a global tool using the .NET Core 2.1 SDK (preview 1). For example, to install Nate's dotnet serve tool, you just need to run:

dotnet install tool --global dotnet-serve  

You can then run dotnet serve from any folder.

In the next section I'll describe how I built my own global tool, dotnet-tinify, that uses the TinyPNG API to compress images in a folder.

Compressing images using the TinyPNG API

Images make up a huge proportion of the size of a website - a quick test on the Amazon home page shows that 94% of the page's size is due to images. That means it's important to make sure your images aren't using more data than they need to, as it will slow down your page load times.

Page load times are important when you're running an ecommerce site, but they're important everywhere else too. I'm much more likely to abandon a blog if it takes 10 seconds to load the page, than if it pops in instantly.

Before I publish images on my blog, I always make sure they're as small as they can be. That means resizing them as necessary, using the correct format (.png for charts etc., .jpeg for photos), but also squashing them further.

Different programs will save images with different quality, different algorithms, and different metadata. You can often get smaller images without a loss in quality just by stripping the metadata and using a different compression algorithm. When I was using a Mac, I typically used ImageOptim; now I typically use the TinyPNG website.


To improve my workflow, rather than manually uploading and downloading images, I decided a global tool would be perfect. I could install it once, and run dotnet tinify . to squash all the images in the current folder.

Creating a .NET Core global tool

Creating a .NET CLI global tool is easy - it's essentially just a console app with a few additions to the .csproj file. Create a .NET Core Console app, for example using dotnet new console, and update your .csproj to add the IsPackable and PackAsTool elements:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <IsPackable>true</IsPackable>
    <PackAsTool>true</PackAsTool>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>

</Project>

It's as easy as that!

You can add NuGet packages to your project, reference other projects, anything you like; it's just a .NET Core console app! In the final section of this post I'll talk briefly about the dotnet-tinify tool I created.

dotnet-tinify: a global tool for squashing images

To be honest, creating the dotnet-tinify tool really didn't take long. Most of the hard work had already been done for me; I just plugged the bits together.

TinyPNG provides a developer API you can use to access their service. It has an impressive array of client libraries to choose from (e.g. HTTP, Ruby, PHP, Node.js, Python, Java and .NET), and is even free to use for the first 500 compressions per month. To get started, head to https://tinypng.com/developers and sign up (no credit card required) to get an API key:

[Image: the TinyPNG developer API page where you sign up for an API key]

Given there's already an official client library (and it's .NET Standard 1.3 too!) I decided to just use that in dotnet-tinify. Compressing an image is essentially a 4 step process:

1. Set the API key on the static Tinify object:

Tinify.Key = apiKey;  

2. Validate the API key

await Tinify.Validate();  

3. Load a file

var source = Tinify.FromFile(file);  

4. Compress the file and save it to disk

await source.ToFile(file);  

There's loads more you can do with the API: resizing images, loading and saving to buffers, saving directly to S3. For details, take a look at the documentation.
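
Putting those four steps together, a minimal sketch of the core of the tool might look something like this. It uses the official Tinify .NET client (namespace TinifyAPI), but the class and method names are mine and error handling is omitted:

using System;
using System.IO;
using System.Threading.Tasks;
using TinifyAPI;

public class ImageSquasher
{
    public static async Task SquashFolderAsync(string apiKey, string directory)
    {
        Tinify.Key = apiKey;        // 1. set the API key
        await Tinify.Validate();    // 2. validate the API key

        foreach (var file in Directory.EnumerateFiles(directory, "*.png"))
        {
            var source = Tinify.FromFile(file);  // 3. load (upload) the file
            await source.ToFile(file);           // 4. compress and save it back to disk
            Console.WriteLine($"Squashed {file}");
        }
    }
}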

With the functionality aspect of the tool sorted, I needed a way to pass the API key and path to the files to compress to the tool. I chose to use Nate McMaster's CommandLineUtils fork, McMaster.Extensions.CommandLineUtils, which is one of many similar libraries you can use to handle command-line parsing and help message generation.

You can use either the builder API or the attribute API with the CommandLineUtils package, so pick whichever makes you happy. With a small amount of setup I was able to get easy command line parsing into strongly typed objects, along with friendly help messages on how to use the tool with the --help argument:

> dotnet tinify --help
Usage: dotnet tinify [arguments] [options]

Arguments:  
  path  Path to the file or directory to squash

Options:  
  -?|-h|--help            Show help information
  -a|--api-key <API_KEY>  Your TinyPNG API key

You must provide your TinyPNG API key to use this tool  
(see https://tinypng.com/developers for details). This
can be provided either as an argument, or by setting the  
TINYPNG_APIKEY environment variable. Only png, jpeg, and  
jpg, extensions are supported  
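
For illustration, a stripped-down version of the attribute-based setup might look something like this (the property names and messages here are hypothetical, not taken from the actual dotnet-tinify source):

using System;
using McMaster.Extensions.CommandLineUtils;

public class Program
{
    public static int Main(string[] args)
        => CommandLineApplication.Execute<Program>(args);

    [Argument(0, Description = "Path to the file or directory to squash")]
    public string Path { get; }

    [Option("-a|--api-key", Description = "Your TinyPNG API key")]
    public string ApiKey { get; }

    private void OnExecute()
    {
        // Fall back to the TINYPNG_APIKEY environment variable if no key was passed
        var apiKey = ApiKey ?? Environment.GetEnvironmentVariable("TINYPNG_APIKEY");
        Console.WriteLine($"Squashing {Path} with key {apiKey}");
    }
}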

And that's it, the tool is finished. It's very basic at the moment (no tests 😱!), but currently that's all I need. I've pushed an early package to NuGet and the code is on GitHub so feel free to comment / send issues / send PRs.

You can install the tool using

dotnet install tool --global dotnet-tinify  

You need to set your TinyPNG API key in the TINYPNG_APIKEY environment variable for your machine (e.g. by executing setx TINYPNG_APIKEY abc123 in a command prompt), or you can pass the key as an argument to the dotnet tinify command (see below).

Typical usage might be

  • dotnet tinify image.png - compress image.png in the current directory
  • dotnet tinify . - compress all the png and jpeg images in the current directory
  • dotnet tinify "C:\content" - compress all the png and jpeg images in the "C:\content" path
  • dotnet tinify image.png -a abc123 - compress image.png, providing your API key as an argument

So give it a try, and have a go at writing your own global tool, it's probably easier than you think!

Summary

In this post I described the upcoming .NET Core global tools, and how they differ from the existing .NET Core CLI tools. I then described how I created a .NET Core global tool to compress my images using the TinyPNG developer API. Creating a global tool is as easy as setting a couple of properties in your .csproj file, so I strongly suggest you give it a try. You can find the dotnet-tinify tool I created on NuGet or on GitHub. Thanks to Nate McMaster for (heavily) inspiring this post!


Damien Bowden: Comparing the HTTPS Security Headers of Swiss banks

This post compares the security HTTP Headers used by different banks in Switzerland. securityheaders.io is used to test each of the websites. The website of each bank as well as the e-banking login was tested. securityheaders.io views the headers like any browser.
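
If you want to look at the raw headers yourself, any HTTP client will show them; for example (the domain here is just a placeholder):

curl -I https://www.example.com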

The tested security headers help protect against some of the possible attacks, especially during the protected session. I would have expected all the banks to reach at least a grade of A, but was surprised to find that, even on the login pages, many websites are missing some of the basic protections.

Credit Suisse provide the best protection for the e-banking login, and Raiffeisen have the best usage of the security headers on the website. Strange that the Raiffeisen webpage is better protected than the Raiffeisen e-banking login.

Scott Helme explains each of the different headers, and why you should use them.

TEST RESULTS

Best A+, Worst F

e-banking

1. Grade A Credit Suisse
1. Grade A Basler Kantonalbank
3. Grade B Post Finance
3. Grade B Julius Bär
3. Grade B WIR Bank
3. Grade B DC Bank
3. Grade B Berner Kantonalbank
3. Grade B St. Galler Kantonalbank
3. Grade B Thurgauer Kantonalbank
3. Grade B J. Safra Sarasin
11. Grade C Raiffeisen
12. Grade D Zürcher Kantonalbank
13. Grade D UBS
14. Grade D Valiant

web

1. Grade A Raiffeisen
2. Grade A Credit Suisse
2. Grade A WIR Bank
2. Grade A J. Safra Sarasin
5. Grade A St. Galler Kantonalbank
6. Grade B Post Finance
6. Grade B Valiant
8. Grade C Julius Bär
9. Grade C Migros Bank
10. Grade D UBS
11. Grade D Zürcher Kantonalbank
12. Grade D Berner Kantonalbank
13. Grade F DC Bank
14. Grade F Thurgauer Kantonalbank
15. Grade F Basler Kantonalbank

TEST RESULTS DETAILS

UBS

https://www.ubs.com

This is one of the worst protected of all the bank e-banking logins tested. It is missing most of the security headers. The website is also missing most of the security headers.

https://ebanking-ch.ubs.com

The headers returned from the e-banking login are even worse than the D rating suggests, as the X-Frame-Options protection is also missing.

cache-control →no-store, no-cache, must-revalidate, private
connection →Keep-Alive
content-encoding →gzip
content-type →text/html;charset=UTF-8
date →Tue, 27 Mar 2018 11:46:15 GMT
expires →Thu, 1 Jan 1970 00:00:00 GMT
keep-alive →timeout=5, max=10
p3p →CP="OTI DSP CURa OUR LEG COM NAV INT"
server →Apache
strict-transport-security →max-age=31536000
transfer-encoding →chunked

No CSP is present here…

Credit Suisse

The Credit Suisse website and login are protected with most of the headers and have a good CSP. The no-referrer header is missing from the e-banking login and could be added.

https://www.credit-suisse.com/ch/en.html

CSP

default-src 'self' 'unsafe-inline' 'unsafe-eval' data: *.credit-suisse.com 
*.credit-suisse.cspta.ch *.doubleclick.net *.decibelinsight.net 
*.mookie1.com *.demdex.net *.adnxs.com *.facebook.net *.google.com 
*.google-analytics.com *.googletagmanager.com *.google.ch *.googleapis.com 
*.youtube.com *.ytimg.com *.gstatic.com *.googlevideo.com *.twitter.com 
*.twimg.com *.qq.com *.omtrdc.net *.everesttech.net *.facebook.com 
*.adobedtm.com *.ads-twitter.com t.co *.licdn.com *.linkedin.com 
*.credit-suisse.wesit.rowini.net *.zemanta.com *.inbenta.com 
*.adobetag.com sc-static.net

The CORS header is present, but it allows all origins, which is a bit lax. CORS is not really a security feature, but I think it should still be stricter.

https://direct.credit-suisse.com/dn/c/cls/auth?language=en

CSP

default-src dnmb: 'self' *.credit-suisse.com *.directnet.com *.nab.ch; 
script-src dnmb: 'self' 'unsafe-inline' 'unsafe-eval' *.credit-suisse.com 
*.directnet.com *.nab.ch ; style-src 'self' 'unsafe-inline' *.credit-suisse.com *.directnet.com *.nab.ch; img-src 'self' http://img.youtube.com data: 
*.credit-suisse.com *.directnet.com *.nab.ch; connect-src 'self' wss: ; 
font-src 'self' data:

Raiffeisen

The Raiffeisen website is the best protected of all the tested banks. The e-banking could be improved.

https://www.raiffeisen.ch/rch/de.html

CSP

This is pretty good, but it allows unsafe-eval, probably due to the JavaScript library used to implement the UI. This could be improved.

Security-Policy	default-src 'self' ; script-src 'self' 'unsafe-inline' 
'unsafe-eval' assets.adobedtm.com maps.googleapis.com login.raiffeisen.ch ;
 style-src 'self' 'unsafe-inline' fonts.googleapis.com ; img-src 'self' 
statistics.raiffeisen.ch dmp.adform.net maps.googleapis.com maps.gstatic.com 
csi.gstatic.com khms0.googleapis.com khms1.googleapis.com www.homegate.ch 
dpm.demdex.net raiffeisen.demdex.net ; font-src 'self' fonts.googleapis.com 
fonts.gstatic.com ; connect-src 'self' api.raiffeisen.ch statistics.raiffeisen.ch 
www.homegate.ch prod1.solid.rolotec.ch dpm.demdex.net login.raiffeisen.ch ;
 media-src 'self' ruz.ch ; child-src * ; frame-src * ;

https://ebanking.raiffeisen.ch/

Zürcher Kantonalbank

https://www.zkb.ch/

The website is pretty bad. It has a misconfiguration in the X-Frame-Options header. The e-banking login is missing most of the headers.

https://onba.zkb.ch/page/logon/logon.page

Post Finance

Post Finance is missing the CSP header and the no-referrer header in both the website and the login. This could be improved.

https://www.postfinance.ch/de/privat.html

https://www.postfinance.ch/ap/ba/fp/html/e-finance/home?login

Julius Bär

Julius Bär is missing the CSP header and the no-referrer header for the e-banking login, and the X-Frame-Options is also missing from the website.

https://www.juliusbaer.com/global/en/home/

https://ebanking.juliusbaer.com/bjbLogin/login?lang=en

Migros Bank

The website is missing a lot of headers as well.

https://www.migrosbank.ch/de/privatpersonen.html

Migros Bank provides no login link from the browser.

WIR Bank

The WIR Bank has one of the best websites, and is missing the no-referrer header. Its e-banking solution is missing both a CSP header and a referrer policy. Here the website is more secure than the e-banking, which is strange.

https://www.wir.ch/

CSP

frame-ancestors 'self' https://www.jobs.ch;

https://wwwsec.wir.ch/authen/login?lang=de

DC Bank

The DC Bank is missing all the security headers on the website. This could really be improved! The e-banking is better, but missing the CSP and the referrer policies.

https://www.dcbank.ch/

https://banking.dcbank.ch/login/login.jsf?bank=74&lang=de&path=layout/dcb

Basler Kantonalbank

This is an interesting test. Basler Kantonalbank has no security headers on the website, and even an incorrect X-Frame-Options. The e-banking is good, but missing the no-referrer policy. So it has both the best and the worst of the banks tested.

https://www.bkb.ch/en

https://login.bkb.ch/auth/login

CSP

default-src https://*.bkb.ch https://*.mybkb.ch; 
img-src data: https://*.bkb.ch https://*.mybkb.ch; 
script-src 'unsafe-inline' 'unsafe-eval' 
https://*.bkb.ch https://*.mybkb.ch; style-src 
https://*.bkb.ch https://*.mybkb.ch 'unsafe-inline';

Berner Kantonalbank

https://www.bekb.ch/

The Berner Kantonalbank has implemented 2 security headers on the website, but is missing the HSTS header. The e-banking is missing 2 of the security headers, the no-referrer policy and the CSP.

CSP

frame-ancestors 'self'

https://banking.bekb.ch/login/login.jsf?bank=5&lang=de&path=layout/bekb

Valiant

Valiant has one of the better websites, but the worst e-banking as far as the security headers are concerned. Only the X-Frame-Options header is supported.

https://www.valiant.ch/privatkunden

https://wwwsec.valiant.ch/authen/login

St. Galler Kantonalbank

The website is an A grade, but is missing 2 headers, the X-Frame-Options and the no-referrer header. The e-banking is less protected than the website, with a grade B. It is missing the CSP and the referrer policy.

https://www.sgkb.ch/

CSP

default-src 'self' 'unsafe-inline' 'unsafe-eval' recruitingapp-1154.umantis.com 
*.googleapis.com *.gstatic.com prod1.solid.rolotec.ch beta.idisign.ch 
test.idisign.ch dis.swisscom.ch www.newhome.ch www.wuestpartner.com; 
img-src * data: android-webview-video-poster:; font-src * data:

https://www.onba.ch/login/login

Thurgauer Kantonalbank

The Thurgauer website is missing all the security headers, not even HSTS is supported, and the e-banking is missing the CSP and the no-referrer headers.

https://www.tkb.ch/

https://banking.tkb.ch/login/login

J. Safra Sarasin

The J. Safra Sarasin website uses most security headers; it is only missing the no-referrer header. The e-banking website is missing the CSP and the referrer headers.

https://www.jsafrasarasin.ch

CSP

frame-ancestors 'self'

https://ebanking-ch.jsafrasarasin.com/ebankingLogin/login

It would be nice if this part of the security could be improved for all of these websites.


Andrew Lock: How to create a Helm chart repository using Amazon S3


Helm is a package manager for Kubernetes. You can bundle Kubernetes resources together as charts that define all the necessary resources and dependencies of an application. You can then use the Helm CLI to install all the pods, services, and ingresses for an application in one simple command.
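
For example, with the default stable repository an application can be installed with a single command (the chart name here is just an illustration):

$ helm install stable/nginx-ingress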

Just like Docker or NuGet, there's a common public repository for Helm charts that the helm CLI uses by default. And just like Docker and NuGet, you can host your own Helm repository for your charts.

In this post, I'll show how you can use an AWS S3 bucket to host a Helm chart repository, how to push custom charts to it, and how to install charts from the chart repository. I won't be going into Helm or Kubernetes in depth; I suggest you check the Helm quick start guide if they're new to you.

If you're not using AWS, and you'd like to store your charts on Azure, Michal Cwienczek has a post on how to create a Helm chart repository using Blob Storage instead.

Installing the prerequisites

Before you start working with Helm properly, you need to do some setup. The Helm S3 plugin you'll be using later requires that you have the AWS CLI installed and configured on your machine. You'll also need an S3 bucket to use as your repository.

Installing the AWS CLI

I'm using an Ubuntu 16.04 virtual machine for this post, so all the instructions assume you have the same setup.

The suggested approach to install the AWS CLI is to use pip, the Python package manager. This obviously requires Python, which you can confirm is installed using:

$ python -V
Python 2.7.12  

According to the pip website:

pip is already installed if you are using Python 2 >=2.7.9 or Python 3 >=3.4

However, running which pip returned nothing for me, so I installed it anyway using

$ sudo apt-get install python-pip

Finally, we can install the AWS CLI using:

$ pip install awscli

The last thing to do is to configure your environment to access your AWS account. Add the ~/.aws/config and ~/.aws/credentials files to your home directory with the appropriate access keys, as described in the docs.
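
The credentials file is a simple INI-style file; a minimal example, using the standard placeholder keys from the AWS docs, looks like this:

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY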

Creating the repository S3 bucket

You're going to need an S3 bucket to store your charts. You can create the bucket any way you like, either using the AWS CLI, or using the AWS Management Console. I used the Management Console to create a bucket called my-helm-charts:

[Image: the my-helm-charts bucket in the AWS Management Console]
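
If you'd rather use the AWS CLI, creating the bucket is a one-liner (the region here is just an example):

$ aws s3 mb s3://my-helm-charts --region eu-west-1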

Whenever you create a new bucket, it's a good idea to think about who is able to access it, and what they're able to do. You can control this using IAM policies or S3 policies, whatever works for you. Just make sure you've looked into it!

The policy below, for example, grants read and write access to the IAM user andrew.

Once your repository is working correctly, you might want to update this so that only your CI/CD pipeline can push charts to your repository, but that any of your users can list and fetch charts. It may also be wise to remove the delete action completely.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListObjects",
      "Effect": "Allow",
      "Principal": {
        "AWS": ["arn:aws:iam::111122223333:user/andrew"]
      },
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::my-helm-charts"
    },
    {
      "Sid": "AllowObjectsFetchAndCreate",
      "Effect": "Allow",
      "Principal": {
        "AWS": ["arn:aws:iam::111122223333:user/andrew"]
      },
      "Action": [
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::my-helm-charts/*"
    }
  ]
}

Installing the Helm S3 plugin

You're almost set now. If you haven't already, install Helm using the instructions in the quick start guide.

The final prerequisite is the Helm S3 plugin. This acts as an intermediary between Helm and your S3 bucket. It's not the only way to create a custom repository, but it simplifies a lot of things.

You can install the plugin from the GitHub repo by running:

$ helm plugin install https://github.com/hypnoglow/helm-s3.git
Downloading and installing helm-s3 v0.5.2 ...  
Installed plugin: s3  

This downloads the latest version of the plugin from GitHub, and registers it with Helm.

Creating your Helm chart repository

You're finally ready to start playing with charts properly!

The first thing to do is to turn the my-helm-charts bucket into a valid chart repository. This requires adding an index.yaml to it. The Helm S3 plugin has a helper method to do that for you, which generates a valid index.yaml and uploads it to your S3 bucket:

$ helm s3 init s3://my-helm-charts/charts
Initialized empty repository at s3://my-helm-charts/charts  

If you fetch the contents of the bucket now, you'll find an index.yaml file under the /charts key:

[Image: the index.yaml file under the /charts key in the S3 bucket]

Note, the /charts prefix is entirely optional. If you omit the prefix, the Helm chart repository will be in the root of the bucket. I just included it for demonstration purposes here.

The contents of the index.yaml file are very basic at the moment:

apiVersion: v1  
entries: {}  
generated: 2018-02-10T15:27:15.948188154-08:00  

To work with the chart repository by name instead of needing the whole URL, you can add an alias. For example, to create a my-charts alias:

$ helm repo add my-charts s3://my-helm-charts/charts
"my-charts" has been added to your repositories

If you run helm repo list now, you'll see your repo listed (along with the standard stable and local repos):

$ helm repo list
NAME            URL  
stable          https://kubernetes-charts.storage.googleapis.com  
local           http://127.0.0.1:8879/charts  
my-charts       s3://my-helm-charts/charts  

You now have a functioning chart repository, but it doesn't have any charts yet! In the next section I'll show how to push charts to, and install charts from, your S3 repository.

Uploading a chart to the repository

Before you can push a chart to the repository, you need to create one. If you already have one, you could use that, or you could copy one of the standard charts from the stable repository. For the sake of completeness, I'll create a basic chart, and use that for the rest of the post.

Creating a simple test Helm chart

I used the example from the Helm docs for this test, which creates one of the simplest templates, a ConfigMap, and adds it at the path test-chart/templates/configmap.yaml:

$ helm create test-chart
Creating test-chart  
# Remove the initial cruft
$ rm -rf test-chart/templates/*.*
# Create a ConfigMap template at test-chart/templates/configmap.yaml
$ cat >test-chart/templates/configmap.yaml <<EOL
apiVersion: v1  
kind: ConfigMap  
metadata:  
  name: test-chart-configmap
data:  
  myvalue: "Hello World"
EOL  

You can install this chart into your Kubernetes cluster using:

$ helm install ./test-chart
NAME:   zeroed-armadillo  
LAST DEPLOYED: Fri Feb  9 17:10:38 2018  
NAMESPACE: default  
STATUS: DEPLOYED

RESOURCES:  
==> v1/ConfigMap
NAME               DATA  AGE  
test-chart-configmap  1     0s  

and remove it again completely using the release name presented when you installed it (zeroed-armadillo):

# --purge removes the release from the "store" completely
$ helm delete --purge zeroed-armadillo
release "zeroed-armadillo" deleted  

Now that you have a chart to work with, it's time to push it to your repository.

Uploading the test chart to the chart repository

To push the test chart to your repository you must first package it. This takes all the files in your ./test-chart folder and bundles them into a single .tgz file:

$ helm package ./test-chart
Successfully packaged chart and saved it to: ~/test-chart-0.1.0.tgz  

Once the file is packaged, you can push it to your repository using the S3 plugin, by specifying the packaged file name, and the my-charts alias you specified earlier.

$ helm s3 push ./test-chart-0.1.0.tgz my-charts

Note that without the plugin you would normally have to "manually" sync your local and remote repos, merging the remote repository with your locally added charts. The S3 plugin handles all that for you.

If you check your S3 bucket after pushing the chart, you'll see that the tgz file has been uploaded:

[Image: the packaged test-chart-0.1.0.tgz file in the S3 bucket]

That's it, you've pushed a chart to an S3 repository!

Searching and installing from the repository

If you do a search for a test chart using helm search you can see your chart listed:

$ helm search test-chart
NAME                    CHART VERSION   APP VERSION     DESCRIPTION  
my-charts/test-chart    0.1.0           1.0             A Helm chart for Kubernetes  

You can fetch and/or unpack the chart locally using helm fetch my-charts/test-chart or you can jump straight to installing it using:

$ helm install my-charts/test-chart
NAME:   rafting-crab  
LAST DEPLOYED: Sat Feb 10 15:53:34 2018  
NAMESPACE: default  
STATUS: DEPLOYED

RESOURCES:  
==> v1/ConfigMap
NAME               DATA  AGE  
mychart-configmap  1     0s  

To remove the test chart from the repository, you provide the chart name and version you wish to delete:

$ helm s3 delete test-chart --version 0.1.0 my-charts

That's basically all there is to it! You now have a central repository on S3 for storing your charts. You can fetch, search, and install charts from your repository, just as you would any other.

A warning - make sure you version your charts correctly

Helm charts should be versioned using Semantic versioning, so if you make a change to a chart, you should be sure to bump the version before pushing it to your repository. You should treat the chart name + version as immutable.
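
For example, after changing a chart you would bump the version in Chart.yaml, or override it when packaging (the version number below is illustrative):

$ helm package ./test-chart --version 0.2.0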

Unfortunately, there's currently nothing in the tooling to enforce this, and prevent you overwriting an existing chart with a chart with the same name and version number. There's an open issue to address this in the S3 plugin, but in the meantime, just be careful, and potentially enable versioning of files in S3 to catch any issues.

Update: as of version 0.6.0, the plugin will block overwriting a chart if it already exists.

In a similar vein, you may want to disable the ability to delete charts from a repository. I feel like it falls under the same umbrella as immutability of charts in general - you don't want to break downstream charts that have taken a dependency on your chart.

Summary

In this post I showed how to create a Helm chart repository in S3 using the Helm S3 plugin. I showed how to prepare an S3 bucket as a Helm repository, and how to push a chart to it. Finally, I showed how to search and install charts from the S3 repository.


Andrew Lock: Exploring the Microsoft.AspNetCore.App shared framework in ASP.NET Core 2.1 (preview 1)


In ASP.NET Core 2.1 (currently in preview 1) Microsoft have changed the way the ASP.NET Core framework is deployed for .NET Core apps, by moving to a system of shared frameworks instead of using the runtime store.

In this post, I look at some of the history and motivation for this change, the changes that you'll see when you install the ASP.NET Core 2.1 SDK or runtime on your machine, and what it all means for you as an ASP.NET Core developer.

If you're not interested in the history side, feel free to skip ahead to the impact on you as an ASP.NET Core developer.

The Microsoft.AspNetCore.All metapackage and the runtime store

In this section, I'll recap some of the problems that the Microsoft.AspNetCore.All metapackage was introduced to solve, as well as some of the issues it introduces. This is entirely based on my own understanding of the situation (primarily gleaned from these GitHub issues), so do let me know in the comments if I've got anything wrong or misrepresented the situation!

In the beginning, there were packages. So many packages.

With ASP.NET Core 1.0, Microsoft set out to create a highly modular, layered framework. Instead of the monolithic .NET Framework that you had to install in its entirety in a central location, you could reference individual packages that provide small, discrete pieces of functionality. Want to configure your app using JSON files? Add the Microsoft.Extensions.Configuration.Json package. Need environment variables? That's a different package (Microsoft.Extensions.Configuration.EnvironmentVariables).
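
As a rough illustration, composing configuration from just those two packages looks something like this (each extension method ships in its own NuGet package; the configuration key at the end is just an example):

using Microsoft.Extensions.Configuration;

public class Program
{
    public static void Main()
    {
        // AddJsonFile comes from Microsoft.Extensions.Configuration.Json,
        // AddEnvironmentVariables from Microsoft.Extensions.Configuration.EnvironmentVariables
        var configuration = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", optional: true)
            .AddEnvironmentVariables()
            .Build();

        var greeting = configuration["Greeting"];
    }
}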

This approach has many benefits, for example:

  • You get a clear "layering" of dependencies
  • You can update packages independently of others
  • You only have to include the packages that you actually need, reducing the published size of your app.

Unfortunately, these benefits diminished as the framework evolved.

Initially, all the framework packages started at version 1.0.0, and it was simply a case of adding or removing packages as necessary for the required functionality. But bug fixes arrived shortly after release, and individual packages evolved at different rates. Suddenly .csproj files were awash with different version numbers, 1.0.1, 1.0.3, 1.0.2. It was no longer easy to tell at a glance whether you were on the latest version of a package, and version management became a significant chore. The same was true when ASP.NET Core 1.1 was released - a brief consolidation was followed by diverging package versions:

[Image: diverging ASP.NET Core package versions across 1.x releases]

On top of that, the combinatorial problem of testing every version of a package with every other version, meant that there was only one "correct" combination of versions that Microsoft would support. For example, using the 1.1.0 version of the StaticFiles middleware with the 1.0.0 MVC middleware was easy to do, and would likely work without issue, but was not a configuration Microsoft could support.

It's worth noting that the Microsoft.AspNetCore metapackage partially solved this issue, but it only included a limited number of packages, so you would often still be left with a degree of external consolidation required.

Add to that the discoverability problem of finding the specific package that contains a given API, slow NuGet restore times due to the sheer number of packages, and a large published output size (as all packages are copied to the bin folder) and it was clear a different approach was required.

Unifying package versions with a metapackage

In ASP.NET Core 2.0, Microsoft introduced the Microsoft.AspNetCore.All metapackage and the .NET Core runtime store. These two pieces were designed to work around many of the problems that we've touched on, without sacrificing the ability to have distinct package dependency layers and a well factored framework.

I discussed this metapackage and the runtime store in a previous post, but I'll recap here for convenience.

The Microsoft.AspNetCore.All metapackage solves the issue of discoverability and inconsistent version numbers by including a reference to every package that is part of ASP.NET Core 2.0, as well as third-party packages referenced by ASP.NET Core. This includes both integral packages like Newtonsoft.Json, but also packages like StackExchange.Redis that are used by somewhat-peripheral packages like Microsoft.Extensions.Caching.Redis.
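
In a 2.0 project, that whole surface area comes in via a single reference in the .csproj file (the version shown is just an example):

<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
</ItemGroup>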

On the face of it, you might expect shipping a larger metapackage to cause everything to get even slower - there would be more packages to restore, and a huge number of packages in your app's published output.

However, .NET Core 2.0 includes a new feature called the runtime store. This essentially lets you pre-install packages on a machine, in a central location, so you don't have to include them in the publish output of your individual apps. When you install the .NET Core 2.0 runtime, all the packages required by the Microsoft.AspNetCore.All metapackage are installed globally (at C:\Program Files\dotnet\store on Windows):

[Image: the runtime store folder at C:\Program Files\dotnet\store]

When you publish your app, the Microsoft.AspNetCore.All metapackage trims out all the dependencies that it knows will be in the runtime store, significantly reducing the number of dlls in your published app's folder.

The runtime store has some additional benefits. It can use "ngen-ed" libraries that are already optimised for the target machine, improving start up time. You can also use the store to "light-up" features at runtime such as Application Insights, but you can create your own manifests too.

Unfortunately, there are a few downsides to the store...

The ever-growing runtime stores

By design, if your app is built using the Microsoft.AspNetCore.All metapackage, and hence uses the runtime store output-trimming, you can only run your app on a machine that has the correct version of the runtime store installed (via the .NET Core runtime installer).

For example, if you use the Microsoft.AspNetCore.All metapackage for version 2.0.1, you must have the runtime store for 2.0.1 installed; versions 2.0.0 and 2.0.2 are no good. That means if you need to fix a critical bug in production, you would need to install the next version of the runtime store, and you would need to update, recompile, and republish all of your apps to use it. This generally leads to runtime stores growing, as you can't easily delete old versions.

This problem is a particular issue if you're running a platform like Azure, so Microsoft are acutely aware of the issue. If you deploy your apps using Docker for example, this doesn't seem like as big of a problem.

The solution Microsoft have settled on is somewhat conceptually similar to the runtime store, but it actually goes deeper than that.

Introducing Shared Frameworks in ASP.NET Core 2.1

In ASP.NET Core 2.1 (currently at preview 1), ASP.NET Core is now a Shared Framework, very similar to the existing Microsoft.NETCore.App shared framework that effectively "is" .NET Core. When you install the .NET Core runtime you can also install the ASP.NET Core runtime:

[Image: installing the ASP.NET Core runtime alongside the .NET Core runtime]

After you install the preview, you'll find you have three folders in C:\Program Files\dotnet\shared (on Windows):

[Image: the three shared framework folders in C:\Program Files\dotnet\shared]

These are the three Shared frameworks for ASP.NET Core 2.1:

  • Microsoft.NETCore.App - the .NET Core framework that previously was the only framework installed
  • Microsoft.AspNetCore.App - all the dlls from packages that make up the "core" of ASP.NET Core, with packages that have third-party dependencies removed as far as possible
  • Microsoft.AspNetCore.All - all the packages that were previously referenced by the Microsoft.AspNetCore.All metapackage, including all their dependencies.

Each of these frameworks "inherits" from the last, so there's no duplication of libraries between them, but the folder layout is much simpler - just a flat list of libraries:

[Image: the flat list of dlls inside the Microsoft.AspNetCore.App shared framework folder]

So why should I care?

That's all nice and interesting, but how does it affect how we develop ASP.NET Core applications? Well for the most part, things are much the same, but there's a few points to take note of.

Reference Microsoft.AspNetCore.App in your apps

As described in this issue, Microsoft have introduced another metapackage called Microsoft.AspNetCore.App with ASP.NET Core 2.1. This contains all of the libraries that make up the core of ASP.NET Core that are shipped by the .NET and ASP.NET team themselves. Microsoft recommend using this package instead of the All metapackage, as that way they can provide direct support, instead of potentially having to rely on third-party libraries (like StackExchange.Redis or SQLite).
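
Switching a project over is a one-line change in the .csproj; note that with the shared framework approach in 2.1 you typically omit the version and let the SDK pick the matching version for you (a minimal example):

<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.App" />
</ItemGroup>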

In terms of behaviour, you'll still effectively get the same publish output dependency-trimming that you do currently (though the mechanism is slightly different), so there's no need to worry about that. If you need some of the extra packages that aren't part of the new Microsoft.AspNetCore.App metapackage, then you can just reference them individually.

Note that you are still free to reference the Microsoft.AspNetCore.All metapackage, it's just not recommended as it locks you into specific versions of third-party dependencies. As you saw previously, the All shared framework inherits from the App shared framework, so it should be easy enough to switch between them.

Framework version mismatches

By moving away from the runtime store, and instead moving to a shared-framework approach, it's easier for the .NET Core runtime to handle mismatches between the requested runtime and the installed runtimes.

With ASP.NET Core prior to 2.1, the runtime would automatically roll-forward patch versions if a newer version of the runtime was installed on the machine, but it would never roll forward minor versions. For example, if versions 2.0.2 and 2.0.3 were installed, then an app targeting 2.0.2 would use 2.0.3 automatically. However if only version 2.1.0 was installed and the app targeted version 2.0.0, the app would fail to start.

With ASP.NET Core 2.1, the runtime can roll-forward by using a newer minor version of the framework than requested. So in the previous example, an app targeting 2.0.0 would be able to run on a machine that only has 2.1.0 or 2.2.1 installed for example.

An exact minor match is always chosen preferentially; the minor version only rolls-forward when your app would otherwise be unable to run.
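
Under the hood, the framework an app requests is recorded in its <app>.runtimeconfig.json file next to the compiled dll; for an app referencing Microsoft.AspNetCore.App it looks roughly like this (the exact values depend on your project):

{
  "runtimeOptions": {
    "tfm": "netcoreapp2.1",
    "framework": {
      "name": "Microsoft.AspNetCore.App",
      "version": "2.1.0"
    }
  }
}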

Exact dependency ranges

The final major change introduced in Microsoft.AspNetCore.App is the use of exact-version requirements for referenced NuGet packages. Typically, most NuGet packages specify their dependencies using "at least" ranges, where any version at or above the specified one will satisfy the requirement.

For example, the image below shows some of the dependencies of the Microsoft.AspNetCore.All (version 2.0.6) package.

[Image: Microsoft.AspNetCore.All 2.0.6 dependencies specified with ">=" version ranges]

Due to the way these dependencies are specified, it would be possible to silently "lift" a dependency to a higher version than that specified. For example, if you added a package which depended on a newer version, say 2.1.0, of Microsoft.AspNetCore.Authentication to an app using version 2.0.0 of the All package, then NuGet would select 2.1.0 as it satisfies all the requirements. That could result in you trying to use untested combinations of the ASP.NET Core framework libraries.

Consequently, the Microsoft.AspNetCore.App package specifies exact versions for its dependencies (note the = instead of >=):

[Image: Microsoft.AspNetCore.App dependencies pinned to exact "=" versions]

Now if you attempt to pull in a higher version of a framework library transitively, you'll get an error from NuGet when it tries to restore, warning you about the issue. So if you attempt to use version 2.2.0 of Microsoft.AspNetCore.Antiforgery with version 2.1.0 of the App metapackage for example, you'll get an error.

It's still possible to pull in a higher version of a framework package if you need to, by referencing it directly and overriding the error, but at that point you're making a conscious decision to head into uncharted waters!

Summary

ASP.NET Core 2.1 brings a surprising number of fundamental changes under the hood for a minor release, and fundamentally re-architects the way ASP.NET Core apps are delivered. However as a developer you don't have much to worry about. Other than switching to the Microsoft.AspNetCore.App metapackage and making some minor adjustments, the upgrade from 2.0 to 2.1 should be very smooth. If you're interested in digging further into the under-the-hood changes, I recommend checking out the links below:


Damien Bowden: Using Message Pack with ASP.NET Core SignalR

This post shows how SignalR could be used to send messages between different C# console clients using Message Pack as the protocol. An ASP.NET Core web application is used to host the SignalR Hub.

Code: https://github.com/damienbod/AspNetCoreAngularSignalR

Posts in this series

Setting up the Message Pack SignalR server

Add the Microsoft.AspNetCore.SignalR and the Microsoft.AspNetCore.SignalR.MsgPack NuGet packages to the ASP.NET Core server application where the SignalR Hub will be hosted. The Visual Studio NuGet Package Manager can be used for this.

Or just add them directly to the .csproj project file.

<PackageReference 
   Include="Microsoft.AspNetCore.SignalR" 
   Version="1.0.0-preview1-final" />
<PackageReference 
   Include="Microsoft.AspNetCore.SignalR.MsgPack" 
   Version="1.0.0-preview1-final" />

Set up a SignalR Hub as required. This is done by implementing the Hub class.

using Dtos;
using Microsoft.AspNetCore.SignalR;
using System.Threading.Tasks;

namespace AspNetCoreAngularSignalR.SignalRHubs
{
    // Send messages using Message Pack binary formatter
    public class LoopyMessageHub : Hub
    {
        public Task Send(MessageDto data)
        {
            return Clients.All.SendAsync("Send", data);
        }
    }
}

A DTO class is created to send the Message Pack messages. Notice that the class is a plain C# class with no Message Pack attributes, or properties.

using System;

namespace Dtos
{
    public class MessageDto
    {
        public Guid Id { get; set; }

        public string Name { get; set; }

        public int Amount { get; set; }
    }
}

Then add the Message Pack protocol to the SignalR service.

services.AddSignalR(options =>
{
	options.KeepAliveInterval = TimeSpan.FromSeconds(5);
})
.AddMessagePackProtocol();

And configure the SignalR Hub in the Startup class Configure method of the ASP.NET Core server application.

app.UseSignalR(routes =>
{
	routes.MapHub<LoopyMessageHub>("/loopymessage");
});

Setting up the Message Pack SignalR client

Add the Microsoft.AspNetCore.SignalR.Client and the Microsoft.AspNetCore.SignalR.Client.MsgPack NuGet packages to the SignalR client console application.

The packages are added to the project file.

<PackageReference  
   Include="Microsoft.AspNetCore.SignalR.Client" 
   Version="1.0.0-preview1-final" />
<PackageReference  
   Include="Microsoft.AspNetCore.SignalR.Client.MsgPack" 
   Version="1.0.0-preview1-final" />

Create a Hub client connection using the Message Pack Protocol. The Url must match the URL configuration on the server.

public static async Task SetupSignalRHubAsync()
{
 _hubConnection = new HubConnectionBuilder()
   .WithUrl("https://localhost:44324/loopymessage")
   .WithMessagePackProtocol()
   .WithConsoleLogger()
   .Build();

 await _hubConnection.StartAsync();
}

The Hub can then be used to send or receive SignalR messages using the Message Pack as the binary serializer.

using Dtos;
using Microsoft.AspNetCore.SignalR.Client;
using System;
using System.Threading.Tasks;

namespace ConsoleSignalRMessagePack
{
    class Program
    {
        private static HubConnection _hubConnection;

        public static void Main(string[] args) => MainAsync().GetAwaiter().GetResult();

        static async Task MainAsync()
        {
            await SetupSignalRHubAsync();
            _hubConnection.On<MessageDto>("Send", (message) =>
            {
                Console.WriteLine($"Received Message: {message.Name}");
            });
            Console.WriteLine("Connected to Hub");
            Console.WriteLine("Press ESC to stop");
            do
            {
                while (!Console.KeyAvailable)
                {
                    var message = Console.ReadLine();
                    await _hubConnection.SendAsync("Send", new MessageDto() { Id = Guid.NewGuid(), Name = message, Amount = 7 });
                    Console.WriteLine("SendAsync to Hub");
                }
            }
            while (Console.ReadKey(true).Key != ConsoleKey.Escape);

            await _hubConnection.DisposeAsync();
        }

        public static async Task SetupSignalRHubAsync()
        {
            _hubConnection = new HubConnectionBuilder()
                 .WithUrl("https://localhost:44324/loopymessage")
                 .WithMessagePackProtocol()
                 .WithConsoleLogger()
                 .Build();

            await _hubConnection.StartAsync();
        }
    }
}

Testing

Start the server application, and 2 console applications. Then you can send and receive SignalR messages, which use Message Pack as the protocol.


Links:

https://msgpack.org/

https://github.com/aspnet/SignalR

https://github.com/aspnet/SignalR#readme

https://radu-matei.com/blog/signalr-core/


Anuraj Parameswaran: Exploring Global Tools in .NET Core

This post is about Global Tools in .NET Core. Global Tools is a new feature in .NET Core that helps you write .NET Core console apps that can be packaged and delivered as NuGet packages. It is similar to npm global tools.


Damien Bowden: First experiments with makecode and micro:bit

At the MVP Global Summit, I heard about MakeCode for the first time. The project makes it really easy for people to get a first introduction to code and computer science. I got the chance to play around with the Micro:bit, which has a whole range of sensors and can easily be programmed from MakeCode.

I decided to experiment and tried it out with two 12 year olds and a 10 year old.

MakeCode

The https://makecode.com/ website provides a whole range of links, getting-started lessons, and great ideas on how people can use this. We experimented first with MakeCode for the Micro:bit. This software can be run from any browser, and https://makecode.microbit.org/ can be used to experiment with, or program, the Micro:bit.

Micro:bit

The Micro:bit is a 32-bit ARM computer with all types of sensors, inputs and outputs. Here’s a link with the features: http://microbit.org/guide/features/

The Micro:bit can be purchased almost anywhere in the world. Links for your country can be found here: https://www.microbit.org/resellers/

The Micro:bit can be connected to your computer using a USB cable.

Testing

Once set up, I gave the kids a simple introduction, explained the different blocks, and very quickly they started to experiment themselves. The first results looked like this, which they called a magic show. I kind of like the name.

I also explained the code, and they could understand how to change the code, and how it mapped back to the block code. The mapping between the block code and the text code is a fantastic feature of MakeCode. The first question was why bother with the text code at all, which meant they understood the relationship, so I could then explain the advantages.

input.onButtonPressed(Button.A, () => {
    basic.showString("HELLO!")
    basic.pause(2000)
})
input.onGesture(Gesture.FreeFall, () => {
    basic.showIcon(IconNames.No)
    basic.pause(4000)
})
input.onButtonPressed(Button.AB, () => {
    basic.showString("DAS WARS")
})
input.onButtonPressed(Button.B, () => {
    basic.showIcon(IconNames.Angry)
})
input.onGesture(Gesture.Shake, () => {
    basic.showLeds(`
        # # # # #
        # # # # #
        # # # # #
        # # # # #
        # # # # #
        `)
})
basic.forever(() => {
    led.plot(2, 2)
    basic.pause(30)
    led.unplot(2, 2)
    basic.pause(300)
})

Then it was downloaded to the Micro:bit. The MakeCode Micro:bit software provides a download button. If you use this from the browser, it creates a hex file, which can be copied to the hardware via drag and drop. If you use the MakeCode Micro:bit Windows Store application, it will download it directly for you.

Once downloaded, the magic show could begin.

The following was produced by the 10 year old, who needed a bit more help. He discovered the sound.

Notes

This is a super project, and I would highly recommend it to schools, or as a present for kids. There are so many ways to try out new things, or code with different hardware, or even Minecraft. The kids have started to introduce it to other kids already. It would be great if they could do this in school. If you have questions or queries, the MakeCode team are really helpful and can be reached on Twitter: @MsMakeCode, or you can create a GitHub issue. The docs are really excellent if you require help with programming, and provide some really cool examples and ideas.

Links:

http://makecode.com/

https://makecode.microbit.org/

https://www.microbit.org/resellers/

http://microbit.org/guide/features/


Damien Bowden: Securing the CDN links in the ASP.NET Core 2.1 templates

This article uses the ASP.NET Core 2.1 MVC template and shows how to secure the CDN links using the integrity parameter.

A new ASP.NET Core MVC application was created using the 2.1 template in Visual Studio.

This template uses HTTPS by default and has added some of the required HTTPS headers, like HSTS, which is required for any application. The template has added the integrity parameter to the JavaScript CDN links, but it is missing on the CSS CDN links.

<script src="https://ajax.aspnetcdn.com/ajax/jquery/jquery-2.2.0.min.js"
 asp-fallback-src="~/lib/jquery/dist/jquery.min.js"
 asp-fallback-test="window.jQuery"  
 crossorigin="anonymous"
 integrity="sha384-K+ctZQ+LL8q6tP7I94W+qzQsfRV2a+AfHIi9k8z8l9ggpc8X+Ytst4yBo/hH+8Fk">
</script>

If the value of the integrity parameter is changed, or the CDN script is changed, or for example a bitcoin miner is added to it, the MVC application will not load the script.

To test this, you can change the value of the integrity parameter on the script; in the production environment, the script will not load and will fall back to the locally deployed script. Changing the value of the integrity parameter simulates a changed script on the CDN, and the browser reports errors instead of loading the file.

Adding the integrity parameter to the CSS link

The template creates a bootstrap link in the _Layout.cshtml as follows:

<link rel="stylesheet" href="https://ajax.aspnetcdn.com/ajax/bootstrap/3.3.7/css/bootstrap.min.css"
              asp-fallback-href="~/lib/bootstrap/dist/css/bootstrap.min.css"
              asp-fallback-test-class="sr-only" asp-fallback-test-property="position" asp-fallback-test-value="absolute" />

This is missing the integrity parameter. To fix this, the integrity parameter can be added to the link.

<link rel="stylesheet" 
          integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" 
          crossorigin="anonymous"
          href="https://ajax.aspnetcdn.com/ajax/bootstrap/3.3.7/css/bootstrap.min.css"
          asp-fallback-href="~/lib/bootstrap/dist/css/bootstrap.min.css"
          asp-fallback-test-class="sr-only"
          asp-fallback-test-property="position" 
          asp-fallback-test-value="absolute" />

The value of the integrity parameter was created using the SRI Hash Generator. When creating this, you have to be sure that the file is safe. By using this CDN, your application trusts the CDN links.
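
If you'd rather generate the hash yourself from a local copy of the file, the standard Subresource Integrity recipe is a short openssl pipeline; prefix the output with "sha384-" to get the integrity value:

cat bootstrap.min.css | openssl dgst -sha384 -binary | openssl base64 -A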

Now if the CSS file is changed on the CDN server, the application will not load it.

The CSP header of the application can also be improved. The application should only load from the required CDNs and nowhere else. This can be enforced by adding the following CSP configuration:

content-security-policy: 
script-src 'self' https://ajax.aspnetcdn.com;
style-src 'self' https://ajax.aspnetcdn.com;
img-src 'self';
font-src 'self' https://ajax.aspnetcdn.com;
form-action 'self';
frame-ancestors 'self';
block-all-mixed-content

Or you can use NWebSec and add it to Startup.cs:

app.UseCsp(opts => opts
	.BlockAllMixedContent()
	.FontSources(s => s.Self()
		.CustomSources("https://ajax.aspnetcdn.com"))
	.FormActions(s => s.Self())
	.FrameAncestors(s => s.Self())
	.ImageSources(s => s.Self())
	.StyleSources(s => s.Self()
		.CustomSources("https://ajax.aspnetcdn.com"))
	.ScriptSources(s => s.Self()
		.UnsafeInline()
		.CustomSources("https://ajax.aspnetcdn.com"))
);

Links:

https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity

https://www.srihash.org/

https://www.troyhunt.com/protecting-your-embedded-content-with-subresource-integrity-sri/

https://scotthelme.co.uk/tag/cdn/

https://rehansaeed.com/tag/subresource-integrity-sri/

https://rehansaeed.com/subresource-integrity-taghelper-using-asp-net-core/


Andrew Lock: Fixing Nginx "upstream sent too big header" error when running an ingress controller in Kubernetes


In this post I describe a problem I had running IdentityServer 4 behind an Nginx reverse proxy. In my case, I was running Nginx as an ingress controller for a Kubernetes cluster, but the issue is actually not specific to Kubernetes, or IdentityServer - it's an Nginx configuration issue.

The error: "upstream sent too big header while reading response header from upstream"

Initially, the Nginx ingress controller appeared to be configured correctly. I could view the IdentityServer home page, and could click login, but when I was redirected to the authorize endpoint (as part of the standard IdentityServer flow), I would get a 502 Bad Gateway error and a blank page.

Looking through the logs, IdentityServer showed no errors - as far as it was concerned there were no problems with the authorize request. However, looking through the Nginx logs revealed this gem (formatted slightly for legibility):

2018/02/05 04:55:21 [error] 193#193:  
    *25 upstream sent too big header while reading response header from upstream, 
client:  
    192.168.1.121, 
server:  
    example.com, 
request:  
  "GET /idsrv/connect/authorize/callback?state=14379610753351226&amp;nonce=9227284121831921&amp;client_id=test.client&amp;redirect_uri=https%3A%2F%2Fexample.com%2Fclient%2F%23%2Fcallback%3F&amp;response_type=id_token%20token&amp;scope=profile%20openid%20email&amp;acr_values=tenant%3Atenant1 HTTP/1.1",
upstream:  
  "http://10.32.0.9:80/idsrv/connect/authorize/callback?state=14379610753351226&amp;nonce=9227284121831921&amp;client_id=test.client&amp;redirect_uri=https%3A%2F%2Fexample.com%2F.client%2F%23%

Apparently, this is a common problem with Nginx, and is essentially exactly what the error says. Nginx sometimes chokes on responses with large headers, because its buffer size is smaller than some other web servers. When it gets a response with large headers, as was the case for my IdentityServer OpenID Connect callback, it falls over and sends a 502 response.

The solution is to simply increase Nginx's buffer size. If you're running Nginx on bare metal you could do this by increasing the buffer size in the config file, something like:

proxy_buffers         8 16k;  # Buffer pool = 8 buffers of 16k  
proxy_buffer_size     16k;    # 16k of buffers from pool used for headers  

However, in this case, I was working with Nginx as an ingress controller to a Kubernetes cluster. The question was, how do you configure Nginx when it's running in a container?

How to configure the Nginx ingress controller

Luckily, the Nginx ingress controller is designed for exactly this situation. It uses a ConfigMap of values that are mapped to internal Nginx configuration values. By changing the ConfigMap, you can configure the underlying Nginx Pod.

The Nginx ingress controller only supports changing a subset of options via the ConfigMap approach, but luckily proxy-buffer-size is one such option! There are two things you need to do to customise the ingress:

  1. Deploy the ConfigMap containing your customisations
  2. Point the Nginx ingress controller Deployment to your ConfigMap

I'm just going to show the template changes in this post, assuming you have a cluster created using kubeadm and kubectl.

Creating the ConfigMap

The ConfigMap is one of the simplest resources in Kubernetes; it's essentially just a collection of key-value pairs. The following manifest creates a ConfigMap called nginx-configuration and sets the proxy-buffer-size to "16k", to solve the 502 errors I was seeing previously.

kind: ConfigMap  
apiVersion: v1  
metadata:  
  name: nginx-configuration
  namespace: kube-system
  labels:
    k8s-app: nginx-ingress-controller
data:  
  proxy-buffer-size: "16k"

If you save this to a file nginx-configuration.yaml then you can apply it to your cluster using

kubectl apply -f nginx-configuration.yaml  

However, you can't just apply the ConfigMap and have the ingress controller pick it up automatically - you have to update your Nginx Deployment so it knows which ConfigMap to use.

Configuring the Nginx ingress controller to use your ConfigMap

In order for the ingress controller to use your ConfigMap, you must pass the ConfigMap name (nginx-configuration) as an argument in your deployment. For example:

args:  
  - /nginx-ingress-controller
  - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
  - --configmap=$(POD_NAMESPACE)/nginx-configuration

Without this argument, the ingress controller will ignore your ConfigMap. The complete deployment manifest will look something like the following (adapted from the Nginx ingress controller repo)

apiVersion: extensions/v1beta1  
kind: Deployment  
metadata:  
  name: nginx-ingress-controller
  namespace: ingress-nginx 
spec:  
  replicas: 1
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true' 
    spec:
      initContainers:
      - command:
        - sh
        - -c
        - sysctl -w net.core.somaxconn=32768; sysctl -w net.ipv4.ip_local_port_range="1024 65535"
        image: alpine:3.6
        imagePullPolicy: IfNotPresent
        name: sysctl
        securityContext:
          privileged: true
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.2
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1

Summary

While deploying a Kubernetes cluster locally, the Nginx ingress controller was returning 502 errors for some requests. This was due to the response headers being too large for Nginx to handle. Increasing the proxy_buffer_size configuration parameter solved the problem. To achieve this with the ingress controller, you must provide a ConfigMap and point your ingress controller to it by passing an additional arg in your Deployment.


Ben Foster: Injecting UrlHelper in ASP.NET Core MVC

One of our APIs has a dynamic routing system that invokes a different handler based on attributes of the incoming HTTP request.

Each of these handlers is responsible for building the API response which includes generating hypermedia links that describe the state and capabilities of the resource, for example:

{
  "total_count": 3,
  "limit": 10,
  "from": "2018-01-25T06:36:08Z",
  "to": "2018-03-10T07:13:24Z",
  "data": [
    {
      "event_id": "evt_b7ykb47ryaouznsbmbn7ul4uai",
      "event_type": "payment.declined",
      "created_on": "2018-03-10T07:13:24Z",
      "_links": {
        "self": {
          "href": "https://example.com/events/evt_b7ykb47ryaouznsbmbn7ul4uai"
        },
        "webhooks-retry": {
          "href": "https://example.com/events/evt_b7ykb47ryaouznsbmbn7ul4uai/webhooks/retry"
        }
      }
    },
  ...
}

To avoid hardcoding paths into these handlers we wanted to take advantage of UrlHelper to build the links. Unlike many components in ASP.NET Core, this is not something that is injectable by default.

To register it with the built-in container, add the following to your Startup class:

services.AddSingleton<IActionContextAccessor, ActionContextAccessor>();
services.AddScoped<IUrlHelper>(x => {
    var actionContext = x.GetRequiredService<IActionContextAccessor>().ActionContext;
    var factory = x.GetRequiredService<IUrlHelperFactory>();
    return factory.GetUrlHelper(actionContext);
});

Both IActionContextAccessor and IUrlHelperFactory live in the Microsoft.AspNetCore.Mvc.Core package. If you're using the Microsoft.AspNetCore.All metapackage you should have this referenced already.

Once done, you'll be able to use IUrlHelper in any of your components, assuming you're in the context of a HTTP request:

if (authResponse.ThreeDsSessionId.HasValue)
{
    return new PaymentAcceptedResponse
    {
        Id = id,
        Reference = paymentRequest.Reference,
        Status = authResponse.Status
    }
    .WithLink("self", _urlHelper.PaymentLink(id))
    .WithLink("redirect",
        _urlHelper.Link("AcsRedirect", new { id = authResponse.ThreeDsSessionId }));
}
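
For completeness, injecting the scoped IUrlHelper into one of your own components is then plain constructor injection. Here's a minimal sketch (the class and route name are hypothetical, not from the original code):

public class EventLinkBuilder
{
    private readonly IUrlHelper _urlHelper;

    // Resolved via the IUrlHelper registration shown above, so this class
    // must itself be resolved within the scope of an HTTP request.
    public EventLinkBuilder(IUrlHelper urlHelper)
    {
        _urlHelper = urlHelper;
    }

    // Builds an absolute URL for a named route, e.g. "GetEvent"
    public string BuildSelfLink(string eventId)
        => _urlHelper.Link("GetEvent", new { id = eventId });
}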


Anuraj Parameswaran: Bulk Removing Azure Active Directory Users using PowerShell

This post is about deleting an Azure Active Directory. Sometimes you can't remove your Azure Active Directory because of the users and/or applications created or synced on it, and you can't remove those users from the Azure Portal.


Andrew Lock: ASP.NET Core in Action - Filters

ASP.NET Core in Action - Filters

In February 2017, the Manning Early Access Program (MEAP) started for the ASP.NET Core book I am currently writing - ASP.NET Core in Action. This post is a sample of what you can find in the book. If you like what you see, please take a look - for now you can even get a 37% discount with the code lockaspdotnet!

The Manning Early Access Program provides you full access to books as they are written. You get the chapters as they are produced, plus the finished eBook as soon as it’s ready, and the paper book long before it's in bookstores. You can also interact with the author (me!) on the forums to provide feedback as the book is being written.

The book is now finished and completely available in the MEAP, so now is the time to act if you're interested! Thanks 🙂

Understanding filters and when to use them

The MVC filter pipeline is a relatively simple concept, in that it provides “hooks” into the normal MVC request as shown in figure 1. For example, say you wanted to ensure that users can only create or edit products on an ecommerce app if they’re logged in. The app would redirect anonymous users to a login page instead of executing the action.

Without filters, you’d need to include the same code to check for a logged in user at the start of each specific action method. With this approach, the MvcMiddleware still executes the model binding and validation, even if the user wasn’t logged in.

With filters, you can use the “hooks” into the MVC request to run common code across all, or a sub-set of requests. This way you can do a wide range of things, such as:

  • Ensure a user is logged in before an action method, model binding or validation runs
  • Customize the output format of particular action methods
  • Handle model validation failures before an action method is invoked
  • Catch exceptions from an action method and handle them in a special way

ASP.NET Core in Action - Filters

Figure 1 Filters run at multiple points in the MvcMiddleware in the normal handling of a request.

In many ways, the MVC filter pipeline is like a middleware pipeline, but restricted to the MvcMiddleware only. Like middleware, filters are good for handling cross-cutting concerns for your application, and are a useful tool for reducing code duplication in many cases.

The MVC filter pipeline

As you saw in figure 1, MVC filters run at a number of different points in the MVC request. This “linear” view of an MVC request and the filter pipeline that we’ve used this far doesn’t quite match up with how these filters execute. Five different types of filter, each of which runs at a different “stage” in the MvcMiddleware, are shown in figure 2.

Each stage lends itself to a particular use case, thanks to its specific location in the MvcMiddleware, with respect to model binding, action execution, and result execution.

  • Authorization filters – These run first in the pipeline, and are useful for protecting your APIs and action methods. If an authorization filter deems the request unauthorized, it short-circuits the request, preventing the rest of the filter pipeline from running.
  • Resource filters – After authorization, resource filters are the next filters to run in the pipeline. They can also execute at the end of the pipeline, in much the same way middleware components can handle both the incoming request and the outgoing response. Alternatively, they can completely short-circuit the request pipeline, and return a response directly. Thanks to their early position in the pipeline, resource filters can have a variety of uses. You could add metrics to an action method, prevent an action method from executing if an unsupported content type is requested, or, as they run before model binding, control the way model binding works for that request.
  • Action filters – Action filters run before and after an action is executed. As model binding has already happened, action filters let you manipulate the arguments to the method, before it executes, or they can short-circuit the action completely and return a different IActionResult. As they also run after the action executes, they can optionally customize the IActionResult before it’s executed.
  • Exception filters – Exception filters can catch exceptions that occur in the filter pipeline, and handle them appropriately. They let you write custom MVC-specific error handling code, which can be useful in some situations. For example, you could catch exceptions in Web API actions and format them differently to exceptions in your MVC actions.
  • Result filters – Result filters run before and after an action method’s IActionResult is executed. This lets you control the execution of the result, or even short-circuit the execution of the result.

ASP.NET Core in Action - Filters

Figure 2 The MVC filter pipeline, including the five different filters stages. Some filter stages (Resource, Action, Result filters) run twice, before and after the remainder of the pipeline.

Exactly which filter you pick to implement depends on the functionality you’re trying to introduce. Want to short-circuit a request as early as possible? Resource filters are a good fit. Need access to the action method parameters? Use an action filter.
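
As a rough illustration (this sketch is mine, not taken from the book), a simple action filter that handles model validation failures before the action method is invoked might look like the following:

public class ValidateModelAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        if (!context.ModelState.IsValid)
        {
            // Setting Result short-circuits the pipeline - the action method never runs
            context.Result = new BadRequestObjectResult(context.ModelState);
        }
    }
}

ActionFilterAttribute and ActionExecutingContext live in Microsoft.AspNetCore.Mvc.Filters, and the attribute can be applied per action, per controller, or globally.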

You can think of the filter pipeline as a small middleware pipeline that lives by itself in the MvcMiddleware. Alternatively, you could think of them as “hooks” into the MVC action invocation process, which let you run code at a particular point in a request’s “lifecycle.”

That’s all for this article. For more information, read the free first chapter of ASP.NET Core in Action and see this Slideshare presentation.


Anuraj Parameswaran: WebHooks in ASP.NET Core

This post is about consuming webhooks in ASP.NET Core. A WebHook is an HTTP callback: an HTTP POST that occurs when something happens; a simple event-notification via HTTP POST. From ASP.NET Core 2.1 preview onwards ASP.NET Core supports WebHooks. As usual, to use WebHooks, you need to install a package for WebHook support. In this post I am consuming a webhook from GitHub, so you need to install Microsoft.AspNetCore.WebHooks.Receivers.GitHub. You can do it like this.
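
For example, one way to add the package is from the command line (the exact version used in the post isn't shown here):

dotnet add package Microsoft.AspNetCore.WebHooks.Receivers.GitHub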


Andrew Lock: Coming in ASP.NET Core 2.1 - top-level MVC parameter validation

Coming in ASP.NET Core 2.1 - top-level MVC parameter validation

This post looks at a feature coming in ASP.NET Core 2.1 related to Model Binding in ASP.NET Core MVC/Web API Controllers. I say it's a feature, but from my point of view it feels more like a bug-fix!

Note, ASP.NET Core 2.1 isn't actually in preview yet, so this post might not be accurate! I'm making a few assumptions from looking at the code and issues, I haven't tried it out myself yet.

Model validation in ASP.NET Core 2.0

Model validation is an important part of the MVC pipeline in ASP.NET Core. There are many ways you can hook into the validation layer (using FluentValidation for example), but probably the most common approach is to decorate your binding models with validation attributes from the System.ComponentModel.DataAnnotations namespace. For example:

public class UserModel  
{
    [Required, EmailAddress]
    public string Email { get; set; }

    [Required, StringLength(1000)]
    public string Name { get; set; }
}

If you use the UserModel in a controller's action method, the MvcMiddleware will automatically create a new instance of the object, bind the properties of the model, and validate it using three sources:

  1. Form values – Sent in the body of an HTTP request when a form is sent to the server using a POST
  2. Route values – Obtained from URL segments or through default values after matching a route
  3. Querystring values – Passed at the end of the URL, not used during routing.

Note, currently, data sent as JSON isn't bound by default. If you wish to bind JSON data in the body, you need to decorate your model with the [FromBody] attribute as described here.

In your controller action, you can simply check the ModelState property, and find out if the data provided was valid:

public class CheckoutController : Controller  
{
    public IActionResult SaveUser(UserModel model)
    {
        if(!ModelState.IsValid)
        {
            // Something wasn't valid on the model
            return View(model);
        }

        // The model passed validation, do something with it
    }
}

This is all pretty standard MVC stuff, but what if you don't want to create a whole binding model, but you still want to validate the incoming data?

Top-level parameters in ASP.NET Core 2.0

The DataAnnotation attributes used by the default MVC validation system don't have to be applied to the properties of a class, they can also be applied to parameters. That might lead you to think that you could completely replace the UserModel in the above example with the following:

public class CheckoutController : Controller  
{
    public IActionResult SaveUser(
        [Required, EmailAddress] string Email,
        [Required, StringLength(1000)] string Name)
    {
        if(!ModelState.IsValid)
        {
            // Something wasn't valid on the model
            return View();
        }

        // The model passed validation, do something with it
    }
}

Unfortunately, this doesn't work! While the properties are bound, the validation attributes are ignored, and ModelState.IsValid is always true!

Top level parameters in ASP.NET Core 2.1

Luckily, the ASP.NET Core team were aware of the issue, and a fix has been merged as part of ASP.NET Core 2.1. As a consequence, the code in the previous section behaves as you'd expect, with the parameters validated, and the ModelState.IsValid updated accordingly.

As part of this work you will now also be able to use the [BindRequired] attribute on parameters. This attribute is important when you're binding non-nullable value types, as using the [Required] attribute with these doesn't give the expected behaviour.

That means you can now do the following for example, and be sure that the testId parameter was bound correctly from the route parameters, and the qty parameter was bound from the querystring. Before ASP.NET Core 2.1, this wouldn't even compile!

[HttpGet("test/{testId}")]
public IActionResult Get([BindRequired, FromRoute] Guid testId, [BindRequired, FromQuery] int qty)  
{
    if(!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }
    // Valid and bound
}

For an excellent description of this problem and the difference between Required and BindRequired, see this article by Filip.

Summary

In ASP.NET Core 2.0 and below, validation attributes applied to top-level parameters are ignored, and the ModelState is not updated. Only validation attributes on complex model types are considered.

In ASP.NET Core 2.1 validation attributes will now be respected on top-level parameters. What's more, you'll be able to apply the [BindRequired] attribute to parameters.

ASP.NET Core 2.1 looks to be shaping up to have a ton of new features. This is one of those nice little improvements that just makes things a bit easier, a little bit more consistent - just the sort of changes I like 🙂


Andrew Lock: Gotchas upgrading from IdentityServer 3 to IdentityServer 4

Gotchas upgrading from IdentityServer 3 to IdentityServer 4

This post covers a couple of gotchas I experienced upgrading an IdentityServer 3 implementation to IdentityServer 4. I've written about a previous issue I ran into with an OWIN app in this scenario - where JWTs could not be validated correctly after upgrading. In this post I'll discuss two other minor issues I ran into:

  1. The URL of the JSON Web Key Set (JWKS) has changed from /.well-known/jwks to /.well-known/openid-configuration/jwks.
  2. The KeyId of the X509 certificate signing material (used to validate the identity token) changes between IdentityServer 3 and IdentityServer 4. That means a token issued by IdentityServer 3 will not be validated using IdentityServer 4, leaving users stuck in a redirect loop.

Both of these issues are actually quite minor, and weren't a problem for us to solve, they just caused a bit of confusion initially! This is just a quick post about these problems - if you're looking for more information on upgrading from IdentityServer 3 to 4 in general, I suggest checking out the docs, the announcement post, or this article by Scott Brady.

1. The JWKS URL has changed

OpenID Connect uses a "discovery document" to describe the capabilities and settings of the server - in this case, IdentityServer. This includes things like the Claims and Scopes that are available and the supported grants and response types. It also includes a number of URLs indicating other available endpoints. As a very compressed example, it might look like the following:

{
    "issuer": "https://example.com",
    "jwks_uri": "https://example.com/.well-known/openid-configuration/jwks",
    "authorization_endpoint": "https://example.com/connect/authorize",
    "token_endpoint": "https://example.com/connect/token",
    "userinfo_endpoint": "https://example.com/connect/userinfo",
    "end_session_endpoint": "https://example.com/connect/endsession",
    "scopes_supported": [
        "openid",
        "profile",
        "email"
    ],
    "claims_supported": [
        "sub",
        "name",
        "family_name",
        "given_name"
    ],
    "grant_types_supported": [
        "authorization_code",
        "client_credentials",
        "refresh_token",
        "implicit"
    ],
    "response_types_supported": [
        "code",
        "token",
        "id_token",
        "id_token token",
    ],
    "id_token_signing_alg_values_supported": [
        "RS256"
    ],
    "code_challenge_methods_supported": [
        "plain",
        "S256"
    ]
}

The discovery document is always located at the URL /.well-known/openid-configuration, so a new client connecting to the server knows where to look, but the other endpoints are free to move, as long as the discovery document reflects that.

In our move from IdentityServer 3 to IdentityServer 4, the JWKS URL did just that - it moved from /.well-known/jwks to /.well-known/openid-configuration/jwks. The discovery document obviously reflected that, and all of the IdentityServer .NET client libraries for doing token validation, both with .NET Core and for OWIN, switched to the correct URLs without any problems.

What I didn't appreciate, was that we had a Python app which was using IdentityServer for authentication, but which wasn't using the discovery document. Rather than go to the effort of calling the discovery document and parsing out the URL, and knowing that we controlled the IdentityServer implementation, the /.well-known/jwks URL was hard coded.

Oops!

Obviously it was a simple hack to update the hard coded URL to the new location, though a much better solution would be to properly parse the discovery document.

2. The KeyId of the signing material has changed

This is a slightly complex issue, and I confess, this has been on my backlog to write up for so long that I can't remember all the details myself! I do, however, remember the symptom quite vividly - a crazy, endless, redirect loop on the client!

The sequence of events looked something like this:

  1. The client side app authenticates with IdentityServer 3, obtaining an id and access token.
  2. Upgrade IdentityServer to IdentityServer 4.
  3. The client side app calls the API, which tries to validate the token using the public keys exposed by IdentityServer 4. However, it can't find the key that was used to sign the token, so this validation fails, causing a 401 redirect.
  4. The client side app handles the 401, and redirects to IdentityServer 4 to login.
  5. However, you're already logged in (the cookie persists across IdentityServer versions), so IdentityServer 4 redirects you back.
  6. Go to 4.

Gotchas upgrading from IdentityServer 3 to IdentityServer 4

It's possible that this issue manifested as it did due to something awry in the client side app, but the root cause of the issue was the fact a token issued by IdentityServer 3 could not be validated using the exposed public keys of IdentityServer 4, even though both implementations were using the same signing material - an X509 certificate.

The same public and private keypair is used in both IdentityServer 3 and IdentityServer4, but they have different identifiers, so IdentityServer thinks they are different keys.

In order to validate an access token, an app must obtain the public key material from IdentityServer, which it can use to confirm the token was signed with the associated private key. The public keys are exposed at the jwks endpoint (mentioned earlier), something like the following (truncated for brevity):

{
  "keys": [
    {
      "kty": "RSA",
      "use": "sig",
      "kid": "E23F0643F144C997D6FEEB320F00773286C2FB09",
      "x5t": "4j8GQ_FEyZfW_usyDwB3MobC-wk",
      "e": "AQAB",
      "n": "rHRhPtwUwp-i3lA_CINLooJygpJwukbw",
      "x5c": [
        "MIIDLjCCAhagAwIBAgIQ9tul\/q5XHX10l7GMTDK3zCna+mQ="
      ],
      "alg": "RS256"
    }
  ]
}

As you can see, this JSON object contains a keys property which is an array of objects (though we only have one here). Therefore, when validating an access token, the API server needs to know which key to use for the validation.

The JWT itself contains metadata indicating which signing material was used:

{
  "alg": "RS256",
  "kid": "E23F0643F144C997D6FEEB320F00773286C2FB09",
  "typ": "JWT",
  "x5t": "4j8GQ_FEyZfW_usyDwB3MobC-wk"
}

As you can see, there's a kid property (KeyId) which matches in both the jwks response and the value in the JWT header. The API token validator uses the kid contained in the JWT to locate the appropriate signing material from the jwks endpoint, and can confirm the access token hasn't been tampered with.

Unfortunately, the kid was not consistent across IdentityServer 3 and IdentityServer 4. When trying to validate a token issued by IdentityServer 3 against the keys exposed by IdentityServer 4, no matching key could be found, and validation failed.

For those interested, IdentityServer 3 uses the base 64 encoded certificate thumbprint as the KeyId - Base64Url.Encode(x509Key.Certificate.GetCertHash()). IdentityServer 4 uses X509SecurityKey.KeyId (https://github.com/IdentityServer/IdentityServer4/blob/993103d51bff929e4b0330f6c0ef9e3ffdcf8de3/src/IdentityServer4/ResponseHandling/DiscoveryResponseGenerator.cs#L316), which is slightly different - a base 16 encoded version of the hash.
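
To make the difference concrete, a rough sketch of the two calculations for the same certificate might look something like this (my illustration, not taken verbatim from either code base; Base64UrlEncoder is the helper from Microsoft.IdentityModel.Tokens, and the certificate path and password are placeholders):

// Load the shared signing certificate (path and password are placeholders)
var certificate = new X509Certificate2("idsrv-signing.pfx", "password");
var certHash = certificate.GetCertHash(); // the certificate thumbprint bytes

// IdentityServer 3 style kid: base 64 url-encoded thumbprint
var kidV3 = Base64UrlEncoder.Encode(certHash);

// IdentityServer 4 style kid: base 16 (hex) encoded thumbprint, as exposed via X509SecurityKey.KeyId
var kidV4 = BitConverter.ToString(certHash).Replace("-", string.Empty);

// kidV3 != kidV4, so tokens signed before the upgrade can't be matched
// against the keys published after it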

Our simple solution to this was to do the upgrade of IdentityServer out of hours - in the morning, the IdentityServer cookies had expired and so everyone had to re-authenticate anyway. IdentityServer 4 issued new access tokens with a kid that matched its jwks values, so there were no issues 🙂

In practice, this solution might not work for everyone, for example if you're not able to enforce a period of downtime. There are other options, like explicitly providing the kid material yourself as described in this issue if you need it. If the kid doesn't change between versions, you shouldn't have any issues validating old tokens in the upgrade.

Alternatively, you could add the signing material to IdentityServer 4 using both the old and new kids. That way, IdentityServer 4 can validate tokens issued by IdentityServer 3 (using the old kid), while also issuing (and validating) new tokens using the new kid.

Summary

This post describes a couple of minor issues upgrading a deployment from IdentityServer 3 to IdentityServer 4. The first issue, the jwks URL changing, is not an issue I expect many people to run into - if you're using the discovery document you won't have this problem. The second issue is one you might run into when upgrading from IdentityServer 3 to IdentityServer 4 in production; even if you use the same X509 certificate in both implementations, tokens issued by IdentityServer 3 cannot be validated by IdentityServer 4 due to mismatched kids.


Andrew Lock: Optimising ASP.NET Core apps in Docker - avoiding manually copying csproj files

Optimising ASP.NET Core apps in Docker - avoiding manually copying csproj files

This post builds on my previous posts on building ASP.NET Core apps in Docker and using Cake in Docker. In this post I show how you can optimise your Dockerfiles for dotnet restore, without having to manually specify all your app's .csproj files in the Dockerfile.

Background - optimising your Dockerfile for dotnet restore

When building ASP.NET Core apps using Docker, there are many best-practices to consider. One of the most important aspects is using the correct base image - in particular, a base image containing the .NET SDK to build your app, and a base image containing only the .NET runtime to run your app in production.

In addition, there are a number of best practices which apply to Docker and the way it caches layers to build your app. I discussed this process in a previous post on building ASP.NET Core apps using Cake in Docker, so if that's new to you, I suggest checking it out.

A common way to take advantage of the build cache when building your ASP.NET Core app in Docker is to copy across only the .csproj, .sln and NuGet.config files for your app before doing a restore, rather than the entire source code for your app. The NuGet package restore can be one of the slowest parts of the build, and it only depends on these files. By copying them first, Docker can cache the result of the restore, so it doesn't need to run again if all you change is a .cs file.

For example, in a previous post I used the following Docker file for building an ASP.NET Core app with three projects - a class library, an ASP.NET Core app, and a test project:

# Build image
FROM microsoft/dotnet:2.0.3-sdk AS builder  
WORKDIR /sln  
COPY ./aspnetcore-in-docker.sln ./NuGet.config  ./

# Copy all the csproj files and restore to cache the layer for faster builds
# The dotnet_build.sh script does this anyway, so superfluous, but docker can 
# cache the intermediate images so _much_ faster
COPY ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj  ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj  
COPY ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj  ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj  
COPY ./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj  ./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj  
RUN dotnet restore

COPY ./test ./test  
COPY ./src ./src  
RUN dotnet build -c Release --no-restore

RUN dotnet test "./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj" -c Release --no-build --no-restore

RUN dotnet publish "./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj" -c Release -o "../../dist" --no-restore

#App image
FROM microsoft/aspnetcore:2.0.3  
WORKDIR /app  
ENV ASPNETCORE_ENVIRONMENT Local  
ENTRYPOINT ["dotnet", "AspNetCoreInDocker.Web.dll"]  
COPY --from=builder /sln/dist .  

As you can see, the first things we do are copy the .sln file and nuget.config files, followed by all the .csproj files. We can then run dotnet restore, before we copy the /src and /test folders.

While this is great for optimising the dotnet restore point, it has a couple of minor downsides:

  1. You have to manually reference every .csproj (and .sln) file in the Dockerfile
  2. You create a new layer for every COPY command. (This is a very minor issue, as the layers don't take up much space, but it's a bit annoying)

The ideal solution

My first thought for optimising this process was to simply use wildcards to copy all the .csproj files at once. This would solve both of the issues outlined above. I'd hoped that all it would take would be the following:

# Copy all csproj files (WARNING, this doesn't work!)
COPY ./**/*.csproj ./  

Unfortunately, while COPY does support wildcard expansion, the above snippet doesn't do what you'd like it to. Instead of copying each of the .csproj files into their respective folders in the Docker image, they're dumped into the root folder instead!

Optimising ASP.NET Core apps in Docker - avoiding manually copying csproj files

The problem is that the wildcard expansion happens before the files are copied, rather than by the COPY command itself. Consequently, you're effectively running:

# Copy all csproj files (WARNING, this doesn't work!)
# COPY ./**/*.csproj ./
COPY ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj ./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj ./  

i.e. copy the three .csproj files into the root folder. It sucks that this doesn't work, but you can read more in the issue on GitHub, including how there's no plans to fix it 🙁

The solution - tarball up the csproj

The solution I'm using to the problem is a bit hacky, and has some caveats, but it's the only one I could find that works. It goes like this:

  1. Create a tarball of the .csproj files before calling docker build.
  2. In the Dockerfile, expand the tarball into the root directory
  3. Run dotnet restore
  4. After the Docker image is built, delete the tarball

Essentially, we're using other tools for bundling up the .csproj files, rather than trying to use the capabilities of the Dockerfile format. The big disadvantage with this approach is that it makes running the build a bit more complicated. You'll likely want to use a build script file, rather than simply calling docker build .. Similarly, this means you won't be able to use the automated builds feature of DockerHub.

For me, those are easy tradeoffs, as I typically use a build script anyway. The solution in this post just adds a few more lines to it.

1. Create a tarball of your project files

If you're not familiar with Linux, a tarball is simply a way of packaging up multiple files into a single file, just like a .zip file. You can package and unpackage files using the tar command, which has a daunting array of options.

There's a plethora of different ways we could add all our .csproj files to a .tar file, but the following is what I used. I'm not a Linux guy, so any improvements would be greatly received 🙂

find . -name "*.csproj" -print0 \  
    | tar -cvf projectfiles.tar --null -T -

Note: Don't use the -z parameter here to GZIP the file. Including it causes Docker to never cache the COPY command (shown below) which completely negates all the benefits of copying across the .csproj files first!

This actually uses the find command to iterate through sub directories, list out all the .csproj files, and pipe them to the tar command. The tar command writes them all to a file called projectfiles.tar in the root directory.

2. Expand the tarball in the Dockerfile and call dotnet restore

When we call docker build . from our build script, the projectfiles.tar file will be available to copy in our Dockerfile. Instead of having to individually copy across every .csproj file, we can copy across just our .tar file, and then expand it in the root directory.

The first part of our Dockerfile then becomes:

FROM microsoft/aspnetcore-build:2.0.3 AS builder  
WORKDIR /sln  
COPY ./aspnetcore-in-docker.sln ./NuGet.config  ./

COPY projectfiles.tar .  
RUN tar -xvf projectfiles.tar  
RUN dotnet restore

# The rest of the build process

Now, it doesn't matter how many new projects we add or delete, we won't need to touch the Dockerfile.

3. Delete the old projectfiles.tar

The final step is to delete the old projectfiles.tar after the build has finished. This is sort of optional - if the file already exists the next time you run your build script, tar will just overwrite the existing file.

If you want to delete the file, you can use

rm projectfiles.tar  

at the end of your build script. Either way, it's best to add projectfiles.tar as an ignored file in your .gitignore file, to avoid accidentally committing it to source control.

Further optimisation - tar all the things!

We've come this far, why not go a step further! As we're already taking the hit of using tar to create and extract an archive, we may as well package everything we need to run dotnet restore, i.e. the .sln and NuGet.config files. That lets us do a couple more optimisations in the Dockerfile.

All we need to change, is to add "OR" clauses to the find command of our build script (urgh, so ugly):

find . \( -name "*.csproj" -o -name "*.sln" -o -name "NuGet.config" \) -print0 \  
    | tar -cvf projectfiles.tar --null -T -

and then we can remove the COPY ./aspnetcore-in-docker.sln ./NuGet.config ./ line from our Dockerfile.

The very last optimisation I want to make is to combine the layer that expands the .tar file with the line that runs dotnet restore by using the && operator. Given the latter is dependent on the first, there's no advantage to caching them separately, so we may as well inline it:

RUN tar -xvf projectfiles.tar && dotnet restore  

Putting it all together - the build script and Dockerfile

And we're all done! For completeness, the final build script and Dockerfile are shown below. This is functionally identical to the Dockerfile we started with, but it's now optimised to better handle changes to our ASP.NET Core app. If we add or remove a project from our app, we won't have to touch the Dockerfile, which is great! 🙂

The build script:

#!/bin/bash
set -eux

# tarball csproj files, sln files, and NuGet.config
find . \( -name "*.csproj" -o -name "*.sln" -o -name "NuGet.config" \) -print0 \  
    | tar -cvf projectfiles.tar --null -T -

docker build  .

rm projectfiles.tar  

The Dockerfile

# Build image
FROM microsoft/aspnetcore-build:2.0.3 AS builder  
WORKDIR /sln

COPY projectfiles.tar .  
RUN tar -xvf projectfiles.tar && dotnet restore

COPY ./test ./test  
COPY ./src ./src  
RUN dotnet build -c Release --no-restore

RUN dotnet test "./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj" -c Release --no-build --no-restore

RUN dotnet publish "./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj" -c Release -o "../../dist" --no-restore

#App image
FROM microsoft/aspnetcore:2.0.3  
WORKDIR /app  
ENV ASPNETCORE_ENVIRONMENT Local  
ENTRYPOINT ["dotnet", "AspNetCoreInDocker.Web.dll"]  
COPY --from=builder /sln/dist .  

Summary

In this post I showed how you can use tar to package up your ASP.NET Core .csproj files to send to Docker. This lets you avoid having to manually specify all the project files explicitly in your Dockerfile.


Damien Bowden: Adding HTTP Headers to improve Security in an ASP.NET MVC Core application

This article shows how to add headers in an HTTPS response for an ASP.NET Core MVC application. The HTTP headers help protect against some of the attacks which can be executed against a website. securityheaders.io is used to test and validate the HTTP headers, as well as F12 in the browser. NWebSec is used to add most of the HTTP headers which improve security for the MVC application. Thanks to Scott Helme for creating securityheaders.io, and André N. Klingsheim for creating NWebSec.

Code: https://github.com/damienbod/AspNetCoreHybridFlowWithApi

2018-02-09: Updated with feedback from different sources - removed extra headers, added form actions to the CSP configuration, and added info about CAA.

A simple ASP.NET Core MVC application was created and deployed to Azure. securityheaders.io can be used to validate the headers in the application. The deployed application used in this post can be found here: https://webhybridclient20180206091626.azurewebsites.net/status/test

Testing the default application using securityheaders.io gives the following results with some room for improvement.

Fixing this in ASP.NET Core is pretty easy due to NWebSec. Add the NuGet package to the project.

<PackageReference Include="NWebsec.AspNetCore.Middleware" Version="1.1.0" />

Or using the NuGet Package Manager in Visual Studio

Add the Strict-Transport-Security Header

By using HSTS, you can force all communication to be done over HTTPS. If you want to force HTTPS on the very first request from the browser, you can use the HSTS preload list: https://hstspreload.appspot.com

app.UseHsts(hsts => hsts.MaxAge(365).IncludeSubdomains());
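
If you do decide to preload, the NWebSec HSTS options should also let you emit the preload directive (treat this as a sketch, and only add it once you are sure, as getting removed from the browser preload lists is slow):

app.UseHsts(hsts => hsts.MaxAge(365).IncludeSubdomains().Preload());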

https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security

Add the X-Content-Type-Options Header

The X-Content-Type-Options header can be set to nosniff to prevent content sniffing.

app.UseXContentTypeOptions();

https://www.keycdn.com/support/what-is-mime-sniffing/

https://en.wikipedia.org/wiki/Content_sniffing

Add the Referrer Policy Header

This allows us to restrict the amount of referrer information being passed on to other sites when linking to them. Here it is set to no-referrer.

app.UseReferrerPolicy(opts => opts.NoReferrer());

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy

Scott Helme wrote a really good post on this:
https://scotthelme.co.uk/a-new-security-header-referrer-policy/

Add the X-XSS-Protection Header

The HTTP X-XSS-Protection response header is a feature of Internet Explorer, Chrome and Safari that stops pages from loading when they detect reflected cross-site scripting (XSS) attacks. (Text copied from here)

app.UseXXssProtection(options => options.EnabledWithBlockMode());

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection

Add the X-Frame-Options Header

You can use the X-Frame-Options header to prevent the site from being loaded in iframes and to protect against clickjacking attacks.

app.UseXfo(options => options.Deny());

Add the Content-Security-Policy Header

Content Security Policy can be used to prevent all sorts of attacks: XSS, clickjacking, or mixed content (HTTPS and HTTP). The following configuration works for ASP.NET Core MVC applications: mixed content is blocked, styles can be read from unsafe inline (due to the Razor controls and tag helpers), and everything else can only be loaded from the same origin.

app.UseCsp(opts => opts
	.BlockAllMixedContent()
	.StyleSources(s => s.Self())
	.StyleSources(s => s.UnsafeInline())
	.FontSources(s => s.Self())
	.FormActions(s => s.Self())
	.FrameAncestors(s => s.Self())
	.ImageSources(s => s.Self())
	.ScriptSources(s => s.Self())
);

Due to this CSP configuration, the public CDN references, which are included by default in the dotnet template for an ASP.NET Core MVC application, need to be removed from the MVC application.

https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP

NWebSec configuration in the Startup

//Registered before static files to always set header
app.UseHsts(hsts => hsts.MaxAge(365).IncludeSubdomains());
app.UseXContentTypeOptions();
app.UseReferrerPolicy(opts => opts.NoReferrer());
app.UseXXssProtection(options => options.EnabledWithBlockMode());
app.UseXfo(options => options.Deny());

app.UseCsp(opts => opts
	.BlockAllMixedContent()
	.StyleSources(s => s.Self())
	.StyleSources(s => s.UnsafeInline())
	.FontSources(s => s.Self())
	.FormActions(s => s.Self())
	.FrameAncestors(s => s.Self())
	.ImageSources(s => s.Self())
	.ScriptSources(s => s.Self())
);

app.UseStaticFiles();

When the application is tested again, things look much better.

Or view the headers in the browser, for example F12 in Chrome, and then the network view:

Here’s the securityheaders.io test results for this demo.

https://securityheaders.io/?q=https%3A%2F%2Fwebhybridclient20180206091626.azurewebsites.net%2Fstatus%2Ftest&followRedirects=on

Removing the extra information from the headers

You could also remove the extra information from the HTTP response headers, for example X-Powered-By or Server, so that less information is sent to the client.

Remove the server header from the Kestrel server by using the UseKestrel extension method.

.UseKestrel(c => c.AddServerHeader = false)
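
For context, this sits on the WebHost builder in Program.cs; a minimal sketch (with the rest of the builder elided) might look like the following:

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseKestrel(c => c.AddServerHeader = false) // don't emit the Server header
        .UseStartup<Startup>()
        .Build();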

Add a web.config to your project with the following settings:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.web>
    <httpRuntime enableVersionHeader="false"/>
  </system.web>
  <system.webServer>
    <security>
      <requestFiltering removeServerHeader="true" />
    </security>
    <httpProtocol>
      <customHeaders>
        <remove name="X-Powered-By"/>
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>

Now by viewing the response in the browser, you can see that some unwanted headers have been removed.


Further steps in hardening the application:

Use CAA

You can restrict your domain to a selected set of certificate authorities, controlling which authorities can issue the certs for your domain. This reduces the risk that another certificate authority issues a cert for your domain to a different person. This can be checked here:

https://toolbox.googleapps.com/apps/dig/

Or configured here:
https://sslmate.com/caa/

Then add it to the hosting provider.
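
As a hypothetical example (the domain and the CA are placeholders), a CAA record restricting issuance to a single authority looks like this in a DNS zone file:

example.com.  IN  CAA  0 issue "letsencrypt.org"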

Use a WAF

You could also add a WAF, for example to only expose public URLs and not private ones, or protect against DDoS attacks.

Certificate testing

The certificate should also be tested and validated.

https://www.ssllabs.com is a good test tool.

Here’s the result for the cert used in the demo project.

https://www.ssllabs.com/ssltest/analyze.html?d=webhybridclient20180206091626.azurewebsites.net

I would be grateful for feedback, or suggestions to improve this.

Links:

https://securityheaders.io

https://docs.nwebsec.com/en/latest/

https://github.com/NWebsec/NWebsec

https://www.troyhunt.com/shhh-dont-let-your-response-headers/

https://anthonychu.ca/post/aspnet-core-csp/

https://rehansaeed.com/content-security-policy-for-asp-net-mvc/

https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security

https://www.troyhunt.com/the-6-step-happy-path-to-https/

https://www.troyhunt.com/understanding-http-strict-transport/

https://hstspreload.appspot.com

https://geekflare.com/http-header-implementation/

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options

https://docs.microsoft.com/en-us/aspnet/core/tutorials/publish-to-azure-webapp-using-vs

https://developer.mozilla.org/en-US/docs/Web/HTTP/Public_Key_Pinning

https://toolbox.googleapps.com/apps/dig/

https://sslmate.com/caa/


Dominick Baier: NDC London 2018 Artefacts

“IdentityServer v2 on ASP.NET Core v2: An update” video

“Authorization is hard! (aka the PolicyServer announcement)” video

DotNetRocks interview audio

 


Andrew Lock: Sharing appsettings.json configuration files between projects in ASP.NET Core

Sharing appsettings.json configuration files between projects in ASP.NET Core

A pattern that's common for some apps is the need to share settings across multiple projects. For example, imagine you have both an ASP.NET Core RazorPages app and an ASP.NET Core Web API app in the same solution:

Sharing appsettings.json configuration files between projects in ASP.NET Core

Each of the apps will have its own distinct configuration settings, but it's likely that there will also be settings common to both, like a connection string or logging settings for example.

Sensitive configuration settings like connection strings should only be stored outside the version control repository (for example in UserSecrets or Environment Variables) but hopefully you get the idea.

Rather than having to duplicate the same values in each app's appsettings.json, it can be useful to have a common shared .json file that all apps can use, in addition to their specific appsettings.json file.

In this post I show how you can extract common settings to a SharedSettings.json file, how to configure your projects to use them both when running locally with dotnet run, and how to handle the issues that arise after you publish your app!

The initial setup

If you create a new ASP.NET Core app from a template, it will use the WebHost.CreateDefaultBuilder(args) helper method to setup the web host. This uses a set of "sane" defaults to get you up and running quickly. While I often use this for quick demo apps, I prefer to use the long-hand approach to creating a WebHostBuilder in my production apps, as I think it's clearer to the next person what's going on.

As we're going to be modifying the ConfigureAppConfiguration call to add our shared configuration files, I'll start by modifying the apps to use the long-hand WebHostBuilder configuration. This looks something like the following (some details elided for brevity)

public class Program  
{
    public static void Main(string[] args) => BuildWebHost(args).Run();

    public static IWebHost BuildWebHost(string[] args) =>
        new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .ConfigureAppConfiguration((hostingContext, config) =>
            {
                // see below
            })
            .ConfigureLogging((ctx, log) => { /* elided for brevity */ })
            .UseDefaultServiceProvider((ctx, opts) => { /* elided for brevity */ })
            .UseStartup<Startup>()
            .Build();
}

We'll start by just using the standard appsettings.json files, and the environment-specific appsettings.json files, just as you would in a default ASP.NET Core app. I've included the environment variables in there as well for good measure, but it's the JSON files we're interested in for this post.

.ConfigureAppConfiguration((hostingContext, config) =>
{
    var env = hostingContext.HostingEnvironment;

    config.AddJsonFile("appsettings.json", optional: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

    config.AddEnvironmentVariables();
})

To give us something to test, I'll add some configuration values to the appsettings.json files for both apps. This will consist of a section with one value that should be the same for both apps, and one value that is app specific. So for the Web API app we have:

{
    "MySection": {
        "SharedValue": "This value is shared across both apps",
        "AppSpecificValue": "Value for Api"
    }
}

while for the Razor app we have:

{
    "MySection": {
        "SharedValue": "This value is shared across both apps",
        "AppSpecificValue": "Value for Razor app"
    }
}

Finally, so we can view the actual values received by the app, we'll just dump the configuration section to the screen in the Razor app with the following markup:

@page
@using Microsoft.Extensions.Configuration
@inject IConfiguration _configuration;

@foreach (var kvp in _configuration.GetSection("MySection").AsEnumerable())
{
    <p>@kvp.Key : @kvp.Value</p>
}

which, when run, gives

Sharing appsettings.json configuration files between projects in ASP.NET Core

With our apps primed and ready, we can start extracting the common settings to a shared file.

Extracting common settings to SharedSettings.json

The first question we need to ask is where are we going to actually put the shared file? Logically it doesn't belong to either app directly, so we'll move it outside of the two app folders. I created a folder called Shared at the same level as the project folders:

Sharing appsettings.json configuration files between projects in ASP.NET Core

Inside this folder I created a file called SharedSettings.json, and inside that I added the following JSON:

{
    "MySection": {
        "SharedValue": "This value is shared across both apps",
        "AppSpecificValue": "override me"
    }
}

Note, I added an AppSpecificValue setting here, just to show that the appsettings.json files will override it, but you could omit it completely from SharedSettings.json if there's no valid default value.

I also removed the SharedValue key from each app's appsettings.json file - the apps should use the value from SharedSettings.json instead. The appsettings.json file for the Razor app would be:

{
    "MySection": {
        "AppSpecificValue": "Value for Razor app"
    }
}

If we run the app now, we'll see that the shared value is no longer available, though the AppSpecificValue from appsettings.json is still there:

Sharing appsettings.json configuration files between projects in ASP.NET Core

Loading the SharedSettings.json in ConfigureAppConfiguration

At this point, we've extracted the common setting to SharedSettings.json but we still need to configure our apps to load their configuration from that file as well. That's pretty straightforward - we just need to get the path to the file, and add it in our ConfigureAppConfiguration method, right before we add the appsettings.json files:

.ConfigureAppConfiguration((hostingContext, config) =>
{
    var env = hostingContext.HostingEnvironment;

    // find the shared folder in the parent folder
    var sharedFolder = Path.Combine(env.ContentRootPath, "..", "Shared");

    //load the SharedSettings first, so that appsettings.json overwrites it
    config
        .AddJsonFile(Path.Combine(sharedFolder, "SharedSettings.json"), optional: true)
        .AddJsonFile("appsettings.json", optional: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

    config.AddEnvironmentVariables();
})

Now if we run our app again, the setting's back:

Sharing appsettings.json configuration files between projects in ASP.NET Core

Great, it works!

Or does it?

While this works fine in development, we'll have a problem when we publish and deploy the app. The app is going to be looking for the SharedSettings.json file in a parent Shared folder, but that won't exist when we publish - the SharedSettings.json file isn't included in any project files, so as it stands you'd have to manually copy the Shared folder across when you publish. Yuk!

Publishing the SharedSettings.json file with your project.

There's a number of possible solutions to this problem. The one I've settled on isn't necessarily the best or the most elegant, but it works for me and is close to an approach I was using in ASP.NET.

To publish the SharedSettings.json file with each app, I create a link to the file in each app as described in this post, and set the CopyToPublishDirectory property to Always. That way, I can be sure that when the app is published, the SharedSettings.json file will be there in the output directory:

Sharing appsettings.json configuration files between projects in ASP.NET Core
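
In csproj terms, the linked file looks something like the following (the relative path assumes the Shared folder sits next to the project folder, as it does in this post):

<ItemGroup>
  <Content Include="..\Shared\SharedSettings.json">
    <Link>SharedSettings.json</Link>
    <CopyToPublishDirectory>Always</CopyToPublishDirectory>
  </Content>
</ItemGroup>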

However, that leaves us with a different problem. The SharedSettings.json file will be in a different place depending on if you're running locally with dotnet run (in ../Shared) or the published app with dotnet MyApp.Api.dll (in the working directory).

This is where things get a bit hacky.

For simplicity, rather than trying to work out in which context the app's running (I don't think that's directly possible), I simply try and load the file from both locations - one of them won't exist, but as long as we make the files "optional" that won't be an issue:

.ConfigureAppConfiguration((hostingContext, config) =>
{
    var env = hostingContext.HostingEnvironment;

    var sharedFolder = Path.Combine(env.ContentRootPath, "..", "Shared");

    config
        .AddJsonFile(Path.Combine(sharedFolder, "SharedSettings.json"), optional: true) // When running using dotnet run
        .AddJsonFile("SharedSettings.json", optional: true) // When app is published
        .AddJsonFile("appsettings.json", optional: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

    config.AddEnvironmentVariables();
})

It's not a particularly elegant solution, but it does the job for me. With the code in place we can now happily share settings across multiple apps, override them with app-specific values, and have the correct behaviour both when developing and after publishing.

Summary

This post showed how you can use a shared configuration file to share settings between multiple apps in a solution. By storing the configuration in a central JSON file accessible by both apps, you can avoid duplicating settings in appsettings.json.

Unfortunately this solution is a bit hacky due to the need to cater to the file being located at two different paths, depending on whether the app has been published or not. If anyone has a better solution, please let me know in the comments!

The sample code for this post can be found on GitHub.


Anuraj Parameswaran: Deploying Your Angular Application To Azure

This post is about deploying your Angular application to Azure App Service. Unlike earlier versions of Angular JS, Angular CLI is the preferred way to develop and deploy Angular applications. In this post I will show you how to build a CI/CD pipeline with GitHub and Kudu, which will deploy your Angular application to an Azure Web App. I am using an ASP.NET Core Web API application as the backend and an Angular application as the frontend.


Anuraj Parameswaran: Anti forgery validation with ASP.NET MVC and Angular

This post is about how to implement anti-forgery validation with ASP.NET MVC and Angular. The anti-forgery token can be used to help protect your application against cross-site request forgery. To use this feature, call the AntiForgeryToken method from a form and add the ValidateAntiForgeryTokenAttribute attribute to the action method that you want to protect.


Damien Bowden: Securing an ASP.NET Core MVC application which uses a secure API

The article shows how an ASP.NET Core MVC application can implement security when using an API to retrieve data. The OpenID Connect Hybrid flow is used to secure the ASP.NET Core MVC application. The application uses tokens stored in a cookie. This cookie is not used to access the API. The API is protected using a bearer token.

To access the API, the code running on the server of the ASP.NET Core MVC application implements the OAuth2 client credentials flow to get an access token for the API, and can then return the data to the Razor views.

Code: https://github.com/damienbod/AspNetCoreHybridFlowWithApi

Setup

IdentityServer4 and OpenID connect flow configuration

Two client configurations are set up in the IdentityServer4 configuration class. The OpenID Connect Hybrid Flow client is used for the ASP.NET Core MVC application. After a successful login, this flow will return a cookie to the client part of the application which contains the tokens. The second client is used for the API. This is service-to-service communication between two trusted applications, which usually happens in a protected zone. The API client uses a secret to connect to the API. The secret should be kept private and should be different for each deployment.

public static IEnumerable<Client> GetClients()
{
	return new List<Client>
	{
		new Client
		{
			ClientName = "hybridclient",
			ClientId = "hybridclient",
			ClientSecrets = {new Secret("hybrid_flow_secret".Sha256()) },
			AllowedGrantTypes = GrantTypes.Hybrid,
			AllowOfflineAccess = true,
			RedirectUris = { "https://localhost:44329/signin-oidc" },
			PostLogoutRedirectUris = { "https://localhost:44329/signout-callback-oidc" },
			AllowedCorsOrigins = new List<string>
			{
				"https://localhost:44329/"
			},
			AllowedScopes = new List<string>
			{
				IdentityServerConstants.StandardScopes.OpenId,
				IdentityServerConstants.StandardScopes.Profile,
				IdentityServerConstants.StandardScopes.OfflineAccess,
				"scope_used_for_hybrid_flow",
				"role"
			}
		},
		new Client
		{
			ClientId = "ProtectedApi",
			ClientName = "ProtectedApi",
			ClientSecrets = new List<Secret> { new Secret { Value = "api_in_protected_zone_secret".Sha256() } },
			AllowedGrantTypes = GrantTypes.ClientCredentials,
			AllowedScopes = new List<string> { "scope_used_for_api_in_protected_zone" }
		}
	};
}

The GetApiResources method defines the scopes and the APIs for the different resources. I usually define one scope per API resource.

public static IEnumerable<ApiResource> GetApiResources()
{
	return new List<ApiResource>
	{
		new ApiResource("scope_used_for_hybrid_flow")
		{
			ApiSecrets =
			{
				new Secret("hybrid_flow_secret".Sha256())
			},
			Scopes =
			{
				new Scope
				{
					Name = "scope_used_for_hybrid_flow",
					DisplayName = "Scope for the scope_used_for_hybrid_flow ApiResource"
				}
			},
			UserClaims = { "role", "admin", "user", "some_api" }
		},
		new ApiResource("ProtectedApi")
		{
			DisplayName = "API protected",
			ApiSecrets =
			{
				new Secret("api_in_protected_zone_secret".Sha256())
			},
			Scopes =
			{
				new Scope
				{
					Name = "scope_used_for_api_in_protected_zone",
					ShowInDiscoveryDocument = false
				}
			},
			UserClaims = { "role", "admin", "user", "safe_zone_api" }
		}
	};
}

Securing the Resource API

The protected API uses the IdentityServer4.AccessTokenValidation NuGet package to validate the access token. This uses the introspection endpoint to validate the token. The scope is also validated in this example using authorization policies from ASP.NET Core.

public void ConfigureServices(IServiceCollection services)
{
	services.AddAuthentication(IdentityServerAuthenticationDefaults.AuthenticationScheme)
	  .AddIdentityServerAuthentication(options =>
	  {
		  options.Authority = "https://localhost:44352";
		  options.ApiName = "ProtectedApi";
		  options.ApiSecret = "api_in_protected_zone_secret";
		  options.RequireHttpsMetadata = true;
	  });

	services.AddAuthorization(options =>
		options.AddPolicy("protectedScope", policy =>
		{
			policy.RequireClaim("scope", "scope_used_for_api_in_protected_zone");
		})
	);

	services.AddMvc();
}

The API is protected using the Authorize attribute and checks the defined policy. If this is ok, the data can be returned to the server part of the MVC application.

[Authorize(Policy = "protectedScope")]
[Route("api/[controller]")]
public class ValuesController : Controller
{
	[HttpGet]
	public IEnumerable<string> Get()
	{
		return new string[] { "data 1 from the second api", "data 2 from the second api" };
	}
}

Securing the ASP.NET Core MVC application

The ASP.NET Core MVC application uses OpenID Connect to validate the user and the application, and saves the result in a cookie. If the identity is valid, the tokens are returned in the cookie from the server side of the application. See the OpenID Connect specification for more information concerning the OpenID Connect Hybrid flow.

public void ConfigureServices(IServiceCollection services)
{
	services.AddAuthentication(options =>
	{
		options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
		options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
	})
	.AddCookie()
	.AddOpenIdConnect(options =>
	{
		options.SignInScheme = "Cookies";
		options.Authority = "https://localhost:44352";
		options.RequireHttpsMetadata = true;
		options.ClientId = "hybridclient";
		options.ClientSecret = "hybrid_flow_secret";
		options.ResponseType = "code id_token";
		options.Scope.Add("scope_used_for_hybrid_flow");
		options.Scope.Add("profile");
		options.SaveTokens = true;
	});

	services.AddAuthorization();

	services.AddMvc();
}

The Configure method adds the authentication to the MVC middleware using the UseAuthentication extension method.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	...

	app.UseStaticFiles();

	app.UseAuthentication();

	app.UseMvc(routes =>
	{
		routes.MapRoute(
			name: "default",
			template: "{controller=Home}/{action=Index}/{id?}");
	});
}

The home controller is protected using the Authorize attribute, and the Index method gets the data from the API using the API service.

[Authorize]
public class HomeController : Controller
{
	private readonly ApiService _apiService;

	public HomeController(ApiService apiService)
	{
		_apiService = apiService;
	}

	public async System.Threading.Tasks.Task<IActionResult> Index()
	{
		var result = await _apiService.GetApiDataAsync();

		ViewData["data"] = result.ToString();
		return View();
	}

	public IActionResult Error()
	{
		return View(new ErrorViewModel { RequestId = Activity.Current?.Id ?? HttpContext.TraceIdentifier });
	}
}

Calling the protected API from the ASP.NET Core MVC app

The API service implements the HTTP request using the TokenClient from IdentityModel, which can be downloaded as a NuGet package. First the access token is acquired from the server, then the token is used to request the data from the API.

var discoClient = new DiscoveryClient("https://localhost:44352");
var disco = await discoClient.GetAsync();
if (disco.IsError)
{
	throw new ApplicationException($"Status code: {disco.IsError}, Error: {disco.Error}");
}

var tokenClient = new TokenClient(disco.TokenEndpoint, "ProtectedApi", "api_in_protected_zone_secret");
var tokenResponse = await tokenClient.RequestClientCredentialsAsync("scope_used_for_api_in_protected_zone");

if (tokenResponse.IsError)
{
	throw new ApplicationException($"Status code: {tokenResponse.IsError}, Error: {tokenResponse.Error}");
}

using (var client = new HttpClient())
{
	client.BaseAddress = new Uri("https://localhost:44342");
	client.SetBearerToken(tokenResponse.AccessToken);

	var response = await client.GetAsync("/api/values");
	if (response.IsSuccessStatusCode)
	{
		var responseContent = await response.Content.ReadAsStringAsync();
		var data = JArray.Parse(responseContent);

		return data;
	}

	throw new ApplicationException($"Status code: {response.StatusCode}, Error: {response.ReasonPhrase}");
}

Authentication and Authorization in the API

The ASP.NET Core MVC application calls the API using a service-to-service trusted association in the protected zone. Because of this, the identity which made the original request cannot be validated using the access token on the API. If authorization is required for the original identity, it should be sent in the URL of the API HTTP request, where it can then be validated as required using an authorization filter. It may be enough to validate that the service token is authenticated and authorized. Care should be taken when sending user data, due to GDPR requirements, or when sending user information which the IT admins should not have access to.
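As a purely illustrative sketch of that idea (UserDataController and the route shape are hypothetical, not part of the article's code), the original user's id could be passed in the URL and checked inside the protected API, in addition to the client-credentials policy:

[Authorize(Policy = "protectedScope")]
[Route("api/[controller]")]
public class UserDataController : Controller
{
	// The access token only proves that the trusted MVC application is calling,
	// so the original identity is passed explicitly as part of the URL.
	[HttpGet("{userId}")]
	public IActionResult Get(string userId)
	{
		// Validate userId here (or in an authorization filter) before returning
		// any user-specific data - this check is application-specific.
		if (string.IsNullOrEmpty(userId))
		{
			return BadRequest();
		}

		return Ok(new[] { $"data for {userId}" });
	}
}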

Should I use the same token as the access token returned to the MVC client?

This depends 🙂 If the API is a public API, then this is fine, as long as you have no problem re-using the same token for different applications. If the API is in the protected zone, for example behind a WAF, then a separate token would be better. Only tokens issued for the trusted app can be used to access the protected API, which can be enforced by using separate scopes, secrets, and so on. The tokens issued for the MVC app and the user will not work; these were issued for a single purpose only, not for multiple applications. The token used for the protected API never leaves the trusted zone.

Links

https://docs.microsoft.com/en-gb/aspnet/core/mvc/overview

https://docs.microsoft.com/en-gb/aspnet/core/security/anti-request-forgery

https://docs.microsoft.com/en-gb/aspnet/core/security/

http://openid.net/

https://www.owasp.org/images/b/b0/Best_Practices_WAF_v105.en.pdf

https://tools.ietf.org/html/rfc7662

http://docs.identityserver.io/en/release/quickstarts/5_hybrid_and_api_access.html

https://github.com/aspnet/Security

https://elanderson.net/2017/07/identity-server-from-implicit-to-hybrid-flow/

http://openid.net/specs/openid-connect-core-1_0.html#HybridFlowAuth


Andrew Lock: ASP.NET Core in Action - MVC in ASP.NET Core

ASP.NET Core in Action - MVC in ASP.NET Core

In February 2017, the Manning Early Access Program (MEAP) started for the ASP.NET Core book I am currently writing - ASP.NET Core in Action. This post is a sample of what you can find in the book. If you like what you see, please take a look - for now you can even get a 37% discount with the code lockaspdotnet!

The Manning Early Access Program provides you full access to books as they are written. You get the chapters as they are produced, plus the finished eBook as soon as it's ready, and the paper book long before it's in bookstores. You can also interact with the author (me!) on the forums to provide feedback as the book is being written.

The book is now finished and completely available in the MEAP, so now is the time to act if you're interested! Thanks 🙂

MVC in ASP.NET Core

As you may be aware, ASP.NET Core implements MVC using a single piece of middleware, which is normally placed at the end of the middleware pipeline, as shown in figure 1. Once a request has been processed by each middleware (and assuming none of them handle the request and short-circuit the pipeline), it is received by the MVC middleware.

ASP.NET Core in Action - MVC in ASP.NET Core

Figure 1. The middleware pipeline. The MVC Middleware is typically configured as the last middleware in the pipeline.

Middleware often handles cross-cutting concerns or narrowly defined requests such as requests for files. For requirements that fall outside of these functions, or which have many external dependencies, a more robust framework is required. The MvcMiddleware in ASP.NET Core can provide this framework, allowing interaction with your application’s core business logic, and generation of a user interface. It handles everything from mapping the request to an appropriate controller, to generating the HTML or API response.
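As a minimal sketch of that ordering (assuming an ASP.NET Core 2.x-style Startup; this is not code from the book):

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Cross-cutting middleware runs first and can short-circuit the pipeline...
        app.UseStaticFiles();

        // ...and the MVC middleware is added last, so it only sees requests that
        // nothing earlier in the pipeline has handled.
        app.UseMvcWithDefaultRoute();
    }
}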

In the traditional description of the MVC design pattern, there is only a single type of model, which holds all the non-UI data and behavior. The controller updates this model as appropriate and then passes it to the view, which uses it to generate a UI. This simple, three-component pattern may be sufficient for some basic applications, but for more complex applications, it often doesn’t scale.

One of the problems when discussing MVC is the vague and overloaded terms that it uses, such as “controller” and “model.” Model, in particular, is such an overloaded term that it’s often difficult to be sure exactly what it refers to – is it an object, a collection of objects, an abstract concept? Even ASP.NET Core uses the word “model” to describe several related, but different, components, as you’ll see shortly.

Directing a request to a controller and building a binding model

The first step when the MvcMiddleware receives a request is the routing of the request to an appropriate controller. Let’s think about another page in our ToDo application. On this page, you’re displaying a list of items marked with a given category, assigned to a particular user. If you’re looking at the list of items assigned to the user “Andrew” with a category of “Simple,” you’d make a request to the URL /todo/list/Simple/Andrew.

Routing takes the path of the request, /todo/list/Simple/Andrew, and maps it against a preregistered list of patterns. These patterns match a path to a single controller class and action method.

DEFINITION An action (or action method) is a method that runs in response to a request. A controller is a class that contains a number of logically grouped action methods.
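As an illustrative sketch (the route name and defaults here are my own, not the book's), a conventional route template that would map /todo/list/Simple/Andrew to a ListCategory action on a TodoController could look like this:

app.UseMvc(routes =>
{
    // {category} and {user} are route parameters, captured from the URL path
    // and later used to build the action's binding model.
    routes.MapRoute(
        name: "todo",
        template: "todo/list/{category}/{user}",
        defaults: new { controller = "Todo", action = "ListCategory" });
});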

Once an action method is selected, the binding model (if applicable) is generated, based on the incoming request and the method parameters required by the action method, as shown in figure 2. A binding model is normally a standard class, with properties that map to the request data.

DEFINITION A binding model is an object that acts as a “container” for the data provided in a request which is required by an action method.

ASP.NET Core in Action - MVC in ASP.NET Core

Figure 2. Routing a request to a controller, and building a binding model. A request to the URL /todo/list/Simple/Andrew results in the ListCategory action being executed, passing in a populated binding model

In this case, the binding model contains two properties: Category, which is “bound” to the value "Simple"; and the property User which is bound to the value "Andrew". These values are provided in the request URL’s path and are used to populate a binding model of type TodoModel.

This binding model corresponds to the method parameter of the ListCategory action method. It is passed to the action method when it executes, and can be used to decide how to respond. For this example, the action method uses it to decide which ToDo items to display on the page.
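As a sketch of the binding model described above (illustrative only; this exact code is not shown in the excerpt):

public class TodoModel
{
    public string Category { get; set; } // bound to "Simple" from the URL path
    public string User { get; set; }     // bound to "Andrew" from the URL path
}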

Executing an action using the application model

The role of an action method in the controller is to coordinate the generation of a response to the request it’s handling. That means it should only perform a limited number of actions. In particular, it should:

  • Validate that the data contained in the binding model provided is valid for the request
  • Invoke the appropriate actions on the application model
  • Select an appropriate response to generate, based on the response from the application model

ASP.NET Core in Action - MVC in ASP.NET Core

Figure 3. When executed, an action invokes the appropriate methods in the application model.

Figure 3 shows the action method invoking an appropriate method on the application model. Here you can see that the “application model” is a somewhat abstract concept, which encapsulates the remaining non-UI part of your application. It contains the domain model, a number of services, database interaction, and a few other things.

DEFINITION The domain model encapsulates complex business logic in a series of classes that don’t depend on any infrastructure and can be easily tested.

The action method typically calls into a single point in the application model. In our example of viewing a product page, the application model might use a variety of different services to check whether the user is allowed to view the product, to calculate the display price for the product, to load the details from the database, or to load a picture of the product from a file.

Assuming the request is valid, the application model returns the required details back to the action method. It’s then up to the action method to choose a response to generate.

Generating a response using a view model

Once the action method has called out to the application model that contains the application business logic, it's time to generate a response. A view model captures the details necessary for the view to generate a response.

DEFINITION A view model is a simple object that contains data required by the view to render a UI. It’s typically some transformation of the data contained in the application model, plus extra information required to render the page, for example the page’s title.

The action method selects an appropriate view template and passes the view model to it. Each view is designed to work with a particular view model, which it uses to generate the final HTML response. Finally, this is sent back through the middleware pipeline and out to the user’s browser, as shown in figure 4.

ASP.NET Core in Action - MVC in ASP.NET Core

Figure 4 The action method builds a view model, selects which view to use to generate the response, and passes it the view model. It is the view which generates the response itself.

It is important to note that although the action method selects which view to display, it doesn’t select what’s generated. It is the view itself that decides the content of the response.
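Expanding the earlier sketch (again, illustrative only; ITodoService and its GetItems method are hypothetical stand-ins for the application model), the action might build a view model and hand it to a specific view like this:

public class TodoListViewModel
{
    public string PageTitle { get; set; }
    public List<string> Items { get; set; }
}

public class TodoController : Controller
{
    private readonly ITodoService _todoService; // hypothetical application-model service

    public TodoController(ITodoService todoService)
    {
        _todoService = todoService;
    }

    public IActionResult ListCategory(TodoModel model)
    {
        // 1. Invoke the application model to do the real work
        List<string> items = _todoService.GetItems(model.Category, model.User);

        // 2. Build a view model containing only what the view needs
        var viewModel = new TodoListViewModel
        {
            PageTitle = $"Items for {model.User} in category {model.Category}",
            Items = items
        };

        // 3. Select a view and pass it the view model; the view generates the HTML
        return View("List", viewModel);
    }
}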

Putting it all together: a complete MVC request

Now you’ve seen each of the steps that go into handling a request in ASP.NET Core using MVC, let’s put it all together from request to response. Figure 5 shows how each of the steps combine to handle the request to display the list of ToDos for user “Andrew” and category “Simple.” The traditional MVC pattern is still visible in ASP.NET Core, made up of the action/controller, the view, and the application model.

ASP.NET Core in Action - MVC in ASP.NET Core

Figure 5 A complete MVC request for the list of ToDos in the “Simple” category for user “Andrew”

By now, you might be thinking this whole process seems rather convoluted – numerous steps to display some HTML! Why not allow the application model to create the view directly, rather than going through this dance back and forth with the controller/action method?

The key benefit throughout this process is the separation of concerns.

  • The view is responsible for taking some data and generating HTML.
  • The application model is responsible for executing the required business logic.
  • The controller is responsible for validating the incoming request and selecting the appropriate view to display, based on the output of the application model.

By having clearly-defined boundaries it’s easier to update and test each of the components without depending on any of the others. If your UI logic changes, you won’t necessarily need to modify any of your business logic classes, and you’re less likely to introduce errors in unexpected places.

That’s all for this article. For more information, read the free first chapter of ASP.NET Core in Action and see this Slideshare presentation.


Andrew Lock: Including linked files from outside the project directory in ASP.NET Core

Including linked files from outside the project directory in ASP.NET Core

This post is just a quick tip that I found myself using recently - including files in a project that are outside the project directory. I suspect this feature may have slipped under the radar for many people due to the slightly obscure UI hints you need to pick up on in Visual Studio.

Adding files from outside the project by copying

Sometimes, you might want to include an existing item in your ASP.NET Core apps that lives outside the project directory. You can easily do this from Visual Studio by right clicking the project you want to include it in, and selecting Add > Existing Item…

Including linked files from outside the project directory in ASP.NET Core

You're then presented with a file picker dialog, so you can navigate to the file, and choose Add. Visual Studio will spot that the file is outside the project directory and will copy it in.

Sometimes this is the behaviour you want, but often you want the original file to remain where it is and for the project to just point to it, not to create a copy.

Adding files from outside the project by linking

To add a file as a link, right click and choose Add > Existing Item… as before, but this time, don't click the Add button. Instead, click the little dropdown arrow next to the Add button and select Add as Link.

Including linked files from outside the project directory in ASP.NET Core

Instead of copying the file into the project directory, Visual Studio will create a link to the original. That way, if you modify the original file you'll immediately see the changes in your project.

Visual Studio shows linked items with a slightly different icon, as you can see below where SharedSettings.json is a linked file and appsettings.json is a normally added file:

Including linked files from outside the project directory in ASP.NET Core

Directly editing the csproj file

As you'd expect for ASP.NET Core projects, you don't need Visual Studio to get this behaviour. You can always directly edit the .csproj file yourself and add the necessary items by hand.

The exact code required depends on the type of file you're trying to link and the type of MSBuild action required. For example, if you want to include a .cs file, you would use the <Compile> element, nested in an <ItemGroup>:

<ItemGroup>  
  <Compile Include="..\OtherFolder\MySharedClass.cs" Link="MySharedClass.cs" />
</ItemGroup>  

Include gives the relative path to the file from the project folder, and the Link property tells MSBuild to add the file as a link, plus the name that should be used for it. If you change this file name, it will also change the filename as it's displayed in Visual Studio's Solution Explorer.

For content files like JSON configuration files, you would use the <Content> element, for example:

<ItemGroup>  
  <Content Include="..\Shared\SharedSettings.json" Link="SharedSettings.json" CopyToOutputDirectory="PreserveNewest" />
</ItemGroup>  

In this example, I also set the CopyToOutputDirectory to PreserveNewest, so that the file will be copied to the output directory when the project is built or published.

Summary

Using linked files can be handy when you want to share code or resources between multiple projects. Just be sure that the files are checked in to source control along with your project, otherwise you might get build errors when loading your projects!


Anuraj Parameswaran: Using Yarn with Angular CLI

This post is about using Yarn with Angular CLI instead of NPM. Yarn is an alternative package manager for NPM packages with a focus on reliability and speed. It was released in October 2016 and has already gained a lot of traction, enjoying great popularity in the JavaScript community.


Damien Bowden: Using the dotnet Angular template with Azure AD OIDC Implicit Flow

This article shows how to use Azure AD with an Angular application implemented using the Microsoft dotnet template and the angular-auth-oidc-client npm package to implement the OpenID Implicit Flow. The Angular app uses Bootstrap 4 and Angular CLI.

Code: https://github.com/damienbod/dotnet-template-angular

Setting up Azure AD

Log into https://portal.azure.com and click the Azure Active Directory button

Click App registrations and then the New application registration

Add an application name and set the URL to match the application URL. Click the create button.

Open the new application.

Click the Manifest button.

Set the oauth2AllowImplicitFlow to true.

Click the settings button and add the API Access required permissions as needed.

Now the Azure AD is ready to go. You will need to add the users which you want to log in with, and add them as admins if required. For example, I have added damien@damienbod.onmicrosoft.com as an owner.

dotnet Angular template from Microsoft

Install the latest version and create a new project.

Installation:
https://docs.microsoft.com/en-gb/aspnet/core/spa/index#installation

Docs:
https://docs.microsoft.com/en-gb/aspnet/core/spa/angular?tabs=visual-studio

The dotnet template uses Angular CLI and can be found in the ClientApp folder.

Update all the npm packages, including the Angular CLI, and do an npm install, or use yarn to update the packages.

Add the angular-auth-oidc-client which implements the OIDC Implicit Flow for Angular applications.

{
  "name": "dotnet_angular",
  "version": "0.0.0",
  "license": "MIT",
  "scripts": {
    "ng": "ng",
    "start": "ng serve --extract-css",
    "build": "ng build --extract-css",
    "build:ssr": "npm run build -- --app=ssr --output-hashing=media",
    "test": "ng test",
    "lint": "ng lint",
    "e2e": "ng e2e"
  },
  "private": true,
  "dependencies": {
    "@angular-devkit/core": "0.0.28",
    "@angular/animations": "^5.2.1",
    "@angular/common": "^5.2.1",
    "@angular/compiler": "^5.2.1",
    "@angular/core": "^5.2.1",
    "@angular/forms": "^5.2.1",
    "@angular/http": "^5.2.1",
    "@angular/platform-browser": "^5.2.1",
    "@angular/platform-browser-dynamic": "^5.2.1",
    "@angular/platform-server": "^5.2.1",
    "@angular/router": "^5.2.1",
    "@nguniversal/module-map-ngfactory-loader": "^5.0.0-beta.5",
    "angular-auth-oidc-client": "4.0.0",
    "aspnet-prerendering": "^3.0.1",
    "bootstrap": "^4.0.0",
    "core-js": "^2.5.3",
    "es6-promise": "^4.2.2",
    "rxjs": "^5.5.6",
    "zone.js": "^0.8.20"
  },
  "devDependencies": {
    "@angular/cli": "1.6.5",
    "@angular/compiler-cli": "^5.2.1",
    "@angular/language-service": "^5.2.1",
    "@types/jasmine": "~2.8.4",
    "@types/jasminewd2": "~2.0.3",
    "@types/node": "~9.3.0",
    "codelyzer": "^4.1.0",
    "jasmine-core": "~2.9.1",
    "jasmine-spec-reporter": "~4.2.1",
    "karma": "~2.0.0",
    "karma-chrome-launcher": "~2.2.0",
    "karma-cli": "~1.0.1",
    "karma-coverage-istanbul-reporter": "^1.3.3",
    "karma-jasmine": "~1.1.1",
    "karma-jasmine-html-reporter": "^0.2.2",
    "protractor": "~5.2.2",
    "ts-node": "~4.1.0",
    "tslint": "~5.9.1",
    "typescript": "~2.6.2"
  }
}

Azure AD does not support CORS, so you have to GET the .well-known/openid-configuration for your tenant and add it to your application as a JSON file.

https://login.microsoftonline.com/damienbod.onmicrosoft.com/.well-known/openid-configuration

Do the same for the JWT keys:
https://login.microsoftonline.com/common/discovery/keys

Now change the jwks_uri URL in the well-known/openid-configuration JSON file to point at the downloaded version of the keys.

{
  "authorization_endpoint": "https://login.microsoftonline.com/a0958f45-195b-4036-9259-de2f7e594db6/oauth2/authorize",
  "token_endpoint": "https://login.microsoftonline.com/a0958f45-195b-4036-9259-de2f7e594db6/oauth2/token",
  "token_endpoint_auth_methods_supported": [ "client_secret_post", "private_key_jwt", "client_secret_basic" ],
  "jwks_uri": "https://localhost:44347/jwks.json",
  "response_modes_supported": [ "query", "fragment", "form_post" ],
  "subject_types_supported": [ "pairwise" ],
  "id_token_signing_alg_values_supported": [ "RS256" ],
  "http_logout_supported": true,
  "frontchannel_logout_supported": true,
  "end_session_endpoint": "https://login.microsoftonline.com/a0958f45-195b-4036-9259-de2f7e594db6/oauth2/logout",
  "response_types_supported": [ "code", "id_token", "code id_token", "token id_token", "token" ],
  "scopes_supported": [ "openid" ],
  "issuer": "https://sts.windows.net/a0958f45-195b-4036-9259-de2f7e594db6/",
  "claims_supported": [ "sub", "iss", "cloud_instance_name", "cloud_instance_host_name", "cloud_graph_host_name", "msgraph_host", "aud", "exp", "iat", "auth_time", "acr", "amr", "nonce", "email", "given_name", "family_name", "nickname" ],
  "microsoft_multi_refresh_token": true,
  "check_session_iframe": "https://login.microsoftonline.com/a0958f45-195b-4036-9259-de2f7e594db6/oauth2/checksession",
  "userinfo_endpoint": "https://login.microsoftonline.com/a0958f45-195b-4036-9259-de2f7e594db6/openid/userinfo",
  "tenant_region_scope": "NA",
  "cloud_instance_name": "microsoftonline.com",
  "cloud_graph_host_name": "graph.windows.net",
  "msgraph_host": "graph.microsoft.com"
}

This can now be used in the APP_INITIALIZER of the app.module. In the OIDC configuration, set the OpenIDImplicitFlowConfiguration object to match the Azure AD application which was configured before.

import { BrowserModule } from '@angular/platform-browser';
import { NgModule, APP_INITIALIZER } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpClientModule } from '@angular/common/http';
import { RouterModule } from '@angular/router';

import { AppComponent } from './app.component';
import { NavMenuComponent } from './nav-menu/nav-menu.component';
import { HomeComponent } from './home/home.component';

import {
  AuthModule,
  OidcSecurityService,
  OpenIDImplicitFlowConfiguration,
  OidcConfigService,
  AuthWellKnownEndpoints
} from 'angular-auth-oidc-client';
import { AutoLoginComponent } from './auto-login/auto-login.component';
import { routing } from './app.routes';
import { ForbiddenComponent } from './forbidden/forbidden.component';
import { UnauthorizedComponent } from './unauthorized/unauthorized.component';
import { ProtectedComponent } from './protected/protected.component';
import { AuthorizationGuard } from './authorization.guard';
import { environment } from '../environments/environment';

export function loadConfig(oidcConfigService: OidcConfigService) {
  console.log('APP_INITIALIZER STARTING');
  // https://login.microsoftonline.com/damienbod.onmicrosoft.com/.well-known/openid-configuration
  // jwt keys: https://login.microsoftonline.com/common/discovery/keys
  // Azure AD does not support CORS, so you need to download the OIDC configuration, and use these from the application.
  // The jwt keys needs to be configured in the well-known-openid-configuration.json
  return () => oidcConfigService.load_using_custom_stsServer('https://localhost:44347/well-known-openid-configuration.json');
}

@NgModule({
  declarations: [
    AppComponent,
    NavMenuComponent,
    HomeComponent,
    AutoLoginComponent,
    ForbiddenComponent,
    UnauthorizedComponent,
    ProtectedComponent
  ],
  imports: [
    BrowserModule.withServerTransition({ appId: 'ng-cli-universal' }),
    HttpClientModule,
    AuthModule.forRoot(),
    FormsModule,
    routing,
  ],
  providers: [
	  OidcSecurityService,
	  OidcConfigService,
	  {
		  provide: APP_INITIALIZER,
		  useFactory: loadConfig,
		  deps: [OidcConfigService],
		  multi: true
    },
    AuthorizationGuard
	],
  bootstrap: [AppComponent]
})

export class AppModule {

  constructor(
    private oidcSecurityService: OidcSecurityService,
    private oidcConfigService: OidcConfigService,
  ) {
    this.oidcConfigService.onConfigurationLoaded.subscribe(() => {

      const openIDImplicitFlowConfiguration = new OpenIDImplicitFlowConfiguration();
      openIDImplicitFlowConfiguration.stsServer = 'https://login.microsoftonline.com/damienbod.onmicrosoft.com';
      openIDImplicitFlowConfiguration.redirect_url = 'https://localhost:44347';
      openIDImplicitFlowConfiguration.client_id = 'fd87184a-00c2-4aee-bc72-c7c1dd468e8f';
      openIDImplicitFlowConfiguration.response_type = 'id_token token';
      openIDImplicitFlowConfiguration.scope = 'openid profile email ';
      openIDImplicitFlowConfiguration.post_logout_redirect_uri = 'https://localhost:44347';
      openIDImplicitFlowConfiguration.post_login_route = '/home';
      openIDImplicitFlowConfiguration.forbidden_route = '/home';
      openIDImplicitFlowConfiguration.unauthorized_route = '/home';
      openIDImplicitFlowConfiguration.auto_userinfo = false;
      openIDImplicitFlowConfiguration.log_console_warning_active = true;
      openIDImplicitFlowConfiguration.log_console_debug_active = !environment.production;
      openIDImplicitFlowConfiguration.max_id_token_iat_offset_allowed_in_seconds = 600;

      const authWellKnownEndpoints = new AuthWellKnownEndpoints();
      authWellKnownEndpoints.setWellKnownEndpoints(this.oidcConfigService.wellKnownEndpoints);

      this.oidcSecurityService.setupModule(openIDImplicitFlowConfiguration, authWellKnownEndpoints);
      this.oidcSecurityService.setCustomRequestParameters({ 'prompt': 'admin_consent', 'resource': 'https://graph.windows.net'});
    });

    console.log('APP STARTING');
  }
}

Now an Auth Guard can be added to protect the protected routes.

import { Injectable } from '@angular/core';
import { Router, CanActivate, ActivatedRouteSnapshot, RouterStateSnapshot } from '@angular/router';
import { Observable } from 'rxjs/Observable';
import { map } from 'rxjs/operators';
import { OidcSecurityService } from 'angular-auth-oidc-client';

@Injectable()
export class AuthorizationGuard implements CanActivate {

  constructor(
    private router: Router,
    private oidcSecurityService: OidcSecurityService
  ) { }

  public canActivate(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<boolean> | boolean {
    console.log(route + '' + state);
    console.log('AuthorizationGuard, canActivate');

    return this.oidcSecurityService.getIsAuthorized().pipe(
      map((isAuthorized: boolean) => {
        console.log('AuthorizationGuard, canActivate isAuthorized: ' + isAuthorized);

        if (isAuthorized) {
          return true;
        }

        this.router.navigate(['/unauthorized']);
        return false;
      })
    );
  }
}

You can then add an app.routes and protect what you require.

import { Routes, RouterModule } from '@angular/router';

import { ForbiddenComponent } from './forbidden/forbidden.component';
import { HomeComponent } from './home/home.component';
import { UnauthorizedComponent } from './unauthorized/unauthorized.component';
import { AutoLoginComponent } from './auto-login/auto-login.component';
import { ProtectedComponent } from './protected/protected.component';
import { AuthorizationGuard } from './authorization.guard';

const appRoutes: Routes = [
  { path: '', component: HomeComponent, pathMatch: 'full' },
  { path: 'home', component: HomeComponent },
  { path: 'autologin', component: AutoLoginComponent },
  { path: 'forbidden', component: ForbiddenComponent },
  { path: 'unauthorized', component: UnauthorizedComponent },
  { path: 'protected', component: ProtectedComponent, canActivate: [AuthorizationGuard] }
];

export const routing = RouterModule.forRoot(appRoutes);

The NavMenuComponent component is then updated to add the login and logout functions.

import { Component } from '@angular/core';
import { Subscription } from 'rxjs/Subscription';
import { OidcSecurityService } from 'angular-auth-oidc-client';

@Component({
  selector: 'app-nav-menu',
  templateUrl: './nav-menu.component.html',
  styleUrls: ['./nav-menu.component.css']
})
export class NavMenuComponent {
  isExpanded = false;
  isAuthorizedSubscription: Subscription;
  isAuthorized: boolean;

  constructor(public oidcSecurityService: OidcSecurityService) {
  }

  ngOnInit() {
    this.isAuthorizedSubscription = this.oidcSecurityService.getIsAuthorized().subscribe(
      (isAuthorized: boolean) => {
        this.isAuthorized = isAuthorized;
      });
  }

  ngOnDestroy(): void {
    this.isAuthorizedSubscription.unsubscribe();
  }

  login() {
    this.oidcSecurityService.authorize();
  }

  refreshSession() {
    this.oidcSecurityService.authorize();
  }

  logout() {
    this.oidcSecurityService.logoff();
  }
  collapse() {
    this.isExpanded = false;
  }

  toggle() {
    this.isExpanded = !this.isExpanded;
  }
}

Start the application and click login

Enter your user which is defined in Azure AD

Consent page:

And you are redirected back to the application.

Notes:

If you don’t use any Microsoft APIs, use the id_token flow and not the id_token token flow. The resource of the API needs to be defined in both the request and in the Azure AD app definition.

Links:

https://docs.microsoft.com/en-gb/aspnet/core/spa/angular?tabs=visual-studio

https://portal.azure.com


Andrew Lock: Creating GitHub pull requests from the command-line with Hub

Creating GitHub pull requests from the command-line with Hub

If you use GitHub much, you'll likely find yourself having to repeatedly use the web interface to raise pull requests. The web interface is great and all, but it can really take you out of your flow if you're used to creating branches, rebasing, pushing, and pulling from the command line!

Creating GitHub pull requests from the command-line with Hub

Luckily GitHub has a REST API that you can use to create pull requests instead, and a nice command line wrapper to invoke it called Hub! Hub wraps the git command line tool - effectively adding extra commands you can invoke from the command line. Once it's installed (and aliased) you'll be able to call:

> git pull-request

and a new pull request will be created in your repository:

Creating GitHub pull requests from the command-line with Hub

If you're someone who likes using the command line, this can really help streamline your workflow.

Installing Hub

Hub is available on GitHub so you can download binaries, or install it from source. As I use Chocolatey on my dev machine, I chose to install Hub using Chocolatey by running the following from an administrative PowerShell prompt:

> choco install hub

Creating GitHub pull requests from the command-line with Hub

Chocolatey will download and install hub into its standard installation folder (C:\ProgramData\chocolatey by default). As this folder should be in your PATH, you can type hub version from the command line and you should get back something similar to:

> hub version
git version 2.15.1.windows.2  
hub version 2.2.9  

That's it, you're good to go. The first time you use Hub to create a pull request (PR), it will prompt you for your GitHub username and password.

Creating a pull request with Hub

Hub is effectively an extension of the git command line, so it can do everything git does, and just adds some helper GitHub methods on top. Anything you can do with git, you can do with hub.

You can view all the commands available by simply typing hub into the command line. As hub is a wrapper for git it starts by displaying the git help message:

> hub
usage: git [--version] [--help] [-C <path>] [-c name=value]  
           [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
           [-p | --paginate | --no-pager] [--no-replace-objects] [--bare]
           [--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
           <command> [<args>]

These are common Git commands used in various situations:

start a working area (see also: git help tutorial)  
   clone      Clone a repository into a new directory
   init       Create an empty Git repository or reinitialize an existing one
...

At the bottom, hub lists the GitHub specific commands available to you:

These GitHub commands are provided by hub:

   pull-request   Open a pull request on GitHub
   fork           Make a fork of a remote repository on GitHub and add as remote
   create         Create this repository on GitHub and add GitHub as origin
   browse         Open a GitHub page in the default browser
   compare        Open a compare page on GitHub
   release        List or create releases (beta)
   issue          List or create issues (beta)
   ci-status      Show the CI status of a commit

As you can see, there's a whole bunch of useful commands there. The one I'm interested in is pull-request.

Let's imagine we have already checked out a repository we own, and we have created a branch to work on a feature, feature-37:

Creating GitHub pull requests from the command-line with Hub

Before we can create a PR, we need to push our branch to the server:

> git push origin -u feature-37

To create the PR, we use hub. This will open up your configured text editor to enter a message for the PR (I use Notepad++). In the comments you can see the commit messages for the branch, or if your PR only has a single commit (as in this example), hub will handily fill the message in for you, just as it does in the web interface:

Creating GitHub pull requests from the command-line with Hub

As you can see from the comments in the screenshot, the first line of your message forms the PR title, and the remainder forms the description of the PR. After saving your message, hub spits out the URL for your PR on GitHub. Follow that link, and you can see your shiny new PR ready and awaiting approval:

Creating GitHub pull requests from the command-line with Hub

Hub can do lots more than just create pull requests, but for me that's the killer feature I use everyday. If you use more features, then you may want to consider aliasing your hub command to git as it suggests in the docs.

Aliasing hub as git

As I mentioned earlier, hub is a wrapper around git that provides some handy extra tweaks. It even enhances some of the standard git commands: it can expand partial URLs in a git clone to be github.com addresses for example:

> hub clone andrewlock/test

# expands to
git clone git://github.com/andrewlock/test.git  

If you find yourself using the hub command a lot, then you might want to consider aliasing your git command to actually use Hub instead. That means you can just do

> git clone andrewlock/test

for example, without having to think about which commands are hub specific, and which are available in git. Adding an alias is safe to do - you're not modifying the underlying git program or anything, so don't worry about that.

If you're using PowerShell, you can add the alias to your profile by running:

> Add-Content $PROFILE "`nSet-Alias git hub"

and then restarting your session. For troubleshooting and other scripts see https://github.com/github/hub#aliasing.

Streamlining PR creation with a git alias

I love how much time hub has saved me by keeping my hands on the keyboard, but there's one thing that was annoying me: having to run git push before opening the PR. I'm a big fan of Git aliases, so I decided to create an alias called pr that does two things: push, and create a pull request.

If you're new to git aliases, I highly recommend checking out this post from Phil Haack. He explains what aliases are, why you want them, and gives a bunch of really useful aliases to get started.

You can create aliases directly from the command line with git, but for all but the simplest ones I like to edit the .gitconfig file directly. To open your global .gitconfig for editing, use

> git config --global --edit

This will popup your editor of choice, and allow you to edit to your heart's content. Locate the [alias] section of your config file (or if it doesn't exist, add it), and enter the following:

[alias]
    pr="!f() { \
        BRANCH_NAME=$(git rev-parse --abbrev-ref HEAD); \
        git push -u origin $BRANCH_NAME; \
        hub pull-request; \
    };f "

This alias uses the slightly more complex script format that creates a function and executes it immediately. In that function, we do three things:

  • BRANCH_NAME=$(git rev-parse --abbrev-ref HEAD); - Get the name of the current branch from git and store it in a variable, BRANCH_NAME
  • git push -u origin $BRANCH_NAME; - Push the current branch to the remote origin, and associate it with the remote branch of the same name
  • hub pull-request - Create the pull request using hub

To use the alias, simply check out the branch you wish to create a PR for and run:

> git pr

This will push the branch if necessary and create the pull request for you, all in one (prompting you for the PR title in your editor as usual).

> git pr
Counting objects: 11, done.  
Delta compression using up to 8 threads.  
Compressing objects: 100% (11/11), done.  
Writing objects: 100% (11/11), 1012 bytes | 1012.00 KiB/s, done.  
Total 11 (delta 9), reused 0 (delta 0)  
remote: Resolving deltas: 100% (9/9), completed with 7 local objects.  
To https://github.com/andrewlock/NetEscapades.AspNetCore.SecurityHeaders.git  
 * [new branch]      feature-37 -> feature-37
Branch 'feature-37' set up to track remote branch 'feature-37' from 'origin'.  
https://github.com/andrewlock/NetEscapades.AspNetCore.SecurityHeaders/pull/40  

Note, there is code in the Hub GitHub repository indicating that hub pr is going to be a feature that allows you to check out a given PR. If that's the case this alias may break, so I'll keep an eye out!

Summary

Hub is a great little wrapper from GitHub that just simplifies some of the things I do many times a day. If you find it works for you, check it out on GitHub - it's written in Go and I've no doubt they'd love to have more contributors.


Anuraj Parameswaran: Measuring code coverage of .NET Core applications with Visual Studio 2017

This post is about measuring code coverage of .NET Core applications with Visual Studio. Test coverage is a measure used to describe the degree to which the source code of a program is executed when a particular test suite runs. A program with high test coverage, measured as a percentage, has had more of its source code executed during testing, which suggests it has a lower chance of containing undetected software bugs compared to a program with low test coverage.


Andrew Lock: Handy Docker commands for local development - Part 2

Handy Docker commands for local development - Part 2

This is a follow up to my previous post of handy Docker commands that I always find myself having to Google. The full list of commands discussed in this and the previous post are shown below. Hope you find them useful!

View the space used by Docker

One of the slightly insidious things about Docker is the way it can silently chew up your drive space. Even more than that, it's not obvious exactly how much space it's actually using!

Luckily Docker includes a handy command, which lets you know how much space you're using, in terms of images, containers, and local volumes (essentially virtual hard drives attached to containers):

$ docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE  
Images              47                  23                  9.919GB             6.812GB (68%)  
Containers          48                  18                  98.35MB             89.8MB (91%)  
Local Volumes       6                   6                   316.1MB             0B (0%)  

As well as the actual space used up, this table also shows how much you could reclaim by deleting old containers, images, and volumes. In the next section, I'll show you how.

Remove old docker images and containers.

Until recently, I was manually deleting my old images and containers using the scripts in this gist, but it turns out there's a native command in Docker to clean up - docker system prune -a.

This command removes all unused containers, volumes (and networks), as well as any unused or dangling images. What's the difference between an unused and a dangling image? I think it's described well in this Stack Overflow answer:

An unused image means that it has not been assigned or used in a container. For example, when running docker ps -a - it will list all of your exited and currently running containers. Any images shown being used inside any of these containers are a "used image".

On the other hand, a dangling image just means that you've created a new build of the image, but it wasn't given a new name. So the old image you have becomes the "dangling image". Those old images are the ones that are untagged and display "<none>" as their name when you run docker images.

When running docker system prune -a, it will remove both unused and dangling images. Therefore any images being used in a container, whether they have been exited or currently running, will NOT be affected. Dangling images are layers that aren't used by any tagged images. They take up space.

When you run the prune command, Docker double checks that you really mean it, and then proceeds to clean up your space. It lists out all the IDs of removed objects, and gives a little summary of everything it reclaimed (truncated for brevity):

$ docker system prune -a
WARNING! This will remove:  
        - all stopped containers
        - all networks not used by at least one container
        - all images without at least one container associated to them
        - all build cache
Are you sure you want to continue? [y/N] y  
Deleted Containers:  
c4b642d3cdb67035278e3529e07d94574d62bce36a9330655c7b752695a54c2d  
91de184f79942877c334b20eb67d661ec569224aacf65071e52527230b92932b  
...
93d4a795a635ba0e614c0e0ba9855252d682e4e3290bed49a5825ca02e0b6e64  
4d7f75ec610cbb1fcd1070edb05b7864b3f4b4079eb01b77e9dde63d89319e43

Deleted Images:  
deleted: sha256:5bac995a88af19e91077af5221a991b901d42c1e26d58b2575e2eeb4a7d0150b  
deleted: sha256:82fd2b23a0665bd64d536e74607d9319107d86e67e36a044c19d77f98fc2dfa1  
...
untagged: microsoft/dotnet:2.0.3-runtime  
deleted: sha256:a75caa09eb1f7d732568c5d54de42819973958589702d415202469a550ffd0ea

Total reclaimed space: 6.679GB  

Be aware, if you are working on a new build using a Dockerfile, you may have dangling or unused images that you want to keep around. It's best to leave the pruning until you're at a sensible point.

Speeding up builds by minimising the Docker context

Docker is designed with two components: a client and a daemon/service. When you write docker commands, you're sending commands using the client to the Docker daemon which does all the work. The client and daemon can even be on two separate machines.

In order for the Docker daemon to build an image from a Dockerfile using docker build ., the client needs to send it the "context" in which the command should be executed. The context is essentially all the files in the directory passed to the docker build command (e.g., the current directory when you call docker build .). You can see the client sending this context when you build using a Dockerfile:

Sending build context to Docker daemon   38.5MB  

For big projects, the context can get very large. This slows down the building of your Dockerfiles as you have to wait for the client to send all the files to the daemon. In an ASP.NET Core app for example, the top level directory includes a whole bunch of files that just aren't required for most Dockerfile builds - Git files, Visual Studio / Visual Studio Code files, previous bin and obj folders. All these additional files slow down the build when they are sent as part of the context.

Handy Docker commands for local development - Part 2

Luckily, you can exclude files by creating a .dockerignore file in your root directory. This works like a .gitignore file, listing the directories and files that Docker should ignore when creating the context, for example:

.git
.vs
.vscode
artifacts  
dist  
docs  
tools  
**/bin/*
**/obj/*

The syntax isn't quite the same as for Git, but it's the same general idea. Depending on the size of your project, and how many extra files you have, adding a .dockerignore file can make a big difference. For this very small project, it reduced the context from 38.5MB to 2.476MB, and instead of taking 3 seconds to send the context, it's practically instantaneous. Not bad!

Viewing (and minimising) the Docker context

As shown in the last section, reducing the context is well worth the effort to speed up your builds. Unfortunately, there's no easy way to actually view the files that are part of the context.

The easiest approach I've found is described in this Stack Overflow question. Essentially, you build a basic image, and just copy all the files from the context. You can then run the container and browse the file system, to see what you've got.

The following Dockerfile builds a simple image using the common BusyBox base image, copies all the context files into the /tmp directory, and runs find as the command when run as a container.

FROM busybox  
WORKDIR /tmp  
COPY . .  
ENTRYPOINT ["find"]  

If you create a new Dockerfile in the root directory called InspectContext.Dockerfile containing these layers, you can create an image from it using docker build and passing the -f argument. If you don't pass the -f argument, Docker will use the default Dockerfile.

$ docker build -f InspectContext.Dockerfile --tag inspect-context .
Sending build context to Docker daemon  2.462MB  
Step 1/4 : FROM busybox  
 ---> 6ad733544a63
Step 2/4 : WORKDIR /tmp  
 ---> 6b1d4fad3942
Step 3/4 : COPY . .  
 ---> c48db59b30d9
Step 4/4 : ENTRYPOINT find  
 ---> bffa3718c9f6
Successfully built bffa3718c9f6  
Successfully tagged inspect-context:latest  

Once the image is built (which only takes a second or two), you can run the container. The find entrypoint will then spit out a list of all the files and folders in the /tmp directory, i.e. all the files that were part of the context.

$ docker run --rm inspect-context
.
./LICENSE
./.dockerignore
./aspnetcore-in-docker.sln
./README.md
./build.sh
./build.cake
./.gitattributes
./docker_build.sh
./Dockerfile
./.gitignore
./src
./src/AspNetCoreInDocker.Lib
./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj
./src/AspNetCoreInDocker.Lib/Class1.cs
./src/AspNetCoreInDocker.Web
...

With this list of files you can tweak your .dockerignore file to keep your context as lean as possible. Alternatively, if you want to browse around the container a bit further you can override the entrypoint, for example: docker run --entrypoint sh -it --rm inspect-context

That's pretty much it for the Docker commands for this article. I'm going to finish off with a couple of commands that are somewhat related, in that they're Git commands I always find myself reaching for when working with Docker!

Bonus: Making a file executable in git

This has nothing to do with Docker specifically, but it's something I always forget when my Dockerfile uses an external build script (for example when using Cake with Docker). Even if the file itself has executable permissions, you have to tell Git to store it as executable too:

git update-index --chmod=+x build.sh  

Bonus 2: Forcing script files to keep Unix Line endings in git

If you're working on Windows, but also have scripts that will be run in Linux (for example via a mapped folder in the Linux VM), you need to be sure that the files are checked out with Linux line endings (LF instead of CRLF).

You can set the line endings Git uses when checking out files with a .gitattributes file. Typically, my file just contains * text=auto so that line endings are auto-normalised for all text files. That means my .sh files end up with CRLF line endings when I check out on Windows. You can add an extra line to the file that forces all .sh files to use LF endings, no matter which platform they're checked out on:

* text=auto
*.sh eol=lf

Summary

These are the commands I find myself using most often, but if you have any useful additions, please leave them in the comments! :)


Andrew Lock: Handy Docker commands for local development - Part 1

Handy Docker commands for local development - Part 1

This post includes a grab-bag of Docker commands that I always find myself having to Google. Now I can just bookmark my own post and have them all at my finger tips! I use Docker locally inside a Linux VM rather than using Docker for Windows, so they all apply to that setup. Your mileage may vary - I imagine they work in Docker for Windows but I haven't checked.

I've split the list over 2 posts, but this is what you can expect:

Don't require sudo to execute Docker commands

By default, when you first install Docker on a Linux machine, you might find that you get permission errors when you try and run any Docker commands:

Handy Docker commands for local development - Part 1

To get round this, you need to run all your commands with sudo, e.g. sudo docker images. There are good security reasons for requiring sudo, but when I'm just running it locally in a VM, I'm not too concerned about them. To get round it, and avoid having to type sudo every time, you can add your current user to the docker group, which effectively gives it root access.

sudo usermod -aG docker $USER  

After running the command, just log out of your VM (or SSH session) and log back in, and you should be able to run your docker commands without needing to type sudo for everything.

Examining the file system of a failed Docker build

When you're initially writing a Dockerfile to build your app, you may find that it fails to build for some reason. That could happen for lots of reasons - it could be an error in your code, an error in your Dockerfile invoking the wrong commands, or you might not be copying all the required files into the image, for example.

Sometimes you get obscure errors that leave you unsure what went wrong. When that happens, you might want to inspect the filesystem at the point the build failed, to see what went wrong. You can do this by running a container from one of the previous image layers produced by your Dockerfile.

When Docker builds an image using a Dockerfile, it does so in layers. Each command in the Dockerfile creates a new layer when it executes. When a command fails, the layer is not created. To view the filesystem of the image when the command failed, we can just run the image that contains all the preceding layers.

Luckily, when you build a Dockerfile, Docker shows you the unique reference for each layer it creates - it's the random strings of numbers and letters like b1f30360c452 in the following example:

Step 1/13 : FROM microsoft/aspnetcore-build:2.0.3 AS builder  
 ---> b1f30360c452
Step 2/13 : WORKDIR /sln  
 ---> Using cache
 ---> 4dee75249988
Step 3/13 : COPY ./build.sh ./build.cake ./NuGet.config ./aspnetcore-in-docker.sln ./  
 ---> fee6f958bf9f
Step 4/13 : RUN ./build.sh -Target=Clean  
 ---> Running in ab207cd28503
/usr/bin/env: 'bash\r': No such file or directory

This build failed executing the ./build.sh -Target=Clean command. To view the filesystem we can create a container from the image created by the previous layer, fee6f958bf9f by calling docker run. We can execute a bash shell inside the container, and inspect the contents of the filesystem (or do anything else we like).

docker run --rm -it fee6f958bf9f /bin/bash  

The arguments are as follows:

  • --rm - when we exit the container, it will be deleted. This prevents the build-up of exited transient containers like this.
  • -it - create an "interactive" session. When the docker container starts up, you will be connected to it, rather than it running in the background.
  • fee6f958bf9f - the image layer to run in, taken from our failed output.
  • /bin/bash - The command to execute in the container when it starts. Using /bin/bash creates a shell, so you can execute commands and generally inspect the filesystem.

Note that --rm deletes the container, not the image. A container is basically a running image. Think of the image as a hard drive: it contains all the details on how to run a process, but you need a computer or a VM to actually do anything with it.

Once you've looked around and figured out the problem, you can quit the bash shell by running exit, which will also kill the container.
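
Sticking with that analogy, images and containers are also listed by separate commands, which can help keep the two concepts straight:

docker images    # the images ("hard drives") available locally
docker ps -a     # the containers created from them, including exited ones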

This docker run command works if you want to inspect an image during a failed build, but what if your build succeeded, and now you want to check the filesystem rather than running the app it contains? For that you'll need the command in the next section.

Examining the file system of an image with an ENTRYPOINT

Let's say your Docker build succeeded, but for some reason your app isn't starting correctly when you call docker run. You suspect there may be a missing file, so you want to inspect the filesystem. Unfortunately, the command in the previous section won't work for you. If you try it, you'll probably be greeted with an error that looks like the following:

$ docker run -it --rm myImage /bin/bash
Unhandled Exception: System.FormatException: Value for switch '/bin/bash' is missing.  
   at Microsoft.Extensions.Configuration.CommandLine.CommandLineConfigurationProvider.Load(

The problem is that the Docker image you're running (myImage in this case) already includes an ENTRYPOINT command in the Dockerfile used to build it. The ENTRYPOINT defines the command to run when the container is started, which for ASP.NET Core apps typically looks something like the following:

ENTRYPOINT ["dotnet", "MyApp.dll"]  

With the previous example, the /bin/bash argument is actually passed as an extra command line argument to the previously-defined entrypoint, so you're actually running dotnet MyApp.dll /bin/bash in the container when it starts up.

In order to override the entrypoint and just run the shell directly, you need to use the --entrypoint argument instead. Note that the argument order is different in this case - the image name goes at the end of this command, whereas the shell was at the end in the previous example:

$ docker run --rm -it --entrypoint /bin/bash  myImage

You'll now have a shell inside the container that you can use to inspect the contents or run other commands.

Copying files from a Docker container to the host

If you're not particularly au fait with Linux (🙋) then trying to diagnose a problem from inside a shell can be tricky. Sometimes, I'd rather copy a file back from a container and inspect it on the Windows side, using tools I'm familiar with. I typically have folders mapped between my Windows machine and my Linux VM, so I just need to get the file out of the Docker container and into the Linux host.

If you have a running (or stopped) container, you can copy a file from the container to the host with the docker cp command. For example, if you created a container called my-image-app from the myImage image using:

docker run -d --name my-image-app myImage  

then you could copy a file from the container to the host using something like the following:

docker cp my-image-app:/app/MyApp.dll /path/on/the/host/MyApp.dll  

Copying files from a Docker image to the host

The previous example shows how to copy files from a container, but you can't use this to copy files directly from an image. Unfortunately, to do that you first have to create a container from the image (though you don't necessarily have to run it).

There's a number of ways you can do this, depending on exactly what state your image is in (e.g. does it have a defined ENTRYPOINT), but I tend to just use the following:

docker run --rm -it --entrypoint cat myImage /app/MyApp.dll > /path/on/the/host/MyApp.dll  

This gives me a single command I can run to grab the /app/MyApp.dll file from the image, and write it out to /path/on/the/host/MyApp.dll. It relies on the cat command being available, which it is for the ASP.NET Core base image. If that's not available, you'll essentially have to manually create a container from your image, copy the file across, and then kill the container. For example:

id=$(docker create myImage)  
docker cp $id:/app/MyApp.dll /path/on/the/host/MyApp.dll  
docker rm -v $id  

If the image has a defined ENTRYPOINT you may need to override it in the docker create call:

id=$(docker create --entrypoint / andrewlock/aspnetcore-in-docker)  
docker cp $id:/app/MyApp.dll /path/on/the/host/MyApp.dll  
docker rm -v $id  

That gives you three ways to copy files out of your containers - hopefully at least one of them works for you!

Summary

That's it for this first post of Docker commands. Hope you find them useful!


Anuraj Parameswaran: Building Progressive Web apps with ASP.NET Core

This post is about building Progressive Web Apps, or PWAs, with ASP.NET Core. Progressive Web Apps (PWAs) are web applications that are regular web pages or websites, but can appear to the user like traditional applications or native mobile applications. The application type attempts to combine features offered by most modern browsers with the benefits of the mobile experience.


Damien Bowden: Creating specific themes for OIDC clients using razor views with IdentityServer4

This post shows how to use specific themes in an ASP.NET Core STS application using IdentityServer4. For each OpenID Connect (OIDC) client, a separate theme is used. The theme is implemented using Razor, based on examples and code from Ben Foster - thanks for these. The themes can then be customized as required.

Code: https://github.com/damienbod/AspNetCoreIdentityServer4Persistence

Setup

The applications are set up using 2 OIDC Implicit Flow clients which get the tokens and log in using a single IdentityServer4 application. The client id is sent with each authorize request. The client id is used to select and switch the theme.

An instance of the ClientSelector class is used per request to set and save the selected client id. The class is registered as a scoped instance.

namespace IdentityServerWithIdentitySQLite
{
    public class ClientSelector
    {
        public string SelectedClient = "";
    }
}

The ClientIdFilter action filter is used to read the client id from the authorize request and save it to the ClientSelector instance for the request. The client id is read from the returnUrl parameter.

using System;
using Microsoft.Extensions.Primitives;
using Microsoft.AspNetCore.WebUtilities;
using System.Linq;
using Microsoft.AspNetCore.Mvc.Filters;

namespace IdentityServerWithIdentitySQLite
{
    public class ClientIdFilter : IActionFilter
    {
        public ClientIdFilter(ClientSelector clientSelector)
        {
            _clientSelector = clientSelector;
        }

        public string Client_id = "none";
        private readonly ClientSelector _clientSelector;

        public void OnActionExecuted(ActionExecutedContext context)
        {
            var query = context.HttpContext.Request.Query;
            var exists = query.TryGetValue("client_id", out StringValues culture);

            if (!exists)
            {
                exists = query.TryGetValue("returnUrl", out StringValues requesturl);

                if (exists)
                {
                    var request = requesturl.ToArray()[0];
                    Uri uri = new Uri("http://faketopreventexception" + request);
                    var query1 = QueryHelpers.ParseQuery(uri.Query);
                    var client_id = query1.FirstOrDefault(t => t.Key == "client_id").Value;

                    _clientSelector.SelectedClient = client_id.ToString();
                }
            }
        }

        public void OnActionExecuting(ActionExecutingContext context)
        {
            
        }
    }
}

Now that we have a ClientSelector instance which can be injected into the different views as required, we also want to use different razor templates for each theme.

The IViewLocationExpander interface is implemented and sets the locations for the different themes. For a request, the client_id is read from the authorize request. For a logout, the client_id is not available in the URL. The selectedClient is instead set in the logout action method, and this can then be read when rendering the views.

using Microsoft.AspNetCore.Mvc.Razor;
using Microsoft.AspNetCore.WebUtilities;
using Microsoft.Extensions.Primitives;
using System;
using System.Collections.Generic;
using System.Linq;

public class ClientViewLocationExpander : IViewLocationExpander
{
    private const string THEME_KEY = "theme";

    public void PopulateValues(ViewLocationExpanderContext context)
    {
        var query = context.ActionContext.HttpContext.Request.Query;
        var exists = query.TryGetValue("client_id", out StringValues culture);

        if (!exists)
        {
            exists = query.TryGetValue("returnUrl", out StringValues requesturl);

            if (exists)
            {
                var request = requesturl.ToArray()[0];
                Uri uri = new Uri("http://faketopreventexception" + request);
                var query1 = QueryHelpers.ParseQuery(uri.Query);
                var client_id = query1.FirstOrDefault(t => t.Key == "client_id").Value;

                context.Values[THEME_KEY] = client_id.ToString();
            }
        }
    }

    public IEnumerable<string> ExpandViewLocations(ViewLocationExpanderContext context, IEnumerable<string> viewLocations)
    {
        // add the themes to the view location if one of the theme layouts are required. 
        if (context.ViewName.Contains("_Layout") 
            && context.ActionContext.HttpContext.Request.Path.ToString().Contains("logout"))
        {
            string themeValue = context.ViewName.Replace("_Layout", "");
            context.Values[THEME_KEY] = themeValue;
        }

        string theme = null;
        if (context.Values.TryGetValue(THEME_KEY, out theme))
        {
            viewLocations = new[] {
                $"/Themes/{theme}/{{1}}/{{0}}.cshtml",
                $"/Themes/{theme}/Shared/{{0}}.cshtml",
            }
            .Concat(viewLocations);
        }

        return viewLocations;
    }
}

The logout method in the account controller sets the theme and opens the correct themed view.

public async Task<IActionResult> Logout(LogoutViewModel model)
{
	...
	
	// get context information (client name, post logout redirect URI and iframe for federated signout)
	var logout = await _interaction.GetLogoutContextAsync(model.LogoutId);

	var vm = new LoggedOutViewModel
	{
		PostLogoutRedirectUri = logout?.PostLogoutRedirectUri,
		ClientName = logout?.ClientId,
		SignOutIframeUrl = logout?.SignOutIFrameUrl
	};
	_clientSelector.SelectedClient = logout?.ClientId;
	await _persistedGrantService.RemoveAllGrantsAsync(subjectId, logout?.ClientId);
	return View($"~/Themes/{logout?.ClientId}/Account/LoggedOut.cshtml", vm);
}

In the startup class, the classes are registered with the IoC, and the ClientViewLocationExpander is added.

public void ConfigureServices(IServiceCollection services)
{
	...
	
	services.AddScoped<ClientIdFilter>();
	services.AddScoped<ClientSelector>();
	services.AddAuthentication();

	services.Configure<RazorViewEngineOptions>(options =>
	{
		options.ViewLocationExpanders.Add(new ClientViewLocationExpander());
	});

In the Views folder, all the default views are implemented like before. The _ViewStart.cshtml was changed to select the correct layout using the injected service _clientSelector.

@using System.Globalization
@using IdentityServerWithAspNetIdentity.Resources
@inject LocService SharedLocalizer
@inject IdentityServerWithIdentitySQLite.ClientSelector _clientSelector
@{
    Layout = $"_Layout{_clientSelector.SelectedClient}";
}

Then the layout from the corresponding theme for the client is used, and it can be styled and changed as required for each client. Each themed Razor template which uses other views should call the themed view. For example, the ClientOne theme _Layout Razor view uses the themed _LoginPartial cshtml and not the default one.

@await Html.PartialAsync("~/Themes/ClientOne/Shared/_LoginPartial.cshtml")

The remaining themed views can then be implemented as required.

Client One themed view:

Client Two themed view:

Logout themed view for Client Two:

Links:

http://benfoster.io/blog/asp-net-core-themes-and-multi-tenancy

http://docs.identityserver.io/en/release/

https://docs.microsoft.com/en-us/ef/core/

https://docs.microsoft.com/en-us/aspnet/core/

https://getmdl.io/started/


Andrew Lock: Exploring the .NET Core Docker files: dotnet vs aspnetcore vs aspnetcore-build

Exploring the .NET Core Docker files: dotnet vs aspnetcore vs aspnetcore-build

When you build and deploy an application in Docker, you define how your image should be built using a Dockerfile. This file lists the steps required to create the image, for example: set an environment variable, copy a file, or run a script. Whenever a step is run, a new layer is created. Your final Docker image consists of all the changes introduced by these layers in your Dockerfile.

Typically, you don't start from an empty image where you need to install an operating system, but from a "base" image that contains an already configured OS. For .NET development, Microsoft provide a number of different images depending on what it is you're trying to achieve.

In this post, I look at the various Docker base images available for .NET Core development, how they differ, and when you should use each of them. I'm only going to look at the Linux amd64 images, but there are Windows container versions and even Linux arm32 images available too. At the time of writing the latest images available are 2.1.2 and 2.0.3 for the sdk-based and runtime-based images respectively.

Note: You should normally be specific about exactly which version of a Docker image you build on in your Dockerfiles (e.g. don't use latest). For that reason, all the images I mention in this post use the current latest version suffix, 2.0.3.

I'll start by briefly discussing the difference between the .NET Core SDK and the .NET Core Runtime, as it's an important factor when deciding which base image you need. I'll then walk through each of the images in turn, using the Dockerfiles for each to explain what they contain, and hence what you should use them for.

tl;dr; This is a pretty long post, so for convenience, here's a list of the images covered and a one-liner use case for each:

  • microsoft/dotnet:2.0.3-runtime-deps - running self-contained deployment (SCD) apps
  • microsoft/dotnet:2.0.3-runtime - running .NET Core console apps
  • microsoft/aspnetcore:2.0.3 - running ASP.NET Core apps
  • microsoft/dotnet:2.0.3-sdk - building .NET Core (and ASP.NET Core) apps
  • microsoft/aspnetcore-build:2.0.3 - building ASP.NET Core apps
  • microsoft/aspnetcore-build:1.0-2.0 - building apps that need multiple SDK versions or Docker tools projects

The .NET Core Runtime vs the .NET Core SDK

One of the most often lamented aspects of .NET Core development is the version numbers. There are so many different moving parts, and none of the version numbers match up, so it can be difficult to figure out what you need.

For example, on my dev machine I am building .NET Core 2.0 apps, so I installed the .NET Core 2.x SDK to allow me to do so. When I look at what I have installed using dotnet --info, I get the following:

> dotnet --info
.NET Command Line Tools (2.1.2)

Product Information:  
 Version:            2.1.2
 Commit SHA-1 hash:  5695315371

Runtime Environment:  
 OS Name:     Windows
 OS Version:  10.0.16299
 OS Platform: Windows
 RID:         win10-x64
 Base Path:   C:\Program Files\dotnet\sdk\2.1.2\

Microsoft .NET Core Shared Framework Host

  Version  : 2.0.3
  Build    : a9190d4a75f4a982ae4b4fa8d1a24526566c69df

There's a lot of numbers there, but the important ones are 2.1.2 which is the version of the command line tools or SDK I have installed, and 2.0.3 which is the version of the .NET Core runtime I have installed.

I genuinely have no idea why the SDK is version 2.1.2 - I thought it was 2.0.3 as well but apparently not. This is made all the more confusing by the fact the 2.1.2 version isn't mentioned anywhere in any of the Docker images. Welcome to the brave new world.

Whether you need the .NET Core SDK or the .NET Core runtime depends on what you're trying to do:

  • The .NET Core SDK - This is what you need to build .NET Core applications.
  • The .NET Core Runtime - This is what you need to run .NET Core applications.

When you install the SDK, you get the runtime as well, so on your dev machines you can just install the SDK. However, when it comes to deployment you need to give it a little more thought. The SDK contains everything you need to build a .NET Core app, so it's much larger than the runtime alone (122MB vs 22MB for the MSI files). If you're just going to be running the app on a machine (or in a Docker container) then you don't need the full SDK, the runtime will suffice, and will keep the image as small as possible.
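
As a rough illustration of the split (MyApp is just a placeholder name here), building and publishing require the SDK, while running the published output only requires the runtime:

# needs the .NET Core SDK
dotnet publish -c Release -o ./publish

# needs only the .NET Core runtime
dotnet ./publish/MyApp.dll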

For the rest of this post, I'll walk through the main Docker images available for .NET Core and ASP.NET Core. I assume you have a working knowledge of Docker - if you're new to Docker I suggest checking out Steve Gordon's excellent series on Docker for .NET developers.

1. microsoft/dotnet:2.0.3-runtime-deps

  • Contains native dependencies
  • No .NET Core runtime or .NET Core SDK installed
  • Use for running Self-Contained Deployment apps

The first image we'll look at forms the basis for most of the other .NET Core images. It actually doesn't even have .NET Core installed. Instead, it consists of the base debian:stretch image and has all the low-level native dependencies on which .NET Core depends.

The Dockerfile consists of a single RUN command that uses apt-get to install the required dependencies on top of the base image.

FROM debian:stretch

RUN apt-get update \  
    && apt-get install -y --no-install-recommends \
        ca-certificates \
        \
# .NET Core dependencies
        libc6 \
        libcurl3 \
        libgcc1 \
        libgssapi-krb5-2 \
        libicu57 \
        liblttng-ust0 \
        libssl1.0.2 \
        libstdc++6 \
        libunwind8 \
        libuuid1 \
        zlib1g \
    && rm -rf /var/lib/apt/lists/*

What should you use it for?

The microsoft/dotnet:2.0.3-runtime-deps image is the basis for subsequent .NET Core runtime installations. Its main use is for when you are building self-contained deployments (SCDs). SCDs are apps that are packaged with the .NET Core runtime for the specific host, so you don't need to install the .NET Core runtime separately. You do still need the native dependencies though, so this is the image you need.

Note that you can't build SCDs with this image. For that, you'll need one of the SDK-based images described later in the post, such as microsoft/dotnet:2.0.3-sdk.
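
For reference, a self-contained deployment is produced with one of the SDK images by publishing for a specific runtime identifier. The linux-x64 RID below is just an example, and depending on the tooling version the RID may also need to be listed in a <RuntimeIdentifiers> element in the csproj:

dotnet publish -c Release -r linux-x64 -o ./publish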

2. microsoft/dotnet:2.0.3-runtime

  • Contains .NET Core runtime
  • Use for running .NET Core console apps

The next image is one you'll use a lot if you're running .NET Core console apps in production. microsoft/dotnet:2.0.3-runtime builds on the runtime-deps image and installs the .NET Core runtime. It downloads the tar ball using curl, verifies the hash, unpacks it, sets up symlinks, and removes the installer files.

You can view the Dockerfile for the image here:

FROM microsoft/dotnet:2.0-runtime-deps

RUN apt-get update \  
    && apt-get install -y --no-install-recommends \
        curl \
    && rm -rf /var/lib/apt/lists/*

# Install .NET Core
ENV DOTNET_VERSION 2.0.3  
ENV DOTNET_DOWNLOAD_URL https://dotnetcli.blob.core.windows.net/dotnet/Runtime/$DOTNET_VERSION/dotnet-runtime-$DOTNET_VERSION-linux-x64.tar.gz  
ENV DOTNET_DOWNLOAD_SHA 4FB483CAE0C6147FBF13C278FE7FC23923B99CD84CF6E5F96F5C8E1971A733AB968B46B00D152F4C14521561387DD28E6E64D07CB7365D43A17430905DA6C1C0

RUN curl -SL $DOTNET_DOWNLOAD_URL --output dotnet.tar.gz \  
    && echo "$DOTNET_DOWNLOAD_SHA dotnet.tar.gz" | sha512sum -c - \
    && mkdir -p /usr/share/dotnet \
    && tar -zxf dotnet.tar.gz -C /usr/share/dotnet \
    && rm dotnet.tar.gz \
    && ln -s /usr/share/dotnet/dotnet /usr/bin/dotnet

What should you use it for?

The microsoft/dotnet:2.0.3-runtime image contains the .NET Core runtime, so you can use it to run any .NET Core 2.0 app such as a console app. You can't use this image to build your app, only to run it.

If you're running a self-contained app then you would be better served by the runtime-deps image. Similarly, if you're running an ASP.NET Core app, then you should use the microsoft/aspnetcore:2.0.3 image instead (up next), as it contains optimisations for running ASP.NET Core apps.

3. microsoft/aspnetcore:2.0.3

  • Contains .NET Core runtime and the ASP.NET Core runtime store
  • Use for running ASP.NET Core apps
  • Sets the default URL for apps to http://+:80

.NET Core 2.0 introduced a new feature called the runtime store. This is conceptually similar to the Global Assembly Cache (GAC) from .NET Framework days, though without some of the issues.

Effectively, you can install certain NuGet packages globally by adding them to a Runtime Store. ASP.NET Core does this by registering all of the Microsoft NuGet packages that make up the Microsoft.AspNetCore.All metapackage with the runtime store (as described in this post). When your app is published, it doesn't need to include any of the dlls that are in the store. This makes your published output smaller, and improves layer caching for Docker images.

The microsoft/aspnetcore:2.0.3 image builds on the previous .NET Core runtime image, and simply installs the ASP.NET Core runtime store. It also sets the default listening URL for apps to port 80 by setting the ASPNETCORE_URLS environment variable.

You can view the Dockerfile for the image here:

FROM microsoft/dotnet:2.0.3-runtime-stretch

# set up network
ENV ASPNETCORE_URLS http://+:80  
ENV ASPNETCORE_PKG_VERSION 2.0.3

# set up the runtime store
RUN for version in '2.0.0' '2.0.3'; do \  
        curl -o /tmp/runtimestore.tar.gz https://dist.asp.net/runtimestore/$version/linux-x64/aspnetcore.runtimestore.tar.gz \
        && export DOTNET_HOME=$(dirname $(readlink $(which dotnet))) \
        && tar -x -C $DOTNET_HOME -f /tmp/runtimestore.tar.gz \
        && rm /tmp/runtimestore.tar.gz; \
    done

What should you use it for?

Fairly obviously, for running ASP.NET Core apps! This is the image to use if you've published an ASP.NET Core app and you need to run it in production. It has the smallest possible footprint (ignoring the Alpine-based images for now!) but all the necessary framework components and optimisations. You can't use it for building your app though, as it doesn't have the SDK installed. For that, you need one of the upcoming images.
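
For example, assuming you've already published an app and baked it into an image called my-aspnetcore-app on this base, running it and mapping the default port 80 to the host might look like the following:

docker run -d -p 8080:80 --name my-app my-aspnetcore-app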

4. microsoft/dotnet:2.0.3-sdk

  • Contains .NET Core SDK
  • Use for building .NET Core apps
  • Can also be used for building ASP.NET Core apps

We're onto the first of the .NET Core SDK images now. These images can all be used for building your apps. Unlike all the runtime images which use debian:stretch as the base, the microsoft/dotnet:2.0.3-sdk image (and those that build on it) use the buildpack-deps:stretch-scm image. According to the Docker Hub description, the buildpack image:

…includes a large number of "development header" packages needed by various things like Ruby Gems, PyPI modules, etc.…a majority of arbitrary gem install / npm install / pip install should be successful without additional header/development packages…

The stretch-scm tag also ensures common tools like curl, git, and ca-certificates are installed.

The microsoft/dotnet:2.0.3-sdk image installs the native prerequisites (as you saw in the microsoft/dotnet:2.0.3-runtime-deps image), and then installs the .NET Core SDK. Finally, it warms up the NuGet cache by running dotnet new in an empty folder, which makes subsequent dotnet operations in derived images faster.

You can view the Dockerfile for the image here:

FROM buildpack-deps:stretch-scm

# Install .NET CLI dependencies
RUN apt-get update \  
    && apt-get install -y --no-install-recommends \
        libc6 \
        libcurl3 \
        libgcc1 \
        libgssapi-krb5-2 \
        libicu57 \
        liblttng-ust0 \
        libssl1.0.2 \
        libstdc++6 \
        libunwind8 \
        libuuid1 \
        zlib1g \
    && rm -rf /var/lib/apt/lists/*

# Install .NET Core SDK
ENV DOTNET_SDK_VERSION 2.0.3  
ENV DOTNET_SDK_DOWNLOAD_URL https://dotnetcli.blob.core.windows.net/dotnet/Sdk/$DOTNET_SDK_VERSION/dotnet-sdk-$DOTNET_SDK_VERSION-linux-x64.tar.gz  
ENV DOTNET_SDK_DOWNLOAD_SHA 74A0741D4261D6769F29A5F1BA3E8FF44C79F17BBFED5E240C59C0AA104F92E93F5E76B1A262BDFAB3769F3366E33EA47603D9D725617A75CAD839274EBC5F2B

RUN curl -SL $DOTNET_SDK_DOWNLOAD_URL --output dotnet.tar.gz \  
    && echo "$DOTNET_SDK_DOWNLOAD_SHA dotnet.tar.gz" | sha512sum -c - \
    && mkdir -p /usr/share/dotnet \
    && tar -zxf dotnet.tar.gz -C /usr/share/dotnet \
    && rm dotnet.tar.gz \
    && ln -s /usr/share/dotnet/dotnet /usr/bin/dotnet

# Trigger the population of the local package cache
ENV NUGET_XMLDOC_MODE skip  
RUN mkdir warmup \  
    && cd warmup \
    && dotnet new \
    && cd .. \
    && rm -rf warmup \
    && rm -rf /tmp/NuGetScratch

What should you use it for?

This image has the .NET Core SDK installed, so you can use it for building your .NET Core apps. You can build .NET Core console apps or ASP.NET Core apps, though in the latter case you may prefer one of the alternative images coming up in this post.

Technically you can also use this image for running your apps in production as the SDK includes the runtime, but you shouldn't do that in practice. As discussed at the beginning of this post, optimising your Docker images in production is important for performance reasons, but the microsoft/dotnet:2.0.3-sdk image weighs in at a hefty 1.68GB, compared to the 219MB for the microsoft/dotnet:2.0.3-runtime image.

To get the best of both worlds, you should use this image (or one of the later images) to build your app, and one of the runtime images to run your app in production. You can see how to do this using Docker multi-stage builds in Scott Hanselman's post here.
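
As a rough sketch of that pattern (MyApp and the paths are illustrative, not taken from the post), a multi-stage Dockerfile builds with the SDK image and copies only the published output into a runtime image:

# build stage - uses the SDK image
FROM microsoft/dotnet:2.0.3-sdk AS builder
WORKDIR /sln
COPY . .
RUN dotnet publish ./src/MyApp/MyApp.csproj -c Release -o /sln/publish

# runtime stage - only the published output ends up in the final image
FROM microsoft/aspnetcore:2.0.3
WORKDIR /app
COPY --from=builder /sln/publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]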

5. microsoft/aspnetcore-build:2.0.3

  • Contains .NET Core SDK
  • Has warmed-up package cache for Microsoft.AspNetCore.All package
  • Installs Node, Bower and Gulp
  • Use for building ASP.NET Core apps

You can happily build ASP.NET Core apps using the microsoft/dotnet:2.0.3-sdk package, but the microsoft/aspnetcore-build:2.0.3 image that builds on it includes a number of additional layers that are often required.

First, it installs Node, Bower, and Gulp into the image. These tools are (were?) commonly used for building client-side apps, so this image makes them available globally.

Finally, the image warms up the package cache for all the common ASP.NET Core packages found in the Microsoft.AspNetCore.All metapackage, so that dotnet restore will be faster for apps based on this image. It does this by copying a .csproj file into a temporary folder and running dotnet restore. The csproj simply references the metapackage (with a version passed via an environment variable in the Dockerfile):

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
    <RuntimeIdentifiers>debian.8-x64</RuntimeIdentifiers>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.All" Version="$(ASPNETCORE_PKG_VERSION)" />
  </ItemGroup>

</Project>  

You can view the Dockerfile for the image here:

FROM microsoft/dotnet:2.0.3-sdk-stretch

# set up environment
ENV ASPNETCORE_URLS http://+:80  
ENV NODE_VERSION 6.11.3  
ENV ASPNETCORE_PKG_VERSION 2.0.3

RUN set -x \  
    && apt-get update && apt-get install -y gnupg dirmngr --no-install-recommends \
    && rm -rf /var/lib/apt/lists/*

# Install keys required for node
RUN set -ex \  
  && for key in \
    9554F04D7259F04124DE6B476D5A82AC7E37093B \
    94AE36675C464D64BAFA68DD7434390BDBE9B9C5 \
    0034A06D9D9B0064CE8ADF6BF1747F4AD2306D93 \
    FD3A5288F042B6850C66B31F09FE44734EB7990E \
    71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 \
    DD8F2338BAE7501E3DD5AC78C273792F7D83545D \
    B9AE9905FFD7803F25714661B63B535A4C206CA9 \
    C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 \
  ; do \
    gpg --keyserver pgp.mit.edu --recv-keys "$key" || \
    gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key" || \
    gpg --keyserver keyserver.pgp.com --recv-keys "$key" ; \
  done

# set up node
RUN buildDeps='xz-utils' \  
    && set -x \
    && apt-get update && apt-get install -y $buildDeps --no-install-recommends \
    && rm -rf /var/lib/apt/lists/* \
    && curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.xz" \
    && curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
    && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
    && grep " node-v$NODE_VERSION-linux-x64.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
    && tar -xJf "node-v$NODE_VERSION-linux-x64.tar.xz" -C /usr/local --strip-components=1 \
    && rm "node-v$NODE_VERSION-linux-x64.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt \
    && apt-get purge -y --auto-remove $buildDeps \
    && ln -s /usr/local/bin/node /usr/local/bin/nodejs \
    # set up bower and gulp
    && npm install -g bower gulp \
    && echo '{ "allow_root": true }' > /root/.bowerrc

# warmup NuGet package cache
COPY packagescache.csproj /tmp/warmup/  
RUN dotnet restore /tmp/warmup/packagescache.csproj \  
      --source https://api.nuget.org/v3/index.json \
      --verbosity quiet \
    && rm -rf /tmp/warmup/

WORKDIR /  

What should you use it for?

This image will likely be the main image you use to build ASP.NET Core apps. It contains the .NET Core SDK, the same as microsoft/dotnet:2.0.3-sdk, but it also includes the additional dependencies that are sometimes required to build traditional apps with ASP.NET Core, such as Bower and Gulp.

Even if you're not using those dependencies, the additional warming of the package cache is a nice optimisation. If you opt to use the microsoft/dotnet:2.0.3-sdk image instead for building your apps, I suggest you warm up the package cache in your own Dockerfile in a similar way.
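
A minimal sketch of doing that yourself, assuming you keep a packagescache.csproj like the one shown above next to your Dockerfile, might look like this:

FROM microsoft/dotnet:2.0.3-sdk

# the packagescache.csproj references the metapackage via this variable
ENV ASPNETCORE_PKG_VERSION 2.0.3

# restore it to warm up the local NuGet cache, then tidy up
COPY packagescache.csproj /tmp/warmup/
RUN dotnet restore /tmp/warmup/packagescache.csproj \
    && rm -rf /tmp/warmup/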

As before, the SDK image is much larger than the runtime image. You should only use this image for building your apps; use one of the runtime images to deploy your app to production.

6. microsoft/aspnetcore-build:1.0-2.0

  • Contains multiple .NET Core SDKs: 1.0, 1.1, and 2.0
  • Has warmed-up package cache for Microsoft.AspNetCore.All package
  • Installs Node, Bower and Gulp
  • Installs the Docker SDK for building solutions containing a Docker tools project
  • Use for building ASP.NET Core apps or anything really!

The final image is one I wasn't even aware of until I started digging around in the aspnet-docker GitHub repository. It's contained in the (aptly titled) kitchensink folder, and it really does have everything you could need to build your apps!

The microsoft/aspnetcore-build:1.0-2.0 image contains the .NET Core SDK for all current major and minor versions, namely .NET Core 1.0, 1.1, and 2.0. This has the advantage that you should be able to build any of your .NET Core apps, even if they are tied to a specific .NET Core version using a global.json file.
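
For reference, that pinning is done with a global.json file at the root of the solution. The version below is purely illustrative; it has to correspond to an SDK actually available on the machine or image doing the build:

{
  "sdk": {
    "version": "1.0.4"
  }
}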

Just as for the microsoft/aspnetcore-build:2.0.3 image, Node, Bower, and Gulp are installed, and the package cache for Microsoft.AspNetCore.All is warmed up. Additionally, the kitchensink image installs the Microsoft.Docker.Sdk that is required when building a project that has Docker tools enabled (through Visual Studio).

You can view the Dockerfile for the image here:

FROM microsoft/dotnet:2.0.3-sdk-stretch

# set up environment
ENV ASPNETCORE_URLS http://+:80  
ENV NODE_VERSION 6.11.3  
ENV NETCORE_1_0_VERSION 1.0.8  
ENV NETCORE_1_1_VERSION 1.1.5  
ENV ASPNETCORE_PKG_VERSION 2.0.3

RUN set -x \  
    && apt-get update && apt-get install -y gnupg dirmngr --no-install-recommends \
    && rm -rf /var/lib/apt/lists/*

RUN set -ex \  
  && for key in \
    9554F04D7259F04124DE6B476D5A82AC7E37093B \
    94AE36675C464D64BAFA68DD7434390BDBE9B9C5 \
    0034A06D9D9B0064CE8ADF6BF1747F4AD2306D93 \
    FD3A5288F042B6850C66B31F09FE44734EB7990E \
    71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 \
    DD8F2338BAE7501E3DD5AC78C273792F7D83545D \
    B9AE9905FFD7803F25714661B63B535A4C206CA9 \
    C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 \
  ; do \
    gpg --keyserver pgp.mit.edu --recv-keys "$key" || \
    gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key" || \
    gpg --keyserver keyserver.pgp.com --recv-keys "$key" ; \
  done

# set up node
RUN buildDeps='xz-utils' \  
    && set -x \
    && apt-get update && apt-get install -y $buildDeps --no-install-recommends \
    && rm -rf /var/lib/apt/lists/* \
    && curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.xz" \
    && curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
    && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
    && grep " node-v$NODE_VERSION-linux-x64.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
    && tar -xJf "node-v$NODE_VERSION-linux-x64.tar.xz" -C /usr/local --strip-components=1 \
    && rm "node-v$NODE_VERSION-linux-x64.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt \
    && apt-get purge -y --auto-remove $buildDeps \
    && ln -s /usr/local/bin/node /usr/local/bin/nodejs \
    # set up bower and gulp
    && npm install -g bower gulp \
    && echo '{ "allow_root": true }' > /root/.bowerrc

# Install the 1.x runtimes
RUN for url in \  
      "https://dotnetcli.blob.core.windows.net/dotnet/Runtime/${NETCORE_1_0_VERSION}/dotnet-debian-x64.${NETCORE_1_0_VERSION}.tar.gz" \
      "https://dotnetcli.blob.core.windows.net/dotnet/Runtime/${NETCORE_1_1_VERSION}/dotnet-debian-x64.${NETCORE_1_1_VERSION}.tar.gz"; \
    do \
      echo "Downloading and installing from $url" \
      && curl -SL $url --output /tmp/dotnet.tar.gz \
      && mkdir -p /usr/share/dotnet \
      && tar -zxf /tmp/dotnet.tar.gz -C /usr/share/dotnet \
      && rm /tmp/dotnet.tar.gz; \
    done

# Add Docker SDK for when building a solution that has the Docker tools project.
RUN curl -H 'Cache-Control: no-cache' -o /tmp/Microsoft.Docker.Sdk.tar.gz https://distaspnet.blob.core.windows.net/sdk/Microsoft.Docker.Sdk.tar.gz \  
    && cd /usr/share/dotnet/sdk/${DOTNET_SDK_VERSION}/Sdks \
    && tar xf /tmp/Microsoft.Docker.Sdk.tar.gz \
    && rm /tmp/Microsoft.Docker.Sdk.tar.gz

# copy the ASP.NET packages manifest
COPY packagescache.csproj /tmp/warmup/

# warm up package cache
RUN dotnet restore /tmp/warmup/packagescache.csproj \  
      --source https://api.nuget.org/v3/index.json \
      --verbosity quiet \
    && rm -rf /tmp/warmup/

WORKDIR /  

What should you use it for?

Use this image to build ASP.NET Core (or .NET Core) apps that require multiple .NET Core runtimes or that contain Docker tools projects.

Alternatively, you could use this image if you just want to have a single base image for building all of your .NET Core apps, regardless of the SDK version (instead of using microsoft/aspnetcore-build:2.0.3 for 2.0 projects and microsoft/aspnetcore-build:1.1.5 for 1.1 projects for example).

Summary

In this post I walked through some of the common Docker images used in .NET Core development. Each of the images has a set of specific use-cases, and it's important you use the right one for your requirements.


Anuraj Parameswaran: How to launch different browsers from VS Code for debugging ASP.NET Core

This post is about launching different browsers from VS Code while debugging ASP.NET Core. By default, when debugging an ASP.NET Core application, VS Code will launch the default browser. There is a way to choose the browser you would like to use. Here is the code snippet which will add different debug configurations to VS Code.


Damien Bowden: Using an EF Core database for the IdentityServer4 configuration data

This article shows how to implement a database store for the IdentityServer4 configurations for the Client, ApiResource and IdentityResource settings using Entity Framework Core and SQLite. This could be used if you need to create clients or resources dynamically for the STS, or if you need to deploy the STS to multiple instances, for example using Service Fabric. To make it scalable, you need to remove all session data and configuration data from the STS instances and share these in a shared resource; otherwise it will only run smoothly as a single instance.

Information about IdentityServer4 deployment can be found here:
http://docs.identityserver.io/en/release/topics/deployment.html

Code: https://github.com/damienbod/AspNetCoreIdentityServer4Persistence

Implementing the IClientStore

By implementing the IClientStore, you can load your STS client data from anywhere you want. This example uses an Entity Framework Core Context, to load the data from a SQLite database.

using IdentityServer4.Models;
using IdentityServer4.Stores;
using Microsoft.Extensions.Logging;
using System;
using System.Linq;
using System.Threading.Tasks;

namespace AspNetCoreIdentityServer4Persistence.ConfigurationStore
{
    public class ClientStore : IClientStore
    {
        private readonly ConfigurationStoreContext _context;
        private readonly ILogger _logger;

        public ClientStore(ConfigurationStoreContext context, ILoggerFactory loggerFactory)
        {
            _context = context;
            _logger = loggerFactory.CreateLogger("ClientStore");
        }

        public Task<Client> FindClientByIdAsync(string clientId)
        {
            var client = _context.Clients.First(t => t.ClientId == clientId);
            client.MapDataFromEntity();
            return Task.FromResult(client.Client);
        }
    }
}

The ClientEntity is used to save or retrieve the data from the database. Because the IdentityServer4 Client class cannot be saved directly using Entity Framework Core, a wrapper class is used which saves the Client object as a Json string. The entity class implements helper methods which parse the Json string to/from the Client type used by IdentityServer4.

using IdentityServer4.Models;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
using System.Linq;
using System.Threading.Tasks;

namespace AspNetCoreIdentityServer4Persistence.ConfigurationStore
{
    public class ClientEntity
    {
        public string ClientData { get; set; }

        [Key]
        public string ClientId { get; set; }

        [NotMapped]
        public Client Client { get; set; }

        public void AddDataToEntity()
        {
            ClientData = JsonConvert.SerializeObject(Client);
            ClientId = Client.ClientId;
        }

        public void MapDataFromEntity()
        {
            Client = JsonConvert.DeserializeObject<Client>(ClientData);
            ClientId = Client.ClientId;
        }
    }
}

The ConfigurationStoreContext implements the Entity Framework Core DbContext used to access the SQLite database. This could easily be changed to any other database supported by Entity Framework Core.

using IdentityServer4.Models;
using Microsoft.EntityFrameworkCore;

namespace AspNetCoreIdentityServer4Persistence.ConfigurationStore
{
    public class ConfigurationStoreContext : DbContext
    {
        public ConfigurationStoreContext(DbContextOptions<ConfigurationStoreContext> options) : base(options)
        { }

        public DbSet<ClientEntity> Clients { get; set; }
        public DbSet<ApiResourceEntity> ApiResources { get; set; }
        public DbSet<IdentityResourceEntity> IdentityResources { get; set; }
        

        protected override void OnModelCreating(ModelBuilder builder)
        {
            builder.Entity<ClientEntity>().HasKey(m => m.ClientId);
            builder.Entity<ApiResourceEntity>().HasKey(m => m.ApiResourceName);
            builder.Entity<IdentityResourceEntity>().HasKey(m => m.IdentityResourceName);
            base.OnModelCreating(builder);
        }
    }
}

Implementing the IResourceStore

The IResourceStore interface is used to save or access the ApiResource configurations and the IdentityResource data in the IdentityServer4 application. This is implemented in a similar way to the IClientStore.

using IdentityServer4.Models;
using IdentityServer4.Stores;
using Microsoft.Extensions.Logging;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace AspNetCoreIdentityServer4Persistence.ConfigurationStore
{
    public class ResourceStore : IResourceStore
    {
        private readonly ConfigurationStoreContext _context;
        private readonly ILogger _logger;

        public ResourceStore(ConfigurationStoreContext context, ILoggerFactory loggerFactory)
        {
            _context = context;
            _logger = loggerFactory.CreateLogger("ResourceStore");
        }

        public Task<ApiResource> FindApiResourceAsync(string name)
        {
            var apiResource = _context.ApiResources.First(t => t.ApiResourceName == name);
            apiResource.MapDataFromEntity();
            return Task.FromResult(apiResource.ApiResource);
        }

        public Task<IEnumerable<ApiResource>> FindApiResourcesByScopeAsync(IEnumerable<string> scopeNames)
        {
            if (scopeNames == null) throw new ArgumentNullException(nameof(scopeNames));


            var apiResources = new List<ApiResource>();
            var apiResourcesEntities = from i in _context.ApiResources
                                            where scopeNames.Contains(i.ApiResourceName)
                                            select i;

            foreach (var apiResourceEntity in apiResourcesEntities)
            {
                apiResourceEntity.MapDataFromEntity();

                apiResources.Add(apiResourceEntity.ApiResource);
            }

            return Task.FromResult(apiResources.AsEnumerable());
        }

        public Task<IEnumerable<IdentityResource>> FindIdentityResourcesByScopeAsync(IEnumerable<string> scopeNames)
        {
            if (scopeNames == null) throw new ArgumentNullException(nameof(scopeNames));

            var identityResources = new List<IdentityResource>();
            var identityResourcesEntities = from i in _context.IdentityResources
                             where scopeNames.Contains(i.IdentityResourceName)
                           select i;

            foreach (var identityResourceEntity in identityResourcesEntities)
            {
                identityResourceEntity.MapDataFromEntity();

                identityResources.Add(identityResourceEntity.IdentityResource);
            }

            return Task.FromResult(identityResources.AsEnumerable());
        }

        public Task<Resources> GetAllResourcesAsync()
        {
            var apiResourcesEntities = _context.ApiResources.ToList();
            var identityResourcesEntities = _context.IdentityResources.ToList();

            var apiResources = new List<ApiResource>();
            var identityResources= new List<IdentityResource>();

            foreach (var apiResourceEntity in apiResourcesEntities)
            {
                apiResourceEntity.MapDataFromEntity();

                apiResources.Add(apiResourceEntity.ApiResource);
            }

            foreach (var identityResourceEntity in identityResourcesEntities)
            {
                identityResourceEntity.MapDataFromEntity();

                identityResources.Add(identityResourceEntity.IdentityResource);
            }

            var result = new Resources(identityResources, apiResources);
            return Task.FromResult(result);
        }
    }
}

The IdentityResourceEntity class is used to persist the IdentityResource data.

using IdentityServer4.Models;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
using System.Linq;
using System.Threading.Tasks;

namespace AspNetCoreIdentityServer4Persistence.ConfigurationStore
{
    public class IdentityResourceEntity
    {
        public string IdentityResourceData { get; set; }

        [Key]
        public string IdentityResourceName { get; set; }

        [NotMapped]
        public IdentityResource IdentityResource { get; set; }

        public void AddDataToEntity()
        {
            IdentityResourceData = JsonConvert.SerializeObject(IdentityResource);
            IdentityResourceName = IdentityResource.Name;
        }

        public void MapDataFromEntity()
        {
            IdentityResource = JsonConvert.DeserializeObject<IdentityResource>(IdentityResourceData);
            IdentityResourceName = IdentityResource.Name;
        }
    }
}

The ApiResourceEntity is used to persist the ApiResource data.

using IdentityServer4.Models;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
using System.Linq;
using System.Threading.Tasks;

namespace AspNetCoreIdentityServer4Persistence.ConfigurationStore
{
    public class ApiResourceEntity
    {
        public string ApiResourceData { get; set; }

        [Key]
        public string ApiResourceName { get; set; }

        [NotMapped]
        public ApiResource ApiResource { get; set; }

        public void AddDataToEntity()
        {
            ApiResourceData = JsonConvert.SerializeObject(ApiResource);
            ApiResourceName = ApiResource.Name;
        }

        public void MapDataFromEntity()
        {
            ApiResource = JsonConvert.DeserializeObject<ApiResource>(ApiResourceData);
            ApiResourceName = ApiResource.Name;
        }
    }
}

Adding the stores to the IdentityServer4 MVC startup class

The created stores can now be used and added to the Startup class of the ASP.NET Core MVC host project for IdentityServer4. The AddDbContext method is used to set up the Entity Framework Core data access, and the AddResourceStore as well as AddClientStore methods are used to add the configuration data to IdentityServer4. The two interfaces and their implementations also need to be registered with the IoC container.

The default AddInMemory… extension methods are removed.

public void ConfigureServices(IServiceCollection services)
{
	services.AddDbContext<ConfigurationStoreContext>(options =>
		options.UseSqlite(
			Configuration.GetConnectionString("ConfigurationStoreConnection"),
			b => b.MigrationsAssembly("AspNetCoreIdentityServer4")
		)
	);

	...

	services.AddTransient<IClientStore, ClientStore>();
	services.AddTransient<IResourceStore, ResourceStore>();

	services.AddIdentityServer()
		.AddSigningCredential(cert)
		.AddResourceStore<ResourceStore>()
		.AddClientStore<ClientStore>()
		.AddAspNetIdentity<ApplicationUser>()
		.AddProfileService<IdentityWithAdditionalClaimsProfileService>();

}

Seeding the database

A simple .NET Core console application is used to seed the STS server with data. This class creates the different Client, ApiResource and IdentityResource entries as required. The data is added directly to the database using Entity Framework Core. If this were a micro service, you would implement an API on the STS server which adds, removes, and updates the data as required.

static void Main(string[] args)
{
	try
	{
		var currentDirectory = Directory.GetCurrentDirectory();

		var configuration = new ConfigurationBuilder()
			.AddJsonFile($"{currentDirectory}\\..\\AspNetCoreIdentityServer4\\appsettings.json")
			.Build();

		var configurationStoreConnection = configuration.GetConnectionString("ConfigurationStoreConnection");

		var optionsBuilder = new DbContextOptionsBuilder<ConfigurationStoreContext>();
		optionsBuilder.UseSqlite(configurationStoreConnection);

		using (var configurationStoreContext = new ConfigurationStoreContext(optionsBuilder.Options))
		{
			configurationStoreContext.AddRange(Config.GetClients());
			configurationStoreContext.AddRange(Config.GetIdentityResources());
			configurationStoreContext.AddRange(Config.GetApiResources());
			configurationStoreContext.SaveChanges();
		}
	}
	catch (Exception e)
	{
		Console.WriteLine(e.Message);
	}

	Console.ReadLine();
}

The static Config class just adds the data like the IdentityServer4 examples.
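
The Config class itself isn't shown in the post. A minimal sketch of what GetClients could look like, using the ClientEntity wrapper from above and purely illustrative client values, might be:

using System.Collections.Generic;
using System.Linq;
using IdentityServer4.Models;
using AspNetCoreIdentityServer4Persistence.ConfigurationStore;

public static class Config
{
    public static IEnumerable<ClientEntity> GetClients()
    {
        // illustrative client only - the real values depend on your applications
        var clients = new List<Client>
        {
            new Client
            {
                ClientId = "exampleImplicitClient",
                AllowedGrantTypes = GrantTypes.Implicit,
                AllowedScopes = { "openid", "profile" },
                RedirectUris = { "https://localhost:44311/signin-oidc" }
            }
        };

        // wrap each IdentityServer4 Client in a ClientEntity so the Json payload
        // (ClientData) and the key (ClientId) are populated before saving
        return clients.Select(client =>
        {
            var entity = new ClientEntity { Client = client };
            entity.AddDataToEntity();
            return entity;
        });
    }

    // GetIdentityResources() and GetApiResources() would follow the same pattern,
    // using IdentityResourceEntity and ApiResourceEntity respectively.
}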


Now the applications run using the configuration data stored in an Entity Framework Core supported database.

Note:

This post shows how just the configuration data can be set up for IdentityServer4. To make it scale, you also need to implement the IPersistedGrantStore and CORS for each client in the database. A cache solution might also be required.

IdentityServer4 provides a full solution and example: IdentityServer4.EntityFramework

Links:

http://docs.identityserver.io/en/release/topics/deployment.html

https://damienbod.com/2016/01/07/experiments-with-entity-framework-7-and-asp-net-5-mvc-6/

https://docs.microsoft.com/en-us/ef/core/get-started/netcore/new-db-sqlite

https://docs.microsoft.com/en-us/ef/core/

http://docs.identityserver.io/en/release/reference/ef.html

https://github.com/IdentityServer/IdentityServer4.EntityFramework

https://elanderson.net/2017/07/identity-server-using-entity-framework-core-for-configuration-data/

http://docs.identityserver.io/en/release/quickstarts/8_entity_framework.html


Anuraj Parameswaran: Connecting Localdb using Sql Server Management Studio

This post is about connecting to and managing SQL Server LocalDB instances with SQL Server Management Studio. While working on an ASP.NET Core web application, I was using LocalDB, but when I tried to connect to it to modify the data, I couldn't find it. Later, after exploring a little, I found one way of doing it.


Anuraj Parameswaran: Runtime bundling and Minification in ASP.NET Core with Smidge

This post is about enabling bundling and minification in ASP.NET Core with Smidge. Long back I wrote a post about bundling and minification in ASP.NET Core, but that happened at compile time or while publishing the app. Smidge helps you to enable bundling and minification at runtime, similar to earlier versions of ASP.NET MVC.


Dominick Baier: Sponsoring IdentityServer

Brock and I have been working on free identity & access control related libraries since 2009. This all started as a hobby project, and I can very well remember the day when I said to Brock that we can only really claim to understand the protocols if we implement them ourselves. That’s what we did.

We are now at a point where the IdentityServer OSS project has reached both enough significance and complexity that we need to find a sustainable way to manage it. This includes dealing with issues, questions and bug reports as well as feature and pull requests.

That’s why we decided to set up a sponsorship page on Patreon. So if you like the project and want to support us – or even more important, if you work for a company that relies on IdentityServer, please consider supporting us. This will allow us to be able to maintain this level of commitment.

Thank you!


Anuraj Parameswaran: Unit Testing ASP.NET Core Tag Helper

This post is about unit testing an ASP.NET Core tag helper. Tag Helpers enable server-side code to participate in creating and rendering HTML elements in Razor files. Unlike HTML helpers, Tag Helpers reduce the explicit transitions between HTML and C# in Razor views.


Anuraj Parameswaran: Implementing feature toggle in ASP.NET Core

This post is about implementing feature toggles in ASP.NET Core. A feature toggle (also feature switch, feature flag, feature flipper, conditional feature, etc.) is a technique in software development that attempts to provide an alternative to maintaining multiple source-code branches (known as feature branches), such that a feature can be tested even before it is completed and ready for release. A feature toggle is used to hide, enable or disable a feature at run time. For example, during the development process, a developer can enable the feature for testing and disable it for other users.


Damien Bowden: Sending Direct Messages using SignalR with ASP.NET core and Angular

This article shows how SignalR could be used to send direct messages between different clients using ASP.NET Core to host the SignalR Hub and Angular to implement the clients.

Code: https://github.com/damienbod/AspNetCoreAngularSignalRSecurity

Posts in this series

History

2018-03-15 Updated signalr Microsoft.AspNetCore.SignalR 1.0.0-preview1-final, Angular 5.2.8, @aspnet/signalr 1.0.0-preview1-update1

When the application is started, different clients can log in using an email, if already registered, and can send direct messages from one SignalR client to another using the email of the signed-in user. All messages are sent using a JWT token which is used to validate the identity.

The latest Microsoft.AspNetCore.SignalR Nuget package can be added to the ASP.NET Core project in the csproj file, or by using the Visual Studio Nuget package manager to add the package.

<PackageReference Include="Microsoft.AspNetCore.SignalR" Version="1.0.0-preview1-final" />

A single SignalR Hub is used to add the logic to send the direct messages between the clients. The Hub is protected using the bearer token authentication scheme which is defined in the Authorize filter. A client can leave or join using the Context.User.Identity.Name, which is configured to use the email of the Identity. When the user joins, the connectionId is saved to the in-memory database, which can then be used to send the direct messages. All other online clients are sent a message, with the new user data. The actual client is sent the complete list of existing clients.

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.SignalR;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace ApiServer.SignalRHubs
{
    [Authorize(AuthenticationSchemes = "Bearer")]
    public class UsersDmHub : Hub
    {
        private UserInfoInMemory _userInfoInMemory;

        public UsersDmHub(UserInfoInMemory userInfoInMemory)
        {
            _userInfoInMemory = userInfoInMemory;
        }

        public async Task Leave()
        {
            _userInfoInMemory.Remove(Context.User.Identity.Name);
            await Clients.AllExcept(new List<string> { Context.ConnectionId }).SendAsync(
                   "UserLeft",
                   Context.User.Identity.Name
                   );
        }

        public async Task Join()
        {
            if (!_userInfoInMemory.AddUpdate(Context.User.Identity.Name, Context.ConnectionId))
            {
                // new user: notify all the other connected clients
                await Clients.AllExcept(new List<string> { Context.ConnectionId }).SendAsync(
                    "NewOnlineUser",
                    _userInfoInMemory.GetUserInfo(Context.User.Identity.Name)
                    );
            }
            else
            {
                // existing user joined again
                
            }

            await Clients.Client(Context.ConnectionId).SendAsync(
                "Joined",
                _userInfoInMemory.GetUserInfo(Context.User.Identity.Name)
                );

            await Clients.Client(Context.ConnectionId).SendAsync(
                "OnlineUsers",
                _userInfoInMemory.GetAllUsersExceptThis(Context.User.Identity.Name)
            );
        }

        public Task SendDirectMessage(string message, string targetUserName)
        {
            var userInfoSender = _userInfoInMemory.GetUserInfo(Context.User.Identity.Name);
            var userInfoReceiver = _userInfoInMemory.GetUserInfo(targetUserName);
            return Clients.Client(userInfoReceiver.ConnectionId).SendAsync("SendDM", message, userInfoSender);
        }
    }
}

The UserInfoInMemory class is used as an in-memory database; it is nothing more than a ConcurrentDictionary that manages the online users.

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

namespace ApiServer.SignalRHubs
{
    public class UserInfoInMemory
    {
        private ConcurrentDictionary<string, UserInfo> _onlineUser { get; set; } = new ConcurrentDictionary<string, UserInfo>();

        public bool AddUpdate(string name, string connectionId)
        {
            var userAlreadyExists = _onlineUser.ContainsKey(name);

            var userInfo = new UserInfo
            {
                UserName = name,
                ConnectionId = connectionId
            };

            _onlineUser.AddOrUpdate(name, userInfo, (key, value) => userInfo);

            return userAlreadyExists;
        }

        public void Remove(string name)
        {
            UserInfo userInfo;
            _onlineUser.TryRemove(name, out userInfo);
        }

        public IEnumerable<UserInfo> GetAllUsersExceptThis(string username)
        {
            return _onlineUser.Values.Where(item => item.UserName != username);
        }

        public UserInfo GetUserInfo(string username)
        {
            UserInfo user;
            _onlineUser.TryGetValue(username, out user);
            return user;
        }
    }
}

The UserInfo class is used to save the ConnectionId from the SignalR Hub, and the user name.

namespace ApiServer.SignalRHubs
{
    public class UserInfo
    {
        public string ConnectionId { get; set; }
        public string UserName { get; set; }
    }
}

The JWT bearer authentication is configured in the Startup class to read the token from the URL query parameters.

var tokenValidationParameters = new TokenValidationParameters()
{
	ValidIssuer = "https://localhost:44318/",
	ValidAudience = "dataEventRecords",
	IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes("dataEventRecordsSecret")),
	NameClaimType = "name",
	RoleClaimType = "role", 
};

var jwtSecurityTokenHandler = new JwtSecurityTokenHandler
{
	InboundClaimTypeMap = new Dictionary<string, string>()
};

services.AddAuthentication(IdentityServerAuthenticationDefaults.AuthenticationScheme)
.AddJwtBearer(options =>
{
	options.Authority = "https://localhost:44318/";
	options.Audience = "dataEventRecords";
	options.IncludeErrorDetails = true;
	options.SaveToken = true;
	options.SecurityTokenValidators.Clear();
	options.SecurityTokenValidators.Add(jwtSecurityTokenHandler);
	options.TokenValidationParameters = tokenValidationParameters;
	options.Events = new JwtBearerEvents
	{
		OnMessageReceived = context =>
		{
			if ((context.Request.Path.Value.StartsWith("/loo") || context.Request.Path.Value.StartsWith("/usersdm"))
				&& context.Request.Query.TryGetValue("token", out StringValues token))
			{
				context.Token = token;
			}

			return Task.CompletedTask;
		},
		OnAuthenticationFailed = context =>
		{
			var te = context.Exception; // useful place to inspect token validation failures while debugging
			return Task.CompletedTask;
		}
	};
});

Angular SignalR Client

The Angular SignalR client is implemented using the npm package “@aspnet/signalr” (originally “@aspnet/signalr-client”: “1.0.0-alpha2-final”, since updated to 1.0.0-preview1-update1 as noted in the history above).

An ngrx store is used to manage the state sent to and received from the API. All SignalR messages are sent using the DirectMessagesService Angular service. This service is called from the ngrx effects and dispatches the received information to the reducer of the ngrx store.

import 'rxjs/add/operator/map';
import { Subscription } from 'rxjs/Subscription';

import { HttpHeaders } from '@angular/common/http';
import { Injectable } from '@angular/core';

import { HubConnection } from '@aspnet/signalr';
import { Store } from '@ngrx/store';
import * as directMessagesActions from './store/directmessages.action';
import { OidcSecurityService } from 'angular-auth-oidc-client';
import { OnlineUser } from './models/online-user';

@Injectable()
export class DirectMessagesService {

    private _hubConnection: HubConnection;
    private headers: HttpHeaders;

    isAuthorizedSubscription: Subscription;
    isAuthorized: boolean;

    constructor(
        private store: Store<any>,
        private oidcSecurityService: OidcSecurityService
    ) {
        this.headers = new HttpHeaders();
        this.headers = this.headers.set('Content-Type', 'application/json');
        this.headers = this.headers.set('Accept', 'application/json');

        this.init();
    }

    sendDirectMessage(message: string, userId: string): string {

        this._hubConnection.invoke('SendDirectMessage', message, userId);
        return message;
    }

    leave(): void {
        this._hubConnection.invoke('Leave');
    }

    join(): void {
        this._hubConnection.invoke('Join');
    }

    private init() {
        this.isAuthorizedSubscription = this.oidcSecurityService.getIsAuthorized().subscribe(
            (isAuthorized: boolean) => {
                this.isAuthorized = isAuthorized;
                if (this.isAuthorized) {
                    this.initHub();
                }
            });
        console.log('IsAuthorized:' + this.isAuthorized);
    }

    private initHub() {
        console.log('initHub');
        const token = this.oidcSecurityService.getToken();
        let tokenValue = '';
        if (token !== '') {
            tokenValue = '?token=' + token;
        }
        const url = 'https://localhost:44390/';
        this._hubConnection = new HubConnection(`${url}usersdm${tokenValue}`);

        this._hubConnection.on('NewOnlineUser', (onlineUser: OnlineUser) => {
            console.log('NewOnlineUser received');
            console.log(onlineUser);
            this.store.dispatch(new directMessagesActions.ReceivedNewOnlineUser(onlineUser));
        });

        this._hubConnection.on('OnlineUsers', (onlineUsers: OnlineUser[]) => {
            console.log('OnlineUsers received');
            console.log(onlineUsers);
            this.store.dispatch(new directMessagesActions.ReceivedOnlineUsers(onlineUsers));
        });

        this._hubConnection.on('Joined', (onlineUser: OnlineUser) => {
            console.log('Joined received');
            this.store.dispatch(new directMessagesActions.JoinSent());
            console.log(onlineUser);
        });

        this._hubConnection.on('SendDM', (message: string, onlineUser: OnlineUser) => {
            console.log('SendDM received');
            this.store.dispatch(new directMessagesActions.ReceivedDirectMessage(message, onlineUser));
        });

        this._hubConnection.on('UserLeft', (name: string) => {
            console.log('UserLeft received');
            this.store.dispatch(new directMessagesActions.ReceivedUserLeft(name));
        });

        this._hubConnection.start()
            .then(() => {
                console.log('Hub connection started')
                this._hubConnection.invoke('Join');
            })
            .catch(() => {
                console.log('Error while establishing connection')
            });
    }

}

The DirectMessagesComponent is used to display the data and to send events to the ngrx store, which in turn sends the data to the SignalR server.

import { Component, OnInit, OnDestroy } from '@angular/core';
import { Subscription } from 'rxjs/Subscription';
import { Store } from '@ngrx/store';
import { DirectMessagesState } from '../store/directmessages.state';
import * as directMessagesAction from '../store/directmessages.action';
import { OidcSecurityService } from 'angular-auth-oidc-client';
import { OnlineUser } from '../models/online-user';
import { DirectMessage } from '../models/direct-message';
import { Observable } from 'rxjs/Observable';

@Component({
    selector: 'app-direct-message-component',
    templateUrl: './direct-message.component.html'
})

export class DirectMessagesComponent implements OnInit, OnDestroy {
    public async: any;
    onlineUsers: OnlineUser[];
    onlineUser: OnlineUser;
    directMessages: DirectMessage[];
    selectedOnlineUserName = '';
    dmState$: Observable<DirectMessagesState>;
    dmStateSubscription: Subscription;
    isAuthorizedSubscription: Subscription;
    isAuthorized: boolean;
    connected: boolean;
    message = '';

    constructor(
        private store: Store<any>,
        private oidcSecurityService: OidcSecurityService
    ) {
        this.dmState$ = this.store.select<DirectMessagesState>(state => state.dm.dm);
        this.dmStateSubscription = this.store.select<DirectMessagesState>(state => state.dm.dm)
            .subscribe((o: DirectMessagesState) => {
                this.connected = o.connected;
            });

    }

    public sendDm(): void {
        this.store.dispatch(new directMessagesAction.SendDirectMessageAction(this.message, this.onlineUser.userName));
    }

    ngOnInit() {
        this.isAuthorizedSubscription = this.oidcSecurityService.getIsAuthorized().subscribe(
            (isAuthorized: boolean) => {
                this.isAuthorized = isAuthorized;
                if (this.isAuthorized) {
                }
            });
        console.log('IsAuthorized:' + this.isAuthorized);
    }

    ngOnDestroy(): void {
        this.isAuthorizedSubscription.unsubscribe();
        this.dmStateSubscription.unsubscribe();
    }

    selectChat(onlineuserUserName: string): void {
        this.selectedOnlineUserName = onlineuserUserName
    }

    sendMessage() {
        console.log('send message to:' + this.selectedOnlineUserName + ':' + this.message);
        this.store.dispatch(new directMessagesAction.SendDirectMessageAction(this.message, this.selectedOnlineUserName));
    }

    getUserInfoName(directMessage: DirectMessage) {
        if (directMessage.fromOnlineUser) {
            return directMessage.fromOnlineUser.userName;
        }

        return '';
    }

    disconnect() {
        this.store.dispatch(new directMessagesAction.Leave());
    }

    connect() {
        this.store.dispatch(new directMessagesAction.Join());
    }
}

The Angular HTML template displays the data using Angular Material.

<div class="full-width" *ngIf="isAuthorized">
    <div class="left-navigation-container" >
        <nav>

            <mat-list>
                <mat-list-item *ngFor="let onlineuser of (dmState$|async)?.onlineUsers">
                    <a mat-button (click)="selectChat(onlineuser.userName)">{{onlineuser.userName}}</a>
                </mat-list-item>
            </mat-list>

        </nav>
    </div>
    <div class="column-container content-container">
        <div class="row-container info-bar">
            <h3 style="padding-left: 20px;">{{selectedOnlineUserName}}</h3>
            <a mat-button (click)="sendMessage()" *ngIf="connected && selectedOnlineUserName && selectedOnlineUserName !=='' && message !==''">SEND</a>
            <a mat-button (click)="disconnect()" *ngIf="connected">Disconnect</a>
            <a mat-button (click)="connect()" *ngIf="!connected">Connect</a>
        </div>

        <div class="content" *ngIf="selectedOnlineUserName && selectedOnlineUserName !==''">

            <mat-form-field  style="width:95%">
                <textarea matInput placeholder="your message" [(ngModel)]="message" matTextareaAutosize matAutosizeMinRows="2"
                          matAutosizeMaxRows="5"></textarea>
            </mat-form-field>
           
            <mat-chip-list class="mat-chip-list-stacked">
                <ng-container *ngFor="let directMessage of (dmState$|async)?.directMessages">

                    <ng-container *ngIf="getUserInfoName(directMessage) !== ''">
                        <mat-chip selected="true" style="width:95%">
                            {{getUserInfoName(directMessage)}} {{directMessage.message}}
                        </mat-chip>
                    </ng-container>
                       
                    <ng-container *ngIf="getUserInfoName(directMessage) === ''">
                        <mat-chip style="width:95%">
                            {{getUserInfoName(directMessage)}} {{directMessage.message}}
                        </mat-chip>
                    </ng-container>

                </ng-container>
            </mat-chip-list>

        </div>
    </div>
</div>

Links

https://github.com/aspnet/SignalR

https://github.com/aspnet/SignalR#readme

https://github.com/ngrx

https://www.npmjs.com/package/@aspnet/signalr-client

https://dotnet.myget.org/F/aspnetcore-ci-dev/api/v3/index.json

https://dotnet.myget.org/F/aspnetcore-ci-dev/npm/

https://dotnet.myget.org/feed/aspnetcore-ci-dev/package/npm/@aspnet/signalr-client

https://www.npmjs.com/package/msgpack5


Anuraj Parameswaran: Building multi-tenant applications with ASP.NET Core

This post is about developing multi-tenant applications with ASP.NET Core. Multi-tenancy is an architecture in which a single instance of a software application serves multiple customers. Each customer is called a tenant. Tenants may be given the ability to customize some parts of the application.
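
As a rough sketch of one common approach (not necessarily the one used in the post; the middleware name and subdomain convention below are illustrative), the tenant can be resolved per request from the host name and stashed in HttpContext.Items for later use:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// Hypothetical tenant-resolution middleware: maps the request host
// (e.g. "customer1.example.com") to a tenant identifier.
public class TenantResolutionMiddleware
{
    private readonly RequestDelegate _next;

    public TenantResolutionMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        // The subdomain is treated as the tenant name; a real application
        // would typically look this up in a tenant store instead.
        var host = context.Request.Host.Host;
        var tenant = host.Split('.')[0];

        context.Items["Tenant"] = tenant;

        await _next(context);
    }
}

// Registered in Startup.Configure with:
// app.UseMiddleware<TenantResolutionMiddleware>();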


Anuraj Parameswaran: Seed database in ASP.NET Core

This post is about how to seed the database in ASP.NET Core. You may want to seed the database with initial users for various reasons, for example to have default users and roles added as part of the application. In this post, we will take a look at how to seed the database with default data.
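
A minimal sketch of the usual pattern, assuming ASP.NET Core Identity with the default IdentityUser and IdentityRole types (the role, email and password values below are illustrative, not taken from the post):

using System.Threading.Tasks;
using Microsoft.AspNetCore.Identity;

public static class IdentitySeed
{
    // Called once at startup, e.g. after building the service provider.
    public static async Task SeedAsync(
        UserManager<IdentityUser> userManager,
        RoleManager<IdentityRole> roleManager)
    {
        // Create a default role if it does not exist yet.
        if (!await roleManager.RoleExistsAsync("Admin"))
        {
            await roleManager.CreateAsync(new IdentityRole("Admin"));
        }

        // Create a default admin user if it does not exist yet.
        if (await userManager.FindByEmailAsync("admin@example.com") == null)
        {
            var user = new IdentityUser
            {
                UserName = "admin@example.com",
                Email = "admin@example.com"
            };

            await userManager.CreateAsync(user, "ChangeMe!123");
            await userManager.AddToRoleAsync(user, "Admin");
        }
    }
}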


Anuraj Parameswaran: CI build for an ASP.NET Core app

This post is about setting up continuous integration (CI) process for an ASP.NET Core app using Visual Studio Team Services (VSTS) or Team Foundation Server (TFS).


Anuraj Parameswaran: How to use Angular 4 with ASP.NET MVC 5

This post is about how to use Angular 4 with ASP.NET MVC 5. In one of my existing projects we were using Angular 1.x; due to some plugin compatibility issues, we had to migrate to the latest version of Angular. We couldn’t find any good article that covers the development and deployment aspects of Angular 4 with ASP.NET MVC.


Dominick Baier: Updated Templates for IdentityServer4

We finally found the time to put more work into our templates.

dotnet new is4empty

Creates a minimal IdentityServer4 project without a UI.

dotnet new is4ui

Adds the quickstart UI to the current project (can be added, for example, on top of is4empty).

dotnet new is4inmem

Adds a basic IdentityServer with UI, test users and sample clients and resources. Shows both in-memory code and JSON configuration.

dotnet new is4aspid

Adds a basic IdentityServer that uses ASP.NET Identity for user management.

dotnet new is4ef

Adds a basic IdentityServer that uses Entity Framework for configuration and state management.

Installation

Install with:

dotnet new -i identityserver4.templates

If you need to reset your dotnet new template list to “factory defaults”, use this command:

dotnet new --debug:reinit


Anuraj Parameswaran: Using LESS CSS with ASP.NET Core

This post is about getting started with LESS CSS in ASP.NET Core. Less is a CSS pre-processor, meaning that it extends the CSS language, adding features that allow variables, mixins, functions and many other techniques that make your CSS more maintainable, themeable and extendable. LESS also helps developers avoid code duplication.


Dominick Baier: Missing Claims in the ASP.NET Core 2 OpenID Connect Handler?

The new OpenID Connect handler in ASP.NET Core 2 has a different (aka breaking) behavior when it comes to mapping claims from an OIDC provider to the resulting ClaimsPrincipal.

This is especially confusing and hard to diagnose since there are a couple of moving parts that come together here. Let’s have a look.

You can use my sample OIDC client here to observe the same results.

Mapping of standard claim types to Microsoft proprietary ones
The first annoying thing is that Microsoft still thinks they know what’s best for you by mapping the OIDC standard claims to their proprietary ones.

This can be fixed elegantly by clearing the inbound claim type map on the Microsoft JWT token handler:

JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();

A basic OpenID Connect authentication request
Next – let’s start with a barebones scenario where the client requests the openid scope only.

First confusing thing is that Microsoft pre-populates the Scope collection on the OpenIdConnectOptions with the openid and the profile scope (don’t get me started). This means if you only want to request openid, you first need to clear the Scope collection and then add openid manually.

services.AddAuthentication(options =>
{
    options.DefaultScheme = "Cookies";
    options.DefaultChallengeScheme = "oidc";
})
    .AddCookie("Cookies", options =>
    {
        options.AccessDeniedPath = "/account/denied";
    })
    .AddOpenIdConnect("oidc", options =>
    {
        options.Authority = "https://demo.identityserver.io";
        options.ClientId = "server.hybrid";
        options.ClientSecret = "secret";
        options.ResponseType = "code id_token";
 
        options.SaveTokens = true;
                    
        options.Scope.Clear();
        options.Scope.Add("openid");
                    
        options.TokenValidationParameters = new TokenValidationParameters
        {
            NameClaimType = "name", 
            RoleClaimType = "role"
        };
    });

With the ASP.NET Core v1 handler, this would have returned the following claims: nbf, exp, iss, aud, nonce, iat, c_hash, sid, sub, auth_time, idp, amr.

In V2 we only get sid, sub and idp. What happened?

Microsoft added a new concept to their OpenID Connect handler called ClaimActions. Claim actions allow modifying how claims from an external provider are mapped (or not) to a claim in your ClaimsPrincipal. Looking at the ctor of the OpenIdConnectOptions, you can see that the handler will now skip the following claims by default:

ClaimActions.DeleteClaim("nonce");
ClaimActions.DeleteClaim("aud");
ClaimActions.DeleteClaim("azp");
ClaimActions.DeleteClaim("acr");
ClaimActions.DeleteClaim("amr");
ClaimActions.DeleteClaim("iss");
ClaimActions.DeleteClaim("iat");
ClaimActions.DeleteClaim("nbf");
ClaimActions.DeleteClaim("exp");
ClaimActions.DeleteClaim("at_hash");
ClaimActions.DeleteClaim("c_hash");
ClaimActions.DeleteClaim("auth_time");
ClaimActions.DeleteClaim("ipaddr");
ClaimActions.DeleteClaim("platf");
ClaimActions.DeleteClaim("ver");

If you want to “un-skip” a claim, you need to delete a specific claim action when setting up the handler. The following is the very intuitive syntax to get the amr claim back:

options.ClaimActions.Remove("amr");

If you want to see the raw claims from the token in the principal, you need to clear the whole claims action collection.
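
A one-line sketch of that inside the AddOpenIdConnect options, assuming you really do want every claim from the token passed through unchanged:

// Remove all default claim actions so no claims are skipped or remapped.
options.ClaimActions.Clear();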

Requesting more claims from the OIDC provider
When you are requesting more scopes, e.g. profile or custom scopes that result in more claims, there is another confusing detail to be aware of.

Depending on the response_type in the OIDC protocol, some claims are transferred via the id_token and some via the userinfo endpoint. I wrote about the details here.

So first of all, you need to enable support for the userinfo endpoint in the handler:

options.GetClaimsFromUserInfoEndpoint = true;

If the claims are being returned via userinfo, claim actions are used again to map the claims from the returned JSON document to the principal. The following default settings are used here:

ClaimActions.MapUniqueJsonKey("sub", "sub");
ClaimActions.MapUniqueJsonKey("name", "name");
ClaimActions.MapUniqueJsonKey("given_name", "given_name");
ClaimActions.MapUniqueJsonKey("family_name", "family_name");
ClaimActions.MapUniqueJsonKey("profile", "profile");
ClaimActions.MapUniqueJsonKey("email", "email");

IOW – if you are sending a claim to your client that is not part of the above list, it simply gets ignored, and you need to do an explicit mapping. Let’s say your client application receives the website claim via userinfo (one of the standard OIDC claims, but unfortunately not mapped by Microsoft) – you need to add the mapping yourself:

options.ClaimActions.MapUniqueJsonKey("website", "website");

The same would apply for any other claims you return via userinfo.

I hope this helps. In short – you want to be explicit about your mappings, because I am sure that those default mappings will change at some point in the future which will lead to unexpected behavior in your client applications.

