Andrew Lock: Controller activation and dependency injection in ASP.NET Core MVC

Controller activation and dependency injection in ASP.NET Core MVC

In my last post about disposing IDisposables in ASP.NET Core, Mark Rendle pointed out that MVC controllers are also disposed at the end of a request. At first glance, this may seem obvious given that scoped resources are disposed at the end of a request, but MVC controllers are actually handled in a slightly different way to most services.

In this post, I'll describe how controllers are created in ASP.NET Core MVC using the IControllerActivator, the options available out of the box, and their differences when it comes to dependency injection.

The default IControllerActivator

In ASP.NET Core, when a request is received by the MvcMiddleware, routing - either conventional or attribute routing - is used to select the controller and action method to execute. In order to actually execute the action, the MvcMiddleware must create an instance of the selected controller.

The process of creating the controller depends on a number of different provider and factory classes, culminating in an instance of the IControllerActivator. This interface defines just two methods:

public interface IControllerActivator  
{
    object Create(ControllerContext context);
    void Release(ControllerContext context, object controller);
}

As you can see, the IControllerActivator.Create method is passed a ControllerContext which defines the controller to be created. How the controller is created depends on the particular implementation.

Out of the box, ASP.NET Core uses the DefaultControllerActivator, which uses the TypeActivatorCache to create the controller. The TypeActivatorCache creates instances of objects by calling the constructor of the Type, and attempting to resolve the required constructor argument dependencies from the DI container.

This is an important point. The DefaultControllerActivator doesn't attempt to resolve the Controller instance from the DI container itself, only the Controller's dependencies.
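To make this concrete, here's a minimal sketch of the effective behaviour, written against the public ActivatorUtilities helper rather than the internal TypeActivatorCache the framework actually uses:

public class NewUpControllerActivator : IControllerActivator
{
    public object Create(ControllerContext context)
    {
        var controllerType = context.ActionDescriptor.ControllerTypeInfo.AsType();

        // The controller itself is new-ed up directly; only its constructor
        // arguments are resolved from the request's DI container
        return ActivatorUtilities.CreateInstance(
            context.HttpContext.RequestServices, controllerType);
    }

    public void Release(ControllerContext context, object controller)
        => (controller as IDisposable)?.Dispose();
}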

Example of the default controller activator

To demonstrate this behaviour, I've created a simple MVC application, consisting of a single service and a single controller. The service has a Name property, which is set in the constructor. By default, it will have the value "default".

public class TestService  
{
    public TestService(string name = "default")
    {
        Name = name;
    }

    public string Name { get; }
}

The HomeController for the app takes a dependency on the TestService, and returns the Name property:

public class HomeController : Controller  
{
    private readonly TestService _testService;
    public HomeController(TestService testService)
    {
        _testService = testService;
    }

    public string Index()
    {
        return "TestService.Name: " + _testService.Name;
    }
}

The final piece of the puzzle is the Startup file. Here I register the TestService as a scoped service in the DI container, and set up the MvcMiddleware and services:

public class Startup  
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        services.AddScoped<TestService>();
        services.AddTransient(ctx =>
            new HomeController(new TestService("Non-default value")));
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseMvcWithDefaultRoute();
    }
}

You'll also notice I've defined a factory method for creating an instance of the HomeController. This registers the HomeController type in the DI container, injecting an instance of the TestService with a custom Name property.

So what do you get if you run the app?

[Screenshot: the browser displays "TestService.Name: default"]

As you can see, the TestService.Name property has the default value, indicating the TestService instance has been sourced directly from the DI container. The factory method we registered to create the HomeController has clearly been ignored.

This makes sense when you remember that the DefaultControllerActivator is creating the controller. It doesn't request the HomeController from the DI container, it just requests its constructor dependencies.

Most of the time, using the DefaultControllerActivator will be fine, but sometimes you may want to create your controllers by using the DI container directly. This is especially true when you are using third-party containers with features such as interceptors or decorators.

Luckily, the MVC framework includes an implementation of IControllerActivator to do just this, and even provides a handy extension method to enable it.

The ServiceBasedControllerActivator

As you've seen, the DefaultControllerActivator uses the TypeActivatorCache to create controllers, but MVC includes an alternative implementation, the ServiceBasedControllerActivator, which can be used to directly obtain controllers from the DI container. The implementation itself is trivial:

public class ServiceBasedControllerActivator : IControllerActivator  
{
    public object Create(ControllerContext actionContext)
    {
        var controllerType = actionContext.ActionDescriptor.ControllerTypeInfo.AsType();

        return actionContext.HttpContext.RequestServices.GetRequiredService(controllerType);
    }

    public virtual void Release(ControllerContext context, object controller)
    {
    }
}

You can configure the DI-based activator with the AddControllersAsServices() extension method, when you add the MVC services to your application:

public class Startup  
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc()
                .AddControllersAsServices();

        services.AddScoped<TestService>();
        services.AddTransient(ctx =>
            new HomeController(new TestService("Non-default value")));
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseMvcWithDefaultRoute();
    }
}

With this in place, hitting the home page will create a controller by loading it from the DI container. As we've registered a factory method for the HomeController, our custom TestService configuration will be honoured, and the alternative Name will be used:

[Screenshot: the browser displays "TestService.Name: Non-default value"]

The AddControllersAsServices method does two things - it registers all of the Controllers in your application with the DI container (if they haven't already been registered) and replaces the IControllerActivator registration with the ServiceBasedControllerActivator:

public static IMvcBuilder AddControllersAsServices(this IMvcBuilder builder)  
{
    var feature = new ControllerFeature();
    builder.PartManager.PopulateFeature(feature);

    foreach (var controller in feature.Controllers.Select(c => c.AsType()))
    {
        builder.Services.TryAddTransient(controller, controller);
    }

    builder.Services.Replace(ServiceDescriptor.Transient<IControllerActivator, ServiceBasedControllerActivator>());

    return builder;
}

If you need to do something esoteric, you can always implement IControllerActivator yourself, but I can't think of any reason that these two implementations wouldn't satisfy all your requirements!

Summary

  • By default, the DefaultControllerActivator is configured as the IControllerActivator for ASP.NET Core MVC.
  • The DefaultControllerActivator uses the TypeActivatorCache to create controllers. This creates an instance of the controller, and loads constructor arguments from the DI container.
  • You can use an alternative activator, the ServiceBasedControllerActivator, which loads controllers directly from the DI container. You can configure this activator by using the AddControllersAsServices() extension method on the MvcBuilder instance in Startup.ConfigureServices.


Anuraj Parameswaran: How to Deploy Multiple Apps on Azure WebApps

This post is about deploying multiple applications on an Azure Web App. App Service Web Apps is a fully managed compute platform that is optimized for hosting websites and web applications. This platform-as-a-service (PaaS) offering of Microsoft Azure lets you focus on your business logic while Azure takes care of the infrastructure to run and scale your apps.


Anuraj Parameswaran: Develop and Run Azure Functions locally

This post is about developing, running and debugging Azure Functions locally. Trigger on events in Azure and debug C# and JavaScript functions. Azure Functions is a new service offered by Microsoft. It is an event driven, compute-on-demand experience that extends the existing Azure application platform with capabilities to implement code triggered by events occurring in Azure or third party services, as well as on-premises systems.


Andrew Lock: Four ways to dispose IDisposables in ASP.NET Core

Four ways to dispose IDisposables in ASP.NET Core

One of the most commonly implemented interfaces in .NET is the IDisposable interface. Classes implement IDisposable when they contain references to unmanaged resources, such as window handles, files or sockets. The garbage collector automatically releases memory for managed (i.e. .NET) objects, but it doesn't know how to handle the unmanaged resources. Implementing IDisposable provides a hook, so you can properly clean up those resources when your class is disposed.

This post goes over some of the options available to you for disposing services in ASP.NET Core applications, especially when using the built-in dependency injection container.

For the purposes of this post, I'll use the following class that implements IDisposable in the examples. I'm just writing to the console instead of doing any actual cleanup, but it'll serve our purposes for this post.

public class MyDisposable : IDisposable  
{
    public MyDisposable()
    {
        Console.WriteLine("+ {0} was created", this.GetType().Name);
    }

    public void Dispose()
    {
        Console.WriteLine("- {0} was disposed!", this.GetType().Name);
    }
}

Now let's look at our options.

The simple case - a using statement

The typically suggested approach when consuming an IDisposable in your code is with a using block:

using(var myObject = new MyDisposable())  
{
    // myObject.DoSomething();
}

Using IDisposables in this way ensures they are disposed correctly, whether or not they throw an exception. You could also use a try-finally block instead if necessary:

MyDisposable myObject = null;
try  
{
    myObject = new MyDisposable();
    // myObject.DoSomething();
}
finally  
{
    myObject?.Dispose();
}

You'll often find this pattern when working with files or streams - things that you only need transiently, and are finished with in the same scope. Unfortunately, sometimes this won't suit your situation, and you might need to dispose of the object from somewhere else. Depending on your exact situation, there are a number of other options available to you.

Note: Wherever possible, it's best practice to dispose of objects in the same scope they were created. This will help prevent memory leaks and unexpected file locks in your application, where objects go accidentally undisposed.

Disposing at the end of a request - using RegisterForDispose

When you're working in ASP.NET Core, or any web application, it's very common for your objects to be scoped to a single request. That is, anything you create to handle a request you want to dispose when the request finishes.

There are a number of ways to do this. The most common way is to leverage the DI container which I'll come to in a minute, but sometimes that's not possible, and you need to create the disposable in your own code.

If you are manually creating an instance of an IDisposable, then you can register that disposable with the HttpContext, so that when the request ends, the instance will be disposed automatically. Simply pass the instance to HttpContext.Response.RegisterForDispose:

public class HomeController : Controller  
{
    readonly MyDisposable _disposable;

    public HomeController()
    {
        _disposable = new MyDisposable();
    }

    public IActionResult Index()
    {
        // register the instance so that it is disposed when request ends
        HttpContext.Response.RegisterForDispose(_disposable);
        Console.WriteLine("Running index...");
        return View();
    }
}

In this example, I'm creating the MyDisposable instance in the constructor of the HomeController, and then registering it for disposal in the action method. This is a little bit contrived, but it shows the mechanism at least.

If you execute this action method, you'll see the following:

$ dotnet run
Hosting environment: Development  
Content root path: C:\Users\Sock\Repos\RegisterForDispose  
Now listening on: http://localhost:5000  
Application started. Press Ctrl+C to shut down.  
+ MyDisposable was created
Running index...  
- MyDisposable was disposed!

The HttpContext takes care of disposing our object for us!

Warning: I registered the instance in the action method, instead of in the constructor, because HttpContext might be null in the constructor!

RegisterForDispose is useful when you are new-ing up services in your code. But given that Dispose is only required for classes using unmanaged resources, you'll probably find that more often than not, your IDisposable classes are encapsulated in services that are registered with the DI container.

As Mark Rendle pointed out, the Controller itself will also be disposed at the end of the request, so you can use that mechanism to dispose any objects you create.
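For example, the Controller base class exposes a virtual Dispose(bool) method that you can override to clean up objects the controller created itself. A minimal sketch, reusing the MyDisposable class from above:

public class HomeController : Controller
{
    private readonly MyDisposable _disposable = new MyDisposable();

    public IActionResult Index() => View();

    // Called by the framework when the controller is released at the end
    // of the request
    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            _disposable.Dispose();
        }
        base.Dispose(disposing);
    }
}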

Automatically disposing services - leveraging the built-in DI container

ASP.NET Core comes with a simple, built-in DI container that you can register your services with, as either Transient, Scoped, or Singleton. You can read about it here, so I'll assume you already know how to use it to register your services.

Note, this article only discusses the built-in container - third-party containers might have other rules around automatic disposal of services.

Any service that the built-in container creates in order to fill a dependency, which also implements IDisposable, will be disposed by the container at the appropriate point. So Transient and Scoped instances will be disposed at the end of the request (or more accurately, at the end of a scope), and Singleton services will be disposed when the application is torn down and the ServiceProvider itself is disposed.
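You can see the same behaviour outside of a request by creating a scope manually: services resolved from the scope are disposed along with the scope itself. A small sketch, assuming MyDisposable is registered as a scoped service and serviceProvider is the root IServiceProvider:

// Requires: services.AddScoped<MyDisposable>();
using (var scope = serviceProvider.CreateScope())
{
    var disposable = scope.ServiceProvider.GetRequiredService<MyDisposable>();
    // use the service...
} // the scope is disposed here, and disposable.Dispose() is called with it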

To clarify, that means the provider will dispose any service you register with it, as long as you don't provide a specific instance. For example, I'll create a number of disposable classes:

public class TransientCreatedByContainer : MyDisposable { }
public class ScopedCreatedByFactory : MyDisposable { }
public class SingletonCreatedByContainer : MyDisposable { }
public class SingletonAddedManually : MyDisposable { }

And register each of them in a different way in Startup.ConfigureServices. I am registering

  • TransientCreatedByContainer as a transient
  • ScopedCreatedByFactory as scoped, using a lambda function as a factory
  • SingletonCreatedByContainer as a singleton
  • SingletonAddedManually as a singleton by passing in a specific instance of the object.

public void ConfigureServices(IServiceCollection services)
{
    // other services

    // these will be disposed
    services.AddTransient<TransientCreatedByContainer>();
    services.AddScoped(ctx => new ScopedCreatedByFactory());
    services.AddSingleton<SingletonCreatedByContainer>();

    // this one won't be disposed
    services.AddSingleton(new SingletonAddedManually());
}

Finally, I'll inject an instance of each into the HomeController, so the DI container will create / inject instances as necessary:

public class HomeController : Controller  
{
    public HomeController(
        TransientCreatedByContainer transient,
        ScopedCreatedByFactory scoped,
        SingletonCreatedByContainer createdByContainer,
        SingletonAddedManually manually)
    { }

    public IActionResult Index()
    {
        return View();
    }
}

When I run the application, hit the home page, and then stop the application, I get the following output:

$ dotnet run
+ SingletonAddedManually was created
Content root path: C:\Users\Sock\Repos\RegisterForDispose  
Now listening on: http://localhost:5000  
Application started. Press Ctrl+C to shut down.  
+ TransientCreatedByContainer was created
+ ScopedCreatedByFactory was created
+ SingletonCreatedByContainer was created
- TransientCreatedByContainer was disposed!
- ScopedCreatedByFactory was disposed!
Application is shutting down...  
- SingletonCreatedByContainer was disposed!

There are a few things to note here:

  • SingletonAddedManually was created before the web host had finished being set up, so it writes to the console before logging starts
  • SingletonCreatedByContainer is disposed after we have started shutting down the server
  • SingletonAddedManually is never disposed as it was provided a specific instance!

Note that this behaviour, whereby only objects created by the DI container are disposed, applies to ASP.NET Core 1.1 and above. In ASP.NET Core 1.0, all objects registered with the container are disposed.

Letting the container handle your IDisposables for you is obviously convenient, especially as you are probably registering your services with it anyway! The only obvious hole here is if you need to dispose an object that you create yourself. As I said originally, if possible, you should favour a using statement, but that's not always possible. Luckily, ASP.NET Core provides hooks into the application lifetime, so you can do some clean up when the application is shutting down.

Disposing when the application ends - hooking into IApplicationLifetime events

ASP.NET Core exposes an interface called IApplicationLifetime that can be used to execute code when an application is starting up or shutting down:

public interface IApplicationLifetime  
{
    CancellationToken ApplicationStarted { get; }
    CancellationToken ApplicationStopping { get; }
    CancellationToken ApplicationStopped { get; }
    void StopApplication();
}

You can inject this into your Startup class (or elsewhere) and register to the events you need. Extending the previous example, we can inject both the IApplicationLifetime and our singleton SingletonAddedManually instance into the Configure method of Startup.cs:

public void Configure(  
    IApplicationBuilder app, 
    IApplicationLifetime applicationLifetime,
    SingletonAddedManually toDispose)
{
    applicationLifetime.ApplicationStopping.Register(OnShutdown, toDispose);

    // configure middleware etc
}

private void OnShutdown(object toDispose)  
{
    ((IDisposable)toDispose).Dispose();
}

I've created a simple helper method that takes the state passed in (the SingletonAddedManually instance), casts it to an IDisposable, and disposes it. This helper method is registered with the CancellationToken called ApplicationStopping, which is fired when closing down the application.
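If you don't need the state-object overload, a closure achieves the same result; a minimal equivalent sketch:

// Equivalent registration using a lambda closure instead of passing state
applicationLifetime.ApplicationStopping.Register(() => toDispose.Dispose());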

If we run the application again with this additional registration, you can see that the SingletonAddedManually instance is now disposed, just after the 'Application is shutting down' message.

$ dotnet run
+ SingletonAddedManually was created
Content root path: C:\Users\Sock\Repos\RegisterForDispose  
Now listening on: http://localhost:5000  
Application started. Press Ctrl+C to shut down.  
+ TransientCreatedByContainer was created
+ ScopedCreatedByFactory was created
+ SingletonCreatedByContainer was created
- TransientCreatedByContainer was disposed!
- ScopedCreatedByFactory was disposed!
Application is shutting down...  
- SingletonAddedManually was disposed!
- SingletonCreatedByContainer was disposed!

Summary

So there you have it, four different ways to dispose of your IDisposable objects. Wherever possible, you should either use the using statement, or let the DI container handle disposing objects for you. For cases where that's not possible, ASP.NET Core provides two mechanisms you can hook in to: RegisterForDispose and IApplicationLifetime.



Damien Bowden: Angular OIDC OAuth2 client with Google Identity Platform

This article shows how an Angular client can implement a login for a SPA application using the Google Identity Platform and OpenID Connect. The Angular application uses the npm package angular-auth-oidc-client to implement the OpenID Connect Implicit Flow to connect with the Google Identity Platform.

Code: https://github.com/damienbod/angular-auth-oidc-sample-google-openid

Setting up Google Identity Platform

The Google Identity Platform provides good documentation on how to set up its OpenID Connect implementation.

You need to log in to Google using a Gmail account.
https://accounts.google.com

Now open the OpenID Connect google documentation page

https://developers.google.com/identity/protocols/OpenIDConnect

Open the credentials page provided as a link.

https://console.developers.google.com/apis/credentials

Create new credentials for your application, select OAuth Client ID in the drop down:

Select a web application and configure the parameters to match your client application URLs.

Implementing the Angular OpenID Connect client

The client application is implemented using ASP.NET Core and Angular.

The npm package angular-auth-oidc-client is used to connect to the OpenID server. The package can be added to the package.json file in the dependencies.

"dependencies": {
    ...
    "angular-auth-oidc-client": "0.0.8"
},

Now the AuthModule, OidcSecurityService, AuthConfiguration can be imported. The AuthModule.forRoot() is used and added to the root module imports, the OidcSecurityService is added to the providers and the AuthConfiguration is the configuration class which is used to set up the OpenID Connect Implicit Flow.


import { AuthModule, OidcSecurityService, AuthConfiguration } from 'angular-auth-oidc-client';

@NgModule({
    imports: [
        BrowserModule,
        FormsModule,
        routing,
        HttpModule,
        JsonpModule,
        AuthModule.forRoot(),
    ],
    declarations: [
        AppComponent,
        ForbiddenComponent,
        HomeComponent,
        UnauthorizedComponent
    ],
    providers: [
        OidcSecurityService,
        Configuration
    ],
    bootstrap:    [AppComponent],
})

The AuthConfiguration class is used to configure the module.

stsServer
This is the URL where the STS server is located. We use https://accounts.google.com in this example.

redirect_url
This is the redirect_url which was configured on the google client ID on the server.

client_id
The client_id must match the Client ID for Web application which was configured on the google server.

response_type
This must be ‘id_token token’ or ‘id_token’. If you want to use the user service, or access data using APIs, you must use the ‘id_token token’ configuration. This is the OpenID Connect Implicit Flow. The possible values are defined in the well known configuration URL from the OpenID Connect server.

scope
The scopes used by the client. The openid scope must be included: ‘openid email profile’

post_logout_redirect_uri
The URL used after a server logout, if using the end session API. This is not supported by Google OpenID.

start_checksession
Checks the session using OpenID session management. Not supported by Google OpenID.

silent_renew
Renews the client tokens once the id_token expires.

startup_route
Angular route after a successful login.

forbidden_route
HTTP 403

unauthorized_route
HTTP 401

log_console_warning_active
Logs all module warnings to the console.

log_console_debug_active
Logs all module debug messages to the console.


export class AppModule {
    constructor(public authConfiguration: AuthConfiguration) {
        this.authConfiguration.stsServer = 'https://accounts.google.com';
        this.authConfiguration.redirect_url = 'https://localhost:44386';
        // The Client MUST validate that the aud (audience) Claim contains its client_id value registered at the Issuer identified by the iss (issuer) Claim as an audience.
        // The ID Token MUST be rejected if the ID Token does not list the Client as a valid audience, or if it contains additional audiences not trusted by the Client.
        this.authConfiguration.client_id = '188968487735-b1hh7k87nkkh6vv84548sinju2kpr7gn.apps.googleusercontent.com';
        this.authConfiguration.response_type = 'id_token token';
        this.authConfiguration.scope = 'openid email profile';
        this.authConfiguration.post_logout_redirect_uri = 'https://localhost:44386/Unauthorized';
        this.authConfiguration.start_checksession = false;
        this.authConfiguration.silent_renew = true;
        this.authConfiguration.startup_route = '/home';
        // HTTP 403
        this.authConfiguration.forbidden_route = '/Forbidden';
        // HTTP 401
        this.authConfiguration.unauthorized_route = '/Unauthorized';
        this.authConfiguration.log_console_warning_active = true;
        this.authConfiguration.log_console_debug_active = true;
        // id_token C8: The iat Claim can be used to reject tokens that were issued too far away from the current time,
        // limiting the amount of time that nonces need to be stored to prevent attacks.The acceptable range is Client specific.
        this.authConfiguration.max_id_token_iat_offset_allowed_in_seconds = 3;
        this.authConfiguration.override_well_known_configuration = true;
        this.authConfiguration.override_well_known_configuration_url = 'https://localhost:44386/wellknownconfiguration.json';
    }
}

Google OpenID does not support the .well-known/openid-configuration API as defined by OpenID. Google blocks this due to a CORS security restriction, so it cannot be used from a browser application. As a workaround, the well known configuration can be configured locally when using angular-auth-oidc-client. The Google OpenID configuration can be downloaded using the following URL:

https://accounts.google.com/.well-known/openid-configuration

The JSON file can then be saved locally on your server, and configured in the AuthConfiguration class using the override_well_known_configuration_url property.

this.authConfiguration.override_well_known_configuration = true;
this.authConfiguration.override_well_known_configuration_url = 'https://localhost:44386/wellknownconfiguration.json';

The following JSON is the actual google well known configuration. What's really interesting is that the end session endpoint is not supported, which I find strange.
It's also interesting to see that response_types_supported includes “token id_token”, which is not a supported value; per the specification this should be “id_token token”.

See: http://openid.net/specs/openid-connect-core-1_0.html

{
  "issuer": "https://accounts.google.com",
  "authorization_endpoint": "https://accounts.google.com/o/oauth2/v2/auth",
  "token_endpoint": "https://www.googleapis.com/oauth2/v4/token",
  "userinfo_endpoint": "https://www.googleapis.com/oauth2/v3/userinfo",
  "revocation_endpoint": "https://accounts.google.com/o/oauth2/revoke",
  "jwks_uri": "https://www.googleapis.com/oauth2/v3/certs",
  "response_types_supported": [
    "code",
    "token",
    "id_token",
    "code token",
    "code id_token",
    "token id_token",
    "code token id_token",
    "none"
  ],
  "subject_types_supported": [
    "public"
  ],
  "id_token_signing_alg_values_supported": [
    "RS256"
  ],
  "scopes_supported": [
    "openid",
    "email",
    "profile"
  ],
  "token_endpoint_auth_methods_supported": [
    "client_secret_post",
    "client_secret_basic"
  ],
  "claims_supported": [
    "aud",
    "email",
    "email_verified",
    "exp",
    "family_name",
    "given_name",
    "iat",
    "iss",
    "locale",
    "name",
    "picture",
    "sub"
  ],
  "code_challenge_methods_supported": [
    "plain",
    "S256"
  ]
}

The AppComponent implements the authorize and the authorizedCallback functions from the OidcSecurityService provider.

import { Component, OnInit } from '@angular/core';
import { Router } from '@angular/router';
import { Configuration } from './app.constants';
import { OidcSecurityService } from 'angular-auth-oidc-client';
import { ForbiddenComponent } from './forbidden/forbidden.component';
import { HomeComponent } from './home/home.component';
import { UnauthorizedComponent } from './unauthorized/unauthorized.component';


import './app.component.css';

@Component({
    selector: 'my-app',
    templateUrl: 'app.component.html'
})

export class AppComponent implements OnInit {

    constructor(public securityService: OidcSecurityService) {
    }

    ngOnInit() {
        if (window.location.hash) {
            this.securityService.authorizedCallback();
        }
    }

    login() {
        console.log('start login');
        this.securityService.authorize();
    }

    refreshSession() {
        console.log('start refreshSession');
        this.securityService.authorize();
    }

    logout() {
        console.log('start logoff');
        this.securityService.logoff();
    }
}

Running the application

Start the application using IIS Express in Visual Studio 2017. This starts with https://localhost:44386, which is configured in the launch settings file. If you use a different URL, you need to change this in the client application and also in the server's client credentials configuration.

Then log in with your Gmail account.

And you are redirected back to the SPA.

Links:

https://www.npmjs.com/package/angular-auth-oidc-client

https://developers.google.com/identity/protocols/OpenIDConnect



Damien Bowden: angular-auth-oidc-client Release, an OpenID Implicit Flow client in Angular

I have been blogging and writing code for Angular and OpenID Connect since Nov 1, 2015. Now, after all this time, I have decided to create my first npm package for Angular: angular-auth-oidc-client, which makes it easier to use the Angular OpenID Connect client. It is now available on npm.

npm package: https://www.npmjs.com/package/angular-auth-oidc-client

github code: https://github.com/damienbod/angular-auth-oidc-client

issues: https://github.com/damienbod/angular-auth-oidc-client/issues

Using the npm package: see the readme

Samples: https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow/tree/npm-lib-test/src/AngularClient

Notes:

FabianGosebrink and Roberto Simonetti have decided to help develop this npm package further, for which I'm very grateful. Anyone wishing to get involved, please do: create issues and pull requests. Help is always most welcome.

The next step is to complete the OpenID Relying Party certification.



Andrew Lock: Automatically validating anti-forgery tokens in ASP.NET Core with the AutoValidateAntiforgeryTokenAttribute

Automatically validating anti-forgery tokens in ASP.NET Core with the AutoValidateAntiforgeryTokenAttribute

This quick post is a response to a question about anti-forgery tokens I saw on twitter:

Anti-forgery tokens are a security mechanism to defend against cross-site request forgery (CSRF) attacks. Marius Schulz shared a solution to this problem in a blog post in which he creates a simple middleware to automatically validate the tokens sent in the request. This works, but there's actually an even simpler solution I wanted to share: the built-in AutoValidateAntiforgeryTokenAttribute.

tl;dr Add the AutoValidateAntiforgeryTokenAttribute as a global MVC filter to automatically validate all appropriate action methods.

Defending against cross-site request forgery in ASP.NET Core

I won't go into CSRF attacks in detail - I recommend you check out the docs for details if this is all new to you. In essence, when you send a form to the user, you add an extra hidden field that includes one half of a cryptographic token. Additionally, a cookie is set with the other half of the token. When the form is posted to the server, the two halves of the token are verified, ensuring only valid requests can be made.

Note, this post assumes you are using Razor and server-side rendering. You can use a similar approach to protect your API calls, but I won't go into that here.

In ASP.NET Core, the tokens are added to your forms automatically when you use the asp-* tag helpers, for example:

<form asp-controller="Manage" asp-action="ChangePassword" method="post">  
   <!-- Form details -->
</form>  

This will generate markup similar to the following. You can see the hidden input with the anti-forgery token:

<form method="post" action="/Manage/ChangePassword">  
  <!-- Form details -->
  <input name="__RequestVerificationToken" type="hidden" value="CfDJ8NrAkSldwD9CpLR...LongValueHere!" />
</form>  

Note, in ASP.NET Core 2.0, anti-forgery tokens are added to all your forms automatically, whether you use the asp-* tag helpers or not.

Adding the form field is just one part of the requirement; you also need to actually check that the tokens are valid on the server side. You can do this by decorating your controller actions with the [ValidateAntiForgeryToken] attribute. You'll need to add it to all of your POST actions to properly protect your application:

public class ManageController  
{
  [HttpPost]
  [ValidateAntiForgeryToken]
  public IActionResult ChangePassword()
  {
    // ...
    return View();
  }
}

This works, but is a bit of a pain - you have to decorate each of your POST action methods with the attribute. If you forget, you won't get an error; the action just won't be protected.

Automatically validating all appropriate actions

MVC has the concept of "Global filters", which are applied to every action in your MVC app. Unfortunately we can't just add the [ValidateAntiForgeryToken] attribute globally. We won't receive anti-forgery tokens for certain types of requests like GET or HEAD - if we applied [ValidateAntiForgeryToken] globally, then all of those requests would throw validation errors.

Luckily, ASP.NET Core provides another attribute for just such a use, the [AutoValidateAntiForgeryToken] attribute. This works identically to the [ValidateAntiForgeryToken] attribute, except that it ignores "safe" methods like GET and HEAD.

Adding the attribute to your application is simple - just add it to the global filters collection in your Startup class, when calling AddMvc().

public class Startup  
{
  public void ConfigureServices(IServiceCollection services)
  {
    services.AddMvc(options =>
    {
        options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute());
    });
  }
}

Job done! No need to decorate all your action methods with attributes, you will just be protected automatically!
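If you later need to exempt a specific action from the global filter - say, a hypothetical webhook endpoint posted to by an external service that can't send the token - you can opt out per-action with the built-in [IgnoreAntiforgeryToken] attribute:

public class WebhookController : Controller
{
    // Excluded from the global anti-forgery validation
    [HttpPost]
    [IgnoreAntiforgeryToken]
    public IActionResult Receive()
    {
        // handle the payload...
        return Ok();
    }
}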


Damien Bowden: OpenID Connect Session Management using an Angular application and IdentityServer4

The article shows how OpenID Connect Session Management can be implemented in an Angular application. The OpenID Connect Session Management 1.0 specification provides a way of monitoring the user session on the server using iframes. IdentityServer4 implements the server side of the specification. Note that this only monitors the server session; it does not monitor the lifecycle of the tokens used in the browser application, and has nothing to do with the OpenID tokens used by the SPA.

Code: https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow

Code: Angular auth module

Other posts in this series:

The OidcSecurityCheckSession class implements the Session Management from the specification. The init function creates an iframe and adds it to the window document in the DOM. The iframe uses the ‘authWellKnownEndpoints.check_session_iframe’ value, which is the connect/checksession API obtained from the ‘.well-known/openid-configuration’ service.

The init function also adds the event for the message, which is specified in the OpenID Connect Session Management documentation.

init() {
	this.sessionIframe = window.document.createElement('iframe');
	this.oidcSecurityCommon.logDebug(this.sessionIframe);
	this.sessionIframe.style.display = 'none';
	this.sessionIframe.src = this.authWellKnownEndpoints.check_session_iframe;

	window.document.body.appendChild(this.sessionIframe);
	this.iframeMessageEvent = this.messageHandler.bind(this);
	window.addEventListener('message', this.iframeMessageEvent, false);

	return Observable.create((observer: Observer<any>) => {
		this.sessionIframe.onload = () => {
			observer.next(this);
			observer.complete();
		}
	});
}

The pollServerSession function posts a message every 3 seconds to the iframe, which checks if the session on the server has been changed. The session_state is the value returned in the HTTP callback from a successful authorization.

pollServerSession(session_state: any, clientId: any) {
	let source = Observable.timer(3000, 3000)
		.timeInterval()
		.pluck('interval')
		.take(10000);

	let subscription = source.subscribe(() => {
			this.oidcSecurityCommon.logDebug(this.sessionIframe);
			this.sessionIframe.contentWindow.postMessage(clientId + ' ' + session_state, this.authConfiguration.stsServer);
		},
		(err: any) => {
			this.oidcSecurityCommon.logError('pollServerSession error: ' + err);
		},
		() => {
			this.oidcSecurityCommon.logDebug('checksession pollServerSession completed');
		});
}

The messageHandler handles the callback from the iframe. If the server session has changed, the output onCheckSessionChanged event is triggered.

private messageHandler(e: any) {
	if (e.origin === this.authConfiguration.stsServer &&
		e.source === this.sessionIframe.contentWindow
	) {
		if (e.data === 'error') {
			this.oidcSecurityCommon.logWarning('error from checksession messageHandler');
		} else if (e.data === 'changed') {
			this.onCheckSessionChanged.emit();
		} else {
			this.oidcSecurityCommon.logDebug(e.data + ' from checksession messageHandler');
		}
	}
}

The onCheckSessionChanged is a public EventEmitter output for this provider.

@Output() onCheckSessionChanged: EventEmitter<any> = new EventEmitter<any>(true);

The OidcSecurityService provider subscribes to the onCheckSessionChanged event and uses its onCheckSessionChanged function to handle this event.

this.oidcSecurityCheckSession.onCheckSessionChanged.subscribe(() => { this.onCheckSessionChanged(); });

After a successful login, and if the tokens are valid, the client application checks if the checksession should be used, and calls the init method and subscribes to it. When ready, it uses the pollServerSession function to activate the monitoring.

if (this.authConfiguration.start_checksession) {
  this.oidcSecurityCheckSession.init().subscribe(() => {
    this.oidcSecurityCheckSession.pollServerSession(
      result.session_state,
      this.authConfiguration.client_id
    );
  });
}

The onCheckSessionChanged function sets a public boolean which can be used to implement the required application logic when the server session has changed.

private onCheckSessionChanged() {
  this.oidcSecurityCommon.logDebug('onCheckSessionChanged');
  this.checkSessionChanged = true;
}

In this demo, the navigation bar allows the Angular application to refresh the session if the server session has changed.

<li>
  <a class="navigationLinkButton" *ngIf="securityService.checkSessionChanged" (click)="refreshSession()">Refresh Session</a>
</li>

When the application is started, the unchanged message is returned.

Then open the server application in a tab in the same browser session, and logout.

And the client application notices that the server session has changed and can react as required.

Links:

http://openid.net/specs/openid-connect-session-1_0-ID4.html

http://docs.identityserver.io/en/release/



Anuraj Parameswaran: Connecting to Azure Cosmos DB emulator from RoboMongo

This post is about connecting to the Azure Cosmos DB emulator from RoboMongo. Azure Cosmos DB is Microsoft’s globally distributed multi-model database, a superset of Azure DocumentDB. Due to some challenges, one of our teams decided to try some new NoSQL databases, and one of the options was DocumentDB. I found it quite a good option, since it supports the Mongo protocol, so an existing app can work without much change. So I decided to explore it. As a first step, I downloaded the DocumentDB emulator, now the Azure Cosmos DB emulator. I installed and started the emulator, which opens the Data Explorer web page (https://localhost:8081/_explorer/index.html) that helps to explore the documents inside the database. Then I tried to connect to it with RoboMongo (a free MongoDB client, which can be downloaded from here). But it was not working; I was getting some errors. I spent some time looking for similar issues and blog posts on how to connect from RoboMongo to the DocumentDB emulator, but I couldn’t find anything useful. After spending almost a day, I finally figured out the solution. Here are the steps.


Andrew Lock: Defining custom logging messages with LoggerMessage.Define in ASP.NET Core

Defining custom logging messages with LoggerMessage.Define in ASP.NET Core

One of the nice features introduced in ASP.NET Core is the universal logging infrastructure. In this post I take a look at one of the helper methods in the ASP.NET Core logging library, and how you can use it to efficiently log messages in your libraries.

Before we get into it, I'll give a quick overview of the ASP.NET Core logging infrastructure. Feel free to skip down to the section on helper methods if you already know the basics of how it works.

Logging overview

The logging infrastructure is exposed in the form of the ILogger<T> and ILoggerFactory interfaces, which you can inject into your services using dependency injection to log messages in a number of ways. For example, in the following ProductController, we log a message when the View action is invoked.

public class ProductController : Controller  
{
    private readonly ILogger _logger;

    public ProductController(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<ProductController>();
    }

    public IActionResult View(int id)
    {
        _logger.LogDebug("View Product called with id {Id}", id);
        return View();
    }
}

The ILogger can log messages at a number of different levels given by LogLevel:

public enum LogLevel  
{
    Trace = 0,
    Debug = 1,
    Information = 2,
    Warning = 3,
    Error = 4,
    Critical = 5,
}

The final aspect of the logging infrastructure is logging providers. These are the "sinks" that the logs are actually written to. You can plug in multiple providers, and write logs to a variety of different locations, for example the console, a file, Serilog etc.

One of the nice things about the logging infrastructure, and the ubiquitous use of DI in the ASP.NET Core libraries is that these same interfaces and classes are used throughout the libraries themselves, as well as in your application.

Controlling the logs produced by different categories

When creating a logger with CreateLogger<T>, the type name you pass in is used to create a category for the logs. At the application level, you can choose which LogLevels are actually output for a given category.

For example, you could specify that by default, Debug or higher level logs are written to providers, but for logs written by services in the Microsoft namespace, only logs of Warning level or above are written.

With this approach you can control the amount of logging produced by the various libraries in your application, increasing logging levels for only those areas that need them.

For example, the following screenshot shows the logs generated by default in an MVC application when we first hit the home page:

[Screenshot: console output showing the verbose default logs, mostly from classes in the Microsoft namespace]

That's a lot of logs! And notice that most of them are coming from internal components, from classes in the Microsoft namespace. It's basically just noise in this case. We can restrict the Microsoft namespace to logs of Warning level or above, but keep other logs at the Debug level:

[Screenshot: console output after filtering, with Microsoft namespace logs restricted to Warning and above]

With the default ASP.NET Core 1.X template, all you need to do is change the appsettings.json file, and set the log levels to Warning as appropriate:

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Debug",
      "System": "Warning",
      "Microsoft": "Warning"
    }
  }
}

Note, in ASP.NET Core 1.X, filtering is a little bit of an afterthought. Some logging providers, such as the Console provider, let you specify how to filter the categories. Alternatively, you can apply filters to all providers together using the WithFilter method. The logging in ASP.NET Core 2.0 will likely tidy up this approach - it is due to be updated in preview 2, so I won't dwell on it here.
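For reference, the WithFilter approach looks something like the following sketch, assuming you've referenced the Microsoft.Extensions.Logging.Filter package (this is the 1.X API, superseded in 2.0):

public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
{
    loggerFactory
        .WithFilter(new FilterLoggerSettings
        {
            { "Microsoft", LogLevel.Warning },
            { "System", LogLevel.Warning },
            { "Default", LogLevel.Debug }
        })
        .AddConsole();

    // configure middleware etc
}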

Filtering considerations

It's a good idea to instrument your code with as many log messages as are useful, and you can filter out the most verbose Trace and Debug log levels. These filtering capabilities are a really useful way of cutting through the cruft, but there's one particular downside.

If you add thousands of logging statements, that will inevitably start having a performance impact on your libraries, simply by the virtue of the fact you're running more code. One solution to this is to check whether the particular log level is enabled for the current logger, before trying to write to it. For example:

public class ProductController : Controller  
{
    public IActionResult Index()
    {
        if(_logger.IsEnabled(LogLevel.Debug))
        {
            _logger.LogDebug("Calling HomeController.Index");
        }
        return View();
    }
}

That's a bit of a pain to have to do every time you write a log message though, right? Luckily, ASP.NET Core comes with a helper class, LoggerMessage, to make using this pattern easier.

Creating logging delegates with the LoggerMessage Helper

The static LoggerMessage class is found in the Microsoft.Extensions.Logging.Abstractions package, and contains a number of static, generic Define methods that return an Action<>, which in turn can be used to create strongly-typed logging extensions. That probably all sounds a bit confusing, so let's break it down from the top.

The strongly-typed logging extension methods

We'll start with the logging code we want to use in our application. In this really simple example, we're just going to log the time that the HomeController.Index action method executes:

public class HomeController : Controller  
{
    public IActionResult Index()
    {
        _logger.HomeControllerIndexExecuting(DateTimeOffset.Now);
        return View();
    }
}

The HomeControllerIndexExecuting method is a custom extension method that takes a DateTimeOffset parameter. We can define it as follows:

internal static class LoggerExtensions  
{
    private static Action<ILogger, DateTimeOffset, Exception> _homeControllerIndexExecuting;

    static LoggerExtensions()
    {
        _homeControllerIndexExecuting = LoggerMessage.Define<DateTimeOffset>(
            logLevel: LogLevel.Debug,
            eventId: 1,
            formatString: "Executing 'Index' action at '{StartTime}'");
    }

    public static void HomeControllerIndexExecuting(
        this ILogger logger, DateTimeOffset executeTime)
    {
        _homeControllerIndexExecuting(logger, executeTime, null);
    }
}

The HomeControllerIndexExecuting method is an ILogger extension method that invokes a static Action field on our static LoggerExtensions class. The _homeControllerIndexExecuting field is initialised using the ASP.NET Core LoggerMessage.Define method, by providing a logLevel, an eventId and the formatString to use to create the log.

That probably seems like a lot of effort right? Why not just call _logger.LogDebug() directly in the HomeControllerIndexExecuting extension method?

Remember, our goal was to improve performance by only logging messages for unfiltered categories, without having to explicitly write if(_logger.IsEnabled(LogLevel.Debug)) checks. The answer lies in the LoggerMessage.Define<T> method.

The LoggerMessage.Define method

The purpose of the static LoggerMessage.Define<T> method is three-fold:

  1. Encapsulate the if statement to allow performant logging
  2. Enforce the correct strongly-typed parameters are passed when logging the message
  3. Ensure the log message contains the correct number of placeholders for parameters

This post is long enough, so I won't go into too much detail, but in summary, this gives you an idea of what the method looks like:

public static class LoggerMessage  
{
    public static Action<ILogger, T1, Exception> Define<T1>(
        LogLevel logLevel, EventId eventId, string formatString)
    {
        var formatter = CreateLogValuesFormatter(
            formatString, expectedNamedParameterCount: 1);

        return (logger, arg1, exception) =>
        {
            if (logger.IsEnabled(logLevel))
            {
                logger.Log(logLevel, eventId, new LogValues<T1>(formatter, arg1), exception, LogValues<T1>.Callback);
            }
        };
    }
}

First, this method checks that the provided format string ("Executing 'Index' action at '{StartTime}'") contains the correct number of named parameters. Then, it returns an action with the necessary number of generic parameters, including the if(logger.IsEnabled) clause. There are multiple overloads of the Define method that take 0-6 generic parameters, depending on the number you need for your custom logging message.
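For instance, a message with two placeholders would use the Define<T1, T2> overload. A small sketch (the names here are illustrative, not from the logging library):

internal static class ProductLoggerExtensions
{
    private static readonly Action<ILogger, int, string, Exception> _productViewed =
        LoggerMessage.Define<int, string>(
            logLevel: LogLevel.Information,
            eventId: 2,
            formatString: "Product {Id} viewed by {UserName}");

    // Usage: _logger.ProductViewed(23, "andrew");
    public static void ProductViewed(this ILogger logger, int id, string userName)
        => _productViewed(logger, id, userName, null);
}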

If you want to see the details, including how the LogValues<> class works, check out the source code on GitHub.

Summary

If you take one thing away from the post, consider the way you log messages in your own application or libraries. Try creating a static LoggerExtensions class, using LoggerMessage.Define to create a set of static fields, and adding strongly typed extension methods like HomeControllerIndexExecuting using the static Action<> fields.

If you want to see the logger messages in action, check out the sample app in the logging repo, or take a look at the ImageSharp library, which puts them to good effect.


Anuraj Parameswaran: Using Node Services in ASP.NET Core

This post is about running JavaScript code on the server. A huge number of useful, high-quality web-related open source packages are in the form of Node Package Manager (NPM) modules. NPM is the largest repository of open-source software packages in the world, and the Microsoft.AspNetCore.NodeServices package means that you can use any of them in your ASP.NET Core application.


Anuraj Parameswaran: Detecting AJAX Requests in ASP.NET Core

This post is about detecting AJAX requests in ASP.NET Core. In earlier versions of ASP.NET MVC, developers could easily determine whether a request was made via AJAX with the IsAjaxRequest() extension method on the Request object. In this post I implement similar functionality in ASP.NET Core.
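The summary doesn't include the implementation, but a common approach - a sketch, not necessarily the post's exact code - is an extension method that checks the X-Requested-With header:

public static class HttpRequestExtensions
{
    // jQuery and most AJAX libraries set this header on XHR requests
    public static bool IsAjaxRequest(this HttpRequest request)
    {
        return string.Equals(request.Headers["X-Requested-With"],
            "XMLHttpRequest", StringComparison.Ordinal);
    }
}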


Damien Bowden: Implementing a silent token renew in Angular for the OpenID Connect Implicit flow

This article shows how to implement a silent token renew in Angular using IdentityServer4 as the security token service server. The SPA Angular client implements the OpenID Connect Implicit Flow ‘id_token token’. When the id_token expires, the client requests new tokens from the server, so that the user does not need to authorise again.

Code: https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow

Other posts in this series:

When a user of the client app authorises for the first time, after a successful login on the STS server, the AuthorizedCallback function is called in the Angular application. If the server response and the tokens are successfully validated, as defined in the OpenID Connect specification, the silent renew is initialized, and the token validation method is called.

 public AuthorizedCallback() {
        
	...
		
	if (authResponseIsValid) {
		
		...

		if (this._configuration.silent_renew) {
			this._oidcSecuritySilentRenew.initRenew();
		}

		this.runTokenValidatation();

		this._router.navigate([this._configuration.startupRoute]);
	} else {
		this.ResetAuthorizationData();
		this._router.navigate(['/Unauthorized']);
	}

}

The OidcSecuritySilentRenew TypeScript class implements the iframe which is used for the silent token renew. This iframe is added to the parent HTML page. The renew is implemented in an iframe because we do not want the Angular application to refresh; otherwise, for example, we would lose form data.

...

@Injectable()
export class OidcSecuritySilentRenew {

    private expiresIn: number;
    private authorizationTime: number;
    private renewInSeconds = 30;

    private _sessionIframe: any;

    constructor(private _configuration: AuthConfiguration) {
    }

    public initRenew() {
        this._sessionIframe = window.document.createElement('iframe');
        console.log(this._sessionIframe);
        this._sessionIframe.style.display = 'none';

        window.document.body.appendChild(this._sessionIframe);
    }

    ...
}

The runTokenValidatation function starts an Observable timer. The application subscribes to the Observable, which executes every 3 seconds. The id_token is validated, or more precisely, checked to see that it has not expired. If the token has expired and the silent_renew configuration has been activated, the RefreshSession function will be called to get new tokens.

private runTokenValidatation() {
	let source = Observable.timer(3000, 3000)
		.timeInterval()
		.pluck('interval')
		.take(10000);

	let subscription = source.subscribe(() => {
		if (this._isAuthorized) {
			if (this.oidcSecurityValidation.IsTokenExpired(this.retrieve('authorizationDataIdToken'))) {
				console.log('IsAuthorized: isTokenExpired');

				if (this._configuration.silent_renew) {
					this.RefreshSession();
				} else {
					this.ResetAuthorizationData();
				}
			}
		}
	},
	function (err: any) {
		console.log('Error: ' + err);
	},
	function () {
		console.log('Completed');
	});
}

The RefreshSession function creates the required nonce and state which is used for the OpenID Implicit Flow validation and starts an authentication and authorization of the client application and the user.

public RefreshSession() {
        console.log('BEGIN refresh session Authorize');

        let nonce = 'N' + Math.random() + '' + Date.now();
        let state = Date.now() + '' + Math.random();

        this.store('authStateControl', state);
        this.store('authNonce', nonce);
        console.log('RefreshSession created. adding myautostate: ' + this.retrieve('authStateControl'));

        let url = this.createAuthorizeUrl(nonce, state);

        this._oidcSecuritySilentRenew.startRenew(url);
    }

The startRenew function sets the iframe src to the URL for the OpenID Connect flow. If successful, the id_token and the access_token are returned and the application runs without any interruption.

public startRenew(url: string) {
        this._sessionIframe.src = url;

        return new Promise((resolve) => {
            this._sessionIframe.onload = () => {
                resolve();
            }
        });
}

IdentityServer4 Implicit Flow configuration

The STS server, using IdentityServer4, implements the server side of the OpenID Implicit flow. The AccessTokenLifetime and IdentityTokenLifetime properties are set to 30s and 10s respectively. After 10s the id_token will expire and the client application will request new tokens. The access_token is valid for 30s, so that any client API requests will not fail. If you set these values to the same value, then the client will have to request new tokens before the id_token expires.

new Client
{
	ClientName = "angularclient",
	ClientId = "angularclient",
	AccessTokenType = AccessTokenType.Reference,
	AccessTokenLifetime = 30,
	IdentityTokenLifetime = 10,
	AllowedGrantTypes = GrantTypes.Implicit,
	AllowAccessTokensViaBrowser = true,
	RedirectUris = new List<string>
	{
		"https://localhost:44311"

	},
	PostLogoutRedirectUris = new List<string>
	{
		"https://localhost:44311/Unauthorized"
	},
	AllowedCorsOrigins = new List<string>
	{
		"https://localhost:44311",
		"http://localhost:44311"
	},
	AllowedScopes = new List<string>
	{
		"openid",
		"dataEventRecords",
		"dataeventrecordsscope",
		"securedFiles",
		"securedfilesscope",
		"role"
	}
}

When the application is run, the user can login, and the tokens are refreshed every ten seconds as configured on the server.

Links:

http://openid.net/specs/openid-connect-implicit-1_0.html

https://github.com/IdentityServer/IdentityServer4

https://identityserver4.readthedocs.io/en/release/



Andrew Lock: Using ImageSharp to resize images in ASP.NET Core - Part 4: saving to disk

Using ImageSharp to resize images in ASP.NET Core - Part 4: saving to disk

This is the next in a series of posts on using ImageSharp to resize images in an ASP.NET Core application. In the earlier posts in the series, I showed how you could define an MVC action that takes a path to a file stored in the wwwroot folder, resizes it, and serves the resized file.

The biggest problem with this is that resizing an image is relatively expensive, taking multiple seconds to process large images. In the previous post I showed how you could use the IDistributedCache interface to cache the resized image, and use that for subsequent requests.

This works pretty well, and avoids the need to process the image multiple times, but in the implementation I showed, there were a couple of drawbacks. The main issue was the lack of caching headers and features at the HTTP level - whenever the image is requested, the MVC action will return the whole data to the browser, even though nothing has changed.

In the following image, you can see that every request returns a 200 response and the full image data. The subsequent requests are all much faster than the original because we're using data cached in the IDistributedCache, but the browser is not caching our resized image.


In this post I show a different approach to caching the data - instead of storing the file in an IDistributedCache, we instead write the file to disk in the wwwroot folder. We then use StaticFileMiddleware to serve the file directly, without ever hitting the MVC middleware after the initial request. This lets us take advantage of the built in caching headers and etag behaviour that comes with the StaticFileMiddleware.

Note: James Jackson-South has been working hard on some extensible ImageSharp middleware to provide the functionality in these blog posts. He's even written a blog post introducing it, so check it out!

The system design

The approach I'm using in this post is shown in the following figure:


With this design a request for resizing an image, e.g. to /resized/200/120/original.jpg, would go through a number of steps:

  1. A request arrives for /resized/200/120/original.jpg
  2. The StaticFileMiddleware looks for the original.jpg file in the folder wwwroot/resized/200/120/, but it doesn't exist, so the request passes on to the MvcMiddleware
  3. The MvcMiddleware invokes the ResizeImage middleware, and saves the resized file in the folder wwwroot/resized/200/120/.
  4. On the next request, the StaticFileMiddleware finds the resized image in the wwwroot folder, and serves it as usual, short-circuiting the middleware pipeline before the MvcMiddleware can run.
  5. All subsequent requests for the resized file are served by the StaticFileMiddleware.
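For this flow to work, the StaticFileMiddleware must be registered before MVC in the middleware pipeline. That's already the case in the default templates, but as a minimal sketch of the Configure method this design assumes (using the standard extension methods):

public void Configure(IApplicationBuilder app)
{
    // Serves any file that exists under wwwroot, short-circuiting the pipeline
    app.UseStaticFiles();

    // Only reached when no matching file exists on disk yet
    app.UseMvcWithDefaultRoute();
}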

Writing a resized file to the wwwroot folder

After we first resize an image using the MvcMiddleware, we need to store the resized image in the wwwroot folder. In ASP.NET Core there is an abstraction called IFileProvider which can be used to obtain information about files. The IHostingEnvironment includes two such IFileProviders:

  • ContentRootFileProvider - an IFileProvider for the Content Root, where your application files are stored, usually the project root or publish folder.
  • WebRootFileProvider - an IFileProvider for the wwwroot folder

We can use the WebRootFileProvider to open a stream to our destination file, to which we will write the resized image. The outline of the method is as follows, with the preconditions and DoS protection code removed for brevity:

public class HomeController : Controller  
{
    private readonly IFileProvider _fileProvider;
    public HomeController(IHostingEnvironment env)
    {
        _fileProvider = env.WebRootFileProvider;
    }

    [Route("/resized/{width}/{height}/{*url}")]
    public IActionResult ResizeImage(string url, int width, int height)
    {
        // Preconditions and sanitisation
        // Check the original image exists
        var originalPath = PathString.FromUriComponent("/" + url);
        var fileInfo = _fileProvider.GetFileInfo(originalPath);
        if (!fileInfo.Exists) { return NotFound(); }

        // Replace the extension on the file (we only resize to jpg currently) 
        var resizedPath = ReplaceExtension($"/resized/{width}/{height}/{url}");

        // Use the IFileProvider to get an IFileInfo
        var resizedInfo = _fileProvider.GetFileInfo(resizedPath);
        // Create the destination folder tree if it doesn't already exist
        Directory.CreateDirectory(Path.GetDirectoryName(resizedInfo.PhysicalPath));

        // resize the image and save it to the output stream
        using (var outputStream = new FileStream(resizedInfo.PhysicalPath, FileMode.CreateNew))
        using (var inputStream = fileInfo.CreateReadStream())
        using (var image = Image.Load(inputStream))
        {
            image
                .Resize(width, height)
                .SaveAsJpeg(outputStream);
        }

        return PhysicalFile(resizedInfo.PhysicalPath, "image/jpeg");
    }

    private static string ReplaceExtension(string wwwRelativePath)
    {
        return Path.Combine(
            Path.GetDirectoryName(wwwRelativePath),
            Path.GetFileNameWithoutExtension(wwwRelativePath)) + ".jpg";
    }
}

The overall design of this method is pretty simple.

  1. Check the original file exists.
  2. Create the destination file path. We're replacing the file extension with jpg at the moment because we are always resizing to a jpeg.
  3. Obtain an IFileInfo for the destination file. This is relative to the wwwroot folder as we are using the WebRootFileProvider on IHostingEnvironment.
  4. Open a file stream for the destination file.
  5. Open the original image, resize it, and save it to the output file stream.

With this method, we have everything we need to cache files in the wwwroot folder. Even better, nothing else needs to change in our Startup file, or anywhere else in our program.

Trying it out

Time to take it for a spin! If we make a number of requests for the same page again, and compare them to the first image in this post, you can see that we still have the fast response times for requests after the first, as we only resize the image once. However, you can also see that some of the requests now return a 304 response, and just 208 bytes of data. The browser uses its standard HTTP caching mechanisms on the client side, rather than caching only happening on the server.

This is made possible by the ETag and Last-Modified headers sent automatically by the StaticFileMiddleware.


Note, we are not actually sending any caching headers by default - I wrote a post on how to do this here, which gives you control over how much caching browsers should do.
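As a rough sketch of that approach (the max-age value here is just an example, not something from this post), you can add caching headers to everything the StaticFileMiddleware serves using StaticFileOptions.OnPrepareResponse:

app.UseStaticFiles(new StaticFileOptions
{
    OnPrepareResponse = ctx =>
    {
        // Cache the static files (including our resized images) for 7 days
        ctx.Context.Response.Headers["Cache-Control"] = "public,max-age=604800";
    }
});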

It might seem a little odd that there are three 200 requests before we start getting 304s. This is because:

  1. The first request is handled by the ResizeImage MVC method, but we are not adding any cache-related headers like ETag etc - we are just serving the file using the PhysicalFileResult.
  2. The second request is handled by the StaticFileMiddleware. It returns the file from disk, including an ETag and a Last-Modified header.
  3. The third request is made with additional headers - If-Modified-Since and If-None-Match headers. This returns the image data with a new ETag.
  4. Subsequent requests send the new ETag in the If-None-Match header, and the server responds with 304s.

I'm not entirely sure why we need three requests for the whole data here - it seems like two would suffice, given that the third request is made with the If-Modified-Since and If-None-Match headers. Why would the ETag need to change between requests two and three? I presume this is just standard behaviour though, and something I need to look at in more detail when I have time!

Summary

This post takes an alternative approach to caching compared to my last post on ImageSharp. Instead of caching the resized images in an IDistributedCache, we save them directly to the wwwroot folder. That way we can use all of the built in file response capabilities of the StaticFileMiddleware, without having to write it ourselves.

Having said that, James Jackson-South has written some middleware to take a similar approach, which handles all the caching headers for you. If this series has been of interest, I encourage you to check it out!


Dominick Baier: Techorama 2017

Once again Techorama was an awesome conference – kudos to the organizers!

Seth and Channel9 recorded my talk and also did an interview – so if you couldn’t be there in person, there are some updates about IdentityServer4 and identity in general.


Filed under: .NET Security, ASP.NET Core, IdentityServer, OAuth, OpenID Connect, WebAPI


Andrew Lock: The Microsoft.AspNetCore.All metapackage is huge, and that's awesome, thanks to the .NET Core runtime store

The Microsoft.AspNetCore.All metapackage is huge, and that's awesome, thanks to the .NET Core runtime store

In the ASP.NET Core 2.0 preview 1 release, Microsoft have released a new metapackage called Microsoft.AspNetCore.All. This package consists of every single package shipped by Microsoft as part of ASP.NET Core 2.0, including all the Razor packages, MVC packages, Entity Framework Core packages, everything!

Now, if your first reaction to this news is anything like mine, you're quite possibly cradling your head in your hands, thinking they've completely thrown in the towel on creating a small, modular framework. But don't worry, there's more to it than that, and it should help deal with a number of problems.

Note, the Microsoft.AspNetCore.All metapackage will only ever target netcoreapp2.0, as it relies on features of .NET Core, as you'll see. If you are targeting .NET Framework, then you will still be able to use the individual ASP.NET Core packages (as they will target netstandard2.0); it is just the metapackage that will not be available.

Version issues

From the outset, ASP.NET Core has been plagued by confusion around versioning. First of all, you have the confusion between the .NET Core runtime version and the .NET tools version (I discussed those in a previous post). On top of that, you have the somewhat abstract version of ASP.NET Core itself, such as ASP.NET Core 1.0 or 1.1.

And then you have the package versions themselves. Every package in ASP.NET Core revs its version number semi-independently. This makes sense semantically - if a package doesn't change between two versions of ASP.NET Core, then the version numbers shouldn't change.

Unfortunately this can make it tricky to work out which version of a package to install. For example, version 1.0.5 of the Microsoft.AspNetCore metapackage (that I talk about here) adds the following packages to your application:

"Microsoft.AspNetCore.Diagnostics" (>= 1.0.3)
"Microsoft.AspNetCore.Hosting" (>= 1.0.3)
"Microsoft.AspNetCore.Routing" (>= 1.0.4)
"Microsoft.AspNetCore.Server.IISIntegration" (>= 1.0.3)
"Microsoft.AspNetCore.Server.Kestrel" (>= 1.0.4)
"Microsoft.Extensions.Configuration.EnvironmentVariables" (>= 1.0.2)
"Microsoft.Extensions.Configuration.FileExtensions" (>= 1.0.2)
"Microsoft.Extensions.Configuration.Json" (>= 1.0.2)
"Microsoft.Extensions.Logging" (>= 1.0.2)
"Microsoft.Extensions.Logging.Console" (>= 1.0.2)
"Microsoft.Extensions.Options.ConfigurationExtensions" (>= 1.0.2)

As you can see, the version numbers are all over the place - it's basically impossible to know which version to install without polling NuGet to find the latest version of each package. And updating your packages becomes a nightmare, especially if you're working outside of VS or Rider, and are manually editing your .csproj files.

With ASP.NET Core 2.0, Microsoft are moving to a different approach. While strict semantic versioning and many small packages are a great idea, the reality sure gets confusing, especially when you have a large project.

The Microsoft.AspNetCore.All metapackage references all of the ASP.NET Core packages, but it does so with a single version number:

<ItemGroup>  
  <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0-preview1-final" />
</ItemGroup>  

Updating your ASP.NET Core packages will just involve updating this one package. No more trying to work out which packages have changed and which haven't.

"But wait", I hear you cry, "if that metapackage includes every package in ASP.NET Core, won't that make my published application huge?". Actually, no, thanks to a new feature in .NET Core 2.0.

Number of packages

.NET Core 2.0 brings a new feature called the Runtime Store. This essentially lets you pre-install packages on a machine, in a central location, so you don't have to include them in the publish output of your individual apps.

You can think of the Runtime Store as a Global Assembly Cache (GAC) for .NET Core. As well as serving as a central location for packages, it can also ngen the libraries so they don't have to be JIT-ed by your application, which improves startup times for your app.

When you install the .NET Core 2.0 preview 1, it will also install all of the ASP.NET Core packages into the runtime store. That means they're always going to be on a machine that has the .NET Core 2.0 runtime installed. If you've installed .NET Core 2.0 preview 1, you can find the store here: C:\Program Files\dotnet\store.


Surprise, surprise, this setup works brilliantly with the Microsoft.AspNetCore.All metapackage!

When you publish an ASP.NET Core 2.0 app that references Microsoft.AspNetCore.All, you'll find that your publish folder is miraculously free of the ASP.NET Core packages like Microsoft.AspNetCore.Hosting and Microsoft.Extensions.Logging.

Just compare the output of the basic dotnet new mvc template in an ASP.NET Core 1.0 app to the 2.0 preview 1. Look at the size of that scrollbar in the top image - 94 items vs 13 items in ASP.NET Core 2.0.


Making the publish size smaller has a number of obvious benefits, but one of the biggest in my mind is making the size of your Docker images smaller. Docker works in layers; when you deploy your app with Docker, the whole of ASP.NET Core will now be included in the base image, instead of as part of your app. Shipping updates to your app only requires shipping the files in your publish folder; a smaller publish folder means a smaller Docker image file, and a smaller image file means a faster deployment! Win win.

Discoverability of APIs

The final advantage of the Microsoft.AspNetCore.All package is a massive boon to new developers - discoverability of the available APIs. One of the problems with having many small packages in ASP.NET Core 1.x is that you won't necessarily know what's available to you, or which package an API lives in, unless you go checking the docs or browsing on NuGet.

Now, granted, VS 2017 has got much better at this, as you can turn on suggestions for common NuGet packages, but you still have to know what APIs are available to you:


If you're working in VS Code, or you don't already know the APIs you're after, this is no use to you.

In ASP.NET Core 2.0, that problem goes away. Your project already references all the packages, so the APIs, and IntelliSense suggestions, are already there!


In my mind, this is a massive help for new users. I still find myself trying to remember the exact API I'm looking for to prompt VS to install the associated package, and this problem is just completely solved!

Summary

The Microsoft.AspNetCore.All metapackage includes every package released by Microsoft as part of ASP.NET Core. This has a number of benefits:

  • It reduces the package version management burden
  • It reduces the number of packages that need to be published with the app
  • It makes it easy to use the core packages without having to add them to the solution explicitly

These benefits should make writing ASP.NET Core 2.0 apps against .NET Core just that little bit easier, so kudos to all those involved.


Ben Foster: Applying IP Address restrictions in AWS API Gateway

Recently I've been exploring the features of the AWS API Gateway to see if it's a viable routing solution for some of our microservices hosted in ECS.

One of these services is a new onboarding API that we wish to make available to a trusted third party. To keep the integration as simple as possible we opted for API key based authentication.

In addition to supporting API Key authentication, API Gateway also allows you to configure plans with usage policies, which met our second requirement, to provide rate limits on this API.

As an additional level of security, we decided to whitelist the IP Addresses that could hit the API. The way you configure this is not quite what I expected since it's not a setting directly within API Gateway but instead done using IAM policies.

Below is an example API within API Gateway. I want to apply an IP Address restriction to the webhooks resource:

The first step is to configure your resource Authorization settings to use IAM. Select the resource method (in my case, ANY) and then AWS_IAM in the Authorization select list:

Next go to IAM and create a new Policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "execute-api:Invoke"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "xxx.xx.xx.xx/32"
                }
            },
            "Resource": "arn:aws:execute-api:*:*:*"
        }
    ]
}

Note that this policy allows invocation of all resources within all APIs in API Gateway from the specified IP Address. You'll want to restrict this to a specific API or resource, using the format:

arn:aws:execute-api:region:account-id:api-id/stage/METHOD_HTTP_VERB/Resource-path
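For example, a hypothetical ARN restricting the policy to POST requests to the webhooks resource in the prod stage of a single API (the region, account id and api-id below are all made up) would look like:

arn:aws:execute-api:eu-west-2:123456789012:a1b2c3d4e5/prod/POST/webhooks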

It was my assumption that I would attach this policy to my API Gateway role and, hey presto, I'd have my IP restriction in place. However, the policy is instead applied to a user, who then needs to sign the request using their access keys.

This can be tested using Postman:

With this done you should now be able to test your IP address restrictions. One thing I did notice is that policy changes do not seem to take effect immediately - instead I had to disable and re-enable IAM authorization on the resource after changing my policy.

Final thoughts

AWS API Gateway is a great service but I find it odd that it doesn't support what I would class as a standard feature of API Gateways. Given that the API I was testing is only going to be used by a single client, creating an IAM user isn't the end of the world, however, I wouldn't want to do this for APIs with a large number of clients.

Finally, in order to make use of usage plans you need to require an API key. This means that to achieve IP restrictions and rate limiting, clients will need to send two authentication tokens, which isn't an ideal integration experience.
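To illustrate (with a made-up API host and truncated values), each request would need to carry both the API key header and the signed Authorization header:

GET /prod/webhooks HTTP/1.1
Host: a1b2c3d4e5.execute-api.eu-west-2.amazonaws.com
x-api-key: <api-key-value>
x-amz-date: 20170601T120000Z
Authorization: AWS4-HMAC-SHA256 Credential=AKIA.../20170601/eu-west-2/execute-api/aws4_request, SignedHeaders=host;x-amz-date, Signature=...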

When I first started my investigation it was based on achieving the following architecture:

Unfortunately running API Gateway in front of ELB still requires your load balancers to be publicly accessible, which makes the security features void if a client can figure out your ELB address. It seems API Gateway is geared more towards Lambda than ELB, so it looks like we'll need to consider other options for now.


Andrew Lock: How to use multiple hosting environments on the same machine in ASP.NET Core

How to use multiple hosting environments on the same machine in ASP.NET Core

This short post is in response to a comment I received on a post I wrote a while ago, about how to set the hosting environment in ASP.NET Core. It's a question I've heard a couple of times, so thought I'd write it up here.

The question by Denis Zavershinskiy is as follows:

Do you know if there is a way to overwrite environment variable name? For example, I want my CoolProject to take environment name not from ASPNETCORE_ENVIRONMENT but from COOL_PROJ_ENV. Is it possible?

The answer to that question is a little nuanced. If you already have an app deployed, and want to switch the environment for it without changing other apps on that machine, then you can't do it with Environment variables. Those obviously affect the whole environment!

tl;dr; Create a custom configuration object in your Program.cs file, load the environment variable using a custom key, and call UseEnvironment on the WebHostBuilder.

However, if this is a capability you think you will need, you can use a similar approach to the one I use in that post to set the environment using command line arguments.

This approach involves building a new IConfiguration object, and passing that in to the WebHostBuilder on application startup. This lets you load configuration from any source, just as you would in your normal startup method, and pass that configuration to the WebHostBuilder using UseConfiguration. The WebHostBuilder will look for a key named "Environment" in this configuration, and use that as the environment.

For example, if you use the following configuration:

var config = new ConfigurationBuilder()  
    .AddCommandLine(args)
    .Build();

var host = new WebHostBuilder()  
    .UseConfiguration(config)
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseKestrel()
    .UseIISIntegration()
    .UseStartup<Startup>()
    .Build();

You can pass any setting value with this setup, including the "environment variable":

> dotnet run --environment "MyCustomEnv"

Project TestApp (.NETCoreApp,Version=v1.0) was previously compiled. Skipping compilation.

Hosting environment: MyCustomEnv  
Content root path: C:\Projects\Repos\MyCoolProj\src\MyCoolProj  
Now listening on: http://localhost:5000  
Application started. Press Ctrl+C to shut down.  

This is fine if you can use command line arguments like this, but what if you want to use environment variables? Again, the problem is that they're shared between all apps on a machine.

However, you can use a similar approach, coupled with the UseEnvironment extension method, to set a different environment for each app. This will override the ASPNETCORE_ENVIRONMENT value, if it exists, with the value you provide for this application alone. No other applications on the machine will be affected.

public class Program  
{
    public static void Main(string[] args)
    {
        const string EnvironmentKey = "MYCOOLPROJECT_ENVIRONMENT";

        var config = new ConfigurationBuilder()
            .AddEnvironmentVariables()
            .Build();

        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseEnvironment(config[EnvironmentKey])
            .UseIISIntegration()
            .UseStartup<Startup>()
            .UseApplicationInsights()
            .Build();

        host.Run();
    }
}

To test this out, I added the MYCOOLPROJECT_ENVIRONMENT key with a value of Staging to the launchSettings.json file VS uses when running the app:

{
  "profiles": {
    "EnvironmentTest": {
      "commandName": "Project",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        "MYCOOLPROJECT_ENVIRONMENT": "Staging"
      },
      "applicationUrl": "http://localhost:56172"
    }
  }
}

Running the app using F5, shows that we have correctly picked up the Staging value using our custom environment variable:

Hosting environment: Staging  
Content root path: C:\Users\Sock\Repos\MyCoolProj\src\MyCoolProj  
Now listening on: http://localhost:56172  
Application started. Press Ctrl+C to shut down.  

With this approach you can effectively have a per-app environment variable that you can use to configure the environment for an app individually.

Summary

On shared hosting, you may be in a situation where you want to use a different IHostingEnvironment for multiple apps on the same machine. You can achieve this with the approach outlined in this post, building an IConfiguration object and passing a custom key to the WebHostBuilder.UseEnvironment extension method.


Andrew Lock: Using Razor Pages to simplify basic actions in ASP.NET Core 2.0 preview 1

Using Razor Pages to simplify basic actions in ASP.NET Core 2.0 preview 1

One of the brand new features added to ASP.NET Core in version 2.0 preview 1 is Razor Pages. This harks back to the (terribly named) ASP.NET Web Pages framework, which was a simpler, page-based, alternative framework to MVC. It was a completely different stack, and was sort of .NET's answer to PHP.

I know, that might make you shudder, but I've actually had a fair amount of success with it. It allowed our designers who had HTML knowledge but no C# to be productive, without having to deal with full MVC. Web developers might turn their nose up at that, but there's really no reason for a designer to go there, and would you want them to? A developer could just jump on for any dynamic bits of code that were beyond the designer, but otherwise you could leave them to it.

This post dips a toe into the justification of Razor Pages and when you might want to use it in your own ASP.NET Core applications.

ASP.NET Core 2.0 includes a new razor template, which shows the default MVC template completely converted to Razor Pages. Mike Brind has a great rundown of this template on his blog. In this post I'm looking at a hybrid approach - only converting the simplest pages to Razor Pages.

What are Razor Pages?

The new Razor Pages treads a line somewhere between ASP.NET Web Pages and full MVC. It's still a "page based" model, in that all the code for a single page lives in one file, and that file is used to describe the URL structure of the app.

This makes it very easy to reason about the behaviour of simple apps, and removes some of the ceremony of creating new pages. Instead of having to create a controller, create an action, create a model, and create a view, you can simply create the Razor Page instead.

Now granted, for cases where you're doing anything much more than displaying a View, MVC may well be the correct thing to use. And the great thing about Razor Pages is that it's built directly on top of the MVC stack. Essentially all of the primitives like model binding and validation are available to you in Razor Pages. Or even better, you could use Razor Pages for most of the app, and fallback to full MVC for the complex actions.

Putting it to the test - the MVC template

As an example, I thought I'd convert the default MVC template to use Razor pages for the simple actions. This is remarkably simple to do and highlights the advantages of using Razor Pages if you don't have much logic in your app.

The MVC template

The default MVC template is pretty simple - it consists of a HomeController with four actions:

  • Index()
  • About()
  • Contact()
  • Error()

The first three of these are really simple actions that optionally set some ViewData and return a ViewResult. These are prime candidates for converting to Razor Pages.

public class HomeController : Controller  
{
    public IActionResult Index()
    {
        return View();
    }

    public IActionResult About()
    {
        ViewData["Message"] = "Your application description page.";

        return View();
    }

    public IActionResult Contact()
    {
        ViewData["Message"] = "Your contact page.";

        return View();
    }

    public IActionResult Error()
    {
        return View(new ErrorViewModel 
        { 
            RequestId = Activity.Current?.Id ?? HttpContext.TraceIdentifier 
        });
    }
}

Each of these simple methods renders a .cshtml file in the Views folder of the project. These are perfect for us to convert to Razor Pages - they are basically just HTML files with some dynamic regions. For example, the About.cshtml file looks like this:

@{
    ViewData["Title"] = "About";
}
<h2>@ViewData["Title"].</h2>  
<h3>@ViewData["Message"]</h3>

<p>Use this area to provide additional information.</p>  

So lets convert these to Razor Pages.

The Error action is only slightly more complicated, and could easily be converted to a Razor Page too (as it is in the dotnet new razor template). To highlight the ability to mix the two however, I'm going to leave that as an MVC action.

The Razor Pages version

Converting these pages is super-simple, I'll take the About page as an example.

To turn the page into a Razor Page instead of a view, you simply need to add the @page directive to the top of the page, and move the file from Views/Home/About.cshtml to Pages/Home/About.cshtml.

Razor Pages are stored in the Pages folder by default.

Finally, we can move the small piece of logic that sets the message from the action method to the page itself. We could store it in ViewData["Message"], but there's not a lot of need in this case, as it's not used in parent layouts or anything, so we'll just write it directly in the markup.

@page

@{
    ViewData["Title"] = "About";
}
<h2>@ViewData["Title"].</h2>  
<h3>Your application description page.</h3>

<p>Use this area to provide additional information.</p>  

With that, we can delete the About action method, and give it a test!


Hmmm, that's not quite right - we appear to have lost the layout! The Views folder contains a _ViewStart.cshtml that defines the default Layout for all the views in its subfolders (unless overridden by a nested _ViewStart.cshtml or by the .cshtml file itself). Razor Pages uses the exact same concept of layouts and partial views, so we can add a _ViewStart.cshtml to the Pages folder too:

@{
    Layout = "_Layout";
}

Note that we don't need to create separate layout files themselves - we can still reference the ones in the Views/Shared folder, as the Views/Shared folder is searched by Razor Pages (as well as, in this case, the Pages/Home folder and the Pages/Shared folder). With the Layout statement in place, we get our original About page back!


And that's pretty much that! We can easily convert the other pages in the same way, and we can still access them at the same URL paths: /Home/About, /Home/Contact and /Home/Index.

Limitations

I actually bent the truth slightly there - we can't quite use all the same URLs. With the full-MVC template and the default MVC routing template, {controller=Home}/{action=Index}/{id?}, the following URLs all route to the same MVC HomeController.Index action:

  • /Home/Index
  • /Home
  • /

Those first two work perfectly well with Razor Pages, in the same way as with MVC, thanks to Index.cshtml being considered the default document for the folder. The last one is a problem however - you can't get this result with Razor Pages alone. And that's not really surprising - a page-based model implies a single URL - which is essentially what we have.

As far as I can tell, if you want a single action to cover all these URLs you'll need to fall back to MVC. But then, at least you can, and in those cases, maybe you should! Alternatively, you could create two pages, one for / and one for /Home, and use partial views to avoid duplication of markup between them.
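As a sketch of that second option (the partial name _HomeContent.cshtml is my own invention, not from this post), both Pages/Index.cshtml and Pages/Home/Index.cshtml could consist of little more than:

@page

@Html.Partial("_HomeContent")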

So where does that leave us? Well if we just look at the number of files, you can see that we actually have more now than before. The difference is that you can add a new URL handler/Page by just creating a single .cshtml file in the appropriate Pages folder, no model or controller files needed!


Whether this works for you will depend on the sort of application you're building. A simple, mostly-static app with some dynamic sections might be well suited to building as a Razor Pages app, while a more fully-featured, complicated app most definitely would not!

The real winner here is the ability to combine Razor with full MVC - you can start out building your application with Razor Pages, and if you find some complex pages you have a choice. You can either add the functionality using Razor Pages alone (as in Mike Brind's follow up post on handlers), or you can drop back to full MVC, whichever works best for you.

Summary

This post just scratches the surface of what you can do with Razor Pages - you can do far more complicated MVC-style things if you like, but personally I feel like if you start doing that, you should just switch to proper MVC actions and encapsulate your logic properly. If you're interested, install the .NET Core 2.0 preview 1, and check out the docs!


Andrew Lock: Exploring Program.cs, Startup.cs and CreateDefaultBuilder in ASP.NET Core 2 preview 1

Exploring Program.cs, Startup.cs and CreateDefaultBuilder in ASP.NET Core 2 preview 1

One of the goals in ASP.NET Core 2.0 has been to clean up the basic templates, simplify the basic use-cases, and make it easier to get started with new projects.

This is evident in the new Program and Startup classes, which, on the face of it, are much simpler than their ASP.NET Core 1.0 counterparts. In this post, I'll take a look at the new WebHost.CreateDefaultBuilder() method, and see how it bootstraps your application.

Program and Startup responsibilities in ASP.NET Core 1.X

In ASP.NET Core 1.X, the Program class is used to setup the IWebHost. The default template for a web app looks something like this:

public class Program  
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}

This relatively compact file does a number of things:

  • Configures a web server (Kestrel)
  • Sets the content directory (the directory containing the appsettings.json file etc.)
  • Sets up IIS integration
  • Defines the Startup class to use
  • Builds and runs the IWebHost

The Startup class varies considerably depending on the application you are building. The MVC template shown below is a fairly typical starter template:

public class Startup  
{
    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables();
        Configuration = builder.Build();
    }

    public IConfigurationRoot Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Add framework services.
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        loggerFactory.AddConsole(Configuration.GetSection("Logging"));
        loggerFactory.AddDebug();

        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
            app.UseBrowserLink();
        }
        else
        {
            app.UseExceptionHandler("/Home/Error");
        }

        app.UseStaticFiles();

        app.UseMvc(routes =>
        {
            routes.MapRoute(
                name: "default",
                template: "{controller=Home}/{action=Index}/{id?}");
        });
    }
}

This is a far more substantial file that has four main responsibilities:

  • Setup configuration in the Startup constructor
  • Setup dependency injection in ConfigureServices
  • Setup Logging in Configure
  • Setup the middleware pipeline in Configure

This all works pretty well, but there are a number of points that the ASP.NET team considered to be less than ideal.

First, setting up configuration is relatively verbose, but also pretty standard; it generally doesn't need to vary much either between applications, or as the application evolves.

Secondly, logging is set up in the Configure method of Startup, after configuration and DI have been configured. This has two drawbacks. First, it makes logging feel a little like a second-class citizen - Configure is generally used to set up the middleware pipeline, so having the logging config in there doesn't make a huge amount of sense. Second, it means you can't easily log the bootstrapping of the application itself. There are ways to do it, but it's not obvious.

In ASP.NET Core 2.0 preview 1, these two points have been addressed by modifying the IWebHost and by creating a helper method for setting up your apps.

Program and Startup responsibilities in ASP.NET Core 2.0 preview 1

In ASP.NET Core 2.0 preview 1, the responsibilities of the IWebHost have changed somewhat. As well as having the same responsibilities as before, the IWebHost has gained two more:

  • Setup configuration
  • Setup Logging

In addition, ASP.NET Core 2.0 introduces a helper method, CreateDefaultBuilder, that encapsulates most of the common code found in Program.cs, as well as taking care of configuration and logging!

public class Program  
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}

As you can see, there's no mention of Kestrel, IIS integration, configuration etc - that's all handled by the CreateDefaultBuilder method as you'll see in a sec.

Moving the configuration and logging code into this method also simplifies the Startup file:

public class Startup  
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }
        else
        {
            app.UseExceptionHandler("/Home/Error");
        }

        app.UseStaticFiles();

        app.UseMvc(routes =>
        {
            routes.MapRoute(
                name: "default",
                template: "{controller=Home}/{action=Index}/{id?}");
        });
    }
}

This class is pretty much identical to the 1.0 class with the logging and most of the configuration code removed. Notice too that the IConfiguration object is injected into the class and stored in a property, instead of the configuration being built in the constructor itself.

This is new to ASP.NET Core 2.0 - the IConfiguration object is registered with DI by default. In 1.X you had to register the IConfigurationRoot yourself if you needed it to be available in DI.
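That also means any service resolved from the container can request the IConfiguration in its constructor. As an example, a hypothetical service of your own (not part of the template) could look like this:

public class EmailSettingsProvider
{
    private readonly IConfiguration _configuration;

    public EmailSettingsProvider(IConfiguration configuration)
    {
        // IConfiguration is registered by the WebHost in 2.0, so it can be injected directly
        _configuration = configuration;
    }

    public string SmtpHost => _configuration["Email:SmtpHost"];
}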

My initial reaction to CreateDefaultBuilder was that it was just obfuscating the setup, and it felt a bit like a step backwards, but in hindsight that was more of a "who moved my cheese" reaction. There's nothing magical about CreateDefaultBuilder; it just hides a certain amount of standard, ceremonial code that would often go unchanged anyway.

The WebHost.CreateDefaultBuilder helper method

In order to properly understand the static CreateDefaultBuilder helper method, I decided to take a peek at the source code on GitHub! You'll be pleased to know, if you're used to ASP.NET Core 1.X, most of this will look remarkably familiar.

public static IWebHostBuilder CreateDefaultBuilder(string[] args)  
{
    var builder = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .ConfigureAppConfiguration((hostingContext, config) => { /* setup config */  })
        .ConfigureLogging((hostingContext, logging) =>  { /* setup logging */  })
        .UseIISIntegration()
        .UseDefaultServiceProvider((context, options) =>  { /* setup the DI container to use */  })
        .ConfigureServices(services => 
        {
            services.AddTransient<IConfigureOptions<KestrelServerOptions>, KestrelServerOptionsSetup>();
        });

    return builder;
}

There are a few new methods in there that I've elided for now, which I'll explore in follow-up posts. You can see that this method is largely doing the same work that Program did in ASP.NET Core 1.0 - it sets up Kestrel, defines the ContentRoot, and sets up IIS integration, just like before. Additionally, it does a number of other things:

  • ConfigureAppConfiguration - this contains the configuration code that used to live in the Startup constructor
  • ConfigureLogging - sets up the logging that used to live in Startup.Configure
  • UseDefaultServiceProvider - I'll go into this in a later post, but this sets up the built-in DI container, and lets you customise its behaviour
  • ConfigureServices - Adds additional services needed by components added to the IWebHost. In particular, it configures the Kestrel server options, which lets you easily define your web host setup as part of your normal config.

I'll take a closer look at configuration and logging in this post, and dive into the other methods in a later post.

Setting up app configuration in ConfigureAppConfiguration

The ConfigureAppConfiguration method takes a lambda with two parameters - a WebHostBuilderContext called hostingContext, and an IConfigurationBuilder instance, config:

ConfigureAppConfiguration((hostingContext, config) =>  
{
    var env = hostingContext.HostingEnvironment;

    config.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true, reloadOnChange: true);

    if (env.IsDevelopment())
    {
        var appAssembly = Assembly.Load(new AssemblyName(env.ApplicationName));
        if (appAssembly != null)
        {
            config.AddUserSecrets(appAssembly, optional: true);
        }
    }

    config.AddEnvironmentVariables();

    if (args != null)
    {
        config.AddCommandLine(args);
    }
});

As you can see, the hostingContext parameter exposes the IHostingEnvironment (whether we're running in "Development" or "Production") as a property, HostingEnvironment. Apart from that, the bulk of the code should be pretty familiar if you've used ASP.NET Core 1.X.

The one exception to this is setting up User Secrets, which is done a little differently in ASP.NET Core 2.0. This uses an assembly reference to load the user secrets, though you can still use the generic config.AddUserSecrets<T> version in your own config.

In ASP.NET Core 2.0, the UserSecretsId is stored in an assembly attribute, hence the need for the Assembly code above. You can still define the id to use in your csproj file - it will be embedded in an assembly level attribute at compile time.
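For example, adding a property like the following to your csproj (the id itself is just an illustration) is all that's required - the assembly-level attribute is generated for you at compile time:

<PropertyGroup>
  <UserSecretsId>aspnet-MyCoolProj-4EC8C320-D207-4F3A-B429-21B0E885272F</UserSecretsId>
</PropertyGroup>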

This is all pretty standard stuff. It loads configuration from the following providers, in the following order:

  • appsettings.json (optional)
  • appsettings.{env.EnvironmentName}.json (optional)
  • User Secrets
  • Environment Variables
  • Command line arguments

The main difference between this method and the approach in ASP.NET Core 1.X is the location - config is now part of the WebHost itself, instead of sliding in through the back door, so to speak, via the Startup constructor. Also, the initial creation and final call to Build() on the IConfigurationBuilder instance happen in the web host itself, instead of being handled by you.

Setting up logging in ConfigureLogging

The ConfigureLogging method also takes a lambda with two parameters - a WebHostBuilderContext called hostingContext, just like the configuration method, and a LoggerFactory instance, logging:

ConfigureLogging((hostingContext, logging) =>  
{
    logging.UseConfiguration(hostingContext.Configuration.GetSection("Logging"));
    logging.AddConsole();
    logging.AddDebug();
});

The logging infrastructure has changed a little in ASP.NET Core 2.0, but broadly speaking, this code echoes what you would find in the Configure method of an ASP.NET Core 1.0 app, setting up the Console and Debug log providers. You can use the UseConfiguration method to setup the log levels to use by accessing the already-defined IConfiguration, exposed on hostingContext.Configuration.
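The "Logging" section it reads is the same one found in appsettings.json in the default templates - something along these lines (the exact defaults vary between template versions):

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Warning"
    }
  }
}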

Customising your WebHostBuilder

Hopefully this dive into the WebHost.CreateDefaultBuilder helper helps show why the ASP.NET team decided to introduce it. There's a fair amount of ceremony in getting an app up and running, and this makes it far simpler.

But what if this isn't the setup you want? Well, then you don't have to use it! There's nothing special about the helper - you could copy and paste its code into your own app, customise it, and you're good to go.

That's not quite true - the KestrelServerOptionsSetup class referenced in ConfigureServices is currently internal, so you would have to remove this. I'll dive into what this does in a later post.

Summary

This post looked at some of the differences between Program.cs and Startup.cs in moving from ASP.NET Core 1.X to 2.0 preview 1. In particular, I took a slightly deeper look into the new WebHost.CreateDefaultBuilder method which aims to simplify the initial bootstrapping of your app. If you're not keen on the choices it makes for you, or you need to customise them, you can still do this, exactly as you did before. The choice is yours!


Andrew Lock: The .NET Core 2.0 Preview 1, version numbers and global.json

The .NET Core 2.0 Preview 1, version numbers and global.json

So as I'm sure most people reading this are aware, Microsoft announced ASP.NET Core and .NET Core 2.0 Preview 1 at Microsoft Build 2017 this week. I've been pretty busy with a variety of things, so I wasn't really planning on checking out most of the pieces for now. I've been keeping an eye on the community standups so I kind of knew what to expect. But after watching the video of Scott Hanselman and Dan Roth, and reading Steve Gordon's blog post, I couldn't resist!

If you haven't already, I really recommend you check out the various links above first - this post isn't really meant to serve as an introduction to ASP.NET Core; it focuses on installing the preview on your system while you continue to work on production ASP.NET Core 1.0/1.1 sites. In particular, it looks at the different version numbers associated with .NET Core.

Shortly after writing this, I realised Scott Hanselman had just written a very similar post, check it out!

Installing the .NET Core 2.0 Preview 1 SDK

The first step is obviously installing the .NET Core 2.0 Preview 1 SDK from https://www.microsoft.com/net/core/preview#windowscmd. This is pretty painless, no matrix of options, just a "Download now" button. And there's just one version number!


One interesting point is that .NET Core now also includes ASP.NET Core. That should mean smaller packages when deploying your applications, which is nice! I'll be exploring this at some point soon.

It's also worth noting that if you want to create ASP.NET Core 2.0 applications in Visual Studio, then you'll need to install the preview version of Visual Studio 2017. This should install side-by-side with the stable version, but I decided to just stick to the SDK for now while I'm just playing with it.

.NET Core version numbers

I pointed out just now that there's finally only one version number for the various .NET Core parts - 2.0 preview 1 - but that's not entirely true.

There are two different aspects to a .NET Core install: the version number of the SDK/Tools/CLI; and the version number of the .NET Core runtime (or .NET Core Shared Framework Host).

If you've just installed 2.0 preview 1, then when you run dotnet --info you should see something like the following:

$ dotnet --info
.NET Command Line Tools (2.0.0-preview1-005977)

Product Information:  
 Version:            2.0.0-preview1-005977
 Commit SHA-1 hash:  414cab8a0b

Runtime Environment:  
 OS Name:     Windows
 OS Version:  10.0.14393
 OS Platform: Windows
 RID:         win10-x64
 Base Path:   C:\Program Files\dotnet\sdk\2.0.0-preview1-005977\

Microsoft .NET Core Shared Framework Host

  Version  : 2.0.0-preview1-002111-00
  Build    : 1ff021936263d492539399688f46fd3827169983

This gives you a whole bunch of different values, but there are two different versions listed here:

  • 2.0.0-preview1-005977 - the CLI version
  • 2.0.0-preview1-002111-00 - the runtime version

But these version numbers are slightly misleading. I also have version 1.0 of the .NET Core tools installed on the machine, as well as versions 1.1.1 and 1.0.4 of the .NET Core runtime.

Understanding multiple .NET Core runtime versions

One of the selling points of .NET Core is being able to install multiple versions of the .NET Core runtime side-by-side, without affecting one another. This is in contrast to the way the .NET Framework works, as a central install - you can't install versions 4.5, 4.6 and 4.7 of the .NET Framework side-by-side; 4.7 will replace the previous versions.

You can see which versions of the .NET Core runtime are installed by browsing to C:\Program Files\dotnet\shared\Microsoft.NETCore.App. As you can see, I have 3 versions available on my machine:


So the question is, how do you know which runtime will be used when you run an application?

Well, you specify it in your .csproj file!

For example, in a .NET Core 1.1 project, you would set the <TargetFramework> (or <TargetFrameworks> if you're targeting more than one) to netcoreapp1.1:

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

</Project>

This will use the .NET Core version 1.1.1 that I have installed when it runs.

If you had set <TargetFramework> to netcoreapp1.0 then the version 1.0.4 that I have installed would be used.
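Under the hood, the <TargetFramework> you choose ends up in your app's runtimeconfig.json when you build, and that file is what the shared framework host reads to pick a runtime. As a sketch, for a netcoreapp1.1 app it looks something like this (the exact version written depends on your SDK):

{
  "runtimeOptions": {
    "framework": {
      "name": "Microsoft.NETCore.App",
      "version": "1.1.1"
    }
  }
}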

Which brings us to the version 2.0 preview 1 .csproj:

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
    <UserSecretsId>aspnet-v2test-32450BD7-D635-411A-A507-53B20874D210</UserSecretsId>
  </PropertyGroup>

  <ItemGroup>
    <Folder Include="wwwroot\" />
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0-preview1-final" />
  </ItemGroup>

</Project>  

As before, the csproj file specifies that the <TargetFramework> should be netcoreapp2.0, so the newest version of the 2.0 runtime on my machine will be used - 2.0.0-preview1-002111-00.

Understanding the SDK versions

Hopefully that clears up the .NET Core runtime versions, but there's still the issue of the SDK/CLI version. What is that used for?

If you navigate to C:\Program Files\dotnet\sdk, you can see the SDK versions installed on your system. I've got two versions installed on my machine: 1.0.0 and 2.0.0-preview1-005977.


It's a bit of an over-simplification, but you can think of the SDK/CLI as providing all of the "build" related commands, dotnet new, dotnet build, dotnet publish etc.

Generally speaking, any SDK version higher than the one used to create a project can be used to dotnet build and dotnet publish it. So you can use the 2.0 SDK to build projects created with the 1.0 SDK.

So in that case, you can just use the newer SDK all the time, right? Well, mostly. If you're still building project.json based projects then you need to ensure you use the RC2 SDK.

The current version of the SDK also becomes apparent when you call dotnet new - if you are using the 2.0 preview 1 SDK, you will get a 2.0-based application; if you're using the 1.0 SDK, you'll get a 1.1-based application!

The question is how do you control which version of the SDK is used?

Choosing an SDK version with global.json

The global.json file has a very simple schema, that simply defines which version of the SDK to use:

{
  "sdk": {
    "version": "1.0.0"
  }
}

Back in the day, the global.json was also used to define the source code folders for a "solution", but that functionality was removed with the 1.0.0 SDK.

When you run dotnet new or dotnet build, the dotnet host looks in the current folder, and all parent folders up to the drive's root, for a global.json. If it can't find one, it will just use the newest version of the SDK - in my case, 2.0.0-preview1-005977.

If a global.json exists (and the SDK version it references is installed!) then that version will be used for all SDK commands, which is basically all dotnet commands other than dotnet run.

Personally, I've put the above global.json in my Repos folder, so any existing projects will continue to use the 1.0.0 SDK, as will any new projects I create. I've then created a subfolder called netcore20 and added the following global.json. I can then use this folder whenever I'm playing with the ASP.NET Core 2.0 preview bits without risking any issues!

{
  "sdk": {
    "version": "2.0.0-preview1-005977"
  }
}

Summary

Versioning has been an issue throughout the recent history of .NET Core. Aligning all the versions going forward will definitely simplify things and hopefully cause less confusion, but it's still a good idea to try and understand the difference between runtime and SDK versions. I hope this post has helped clear some of that up!


Anuraj Parameswaran: What is new in ASP.NET Core 2.0 Preview

This post is about new features of ASP.NET Core 2.0 Preview. Microsoft announced ASP.NET Core 2.0 Preview 1 at Build 2017. This post will introduce some ASP.NET 2.0 features.


Damien Bowden: Anti-Forgery Validation with ASP.NET Core MVC and Angular

This article shows how API requests from an Angular SPA inside an ASP.NET Core MVC application can be protected against XSRF by adding an anti-forgery cookie. This is required when using Angular with cookies to persist the auth token.

Code: https://github.com/damienbod/AspNetCoreMvcAngular

Blogs in this Series

Cross Site Request Forgery

XSRF is an attack where a hacker makes malicious requests to a web app when the user of the website is already authenticated. This can happen when a website uses cookies to persist the token of a trusted website user. A pure SPA should not use cookies for this, as it is hard to protect against this attack. With a server-side rendered application like ASP.NET Core MVC, anti-forgery cookies can be used to protect against it, which makes using cookies safer.

Angular automatically adds the X-XSRF-TOKEN HTTP header with the anti-forgery cookie value to each request if the XSRF-TOKEN cookie is present. ASP.NET Core needs to know that it must use this header to validate the request. This can be added in the ConfigureServices method in the Startup class.

public void ConfigureServices(IServiceCollection services)
{
	...
	services.AddAntiforgery(options => options.HeaderName = "X-XSRF-TOKEN");
	services.AddMvc();
}

The XSRF-TOKEN cookie is added to the response of the HTTP request. The cookie is a secure cookie, so it is only sent over HTTPS and not HTTP; all plain HTTP requests will fail and return a 400 response. The cookie is created and added each time a new server URL is called, but not for an API call.

app.Use(async (context, next) =>
{
	string path = context.Request.Path.Value;
	if (path != null && !path.ToLower().Contains("/api"))
	{
		// XSRF-TOKEN used by angular in the $http if provided
		var tokens = antiforgery.GetAndStoreTokens(context);
		context.Response.Cookies.Append("XSRF-TOKEN", 
		  tokens.RequestToken, new CookieOptions { 
		    HttpOnly = false, 
		    Secure = true 
		  }
		);
	}

	...

	await next();
});

The API uses the ValidateAntiForgeryToken attribute to check whether the request contains the correct value for the XSRF-TOKEN cookie. If this is incorrect, or not sent, the request is rejected with a 400 response. The attribute is required when data is changed; HTTP GET requests should not require it.

[HttpPut]
[ValidateAntiForgeryToken]
[Route("{id:int}")]
public IActionResult Update(int id, [FromBody]Thing thing)
{
	...

	return Ok(updatedThing);
}
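As an aside, rather than decorating every mutating action, you could apply the validation globally for unsafe HTTP methods using the built-in AutoValidateAntiforgeryTokenAttribute. A minimal sketch of the registration:

public void ConfigureServices(IServiceCollection services)
{
    services.AddAntiforgery(options => options.HeaderName = "X-XSRF-TOKEN");

    // Validates the anti-forgery token on unsafe methods (POST, PUT, DELETE etc.)
    // but skips safe methods such as GET, HEAD, OPTIONS and TRACE.
    services.AddMvc(options =>
    {
        options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute());
    });
}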

You can check the cookies in the Chrome browser.

Or in Firefox using Firebug (Cookies Tab).

Links:

https://docs.microsoft.com/en-us/aspnet/core/security/anti-request-forgery

http://www.fiyazhasan.me/angularjs-anti-forgery-with-asp-net-core/

http://www.dotnetcurry.com/aspnet/1343/aspnet-core-csrf-antiforgery-token

http://stackoverflow.com/questions/43312973/how-to-implement-x-xsrf-token-with-angular2-app-and-net-core-app/43313402

https://en.wikipedia.org/wiki/Cross-site_request_forgery

https://stormpath.com/blog/angular-xsrf



Andrew Lock: Using ImageSharp to resize images in ASP.NET Core - Part 3: caching

Using ImageSharp to resize images in ASP.NET Core - Part 3: caching

In my previous post I updated my comparison between ImageSharp and CoreCompat.System.Drawing. Instead of loading an arbitrary file from a URL using the HttpClient, the path to a file in the wwwroot folder was provided as part of the URL path.

This approach falls more in line with use cases you've likely run into many times - an image is uploaded at a particular size and you need to dynamically resize it to arbitrary dimensions. I've been using ImageSharp to do this - a new image library that runs on netstandard1.1 and is written entirely in managed code, so is fully cross-platform.

The code from the previous post fulfils this role, allowing you to arbitrarily resize images in your website. The biggest problem with the code as-is is how long it takes to process an image - large images could take multiple seconds to be loaded, processed, and served, as you can see below

[Screenshot: browser network tab showing a large image taking over 2 seconds to serve]

This post shows how to add IDistributedCache to the implementation to quickly improve the response time for serving resized images.

IDistributedCache and MemoryDistributedCache

The most obvious option here is to cache the resized image after it's first processed, and just serve the resized image from cache from subsequent requests. ASP.NET Core includes a general caching infrastructure in the form of the IDistributedCache interface. It's used by various parts of the framework, whenever caching is needed.

The IDistributedCache interface provides a number of methods related to saving and loading byte arrays by a string key, the pertinent ones of which are shown below:

public interface IDistributedCache
{
    Task<byte[]> GetAsync(string key);
    Task SetAsync(string key, byte[] value, DistributedCacheEntryOptions options);
}

In addition, there is an extension method that provides a simplified version of the Set method, without the DistributedCacheEntryOptions:

public static void Set(this IDistributedCache cache, string key, byte[] value)  
{
    cache.Set(key, value, new DistributedCacheEntryOptions());
}

There are a number of different implementations of IDistributedCache that you can use to store data in Redis, SQL Server and other stores, but by default, ASP.NET Core registers an in-memory cache, MemoryDistributedCache, which uses the IMemoryCache under the hood. This essentially caches data in an in-process dictionary. Normally, you would want to replace this with an actual distributed cache, but for our purposes, it should do the job nicely.
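If the default isn't already registered in your app (or you just want to be explicit about it), AddDistributedMemoryCache registers the in-memory implementation; a minimal sketch:

public void ConfigureServices(IServiceCollection services)
{
    // Registers MemoryDistributedCache as the IDistributedCache implementation.
    // In production you would typically swap this for Redis or SQL Server.
    services.AddDistributedMemoryCache();

    services.AddMvc();
}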

Loading data from an IDistributedCache

The first step to adding caching is to decide on a key to use. For our case, that's quite simple - we can combine the requested path with the requested image dimensions. We can then try and load the image from the cache. If we get a hit, we can use the cached byte[] data to create a FileResult directly, instead of having to load and resize the image again:

public class HomeController : Controller  
{
    private readonly IFileProvider _fileProvider;
    private readonly IDistributedCache _cache;

    public HomeController(IHostingEnvironment env, IDistributedCache cache)
    {
        _fileProvider = env.WebRootFileProvider;
        _cache = cache;
    }

    [Route("/image/{width}/{height}/{*url}")]
    public async Task<IActionResult> ResizeImage(string url, int width, int height)
    {
        if (width < 0 || height < 0) { return BadRequest(); }

        var key = $"/{width}/{height}/{url}";
        var data = await _cache.GetAsync(key);
        if (data == null)
        {
           // resize image and cache it
        }

        return File(data, "image/jpeg");
    }
}

All that remains is to add the set-cache code. This is very similar to the code in the previous post, the only difference being that we need to create a byte[] of data to cache, instead of passing a Stream to the FileResult.

Saving resized images in an IDistributedCache

Saving data to the IDistributedCache is very simple - you simply provide the string key and the data as byte[]. We'll reuse most of the code from the previous post here - checking the requested image exists, reading it into memory, resizing it and saving it to an output stream. The only difference is that we call ToArray() on the MemoryStream to get a byte[] we can store in cache.

var imagePath = PathString.FromUriComponent("/" + url);  
var fileInfo = _fileProvider.GetFileInfo(imagePath);  
if (!fileInfo.Exists) { return NotFound(); }

using (var outputStream = new MemoryStream())  
{
    using (var inputStream = fileInfo.CreateReadStream())
    using (var image = Image.Load(inputStream))
    {
        image
            .Resize(width, height)
            .SaveAsJpeg(outputStream);
    }

    data = outputStream.ToArray();
}
await _cache.SetAsync(key, data);  
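Note that this caches the data indefinitely. If you want entries to expire, the SetAsync overload that takes a DistributedCacheEntryOptions lets you attach a sliding or absolute expiration - a sketch (the one-hour value is just an example):

await _cache.SetAsync(key, data, new DistributedCacheEntryOptions
{
    // Evict the entry if it hasn't been requested for an hour
    SlidingExpiration = TimeSpan.FromHours(1)
});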

And that's it, we're done - let's take it for a spin.

Testing it out

The first time we request an image, the cache is empty, so we still have to check the image exists, load it up, resize it, and store it in the cache. This is the same process as we had before, so the first request for a resized image is always going to be slow:

[Screenshot: the first request still taking over 2 seconds to complete]

If we reload the page however, you can see that our subsequent requests are much better - we're down from 2+ seconds to 10ms in some cases!

[Screenshot: subsequent requests served from the cache in around 10ms]

This is clearly a vast improvement, and suddenly makes the approach of resizing on-the-fly a viable option. If we want, we can add some logging to our method to confirm that we are in fact pulling the data from the cache:

[Screenshot: log output confirming the image is served from the cache]

Protecting against DOS attacks

While we have a working, cached version of our resizing action, there is still one particular aspect we haven't covered that was raised by Bertrand Le Roy in the last post. With the current implementation, you can resize to arbitrary dimensions, which opens the app up to Denial of Service (DOS) attacks.

A malicious user could use significant server resources by requesting multiple different sizes for a resized image, e.g.

  • 640×480 - /640/480/images/clouds.jpg
  • 640×481 - /640/481/images/clouds.jpg
  • 640×482 - /640/482/images/clouds.jpg
  • 641×480 - /641/480/images/clouds.jpg
  • 642×480 - /642/480/images/clouds.jpg
  • etc.

With the current design, each of those requests would trigger an expensive resize operation, as well as caching the result in the IDistributedCache. Hopefully it's clear that this could end up being a problem - your server ends up using a significant amount of CPU resizing the images, and a large amount of memory caching every slight variation.

There are a number of ways you could get round this, all centred around limiting the number of "acceptable" image sizes, for example:

  • Only allow n specific, fixed sizes, e.g. 640×480, 960×720 etc.
  • Only allow n specific dimensions, e.g. 640, 720 and 960, but allow any combination of these, e.g. 640×640, 640×720 etc.
  • Only allow the dimension to be specified in one direction, e.g. height=640 or width=640, and automatically scale the other dimension to keep the correct aspect ratio

Limiting the number of supported sizes like this means you also need to decide what to do if an unsupported size is requested. The easiest solution is to just return a 404 NotFound, but that's not necessarily the most user-friendly.

An alternative approach is to always return the smallest supported size that is larger than the requested size. For example, if we only support 640×480, 960×720, then:

  • if 640×480 is requested, return 640×480
  • if 480×320 is requested, return 640×480
  • if 720×540 is requested, return 960×720
  • if 1280×960 is requested, return ?

We still have a question of what to return for the last point, which requests a size larger than our largest supported size, but you would probably just return the biggest size you can here.

Exactly which approach you choose is obviously up to you. As an example, I've updated the ResizeImage action method to ensure that either width or height is always 0, to preserve the image's aspect ratio. The SanitizeSize method is shown afterwards.

[Route("/image/{width}/{height}/{*url}")]
public async Task<IActionResult> ResizeImage(string url, int width, int height)  
{
    if (width < 0 || height < 0) { return BadRequest(); }
    if (width == 0 && height == 0) { return BadRequest(); }

    if(height == 0)
    {
        width = SanitizeSize(width);
    }
    else
    {
        width = 0;
        height = SanitizeSize(height);
    }

    // remainder of method
}

For the SanitizeSize method, I've chosen to have 3 fixed sizes, where the smallest size larger than the requested size is used, or the largest size (1280) if you request larger than this.

private static readonly int[] SupportedSizes = { 480, 960, 1280 };

private int SanitizeSize(int value)
{
    // Clamp anything above the largest supported size
    if (value >= 1280) { return 1280; }

    // Otherwise use the smallest supported size that is at least
    // as large as the requested value (requires using System.Linq)
    return SupportedSizes.First(size => size >= value);
}

With this in place, you can only request 6 different sizes for each image - 480, 960, 1280 width or 480, 960, 1280 height. The other dimension will have whatever value preserves the aspect ratio.

This provides you with simple protection from DOS attacks. It does, however, raise the question of whether it is worth doing this work in a web request at all. If you only have fixed supported sizes, then resizing the images at compile time and saving them as files might make more sense; that way you avoid all the overhead of resizing images at runtime. Anyway, I digress - this protects you from DOS attacks!

Working with other cache implementations

In this example, I simply used one of the most basic caching options available. However, as this code depends on the general IDistributedCache interface, we can easily extend it. If we wanted to, we could replace the MemoryDistributedCache implementation with a RedisCache, which would allow a whole cluster of web servers to resize an image only once, instead of once per server.
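As a sketch, assuming the Microsoft.Extensions.Caching.Redis package, the swap is a single registration change in ConfigureServices (the connection string and instance name are placeholders):

public void ConfigureServices(IServiceCollection services)
{
    // Replaces the in-memory registration; every IDistributedCache consumer
    // now shares a Redis-backed cache across the whole web farm.
    services.AddDistributedRedisCache(options =>
    {
        options.Configuration = "localhost"; // placeholder Redis connection string
        options.InstanceName = "images:";    // placeholder key prefix
    });

    services.AddMvc();
}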

Adding caching to your code really can be as simple as this example, especially when you have immutable data as I do. I don't need to worry about images getting stale - I'm assuming you're not going to be adding images to your wwwroot folder when the server is in production - so caching is pretty simple. Obviously, as soon as you have to worry about cache invalidation, things get far more complicated.

An alternative approach

This caching approach works, but there's one thing that slightly bugs me. Even though we're not resizing the image on every request, we're still serving the whole image data every time. Notice how in the last screenshot every response is identical - a 200 response and 51.1KB of data?

Response caching is a standard approach to getting around this issue - instead of returning the whole data, the server sends some cache headers to the browser with the original data. On subsequent requests, the server can check the headers sent back by the browser, and if nothing's changed, can send a 304 response telling the browser to just use the data it has cached.
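For illustration only, a hand-rolled version of that check might look something like the sketch below, computing an ETag from the cached bytes and returning a 304 when the browser already holds the data:

// Hypothetical sketch inside the ResizeImage action, once 'data' is loaded
string etag;
using (var md5 = System.Security.Cryptography.MD5.Create())
{
    // A content hash makes a convenient ETag for immutable images
    etag = "\"" + Convert.ToBase64String(md5.ComputeHash(data)) + "\"";
}

if (Request.Headers["If-None-Match"] == etag)
{
    // The browser's cached copy is still valid, so no body is needed
    return StatusCode(StatusCodes.Status304NotModified);
}

Response.Headers["ETag"] = etag;
return File(data, "image/jpeg");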

Now, we could add this functionality to our existing method, but in my next post I'll look at an alternative approach to caching which lets us achieve this without having to write the code ourselves.

Summary

Adding caching to a method is a very simple way to speed up long-running requests. In this case, we used the IDistributedCache to avoid having to resize an image every time it is requested. Instead, the first time an image is requested, we resize it and store the byte[] data in the cache with a unique key. Subsequent requests for the resized image can just reload the cached data.


Anuraj Parameswaran: Implementing localization in ASP.NET Core

This post is about implementing localization in ASP.NET Core. Localization is the process of adapting a globalized app, which you have already processed for localizability, to a particular culture/locale. Localization is one of the best practices in web application development.


Anuraj Parameswaran: Hardware assisted virtualization and data execution protection must be enabled in the BIOS

This post is about fixing the error "Hardware assisted virtualization and data execution protection must be enabled in the BIOS", which is displayed by Docker when running on Windows 10. Today while running Docker, it threw an error like this.


Damien Bowden: Secure ASP.NET Core MVC with Angular using IdentityServer4 OpenID Connect Hybrid Flow

This article shows how an ASP.NET Core MVC application using Angular in the razor views can be secured using IdentityServer4 and the OpenID Connect Hybrid Flow. The user interface uses server-side rendering for the MVC views, and the Angular app is then implemented in the razor view. The required security features can be added to the application easily using ASP.NET Core, which makes it safe to use the OpenID Connect Hybrid flow; once the user is authenticated and authorised, the token is saved in a secure cookie. This is not an SPA application, it is an ASP.NET Core MVC application with Angular in the razor view. If you are implementing an SPA application, you should use the OpenID Connect Implicit Flow.

Code: https://github.com/damienbod/AspNetCoreMvcAngular

Blogs in this Series

IdentityServer4 configuration for OpenID Connect Hybrid Flow

IdentityServer4 is implemented using ASP.NET Core Identity with SQLite. The application implements the OpenID Connect Hybrid flow. The client is configured to allow the required scopes; for example, the ‘openid’ scope must be added, as must the RedirectUris property, which must match the URL configured on the client by the ASP.NET Core OpenID Connect middleware.

using IdentityServer4;
using IdentityServer4.Models;
using System.Collections.Generic;

namespace QuickstartIdentityServer
{
    public class Config
    {
        public static IEnumerable<IdentityResource> GetIdentityResources()
        {
            return new List<IdentityResource>
            {
                new IdentityResources.OpenId(),
                new IdentityResources.Profile(),
                new IdentityResources.Email(),
                new IdentityResource("thingsscope",new []{ "role", "admin", "user", "thingsapi" } )
            };
        }

        public static IEnumerable<ApiResource> GetApiResources()
        {
            return new List<ApiResource>
            {
                new ApiResource("thingsscope")
                {
                    ApiSecrets =
                    {
                        new Secret("thingsscopeSecret".Sha256())
                    },
                    Scopes =
                    {
                        new Scope
                        {
                            Name = "thingsscope",
                            DisplayName = "Scope for the thingsscope ApiResource"
                        }
                    },
                    UserClaims = { "role", "admin", "user", "thingsapi" }
                }
            };
        }

        // clients want to access resources (aka scopes)
        public static IEnumerable<Client> GetClients()
        {
            // client credentials client
            return new List<Client>
            {
                new Client
                {
                    ClientName = "angularmvcmixedclient",
                    ClientId = "angularmvcmixedclient",
                    ClientSecrets = {new Secret("thingsscopeSecret".Sha256()) },
                    AllowedGrantTypes = GrantTypes.Hybrid,
                    AllowOfflineAccess = true,
                    RedirectUris = { "https://localhost:44341/signin-oidc" },
                    PostLogoutRedirectUris = { "https://localhost:44341/signout-callback-oidc" },
                    AllowedCorsOrigins = new List<string>
                    {
                        "https://localhost:44341/"
                    },
                    AllowedScopes = new List<string>
                    {
                        IdentityServerConstants.StandardScopes.OpenId,
                        IdentityServerConstants.StandardScopes.Profile,
                        IdentityServerConstants.StandardScopes.OfflineAccess,
                        "thingsscope",
                        "role"

                    }
                }
            };
        }
    }
}

MVC Angular Client Configuration

The ASP.NET Core MVC application with Angular is implemented as shown in this post: Using Angular in an ASP.NET Core View with Webpack

The cookie authentication middleware is used to store the access token in a cookie once the user is authorised and authenticated. The OpenIdConnectAuthentication middleware is used to redirect the user to the STS server if the user is not authenticated. The SaveTokens property is set so that the token is persisted in the secure cookie.

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
	AuthenticationScheme = "Cookies"
});

app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions
{
	AuthenticationScheme = "oidc",
	SignInScheme = "Cookies",

	Authority = "https://localhost:44348",
	RequireHttpsMetadata = true,

	ClientId = "angularmvcmixedclient",
	ClientSecret = "thingsscopeSecret",

	ResponseType = "code id_token",
	Scope = { "openid", "profile", "thingsscope" },

	GetClaimsFromUserInfoEndpoint = true,
	SaveTokens = true
});

The Authorize attribute is used to secure the MVC controller or API.

using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Authorization;

namespace AspNetCoreMvcAngular.Controllers
{
    [Authorize]
    public class HomeController : Microsoft.AspNetCore.Mvc.Controller
    {
        public IActionResult Index()
        {
            return View();
        }

        public IActionResult Error()
        {
            return View();
        }
    }
}

CSP: Content Security Policy in the HTTP Headers

Content Security Policy helps you reduce XSS risks. The really brilliant NWebSec middleware can be used to implement this as required. Thanks to André N. Klingsheim for this excellent library. The middleware adds the headers to the HTTP responses.

https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP

In this configuration, mixed content is blocked, scripts may only be loaded from the same origin (with unsafe-eval also permitted), and unsafe inline styles are allowed.

app.UseCsp(opts => opts
	.BlockAllMixedContent()
	.ScriptSources(s => s.Self()).ScriptSources(s => s.UnsafeEval())
	.StyleSources(s => s.UnsafeInline())
);

Set the Referrer-Policy in the HTTP Header

This allows us to restrict the amount of referrer information passed on to other sites when linking to them.

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy

Scott Helme wrote a really good post on this:
https://scotthelme.co.uk/a-new-security-header-referrer-policy/

Again, NWebSec middleware is used to implement this.

app.UseReferrerPolicy(opts => opts.NoReferrer());

Redirect Validation

You can secure the application so that only redirects to your own sites are allowed. For example, only a redirect to IdentityServer4 is allowed.

// Register this earlier if there's middleware that might redirect.
// The IdentityServer4 port needs to be added here. 
// If the IdentityServer4 runs on a different server, this configuration needs to be changed.
app.UseRedirectValidation(t => t.AllowSameHostRedirectsToHttps(44348)); 

Secure Cookies

Only secure cookies should be used to store the session information.
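A sketch of how this could be enforced on the cookie middleware used above (CookieSecure and CookieHttpOnly are the ASP.NET Core 1.x option names):

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationScheme = "Cookies",
    // Never send the auth cookie over plain HTTP...
    CookieSecure = CookieSecurePolicy.Always,
    // ...and keep it out of reach of client-side JavaScript
    CookieHttpOnly = true
});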

You can check this in the Chrome browser developer tools.

XFO: X-Frame-Options

The X-Frame-Options header can be used to prevent the site from being embedded in an iframe on another site. This helps protect against clickjacking.

https://developer.mozilla.org/de/docs/Web/HTTP/Headers/X-Frame-Options

app.UseXfo(xfo => xfo.Deny());

Configuring HSTS: HTTP Strict Transport Security

This HTTP header tells the browser to enforce HTTPS for a set length of time.

app.UseHsts(hsts => hsts.MaxAge(365).IncludeSubdomains());

HSTS does not protect the first request: TOFU (trust on first use).

Once you have a proper cert and a fixed URL, you can request that browsers preload the HSTS settings for your website.

https://hstspreload.org/

https://www.owasp.org/index.php/HTTP_Strict_Transport_Security_Cheat_Sheet

X-Xss-Protection NWebSec

Adds a middleware to the ASP.NET Core pipeline that sets the X-Xss-Protection header (docs from NWebSec).

app.UseXXssProtection(options => options.EnabledWithBlockMode());

CORS

Only the required CORS origins should be allowed when implementing this; disable CORS as much as possible. A locked-down policy is sketched below.
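A sketch, assuming the client runs at the demo URL used elsewhere in this post:

services.AddCors(options =>
{
    options.AddPolicy("AllowedOrigins", policy =>
    {
        // Allow only known origins rather than AllowAnyOrigin()
        policy.WithOrigins("https://localhost:44341")
              .AllowAnyHeader()
              .AllowAnyMethod();
    });
});

// and in Configure:
app.UseCors("AllowedOrigins");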

Cross Site Request Forgery XSRF

See this blog:
Anti-Forgery Validation with ASP.NET Core MVC and Angular

Validating the security Headers

Once you start the application, you can check that all the security headers are added as required:

Here’s the Configure method with all the NWebsec app settings as well as the authentication middleware for the client MVC application.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory, IAntiforgery antiforgery)
{
	loggerFactory.AddConsole(Configuration.GetSection("Logging"));
	loggerFactory.AddDebug();
	loggerFactory.AddSerilog();

	//Registered before static files to always set header
	app.UseHsts(hsts => hsts.MaxAge(365).IncludeSubdomains());
	app.UseXContentTypeOptions();
	app.UseReferrerPolicy(opts => opts.NoReferrer());

	app.UseCsp(opts => opts
		.BlockAllMixedContent()
		.ScriptSources(s => s.Self()).ScriptSources(s => s.UnsafeEval())
		.StyleSources(s => s.UnsafeInline())
	);

	JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();

	if (env.IsDevelopment())
	{
		app.UseDeveloperExceptionPage();
	}
	else
	{
		app.UseExceptionHandler("/Home/Error");
	}

	app.UseCookieAuthentication(new CookieAuthenticationOptions
	{
		AuthenticationScheme = "Cookies"
	});

	app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions
	{
		AuthenticationScheme = "oidc",
		SignInScheme = "Cookies",

		Authority = "https://localhost:44348",
		RequireHttpsMetadata = true,

		ClientId = "angularmvcmixedclient",
		ClientSecret = "thingsscopeSecret",

		ResponseType = "code id_token",
		Scope = { "openid", "profile", "thingsscope" },

		GetClaimsFromUserInfoEndpoint = true,
		SaveTokens = true
	});

	var angularRoutes = new[] {
		 "/default",
		 "/about"
	 };

	app.Use(async (context, next) =>
	{
		string path = context.Request.Path.Value;
		if (path != null && !path.ToLower().Contains("/api"))
		{
			// XSRF-TOKEN used by angular in the $http if provided
			  var tokens = antiforgery.GetAndStoreTokens(context);
			context.Response.Cookies.Append("XSRF-TOKEN", tokens.RequestToken, new CookieOptions { HttpOnly = false, Secure = true });
		}

		if (context.Request.Path.HasValue && null != angularRoutes.FirstOrDefault(
			(ar) => context.Request.Path.Value.StartsWith(ar, StringComparison.OrdinalIgnoreCase)))
		{
			context.Request.Path = new PathString("/");
		}

		await next();
	});

	app.UseDefaultFiles();
	app.UseStaticFiles();

	//Registered after static files, to set headers for dynamic content.
	app.UseXfo(xfo => xfo.Deny());

	// Register this earlier if there's middleware that might redirect.
	// The IdentityServer4 port needs to be added here. 
	// If the IdentityServer4 runs on a different server, this configuration needs to be changed.
	app.UseRedirectValidation(t => t.AllowSameHostRedirectsToHttps(44348)); 

	app.UseXXssProtection(options => options.EnabledWithBlockMode());

	app.UseMvc(routes =>
	{
		routes.MapRoute(
			name: "default",
			template: "{controller=Home}/{action=Index}/{id?}");
	});  
}

Links:

https://www.scottbrady91.com/OpenID-Connect/OpenID-Connect-Flows

https://docs.nwebsec.com/en/latest/index.html

https://www.nwebsec.com/

https://github.com/NWebsec/NWebsec

https://content-security-policy.com/

https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy

https://scotthelme.co.uk/a-new-security-header-referrer-policy/

https://developer.mozilla.org/de/docs/Web/HTTP/Headers/X-Frame-Options

https://www.owasp.org/index.php/HTTP_Strict_Transport_Security_Cheat_Sheet

https://gun.io/blog/tofu-web-security/

https://en.wikipedia.org/wiki/Trust_on_first_use

http://www.dotnetnoob.com/2013/07/ramping-up-aspnet-session-security.html

http://openid.net/specs/openid-connect-core-1_0.html

https://www.ssllabs.com/



Dominick Baier: Financial APIs and IdentityServer

Right now there is quite some movement in the financial sector towards APIs and “collaboration” scenarios. The OpenID Foundation started a dedicated working group on securing Financial APIs (FAPIs) and the upcoming Revised Payment Service EU Directive (PSD2 – official document, vendor-based article) will bring quite some change to how technology is used at banks as well as to banking itself.

Googling for PSD2 shows quite a lot of ads and sponsored search results, which tells me that there is money to be made (pun intended).

We have a couple of customers that asked me about FAPIs and how IdentityServer can help them in this new world. In short, the answer is that both FAPIs in the OIDF sense and PSD2 are based on tokens and are either inspired by OpenID Connect/OAuth 2 or even tightly coupled with them. So moving to these technologies is definitely the first step.

The purpose of the OIDF “Financial API Part 1: Read-only API security profile” is to select a subset of the possible OpenID Connect options for clients and providers that have suitable security for the financial sector. Let’s have a look at some of those for OIDC providers (edited):

  • shall support both public and confidential clients;
  • shall authenticate the confidential client at the Token Endpoint using one of the following methods:
    • TLS mutual authentication [TLSM];
    • JWS Client Assertion using the client_secret or a private key as specified in section 9 of [OIDC];
  • shall require a key of size 2048 bits or larger if RSA algorithms are used for the client authentication;
  • shall require a key of size 160 bits or larger if elliptic curve algorithms are used for the client authentication;
  • shall support PKCE [RFC7636]
  • shall require Redirect URIs to be pre-registered;
  • shall require the redirect_uri parameter in the authorization request;
  • shall require the value of redirect_uri to exactly match one of the pre-registered redirect URIs;
  • shall require user authentication at LoA 2 as defined in [X.1254] or more;
  • shall require explicit consent by the user to authorize the requested scope if it has not been previously authorized;
  • shall return the token response as defined in 4.1.4 of [RFC6749];
  • shall return the list of allowed scopes with the issued access token;
  • shall provide opaque non-guessable access tokens with a minimum of 128 bits as defined in section 5.1.4.2.2 of [RFC6819].
  • should provide a mechanism for the end-user to revoke access tokens and refresh tokens granted to a Client as in 16.18 of [OIDC].
  • shall support the authentication request as in Section 3.1.2.1 of [OIDC];
  • shall issue an ID Token in the token response when openid was included in the requested scope as in Section 3.1.3.3 of [OIDC] with its sub value corresponding to the authenticated user and optional acr value in ID Token.

So to summarize, these are mostly best practices for implementing OIDC and OAuth 2 - just formalized. I am sure there will also be a certification process around that at some point.

Interesting to note is the requirement for PKCE and the removal of plain client secrets in favour of mutual TLS and client JWT assertions. IdentityServer supports all of the above requirements.

In contrast, the “Read and Write Profile” (currently a working draft) steps up security significantly by demanding proof of possession tokens via token binding, requiring signed authentication requests and encrypted identity tokens, and limiting the authentication flow to hybrid only. The current list from the draft:

  • shall require the request or request_uri parameter to be passed as a JWS signed JWT as in clause 6 of OIDC;
  • shall require the response_type values code id_token or code id_token token;
  • shall return ID Token as a detached signature to the authorization response;
  • shall include state hash, s_hash, in the ID Token to protect the state value;
  • shall only issue holder of key authorization code, access token, and refresh token for write operations;
  • shall support OAUTB or MTLS as a holder of key mechanism;
  • shall support user authentication at LoA 3 or greater as defined in X.1254;
  • shall support signed and encrypted ID Tokens

Both profiles also have increased security requirements for clients – which is subject of a future post.

In short – exciting times ahead and we are constantly improving IdentityServer to make it ready for these new scenarios. Feel free to get in touch if you are interested.




Damien Bowden: Using Angular in an ASP.NET Core View with Webpack

This article shows how Angular can be run inside an ASP.NET Core MVC view using Webpack to build the Angular application. By using Webpack, the Angular application can be built using the AOT and Angular lazy loading features, and also benefit from the advantages of a server-side rendered view. If you prefer to separate the SPA and the server into 2 applications, use Angular CLI or a similar template.

Code: https://github.com/damienbod/AspNetCoreMvcAngular

Blogs in this Series

The application was created using the .NET Core ASP.NET Core application template in Visual Studio 2017. A package.json npm file was added to the project. The file contains the frontend build scripts, as well as the npm packages required to build the application using Webpack, and also the Angular packages.

{
  "name": "angular-webpack-visualstudio",
  "version": "1.0.0",
  "description": "An Angular VS template",
  "author": "",
  "license": "ISC",
    "repository": {
    "type": "git",
    "url": "https://github.com/damienbod/Angular2WebpackVisualStudio.git"
  },
  "scripts": {
    "ngc": "ngc -p ./tsconfig-aot.json",
    "webpack-dev": "set NODE_ENV=development && webpack",
    "webpack-production": "set NODE_ENV=production && webpack",
    "build-dev": "npm run webpack-dev",
    "build-production": "npm run ngc && npm run webpack-production",
    "watch-webpack-dev": "set NODE_ENV=development && webpack --watch --color",
    "watch-webpack-production": "npm run build-production --watch --color",
    "publish-for-iis": "npm run build-production && dotnet publish -c Release",
    "test": "karma start"
  },
  "dependencies": {
    "@angular/common": "4.2.2",
    "@angular/compiler": "4.2.2",
    "@angular/compiler-cli": "4.2.2",
    "@angular/platform-server": "4.2.2",
    "@angular/core": "4.2.2",
    "@angular/forms": "4.2.2",
    "@angular/http": "4.2.2",
    "@angular/platform-browser": "4.2.2",
    "@angular/platform-browser-dynamic": "4.2.2",
    "@angular/router": "4.2.2",
    "@angular/upgrade": "4.2.2",
    "@angular/animations": "4.2.2",
    "angular-in-memory-web-api": "0.3.1",
    "core-js": "2.4.1",
    "reflect-metadata": "0.1.10",
    "rxjs": "5.3.0",
    "zone.js": "0.8.8",
    "bootstrap": "^3.3.7",
    "ie-shim": "~0.1.0"
  },
  "devDependencies": {
    "@types/node": "7.0.13",
    "@types/jasmine": "2.5.47",
    "angular2-template-loader": "0.6.2",
    "angular-router-loader": "^0.6.0",
    "awesome-typescript-loader": "3.1.2",
    "clean-webpack-plugin": "^0.1.16",
    "concurrently": "^3.4.0",
    "copy-webpack-plugin": "^4.0.1",
    "css-loader": "^0.28.0",
    "file-loader": "^0.11.1",
    "html-webpack-plugin": "^2.28.0",
    "jquery": "^3.2.1",
    "json-loader": "^0.5.4",
    "node-sass": "^4.5.2",
    "raw-loader": "^0.5.1",
    "rimraf": "^2.6.1",
    "sass-loader": "^6.0.3",
    "source-map-loader": "^0.2.1",
    "style-loader": "^0.16.1",
    "ts-helpers": "^1.1.2",
    "tslint": "^5.1.0",
    "tslint-loader": "^3.5.2",
    "typescript": "2.3.2",
    "url-loader": "^0.5.8",
    "webpack": "^2.4.1",
    "webpack-dev-server": "2.4.2",
    "jasmine-core": "2.5.2",
    "karma": "1.6.0",
    "karma-chrome-launcher": "2.0.0",
    "karma-jasmine": "1.1.0",
    "karma-sourcemap-loader": "0.3.7",
    "karma-spec-reporter": "0.0.31",
    "karma-webpack": "2.0.3"
  },
  "-vs-binding": {
    "ProjectOpened": [
      "watch-webpack-dev"
    ]
  }
}

The Angular application is added to the angularApp folder. This frontend app implements a default module, and also a second about module which is lazy loaded when required (when the About button is clicked). See Angular Lazy Loading with Webpack 2 for further details.

The _Layout.cshtml MVC view is also added here as a template. This is built into the MVC application's Views folder.

The webpack.prod.js uses all the Angular project files and builds them into pre-compiled AOT bundles, and also a separate bundle for the about module which is lazy loaded. Webpack adds the built bundles to the _Layout.cshtml template and copies this to the Views/Shared/_Layout.cshtml file.

var path = require('path');

var webpack = require('webpack');

var HtmlWebpackPlugin = require('html-webpack-plugin');
var CopyWebpackPlugin = require('copy-webpack-plugin');
var CleanWebpackPlugin = require('clean-webpack-plugin');
var helpers = require('./webpack.helpers');

console.log('@@@@@@@@@ USING PRODUCTION @@@@@@@@@@@@@@@');

module.exports = {

    entry: {
        'vendor': './angularApp/vendor.ts',
        'polyfills': './angularApp/polyfills.ts',
        'app': './angularApp/main-aot.ts' // AoT compilation
    },

    output: {
        path: __dirname + '/wwwroot/',
        filename: 'dist/[name].[hash].bundle.js',
        chunkFilename: 'dist/[id].[hash].chunk.js',
        publicPath: ''
    },

    resolve: {
        extensions: ['.ts', '.js', '.json', '.css', '.scss', '.html']
    },

    devServer: {
        historyApiFallback: true,
        stats: 'minimal',
        outputPath: path.join(__dirname, 'wwwroot/')
    },

    module: {
        rules: [
            {
                test: /\.ts$/,
                loaders: [
                    'awesome-typescript-loader',
                    'angular-router-loader?aot=true&genDir=aot/'
                ]
            },
            {
                test: /\.(png|jpg|gif|woff|woff2|ttf|svg|eot)$/,
                loader: 'file-loader?name=assets/[name]-[hash:6].[ext]'
            },
            {
                test: /favicon.ico$/,
                loader: 'file-loader?name=/[name].[ext]'
            },
            {
                test: /\.css$/,
                loader: 'style-loader!css-loader'
            },
            {
                test: /\.scss$/,
                exclude: /node_modules/,
                loaders: ['style-loader', 'css-loader', 'sass-loader']
            },
            {
                test: /\.html$/,
                loader: 'raw-loader'
            }
        ],
        exprContextCritical: false
    },

    plugins: [
        new CleanWebpackPlugin(
            [
                './wwwroot/dist',
                './wwwroot/assets'
            ]
        ),
        new webpack.NoEmitOnErrorsPlugin(),
        new webpack.optimize.UglifyJsPlugin({
            compress: {
                warnings: false
            },
            output: {
                comments: false
            },
            sourceMap: false
        }),
        new webpack.optimize.CommonsChunkPlugin(
            {
                name: ['vendor', 'polyfills']
            }),

        new HtmlWebpackPlugin({
            filename: '../Views/Shared/_Layout.cshtml',
            inject: 'body',
            template: 'angularApp/_Layout.cshtml'
        }),

        new CopyWebpackPlugin([
            { from: './angularApp/images/*.*', to: 'assets/', flatten: true }
        ])
    ]
};

The Startup.cs is configured to load the configuration and middleware for the application, using client or server routing as required.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using AspNetCoreMvcAngular.Repositories.Things;
using Microsoft.AspNetCore.Http;

namespace AspNetCoreMvcAngular
{
    public class Startup
    {
        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        public IConfigurationRoot Configuration { get; }

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddCors(options =>
            {
                options.AddPolicy("AllowAllOrigins",
                    builder =>
                    {
                        builder
                            .AllowAnyOrigin()
                            .AllowAnyHeader()
                            .AllowAnyMethod();
                    });
            });

            services.AddSingleton<IThingsRepository, ThingsRepository>();

            services.AddMvc();
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole(Configuration.GetSection("Logging"));
            loggerFactory.AddDebug();

            var angularRoutes = new[] {
                 "/default",
                 "/about"
             };

            app.Use(async (context, next) =>
            {
                if (context.Request.Path.HasValue && null != angularRoutes.FirstOrDefault(
                    (ar) => context.Request.Path.Value.StartsWith(ar, StringComparison.OrdinalIgnoreCase)))
                {
                    context.Request.Path = new PathString("/");
                }

                await next();
            });

            app.UseCors("AllowAllOrigins");

            app.UseDefaultFiles();
            app.UseStaticFiles();

            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
                app.UseBrowserLink();
            }
            else
            {
                app.UseExceptionHandler("/Home/Error");
            }

            app.UseStaticFiles();

            app.UseMvc(routes =>
            {
                routes.MapRoute(
                    name: "default",
                    template: "{controller=Home}/{action=Index}/{id?}");
            });
        }
    }
}

The application can be built and run using the command line. The client application needs to be built before you can deploy or run!

> npm install
> npm run build-production
> dotnet restore
> dotnet run

You can also build inside Visual Studio 2017 using the Task Runner Explorer. If building inside Visual Studio 2017, you need to configure the NodeJS path correctly to use the right version.

Now you have the best of both worlds in the UI.

Note:
You could also use Microsoft ASP.NET Core JavaScript Services, which supports server-side prerendering but not client-side lazy loading. If you're using Microsoft ASP.NET Core JavaScript Services, configure the application to use AOT builds for the Angular template.

Links:

Angular Templates, Seeds, Starter Kits

https://github.com/damienbod/AngularWebpackVisualStudio

https://damienbod.com/2016/06/12/asp-net-core-angular2-with-webpack-and-visual-studio/

https://github.com/aspnet/JavaScriptServices



Andrew Lock: Using ImageSharp to resize images in ASP.NET Core - Part 2

Using ImageSharp to resize images in ASP.NET Core - Part 2

In my last post, I showed a way to crop and resize an image downloaded from a URL, using the ImageSharp library. This was in response to a post in which Dmitry Sikorsky used the CoreCompat.System.Drawing library to achieve the same thing. The purpose of that post was simply to compare the two libraries from an API perspective, and to demonstrate ImageSharp in the application.

The code in that post was not meant to be used in production, and contained a number of issues such as not disposing objects correctly, creating a fresh HttpClient for every request, and happily downloading files from any old URL!

In this post I'll show a few tweaks to make the code from the last post a little more production worthy. In particular, rather than downloading a file from any URL provided in the querystring, we'll load the file from the web root folder on disk, if the file exists.

Add the NuGet.config file

As mentioned in my previous post, ImageSharp is currently only published on MyGet, not NuGet, so you'll need to add a NuGet.config file to ensure you can restore the ImageSharp library.

<?xml version="1.0" encoding="utf-8"?>  
<configuration>  
  <packageSources>
    <add key="ImageSharp Nightly" value="https://www.myget.org/F/imagesharp/api/v3/index.json" />
  </packageSources>
</configuration>  

Once you've added the config file, you can add the ImageSharp library to the project.

Using a catch-all route parameter

The first step I wanted to take was to move from passing the URL to the target image as a querystring parameter to part of the URL path. As a part of this, we'll switch from allowing URLs to be absolute paths to relative paths.

Previously, you would pass the URL to resize something like the following:

/image?url=https://assets-cdn.github.com/images/modules/logos_page/GitHub-Mark.png&width=200&height=100

Instead, our updated route will look something like the following, where the path to the image to resize is images/clouds.jpg:

/resize/200/100/images/clouds.jpg

All that's required is to introduce a [Route] attribute with appropriate parameters for the dimensions and a catch-all parameter, for example:

[Route("/resize/width/height/{*url}")]
public IActionResult ResizeImage(string url, int width, int height)  
{
    /* Method implementation */
}

This gives us URLs that are easier to read and work with (plus it will give us another benefit, as you'll see in the next post).

Validating the requested file exists

Before we try and load the file from disk, we first need to make sure that a valid file has been requested. To do so, we'll use the FileInfo class and the IFileProvider interface.

public class HomeController : Controller  
{
    private readonly IFileProvider _fileProvider;

    [Route("/image/{width}/{height}/{*url}")]
    public IActionResult ResizeImage(string url,  int width, int height)
    {
        if (width < 0 || height < 0) 
        { 
            return BadRequest(); 
        }

        var imagePath = PathString.FromUriComponent("/" + url);
        var fileInfo = _fileProvider.GetFileInfo(imagePath);
        if (!fileInfo.Exists) { return NotFound(); }

        /* Load image, resize and return */
    }
}

First, we perform some simple parameter validation to make sure the requested dimensions aren't less than zero, and if that fails, we return a 400 result.

I'm going to treat the value 0 as a special case for now - if you pass zero in either width or height then we'll ignore that value, and use the original image's dimension.

Assuming the width and height are valid, we try and get the FileInfo using the injected IFileProvider, and if it deems the file doesn't exist, we return a 404.

So the first question is, where does the implementation of IFileProvider come from?

WebRootFileProvider vs ContentRootFileProvider

The IHostingEnvironment exposes two IFileProviders:

  • ContentRootFileProvider
  • WebRootFileProvider

These file providers allow serving files from the ContentRootPath and WebRootPath respectively. By default, the ContentRootPath points to the root of the project folder, while WebRootPath points to the wwwroot folder.

[Screenshot: project folder structure, with the content root at the project folder and the web root at wwwroot]

For this example, we only want to serve files from the wwwroot folder - serving files from anywhere else would be a security risk - so we use the WebRootFileProvider property, by accessing it from an IHostingEnvironment injected into the constructor:

public HomeController(IHostingEnvironment env)  
{
    _fileProvider = env.WebRootFileProvider;
}

Resizing the image

Once we have validated the file exists, we can continue with the rest of the action method. This part is very similar to the previous post, just tweaked a little using suggestions from James South. We use the FileInfo object to obtain a Stream for the file we want, and load it into memory.

Once we have loaded the image, we can resize it. For this example, we'll just use the values provided in the URL, and we'll always save the image as a jpeg, so we can use the SaveAsJpeg extension method:

[Route("/image/{width}/{height}/{*url}")]
public IActionResult ResizeImage(string url, int width, int height)  
{
    if (width < 0 || height < 0 ) { return BadRequest(); }

    var imagePath = PathString.FromUriComponent("/" + url);
    var fileInfo = _fileProvider.GetFileInfo(imagePath);
    if (!fileInfo.Exists) { return NotFound(); }

    var outputStream = new MemoryStream();
    using (var inputStream = fileInfo.CreateReadStream())
    using (var image = Image.Load(inputStream))
    {
        image
            .Resize(width, height)
            .SaveAsJpeg(outputStream);
    }

    outputStream.Seek(0, SeekOrigin.Begin);

    return File(outputStream, "image/jpg");
}

Note, if you pass 0 for either width or height, by default ImageSharp will preserve the original aspect ratio when resizing. Requesting /image/200/0/images/clouds.jpg, for example, returns an image 200 pixels wide, with the height scaled proportionally.

With this revised action method, we have an action closer to something we'd actually use in practice.

There are still some aspects that we would likely want to improve before using this in production. In particular, we would likely want some sort of caching of the final output, so we are not doing an expensive resize operation with every request. I'll look at fixing this in a follow-up post.

Summary

This post showed a revised version of the "crop and resize" action method from my previous post. In this post, I stopped loading the image with HttpClient and instead required that it already be located in the web app's wwwroot folder. The file was loaded using the IHostingEnvironment.WebRootFileProvider property, and finally resized in a more fluent way, ensuring we dispose the underlying streams.


Andrew Lock: Using ImageSharp to resize images in ASP.NET Core - a comparison with CoreCompat.System.Drawing

Using ImageSharp to resize images in ASP.NET Core - a comparison with CoreCompat.System.Drawing

Currently, one of the significant missing features in .NET Core and .NET Standard is the System.Drawing APIs that you can use, among other things, for server-side image processing in ASP.NET Core. Bertrand Le Roy gave a great run down of the various alternatives available in Jan 2017, each with different pros and cons.

I was reading a post by Dmitry Sikorsky yesterday describing how to use one of these libraries, the CoreCompat.System.Drawing package, to resize an image in ASP.NET Core. This package is designed to mimic the existing System.Drawing APIs (it's a .NET Core port, of the Mono port, of System.Drawing!) so if you need a drop in replacement for System.Drawing then it's a good place to start.

I'm going to need to start doing some image processing soon, so I wanted to take a look at how the code for working with CoreCompat.System.Drawing would compare to using the ImageSharp package. This is a brand new library that is designed from the ground up to be cross-platform by using only managed-code. This means it will probably not be as performant as libraries that use OS-specific features, but on the plus side, it is completely cross platform.

For the purposes of this comparison, I'm going to start with the code presented by Dmitry in his post and convert it to use ImageSharp.

The sample app

This post is based on the code from Dmitry's post, so it uses the same sample app. This contains a single controller, the ImageController, which you can use to crop and resize an image from a given URL.

For example, a request might look like the following:

/image?url=https://assets-cdn.github.com/images/modules/logos_page/GitHub-Mark.png&sourcex=120&sourcey=100&sourcewidth=360&sourceheight=360&destinationwidth=100&destinationheight=100

This will download the GitHub logo from https://assets-cdn.github.com/images/modules/logos_page/GitHub-Mark.png:

[Image: the GitHub logo]

It will then crop it using the rectangle specified by sourcex=120&sourcey=100&sourcewidth=360&sourceheight=360, and resize the output to 100×100. Finally, it will render the result in the response as a jpeg.

[Image: the cropped, 100×100 output image]

This is the same functionality Dmitry described; I will just convert his code to use ImageSharp instead.

Installing ImageSharp

The first step is to add the ImageSharp package to your project. Currently, this is not quite as smooth as it will be, as it is not yet published on NuGet, but instead only to a MyGet feed. This is only a temporary situation while the code-base stabilises - it will be published to NuGet at that point - but at the moment it is a bit of a barrier to adding it to your project.

Note, ImageSharp actually is published on NuGet, but that package is currently just a placeholder for when the package is eventually published. Don't use it!

To install the package from the MyGet feed, add a NuGet.config file to your solution folder, specifying the location of the feed:

<?xml version="1.0" encoding="utf-8"?>  
<configuration>  
  <packageSources>
    <add key="ImageSharp Nightly" value="https://www.myget.org/F/imagesharp/api/v3/index.json" />
  </packageSources>
</configuration>  

You can now add the ImageSharp package to your csproj file, and run a restore. I specified the version 1.0.0-* to fetch the latest version from the feed (1.0.0-alpha7 in my case).

<PackageReference Include="ImageSharp" Version="1.0.0-*" />  

When you run dotnet restore you should see that the CLI has used the ImageSharp MyGet feed, where it lists the config files used:

$ dotnet restore
  Restoring packages for C:\Users\Sock\Repos\andrewlock\AspNetCoreImageResizingService\AspNetCoreImageResizingService\AspNetCoreImageResizingService.csproj...
  Installing ImageSharp 1.0.0-alpha7-00006.
  Generating MSBuild file C:\Users\Sock\Repos\andrewlock\AspNetCoreImageResizingService\AspNetCoreImageResizingService\obj\AspNetCoreImageResizingService.csproj.nuget.g.props.
  Writing lock file to disk. Path: C:\Users\Sock\Repos\andrewlock\AspNetCoreImageResizingService\AspNetCoreImageResizingService\obj\project.assets.json
  Restore completed in 2.76 sec for C:\Users\Sock\Repos\andrewlock\AspNetCoreImageResizingService\AspNetCoreImageResizingService\AspNetCoreImageResizingService.csproj.

  NuGet Config files used:
      C:\Users\Sock\Repos\andrewlock\AspNetCoreImageResizingService\NuGet.Config
      C:\Users\Sock\AppData\Roaming\NuGet\NuGet.Config
      C:\Program Files (x86)\NuGet\Config\Microsoft.VisualStudio.Offline.config

  Feeds used:
      https://www.myget.org/F/imagesharp/api/v3/index.json
      https://api.nuget.org/v3/index.json
      C:\Program Files (x86)\Microsoft SDKs\NuGetPackages\

  Installed:
      1 package(s) to C:\Users\Sock\Repos\andrewlock\AspNetCoreImageResizingService\AspNetCoreImageResizingService\AspNetCoreImageResizingService.csproj

Adding the NuGet.config file is a bit of a pain, but it's a step that will hopefully go away soon, when the package makes its way onto NuGet.org. On the plus side, you only need to add a single package to your project for this example.

In contrast, to add the CoreCompat.System.Drawing packages you have to include three different packages when writing cross-platform code - the library itself, and the runtime components for both Linux and OS X:

<PackageReference Include="CoreCompat.System.Drawing" Version="1.0.0-beta006" />  
<PackageReference Include="runtime.linux-x64.CoreCompat.System.Drawing" Version="1.0.0-beta009" />  
<PackageReference Include="runtime.osx.10.10-x64.CoreCompat.System.Drawing" Version="1.0.1-beta004" />  

Obviously, if you are running on only a single platform, then this probably won't be an issue for you, but it's something to take into consideration.

Loading an image from a stream

Now the library is installed, we can start converting the code. The first step in the app is to download the image provided in the URL.

Note that this code is very much sample only - downloading files sent to you in query arguments is probably not advisable, plus you should probably be using a static HttpClient, disposing correctly etc!

For the CoreCompat.System.Drawing library, the code doing the work reads the stream into a Bitmap, which is then set to the Image object.

Image image = null;  
HttpClient httpClient = new HttpClient();  
HttpResponseMessage response = await httpClient.GetAsync(url);  
Stream inputStream = await response.Content.ReadAsStreamAsync();

using (Bitmap temp = new Bitmap(inputStream))  
{
    image = new Bitmap(temp);
}

While for ImageSharp we have the following:

Image image = null;  
HttpClient httpClient = new HttpClient();  
HttpResponseMessage response = await httpClient.GetAsync(url);  
Stream inputStream = await response.Content.ReadAsStreamAsync();

image = Image.Load(inputStream);  

Obviously the HttpClient code is identical here, but there is less faffing required to actually read an image from the response stream. The ImageSharp API is much more intuitive - I have to admit I always have to refresh my memory on how the System.Drawing Image and Bitmap classes interact! Definitely a win to ImageSharp I think.

It's worth noting that the Image classes in these two examples are completely different types, in different namespaces, so are not interoperable in general.

Cropping and Resizing an image

Once the image is in memory, the next step is to crop and resize it to create our output image. The CropImage function for the CoreCompat.System.Drawing is as follows:

private Image CropImage(Image sourceImage, int sourceX, int sourceY, int sourceWidth, int sourceHeight, int destinationWidth, int destinationHeight)  
{
  Image destinationImage = new Bitmap(destinationWidth, destinationHeight);
  Graphics g = Graphics.FromImage(destinationImage);

  g.DrawImage(
    sourceImage,
    new Rectangle(0, 0, destinationWidth, destinationHeight),
    new Rectangle(sourceX, sourceY, sourceWidth, sourceHeight),
    GraphicsUnit.Pixel
  );

  return destinationImage;
}

This code creates the destination image first, generates a Graphics object to allow manipulating the content, and then draws a region from the first image onto the second, resizing as it does so.

This does the job, but it's not exactly simple to follow - if I hadn't told you, would you have spotted that the image is being resized as well as cropped? Maybe, given we set the destinationImage size, but possibly not if you were just looking at the DrawImage function.

In contrast, the ImageSharp version of this method would look something like the following:

private Image<Rgba32> CropImage(Image sourceImage, int sourceX, int sourceY, int sourceWidth, int sourceHeight, int destinationWidth, int destinationHeight)  
{
    return sourceImage
        .Crop(new Rectangle(sourceX, sourceY, sourceWidth, sourceHeight))
        .Resize(destinationWidth, destinationHeight);
}

I think you'd agree, this is much easier to understand! Instead of using a mapping from one coordinate system to another, handling both the crop and resize in one operation, it has two well-named methods that are easy to understand.

One slight quirk in the ImageSharp version is that this method returns an Image<Rgba32> when we gave it an Image. The definition for this Image object is:

public sealed class Image : Image<Rgba32> { }  

so the Image is-an Image<Rgba32>. This isn't a big issue - it would just be nice, when working with the Image class, to get an Image back from the manipulation functions. I still count this as a win for ImageSharp.

Saving the image to a stream

The final part of the app is to save the cropped image to the response stream and return it to the browser.

The CoreCompat.System.Drawing version of saving the image to a stream looks like the following. We first download the image, crop it, and then save it to a MemoryStream. This stream can then be used to create a file response to return to the browser (check the example source code or Dmitry's post for details).

Image sourceImage = await this.LoadImageFromUrl(url);

Image destinationImage = this.CropImage(sourceImage, sourceX, sourceY, sourceWidth, sourceHeight, destinationWidth, destinationHeight);  
Stream outputStream = new MemoryStream();

destinationImage.Save(outputStream, ImageFormat.Jpeg);  

The ImageSharp equivalent is very similar. It just involves changing the type of the destination image to be Image<Rgba32> (as mentioned in the previous section), and updating the last line, in which we save the image to a stream.

Image sourceImage = await this.LoadImageFromUrl(url);

Image<Rgba32> destinationImage = this.CropImage(sourceImage, sourceX, sourceY, sourceWidth, sourceHeight, destinationWidth, destinationHeight);  
Stream outputStream = new MemoryStream();

destinationImage.Save(outputStream, new JpegEncoder());  

Instead of using an enum to specify the output formatting, you pass an instance of an IImageEncoder, in this case the JpegEncoder. This approach is more extensible, though it is slightly less discoverable than the System.Drawing approach.

Note that there are many different overloads of Image<T>.Save() that you can use to specify all sorts of different encoding options.
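To get the result back to the browser from an MVC action, the MemoryStream just needs rewinding before it is returned as a file result - a minimal sketch (the content type and File helper used here are my choice, not from the original posts):

// rewind the stream so the response is written from the first byte,
// then let MVC stream it back with a JPEG content type
outputStream.Seek(0, SeekOrigin.Begin);
return File(outputStream, "image/jpeg");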

Wrapping up

And that's it. Everything you need to convert from CoreCompat.System.Drawing to ImageSharp. Personally, I really like how ImageSharp is shaping up - it has a nice API, is fully managed, cross-platform, and even targets .NET Standard 1.1 - no mean feat! It may not currently hit the performance of other libraries that rely on native code, but with all the improvements and progress around Span<T>, it may be able to come close to parity down the line.

If you're interested in the project, do check it out on GitHub and consider contributing - it will be great to get the project to an RTM state.

Thanks are due to James South for creating the ImageSharp project, and also to Dmitry Sikorsky for inspiring me to write this post! You can find the source code for his project on GitHub here, and the source for my version here.


Anuraj Parameswaran: Post requests from Azure Logic apps

This post is about sending POST requests to services from Azure Logic Apps. Logic Apps provide a way to simplify and implement scalable integrations and workflows in the cloud. They provide a visual designer to model and automate your process as a series of steps known as a workflow. There are many connectors across the cloud and on-premises to quickly integrate across services and protocols. A logic app begins with a trigger (like 'When an account is added to Dynamics CRM') and after firing can begin many combinations of actions, conversions, and condition logic.
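As a flavour of what the post covers, an HTTP POST action in the Logic Apps workflow definition language looks something like the following sketch (the URI and body are placeholders, not from the post):

"HTTP": {
  "type": "Http",
  "runAfter": {},
  "inputs": {
    "method": "POST",
    "uri": "https://example.com/api/orders",
    "headers": { "Content-Type": "application/json" },
    "body": { "orderId": 42 }
  }
}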


Andrew Lock: Creating a basic Web API template using dotnet new custom templates

Creating a basic Web API template using dotnet new custom templates

In my last post, I showed a simple, stripped-down version of the Web API template with the Razor dependencies removed.

As an excuse to play with the new CLI templating functionality, I decided to turn this template into a dotnet new template.

For details on this new capability, check out the announcement blog post, or the excellent series by Muhammed Rehan Saeed. In brief, the .NET CLI includes functionality to let you create your own templates using dotnet new, which can be distributed as zip files, installed from the source code project folder or as NuGet packages.

The Basic Web API template

I decided to wrap the basic web API template I created in my last post so that you can easily use it to create your own Web API projects without Razor templates.

To do so, I followed Muhammed Rehan Saeed's blog posts (and borrowed heavily from his example template!) to create a version of the basic Web API template you can install from NuGet.

This template creates a very stripped-down version of the web API project, with the Razor functionality removed. If you are looking for a more fully-featured template, I recommend checking out the ASP.NET MVC Boilerplate project.

If you have installed Visual Studio 2017, you can use the .NET CLI to install new templates and use them to create projects:

  1. Run dotnet new --install "NetEscapades.Templates::*" to install the project template
  2. Run dotnet new basicwebapi --help to see how to select the various features to include in the project
  3. Run dotnet new basicwebapi --name "MyTemplate" along with any other custom options to create a project from the template.

This will create a new basic Web API project in the current folder.

Options and feature selection

One of the great features of the .NET CLI templates is the ability to do feature selection. This lets you add or remove features from the template at the time it is generated.

I added a number of options to the template (again, heavily inspired by the ASP.NET Boilerplate project). This lets you add features that will be commonly included in a Web API project, such as CORS, DataAnnotations, and the ApiExplorer.

You can view these options by running dotnet new basicwebapi --help:

$dotnet new basicwebapi --help
Template Instantiation Commands for .NET Core CLI.

Usage: dotnet new [arguments] [options]

Arguments:  
  template  The template to instantiate.

Options:  
  -l|--list         List templates containing the specified name.
  -lang|--language  Specifies the language of the template to create
  -n|--name         The name for the output being created. If no name is specified, the name of the current directory is used.
  -o|--output       Location to place the generated output.
  -h|--help         Displays help for this command.
  -all|--show-all   Shows all templates


Basic ASP.NET Core Web API (C#)  
Author: Andrew Lock  
Options:

  -A|--ApiExplorer                 The ApiExplorer functionality allows you to expose metadata about your API endpoints. You can use it to generate documentation about your application. Enabling this option will add the ApiExplorer libraries and services to your project.
                                   bool - Optional
                                   Default: false

  -C|--Controller                  If true, this will generate an example ValuesController in your project.
                                   bool - Optional
                                   Default: false

  -D|--DataAnnotations             DataAnnotations provide declarative metadata and validations for models in ASP.NET Core.
                                   bool - Optional
                                   Default: false

  -CO|--CORS                       Browser security prevents a web page from making AJAX requests to another domain. This restriction is called the same-origin policy, and prevents a malicious site from reading sensitive data from another site. CORS is a W3C standard that allows a server to relax the same-origin policy. Using CORS, a server can explicitly allow some cross-origin requests while rejecting others.
                                   bool - Optional
                                   Default: true

  -T|--Title                       The name of the project which determines the assembly product name. If the Swagger feature is enabled, shows the title on the Swagger UI.
                                   string - Optional
                                   Default: BasicWebApi

  -De|--Description                A description of the project which determines the assembly description. If the Swagger feature is enabled, shows the description on the Swagger UI.

                                   string - Optional
                                   Default: BasicWebApi

  -Au|--Author                     The name of the author of the project which determines the assembly author, company and copyright information.

                                   string - Optional
                                   Default: Project Author

  -F|--Framework                   Decide which version of the .NET Framework to target.

                                       .NET Core         - Run cross platform (on Windows, Mac and Linux). The framework is made up of NuGet packages which can be shipped with the application so it is fully stand-alone.

                                       .NET Framework    - Gives you access to the full breadth of libraries available in .NET instead of the subset available in .NET Core but requires it to be pre-installed.

                                       Both              - Target both .NET Core and .NET Framework.

                                   Default: Both

  -I|--IncludeApplicationInsights  Whether or not to include Application Insights in the project
                                   bool - Optional
                                   Default: false

You can invoke the template with any or all of these options, for example:

$dotnet new basicwebapi --Controller false --DataAnnotations true -Au "Andrew Lock"
Content generation time: 762.9798 ms  
The template "Basic ASP.NET Core Web API" created successfully.  

Source code for the template

If you're interested to see the source for the template, you can view it on GitHub. There you will find an example of the template.json file that describes the template, as well as a full CI build using Cake and AppVeyor to automatically publish the NuGet templates.

If you have any suggestions, bugs or comments, then do let me know on GitHub!


Andrew Lock: Removing the MVC Razor dependencies from the Web API template in ASP.NET Core

Removing the MVC Razor dependencies from the Web API template in ASP.NET Core

In this article I'll show how to add the minimal required dependencies to create an ASP.NET Core Web API project, without including the additional MVC/Razor functionality and packages.

Note: I have since created a custom dotnet new template for the code described in this post, plus a few extra features. You can read about it here, or view it on GitHub and NuGet.

MVC vs Web API

In the previous version of ASP.NET, the MVC and Web API stacks were completely separate. Even though there were many similar concepts shared between them, the actual types were distinct. This was generally a little awkward, and often resulted in confusing error messages when you accidentally referenced the wrong namespace.

In ASP.NET Core, this is no longer an issue - MVC and Web API have been unified under the auspices of ASP.NET Core MVC, in which there is fundamentally no real difference between an MVC controller and a Web API controller. All of your controllers can act both as MVC controllers, serving server-side rendered Razor templates, and as Web API controllers returning formatted (e.g. JSON or XML) data.
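As a quick illustration (a sketch of my own, not taken from the framework documentation), a single controller type can serve both styles:

// One controller serving both a Razor view and formatted JSON data
public class ProductsController : Controller
{
    // "MVC" style action: renders a server-side Razor view
    public IActionResult Index() => View();

    // "Web API" style action: returns formatted JSON data
    [HttpGet("/api/products")]
    public IActionResult List() => Json(new[] { "apple", "banana" });
}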

This unification is great and definitely reduces the mental overhead compared with juggling the two separate frameworks before. Even if you are not using both aspects in a single application, the fact that the types are all familiar makes for a smoother experience.

Having said that, if you only need to use the Web API features (e.g. you're building an API client without any server-side rendering requirements), then you may not want or need the additional MVC capabilities in your app. Currently, however, the default templates include them regardless.

The default templates

When you create a new MVC project from a template in Visual Studio or via the command line, you can choose whether to create an empty ASP.NET Core project, a Web API project or an MVC web app project:


If you create an 'empty' project, then the resulting app really is super-lightweight. It has no dependencies on any MVC constructs, and just produces a very simple 'Hello World' response when run:


At the other end of the scale, the 'MVC web app' gives you a more 'complete' application. Depending on the authentication options you select, this could include ASP.NET Core Identity, EF Core, and SQL server integration, in addition to all the MVC configuration and Razor view templating:


In between these two templates is the Web API template. This includes the necessary MVC dependencies for creating a Web API, and the simplest version just includes a single example ValuesController:


However, while this looks stripped back, it also adds all the packages necessary for creating full MVC applications, i.e. the server-side Razor packages. This is because it includes the same Microsoft.AspNetCore.Mvc package that the full MVC web app does, and calls AddMvc() in Startup.ConfigureServices.

As described in Steve Gordon's post on the AddMvc function, this adds a bunch of various services to the service collection. Some of these are required to allow you to use Web API, but some of them - the Razor-related services in particular - are unnecessary for a web API.

In most cases, using the Microsoft.AspNetCore.Mvc package is the easiest thing to do, but sometimes you want to trim your dependencies as much as possible, and make your APIs as lightweight as you can. In those cases you may find it useful to specifically add only the MVC packages and services you need for your app.

Adding the package dependencies

We'll start with the 'Empty' web application template, and add the packages necessary for Web API to it.

The exact packages you will need will depend on what features you need in your application. By default, the Empty ASP.NET Core template includes ApplicationInsights and the Microsoft.AspNetCore meta package, so I'll leave those in the project.

On top of those, I'll add the MVC.Core package, the JSON formatter package, and the CORS package:

  • The MVC Core package adds all the essential MVC types such as ControllerBase and RouteAttribute, as well as a host of dependencies such as Microsoft.AspNetCore.Mvc.Abstractions and Microsoft.AspNetCore.Authorization.
  • The JSON formatter package ensures we can actually render our Web API action results
  • The CORS package adds Cross Origin Resource Sharing (CORS) support - a common requirement for web APIs that will be hosted on a different domain to the client calling them.

The final .csproj file should look something like this:

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.0.0" />
    <PackageReference Include="Microsoft.AspNetCore" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc.Core" Version="1.1.2" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc.Formatters.Json" Version="1.1.2" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc.Cors" Version="1.1.2" />
  </ItemGroup>

</Project>  

Once you've restored the packages, we can update Startup to add our Web API services.

Adding the necessary services to Startup.cs

In most cases, adding the Web API services to a project would be as simple as calling AddMvc() in your ConfigureServices method. However, that method adds a whole load of functionality that I don't currently need. By default, it would add the ApiExplorer, the Razor view engine, Razor views, tag helpers and DataAnnotations - none of which we are using at the moment. (We might well want to add the ApiExplorer and DataAnnotations back at a later date, but right now, I don't need them.)

Instead, I'm left with just the following services:

public void ConfigureServices(IServiceCollection services)  
{
    var builder = services.AddMvcCore();
    builder.AddAuthorization();
    builder.AddFormatterMappings();
    builder.AddJsonFormatters();
    builder.AddCors();
}

That's all the services we need for now - next stop, middleware.

Adding the MvcMiddleware

Adding the MvcMiddleware to the pipeline is simple. I just replace the "Hello World" run call with UseMvc(). Note that I'm using the unparameterised version of the method, which does not add any conventional routes to the application. As this is a web API, I will just be using attribute routing, so there's no need to set up any conventional routes.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)  
{
    loggerFactory.AddConsole();

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseMvc();
}

That's all the MVC configuration we need - the final step is to add a controller to show off our new API.

Adding an MVC Controller

There's one important caveat to be aware of when creating a web API in this way - you must use the ControllerBase class, not Controller. The latter is defined in the Microsoft.AspNetCore.Mvc package, which we haven't added. Luckily, it mostly contains methods related to rendering Razor, so it's not a problem for us here. The ControllerBase class includes all the various StatusCodeResult helper methods you will likely use, such as Ok used below.

[Route("api/[controller]")]
public class ValuesController : ControllerBase  
{
    // GET api/values
    [HttpGet]
    public IActionResult Get()
    {
        return Ok(new string[] { "value1", "value2" });
    }
}

And if we take it for a spin:

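For example, requesting the example endpoint returns the JSON data (assuming the default Kestrel address):

$ curl http://localhost:5000/api/values
["value1","value2"]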

Voila! A stripped down web API controller, with minimal dependencies.

Bonus: AddWebApi extension method

As a final little piece of tidying up - our ConfigureServices call looks a bit messy now. Personally I'm a fan of the "Clean Startup.cs" approach espoused by K. Scott Allen, in which you reduce the clutter in your Startup.cs class by creating wrapper extension methods for your configuration.

We can do the same with our simplified web API project by adding an extension method called AddWebApi(). I've even created a parameterised overload that takes an Action<MvcOptions>, analogous to the AddMvc() overload that you are likely already using.

using System;  
using Microsoft.AspNetCore.Mvc;  
using Microsoft.AspNetCore.Mvc.Internal;

// ReSharper disable once CheckNamespace
namespace Microsoft.Extensions.DependencyInjection  
{
    public static class WebApiServiceCollectionExtensions
    {
        /// <summary>
        /// Adds MVC services to the specified <see cref="IServiceCollection" /> for Web API.
        /// This is a slimmed down version of <see cref="MvcServiceCollectionExtensions.AddMvc"/>
        /// </summary>
        /// <param name="services">The <see cref="IServiceCollection" /> to add services to.</param>
        /// <returns>An <see cref="IMvcBuilder"/> that can be used to further configure the MVC services.</returns>
        public static IMvcBuilder AddWebApi(this IServiceCollection services)
        {
            if (services == null) throw new ArgumentNullException(nameof(services));

            var builder = services.AddMvcCore();
            builder.AddAuthorization();

            builder.AddFormatterMappings();

            // +10 order
            builder.AddJsonFormatters();

            builder.AddCors();

            return new MvcBuilder(builder.Services, builder.PartManager);
        }

        /// <summary>
        /// Adds MVC services to the specified <see cref="IServiceCollection" /> for Web API.
        /// This is a slimmed down version of <see cref="MvcServiceCollectionExtensions.AddMvc"/>
        /// </summary>
        /// <param name="services">The <see cref="IServiceCollection" /> to add services to.</param>
        /// <param name="setupAction">An <see cref="Action{MvcOptions}"/> to configure the provided <see cref="MvcOptions"/>.</param>
        /// <returns>An <see cref="IMvcBuilder"/> that can be used to further configure the MVC services.</returns>
        public static IMvcBuilder AddWebApi(this IServiceCollection services, Action<MvcOptions> setupAction)
        {
            if (services == null) throw new ArgumentNullException(nameof(services));
            if (setupAction == null) throw new ArgumentNullException(nameof(setupAction));

            var builder = services.AddWebApi();
            builder.Services.Configure(setupAction);

            return builder;
        }

    }
}

Finally, we can use this extension method to tidy up our ConfigureServices method:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddWebApi();
}

Much better!
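If you need to tweak MvcOptions, the parameterised overload can be used in the same way as AddMvc(); the option set here is purely illustrative:

public void ConfigureServices(IServiceCollection services)
{
    services.AddWebApi(options =>
    {
        // e.g. honour the browser's Accept header during content negotiation
        options.RespectBrowserAcceptHeader = true;
    });
}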

Summary

This post showed how you could trim the Razor dependencies from your application, when you know you are not going to need them. This represents pretty much the most bare-bones web API template you might use in your application. Obviously mileage may vary, but luckily adding extra capabilities (validation, ApiExplorer for example) is easy!


Damien Bowden: ASP.NET Core IdentityServer4 Resource Owner Password Flow with custom UserRepository

This article shows how a custom user store or repository can be used in IdentityServer4. This can be used for an existing user management system which doesn't use Identity, or to request user data from a custom source. The Resource Owner Password Flow using refresh tokens is used to access the protected data on the resource server. The client is implemented using IdentityModel.

Code: https://github.com/damienbod/AspNetCoreIdentityServer4ResourceOwnerPassword

Setting up a custom User Repository in IdentityServer4

To create a custom user store, an extension method needs to be created which can be added to the AddIdentityServer() builder. The .AddCustomUserStore() adds everything required for the custom user management.

services.AddIdentityServer()
		.AddSigningCredential(cert)
		.AddInMemoryIdentityResources(Config.GetIdentityResources())
		.AddInMemoryApiResources(Config.GetApiResources())
		.AddInMemoryClients(Config.GetClients())
		.AddCustomUserStore();

The extension method adds the required classes to the ASP.NET Core dependency injection services. A user repository is used to access the user data, a custom profile service is added to add the required claims to the tokens, and a validator is also added to validate the user credentials.

using CustomIdentityServer4.UserServices;

namespace Microsoft.Extensions.DependencyInjection
{
    public static class CustomIdentityServerBuilderExtensions
    {
        public static IIdentityServerBuilder AddCustomUserStore(this IIdentityServerBuilder builder)
        {
            builder.Services.AddSingleton<IUserRepository, UserRepository>();
            builder.AddProfileService<CustomProfileService>();
            builder.AddResourceOwnerValidator<CustomResourceOwnerPasswordValidator>();

            return builder;
        }
    }
}

The IUserRepository interface defines everything required by the application to use the custom user store throughout the IdentityServer4 application. The different views and controllers use this interface as required, and the implementation behind it can be changed as needed.

namespace CustomIdentityServer4.UserServices
{
    public interface IUserRepository
    {
        bool ValidateCredentials(string username, string password);

        CustomUser FindBySubjectId(string subjectId);

        CustomUser FindByUsername(string username);
    }
}

The CustomUser class is the user class. This class can be changed to map the user data defined in the persistence medium.

namespace CustomIdentityServer4.UserServices
{
    public class CustomUser
    {
            public string SubjectId { get; set; }
            public string Email { get; set; }
            public string UserName { get; set; }
            public string Password { get; set; }
    }
}

The UserRepository implements the IUserRepository interface. Dummy users are added in this example for testing. If you are using a custom database, Dapper, or whatever else, you would implement the data access logic in this class.

using System.Collections.Generic;
using System.Linq;
using System;

namespace CustomIdentityServer4.UserServices
{
    public class UserRepository : IUserRepository
    {
        // some dummy data. Replace this with your user persistence. 
        private readonly List<CustomUser> _users = new List<CustomUser>
        {
            new CustomUser{
                SubjectId = "123",
                UserName = "damienbod",
                Password = "damienbod",
                Email = "damienbod@email.ch"
            },
            new CustomUser{
                SubjectId = "124",
                UserName = "raphael",
                Password = "raphael",
                Email = "raphael@email.ch"
            },
        };

        public bool ValidateCredentials(string username, string password)
        {
            var user = FindByUsername(username);
            if (user != null)
            {
                return user.Password.Equals(password);
            }

            return false;
        }

        public CustomUser FindBySubjectId(string subjectId)
        {
            return _users.FirstOrDefault(x => x.SubjectId == subjectId);
        }

        public CustomUser FindByUsername(string username)
        {
            return _users.FirstOrDefault(x => x.UserName.Equals(username, StringComparison.OrdinalIgnoreCase));
        }
    }
}

The CustomProfileService uses the IUserRepository to get the user data and, if the user/application was validated, adds the claims for the user to the tokens which are returned to the client.

using System.Security.Claims;
using System.Threading.Tasks;
using IdentityServer4.Extensions;
using IdentityServer4.Models;
using IdentityServer4.Services;
using Microsoft.Extensions.Logging;
using System.Collections.Generic;

namespace CustomIdentityServer4.UserServices
{
    public class CustomProfileService : IProfileService
    {
        protected readonly ILogger Logger;


        protected readonly IUserRepository _userRepository;

        public CustomProfileService(IUserRepository userRepository, ILogger<CustomProfileService> logger)
        {
            _userRepository = userRepository;
            Logger = logger;
        }


        public async Task GetProfileDataAsync(ProfileDataRequestContext context)
        {
            var sub = context.Subject.GetSubjectId();

            Logger.LogDebug("Get profile called for subject {subject} from client {client} with claim types {claimTypes} via {caller}",
                context.Subject.GetSubjectId(),
                context.Client.ClientName ?? context.Client.ClientId,
                context.RequestedClaimTypes,
                context.Caller);

            var user = _userRepository.FindBySubjectId(context.Subject.GetSubjectId());

            var claims = new List<Claim>
            {
                new Claim("role", "dataEventRecords.admin"),
                new Claim("role", "dataEventRecords.user"),
                new Claim("username", user.UserName),
                new Claim("email", user.Email)
            };

            context.IssuedClaims = claims;
        }

        public async Task IsActiveAsync(IsActiveContext context)
        {
            var sub = context.Subject.GetSubjectId();
            var user = _userRepository.FindBySubjectId(context.Subject.GetSubjectId());
            context.IsActive = user != null;
        }
    }
}

The CustomResourceOwnerPasswordValidator implements the validation.

using IdentityServer4.Validation;
using IdentityModel;
using System.Threading.Tasks;

namespace CustomIdentityServer4.UserServices
{
    public class CustomResourceOwnerPasswordValidator : IResourceOwnerPasswordValidator
    {
        private readonly IUserRepository _userRepository;

        public CustomResourceOwnerPasswordValidator(IUserRepository userRepository)
        {
            _userRepository = userRepository;
        }

        public Task ValidateAsync(ResourceOwnerPasswordValidationContext context)
        {
            if (_userRepository.ValidateCredentials(context.UserName, context.Password))
            {
                var user = _userRepository.FindByUsername(context.UserName);
                context.Result = new GrantValidationResult(user.SubjectId, OidcConstants.AuthenticationMethods.Password);
            }

            return Task.FromResult(0);
        }
    }
}

The AccountController is configured to use the IUserRepository interface.

   public class AccountController : Controller
    {
        private readonly IIdentityServerInteractionService _interaction;
        private readonly AccountService _account;
        private readonly IUserRepository _userRepository;

        public AccountController(
            IIdentityServerInteractionService interaction,
            IClientStore clientStore,
            IHttpContextAccessor httpContextAccessor,
            IUserRepository userRepository)
        {
            _interaction = interaction;
            _account = new AccountService(interaction, httpContextAccessor, clientStore);
            _userRepository = userRepository;
        }

        /// <summary>
        /// Show login page
        /// </summary>
        [HttpGet]

Setting up a grant type ResourceOwnerPasswordAndClientCredentials to use refresh tokens

The grant type ResourceOwnerPasswordAndClientCredentials is configured in the GetClients method in the IdentityServer4 application. To use refresh tokens, you must add the IdentityServerConstants.StandardScopes.OfflineAccess to the allowed scopes. Then the other refresh token settings can be set as required.

public static IEnumerable<Client> GetClients()
{
	return new List<Client>
	{
		new Client
		{
			ClientId = "resourceownerclient",

			AllowedGrantTypes = GrantTypes.ResourceOwnerPasswordAndClientCredentials,
			AccessTokenType = AccessTokenType.Jwt,
			AccessTokenLifetime = 120, //86400,
			IdentityTokenLifetime = 120, //86400,
			UpdateAccessTokenClaimsOnRefresh = true,
			SlidingRefreshTokenLifetime = 30,
			AllowOfflineAccess = true,
			RefreshTokenExpiration = TokenExpiration.Absolute,
			RefreshTokenUsage = TokenUsage.OneTimeOnly,
			AlwaysSendClientClaims = true,
			Enabled = true,
			ClientSecrets=  new List<Secret> { new Secret("dataEventRecordsSecret".Sha256()) },
			AllowedScopes = {
				IdentityServerConstants.StandardScopes.OpenId, 
				IdentityServerConstants.StandardScopes.Profile,
				IdentityServerConstants.StandardScopes.Email,
				IdentityServerConstants.StandardScopes.OfflineAccess,
				"dataEventRecords"
			}
		}
	};
}

When the token client requests a token, the offline_access scope must be sent in the HTTP request to receive a refresh token.

private static async Task<TokenResponse> RequestTokenAsync(string user, string password)
{
	return await _tokenClient.RequestResourceOwnerPasswordAsync(
		user,
		password,
		"email openid dataEventRecords offline_access");
}

Running the application

When all three applications are started, the console application gets the tokens from the IdentityServer4 application, and the required claims are returned to the console application in the token. Not all of the claims need to be added to the access_token - only the ones which are required on the resource server. If the user info is required in the UI, a separate request can be made for it.
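For example, the IdentityModel UserInfoClient could be used for this; the following is just a sketch, with the userinfo endpoint URL assumed from the other examples in this post:

// sketch only: request the user data separately, using the access token
var userInfoClient = new UserInfoClient("https://localhost:44318/connect/userinfo");
var userInfoResponse = await userInfoClient.GetAsync(response.AccessToken);
Console.WriteLine(userInfoResponse.Raw);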

Here’s the token payload returned from the server to the client in the token. You can see the extra data added in the profile service, for example the role array.

{
  "nbf": 1492161131,
  "exp": 1492161251,
  "iss": "https://localhost:44318",
  "aud": [
    "https://localhost:44318/resources",
    "dataEventRecords"
  ],
  "client_id": "resourceownerclient",
  "sub": "123",
  "auth_time": 1492161130,
  "idp": "local",
  "role": [
    "dataEventRecords.admin",
    "dataEventRecords.user"
  ],
  "username": "damienbod",
  "email": "damienbod@email.ch",
  "scope": [
    "email",
    "openid",
    "dataEventRecords",
    "offline_access"
  ],
  "amr": [
    "pwd"
  ]
}

The token is used to get the data from the resource server. The client uses the access_token and adds it to the header of the HTTP request.

HttpClient httpClient = new HttpClient();
httpClient.SetBearerToken(access_token);

var payloadFromResourceServer = await httpClient.GetAsync("https://localhost:44365/api/DataEventRecords");
if (!payloadFromResourceServer.IsSuccessStatusCode)
{
	Console.WriteLine(payloadFromResourceServer.StatusCode);
}
else
{
	var content = await payloadFromResourceServer.Content.ReadAsStringAsync();
	Console.WriteLine(JArray.Parse(content));
}

The resource server validates each request using the UseIdentityServerAuthentication middleware extension method.

JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
IdentityServerAuthenticationOptions identityServerValidationOptions = new IdentityServerAuthenticationOptions
{
	Authority = "https://localhost:44318/",
	AllowedScopes = new List<string> { "dataEventRecords" },
	ApiSecret = "dataEventRecordsSecret",
	ApiName = "dataEventRecords",
	AutomaticAuthenticate = true,
	SupportedTokens = SupportedTokens.Both,
	// TokenRetriever = _tokenRetriever,
	// required if you want to return a 403 and not a 401 for forbidden responses
	AutomaticChallenge = true,
};

app.UseIdentityServerAuthentication(identityServerValidationOptions);

Each API is protected using the Authorize attribute with policies if needed. The HttpContext can be used to get the claims sent with the token, if required. The username is sent with the access_token in the header.

[Authorize("dataEventRecordsUser")]
[HttpGet]
public IActionResult Get()
{
	var userName = HttpContext.User.FindFirst("username")?.Value;
	return Ok(_dataEventRecordRepository.GetAll());
}
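The "dataEventRecordsUser" policy itself is defined on the resource server. As a sketch of how this might be registered (an assumption on my part - the policy requiring one of the role claims issued by the profile service above):

services.AddAuthorization(options =>
{
	// require the role claim added by the CustomProfileService
	options.AddPolicy("dataEventRecordsUser",
		policy => policy.RequireClaim("role", "dataEventRecords.user"));
});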

The client receives a refresh token and periodically uses it to renew its tokens. You could use a background task to implement this in a desktop or mobile application.

public static async Task RunRefreshAsync(TokenResponse response, int milliseconds)
{
	var refresh_token = response.RefreshToken;

	while (true)
	{
		response = await RefreshTokenAsync(refresh_token);

		// Get the resource data using the new tokens...
		await ResourceDataClient.GetDataAndDisplayInConsoleAsync(response.AccessToken);

		if (response.RefreshToken != refresh_token)
		{
			ShowResponse(response);
			refresh_token = response.RefreshToken;
		}

		await Task.Delay(milliseconds);
	}
}

The application then loops forever.
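The RefreshTokenAsync helper is not shown above; with the IdentityModel TokenClient it could look something like this sketch, reusing the _tokenClient from the earlier token request:

private static async Task<TokenResponse> RefreshTokenAsync(string refreshToken)
{
	// exchange the refresh token for a new access token/refresh token pair
	return await _tokenClient.RequestRefreshTokenAsync(refreshToken);
}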

Links:

https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow

https://github.com/IdentityModel/IdentityModel2

https://github.com/IdentityServer/IdentityServer4

https://github.com/IdentityServer/IdentityServer4.Samples



Dominick Baier: dotnet new Templates for IdentityServer4

The dotnet CLI includes a templating engine that makes it pretty straightforward to create your own project templates (see this blog post for a good intro).

This new repo is the home for all IdentityServer4 templates to come – right now they are pretty basic, but good enough to get you started.

The repo includes three templates right now:

dotnet new is4

Creates a minimal IdentityServer4 project without a UI and just one API and one client.

dotnet new is4ui

Adds the quickstart UI to the current project (can be combined with is4)

dotnet new is4inmem

Adds a boilerplate IdentityServer with UI, test users and sample clients and resources

See the readme for installation instructions.





Damien Bowden: Implementing OpenID Implicit Flow using OpenIddict and Angular

This article shows how to implement the OpenID Connect Implicit Flow using OpenIddict hosted in an ASP.NET Core application, an ASP.NET Core web API and an Angular application as the client.

Code: https://github.com/damienbod/AspNetCoreOpeniddictAngularImplicitFlow

Three different projects are used to implement the application. The OpenIddict Implicit Flow Server is used to authenticate and authorise, the resource server is used to provide the API, and the Angular application implements the UI.

OpenIddict Server implementing the Implicit Flow

To use the OpenIddict NuGet packages to implement an OpenID Connect server, you need to use the MyGet server. You can add a NuGet.config file to your project to configure this, or add it to the package sources in Visual Studio 2017.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="NuGet" value="https://api.nuget.org/v3/index.json" />
    <add key="aspnet-contrib" value="https://www.myget.org/F/aspnet-contrib/api/v3/index.json" />
  </packageSources>
</configuration>

Then you can use the NuGet package manager to download the required packages. You need to select the key for the correct source in the drop down on the right hand side, and select the required pre-release packages.

Or you can just add them directly to the csproj file.

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
    <PreserveCompilationContext>true</PreserveCompilationContext>
    <OutputType>Exe</OutputType>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="AspNet.Security.OAuth.Validation" Version="1.0.0-rtm-0241" />
    <PackageReference Include="Microsoft.AspNetCore.Authentication.Google" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Authentication.JwtBearer" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Authentication.Twitter" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Diagnostics" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Identity.EntityFrameworkCore" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.2" />
    <PackageReference Include="Microsoft.AspNetCore.Server.IISIntegration" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Server.Kestrel" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Cors" Version="1.1.1" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Configuration.CommandLine" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Configuration.EnvironmentVariables" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Logging.Console" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.1.1" />
    <PackageReference Include="Openiddict" Version="1.0.0-beta2-0598" />
    <PackageReference Include="OpenIddict.EntityFrameworkCore" Version="1.0.0-beta2-0598" />
    <PackageReference Include="OpenIddict.Mvc" Version="1.0.0-beta2-0598" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite" Version="1.1.1" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite.Design" Version="1.1.1" />
  </ItemGroup>

  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="1.0.0" />
    <DotNetCliToolReference Include="Microsoft.DotNet.Watcher.Tools" Version="1.0.0" />
  </ItemGroup>

  <ItemGroup>
    <None Update="damienbodserver.pfx">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
  </ItemGroup>

</Project>

The OpenIddict packages are configured in the ConfigureServices and Configure methods in the Startup class. The following code configures the OpenID Connect Implicit Flow with a SQLite database using Entity Framework Core. The required endpoints are enabled, and JSON Web Tokens are used.

public void ConfigureServices(IServiceCollection services)
{
	services.AddDbContext<ApplicationDbContext>(options =>
	{
		options.UseSqlite(Configuration.GetConnectionString("DefaultConnection"));
		options.UseOpenIddict();
	});

	services.AddIdentity<ApplicationUser, IdentityRole>()
		.AddEntityFrameworkStores<ApplicationDbContext>();

	services.Configure<IdentityOptions>(options =>
	{
		options.ClaimsIdentity.UserNameClaimType = OpenIdConnectConstants.Claims.Name;
		options.ClaimsIdentity.UserIdClaimType = OpenIdConnectConstants.Claims.Subject;
		options.ClaimsIdentity.RoleClaimType = OpenIdConnectConstants.Claims.Role;
	});

	services.AddOpenIddict(options =>
	{
		options.AddEntityFrameworkCoreStores<ApplicationDbContext>();
		options.AddMvcBinders();
		options.EnableAuthorizationEndpoint("/connect/authorize")
			   .EnableLogoutEndpoint("/connect/logout")
			   .EnableIntrospectionEndpoint("/connect/introspect")
			   .EnableUserinfoEndpoint("/api/userinfo");

		options.AllowImplicitFlow();
		options.AddSigningCertificate(_cert);
		options.UseJsonWebTokens();
	});

	var policy = new Microsoft.AspNetCore.Cors.Infrastructure.CorsPolicy();

	policy.Headers.Add("*");
	policy.Methods.Add("*");
	policy.Origins.Add("*");
	policy.SupportsCredentials = true;

	services.AddCors(x => x.AddPolicy("corsGlobalPolicy", policy));

	services.AddMvc();

	services.AddTransient<IEmailSender, AuthMessageSender>();
	services.AddTransient<ISmsSender, AuthMessageSender>();
}

The Configure method sets up JWT bearer authentication so that the userinfo API, or any other authorised API, can be used. The OpenIddict middleware is also added. The commented-out method InitializeAsync is used to add OpenIddict data to the existing database. The database was created using Entity Framework Core migrations from the command line.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	loggerFactory.AddConsole(Configuration.GetSection("Logging"));
	loggerFactory.AddDebug();

	if (env.IsDevelopment())
	{
		app.UseDeveloperExceptionPage();
		app.UseDatabaseErrorPage();
	}
	else
	{
		app.UseExceptionHandler("/Home/Error");
	}

	app.UseCors("corsGlobalPolicy");

	JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
	JwtSecurityTokenHandler.DefaultOutboundClaimTypeMap.Clear();

	var jwtOptions = new JwtBearerOptions()
	{
		AutomaticAuthenticate = true,
		AutomaticChallenge = true,
		RequireHttpsMetadata = true,
		Audience = "dataEventRecords",
		ClaimsIssuer = "https://localhost:44319/",
		TokenValidationParameters = new TokenValidationParameters
		{
			NameClaimType = OpenIdConnectConstants.Claims.Name,
			RoleClaimType = OpenIdConnectConstants.Claims.Role
		}
	};

	jwtOptions.TokenValidationParameters.ValidAudience = "dataEventRecords";
	jwtOptions.TokenValidationParameters.ValidIssuer = "https://localhost:44319/";
	jwtOptions.TokenValidationParameters.IssuerSigningKey = new RsaSecurityKey(_cert.GetRSAPrivateKey().ExportParameters(false));
	app.UseJwtBearerAuthentication(jwtOptions);

	app.UseIdentity();

	app.UseOpenIddict();

	app.UseMvcWithDefaultRoute();

	// Seed the database with the sample applications.
	// Note: in a real world application, this step should be part of a setup script.
	// InitializeAsync(app.ApplicationServices, CancellationToken.None).GetAwaiter().GetResult();
}

Entity Framework Core database migrations:

> dotnet ef migrations add test
> dotnet ef database update test

The UserinfoController is used to return user data to the client. The API requires a token, which is validated using the JWT bearer token validation configured in the Startup class. The claims the application requires need to be added here. This example adds some extra role claims which are used in the Angular SPA.

using System.Threading.Tasks;
using AspNet.Security.OAuth.Validation;
using AspNet.Security.OpenIdConnect.Primitives;
using OpeniddictServer.Models;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json.Linq;
using System.Collections.Generic;

namespace OpeniddictServer.Controllers
{
    [Route("api")]
    public class UserinfoController : Controller
    {
        private readonly UserManager<ApplicationUser> _userManager;

        public UserinfoController(UserManager<ApplicationUser> userManager)
        {
            _userManager = userManager;
        }

        //
        // GET: /api/userinfo
        [Authorize(ActiveAuthenticationSchemes = OAuthValidationDefaults.AuthenticationScheme)]
        [HttpGet("userinfo"), Produces("application/json")]
        public async Task<IActionResult> Userinfo()
        {
            var user = await _userManager.GetUserAsync(User);
            if (user == null)
            {
                return BadRequest(new OpenIdConnectResponse
                {
                    Error = OpenIdConnectConstants.Errors.InvalidGrant,
                    ErrorDescription = "The user profile is no longer available."
                });
            }

            var claims = new JObject();
            claims[OpenIdConnectConstants.Claims.Subject] = await _userManager.GetUserIdAsync(user);

            if (User.HasClaim(OpenIdConnectConstants.Claims.Scope, OpenIdConnectConstants.Scopes.Email))
            {
                claims[OpenIdConnectConstants.Claims.Email] = await _userManager.GetEmailAsync(user);
                claims[OpenIdConnectConstants.Claims.EmailVerified] = await _userManager.IsEmailConfirmedAsync(user);
            }

            if (User.HasClaim(OpenIdConnectConstants.Claims.Scope, OpenIdConnectConstants.Scopes.Phone))
            {
                claims[OpenIdConnectConstants.Claims.PhoneNumber] = await _userManager.GetPhoneNumberAsync(user);
                claims[OpenIdConnectConstants.Claims.PhoneNumberVerified] = await _userManager.IsPhoneNumberConfirmedAsync(user);
            }

            List<string> roles = new List<string> { "dataEventRecords", "dataEventRecords.admin", "admin", "dataEventRecords.user" };
            claims["role"] = JArray.FromObject(roles);

            return Json(claims);
        }
    }
}

The AuthorizationController implements the CreateTicketAsync method, where the claims can be added to the tokens as required. The Implicit Flow in this example requires both the id_token and the access_token, and extra claims are added to the access_token. These are the claims used by the resource server to set the policies.

private async Task<AuthenticationTicket> CreateTicketAsync(OpenIdConnectRequest request, ApplicationUser user)
{
	var identity = new ClaimsIdentity(OpenIdConnectServerDefaults.AuthenticationScheme);

	var principal = await _signInManager.CreateUserPrincipalAsync(user);
	foreach (var claim in principal.Claims)
	{
		if (claim.Type == _identityOptions.Value.ClaimsIdentity.SecurityStampClaimType)
		{
			continue;
		}

		var destinations = new List<string>
		{
			OpenIdConnectConstants.Destinations.AccessToken
		};

		if ((claim.Type == OpenIdConnectConstants.Claims.Name) ||
			(claim.Type == OpenIdConnectConstants.Claims.Email) ||
			(claim.Type == OpenIdConnectConstants.Claims.Role)  )
		{
			destinations.Add(OpenIdConnectConstants.Destinations.IdentityToken);
		}

		claim.SetDestinations(destinations);

		identity.AddClaim(claim);
	}

	// Add custom claims
	var claimdataEventRecordsAdmin = new Claim("role", "dataEventRecords.admin");
	claimdataEventRecordsAdmin.SetDestinations(OpenIdConnectConstants.Destinations.AccessToken);

	var claimAdmin = new Claim("role", "admin");
	claimAdmin.SetDestinations(OpenIdConnectConstants.Destinations.AccessToken);

	var claimUser = new Claim("role", "dataEventRecords.user");
	claimUser.SetDestinations(OpenIdConnectConstants.Destinations.AccessToken);

	identity.AddClaim(claimdataEventRecordsAdmin);
	identity.AddClaim(claimAdmin);
	identity.AddClaim(claimUser);

	// Create a new authentication ticket holding the user identity.
	var ticket = new AuthenticationTicket(new ClaimsPrincipal(identity),
	new AuthenticationProperties(),
	OpenIdConnectServerDefaults.AuthenticationScheme);

	// Set the list of scopes granted to the client application.
	ticket.SetScopes(new[]
	{
		OpenIdConnectConstants.Scopes.OpenId,
		OpenIdConnectConstants.Scopes.Email,
		OpenIdConnectConstants.Scopes.Profile,
		"role",
		"dataEventRecords"
	}.Intersect(request.GetScopes()));

	ticket.SetResources("dataEventRecords");

	return ticket;
}

If you require more examples, or different flows, refer to the excellent openiddict-samples.

Angular Implicit Flow client

The Angular application uses the AuthConfiguration class to set the options required for the OpenID Connect Implicit Flow. The 'id_token token' response type is defined so that an access_token is returned as well as the id_token. The jwks_url is required so that the client can get the signing keys from the server and validate the token signature. The userinfo_url and the logoutEndSession_url are used to define the user data url and the logout url; these could be removed and the values obtained from the discovery document instead. The configuration here has to match the configuration on the server.

import { Injectable } from '@angular/core';

@Injectable()
export class AuthConfiguration {

    // The Issuer Identifier for the OpenID Provider (which is typically obtained during Discovery) MUST exactly match the value of the iss (issuer) Claim.
    public iss = 'https://localhost:44319/';

    public server = 'https://localhost:44319';

    public redirect_url = 'https://localhost:44308';

    // This is required to get the signing keys so that the signature of the JWT can be validated.
    public jwks_url = 'https://localhost:44319/.well-known/jwks';

    public userinfo_url = 'https://localhost:44319/api/userinfo';

    public logoutEndSession_url = 'https://localhost:44319/connect/logout';

    // The Client MUST validate that the aud (audience) Claim contains its client_id value registered at the Issuer identified by the iss (issuer) Claim as an audience.
    // The ID Token MUST be rejected if the ID Token does not list the Client as a valid audience, or if it contains additional audiences not trusted by the Client.
    public client_id = 'angular4client';

    public response_type = 'id_token token';

    public scope = 'dataEventRecords openid';

    public post_logout_redirect_uri = 'https://localhost:44308/Unauthorized';
}

The OidcSecurityService is used to send the login request to the server and also to handle the callback which validates the tokens. This class also persists the token data to browser storage (session storage in this example).

import { Injectable } from '@angular/core';
import { Http, Response, Headers } from '@angular/http';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/catch';
import { Observable } from 'rxjs/Rx';
import { Router } from '@angular/router';
import { AuthConfiguration } from '../auth.configuration';
import { OidcSecurityValidation } from './oidc.security.validation';
import { JwtKeys } from './jwtkeys';

@Injectable()
export class OidcSecurityService {

    public HasAdminRole: boolean;
    public HasUserAdminRole: boolean;
    public UserData: any;

    private _isAuthorized: boolean;
    private actionUrl: string;
    private headers: Headers;
    private storage: any;
    private oidcSecurityValidation: OidcSecurityValidation;

    private errorMessage: string;
    private jwtKeys: JwtKeys;

    constructor(private _http: Http, private _configuration: AuthConfiguration, private _router: Router) {

        this.actionUrl = _configuration.server + 'api/DataEventRecords/';
        this.oidcSecurityValidation = new OidcSecurityValidation();

        this.headers = new Headers();
        this.headers.append('Content-Type', 'application/json');
        this.headers.append('Accept', 'application/json');
        this.storage = sessionStorage; //localStorage;

        if (this.retrieve('_isAuthorized') !== '') {
            this.HasAdminRole = this.retrieve('HasAdminRole');
            this._isAuthorized = this.retrieve('_isAuthorized');
        }
    }

    public IsAuthorized(): boolean {
        if (this._isAuthorized) {
            if (this.oidcSecurityValidation.IsTokenExpired(this.retrieve('authorizationDataIdToken'))) {
                console.log('IsAuthorized: isTokenExpired');
                this.ResetAuthorizationData();
                return false;
            }

            return true;
        }

        return false;
    }

    public GetToken(): any {
        return this.retrieve('authorizationData');
    }

    public ResetAuthorizationData() {
        this.store('authorizationData', '');
        this.store('authorizationDataIdToken', '');

        this._isAuthorized = false;
        this.HasAdminRole = false;
        this.store('HasAdminRole', false);
        this.store('_isAuthorized', false);
    }

    public SetAuthorizationData(token: any, id_token: any) {
        if (this.retrieve('authorizationData') !== '') {
            this.store('authorizationData', '');
        }

        console.log(token);
        console.log(id_token);
        console.log('storing to storage, getting the roles');
        this.store('authorizationData', token);
        this.store('authorizationDataIdToken', id_token);
        this._isAuthorized = true;
        this.store('_isAuthorized', true);

        this.getUserData()
            .subscribe(data => this.UserData = data,
            error => this.HandleError(error),
            () => {
                for (let i = 0; i < this.UserData.role.length; i++) {
                    console.log(this.UserData.role[i]);
                    if (this.UserData.role[i] === 'dataEventRecords.admin') {
                        this.HasAdminRole = true;
                        this.store('HasAdminRole', true);
                    }
                    if (this.UserData.role[i] === 'admin') {
                        this.HasUserAdminRole = true;
                        this.store('HasUserAdminRole', true);
                    }
                }
            });
    }

    public Authorize() {
        this.ResetAuthorizationData();

        console.log('BEGIN Authorize, no auth data');

        let authorizationUrl = this._configuration.server + '/connect/authorize';
        let client_id = this._configuration.client_id;
        let redirect_uri = this._configuration.redirect_url;
        let response_type = this._configuration.response_type;
        let scope = this._configuration.scope;
        let nonce = 'N' + Math.random() + '' + Date.now();
        let state = Date.now() + '' + Math.random();

        this.store('authStateControl', state);
        this.store('authNonce', nonce);
        console.log('AuthorizedController created. adding myautostate: ' + this.retrieve('authStateControl'));

        let url =
            authorizationUrl + '?' +
            'response_type=' + encodeURI(response_type) + '&' +
            'client_id=' + encodeURI(client_id) + '&' +
            'redirect_uri=' + encodeURI(redirect_uri) + '&' +
            'scope=' + encodeURI(scope) + '&' +
            'nonce=' + encodeURI(nonce) + '&' +
            'state=' + encodeURI(state);

        window.location.href = url;
    }

    public AuthorizedCallback() {
        console.log('BEGIN AuthorizedCallback, no auth data');
        this.ResetAuthorizationData();

        let hash = window.location.hash.substr(1);

        let result: any = hash.split('&').reduce(function (result: any, item: string) {
            let parts = item.split('=');
            result[parts[0]] = parts[1];
            return result;
        }, {});

        console.log(result);
        console.log('AuthorizedCallback created, begin token validation');

        let token = '';
        let id_token = '';
        let authResponseIsValid = false;

        this.getSigningKeys()
            .subscribe(jwtKeys => {
                this.jwtKeys = jwtKeys;

                if (!result.error) {

                    // validate state
                    if (this.oidcSecurityValidation.ValidateStateFromHashCallback(result.state, this.retrieve('authStateControl'))) {
                        token = result.access_token;
                        id_token = result.id_token;
                        let decoded: any;
                        let headerDecoded;
                        decoded = this.oidcSecurityValidation.GetPayloadFromToken(id_token, false);
                        headerDecoded = this.oidcSecurityValidation.GetHeaderFromToken(id_token, false);

                        // validate jwt signature
                        if (this.oidcSecurityValidation.Validate_signature_id_token(id_token, this.jwtKeys)) {
                            // validate nonce
                            if (this.oidcSecurityValidation.Validate_id_token_nonce(decoded, this.retrieve('authNonce'))) {
                                // validate iss
                                if (this.oidcSecurityValidation.Validate_id_token_iss(decoded, this._configuration.iss)) {
                                    // validate aud
                                    if (this.oidcSecurityValidation.Validate_id_token_aud(decoded, this._configuration.client_id)) {
                                        // validate at_hash and access_token
                                        if (this.oidcSecurityValidation.Validate_id_token_at_hash(token, decoded.at_hash) || !token) {
                                            this.store('authNonce', '');
                                            this.store('authStateControl', '');

                                            authResponseIsValid = true;
                                            console.log('AuthorizedCallback state, nonce, iss, aud, signature validated, returning token');
                                        } else {
                                            console.log('AuthorizedCallback incorrect at_hash');
                                        }
                                    } else {
                                        console.log('AuthorizedCallback incorrect aud');
                                    }
                                } else {
                                    console.log('AuthorizedCallback incorrect iss');
                                }
                            } else {
                                console.log('AuthorizedCallback incorrect nonce');
                            }
                        } else {
                            console.log('AuthorizedCallback incorrect Signature id_token');
                        }
                    } else {
                        console.log('AuthorizedCallback incorrect state');
                    }
                }

                if (authResponseIsValid) {
                    this.SetAuthorizationData(token, id_token);
                    console.log(this.retrieve('authorizationData'));

                    // router navigate to DataEventRecordsList
                    this._router.navigate(['/dataeventrecords/list']);
                } else {
                    this.ResetAuthorizationData();
                    this._router.navigate(['/Unauthorized']);
                }
            });
    }

    public Logoff() {
        // /connect/endsession?id_token_hint=...&post_logout_redirect_uri=https://myapp.com
        console.log('BEGIN Logoff');

        let authorizationEndsessionUrl = this._configuration.logoutEndSession_url;

        let id_token_hint = this.retrieve('authorizationDataIdToken');
        let post_logout_redirect_uri = this._configuration.post_logout_redirect_uri;

        let url =
            authorizationEndsessionUrl + '?' +
            'id_token_hint=' + encodeURIComponent(id_token_hint) + '&' +
            'post_logout_redirect_uri=' + encodeURIComponent(post_logout_redirect_uri);

        this.ResetAuthorizationData();

        window.location.href = url;
    }

    private runGetSigningKeys() {
        this.getSigningKeys()
            .subscribe(
            jwtKeys => this.jwtKeys = jwtKeys,
            error => this.errorMessage = <any>error);
    }

    private getSigningKeys(): Observable<JwtKeys> {
        return this._http.get(this._configuration.jwks_url)
            .map(this.extractData)
            .catch(this.handleError);
    }

    private extractData(res: Response) {
        let body = res.json();
        return body;
    }

    private handleError(error: Response | any) {
        // In a real world app, you might use a remote logging infrastructure
        let errMsg: string;
        if (error instanceof Response) {
            const body = error.json() || '';
            const err = body.error || JSON.stringify(body);
            errMsg = `${error.status} - ${error.statusText || ''} ${err}`;
        } else {
            errMsg = error.message ? error.message : error.toString();
        }
        console.error(errMsg);
        return Observable.throw(errMsg);
    }

    public HandleError(error: any) {
        console.log(error);
        if (error.status === 403) {
            this._router.navigate(['/Forbidden']);
        } else if (error.status === 401) {
            this.ResetAuthorizationData();
            this._router.navigate(['/Unauthorized']);
        }
    }

    private retrieve(key: string): any {
        let item = this.storage.getItem(key);

        if (item && item !== 'undefined') {
            return JSON.parse(item);
        }

        return;
    }

    private store(key: string, value: any) {
        this.storage.setItem(key, JSON.stringify(value));
    }

    private getUserData = (): Observable<string[]> => {
        this.setHeaders();
        return this._http.get(this._configuration.userinfo_url, {
            headers: this.headers,
            body: ''
        }).map(res => res.json());
    }

    private setHeaders() {
        this.headers = new Headers();
        this.headers.append('Content-Type', 'application/json');
        this.headers.append('Accept', 'application/json');

        let token = this.GetToken();

        if (token) {
            this.headers.append('Authorization', 'Bearer ' + token);
        }
    }
}

The OidcSecurityValidation class defines the functions used to validate the tokens defined in the OpenID Connect specification for the Implicit Flow.

import { Injectable } from '@angular/core';

// from jsrsasign
declare var KJUR: any;
declare var KEYUTIL: any;
declare var hextob64u: any;

// http://openid.net/specs/openid-connect-implicit-1_0.html

// id_token
//// id_token C1: The Issuer Identifier for the OpenID Provider (which is typically obtained during Discovery) MUST exactly match the value of the iss (issuer) Claim.
//// id_token C2: The Client MUST validate that the aud (audience) Claim contains its client_id value registered at the Issuer identified by the iss (issuer) Claim as an audience. The ID Token MUST be rejected if the ID Token does not list the Client as a valid audience, or if it contains additional audiences not trusted by the Client.
// id_token C3: If the ID Token contains multiple audiences, the Client SHOULD verify that an azp Claim is present.
// id_token C4: If an azp (authorized party) Claim is present, the Client SHOULD verify that its client_id is the Claim Value.
//// id_token C5: The Client MUST validate the signature of the ID Token according to JWS [JWS] using the algorithm specified in the alg Header Parameter of the JOSE Header. The Client MUST use the keys provided by the Issuer.
//// id_token C6: The alg value SHOULD be RS256. Validation of tokens using other signing algorithms is described in the OpenID Connect Core 1.0 [OpenID.Core] specification.
//// id_token C7: The current time MUST be before the time represented by the exp Claim (possibly allowing for some small leeway to account for clock skew).
// id_token C8: The iat Claim can be used to reject tokens that were issued too far away from the current time, limiting the amount of time that nonces need to be stored to prevent attacks. The acceptable range is Client specific.
//// id_token C9: The value of the nonce Claim MUST be checked to verify that it is the same value as the one that was sent in the Authentication Request. The Client SHOULD check the nonce value for replay attacks. The precise method for detecting replay attacks is Client specific.
// id_token C10: If the acr Claim was requested, the Client SHOULD check that the asserted Claim Value is appropriate. The meaning and processing of acr Claim Values is out of scope for this document.
// id_token C11: When a max_age request is made, the Client SHOULD check the auth_time Claim value and request re-authentication if it determines too much time has elapsed since the last End-User authentication.

//// Access Token Validation
//// access_token C1: Hash the octets of the ASCII representation of the access_token with the hash algorithm specified in JWA [JWA] for the alg Header Parameter of the ID Token's JOSE Header. For instance, if the alg is RS256, the hash algorithm used is SHA-256.
//// access_token C2: Take the left-most half of the hash and base64url-encode it.
//// access_token C3: The value of at_hash in the ID Token MUST match the value produced in the previous step if at_hash is present in the ID Token.

@Injectable()
export class OidcSecurityValidation {

    // id_token C7: The current time MUST be before the time represented by the exp Claim (possibly allowing for some small leeway to account for clock skew).
    public IsTokenExpired(token: string, offsetSeconds?: number): boolean {

        let decoded: any;
        decoded = this.GetPayloadFromToken(token, false);

        let tokenExpirationDate = this.getTokenExpirationDate(decoded);
        offsetSeconds = offsetSeconds || 0;

        if (tokenExpirationDate == null) {
            return false;
        }

        // Token expired?
        return !(tokenExpirationDate.valueOf() > (new Date().valueOf() + (offsetSeconds * 1000)));
    }

    // id_token C9: The value of the nonce Claim MUST be checked to verify that it is the same value as the one that was sent in the Authentication Request. The Client SHOULD check the nonce value for replay attacks. The precise method for detecting replay attacks is Client specific.
    public Validate_id_token_nonce(dataIdToken: any, local_nonce: any): boolean {
        if (dataIdToken.nonce !== local_nonce) {
            console.log('Validate_id_token_nonce failed');
            return false;
        }

        return true;
    }

    // id_token C1: The Issuer Identifier for the OpenID Provider (which is typically obtained during Discovery) MUST exactly match the value of the iss (issuer) Claim.
    public Validate_id_token_iss(dataIdToken: any, issuer: any): boolean {
        if (dataIdToken.iss !== issuer) {
            console.log('Validate_id_token_iss failed');
            return false;
        }

        return true;
    }

    // id_token C2: The Client MUST validate that the aud (audience) Claim contains its client_id value registered at the Issuer identified by the iss (issuer) Claim as an audience.
    // The ID Token MUST be rejected if the ID Token does not list the Client as a valid audience, or if it contains additional audiences not trusted by the Client.
    public Validate_id_token_aud(dataIdToken: any, aud: any): boolean {
        if (dataIdToken.aud !== aud) {
            console.log('Validate_id_token_aud failed');
            return false;
        }

        return true;
    }

    public ValidateStateFromHashCallback(state: any, local_state: any): boolean {
        if (state !== local_state) {
            console.log('ValidateStateFromHashCallback failed');
            return false;
        }

        return true;
    }

    public GetPayloadFromToken(token: any, encode: boolean) {
        let data = {};
        if (typeof token !== 'undefined') {
            let encoded = token.split('.')[1];
            if (encode) {
                return encoded;
            }
            data = JSON.parse(this.urlBase64Decode(encoded));
        }

        return data;
    }

    public GetHeaderFromToken(token: any, encode: boolean) {
        let data = {};
        if (typeof token !== 'undefined') {
            let encoded = token.split('.')[0];
            if (encode) {
                return encoded;
            }
            data = JSON.parse(this.urlBase64Decode(encoded));
        }

        return data;
    }

    public GetSignatureFromToken(token: any, encode: boolean) {
        let data = {};
        if (typeof token !== 'undefined') {
            let encoded = token.split('.')[2];
            if (encode) {
                return encoded;
            }
            data = JSON.parse(this.urlBase64Decode(encoded));
        }

        return data;
    }

    // id_token C5: The Client MUST validate the signature of the ID Token according to JWS [JWS] using the algorithm specified in the alg Header Parameter of the JOSE Header. The Client MUST use the keys provided by the Issuer.
    // id_token C6: The alg value SHOULD be RS256. Validation of tokens using other signing algorithms is described in the OpenID Connect Core 1.0 [OpenID.Core] specification.
    public Validate_signature_id_token(id_token: any, jwtkeys: any): boolean {

        if (!jwtkeys || !jwtkeys.keys) {
            return false;
        }

        let header_data: any = this.GetHeaderFromToken(id_token, false);
        let kid = header_data.kid;
        let alg = header_data.alg;

        if (alg !== 'RS256') {
            console.log('Only RS256 supported');
            return false;
        }

        let isValid = false;

        for (let key of jwtkeys.keys) {
            if (key.kid === kid) {
                let publickey = KEYUTIL.getKey(key);
                isValid = KJUR.jws.JWS.verify(id_token, publickey, ['RS256']);
                return isValid;
            }
        }

        return isValid;
    }

    // Access Token Validation
    // access_token C1: Hash the octets of the ASCII representation of the access_token with the hash algorithm specified in JWA [JWA] for the alg Header Parameter of the ID Token's JOSE Header. For instance, if the alg is RS256, the hash algorithm used is SHA-256.
    // access_token C2: Take the left-most half of the hash and base64url-encode it.
    // access_token C3: The value of at_hash in the ID Token MUST match the value produced in the previous step if at_hash is present in the ID Token.
    public Validate_id_token_at_hash(access_token: any, at_hash: any): boolean {

        let hash = KJUR.crypto.Util.hashString(access_token, 'sha256');
        let first128bits = hash.substr(0, hash.length / 2);
        let testdata = hextob64u(first128bits);

        if (testdata === at_hash) {
            return true;
        }

        return false;
    }

    private getTokenExpirationDate(dataIdToken: any): Date {
        if (!dataIdToken.hasOwnProperty('exp')) {
            return null;
        }

        let date = new Date(0); // The 0 here is the key, which sets the date to the epoch
        date.setUTCSeconds(dataIdToken.exp);

        return date;
    }


    private urlBase64Decode(str: string) {
        let output = str.replace(/-/g, '+').replace(/_/g, '/');
        switch (output.length % 4) {
            case 0:
                break;
            case 2:
                output += '==';
                break;
            case 3:
                output += '=';
                break;
            default:
                throw new Error('Illegal base64url string!');
        }

        return window.atob(output);
    }
}

The jsrsasign library is used to validate the token signature, and is referenced as a script in the HTML file.

<!doctype html>
<html>
<head>
    <base href="./">
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>ASP.NET Core 1.0 Angular IdentityServer4 Client</title>
    <meta http-equiv="content-type" content="text/html; charset=utf-8" />
	
	<script src="assets/jsrsasign.min.js"></script>
</head>
<body>
    <my-app>Loading...</my-app>
</body>
</html>

Once logged in to the application, the access_token is added to the Authorization header of each request and sent to the resource server, or to the required APIs on the OpenIddict server.

 private setHeaders() {

        console.log('setHeaders started');

        this.headers = new Headers();
        this.headers.append('Content-Type', 'application/json');
        this.headers.append('Accept', 'application/json');
        this.headers.append('Cache-Control', 'no-cache');

        let token = this._securityService.GetToken();
        if (token) {
            let tokenValue = 'Bearer ' + token;
            console.log('tokenValue:' + tokenValue);
            this.headers.append('Authorization', tokenValue);
        }
    }

ASP.NET Core Resource Server API

The resource server provides an API protected by security policies, dataEventRecordsUser and dataEventRecordsAdmin.

using AspNet5SQLite.Model;
using AspNet5SQLite.Repositories;

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace AspNet5SQLite.Controllers
{
    [Authorize]
    [Route("api/[controller]")]
    public class DataEventRecordsController : Controller
    {
        private readonly IDataEventRecordRepository _dataEventRecordRepository;

        public DataEventRecordsController(IDataEventRecordRepository dataEventRecordRepository)
        {
            _dataEventRecordRepository = dataEventRecordRepository;
        }

        [Authorize("dataEventRecordsUser")]
        [HttpGet]
        public IActionResult Get()
        {
            return Ok(_dataEventRecordRepository.GetAll());
        }

        [Authorize("dataEventRecordsAdmin")]
        [HttpGet("{id}")]
        public IActionResult Get(long id)
        {
            return Ok(_dataEventRecordRepository.Get(id));
        }

        [Authorize("dataEventRecordsAdmin")]
        [HttpPost]
        public void Post([FromBody]DataEventRecord value)
        {
            _dataEventRecordRepository.Post(value);
        }

        [Authorize("dataEventRecordsAdmin")]
        [HttpPut("{id}")]
        public void Put(long id, [FromBody]DataEventRecord value)
        {
            _dataEventRecordRepository.Put(id, value);
        }

        [Authorize("dataEventRecordsAdmin")]
        [HttpDelete("{id}")]
        public void Delete(long id)
        {
            _dataEventRecordRepository.Delete(id);
        }
    }
}

The policies are defined in the Startup class, using the role claims dataEventRecords.user and dataEventRecords.admin, and the scope dataEventRecords.

var guestPolicy = new AuthorizationPolicyBuilder()
	.RequireAuthenticatedUser()
	.RequireClaim("scope", "dataEventRecords")
	.Build();

services.AddAuthorization(options =>
{
	options.AddPolicy("dataEventRecordsAdmin", policyAdmin =>
	{
		policyAdmin.RequireClaim("role", "dataEventRecords.admin");
	});
	options.AddPolicy("dataEventRecordsUser", policyUser =>
	{
		policyUser.RequireClaim("role",  "dataEventRecords.user");
	});

});

JWT Bearer authentication is used to validate the API HTTP requests.

JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
JwtSecurityTokenHandler.DefaultOutboundClaimTypeMap.Clear();
			
app.UseJwtBearerAuthentication(new JwtBearerOptions
{
	Authority = "https://localhost:44319/",
	Audience = "dataEventRecords",
	RequireHttpsMetadata = true,
	TokenValidationParameters = new TokenValidationParameters
	{
		NameClaimType = OpenIdConnectConstants.Claims.Subject,
		RoleClaimType = OpenIdConnectConstants.Claims.Role
	}
});

Running the application

When the application is started, all three applications are run, using the Visual Studio 2017 multiple-project start option.

After the user clicks the login button, the user is redirected to the OpenIddict server to login.

After a successful login, the user is redirected back to the Angular application.

Links:

https://github.com/openiddict/openiddict-core

http://kevinchalet.com/2016/07/13/creating-your-own-openid-connect-server-with-asos-implementing-the-authorization-code-and-implicit-flows/

https://github.com/openiddict/openiddict-core/issues/49

https://github.com/openiddict/openiddict-samples

https://blogs.msdn.microsoft.com/webdev/2017/01/23/asp-net-core-authentication-with-identityserver4/

https://blogs.msdn.microsoft.com/webdev/2016/10/27/bearer-token-authentication-in-asp-net-core/

https://blogs.msdn.microsoft.com/webdev/2017/04/06/jwt-validation-and-authorization-in-asp-net-core/

https://jwt.io/

https://www.scottbrady91.com/OpenID-Connect/OpenID-Connect-Flows



Anuraj Parameswaran: Working with Azure Blob storage in ASP.NET Core

This post is about uploading and downloading images from Azure Blob storage using ASP.NET Core. First you need to create a blob storage account and then a container which you’ll use to store all the images. You can do this from Azure portal. You need to select Storage > Storage Account - Blob, file, Table, Queue > Create a Storage Account.
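
As a rough sketch of what the upload and download calls can look like with the WindowsAzure.Storage NuGet package (the connection string, container name and blob name below are placeholders, not values from the post):

using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public class ImageStore
{
    // hypothetical helper; connection string and names are placeholders
    public async Task UploadAsync(string connectionString, Stream image)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var client = account.CreateCloudBlobClient();

        var container = client.GetContainerReference("images");
        await container.CreateIfNotExistsAsync();

        var blob = container.GetBlockBlobReference("photo.jpg");
        await blob.UploadFromStreamAsync(image);
    }

    public async Task DownloadAsync(string connectionString, Stream target)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference("images");

        var blob = container.GetBlockBlobReference("photo.jpg");
        await blob.DownloadToStreamAsync(target);
    }
}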


Dominick Baier: New in IdentityServer4: Events

Well – not really new – but redesigned.

IdentityServer4 has two diagnostics facilities – logging and events. While logging is more like low-level "printf" style output, events represent higher level information about certain logical operations in IdentityServer (think Windows security event log).

Events are structured data and include event IDs, success/failure information, activity IDs, IP addresses, categories and event specific details. This makes it easy to query and analyze them and extract useful information that can be used for further processing.

Events work great with event stores like ELK, Seq or Splunk.
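
As a sketch of how events can be routed to such a store, assuming the IEventSink extension point, a custom sink might look like this (the forwarding target is illustrative):

using System.Threading.Tasks;
using IdentityServer4.Events;
using IdentityServer4.Services;

// register with: services.AddTransient<IEventSink, CustomEventSink>();
public class CustomEventSink : IEventSink
{
    public Task PersistAsync(Event evt)
    {
        // forward the structured event (ID, name, category, details)
        // to your event store of choice, e.g. Seq, ELK or Splunk
        System.Console.WriteLine($"{evt.Id} {evt.Name} ({evt.Category})");
        return Task.CompletedTask;
    }
}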

Find more details in our docs.


Filed under: ASP.NET Core, IdentityServer, OAuth, OpenID Connect, Uncategorized, WebAPI


Damien Bowden: .NET Core, ASP.NET Core logging with NLog and PostgreSQL

This article shows how .NET Core or ASP.NET Core applications can log to a PostgreSQL database using NLog.

Code: https://github.com/damienbod/AspNetCoreNlog

Other posts in this series:

  1. ASP.NET Core logging with NLog and Microsoft SQL Server
  2. ASP.NET Core logging with NLog and Elasticsearch
  3. Setting the NLog database connection string in the ASP.NET Core appsettings.json
  4. .NET Core logging to MySQL using NLog
  5. .NET Core logging with NLog and PostgreSQL

Setting up PostgreSQL

pgAdmin can be used to set up the PostgreSQL database which is used to save the logs. A log database was created for this demo, which matches the connection string in the nlog.config file.

Using pgAdmin, open a query editor and execute the following script to create a table in the log database.

CREATE TABLE logs
( 
    Id serial primary key,
    Application character varying(100) NULL,
    Logged text,
    Level character varying(100) NULL,
    Message character varying(8000) NULL,
    Logger character varying(8000) NULL, 
    Callsite character varying(8000) NULL, 
    Exception character varying(8000) NULL
)

At present it is not possible to log a date property to PostgreSQL using NLog; only text fields are supported. A GitHub issue exists for this here. Because of this, the Logged field is defined as text, and stores the DateTime value as a string when the log is created.

.NET or ASP.NET Core Application

The required packages need to be added to the csproj file. For an ASP.NET Core application, add NLog.Web.AspNetCore and Npgsql; for a .NET Core application, add NLog and Npgsql.

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
    <AssemblyName>ConsoleNLogPostgreSQL</AssemblyName>
    <OutputType>Exe</OutputType>
    <PackageId>ConsoleNLog</PackageId>
    <PackageTargetFallback>$(PackageTargetFallback);dotnet5.6;portable-net45+win8</PackageTargetFallback>
    <GenerateAssemblyConfigurationAttribute>false</GenerateAssemblyConfigurationAttribute>
    <GenerateAssemblyCompanyAttribute>false</GenerateAssemblyCompanyAttribute>
    <GenerateAssemblyProductAttribute>false</GenerateAssemblyProductAttribute>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.Extensions.Configuration.EnvironmentVariables" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Configuration.FileExtensions" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Logging" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Logging.Console" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Options.ConfigurationExtensions" Version="1.1.1" />
    <PackageReference Include="NLog.Web.AspNetCore" Version="4.3.1" />
    <PackageReference Include="Npgsql" Version="3.2.2" />
    <PackageReference Include="System.Data.SqlClient" Version="4.3.0" />
  </ItemGroup>
</Project>

Or use the NuGet package manager in Visual Studio 2017.

The nlog.config file is then set up to log to PostgreSQL using the database target, with the dbProvider configured for Npgsql and the connectionString for the required instance of PostgreSQL. The commandText must match the table created in the SQL script above. If you add extra properties to the logs, for example from the NLog.Web.AspNetCore package, these also need to be added here.

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      internalLogLevel="Warn"
      internalLogFile="C:\git\damienbod\AspNetCoreNlog\Logs\internal-nlog.txt">
  
  <targets>
    <target xsi:type="File" name="allfile" fileName="${var:configDir}\nlog-all.log"
                layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|${message} ${exception}" />

    <target xsi:type="File" name="ownFile-web" fileName="${var:configDir}\nlog-own.log"
             layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|  ${message} ${exception}" />

    <target xsi:type="Null" name="blackhole" />

    <target name="database" xsi:type="Database"
              dbProvider="Npgsql.NpgsqlConnection, Npgsql"
              connectionString="User ID=damienbod;Password=damienbod;Host=localhost;Port=5432;Database=log;Pooling=true;"
             >

          <commandText>
              insert into logs (
              Application, Logged, Level, Message,
              Logger, CallSite, Exception
              ) values (
              @Application, @Logged, @Level, @Message,
              @Logger, @Callsite, @Exception
              );
          </commandText>

          <parameter name="@application" layout="AspNetCoreNlog" />
          <parameter name="@logged" layout="${date}" />
          <parameter name="@level" layout="${level}" />
          <parameter name="@message" layout="${message}" />

          <parameter name="@logger" layout="${logger}" />
          <parameter name="@callSite" layout="${callsite:filename=true}" />
          <parameter name="@exception" layout="${exception:tostring}" />
      </target>
      
  </targets>

  <rules>
    <!--All logs, including from Microsoft-->
    <logger name="*" minlevel="Trace" writeTo="allfile" />
      
    <logger name="*" minlevel="Trace" writeTo="database" />
      
    <!--Skip Microsoft logs and so log only own logs-->
    <logger name="Microsoft.*" minlevel="Trace" writeTo="blackhole" final="true" />
    <logger name="*" minlevel="Trace" writeTo="ownFile-web" />
  </rules>
</nlog>

When using ASP.NET Core, the NLog.Web.AspNetCore extension can be added to the nlog.config file to use the extra properties provided by the package.

<extensions>
     <add assembly="NLog.Web.AspNetCore"/>
</extensions>
            

Using the log

The logger can be used directly via the LogManager, or NLog can be added to the log configuration in the Startup class of an ASP.NET Core application.

Basic example:

LogManager.Configuration.Variables["configDir"] = "C:\\git\\damienbod\\AspNetCoreNlog\\Logs";

var logger = LogManager.GetLogger("console");
logger.Warn("console logging is great");
logger.Error(new ArgumentException("oh no"));

Startup configuration in an ASP.NET Core application:

public void ConfigureServices(IServiceCollection services)
{
	services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
	// Add framework services.
	services.AddMvc();

	services.AddScoped<LogFilter>();
}

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	loggerFactory.AddNLog();

	//add NLog.Web
	app.AddNLogWeb();

	////foreach (DatabaseTarget target in LogManager.Configuration.AllTargets.Where(t => t is DatabaseTarget))
	////{
	////	target.ConnectionString = Configuration.GetConnectionString("NLogDb");
	////}
	
	////LogManager.ReconfigExistingLoggers();

	LogManager.Configuration.Variables["connectionString"] = Configuration.GetConnectionString("NLogDb");
	LogManager.Configuration.Variables["configDir"] = "C:\\git\\damienbod\\AspNetCoreNlog\\Logs";

	app.UseMvc();
}

When the application is run, the logs are added to the database.

Links

https://www.postgresql.org/

https://www.pgadmin.org/

https://github.com/nlog/NLog/wiki/Database-target

https://github.com/NLog/NLog.Extensions.Logging

https://github.com/NLog

https://docs.asp.net/en/latest/fundamentals/logging.html

https://msdn.microsoft.com/en-us/magazine/mt694089.aspx

https://docs.asp.net/en/latest/fundamentals/configuration.html



Dominick Baier: NDC London 2017

As always – NDC was a very good conference. Brock and I did a workshop, two talks and an interview. Here are the relevant links:

Check our website for more training dates.


Filed under: .NET Security, ASP.NET, IdentityModel, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: IdentityModel.OidcClient v2 & the OpenID RP Certification

A couple of weeks ago I started re-writing (and re-designing) my OpenID Connect & OAuth 2 client library for native applications. The library follows the guidance from the OpenID Connect and OAuth 2.0 for native applications specification.

Main features are:

  • Support for OpenID Connect authorization code and hybrid flow
  • Support for PKCE
  • NetStandard 1.4 library, which makes it compatible with x-plat .NET Core, desktop .NET, Xamarin iOS & Android (and UWP soon)
  • Configurable policy to lock down security requirements (e.g. requiring at_hash or c_hash, policies around discovery etc.)
  • Either stand-alone mode (request generation and response processing) or support for pluggable (system) browser implementations
  • Support for pluggable logging via .NET ILogger

In addition, starting with v2 – OidcClient is also now certified by the OpenID Foundation for the basic and config profile.

It also passes all conformance tests for the code id_token grant type (hybrid flow) – but since I don’t support the other hybrid flow combinations (e.g. code token or code id_token token), I couldn’t certify for the full hybrid profile.

For maximum transparency, I checked in my conformance test runner along with the source code. Feel free to try/verify yourself.

The latest version of OidcClient is the dalwhinnie release (courtesy of my whisky semver scheme). Source code is here.

I am waiting a couple more days for feedback – and then I will release the final 2.0.0 version. If you have some spare time, please give it a try (there’s a console client included and some more samples here <use the v2 branch for the time being>). Thanks!
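
For a flavour of the stand-alone mode, here is a minimal sketch; the option and method names reflect my reading of the v2 API, and the authority, client id and redirect URI are placeholders:

using System;
using System.Threading.Tasks;
using IdentityModel.OidcClient;

public class Program
{
    public static async Task Run()
    {
        var options = new OidcClientOptions
        {
            Authority = "https://demo.identityserver.io",   // placeholder
            ClientId = "native",                            // placeholder
            RedirectUri = "http://127.0.0.1:7890/",
            Scope = "openid profile",
            Flow = OidcClientOptions.AuthenticationFlow.AuthorizationCode
        };

        var client = new OidcClient(options);

        // stand-alone mode: generate the authorize request ourselves
        var state = await client.PrepareLoginAsync();
        Console.WriteLine(state.StartUrl);

        // ...launch a browser with state.StartUrl, then process the response:
        // var result = await client.ProcessResponseAsync(responseData, state);
    }
}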


Filed under: .NET Security, IdentityModel, OAuth, OpenID Connect, WebAPI


Dominick Baier: Platforms where you can run IdentityServer4

There is some confusion about where, and on which platform/OS, you can run IdentityServer4 – or more generally speaking: ASP.NET Core.

IdentityServer4 is ASP.NET Core middleware – and ASP.NET Core (despite its name) runs on the full .NET Framework 4.5.x and upwards or .NET Core.

If you are using the full .NET Framework you are tied to Windows – but have the advantage of using a platform that you (and your devs, customers, support staff etc) already know well. It is just a .NET based web app at this point.

If you are using .NET Core, you get the benefits of the new stack including side-by-side versioning and cross-platform. But there is a learning curve involved getting to know .NET Core and its tooling.


Filed under: .NET Security, ASP.NET, IdentityServer, OpenID Connect, WebAPI


Henrik F. Nielsen: ASP.NET WebHooks V1 RTM (Link)

ASP.NET WebHooks V1 RTM was announced a little while back. WebHooks provide a simple pub/sub model for wiring together Web APIs and services with your code. A WebHook can be used to get notified when a file has changed in Dropbox, a code change has been committed to GitHub, a payment has been initiated in PayPal, a card has been created in Trello, and much more. When subscribing, you provide a callback URI where you want to be notified. When an event occurs, an HTTP POST request is sent to your callback URI with information about what happened so that your Web app can act accordingly. WebHooks happen without polling and with no need to hold open a network connection while waiting for notifications.

Microsoft ASP.NET WebHooks makes it easier to both send and receive WebHooks as part of your ASP.NET application.

In addition to hosting your own WebHook server, ASP.NET WebHooks are part of Azure Functions where you can process WebHooks without hosting or managing your own server! You can even go further and host an Azure Bot Service using Microsoft Bot Framework for writing cool bots talking to your customers!

The WebHook code targets ASP.NET Web API 2 and ASP.NET MVC 5, and is available as Open Source on GitHub, and as Nuget packages. For feedback, fixes, and suggestions, you can use GitHub, StackOverflow using the tag asp.net-webhooks, or send me a tweet.
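
As a small receiver-side sketch, assuming the WebHookHandler base class from the Microsoft.AspNet.WebHooks packages (the handler name and payload handling below are illustrative):

using System.Threading.Tasks;
using Microsoft.AspNet.WebHooks;
using Newtonsoft.Json.Linq;

public class MyWebHookHandler : WebHookHandler
{
    public override Task ExecuteAsync(string receiver, WebHookHandlerContext context)
    {
        // 'receiver' names the source (e.g. github, dropbox); the context
        // carries the payload that was POSTed to the callback URI
        JObject data = context.GetDataOrDefault<JObject>();

        // act on the notification here

        return Task.FromResult(true);
    }
}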

For the full announcement, please see the blog Announcing Microsoft ASP.NET WebHooks V1 RTM.

Have fun!

Henrik


Dominick Baier: Bootstrapping OpenID Connect: Discovery

OpenID Connect clients and APIs need certain configuration values to initiate the various protocol requests and to validate identity and access tokens. You can either hard-code these values (e.g. the URL to the authorize and token endpoint, key material etc..) – or get those values dynamically using discovery.

Using discovery has advantages in case one of the needed values changes over time. This will definitely be the case for the key material you use to sign your tokens. In that scenario you want your token consumers to be able to dynamically update their configuration without having to take them down or re-deploy.

The idea is simple: every OpenID Connect provider should offer a JSON document under the /.well-known/openid-configuration URL below its base address (often also called the authority). This document has information about the issuer name, endpoint URLs, key material and capabilities of the provider, e.g. which scopes or response types it supports.

Try https://demo.identityserver.io/.well-known/openid-configuration as an example.

Our IdentityModel library has a little helper class that allows loading and parsing a discovery document, e.g.:

var disco = await DiscoveryClient.GetAsync("https://demo.identityserver.io");
Console.WriteLine(disco.Json);

It also provides strongly typed accessors for most elements, e.g.:

Console.WriteLine(disco.TokenEndpoint);

..or you can access the elements by name:

Console.WriteLine(disco.Json.TryGetString("introspection_endpoint"));

It also gives you access to the key material and the various properties of the JSON encoded key set – e.g. iterating over the key ids:

foreach (var key in disco.KeySet.Keys)
{
    Console.WriteLine(key.Kid);
}

Discovery and security
As you can imagine, the discovery document is a nice target for an attacker. Being able to manipulate the endpoint URLs or the key material would ultimately result in a compromise of a client or an API.

As opposed to e.g. WS-Federation/WS-Trust metadata, the discovery document is not signed. Instead OpenID Connect relies on transport security for authenticity and integrity of the configuration data.

Recently we’ve been involved in a penetration test against client libraries, and one technique the pen-testers used was compromising discovery. Based on their feedback, the following extra checks should be done when consuming a discovery document:

  • HTTPS must be used for the discovery endpoint and all protocol endpoints
  • The issuer name should match the authority specified when downloading the document (that’s actually a MUST in the discovery spec)
  • The protocol endpoints should be “beneath” the authority – and not on a different server or URL (this could be especially interesting for multi-tenant OPs)
  • A key set must be specified

Based on that feedback, we added a configurable validation policy to DiscoveryClient that defaults to the above recommendations. If for whatever reason (e.g. dev environments) you need to relax a setting, you can use the following code:

var client = new DiscoveryClient("http://dev.identityserver.internal");
client.Policy.RequireHttps = false;
 
var disco = await client.GetAsync();

Btw – you can always connect over HTTP to localhost and 127.0.0.1 (but this is also configurable).

Source code here, nuget here.


Filed under: OAuth, OpenID Connect, WebAPI


Dominick Baier: Trying IdentityServer4

We have a number of options for experimenting with, or getting started with, IdentityServer4.

Starting point
It all starts at https://identityserver.io – from here you can find all the links below, as well as our next workshop dates, consulting, production support etc.

Source code
You can find all the source code in our IdentityServer organization on github. Especially IdentityServer4 itself, the samples, and the access token validation middleware.

Nuget
Here’s a list of all our nugets – here’s IdentityServer4, here’s the validation middleware.

Documentation and tutorials
Documentation can be found here. Especially useful to get started are our tutorials.

Demo Site
We have a demo site at https://demo.identityserver.io that runs the latest version of IdentityServer4. We have also pre-configured a number of client types, e.g. hybrid and authorization code (with and without PKCE) as well as implicit and client credentials flow. You can use this site to try IdentityServer with your favourite OpenID Connect client library. There is also a test API that you can call with our access tokens.
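
For example, requesting a token from the demo site with the IdentityModel helper classes might look like the following sketch; the client id, secret and scope are assumptions about the demo configuration:

using System;
using System.Threading.Tasks;
using IdentityModel.Client;

public class DemoClient
{
    public static async Task Run()
    {
        // read the endpoints from the discovery document
        var disco = await DiscoveryClient.GetAsync("https://demo.identityserver.io");

        // assumed demo client credentials and scope
        var tokenClient = new TokenClient(disco.TokenEndpoint, "client", "secret");
        var response = await tokenClient.RequestClientCredentialsAsync("api");

        Console.WriteLine(response.AccessToken);
    }
}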

Compatibility check
Here’s a repo that contains all permutations of IdentityServer3 and 4, Katana and ASP.NET Core Web APIs and JWTs and reference tokens. We use this test harness to ensure cross version compatibility. Feel free to try it yourself.

CI builds
Our CI feed can be found here.

HTH


Filed under: .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: IdentityServer4.1.0.0

It’s done.

Release notes here.

Nuget here.

Docs here.

I am off to holidays.

See you next year.


Filed under: .NET Security, ASP.NET, OAuth, OpenID Connect, WebAPI


Dominick Baier: IdentityServer4 is now OpenID Certified

As of today – IdentityServer4 is officially certified by the OpenID Foundation. Release of 1.0 will be this Friday!

More details here.


Filed under: .NET Security, OAuth, WebAPI


Dominick Baier: Identity vs Permissions

We often see people misusing IdentityServer as an authorization/permission management system. This is troublesome – here’s why.

IdentityServer (hence the name) is really good at providing a stable identity for your users across all applications in your system. And with identity I mean immutable identity (at least for the lifetime of the session) – typical examples would be a user id (aka the subject id), a name, department, email address, customer id etc…

IdentityServer is not so well suited for letting clients or APIs know what this user is allowed to do – e.g. create a customer record, delete a table, read a certain document etc…

And this is not inherently a weakness of IdentityServer – but IdentityServer is a token service, and it’s a fact that claims and especially tokens are not a particularly good medium for transporting such information. Here are a couple of reasons:

  • Claims are supposed to model the identity of a user, not permissions
  • Claims are typically simple strings – you often want something more sophisticated to model authorization information or permissions
  • Permissions of a user are often different depending on which client or API it is using – putting them all into a single identity or access token is confusing and leads to problems. The same permission might even have a different meaning depending on who is consuming it
  • Permissions can change over the lifetime of a session, but the only way to get a new token is to make a roundtrip to the token service. This often requires some UI interaction, which is not preferable
  • Permissions and business logic often overlap – where do you want to draw the line?
  • The only party that knows exactly about the authorization requirements of the current operation is the actual code where it happens – the token service can only provide coarse grained information
  • You want to keep your tokens small. Browser URL length restrictions and bandwidth are often limiting factors
  • And last but not least – it is easy to add a claim to a token. It is very hard to remove one. You never know if somebody already took a hard dependency on it. Every single claim you add to a token should be scrutinized.

In other words – keep permissions and authorization data out of your tokens. Add the authorization information to your context once you get closer to the resource that actually needs the information. And even then, it is tempting to model permissions using claims (the Microsoft services and frameworks kind of push you into that direction) – keep in mind that a simple string is a very limiting data structure. Modern programming languages have much better constructs than that.
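
To illustrate acquiring authorization data close to the code that needs it, here is a sketch using the standard ASP.NET Core authorization primitives; IPermissionStore, the requirement and the permission lookup are hypothetical:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;

// hypothetical application service that knows the permission rules
public interface IPermissionStore
{
    bool CanEditDocuments(string subjectId);
}

public class DocumentEditRequirement : IAuthorizationRequirement { }

// register with: services.AddScoped<IAuthorizationHandler, DocumentEditHandler>();
public class DocumentEditHandler : AuthorizationHandler<DocumentEditRequirement>
{
    private readonly IPermissionStore _permissions;

    public DocumentEditHandler(IPermissionStore permissions)
    {
        _permissions = permissions;
    }

    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context, DocumentEditRequirement requirement)
    {
        // only the stable subject id comes from the token;
        // the permission itself is looked up close to the resource
        var subjectId = context.User.FindFirst("sub")?.Value;

        if (subjectId != null && _permissions.CanEditDocuments(subjectId))
        {
            context.Succeed(requirement);
        }

        return Task.CompletedTask;
    }
}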

What about roles?
That’s a very common question. Roles are a bit of a grey area between identity and authorization. My rule of thumb is that if a role is a fundamental part of the user identity that is of interest to every part of your system – and role membership does not change, or changes only infrequently – it is a candidate for a claim in a token. Examples could be Customer vs Employee – or Patient vs Doctor vs Nurse.

Every other usage of roles – especially if the role membership differs based on the client or API being used – is pure authorization data and should be avoided. If you realize that the number of roles of a user is high – or growing – avoid putting them into the token.

Conclusion
Design for a clean separation of identity and permissions (which is just a re-iteration of authentication vs authorization). Acquire authorization data as close as possible to the code that needs it – only there you can make an informed decision what you really need.

I also often get the question if we have a similar flexible solution to authorization as we have with IdentityServer for authentication – and the answer is – right now – no. But I have the feeling that 2017 will be our year to finally tackle the authorization problem. Stay tuned!


Filed under: .NET Security, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: Optimizing Identity Tokens for size

Generally speaking, you want to keep your (identity) tokens small. They often need to be transferred via length-constrained transport mechanisms – especially the browser URL, which might have limitations (e.g. 2 KB in IE). You also need to somehow store the identity token for the length of a session if you want to use the post logout redirect feature at logout time.

Therefore the OpenID Connect specification suggests the following (in section 5.4):

The Claims requested by the profile, email, address, and phone scope values are returned from the UserInfo Endpoint, as described in Section 5.3.2, when a response_type value is used that results in an Access Token being issued. However, when no Access Token is issued (which is the case for the response_type value id_token), the resulting Claims are returned in the ID Token.

IOW – if only an identity token is requested, put all claims into the token. If however an access token is requested as well (e.g. via id_token token or code id_token), it is OK to remove the claims from the identity token and rather let the client use the userinfo endpoint to retrieve them.

That’s how we always handled identity token generation in IdentityServer by default. You could then override our default behaviour by setting the AlwaysIncludeInIdToken flag on the ScopeClaim class.

When we did the configuration re-design in IdentityServer4, we asked ourselves if this override feature is still required. Times have changed a bit and the popular client libraries out there (e.g. the ASP.NET Core OpenID Connect middleware or Brock’s JS client) automatically use the userinfo endpoint anyway as part of the authentication process.

So we removed it.

Shortly after that, several people brought to our attention that they were actually relying on that feature and are now missing their claims in the identity token without a way to change configuration. Sorry about that.

Post RC5, we brought this feature back – it is now a client setting, and not a claims setting anymore. It will be included in RTM next week and documented in our docs.
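
As a sketch of what the new client-level setting looks like, assuming the AlwaysIncludeUserClaimsInIdToken property on the IdentityServer4 Client class (the client id and scopes are placeholders):

using IdentityServer4.Models;

public static class Clients
{
    public static Client Spa = new Client
    {
        ClientId = "spa",                       // placeholder
        AllowedGrantTypes = GrantTypes.Implicit,
        AllowedScopes = { "openid", "profile" },

        // force the user claims into the identity token instead of
        // relying on a round-trip to the userinfo endpoint
        AlwaysIncludeUserClaimsInIdToken = true
    };
}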

I hope this post explains our motivation, and some background, why this behaviour existed in the first place.


Filed under: .NET Security, IdentityServer, OpenID Connect, WebAPI

