Andrew Lock: How to pass parameters to a view component


In my last post I showed how to create a custom view component to simplify my Razor views, and separate the logic of what to display from the UI concern.

View components are a good fit where you have some complex rendering logic, which does not belong in the UI, and is also not a good fit for an action endpoint - approximately equivalent to child actions from the previous version of ASP.NET.

In this post I will show how you can pass parameters to a view component when invoking it from your view, from a controller, or when used as a tag helper.

In the previous post I showed how to create a simple LoginStatusViewComponent that shows you the email of the user and a log out link when a user is logged in, and register or login links when the user is anonymous.

The view component itself was simple, but it separated out the logic of which template to display from the templates themselves. It was created with a simple InvokeAsync method that did not require any parameters:

public class LoginStatusViewComponent : ViewComponent  
{
    private readonly SignInManager<ApplicationUser> _signInManager;
    private readonly UserManager<ApplicationUser> _userManager;

    public LoginStatusViewComponent(SignInManager<ApplicationUser> signInManager, UserManager<ApplicationUser> userManager)
    {
        _signInManager = signInManager;
        _userManager = userManager;
    }

    public async Task<IViewComponentResult> InvokeAsync()
    {
        if (_signInManager.IsSignedIn(HttpContext.User))
        {
            var user = await _userManager.GetUserAsync(HttpContext.User);
            return View("LoggedIn", user);
        }
        else
        {
            return View("Anonymous");
        }
    }
}

Invoking the LoginStatus view component from the _layout.cshtml involves calling Component.InvokeAsync and awaiting the response:

 @await Component.InvokeAsync("LoginStatus")

Updating a view component to accept parameters

The example presented is pretty simple, in that it is self-contained; the InvokeAsync method does not have any parameters to pass to it. But what if we want to control how the view component behaves when invoked? For example, imagine that you want to control whether to display the Register link for anonymous users. Maybe your site has an external registration system instead, so the "register" link is not valid in some cases.

First, let's create a simple view model to use in our "anonymous" view:

public class AnonymousViewModel  
{
    public bool IsRegisterLinkVisible { get; set; }
}

Next, we update the InvokeAsync method of our view component to take a boolean parameter. If the user is not logged in, we will pass this parameter down into the view model:

public async Task<IViewComponentResult> InvokeAsync(bool shouldShowRegisterLink)  
{
    if (_signInManager.IsSignedIn(HttpContext.User))
    {
        var user = await _userManager.GetUserAsync(HttpContext.User);
        return View("LoggedIn", user);
    }
    else
    {
        var viewModel = new AnonymousViewModel
        {
            IsRegisterLinkVisible = shouldShowRegisterLink
        };
        return View(viewModel);
    }
}

Finally, we update the anonymous default.cshtml template to honour this boolean:

@model LoginStatusViewComponent.AnonymousViewModel
<ul class="nav navbar-nav navbar-right">  
    @if(Model.IsRegisterLinkVisible)
    {
        <li><a asp-area="" asp-controller="Account" asp-action="Register">Register</a></li>
    }
    <li><a asp-area="" asp-controller="Account" asp-action="Login">Log in</a></li>
</ul>  

Passing parameters to view components using InvokeAsync

Our component is all set up to conditionally show or hide the register link, all that remains is to invoke it.

Passing parameters to a view component is achieved using anonymous types. In our layout, we specify the parameters in an optional parameter passed to InvokeAsync:

<div class="navbar-collapse collapse">  
    <ul class="nav navbar-nav">
        <li><a asp-area="" asp-controller="Home" asp-action="Index">Home</a></li>
        <li><a asp-area="" asp-controller="Home" asp-action="About">About</a></li>
        <li><a asp-area="" asp-controller="Home" asp-action="Contact">Contact</a></li>
    </ul>
    @await Component.InvokeAsync("LoginStatus", new { shouldShowRegisterLink = false })
</div>  

With this in place, the register link can be shown or hidden as required.

If you omit the anonymous type, then the parameters will all have their default values (false for our bool, but null for objects).

Passing parameters to view components when invoked from a controller

Passing parameters to a view component when invoked from a controller is very similar - just pass an anonymous type with the appropriate values when returning the result from the controller:

public IActionResult IndexVC()  
{
    return ViewComponent("LoginStatus", new { shouldShowRegisterLink = false });
}

Passing parameters to view components when invoked as a tag helper in ASP.NET Core 1.1.0

In the previous post I showed how to invoke view components as tag helpers. The parameterless version of our invocation looks like this:

<div class="navbar-collapse collapse">  
    <ul class="nav navbar-nav">
        <li><a asp-area="" asp-controller="Home" asp-action="Index">Home</a></li>
        <li><a asp-area="" asp-controller="Home" asp-action="About">About</a></li>
        <li><a asp-area="" asp-controller="Home" asp-action="Contact">Contact</a></li>
    </ul>
    <vc:login-status></vc:login-status>
</div>  

Passing parameters to a view component tag helper is the same as for normal tag helpers. You convert the parameters to lower-kebab case and add them as attributes to the tag, e.g.:

<vc:login-status should-show-register-link="false"></vc:login-status>  

This gives a nice syntax for invoking our view components without having to drop into C# land and use @await Component.InvokeAsync(), and will almost certainly become the preferred way to use them in the future.

Summary

In this post I showed how you can pass parameters to a view component. When invoking from a view in ASP.NET Core 1.0.0 or from a controller, you can use an anonymous type to pass parameters, where the properties of the anonymous type match the names of the view component's parameters.

In ASP.NET Core 1.1.0 you can use the alternative tag helper invocation method to pass parameters as attributes. Just remember to use lower-kebab-case for your component name and parameters! You can find sample code for this approach on GitHub.


Dominick Baier: Platforms where you can run IdentityServer4

There is some confusion about where, and on which platform/OS you can run IdentityServer4 – or more generally speaking: ASP.NET Core.

IdentityServer4 is ASP.NET Core middleware – and ASP.NET Core (despite its name) runs on the full .NET Framework 4.5.x and upwards or .NET Core.

If you are using the full .NET Framework you are tied to Windows – but have the advantage of using a platform that you (and your devs, customers, support staff etc) already know well. It is just a .NET based web app at this point.

If you are using .NET Core, you get the benefits of the new stack including side-by-side versioning and cross-platform. But there is a learning curve involved getting to know .NET Core and its tooling.


Filed under: .NET Security, ASP.NET, IdentityServer, OpenID Connect, WebAPI


Anuraj Parameswaran: Integrate HangFire With ASP.NET Core

This post is about integrating Hangfire with ASP.NET Core. Hangfire is an incredibly easy way to perform fire-and-forget, delayed and recurring jobs inside ASP.NET applications. CPU and I/O intensive, long-running and short-running jobs are supported. No Windows Service / Task Scheduler is required. It is backed by Redis, SQL Server, SQL Azure and MSMQ. Hangfire provides a unified programming model to handle background tasks in a reliable way and run them on shared hosting, dedicated hosting or in the cloud. The product I am working on has a feature for adding a watermark to the images uploaded by users. Right now we are using a console app, which monitors a directory at specified intervals and applies the watermark to newly uploaded images. With Hangfire we can instead schedule / execute the watermark operation as a background task, rather than polling a directory for new images.

To integrate Hangfire, first you need to add Hangfire as a dependency in the project.json file. The package is "Hangfire": "1.6.8"; I am using .NET Core 1.1.

Now you need to modify your startup class Configure() and ConfigureServices() methods.

Here is the ConfigureServices method.

public void ConfigureServices(IServiceCollection services)
{
    services.AddHangfire(config => 
        config.UseSqlServerStorage(Configuration.GetConnectionString("HangfireConnection")));
    
    services.AddMvc();

    //The following line is only required if your jobs are failing.
    services.AddTransient<HomeController, HomeController>();
}

And here is the configure method.

public void Configure(IApplicationBuilder app, 
    IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();
    //The following line is optional; it is only required if you want to monitor your jobs.
    //Make sure you add the required authentication.
    app.UseHangfireDashboard();
    app.UseHangfireServer();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

Now you're done with the Hangfire configuration. As part of the upload logic or action method, you can add the watermark job to the background queue, or schedule it to run at a later time.

Here is the code to apply the watermark in a background job.

BackgroundJob.Enqueue(() => ApplyWatermark(filename));
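Hangfire serializes the method call and its arguments and invokes the method later on a background server, so the job method should be public and its arguments easily serializable. A minimal sketch of what a hypothetical ApplyWatermark method could look like (the real implementation is not shown in this post):

public static void ApplyWatermark(string filename)
{
    // In the real application this would load the image, stamp the
    // watermark and save the result; here it just logs the call.
    Console.WriteLine($"Applying watermark to {filename}");
}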

If you'd like to schedule it for a later time, say after 5 minutes, you can do that using the Schedule() method.

BackgroundJob.Schedule(() => ApplyWatermark(filename), TimeSpan.FromMinutes(5));

Hangfire also supports recurring jobs and continuations. Recurring jobs fire many times on the specified CRON schedule.

RecurringJob.AddOrUpdate(
    () => Console.WriteLine("Recurring!"),
    Cron.Daily);

Continuations are executed when their parent job has finished.
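A sketch of a continuation chained to the watermark job, based on the Hangfire 1.6 API, might look like this:

var jobId = BackgroundJob.Enqueue(() => ApplyWatermark(filename));
BackgroundJob.ContinueWith(jobId, () => Console.WriteLine("Watermark applied!"));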

As mentioned, the Hangfire Dashboard helps to monitor the tasks.

HangFire Dashboard

By default Hangfire allows access to the Dashboard pages only for local requests. In order to grant the appropriate rights for production use, you need to implement the IDashboardAuthorizationFilter interface to configure authorization.

Here is a minimal IDashboardAuthorizationFilter, which allows all authenticated users to view the dashboard. (Note: this approach is not recommended for production.)

public class CustomAuthorizeFilter : IDashboardAuthorizationFilter
{
    public bool Authorize([NotNull] DashboardContext context)
    {
        var httpcontext = context.GetHttpContext();
        return httpcontext.User.Identity.IsAuthenticated;
    }
}

And here is how the authorization filter is registered with the dashboard.

app.UseHangfireDashboard("/hangfire", new DashboardOptions() { 
    Authorization = new[] { new CustomAuthorizeFilter() }
});

Happy Programming :)


Damien Bowden: Angular 2 Lazy Loading with Webpack 2

This article shows how Angular 2 lazy loading can be supported using Webpack 2 for both JIT and AOT builds. The Webpack loader angular-router-loader from Brandon Roberts is used to implement this.

A big thanks to Roberto Simonetti for his help in this.

Code: Visual Studio 2015 project | Visual Studio 2017 project

Blogs in this series:

2017.01.18: Updated to webpack 2.2.0

First create an Angular 2 module

In this example, the about module will be lazy loaded when the user clicks on the about tab. The about.module.ts is the entry point for this feature. The module has its own component and routing.
The app will now be set up to lazy load the AboutModule.

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';

import { AboutRoutes } from './about.routes';
import { AboutComponent } from './components/about.component';

@NgModule({
    imports: [
        CommonModule,
        AboutRoutes
    ],

    declarations: [
        AboutComponent
    ],

})

export class AboutModule { }

Add the angular-router-loader Webpack loader to the package.json file

To add lazy loading to the app, the angular-router-loader npm package needs to be added to the devDependencies in the package.json npm file.

"devDependencies": {
    "@types/node": "7.0.0",
    "angular2-template-loader": "^0.6.0",
    "angular-router-loader": "^0.5.0",

Configure the Angular 2 routing

The lazy loading routing can be added to the app.routes.ts file. The loadChildren property defines the path to the module file and the class name of the module which can be lazy loaded. It is also possible to pre-load lazy loaded modules if required.

import { Routes, RouterModule } from '@angular/router';

export const routes: Routes = [
    { path: '', redirectTo: 'home', pathMatch: 'full' },
    {
        path: 'about', loadChildren: './modules/about/about.module#AboutModule',
    }
];

export const AppRoutes = RouterModule.forRoot(routes);

Update the tsconfig-aot.json and tsconfig.json files

Now the tsconfig.json for development JIT builds and the tsconfig-aot.json for AOT production builds need to be configured to load the AboutModule module.

AOT production build

The files property contains all the module entry points as well as the app entry file. The angularCompilerOptions property defines the folder which the AOT output will be built into. This must match the configuration in the Webpack production config file.

{
  "compilerOptions": {
    "target": "es5",
    "module": "es2015",
    "moduleResolution": "node",
    "sourceMap": false,
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "removeComments": true,
    "noImplicitAny": true,
    "suppressImplicitAnyIndexErrors": true,
    "skipLibCheck": true,
    "lib": [
      "es2015",
      "dom"
    ]
  },
  "files": [
    "angular2App/app/app.module.ts",
    "angular2App/app/modules/about/about.module.ts",
    "angular2App/main-aot.ts"
  ],
  "angularCompilerOptions": {
    "genDir": "aot",
    "skipMetadataEmit": true
  },
  "compileOnSave": false,
  "buildOnSave": false
}

JIT development build

The modules and entry points are also defined for the JIT build.

{
  "compilerOptions": {
    "target": "es5",
    "module": "es2015",
    "moduleResolution": "node",
    "sourceMap": true,
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "removeComments": true,
    "noImplicitAny": true,
    "skipLibCheck": true,
    "lib": [
      "es2015",
      "dom"
    ],
    "types": [
      "node"
    ]
  },
  "files": [
    "angular2App/app/app.module.ts",
    "angular2App/app/modules/about/about.module.ts",
    "angular2App/main.ts"
  ],
  "awesomeTypescriptLoaderOptions": {
    "useWebpackText": true
  },
  "compileOnSave": false,
  "buildOnSave": false
}

Configure Webpack to chunk and use the router lazy loading

Now the webpack configuration needs to be updated for the lazy loading.

AOT production build

The webpack.prod.js file requires that the chunkFilename property is set in the output, so that webpack chunks the lazy load modules.

output: {
        path: './wwwroot/',
        filename: 'dist/[name].[hash].bundle.js',
        chunkFilename: 'dist/[id].[hash].chunk.js',
        publicPath: '/'
},

The angular-router-loader is added to the loaders. The genDir folder defined here must match the definition in tsconfig-aot.json.

 module: {
  rules: [
    {
        test: /\.ts$/,
        loaders: [
            'awesome-typescript-loader',
            'angular-router-loader?aot=true&genDir=aot/'
        ]
    },

JIT development build

The webpack.dev.js file requires that the chunkFilename property is set in the output, so that webpack chunks the lazy load modules.

output: {
        path: './wwwroot/',
        filename: 'dist/[name].bundle.js',
        chunkFilename: 'dist/[id].chunk.js',
        publicPath: '/'
},

The angular-router-loader is added to the loaders.

 module: {
  rules: [
    {
        test: /\.ts$/,
        loaders: [
            'awesome-typescript-loader',
            'angular-router-loader',
            'angular2-template-loader',        
            'source-map-loader',
            'tslint-loader'
        ]
    },

Build and run

Now the application can be built using the npm build scripts and the dotnet command tool.

Open a command line in the root of the src files. Install the npm packages:

npm install

Now run the production build. The build-production script does an ngc build, followed by a webpack production build.

npm run build-production

You can see that Webpack creates an extra chunked file for the About Module.

lazyloadingwebpack_01

Then start the application. The server is implemented using ASP.NET Core 1.1.

dotnet run

When the application is started, the AboutModule is not loaded.

lazyloadingwebpack_02

When the about tab is clicked, the chunked AboutModule is loaded.

lazyloadingwebpack_03

Absolutely fantastic. You could also pre-load the modules if required. See this blog.

Links:

https://github.com/brandonroberts/angular-router-loader

https://www.npmjs.com/package/angular-router-loader

https://github.com/robisim74/angular2localization/tree/gh-pages

https://vsavkin.com/angular-router-preloading-modules-ba3c75e424cb

https://webpack.github.io/docs/



Henrik F. Nielsen: ASP.NET WebHooks V1 RTM (Link)

ASP.NET WebHooks V1 RTM was announced a little while back. WebHooks provide a simple pub/sub model for wiring together Web APIs and services with your code. A WebHook can be used to get notified when a file has changed in Dropbox, a code change has been committed to GitHub, a payment has been initiated in PayPal, a card has been created in Trello, and much more. When subscribing, you provide a callback URI where you want to be notified. When an event occurs, an HTTP POST request is sent to your callback URI with information about what happened so that your Web app can act accordingly. WebHooks happen without polling and with no need to hold open a network connection while waiting for notifications.

Microsoft ASP.NET WebHooks makes it easier to both send and receive WebHooks as part of your ASP.NET application.
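On the receiving side, for example, a custom handler derives from the WebHookHandler base class. The sketch below assumes one of the Microsoft.AspNet.WebHooks receiver packages is installed and registered:

public class MyWebHookHandler : WebHookHandler
{
    public override Task ExecuteAsync(string receiver, WebHookHandlerContext context)
    {
        // context.Data contains the deserialized payload that was POSTed
        // to the callback URI when the event fired.
        return Task.FromResult(true);
    }
}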

In addition to hosting your own WebHook server, ASP.NET WebHooks are part of Azure Functions where you can process WebHooks without hosting or managing your own server! You can even go further and host an Azure Bot Service using Microsoft Bot Framework for writing cool bots talking to your customers!

The WebHook code targets ASP.NET Web API 2 and ASP.NET MVC 5, and is available as Open Source on GitHub, and as Nuget packages. For feedback, fixes, and suggestions, you can use GitHub, StackOverflow using the tag asp.net-webhooks, or send me a tweet.

For the full announcement, please see the blog Announcing Microsoft ASP.NET WebHooks V1 RTM.

Have fun!

Henrik


Anuraj Parameswaran: Using MEF in .NET Core

This post is about using MEF (Managed Extensibility Framework) in .NET Core. The Managed Extensibility Framework or MEF is a library for creating lightweight, extensible applications. It allows application developers to discover and use extensions with no configuration required. It also lets extension developers easily encapsulate code and avoid fragile hard dependencies. MEF not only allows extensions to be reused within applications, but across applications as well.

To use MEF, first you need to add a reference to Microsoft.Composition in your project.json; you also need to import portable-net45+win8+wp8+wpa81, as this package is not compatible with dnxcore50. Here is the project.json file.

{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable",
    "emitEntryPoint": true
  },
  "dependencies": {
    "Microsoft.Composition": "1.0.30"
  },
  "frameworks": {
    "netcoreapp1.1": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.1.0"
        }
      },
      "imports": "portable-net45+win8+wp8+wpa81"
    }
  }
}

I am using the example code from the MEF website.

First you need to create the interface you want to export. Then implement the interface and decorate the implementing class with the Export attribute.

public interface IMessageSender
{
    void Send(string message);
}

[Export(typeof(IMessageSender))]
public class EmailSender : IMessageSender
{
    public void Send(string message)
    {
        Console.WriteLine(message);
    }
}

Now for the composition part. Catalogs are not available in the Microsoft.Composition package; a ContainerConfiguration is used to build the container instead.

var assemblies = new[] { typeof(Program).GetTypeInfo().Assembly };
var configuration = new ContainerConfiguration()
    .WithAssembly(typeof(Program).GetTypeInfo().Assembly);
using (var container = configuration.CreateContainer())
{
    MessageSender = container.GetExport<IMessageSender>();
}

It will load all the types from the assembly that are decorated with the Export attribute and attach them to the matching Import properties. Here is the complete code.

using System;
using System.Composition;
using System.Composition.Hosting;
using System.Reflection;

public class Program
{
    public static void Main(string[] args)
    {
        Program p = new Program();
        p.Run();
    }

    public void Run()
    {
        Compose();
        MessageSender.Send("Hello MEF");
    }

    [Import]
    public IMessageSender MessageSender { get; set; }
    private void Compose()
    {
        var assemblies = new[] { typeof(Program).GetTypeInfo().Assembly };
        var configuration = new ContainerConfiguration()
            .WithAssembly(typeof(Program).GetTypeInfo().Assembly);
        using (var container = configuration.CreateContainer())
        {
            MessageSender = container.GetExport<IMessageSender>();
        }
    }
}

And here is a screenshot of MEF running in a .NET Core console app.

MEF on .NET Core

Happy Programming :)


Andrew Lock: An introduction to ViewComponents - a login status view component


View components are one of the potentially less well known features of ASP.NET Core Razor views. Unlike tag helpers, which have a pretty much direct equivalent in the Html Helpers of the previous ASP.NET, view components are a bit different.

In spirit, they fit somewhere between a partial view and a full controller - approximately like a ChildAction. However, whereas actions and controllers have full model binding semantics, the filter pipeline and so on, view components are invoked directly with explicit data. They are more powerful than a partial view, however, as they can contain business logic and separate the UI generation from the underlying behaviour.

View components seem to fit best in situations where you would want to use a partial, but the rendering logic is complicated and may need to be tested.

In this post, I'll use the example of a Login widget that displays your email address when you are logged in, and a register / login link when you are logged out.

This is a trivial example - the behaviour above is achieved in the default templates without the use of view components. This post is just meant to introduce you to the concept of view components, so you can see when to use them in your own applications.

Creating a view component

View components can be defined in a multitude of ways. You can give your component a name ending in ViewComponent, you can decorate it with the [ViewComponent] attribute, or you can derive from the ViewComponent base class. The latter of these is probably the most obvious, and provides a number of helper properties you can use, but the choice is yours.
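As a rough sketch of those three options (the class names below are hypothetical, not from the sample project), each of the following would be discovered as a view component; a string result is rendered as plain content:

// Discovered by naming convention - the class name ends in "ViewComponent"
public class CopyrightViewComponent
{
    public Task<string> InvokeAsync() => Task.FromResult("© 2017 My Site");
}

// Discovered via the attribute, with an explicit name
[ViewComponent(Name = "Copyright")]
public class CopyrightWidget
{
    public Task<string> InvokeAsync() => Task.FromResult("© 2017 My Site");
}

// Derives from the ViewComponent base class - the approach used in this post
public class FooterViewComponent : ViewComponent
{
    public Task<IViewComponentResult> InvokeAsync() =>
        Task.FromResult<IViewComponentResult>(Content("© 2017 My Site"));
}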

To implement a view component you must expose a public method called InvokeAsync which is called when the component is invoked:

public Task<IViewComponentResult> InvokeAsync();  

As is typical for ASP.NET Core, this method is found at runtime using reflection, so if you forget to add it, you won't get compile time errors, but you will get an exception at runtime.

Other than this restriction, you are pretty much free to design your view components as you like. They support dependency injection, so you are able to inject dependencies into the constructor and use them in your InvokeAsync method. For example, you could inject a DbContext and query the database for the data to display in your component.
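As a minimal sketch of that idea (BlogDbContext and Post are hypothetical types, not part of the sample project), a component that queries the database might look like this:

public class RecentPostsViewComponent : ViewComponent
{
    private readonly BlogDbContext _context; // hypothetical EF Core DbContext

    public RecentPostsViewComponent(BlogDbContext context)
    {
        _context = context;
    }

    public async Task<IViewComponentResult> InvokeAsync(int count)
    {
        // Fetch the most recent posts to display in the component
        var posts = await _context.Posts
            .OrderByDescending(p => p.PublishedAt)
            .Take(count)
            .ToListAsync();

        return View(posts);
    }
}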

The LoginStatusViewComponent

Now you have a basic understanding of view components, we can take a look at the LoginStatusViewComponent. I created this component in a project created using the default MVC web template in Visual Studio with authentication.

This simple view component only has a small bit of logic, but it demonstrates the features of view components nicely.

public class LoginStatusViewComponent : ViewComponent  
{
    private readonly SignInManager<ApplicationUser> _signInManager;
    private readonly UserManager<ApplicationUser> _userManager;

    public LoginStatusViewComponent(SignInManager<ApplicationUser> signInManager, UserManager<ApplicationUser> userManager)
    {
        _signInManager = signInManager;
        _userManager = userManager;
    }

    public async Task<IViewComponentResult> InvokeAsync()
    {
        if (_signInManager.IsSignedIn(HttpContext.User))
        {
            var user = await _userManager.GetUserAsync(HttpContext.User);
            return View("LoggedIn", user);
        }
        else
        {
            return View();
        }
    }
}

You can see I have chosen to derive from the base ViewComponent class, as that provides me access to a number of helper methods.

We are injecting two services into the constructor of our component. These will be fulfilled automatically by the dependency injection container when our component is invoked.

Our InvokeAsync method is pretty self explanatory. We are checking if the current user is signed in using the SignInManager<>, and if they are, we fetch the associated ApplicationUser from the UserManager<>. Finally we call the helper View method, passing in the name of a template to render and the user as the model. If the user is not signed in, we call the helper View without a template argument.

The calls at the end of the InvokeAsync method are reminiscent of action methods. They are doing a very similar thing, in that they are creating a result which will execute a view template, passing in the provided model.

In our example, we are rendering a different template depending on whether the user is logged in or not. That means we could test this ViewComponent in isolation, testing that the correct template is displayed depending on our business requirements, without having to inspect the HTML output, which would be our only choice if this logic was embedded in a partial view instead.

Rendering View templates

When you use return View() in your view component, you are returning a ViewViewComponentResult (yes, that name is correct!) which is analogous to the ViewResult you typically return from MVC action methods.

This object contains an optional template name and view model, which is used to invoke a Razor view template. The location of the view to execute is given by convention, very similar to MVC actions. In the case of our LoginStatusViewComponent, the Razor engine will search for views in two folders:

  1. Views\Components\LoginStatus; and
  2. Views\Components\Shared

If you don't specify the name of the template to find, then the engine will assume the file is called default.cshtml. In the example I provided, when the user is signed in we explicitly provide a template name, so the engine will look for the template at:

  1. Views\Components\LoginStatus\LoggedIn.cshtml; and
  2. Views\Components\Shared\LoggedIn.cshtml

The view templates themselves are just normal Razor, so they can contain all the usual features, tag helpers, strongly typed models etc. The LoggedIn.cshtml file for our LoginStatusViewComponent is shown below:

@model ApplicationUser
<form asp-area="" asp-controller="Account" asp-action="LogOff" method="post" id="logoutForm" class="navbar-right">  
    <ul class="nav navbar-nav navbar-right">
        <li>
            <a asp-area="" asp-controller="Manage" asp-action="Index" title="Manage">Hello @Model.Email!</a>
        </li>
        <li>
            <button type="submit" class="btn btn-link navbar-btn navbar-link">Log off</button>
        </li>
    </ul>
</form>  

There is nothing special here - we are using the form and action link tag helpers to create links and we are writing values from our strongly typed model to the response. All bread and butter for razor templates!

When the user is not logged in, I didn't specify a template name, so the default name of default.cshtml is used.

This view is even simpler as we didn't pass a model to the view; it just contains a couple of links:

<ul class="nav navbar-nav navbar-right">  
    <li><a asp-area="" asp-controller="Account" asp-action="Register">Register</a></li>
    <li><a asp-area="" asp-controller="Account" asp-action="Login">Log in</a></li>
</ul>  

Invoking a view component

With your component configured, all that remains is to invoke it from your view. View components can be invoked from a different view by calling, in this case, @await Component.InvokeAsync("LoginStatus"), where "LoginStatus" is the name of the view component. We can call it in the header of our _Layout.cshtml:

<div class="navbar-collapse collapse">  
    <ul class="nav navbar-nav">
        <li><a asp-area="" asp-controller="Home" asp-action="Index">Home</a></li>
        <li><a asp-area="" asp-controller="Home" asp-action="About">About</a></li>
        <li><a asp-area="" asp-controller="Home" asp-action="Contact">Contact</a></li>
    </ul>
    @await Component.InvokeAsync("LoginStatus")
</div>  

Invoking directly from a controller

It is also possible to return a view component directly from a controller; this is the closest you can get to directly exposing a view component at an endpoint:

public IActionResult IndexVC()  
{
    return ViewComponent("LoginStatus");
}

Calling View Components like TagHelpers in ASP.NET Core 1.1.0

View components work well, but one of the things that seemed like a bit of a step back was the need to explicitly use the @ symbol to render them. One of the nice things brought to Razor with ASP.NET Core was tag-helpers. These do pretty much the same job as the HTML helpers from the previous ASP.NET MVC Razor views, but in a more editor-friendly way.

For example, consider the following block, which would render a label, text box and validation message for a property on your model called Email:

<div class="form-group">  
    @Html.LabelFor(x=>x.Email, new { @class= "col-md-2 control-label"})
    <div class="col-md-10">
        @Html.TextBoxFor(x=>x.Email, new { @class= "form-control"})
        @Html.ValidationMessageFor(x=>x.Email, null, new { @class= "text-danger" })
    </div>
</div>  

Compare that to the new tag helpers, which allow you to declare your model bindings as asp- attributes:

<div class="form-group">  
    <label asp-for="Email" class="col-md-2 control-label"></label>
    <div class="col-md-10">
        <input asp-for="Email" class="form-control" />
        <span asp-validation-for="Email" class="text-danger"></span>
    </div>
</div>  

Syntax highlighting is easier for basic editors and you don't need to use ugly @ symbols to escape the class properties - everything is just that little bit nicer. In ASP.NET Core 1.1.0, you can get similar benefits when invoking your view components, by using the vc: prefix.

To repeat my LoginStatus example in ASP.NET Core 1.1.0, you first need to register your view components as tag helpers in _ViewImports.cshtml (where WebApplication1 is the namespace of your view components):

@addTagHelper *, WebApplication1

and you can then invoke your view component using the tag helper syntax:

<div class="navbar-collapse collapse">  
    <ul class="nav navbar-nav">
        <li><a asp-area="" asp-controller="Home" asp-action="Index">Home</a></li>
        <li><a asp-area="" asp-controller="Home" asp-action="About">About</a></li>
        <li><a asp-area="" asp-controller="Home" asp-action="Contact">Contact</a></li>
    </ul>
    <vc:login-status></vc:login-status>
</div>  

Note the name of the tag helper here, vc:login-status. The vc prefix indicates that you are invoking a view component, and the name of the helper is our view component's name (LoginStatus) converted to lower-kebab case (thanks to the ASP.NET monsters for figuring out the correct name)!

With these two pieces in place, your tag helper is functionally equivalent to the previous invocation, but is a bit nicer to read. :)

Summary

This post provided an introduction to building your first view component, including how to invoke it. You can find sample code on GitHub. In the next post, I'll show how you can pass parameters to your component when you invoke it.


Anuraj Parameswaran: Using NLog in ASP.NET Core

This post is about using NLog in ASP.NET Core. NLog is a free logging platform for .NET, Xamarin, Silverlight and Windows Phone with rich log routing and management capabilities. NLog makes it easy to produce and manage high-quality logs for your application regardless of its size or complexity.

To use NLog in ASP.NET Core, first you need to add the NLog package to the project.json file. I am using ASP.NET Core 1.1; in the project.json you can add the NLog package in the dependencies section. You need to use the "NLog.Extensions.Logging": "1.0.0-rtm-alpha5" package. Once you have added the package, restore the packages to use NLog in your code. Next you need the configuration file, which is required by NLog; it helps to configure the NLog logging options.

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      internalLogLevel="Warn"
      internalLogFile="c:\temp\internal.txt">


  <!-- define various log targets -->
  <targets>
    <!-- write logs to file -->
    <target xsi:type="File" name="allfile" fileName="c:\temp\nlog-all-${shortdate}.log"
                 layout="${longdate}|${logger}|${uppercase:${level}}|${message} ${exception}" />

    <target xsi:type="File" name="ownFile" fileName="c:\temp\nlog-own-${shortdate}.log"
              layout="${longdate}|${logger}|${uppercase:${level}}|${message} ${exception}" />

    <target xsi:type="Null" name="blackhole" />
  </targets>

  <rules>
    <!--All logs, including from Microsoft-->
    <logger name="*" minlevel="Trace" writeTo="allfile" />

    <!--Skip Microsoft logs and so log only own logs-->
    <logger name="Microsoft.*" minlevel="Trace" writeTo="blackhole" final="true" />
    <logger name="*" minlevel="Trace" writeTo="ownFile" />
  </rules>
</nlog>

It is a typical nlog.config file, which needs to be placed in the web project root folder, not in wwwroot. You can find the config file in the GitHub repo as well.

Now you need to modify your startup class Configure method to enable NLog and add the configuration options.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddNLog();
    env.ConfigureNLog("nlog.config");
    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

It is done. Now if you go and check the C:\Temp folder, you can see the log file.

If you want to enable logging in controllers, you can either inject the ILogger using the ASP.NET Core DI mechanism, or you can use NLog's LogManager class.

Logging with ASP.NET Core DI.

private readonly ILogger<HomeController> _logger;
public HomeController(ILogger<HomeController> logger)
{
    _logger = logger;
}

public IActionResult Index()
{
    _logger.LogInformation("Index Page invoked");
    return View();
}

Logging with NLog’s LogManager class.

private static Logger _logger = LogManager.GetCurrentClassLogger();
public IActionResult Index()
{
    _logger.Info("Index Page invoked");
    return View();
}

If you are using the ASP.NET Core DI mechanism, it will create a new log file, but if you are using NLog's own mechanism it will log to the same file. Make sure you include the nlog.config file in the include section of publishOptions:

"publishOptions": {
    "include": [
        "wwwroot",
        "Views",
        "appsettings.json",
        "web.config",
        "nlog.config"
    ]
},

Happy Programming :)


Dominick Baier: Bootstrapping OpenID Connect: Discovery

OpenID Connect clients and APIs need certain configuration values to initiate the various protocol requests and to validate identity and access tokens. You can either hard-code these values (e.g. the URL to the authorize and token endpoint, key material etc..) – or get those values dynamically using discovery.

Using discovery has advantages in case one of the needed values changes over time. This will be definitely the case for the key material you use to sign your tokens. In that scenario you want your token consumers to be able to dynamically update their configuration without having to take them down or re-deploy.

The idea is simple: every OpenID Connect provider should offer a JSON document under the /.well-known/openid-configuration URL below its base address (often also called the authority). This document has information about the issuer name, endpoint URLs, key material and capabilities of the provider, e.g. which scopes or response types it supports.

Try https://demo.identityserver.io/.well-known/openid-configuration as an example.

Our IdentityModel library has a little helper class that allows loading and parsing a discovery document, e.g.:

var disco = await DiscoveryClient.GetAsync("https://demo.identityserver.io");
Console.WriteLine(disco.Json);

It also provides strongly typed accessors for most elements, e.g.:

Console.WriteLine(disco.TokenEndpoint);

..or you can access the elements by name:

Console.WriteLine(disco.Json.TryGetString("introspection_endpoint"));

It also gives you access to the key material and the various properties of the JSON encoded key set – e.g. iterating over the key ids:

foreach (var key in disco.KeySet.Keys)
{
    Console.WriteLine(key.Kid);
}

Discovery and security
As you can imagine, the discovery document is a nice target for an attacker. Being able to manipulate the endpoint URLs or the key material would ultimately result in a compromise of a client or an API.

As opposed to e.g. WS-Federation/WS-Trust metadata, the discovery document is not signed. Instead OpenID Connect relies on transport security for authenticity and integrity of the configuration data.

Recently we’ve been involved in a penetration test against client libraries, and one technique the pen-testers used was compromising discovery. Based on their feedback, the following extra checks should be done when consuming a discovery document:

  • HTTPS must be used for the discovery endpoint and all protocol endpoints
  • The issuer name should match the authority specified when downloading the document (that’s actually a MUST in the discovery spec)
  • The protocol endpoints should be “beneath” the authority – and not on a different server or URL (this could be especially interesting for multi-tenant OPs)
  • A key set must be specified

Based on that feedback, we added a configurable validation policy to DiscoveryClient that defaults to the above recommendations. If for whatever reason (e.g. dev environments) you need to relax a setting, you can use the following code:

var client = new DiscoveryClient("http://dev.identityserver.internal");
client.Policy.RequireHttps = false;
 
var disco = await client.GetAsync();

Btw – you can always connect over HTTP to localhost and 127.0.0.1 (but this is also configurable).

Source code here, nuget here.


Filed under: OAuth, OpenID Connect, WebAPI


Dominick Baier: Trying IdentityServer4

We have a number of options for how you can experiment or get started with IdentityServer4.

Starting point
It all starts at https://identityserver.io – from here you can find all the links below as well as our next workshop dates, consulting, production support etc.

Source code
You can find all the source code in our IdentityServer organization on github. Especially IdentityServer4 itself, the samples, and the access token validation middleware.

Nuget
Here’s a list of all our nugets – here’s IdentityServer4, here’s the validation middleware.

Documentation and tutorials
Documentation can be found here. Especially useful to get started are our tutorials.

Demo Site
We have a demo site at https://demo.identityserver.io that runs the latest version of IdentityServer4. We have also pre-configured a number of client types, e.g. hybrid and authorization code (with and without PKCE) as well as implicit and client credentials flow. You can use this site to try IdentityServer with your favourite OpenID Connect client library. There is also a test API that you can call with our access tokens.
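For example, using the IdentityModel helper classes, a client credentials request against the demo site looks roughly like this (the client id, secret and scope below are placeholders – the actual test values are documented on the demo site):

var disco = await DiscoveryClient.GetAsync("https://demo.identityserver.io");
var tokenClient = new TokenClient(disco.TokenEndpoint, "client", "secret");
var response = await tokenClient.RequestClientCredentialsAsync("api");
Console.WriteLine(response.AccessToken);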

Compatibility check
Here’s a repo that contains all permutations of IdentityServer3 and 4, Katana and ASP.NET Core Web APIs and JWTs and reference tokens. We use this test harness to ensure cross version compatibility. Feel free to try it yourself.

CI builds
Our CI feed can be found here.

HTH


Filed under: .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI


Andrew Lock: Understanding and updating package versions for .NET Core 1.0.3


Microsoft introduced the second update to their Long Term Support (LTS) version of .NET Core on 13th December, 3 months after releasing the first update to the platform. This included updates to .NET Core, ASP.NET Core and Entity Framework Core, and takes the overall version number to 1.0.3, though this number can be confusing, as you'll see shortly! You can read about the update in their blog post - I'm going to focus primarily on the ASP.NET Core changes here.

Understanding the version numbers

The first thing to take in with this update is that it is only for the LTS track. .NET Core and ASP.NET Core follow releases in two different tracks: the safer, more stable LTS version; and the alternative Fast Track Support (FTS), which sees new features at a higher rate.

Depending on your requirements for stability and the need for new features, you can stick to either the FTS or LTS track - both are supported. The important thing is that you make sure your whole application sticks to one or the other. You can't use some packages from the LTS track and some from the FTS track.

As of writing, the LTS version is at 1.0.3, which follows version numbers of the format 1.0.x. This, as expected, implies it will only see patch/bug fixes. In contrast, the FTS version is currently at 1.1.0, which brings a number of additional features over the LTS branch. You can read more about the versioning story on the .NET blog.

Is this the second or third LTS update?

You may have noticed I said that this was the second update to the LTS track, even though we're up to update 1.0.3. That's because the .NET Core 1.0.2 update didn't actually change any code; it simply fixed an issue in the installer on macOS. So although the version number was bumped, there weren't actually any noticeable changes.

Package numbers don't match the ASP.NET Core version

This is where things start to get confusing.

ASP.NET Core is composed of a whole host of loosely coupled packages which can be added to your application to provide various features, as and when you need them. If you don't need a feature, you don't add it to your project. This contrasts with the previous model of ASP.NET in which you always had access to all of the features. It was more of a set-meal approach rather than the à la carte buffet approach of ASP.NET Core.

Each of these packages that make up ASP.NET Core - packages such as Microsoft.AspNetCore.Mvc, Microsoft.Extensions.Configuration.Abstractions, and Microsoft.AspNetCore.Diagnostics - follow semantic versioning. They version independently of one another, and of the framework as a whole.

ASP.NET Core has an overall version number, which for the LTS track is 1.0.3. However, just because the overall ASP.NET Core version has incremented, that doesn't mean that the underlying packages of which it is composed have necessarily changed. If a package has not changed, there is no sense in updating its version number, even though a new version of ASP.NET Core is being released.

Updating your project

Although Microsoft have taken a perfectly reasonable approach to this in theory, the reality of trying to keep up with these version changes is somewhat bewildering.

In order to stay supported, you have to ensure all your packages stay on the latest version of the LTS (or FTS) track of ASP.NET Core. But there isn't anywhere that actually lists out all the supported packages for a given overall version of ASP.NET Core, or provides an easy way to update all the packages in your project to the latest on the LTS track. And it's not easy to know what they should be - some packages may be on version 1.0.2, others 1.0.1 and some may still be 1.0.0. It's very hard to tell whether your project.json (or csproj) is all up-to-date.

In a recent post, Steve Gordon ran into exactly this problem when updating the allReady project to 1.0.3. He found he had to go through the NuGet Package Manager GUI in Visual Studio and update each of his dependencies independently. He couldn't use the 'Update All' button, as this would update to the latest in the FTS track. Hopefully his suggestion of a toggle for selecting which track you wish to stick to will be implemented in VS2017!

As part of his post, he lists all the dependencies he had to update in his project.json in making the move. You also have to ensure you install the latest SDK from https://dot.net and update your global.json accordingly.

Steve lists a whole host of packages to update, but I wanted to try and provide a more comprehensive list, so I decided to take a look through each of the ASP.NET Core repos, and fetch the latest version of the packages for the LTS update.

Latest versions

The latest version of ASP.NET Core packages for version 1.0.3 are listed below. This list attempts to be exhaustive for the core packages in the Microsoft ASP.NET Core repos in GitHub. It's quite possible I've missed some out though - if so, let me know in the comments!

Note that not all of these packages will have changed in the 1.0.3 release (though it seems like most have); these are just the latest packages that it uses.

  "Microsoft.ApplicationInsights.AspNetCore" : "1.0.2",
  "Microsoft.AspNet.Identity.AspNetCoreCompat" : "0.1.1",
  "Microsoft.AspNet.WebApi.Client" : "5.2.2",
  "Microsoft.AspNetCore.Antiforgery" : "1.0.2",
  "Microsoft.AspNetCore.Authentication" : "1.0.1",
  "Microsoft.AspNetCore.Authentication.Cookies" : "1.0.1",
  "Microsoft.AspNetCore.Authentication.Facebook" : "1.0.1",
  "Microsoft.AspNetCore.Authentication.Google" : "1.0.1",
  "Microsoft.AspNetCore.Authentication.JwtBearer" : "1.0.1",
  "Microsoft.AspNetCore.Authentication.MicrosoftAccount" : "1.0.1",
  "Microsoft.AspNetCore.Authentication.OAuth" : "1.0.1",
  "Microsoft.AspNetCore.Authentication.OpenIdConnect" : "1.0.1",
  "Microsoft.AspNetCore.Authentication.Twitter" : "1.0.1",
  "Microsoft.AspNetCore.Authorization" : "1.0.1",
  "Microsoft.AspNetCore.Buffering" : "0.1.1",
  "Microsoft.AspNetCore.CookiePolicy" : "1.0.1",
  "Microsoft.AspNetCore.Cors" : "1.0.1",
  "Microsoft.AspNetCore.Cryptography.Internal" : "1.0.1",
  "Microsoft.AspNetCore.Cryptography.KeyDerivation" : "1.0.1",
  "Microsoft.AspNetCore.DataProtection" : "1.0.1",
  "Microsoft.AspNetCore.DataProtection.Abstractions" : "1.0.1",
  "Microsoft.AspNetCore.DataProtection.Extensions" : "1.0.1",
  "Microsoft.AspNetCore.DataProtection.SystemWeb" : "1.0.1",
  "Microsoft.AspNetCore.Diagnostics" : "1.0.1",
  "Microsoft.AspNetCore.Diagnostics.Abstractions" : "1.0.1",
  "Microsoft.AspNetCore.Diagnostics.Elm" : "0.1.1",
  "Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore" : "1.0.1",
  "Microsoft.AspNetCore.Hosting" : "1.0.1",
  "Microsoft.AspNetCore.Hosting.Abstractions" : "1.0.1",
  "Microsoft.AspNetCore.Hosting.Server.Abstractions" : "1.0.1",
  "Microsoft.AspNetCore.Hosting.WindowsServices" : "1.0.1",
  "Microsoft.AspNetCore.Html.Abstractions" : "1.0.1",
  "Microsoft.AspNetCore.Http" : "1.0.1",
  "Microsoft.AspNetCore.Http.Abstractions" : "1.0.1",
  "Microsoft.AspNetCore.Http.Extensions" : "1.0.1",
  "Microsoft.AspNetCore.Http.Features" : "1.0.1",
  "Microsoft.AspNetCore.HttpOverrides" : "1.0.1",
  "Microsoft.AspNetCore.Identity" : "1.0.1",
  "Microsoft.AspNetCore.Identity.EntityFrameworkCore" : "1.0.1",
  "Microsoft.AspNetCore.JsonPatch" : "1.0.0",
  "Microsoft.AspNetCore.Localization" : "1.0.1",
  "Microsoft.AspNetCore.MiddlewareAnalysis" : "1.0.1",
  "Microsoft.AspNetCore.Mvc" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.Abstractions" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.ApiExplorer" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.Core" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.Cors" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.DataAnnotations" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.Formatters.Json" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.Formatters.Xml" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.Localization" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.Razor" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.Razor.Host" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.TagHelpers" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.ViewFeatures" : "1.0.2",
  "Microsoft.AspNetCore.Mvc.WebApiCompatShim" : "1.0.2",
  "Microsoft.AspNetCore.Owin" : "1.0.1",
  "Microsoft.AspNetCore.Razor.Runtime" : "1.0.1",
  "Microsoft.AspNetCore.Routing" : "1.0.2",
  "Microsoft.AspNetCore.Routing.Abstractions" : "1.0.2",
  "Microsoft.AspNetCore.Server.IISIntegration" : "1.0.1",
  "Microsoft.AspNetCore.Server.IISIntegration.Tools" : "1.0.0-preview4-final",
  "Microsoft.AspNetCore.Server.Kestrel" : "1.0.2",
  "Microsoft.AspNetCore.Server.Kestrel.Https" : "1.0.2",
  "Microsoft.AspNetCore.Server.Testing" : "0.1.1",
  "Microsoft.AspNetCore.StaticFiles" : "1.0.1",
  "Microsoft.AspNetCore.TestHost" : "1.0.1",
  "Microsoft.AspNetCore.Testing" : "1.0.1",
  "Microsoft.AspNetCore.WebUtilities" : "1.0.1",
  "Microsoft.CodeAnalysis.CSharp" : "1.3.0",
  "Microsoft.DotNet.Watcher.Core" : "1.0.0-preview4-final",
  "Microsoft.DotNet.Watcher.Tools" : "1.0.0-preview4-final",
  "Microsoft.EntityFrameworkCore" : "1.0.2",
  "Microsoft.EntityFrameworkCore.Design" : "1.0.2",
  "Microsoft.EntityFrameworkCore.InMemory" : "1.0.2",
  "Microsoft.EntityFrameworkCore.Relational" : "1.0.2",
  "Microsoft.EntityFrameworkCore.Relational.Design" : "1.0.2",
  "Microsoft.EntityFrameworkCore.Relational.Design.Specification.Tests" : "1.0.2",
  "Microsoft.EntityFrameworkCore.Relational.Specification.Tests" : "1.0.2",
  "Microsoft.EntityFrameworkCore.Specification.Tests" : "1.0.2",
  "Microsoft.EntityFrameworkCore.SqlServer" : "1.0.2",
  "Microsoft.EntityFrameworkCore.SqlServer.Design" : "1.0.2",
  "Microsoft.EntityFrameworkCore.Sqlite" : "1.0.2",
  "Microsoft.EntityFrameworkCore.Sqlite.Design" : "1.0.2",
  "Microsoft.Extensions.Caching.Abstractions" : "1.0.1",
  "Microsoft.Extensions.Caching.Memory" : "1.0.1",
  "Microsoft.Extensions.Caching.Redis" : "1.0.1",
  "Microsoft.Extensions.Caching.SqlConfig.Tools" : "1.0.0-preview4-final",
  "Microsoft.Extensions.Caching.SqlServer" : "1.0.1",
  "Microsoft.Extensions.CommandLineUtils" : "1.0.1",
  "Microsoft.Extensions.Configuration" : "1.0.1",
  "Microsoft.Extensions.Configuration.Abstractions" : "1.0.1",
  "Microsoft.Extensions.Configuration.Binder" : "1.0.1",
  "Microsoft.Extensions.Configuration.CommandLine" : "1.0.1",
  "Microsoft.Extensions.Configuration.EnvironmentVariables" : "1.0.1",
  "Microsoft.Extensions.Configuration.FileExtensions" : "1.0.1",
  "Microsoft.Extensions.Configuration.Ini" : "1.0.1",
  "Microsoft.Extensions.Configuration.Json" : "1.0.1",
  "Microsoft.Extensions.Configuration.UserSecrets" : "1.0.1",
  "Microsoft.Extensions.Configuration.Xml" : "1.0.1",
  "Microsoft.Extensions.DependencyInjection" : "1.0.1",
  "Microsoft.Extensions.DependencyInjection.Abstractions" : "1.0.1",
  "Microsoft.Extensions.DependencyInjection.Specification.Tests" : "1.0.1",
  "Microsoft.Extensions.DependencyModel" : "1.0.0",
  "Microsoft.Extensions.DiagnosticAdapter": "1.0.1",
  "Microsoft.Extensions.FileProviders.Abstractions" : "1.0.1",
  "Microsoft.Extensions.FileProviders.Composite" : "1.0.1",
  "Microsoft.Extensions.FileProviders.Embedded" : "1.0.1",
  "Microsoft.Extensions.FileProviders.Physical" : "1.0.1",
  "Microsoft.Extensions.FileSystemGlobbing" : "1.0.1",
  "Microsoft.Extensions.Globalization.CultureInfoCache" : "1.0.1",
  "Microsoft.Extensions.Localization" : "1.0.1",
  "Microsoft.Extensions.Localization.Abstractions" : "1.0.1",
  "Microsoft.Extensions.Logging" : "1.0.1",
  "Microsoft.Extensions.Logging.Abstractions" : "1.0.1",
  "Microsoft.Extensions.Logging.Console" : "1.0.1",
  "Microsoft.Extensions.Logging.Debug" : "1.0.1",
  "Microsoft.Extensions.Logging.EventLog" : "1.0.1",
  "Microsoft.Extensions.Logging.Filter" : "1.0.1",
  "Microsoft.Extensions.Logging.Testing" : "1.0.1",
  "Microsoft.Extensions.Logging.TraceSource" : "1.0.1",
  "Microsoft.Extensions.ObjectPool" : "1.0.1",
  "Microsoft.Extensions.Options" : "1.0.1",
  "Microsoft.Extensions.Options.ConfigurationExtensions" : "1.0.1",
  "Microsoft.Extensions.PlatformAbstractions" : "1.0.0",
  "Microsoft.Extensions.Primitives" : "1.0.1",
  "Microsoft.Extensions.SecretManager.Tools" : "1.0.0-preview4-final",
  "Microsoft.Extensions.WebEncoders" : "1.0.1",
  "Microsoft.IdentityModel.Protocols.OpenIdConnect" : "2.0.0",
  "Microsoft.Net.Http.Headers" : "1.0.1",
  "Microsoft.VisualStudio.Web.BrowserLink.Loader" : "14.0.1"

Hopefully someone will find this useful when trying to work out which *&^#$% package they need to update!


Damien Bowden: Building production ready Angular apps with Visual Studio and ASP.NET Core

This article shows how Angular SPA apps can be built using Visual Studio and ASP.NET Core in a way that can be used in production. Lots of articles, blogs and templates exist for ASP.NET Core and Angular, but very few support Angular production builds.

Although Angular 2 is not so old, many different seeds and build templates already exist, so care should be taken when choosing the infrastructure for the Angular application. Any Angular template or seed which does not support AoT or treeshaking should NOT be used, and likewise any third party Angular component which does not support AoT should not be used.

This example uses webpack 2 to build and bundle the Angular application. In the package.json, npm scripts are used to configure the different builds and can be used inside Visual Studio using the npm task runner.

Code: Visual Studio 2015 project | Visual Studio 2017 project

Blogs in this series:

2017.01.18: Updated to webpack 2.2.0
2017.01.14: Added lazy loading, updated to Angular 2.4.3 and webpack 2.2.0-rc.4

Short introduction to AoT and treeshaking

AoT

AoT stands for Ahead of Time compilation. As per the definition from the Angular docs:

“With AOT, the browser downloads a pre-compiled version of the application. The browser loads executable code so it can render the application immediately, without waiting to compile the app first.”

With AoT, you have smaller packages sizes, fewer asynchronous requests and better security. All is explained very well in the Angular Docs:

https://angular.io/docs/ts/latest/cookbook/aot-compiler.html

AoT uses platformBrowser to bootstrap, and not platformBrowserDynamic, which is used for JIT (Just in Time) compilation.

// Entry point for AoT compilation.
export * from './polyfills';

import { platformBrowser } from '@angular/platform-browser';
import { enableProdMode } from '@angular/core';
import { AppModuleNgFactory } from '../aot/angular2App/app/app.module.ngfactory';

enableProdMode();

platformBrowser().bootstrapModuleFactory(AppModuleNgFactory);

treeshaking

Treeshaking removes the unused portions of the libraries from the application, reducing the size of the application.

https://angular.io/docs/ts/latest/cookbook/aot-compiler.html

npm task runner

npm scripts can be used easily inside Visual Studio by using the npm task runner. Once installed, this needs to be configured correctly.

VS2015: Go to Tools –> Options –> Projects and Solutions –> External Web Tools and select all the checkboxes. More information can be found here.

In VS2017, this is slightly different:

Go to Tools –> Options –> Projects and Solutions –> Web Package Management –> External Web Tools and select all checkboxes:

vs_angular_build_01

npm scripts

ngc

ngc is the angular compiler which is used to do the AoT build using the tsconfig-aot.json configuration.

"ngc": "ngc -p ./tsconfig-aot.json",

The tsconfig-aot.json file builds to the aot folder.

{
  "compilerOptions": {
    "target": "es5",
    "module": "es2015",
    "moduleResolution": "node",
    "sourceMap": false,
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "removeComments": true,
    "noImplicitAny": true,
    "suppressImplicitAnyIndexErrors": true,
    "skipLibCheck": true,
    "lib": [
      "es2015",
      "dom"
    ]
  },
  "files": [
    "angular2App/app/app.module.ts",
    "angular2App/app/modules/about/about.module.ts",
    "angular2App/main-aot.ts"
  ],
  "angularCompilerOptions": {
    "genDir": "aot",
    "skipMetadataEmit": true
  },
  "compileOnSave": false,
  "buildOnSave": false
}

build-production

The build-production npm script is used for the production build and can be used for the publish or the CI as required. The npm script uses the ngc script and the webpack-production build.

"build-production": "npm run ngc && npm run webpack-production",

webpack-production npm script:

"webpack-production": "set NODE_ENV=production&& webpack",

watch-webpack-dev

The watch build monitors the source files and builds if any file changes.

"watch-webpack-dev": "set NODE_ENV=development&& webpack --watch --color",

start (webpack-dev-server)

The start script runs the webpack-dev-server client application and also the ASP.NET Core server application.

"start": "concurrently \"webpack-dev-server --inline --progress --port 8080\" \"dotnet run\" ",

Any of these npm scripts can be run from the npm task runner.

vs_angular_build_02

Deployment

When deploying the application to IIS, build-production needs to be run first, then the dotnet publish command, and then the contents can be copied to the IIS server. The publish-for-iis npm script can be used to publish. The command can also be started from a build server without problem.

"publish-for-iis": "npm run build-production && dotnet publish -c Release" 

https://docs.microsoft.com/en-us/dotnet/articles/core/tools/dotnet-publish

When deploying to IIS, you need to install DotNetCore.1.1.0-WindowsHosting.exe on the IIS server. The docs for setting up the IIS server:

https://docs.microsoft.com/en-us/aspnet/core/publishing/iis

Why not webpack task runner?

The Webpack task runner cannot be used for Webpack Angular applications because it does not support the required commands for Angular Webpack builds, either dev or production. The webpack -d build causes map errors in IE, and the ngc compiler cannot be used, hence no production builds can be started from the Webpack Task Runner. For Angular Webpack projects, do not use the Webpack Task Runner; use the npm task runner instead.

Full package.json

{
  "name": "angular2-webpack-visualstudio",
  "version": "1.0.0",
  "description": "",
  "main": "wwwroot/index.html",
  "author": "",
  "license": "ISC",
  "scripts": {
    "ngc": "ngc -p ./tsconfig-aot.json",
    "start": "concurrently \"webpack-dev-server --inline --progress --port 8080\" \"dotnet run\" ",
    "webpack-dev": "set NODE_ENV=development && webpack",
    "webpack-production": "set NODE_ENV=production && webpack",
    "build-dev": "npm run webpack-dev",
    "build-production": "npm run ngc && npm run webpack-production",
    "watch-webpack-dev": "set NODE_ENV=development && webpack --watch --color",
    "watch-webpack-production": "npm run build-production --watch --color",
    "publish-for-iis": "npm run build-production && dotnet publish -c Release"
  },
  "dependencies": {
    "@angular/common": "~2.4.3",
    "@angular/compiler": "~2.4.3",
    "@angular/core": "~2.4.3",
    "@angular/forms": "~2.4.3",
    "@angular/http": "~2.4.3",
    "@angular/platform-browser": "~2.4.3",
    "@angular/platform-browser-dynamic": "~2.4.3",
    "@angular/router": "~3.4.1",
    "@angular/upgrade": "~2.4.3",
    "angular-in-memory-web-api": "0.2.4",
    "core-js": "2.4.1",
    "reflect-metadata": "0.1.9",
    "rxjs": "5.0.3",
    "zone.js": "0.7.5",
    "@angular/compiler-cli": "~2.4.3",
    "@angular/platform-server": "~2.4.3",
    "bootstrap": "^3.3.7",
    "ie-shim": "~0.1.0"
  },
  "devDependencies": {
    "@types/node": "7.0.0",
    "angular2-template-loader": "^0.6.0",
    "angular-router-loader": "^0.5.0",
    "awesome-typescript-loader": "^2.2.4",
    "clean-webpack-plugin": "^0.1.15",
    "concurrently": "^3.1.0",
    "copy-webpack-plugin": "^4.0.1",
    "css-loader": "^0.26.1",
    "file-loader": "^0.9.0",
    "html-webpack-plugin": "^2.26.0",
    "jquery": "^2.2.0",
    "json-loader": "^0.5.4",
    "node-sass": "^4.3.0",
    "raw-loader": "^0.5.1",
    "rimraf": "^2.5.4",
    "sass-loader": "^4.1.1",
    "source-map-loader": "^0.1.6",
    "style-loader": "^0.13.1",
    "ts-helpers": "^1.1.2",
    "tslint": "^4.3.1",
    "tslint-loader": "^3.3.0",
    "typescript": "2.0.3",
    "url-loader": "^0.5.7",
    "webpack": "^2.2.0",
    "webpack-dev-server": "^1.16.2"
  },
  "-vs-binding": {
    "ProjectOpened": [
      "watch-webpack-dev"
    ]
  }
}

Full webpack.prod.js

var path = require('path');

var webpack = require('webpack');

var HtmlWebpackPlugin = require('html-webpack-plugin');
var CopyWebpackPlugin = require('copy-webpack-plugin');
var CleanWebpackPlugin = require('clean-webpack-plugin');
var helpers = require('./webpack.helpers');

console.log('@@@@@@@@@ USING PRODUCTION @@@@@@@@@@@@@@@');

module.exports = {

    entry: {
        'vendor': './angular2App/vendor.ts',
        'polyfills': './angular2App/polyfills.ts',
        'app': './angular2App/main-aot.ts' // AoT compilation
    },

    output: {
        path: './wwwroot/',
        filename: 'dist/[name].[hash].bundle.js',
        chunkFilename: 'dist/[id].[hash].chunk.js',
        publicPath: '/'
    },

    resolve: {
        extensions: ['.ts', '.js', '.json', '.css', '.scss', '.html']
    },

    devServer: {
        historyApiFallback: true,
        stats: 'minimal',
        outputPath: path.join(__dirname, 'wwwroot/')
    },

    module: {
        rules: [
            {
                test: /\.ts$/,
                loaders: [
                    'awesome-typescript-loader',
                    'angular-router-loader?aot=true&genDir=aot/'
                ]
            },
            {
                test: /\.(png|jpg|gif|woff|woff2|ttf|svg|eot)$/,
                loader: 'file-loader?name=assets/[name]-[hash:6].[ext]'
            },
            {
                test: /favicon.ico$/,
                loader: 'file-loader?name=/[name].[ext]'
            },
            {
                test: /\.css$/,
                loader: 'style-loader!css-loader'
            },
            {
                test: /\.scss$/,
                exclude: /node_modules/,
                loaders: ['style-loader', 'css-loader', 'sass-loader']
            },
            {
                test: /\.html$/,
                loader: 'raw-loader'
            }
        ],
        exprContextCritical: false
    },

    plugins: [
        new CleanWebpackPlugin(
            [
                './wwwroot/dist',
                './wwwroot/assets'
            ]
        ),
        new webpack.NoEmitOnErrorsPlugin(),
        new webpack.optimize.UglifyJsPlugin({
            compress: {
                warnings: false
            },
            output: {
                comments: false
            },
            sourceMap: false
        }),
        new webpack.optimize.CommonsChunkPlugin(
            {
                name: ['vendor', 'polyfills']
            }),

        new HtmlWebpackPlugin({
            filename: 'index.html',
            inject: 'body',
            template: 'angular2App/index.html'
        }),

        new CopyWebpackPlugin([
            { from: './angular2App/images/*.*', to: 'assets/', flatten: true }
        ])
    ]
};

Links:

https://damienbod.com/2016/06/12/asp-net-core-angular2-with-webpack-and-visual-studio/

https://github.com/preboot/angular2-webpack

https://webpack.github.io/docs/

https://github.com/jtangelder/sass-loader

https://github.com/petehunt/webpack-howto/blob/master/README.md

https://blogs.msdn.microsoft.com/webdev/2015/03/19/customize-external-web-tools-in-visual-studio-2015/

https://marketplace.visualstudio.com/items?itemName=MadsKristensen.NPMTaskRunner

http://sass-lang.com/

http://blog.thoughtram.io/angular/2016/06/08/component-relative-paths-in-angular-2.html

https://angular.io/docs/ts/latest/guide/webpack.html

http://blog.mgechev.com/2016/06/26/tree-shaking-angular2-production-build-rollup-javascript/

https://angular.io/docs/ts/latest/tutorial/toh-pt5.html

http://angularjs.blogspot.ch/2016/06/improvements-coming-for-routing-in.html?platform=hootsuite

https://angular.io/docs/ts/latest/cookbook/aot-compiler.html

https://docs.microsoft.com/en-us/aspnet/core/publishing/iis

https://weblog.west-wind.com/posts/2016/Jun/06/Publishing-and-Running-ASPNET-Core-Applications-with-IIS

http://blog.mgechev.com/2017/01/17/angular-in-production/



Anuraj Parameswaran: Building Dotnet with Gitlab CI

This post is about enabling Continuous Integration of .NET projects in GitLab. The majority of GitLab's CI examples are around open source technologies. In this post I will explain how to implement CI in ASP.NET and ASP.NET Core projects with GitLab.

Building ASP.NET project with Gitlab

In GitLab CI, Runners run your YAML. A runner is an isolated (virtual) machine that picks up builds through the coordinator API of GitLab CI. So for building ASP.NET you first need to create a runner on your Windows system. You can find the install and configuration details in the GitLab wiki.

Once the installation is complete, you can find the GitLab runner in your services.

GitLab service running on the windows system

You can find the runner in the project runners section.

GitLab service running on the windows system

Now, to enable continuous integration, you need to add a .gitlab-ci.yml file. Here is a sample .gitlab-ci.yml which will restore the packages and build the solution.

stages:
    - build
before_script:
  - 'C:\Nuget\nuget.exe restore MVCApp.sln'
job:
    stage: build
    script: '"C:\Program Files (x86)\MSBuild\14.0\Bin\MSBuild.exe" "MVCApp.sln"'

Make sure nuget.exe is installed in the C:\Nuget folder, and that MSBuild.exe (VS2015) is installed in the Program Files folder. Once you commit any changes, the build will be triggered; here are the details of a completed build.

GitLab pipeline - Build successful

You can include MSTest or NUnit runs as part of the yml file, as shown in the sketch below.
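
For example, a test stage could be added along these lines (a sketch only; the MSTest.exe path and the test assembly name are assumptions for a typical VS2015 installation):

stages:
    - build
    - test
before_script:
  - 'C:\Nuget\nuget.exe restore MVCApp.sln'
build_job:
    stage: build
    script: '"C:\Program Files (x86)\MSBuild\14.0\Bin\MSBuild.exe" "MVCApp.sln"'
test_job:
    stage: test
    script: '"C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\MSTest.exe" /testcontainer:MVCApp.Tests\bin\Debug\MVCApp.Tests.dll'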

Building ASP.NET Core project with Gitlab

You can use a similar approach for ASP.NET Core projects as well. GitLab also supports Docker based CI. So here is the .gitlab-ci.yml file for building ASP.NET Core projects.

image : microsoft/dotnet:latest
stages:
  - build
before_script:
  - 'dotnet restore'
build:
 stage: build
 script:
  - 'dotnet build'
 only:
   - master

This file will download the microsoft/dotnet image. Before executing the build script, GitLab will execute the dotnet restore command, and in the build stage, GitLab will execute the dotnet build command.

Happy Programming :)


Anuraj Parameswaran: Compile your ASP.NET Core MVC Views

This post is about compiling your ASP.NET Core MVC views. Normally in ASP.NET MVC, the views are not compiled until they are requested by the browser. To avoid this you can precompile the views. Precompiling also helps to identify any errors upfront rather than at runtime.

In previous versions of ASP.NET MVC, you could do this by adding / setting <MvcBuildViews>true</MvcBuildViews> in your csproj file.
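
For reference, in the old csproj format that setting looked something like this (a minimal sketch):

<PropertyGroup>
  <MvcBuildViews>true</MvcBuildViews>
</PropertyGroup>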

In ASP.NET Core 1.1, Microsoft introduced precompile tooling. You can use it by adding ViewCompilation.Tools in the tools section, and you can either run it manually or as part of a postpublish script.

Here is the project.json file. (Included required references only)

{
  "dependencies": {
    "Microsoft.NETCore.App": {
      "version": "1.1.0",
      "type": "platform"
    },
    "Microsoft.AspNetCore.Mvc": "1.1.0-preview1-final",
    "Microsoft.AspNetCore.Razor.Tools": {
      "version": "1.1.0-preview4-final",
      "type": "build"
    },
    "Microsoft.AspNetCore.Mvc.Razor.ViewCompilation.Design": {
      "version": "1.1.0-preview4-final",
      "type": "build"
    }
  },
  "tools": {
    "Microsoft.AspNetCore.Mvc.Razor.ViewCompilation.Tools": {
      "version": "1.1.0-preview4-final"
    }
  }
}

And here is the postpublish script section.

"scripts": {
  "prepublish": [
    "npm install",
    "bower install",
    "gulp clean",
    "gulp min"
  ],
  "postpublish": [
    "dotnet razor-precompile --configuration %publish:Configuration% --framework %publish:TargetFramework% --output-path %publish:OutputPath% %publish:ProjectPath%",
    "dotnet publish-iis --publish-folder %publish:OutputPath% --framework %publish:FullTargetFramework%"
  ]
}

Now you can run the dotnet publish command, which will compile the code and views. Once it succeeds, you can remove the Views folder, or you can use the exclude option. In previous versions of ASP.NET the views should not be removed, as they acted as placeholders, but in ASP.NET Core the views are compiled into a DLL.

dotnet publish - Precompile views

You can test the application using dotnet <dllname>, which will host the application in the Kestrel HTTP server.

Happy Programming :)


Damien Bowden: Creating an ASP.NET Core 1.1 VS2017 Docker application

This blog shows how to set up a basic ASP.NET Core 1.1 application using Visual Studio 2017 and Docker.

Code: https://github.com/damienbod/AspNetCoreVS2017Docker

This article from Swaminathan Vetri demonstrates how to set up everything for an ASP.NET Core 1.0 application.

Now the application needs to be updated to ASP.NET Core 1.1. Open the csproj file and update the PackageReference elements and also the TargetFramework to the 1.1 packages and target. At present, there is no tooling help when updating the packages, like the project.json file had, so you need to know exactly which packages to use when updating directly.

<Project ToolsVersion="15.0" Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.1</TargetFramework>
    <PreserveCompilationContext>true</PreserveCompilationContext>
  </PropertyGroup>
  <ItemGroup>
    <Folder Include="wwwroot\" />
  </ItemGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.NETCore.App" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Routing" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Server.IISIntegration" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Server.Kestrel" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.EnvironmentVariables" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.FileExtensions" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Logging" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Logging.Console" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.1.0" />
    <PackageReference Include="Microsoft.Extensions.Options.ConfigurationExtensions" Version="1.1.0" />
  </ItemGroup>
  <ItemGroup>
    <Content Update=".dockerignore">
      <DependentUpon>Dockerfile</DependentUpon>
    </Content>
    <Content Update="docker-compose.override.yml">
      <DependentUpon>docker-compose.yml</DependentUpon>
    </Content>
    <Content Update="docker-compose.vs.debug.yml">
      <DependentUpon>docker-compose.yml</DependentUpon>
    </Content>
    <Content Update="docker-compose.vs.release.yml">
      <DependentUpon>docker-compose.yml</DependentUpon>
    </Content>
  </ItemGroup>
</Project>

Then update the docker-compose.ci.build.yml file. You need to select the required image which can be found on Docker Hub. The microsoft/aspnetcore-build:1.1.0-msbuild image is used here.

version: '2'

services:
  ci-build:
    image: microsoft/aspnetcore-build:1.1.0-msbuild
    volumes:
      - .:/src
    working_dir: /src
    command: /bin/bash -c "dotnet restore && dotnet publish -c Release -o ./bin/Release/PublishOutput"

Now update the Dockerfile to target the 1.1.0 version.

FROM microsoft/aspnetcore:1.1.0
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-bin/Release/PublishOutput} .
ENTRYPOINT ["dotnet", "AspNetCoreVS2017Docker.dll"]

Start the application using Docker. You can debug the application which is running inside Docker, all in Visual Studio 2017. Very neat.

vs2017_docker_basic_1

Notes:

When setting up Docker, you need to share the C drive in Docker.

Links:

Debugging Asp.Net core apps running in Docker Containers using VS 2017

https://www.sesispla.net/blog/language/en/2016/05/running-asp-net-core-1-0-rc2-in-docker/

https://hub.docker.com/r/microsoft/aspnetcore-build/tags/

https://blogs.msdn.microsoft.com/webdev/2016/11/16/new-docker-tools-for-visual-studio/

https://blogs.msdn.microsoft.com/stevelasker/2016/06/14/configuring-docker-for-windows-volumes/



Dominick Baier: IdentityServer4.1.0.0

It’s done.

Release notes here.

Nuget here.

Docs here.

I am off to holidays.

See you next year.


Filed under: .NET Security, ASP.NET, OAuth, OpenID Connect, WebAPI


Andrew Lock: Redirecting unknown cultures when using the url culture provider

Redirecting unknown cultures when using the url culture provider

This is the next in a series of posts on using the middleware as filters feature of ASP.NET Core 1.1 to add a url culture provider to your application. In this post I show how to handle the case where a user requests a culture that does not exist, or that we do not support, by redirecting to a URL with a supported culture.

The current series of posts is given below:

By working through each of these posts we are slowly building a full system for having a usable url culture provider. We now have globally defined routing conventions that ensure our urls are prefixed with a culture like en-GB or fr-FR. In the last post we added a culture constraint and catch-all routes to ensure that requests to a culture-less url like Home/Index/ are redirected to a cultured one, like en-GB/Home/Index.

One of the remaining holes in our current implementation is handling the case when users request a URL for a culture that does not exist, or that we do not support. In the example below, we do not support Spanish in the application, so the request localisation is set to the default culture en-GB:

Redirecting unknown cultures when using the url culture provider

This is fine from the application's point of view, but it is not great for the user. It looks to the user as though we support Spanish, as we have a Spanish culture url, but all the text will be in English. A potentially better approach would be to redirect the user to a URL with the culture that is actually being used. This also helps reduce the number of pages which are essentially equivalent, which is good for SEO.

Handling redirects in middleware as filters

The technique I'm going to use involves adding an additional piece of middleware to our middleware-as-filters pipeline. If you're not comfortable with how this works I suggest checking out the earlier posts in this series.

This middleware checks the culture that has been applied to the current request to see if it matches the value that was requested via the routing {culture} value. If the values match (ignoring case differences), the middleware just moves on to the next middleware in the pipeline and nothing else happens.

If the requested and actual cultures are different, then the middleware short-circuits the request, sending a redirect to the same URL but with the correct culture. Middleware-as-filters run as ResourceFilters, so they can bypass the action method completely, as in this case.

That is the high level approach, now onto the code. Brace yourself, there's quite a lot, which I'll walk through afterwards.

public class RedirectUnsupportedCulturesMiddleware  
{
    private readonly RequestDelegate _next;
    private readonly string _routeDataStringKey;

    public RedirectUnsupportedCulturesMiddleware(
        RequestDelegate next,
        RequestLocalizationOptions options)
    {
        _next = next;
        var provider = options.RequestCultureProviders
            .Select(x => x as RouteDataRequestCultureProvider)
            .Where(x => x != null)
            .FirstOrDefault();
        _routeDataStringKey = provider.RouteDataStringKey;
    }

    public async Task Invoke(HttpContext context)
    {
        var requestedCulture = context.GetRouteValue(_routeDataStringKey)?.ToString();
        var cultureFeature = context.Features.Get<IRequestCultureFeature>();

        var actualCulture = cultureFeature?.RequestCulture.Culture.Name;

        if (string.IsNullOrEmpty(requestedCulture) ||
            !string.Equals(requestedCulture, actualCulture, StringComparison.OrdinalIgnoreCase))
        {
            var newCulturedPath = GetNewPath(context, actualCulture);
            context.Response.Redirect(newCulturedPath);
            return;
        }

        await _next.Invoke(context);
    }

    private string GetNewPath(HttpContext context, string newCulture)
    {
        var routeData = context.GetRouteData();
        var router = routeData.Routers[0];
        var virtualPathContext = new VirtualPathContext(
            context,
            routeData.Values,
            new RouteValueDictionary { { _routeDataStringKey, newCulture } });

        return router.GetVirtualPath(virtualPathContext).VirtualPath;
    }
}

Breaking down the code

This is a standard piece of ASP.NET Core middleware, so our constructor takes a RequestDelegate which it calls in order to invoke the next middleware in the pipeline.

Our middleware also takes in an instance of RequestLocalizationOptions. It uses this to attempt to determine how the RouteDataRequestCultureProvider has been configured. In particular we need the RouteDataStringKey which represents culture in our URLs. By default it is "culture", but this approach would pick up any changes too.

Note that we assume that we will always have a RouteDataRequestCultureProvider here. That sort of makes sense, as redirecting to a different URL based on culture only makes sense if we are taking the culture from the URL!

We have implemented the standard middleware Invoke function without any further dependencies other than the HttpContext. When invoked, the middleware will attempt to find a route value corresponding to the specified RouteDataStringKey. This will give the name of the culture the user requested, for example es-ES.

Next, we obtain the current culture. I chose to retrieve this from the context using the IRequestCultureFeature, mostly just to show it is possible, but you could also just use the thread culture directly by using CultureInfo.CurrentCulture.Name.

We then compare the culture requested with the actual culture that was set. If the requested culture was one we support, then these should be the same (ignoring case). If the culture requested was not a real culture, was not a culture we support, or was a more-specific culture than we support, then these will not match.

Considering that last point - if the user requested de-DE but we only support de then the culture provider will automatically fall back to de. This is desirable behaviour, but the requested and actual cultures will not match.
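
To illustrate, a supported-culture configuration along these lines would produce exactly that fallback (a sketch; the culture lists here are assumptions, and FallBackToParentCultures is true by default on RequestLocalizationOptions):

using System.Collections.Generic;
using System.Globalization;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Localization;

// Only the neutral culture "de" is supported, so a request for "de-DE"
// falls back to "de" (FallBackToParentCultures defaults to true).
var supportedCultures = new List<CultureInfo>
{
    new CultureInfo("en-GB"),
    new CultureInfo("de")
};

var options = new RequestLocalizationOptions
{
    DefaultRequestCulture = new RequestCulture("en-GB"),
    SupportedCultures = supportedCultures,
    SupportedUICultures = supportedCultures
};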

Once we have identified that the cultures do not match, we need to redirect the user to the correct url. Achieving this goal seemed surprisingly tricky, and potentially rather fragile, but it worked for me.

In order to route to a url you need an instance of an IRouter. You can obtain a collection of these, along with all the current route values, by calling HttpContext.GetRouteData(). I simply chose the first IRouter instance, passed in all the current route values, and provided a new value for the "culture" route value to create a VirtualPathContext, which can in turn be used to generate a path. Hard work!

Adding the middleware to your application

Now we have our middleware, we actually need to add it to our application somewhere. Luckily, we are already using middleware as filters to extract the culture from the url, so we can simply insert our middleware into the pipeline.

public class LocalizationPipeline  
{
    public void Configure(IApplicationBuilder app, RequestLocalizationOptions options)
    {
        app.UseRequestLocalization(options);
        app.UseMiddleware<RedirectUnsupportedCulturesMiddleware>();
    }
}

So our localisation pipeline (which will be run as a filter, thanks to a global MiddlewareFilterAttribute) will first attempt to resolve the request's culture. Immediately after doing so, we run our new middleware, and redirect the request if it is not a culture we support.

If you're not sure what's going on here, I suggest checking out my earlier posts on setting up url localisation in your apps.

Trying it out

That should be all we need to do in order to automatically redirect requests that don't match in culture.

Trying a gibberish culture localhost/zz-ZZ redirects to our default culture:

Redirecting unknown cultures when using the url culture provider

Using a culture we don't support localhost/es-ES similarly redirects to the default culture:

Redirecting unknown cultures when using the url culture provider

If we support a fallback culture de then the request localhost/de-DE is redirected to that:

Redirecting unknown cultures when using the url culture provider

Caveats

One thing I haven't handled here is the difference between CurrentCulture and CurrentUICulture. These two can be different, and are supported by both the RequestLocalizationMiddleware and the RouteDataRequestCultureProvider. I chose not to address it here, but if you are using both in your application, you could easily extend the middleware to handle differences in either value.
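
For example, the comparison in the Invoke method could be extended along these lines (a minimal sketch, reusing the requestedCulture variable and GetNewPath helper from the middleware above):

// Compare the requested culture against both the culture and the UI culture.
var cultureFeature = context.Features.Get<IRequestCultureFeature>();
var actualCulture = cultureFeature?.RequestCulture.Culture.Name;
var actualUiCulture = cultureFeature?.RequestCulture.UICulture.Name;

// Redirect if either value differs from the culture segment in the URL.
var mismatch =
    !string.Equals(requestedCulture, actualCulture, StringComparison.OrdinalIgnoreCase) ||
    !string.Equals(requestedCulture, actualUiCulture, StringComparison.OrdinalIgnoreCase);

if (string.IsNullOrEmpty(requestedCulture) || mismatch)
{
    var newCulturedPath = GetNewPath(context, actualCulture);
    context.Response.Redirect(newCulturedPath);
    return;
}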

Summary

With these redirects in place, you should hopefully have the last piece of the puzzle for implementing the url culture provider in your ASP.NET Core 1.1 apps. If you come across anything I've missed, comments, or improvements, then do let me know!


Dominick Baier: IdentityServer4 is now OpenID Certified

As of today – IdentityServer4 is officially certified by the OpenID Foundation. Release of 1.0 will be this Friday!

More details here.

oid-l-certification-mark-l-cmyk-150dpi-90mm


Filed under: .NET Security, OAuth, WebAPI


Anuraj Parameswaran: Unit Testing in dotnet core

This post will give a brief idea about creating and executing unit tests in dotnet core. Unit testing is a software development process in which the smallest testable parts of an application, called units, are individually and independently scrutinized for proper operation. Unit testing is often automated but it can also be done manually.

To create a unit test project, you can execute the following command from the command line.

dotnet new -t xunittest

The command will create a simple unit test project with a project.json file and a Tests.cs file.

Here is the Tests.cs file.

using System;
using Xunit;

namespace Tests
{
    public class Tests
    {
        [Fact]
        public void Test1() 
        {
            Assert.True(true);
        }
    }
}

And here is the project.json file.

{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable"
  },
  "dependencies": {
    "System.Runtime.Serialization.Primitives": "4.3.0",
    "xunit": "2.1.0",
    "dotnet-test-xunit": "1.0.0-rc2-192208-24"
  },
  "testRunner": "xunit",
  "frameworks": {
    "netcoreapp1.1": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.1.0"
        }
      },
      "imports": [
        "dotnet5.4",
        "portable-net451+win8"
      ]
    }
  }
}

Now you can run the dotnet restore command to restore the dependencies, and then run the unit tests with the dotnet test command.

Unit Tests Execution

In the code, a Fact attribute is used instead of the TestMethod attribute, because xUnit is the default unit test framework in .NET Core, since it is cross platform. If you are using MS Test, you don’t need to change; you can use MS Test as well. For that you need to modify the project.json to use the MS Test framework and test runner.

Here are the changes required to run MS Test.

"dependencies": {
    "dotnet-test-mstest": "1.1.1-preview",
    "MsTest.TestFramework": "1.0.4-preview"
  },
  "testRunner": "mstest"

Your code also needs to change from xUnit to MS Test syntax.

using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace Tests
{
    [TestClass]
    public class Tests
    {
        [TestMethod]
        public void Test1() 
        {
            Assert.IsTrue(true);
        }
    }
}

And here is the results.

MS Test Execution

Last but not least, NUnit is supported in .NET Core. I’ve tested it in a few scenarios and it works as expected, so you should be able to use it with the same reservations as for any other beta library. Similar to MS Test, you need to modify the project.json file.

"testRunner": "nunit",
"dependencies": {
    "NUnit": "3.4.1",
    "dotnet-test-nunit": "3.4.0-beta-2"
},

Your code also needs to change to use the NUnit attributes.

using NUnit.Framework;

namespace Tests
{
    [TestFixture]
    public class Tests
    {
        [Test]
        public void Test1() 
        {
            Assert.IsTrue(true);
        }
    }
}

And here is the execution results.

NUnit Test Execution

You can use a global.json file to include your project references. By default, project-to-project references must be in sibling folders. Using a global.json file allows a solution to specify non-standard locations for references, as shown below.
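
For example, a global.json at the solution root might look like this (a sketch; the folder names are placeholders):

{
  "projects": [ "src", "test", "../SharedLibrary/src" ]
}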

Happy Programming :)


Anuraj Parameswaran: How to setup a webserver in an Azure Virtual Machine

This post is about configuring and running a webserver in an Azure Virtual Machine. Azure Virtual Machines are one of the IaaS (Infrastructure as a Service) offerings from Microsoft Azure. Infrastructure as a Service (IaaS) is an instant computing infrastructure, provisioned and managed over the Internet, which can quickly scale up and down with demand so that you pay only for what you use.

Hopefully you have already created a virtual machine; if not, creating one is pretty straightforward.

Create Azure Virtual Machine

I have already created an Azure Virtual Machine with the Windows 7 OS. To configure it as a webserver, first you need to install IIS; you can do this by using Remote Desktop to connect to the machine and installing IIS from Programs and Features.

Install IIS from Programs and Features

Once it completes, you need to add an exception in the firewall to allow incoming/outgoing traffic.

Adding exception to World Wide Web

That completes the configuration inside the Virtual Machine. Now you need to modify the VM settings in the Portal. Click on the Virtual Machine name in the all resources list and select the Network Interfaces menu.

Network Interfaces menu

From the listed network interfaces, click on the network interface, which will open the Network Security Group.

Network Security Group

Inside the Network Security Group settings, select the Inbound security rules option. You will see the Remote Desktop rule there.

Network Security Group - Inbound Security Rules

Click Add, which will open the Add Inbound Rule option.

Add Inbound Rule

Name identifies the rule; in this post I am using Web as my inbound rule name. Priority can be anything; rules are processed based on priority, and the lower the number, the higher the priority. The Source setting allows traffic from a specific IP address only; for a web server it can be Any. Service configures the port: you can choose HTTP, which will set the port number to 80 and the protocol to TCP. Action needs to be set to Allow. Once you save this, you can see the rule in the list.

Available inbound rules

Please note that the same rule will apply to all the VMs using the same Network Security Group, which means you can reuse it for all your web servers, for example, while most likely choosing a different one for all your SQL machines. You can now connect directly to your VM using port 80 and the public IP provided in its Dashboard.

Here is the screenshot of the same.

Webserver running an Azure Virtual Machine

Happy Programming :)


Damien Bowden: Implementing a Client White-list using ASP.NET Core Middleware

This article shows how a client white-list could be implemented using ASP.NET Core middleware checking the Remote IP address of the request. If the client IP is on the white-list, no restrictions exist.

Code: https://github.com/damienbod/ClientIpAspNetCoreIIS

The middleware uses an admin white-list parameter from the constructor to compare with the remote IP address from the HttpContext Connection property. This is different to previous versions of .NET. In the example, all GET requests are allowed. If any other request method is used, the remote IP is checked against the white-list; if it is not present, a 403 is returned.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

namespace ClientIpAspNetCore
{
    public class AdminWhiteListMiddleware
    {
        private readonly RequestDelegate _next;
        private readonly ILogger<AdminWhiteListMiddleware> _logger;
        private readonly string _adminWhiteList;

        public AdminWhiteListMiddleware(RequestDelegate next, ILogger<AdminWhiteListMiddleware> logger, string adminWhiteList)
        {
            _adminWhiteList = adminWhiteList;
            _next = next;
            _logger = logger;
        }

        public async Task Invoke(HttpContext context)
        {
            if (context.Request.Method != "GET")
            {
                var remoteIp = context.Connection.RemoteIpAddress;
                _logger.LogInformation($"Request from Remote IP address: {remoteIp}");

                string[] ip = _adminWhiteList.Split(';');
                if (!ip.Any(option => option == remoteIp.ToString()))
                {
                    _logger.LogInformation($"Forbidden Request from Remote IP address: {remoteIp}");
                    context.Response.StatusCode = (int)HttpStatusCode.Forbidden;
                    return;
                }
            }

            await _next.Invoke(context);

        }
    }
}

The white-list is configured in the appsettings.config. This is a ‘;’ separated list which is split in the middleware class.

{
    "AdminWhiteList":  "127.0.0.1;192.168.1.5",
    "Logging": {
        "IncludeScopes": false,
        "LogLevel": {
            "Default": "Debug",
            "System": "Information",
            "Microsoft": "Information"
        }
    }
}

In the startup class, the AdminWhiteListMiddleware type is added using the appsettings configuration.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	...

	app.UseStaticFiles();

	app.UseMiddleware<AdminWhiteListMiddleware>(Configuration["AdminWhiteList"]);
	app.UseMvc();
}

If a request other than a GET is sent and the client IP is not in the white-list, the 403 response is returned to the client and logged.

2016-12-18 16:45:42.8891|0|ClientIpAspNetCore.AdminWhiteListMiddleware|INFO|  Request from Remote IP address: 192.168.1.4 
2016-12-18 16:45:42.9031|0|ClientIpAspNetCore.AdminWhiteListMiddleware|INFO|  Forbidden Request from Remote IP address: 192.168.1.4 

An ActionFilter could also be used to implement this, for example if more specific logic is required.

using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc.Authorization;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Extensions.Logging;

namespace ClientIpAspNetCore.Filters
{
    public class ClientIdCheckFilter : ActionFilterAttribute
    {
        private readonly ILogger _logger;

        public ClientIdCheckFilter(ILoggerFactory loggerFactory)
        {
            _logger = loggerFactory.CreateLogger("ClassConsoleLogActionOneFilter");
        }

        public override void OnActionExecuting(ActionExecutingContext context)
        {
            _logger.LogInformation($"Remote IpAddress: {context.HttpContext.Connection.RemoteIpAddress}");

            // TODO implement some business logic for this...

            base.OnActionExecuting(context);
        }
    }
}

The ActionFilter can be added to the services.

public void ConfigureServices(IServiceCollection services)
{
	services.AddScoped<ClientIdCheckFilter>();

	services.AddMvc();
}

And can be used specifically on any controller as required.

[ServiceFilter(typeof(ClientIdCheckFilter))]
[Route("api/[controller]")]
public class ValuesController : Controller

Note: I have not tested this with all the different possible hops or forwarded headers; it has only been tested with IIS and Kestrel.

Links:

https://docs.microsoft.com/en-us/aspnet/core/fundamentals/middleware

http://odetocode.com/blogs/scott/archive/2016/11/22/asp-net-core-and-the-enterprise-part-3-middleware.aspx



Anuraj Parameswaran: Working with Azure Logic Apps

This post is about working with Azure Logic Apps. Logic Apps provide a way to simplify and implement scalable integration and workflows in the cloud, with both a code view and a visual designer window to automate the process. In this post I am creating an Azure Logic App which helps to monitor a website. Logic Apps is a fully managed iPaaS (integration Platform as a Service), so developers do not have to worry about hosting, scalability, availability and management. Logic Apps will scale up automatically to meet demand.

Azure Logic App

Once you clicked on Create button, Create Logic App blade will open up. You can provide the name and other details.

Create Azure Logic App

Providing the name and other details will create the logic app. Once the Logic App is created successfully, clicking on the Logic App name will redirect you to the Logic App Designer.

Logic App Designer

You can choose either the blank template or any of the other predefined templates. For this post I am using the blank template.

The Logic App I am creating will monitor my blog at specific intervals, and if it is down, it will send an email to me. To implement this, I first add an HTTP API; in the method I set GET, and in the URI I set my blog url.

Logic App Designer - HTTP API

I am configuring the frequency of the HTTP GET to 3 hours, so every 3 hours the Logic App will execute a GET request to my blog and return the status code. Now click on the New Step button and select the add a condition option; this is to check whether I am getting an HTTP OK status code (200) or something else.

Condition option

And if I am not getting a 200, I need to send an email. For that, click on the Add an action option and configure an email API. Here I am using the Outlook API to send email. You need to authorize the Azure Logic App to send email, and once that is done, you will get an interface like this, where you can provide the e-mail details.

Send email action

Now you can click on the save button in the Logic App Designer, which will save the changes you have made so far. You can test the implementation using the Run option. Clicking on the Code View option will display the JSON definition of the app.
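
To give an idea, the trigger portion of that definition looks roughly like the following (a sketch only; the condition and email actions are omitted, the uri is a placeholder, and the designer-generated file may differ in detail):

{
  "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
  "contentVersion": "1.0.0.0",
  "triggers": {
    "Http": {
      "type": "Http",
      "recurrence": {
        "frequency": "Hour",
        "interval": 3
      },
      "inputs": {
        "method": "GET",
        "uri": "https://example.com"
      }
    }
  },
  "actions": {}
}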

Here is the email I received from the Logic App

Email from Logic App

Logic Apps bring speed and scalability into the enterprise integration space. The ease of use of the designer, the variety of available triggers and actions, and the powerful management tools make centralizing your APIs simpler than ever. As businesses move towards digitalization, Logic Apps allow you to connect legacy and cutting-edge systems together. To turn off the app, click Disable in the command bar. View run and trigger histories to monitor when your logic app is running. You can click Refresh to see the latest data.

Happy Programming :)


Dominick Baier: Identity vs Permissions

We often see people misusing IdentityServer as an authorization/permission management system. This is troublesome – here’s why.

IdentityServer (hence the name) is really good at providing a stable identity for your users across all applications in your system. And with identity I mean immutable identity (at least for the lifetime of the session) – typical examples would be a user id (aka the subject id), a name, department, email address, customer id etc…

IdentityServer is not so well suited for letting clients or APIs know what this user is allowed to do – e.g. create a customer record, delete a table, read a certain document etc…

And this is not inherently a weakness of IdentityServer – but IdentityServer is a token service, and it’s a fact that claims and especially tokens are not a particularly good medium for transporting such information. Here are a couple of reasons:

  • Claims are supposed to model the identity of a user, not permissions
  • Claims are typically simple strings – you often want something more sophisticated to model authorization information or permissions
  • Permissions of a user are often different depending on which client or API is being used – putting them all into a single identity or access token is confusing and leads to problems. The same permission might even have a different meaning depending on who is consuming it
  • Permissions can change over the life time of a session, but the only way to get a new token is to make a roundtrip to the token service. This often requires some UI interaction which is not preferable
  • Permissions and business logic often overlap – where do you want to draw the line?
  • The only party that knows exactly about the authorization requirements of the current operation is the actual code where it happens – the token service can only provide coarse grained information
  • You want to keep your tokens small. Browser URL length restrictions and bandwidth are often limiting factors
  • And last but not least – it is easy to add a claim to a token. It is very hard to remove one. You never know if somebody already took a hard dependency on it. Every single claim you add to a token should be scrutinized.

In other words – keep permissions and authorization data out of your tokens. Add the authorization information to your context once you get closer to the resource that actually needs the information. And even then, it is tempting to model permissions using claims (the Microsoft services and frameworks kind of push you into that direction) – keep in mind that a simple string is a very limiting data structure. Modern programming languages have much better constructs than that.
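
To make that last point concrete, here is a hypothetical illustration (not an IdentityServer API; the Permission type is invented, and modern C# record syntax is used for brevity):

using System.Collections.Generic;
using System.Security.Claims;

// As a claim, a permission is squeezed into a single string value:
var permissionClaim = new Claim("permission", "customers.create");

// A dedicated type can model the action, resource and conditions explicitly,
// which a flat string cannot:
public record Permission(
    string Action,                      // e.g. "create"
    string Resource,                    // e.g. "customer-record"
    IReadOnlyList<string> Conditions);  // e.g. "owned-by-department"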

What about roles?
That’s a very common question. Roles are a bit of a grey area between identity and authorization. My rule of thumb is that if a role is a fundamental part of the user identity that is of interest to every part of your system – and role membership does not or not frequently change – it is a candidate for a claim in a token. Examples could be Customer vs Employee – or Patient vs Doctor vs Nurse.

Every other usage of roles – especially if the role membership would be different based on the client or API being used, it’s pure authorization data and should be avoided. If you realize that the number of roles of a user is high – or growing – avoid putting them into the token.

Conclusion
Design for a clean separation of identity and permissions (which is just a re-iteration of authentication vs authorization). Acquire authorization data as close as possible to the code that needs it – only there you can make an informed decision what you really need.

I also often get the question if we have a similar flexible solution to authorization as we have with IdentityServer for authentication – and the answer is – right now – no. But I have the feeling that 2017 will be our year to finally tackle the authorization problem. Stay tuned!


Filed under: .NET Security, IdentityServer, OAuth, OpenID Connect, WebAPI


Dominick Baier: Optimizing Identity Tokens for size

Generally speaking, you want to keep your (identity) tokens small. They often need to be transferred via length constrained transport mechanisms – especially the browser URL which might have limitations (e.g. 2 KB in IE). You also need to somehow store the identity token for the length of a session if you want to use the post logout redirect feature at logout time.

Therefore the OpenID Connect specification suggests the following (in section 5.4):

The Claims requested by the profile, email, address, and phone scope values are returned from the UserInfo Endpoint, as described in Section 5.3.2, when a response_type value is used that results in an Access Token being issued. However, when no Access Token is issued (which is the case for the response_type value id_token), the resulting Claims are returned in the ID Token.

IOW – if only an identity token is requested, put all claims into the token. If however an access token is requested as well (e.g. via id_token token or code id_token), it is OK to remove the claims from the identity token and rather let the client use the userinfo endpoint to retrieve them.
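
For example, an implicit flow request along these lines asks for both tokens (a sketch with placeholder client, redirect, nonce and state values, shown unencoded for readability, and assuming IdentityServer's default /connect/authorize endpoint):

GET /connect/authorize?
    client_id=mvc&
    response_type=id_token token&
    scope=openid profile&
    redirect_uri=https://localhost:5002/signin-oidc&
    nonce=n-0S6_WzA2Mj&
    state=af0ifjsldkj

With response_type=id_token alone, the claims stay in the identity token; with id_token token, the client can call the userinfo endpoint with the access token instead.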

That’s how we always handled identity token generation in IdentityServer by default. You could then override our default behaviour by setting the AlwaysIncludeInIdToken flag on the ScopeClaim class.

When we did the configuration re-design in IdentityServer4, we asked ourselves if this override feature is still required. Times have changed a bit and the popular client libraries out there (e.g. the ASP.NET Core OpenID Connect middleware or Brock’s JS client) automatically use the userinfo endpoint anyways as part of the authentication process.

So we removed it.

Shortly after that, several people brought to our attention that they were actually relying on that feature and are now missing their claims in the identity token without a way to change configuration. Sorry about that.

Post RC5, we brought this feature back – it is now a client setting, and not a claims setting anymore. It will be included in RTM next week and documented in our docs.
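
In client configuration terms, that looks something like the following sketch (assuming the IdentityServer4 client model, where the property is named AlwaysIncludeUserClaimsInIdToken):

using IdentityServer4.Models;

var client = new Client
{
    ClientId = "mvc",
    AllowedGrantTypes = GrantTypes.Implicit,

    // Opt this client back in to having the user claims embedded in the
    // identity token, instead of relying on the userinfo endpoint:
    AlwaysIncludeUserClaimsInIdToken = true
};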

I hope this post explains our motivation, and some background, why this behaviour existed in the first place.


Filed under: .NET Security, IdentityServer, OpenID Connect, WebAPI


Damien Bowden: EF Core diagnosis and features with MS SQL Server

This article shows how Entity Framework Core messages can be logged and compared using the SQL Profiler, and also demonstrates some, but not all, of the cool new 1.1 features. All information can be found in the links at the bottom, and especially in the excellent docs for EF Core.

Code: https://github.com/damienbod/EFCoreFeaturesAndDiag

project.json with EF Core packages and tools

When using EF Core, you need to add the correct packages and tools to the project file. EF Core 1.1 has a lot of changes compared to 1.0. You should not mix the different versions from EF Core. Use either the LTS or the current version, but not both. The Microsoft.EntityFrameworkCore.Tools.DotNet is a new package which came with 1.1.

{
    "dependencies": {
        "Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore": "1.1.0",   
        "Microsoft.EntityFrameworkCore": "1.1.0",
        "Microsoft.EntityFrameworkCore.Relational": "1.1.0",
        "Microsoft.EntityFrameworkCore.SqlServer": "1.1.0",
        "Microsoft.EntityFrameworkCore.SqlServer.Design": {
            "version": "1.1.0",
            "type": "build"
        },
        "Microsoft.EntityFrameworkCore.Tools": "1.1.0-preview4-final",
        ...
    },

    "tools": {
        "Microsoft.EntityFrameworkCore.Tools.DotNet": "1.1.0-preview4",
        ...
    },

    ...

}

Startup ConfigureServices

The database and EF Core can be configured, and usually are, in the Startup ConfigureServices method of an ASP.NET Core application. The following code sets up EF Core to use a MS SQL Server database and uses the new retry-on-failure method from version 1.1.

public void ConfigureServices(IServiceCollection services)
{
	var sqlConnectionString = Configuration.GetConnectionString("DataAccessMsSqlServerProvider");

	services.AddDbContext<DomainModelMsSqlServerContext>(
		options => options.UseSqlServer(
			sqlConnectionString,
			sqlServerOptions => sqlServerOptions.EnableRetryOnFailure()
		)
	);

	services.AddScoped<IDataAccessProvider, DataAccessMsSqlServerProvider>();

	services.AddMvc().AddJsonOptions(options =>
	{
		options.SerializerSettings.ReferenceLoopHandling = ReferenceLoopHandling.Ignore;
	});
}

Data Model

With version 1.1, Entity Framework Core can now use backing fields to connect with the database, and not just properties. This opens up a whole new world of possibilities for how entities can be designed or used. The _description field from the following DataEventRecord entity will be used for the database.

public class DataEventRecord
{
	private string _description;

	[Key]
	public long DataEventRecordId { get; set; }

	public string Name { get; set; }

	public string MadDescription {
		get { return _description; }
		set { _description = value;  }
	}

	public DateTime Timestamp { get; set; }

	[ForeignKey("SourceInfoId")]
	public SourceInfo SourceInfo { get; set; }

	public long SourceInfoId { get; set; }
}

DbContext with Field and Column mapping

In the OnModelCreating method of the DbContext, the _description field is mapped to the _description column for the MadDescription property. The context also configures shadow properties to add updated timestamps.

using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;

namespace EFCoreFeaturesAndDiag.Model
{
    // >dotnet ef migration add testMigration
    public class DomainModelMsSqlServerContext : DbContext
    {
        public DomainModelMsSqlServerContext(DbContextOptions<DomainModelMsSqlServerContext> options) :base(options)
        { }
        
        public DbSet<DataEventRecord> DataEventRecords { get; set; }

        public DbSet<SourceInfo> SourceInfos { get; set; }

        protected override void OnModelCreating(ModelBuilder builder)
        {
            builder.Entity<DataEventRecord>().HasKey(m => m.DataEventRecordId);
            builder.Entity<SourceInfo>().HasKey(m => m.SourceInfoId);

            // shadow properties
            builder.Entity<DataEventRecord>().Property<DateTime>("UpdatedTimestamp");
            builder.Entity<SourceInfo>().Property<DateTime>("UpdatedTimestamp");

            builder.Entity<DataEventRecord>()
                .Property(b => b.MadDescription)
                .HasField("_description")
                .HasColumnName("_description");

            base.OnModelCreating(builder);
        }

        public override int SaveChanges()
        {
            ChangeTracker.DetectChanges();

            updateUpdatedProperty<SourceInfo>();
            updateUpdatedProperty<DataEventRecord>();

            return base.SaveChanges();
        }

        private void updateUpdatedProperty<T>() where T : class
        {
            var modifiedSourceInfo =
                ChangeTracker.Entries<T>()
                    .Where(e => e.State == EntityState.Added || e.State == EntityState.Modified);

            foreach (var entry in modifiedSourceInfo)
            {
                entry.Property("UpdatedTimestamp").CurrentValue = DateTime.UtcNow;
            }
        }
    }
}

Logging and diagnosis

Logging for EF Core is configured for this application as described here in the Entity Framework Core docs.

The EfCoreFilteredLoggerProvider.

using Microsoft.Extensions.Logging;
using System;
using System.Linq;
using Microsoft.EntityFrameworkCore.Storage.Internal;

namespace EFCoreFeaturesAndDiag.Logging
{
    public class EfCoreFilteredLoggerProvider : ILoggerProvider
    {
        private static string[] _categories =
        {
            typeof(Microsoft.EntityFrameworkCore.Storage.Internal.RelationalCommandBuilderFactory).FullName,
            typeof(Microsoft.EntityFrameworkCore.Storage.Internal.SqlServerConnection).FullName
        };

        public ILogger CreateLogger(string categoryName)
        {
            if (_categories.Contains(categoryName))
            {
                return new MyLogger();
            }

            return new NullLogger();
        }

        public void Dispose()
        { }

        private class MyLogger : ILogger
        {
            public bool IsEnabled(LogLevel logLevel)
            {
                return true;
            }

            public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
            {
                Console.WriteLine(formatter(state, exception));
            }

            public IDisposable BeginScope<TState>(TState state)
            {
                return null;
            }
        }

        private class NullLogger : ILogger
        {
            public bool IsEnabled(LogLevel logLevel)
            {
                return false;
            }

            public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
            { }

            public IDisposable BeginScope<TState>(TState state)
            {
                return null;
            }
        }
    }
}

The EfCoreLoggerProvider class:

using Microsoft.Extensions.Logging;
using System;
using System.IO;

namespace EFCoreFeaturesAndDiag.Logging
{
    public class EfCoreLoggerProvider : ILoggerProvider
    {
        public ILogger CreateLogger(string categoryName)
        {
            return new MyLogger();
        }

        public void Dispose()
        { }

        private class MyLogger : ILogger
        {
            public bool IsEnabled(LogLevel logLevel)
            {
                return true;
            }

            public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
            {
                File.AppendAllText(@"C:\temp\log.txt", formatter(state, exception));
                Console.WriteLine(formatter(state, exception));
            }

            public IDisposable BeginScope<TState>(TState state)
            {
                return null;
            }
        }
    }
}

The EfCoreLoggerProvider logger is then added in the DataAccessMsSqlServerProvider constructor.

public class DataAccessMsSqlServerProvider : IDataAccessProvider
{
	private readonly DomainModelMsSqlServerContext _context;
	private readonly ILogger _logger;

	public DataAccessMsSqlServerProvider(DomainModelMsSqlServerContext context, ILoggerFactory loggerFactory)
	{
		_context = context;
		loggerFactory.AddProvider(new EfCoreLoggerProvider());
		_logger = loggerFactory.CreateLogger("DataAccessMsSqlServerProvider");

	}

The EF Core logs

Here are the logs produced for an insert command:

Executing action method AspNet5MultipleProject.Controllers.DataEventRecordsController.AddTest (EFCoreFeaturesAndDiag) with arguments ((null)) - ModelState is Valid
Opening connection to database 'EfcoreTest' on server 'N275\MSSQLSERVER2014'.
Beginning transaction with isolation level 'Unspecified'.
Executed DbCommand (60ms) [Parameters=[@p0='?' (Size = 4000), @p1='?' (Size = 4000), @p2='?', @p3='?'], CommandType='Text', CommandTimeout='30']
SET NOCOUNT ON;
INSERT INTO [SourceInfos] ([Description], [Name], [Timestamp], [UpdatedTimestamp])
VALUES (@p0, @p1, @p2, @p3);
SELECT [SourceInfoId]
FROM [SourceInfos]
WHERE @@ROWCOUNT = 1 AND [SourceInfoId] = scope_identity();
Executed DbCommand (1ms) [Parameters=[@p4='?' (Size = 4000), @p5='?' (Size = 4000), @p6='?', @p7='?', @p8='?'], CommandType='Text', CommandTimeout='30']
SET NOCOUNT ON;
INSERT INTO [DataEventRecords] ([_description], [Name], [SourceInfoId], [Timestamp], [UpdatedTimestamp])
VALUES (@p4, @p5, @p6, @p7, @p8);
SELECT [DataEventRecordId]
FROM [DataEventRecords]
WHERE @@ROWCOUNT = 1 AND [DataEventRecordId] = scope_identity();
Committing transaction.
Closing connection to database 'EfcoreTest' on server 'N275\MSSQLSERVER2014'.
Executed action method AspNet5MultipleProject.Controllers.DataEventRecordsController.AddTest (EFCoreFeaturesAndDiag), returned result Microsoft.AspNetCore.Mvc.OkObjectResult.
No information found on request to perform content negotiation.
Selected output formatter 'Microsoft.AspNetCore.Mvc.Formatters.StringOutputFormatter' and content type 'text/plain; charset=utf-8' to write the response.
Executing ObjectResult, writing value Microsoft.AspNetCore.Mvc.ControllerContext.
Executed action AspNet5MultipleProject.Controllers.DataEventRecordsController.AddTest (EFCoreFeaturesAndDiag) in 1181.5083ms
Connection id "0HL13KIK3Q7QG" completed keep alive response.
Request finished in 1259.1888ms 200 text/plain; charset=utf-8
Request starting HTTP/1.1 GET http://localhost:46799/favicon.ico
The request path /favicon.ico does not match an existing file
Request did not match any routes.
Connection id "0HL13KIK3Q7QG" completed keep alive response.
Request finished in 36.2143ms 404

MS SQL Profiler

And here’s the corresponding insert request in the SQL Profiler from MS SQL Server.

sqlprofiler_efcore_01

EF Core Find

One feature which was also implemented in EF Core 1.1 is the Find method, which is very convenient to use.

public DataEventRecord GetDataEventRecord(long dataEventRecordId)
{
	return _context.DataEventRecords.Find(dataEventRecordId);
}
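
Find first checks whether an entity with the given key is already being tracked by the context, and only queries the database if it is not. An async counterpart is also available; a minimal sketch:

public async Task<DataEventRecord> GetDataEventRecordAsync(long dataEventRecordId)
{
	// FindAsync also checks the change tracker before querying the database
	return await _context.DataEventRecords.FindAsync(dataEventRecordId);
}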

EF Core is looking really good, and with the promise of further features and providers, it is becoming really exciting.

Links:

https://docs.microsoft.com/en-us/ef/

https://docs.microsoft.com/en-us/ef/core/miscellaneous/logging

https://blogs.msdn.microsoft.com/dotnet/2016/11/16/announcing-entity-framework-core-1-1/

https://msdn.microsoft.com/en-us/library/ms181091.aspx

https://github.com/aspnet/EntityFramework/releases/tag/rel%2F1.1.0

https://damienbod.com/2016/01/07/experiments-with-entity-framework-7-and-asp-net-5-mvc-6/

https://damienbod.com/2016/01/11/asp-net-5-with-postgresql-and-entity-framework-7/

https://damienbod.com/2015/12/05/asp-net-5-mvc-6-file-upload-with-ms-sql-server-filetable/

https://damienbod.com/2016/09/22/setting-the-nlog-database-connection-string-in-the-asp-net-core-appsettings-json/



Andrew Lock: Using a culture constraint and redirecting 404s with the url culture provider

Using a culture constraint and redirecting 404s with the url culture provider

This is the next in a series of posts on using the middleware as filters feature of ASP.NET Core 1.1 to add a url culture provider to your application. To get an idea for how this works, take a look at the microsoft.com homepage, which includes the request culture in the url.

Using a culture constraint and redirecting 404s with the url culture provider

In my original post, I showed how you could set this up in your own app using the new RouteDataRequestCultureProvider which shipped with ASP.NET Core 1.1. When combined with the middleware as filters feature, you can extract this culture name from the url and use it update the request culture.

In my previous post, we extended our implementation to set up global conventions, ensuring that all our routes would be prefixed with a {culture} url segment. As I pointed out in the post, the downside to this approach is that urls without a culture segment are no longer valid. Hitting the home page / of your application would give a 404 - hardly a friendly user experience!

In this post, I'll show how we can create a custom route constraint to help prevent invalid route matching, and add additional routes to catch those pesky 404s by redirecting to a cultured version of the url.

Creating a custom route constraint

As a reminder, in the last post we set up both a global route and an IApplicationModelConvention for attribute routes. The techniques described in this post can be used with both approaches, but I will just talk about the global route for brevity.

The global route we created used a {culture} segment which is extracted by the CultureProvider to determine the request culture:

app.UseMvc(routes =>
{
    routes.MapRoute(
        name: "default",
        template: "{culture}/{controller=Home}/{action=Index}/{id?}");
});

One of the problems with this route as it stands is that there are no limitations on what can match the {culture} segment. If I navigate to /gibberish/ then that would match the route, using the default values for controller and action, and setting culture=gibberish as a route value.

Using a culture constraint and redirecting 404s with the url culture provider

Note that the url contains the route value gibberish, even though the request has fallen back to the default culture as gibberish is not a valid culture. Whether you consider this a big problem or not is somewhat up to you, but consider the case where the url is /Home/Index - that corresponds to a culture of Home and a controller of Index, even though this is clearly not the intention in the url.

Creating a constraint using regular expressions

We can mitigate this issue by adding a constraint to the route value. Constraints limit the values that a route value is allowed to have. If the route value does not satisfy the constraint, then the route will not match the request. There are a whole host of constraints you can use in your routes, such as restricting to integers, maximum lengths, whether the value is optional etc. You can also create new ones.

We want to restrict our {culture} route value to be a valid culture name, i.e. a 2 letter language code, optionally followed by a hyphen and a 2 letter region code. Now, ideally we would also validate that the 2 letters are actually a valid language (e.g. en, de, and fr are valid while zz is not), but for our purposes a simple regular expression will suffice.

With this slightly simplified model, we can easily create a new constraint to satisfy our requirements using the RegexRouteConstraint base class to do all the heavy lifting for us:

using Microsoft.AspNetCore.Routing.Constraints;

public class CultureRouteConstraint : RegexRouteConstraint  
{
    public CultureRouteConstraint()
        : base(@"^[a-zA-Z]{2}(\-[a-zA-Z]{2})?$") { }
}

Before we can use the constraint in our routes, we need to tell the router about it. We do this by providing a string key for it, and registering the constraint with the RouteOptions object in ConfigureServices. I chose the key "culturecode".

services.Configure<RouteOptions>(opts =>  
    opts.ConstraintMap.Add("culturecode", typeof(CultureRouteConstraint)));

With this in place, we can start using the constraint in our routes.

Using a custom constraint in routes

Using the constraint is as simple as adding the key "culturecode" after a colon when specifying our route values:

app.UseMvc(routes =>  
{
    routes.MapRoute(
        name: "default",
        template: "{culture:culturecode}/{controller=Home}/{action=Index}/{id?}");
});

Now, if we hit the gibberish url, we are met with the following instead:

Using a culture constraint and redirecting 404s with the url culture provider

Success! Sort of. Depending on how you look at it. The constraint is certainly doing the job, as the url provided does not match the specified route, so MVC returns a 404.

Adding the culture constraint doesn't seem to achieve a whole lot on its own, but it allows us to more safely add additional catch-all routes, to handle cases where the request culture was not provided.

Handling urls with no specified culture

As I mentioned in my last post, one of the problems with adding culture to the global routing conventions is that urls such as your home page at / will not match, and will return 404s.

How you want to handle this is a matter of opinion. Maybe you want to have every 'culture-less' route match its 'cultured' equivalent with the default culture, so / would serve the same data as /en-GB/ (for your default culture).

An approach I prefer (and in fact the behaviour you see on the www.microsoft.com website), is that hitting a culture-less route sends a 302 redirect to the cultured route. In that case, / would redirect to /en-GB/.

We can achieve this behaviour by combining our culture constraint with a couple of additional routes, which we'll place after our global route defined above. I'll introduce the new routes one at a time.

routes.MapGet("{culture:culturecode}/{*path}", appBuilder => { });  

This route has two sections: the first is the {culture} route value we've seen before; the second is a catch-all parameter which will match anything at all. This route would catch paths such as /en-GB/this/is/the/path, /en-US/, /es/Home/Missing - basically anything that has a valid culture value.

The handler for this method is essentially doing nothing - normally you would configure how to handle this route, but I am explicitly not adding to the pipeline, so that anything matching this route will return a 404. That means any URL which

  1. Has a culture; and
  2. Does not match the previous global route url

will return a 404.

Redirecting culture-less routes to the default culture

The above route does not do anything when used on its own after the global route, but it allows us to use a complete catch-all route afterward. It essentially filters out any requests that already have a culture route-value specified.

To redirect culture-less routes, we can use the following route:

routes.MapGet("{*path}", (RequestDelegate)(ctx =>  
{
    var defaultCulture = localizationOptions.DefaultRequestCulture.Culture.Name;
    var path = ctx.GetRouteValue("path") ?? string.Empty;
    var culturedPath = $"/{defaultCulture}/{path}";
    ctx.Response.Redirect(culturedPath);
    return Task.CompletedTask;
}));

This route uses a different overload of MapGet to provide a RequestDelegate rather than the Action<IApplicationBuilder> we used in the previous route. The difference is that a RequestDelegate explicitly handles a matched route, while the previous route was essentially forking the pipeline when the route matched.

This route again uses a catch-all route value called {path}, which this time contains the whole request path.

First, we obtain the name of the default culture from the RequestLocalizationOptions which we inject into the Configure method (see below for the full code in context). This could be en-GB in my case, or it may be en-US, de etc.

Next, we obtain the requested path by fetching the {path} route value, and combine it with our default culture to create the culturedPath.

Finally, we redirect to the cultured path and return a completed Task to satisfy the RequestDelegate method signature.

You may notice that I am only redirecting on a GET request. This is to prevent unexpected side effects, and in practice should not be an issue for most MVC sites, as users will be redirected to cultured urls when first hitting your site.

Putting it all together

We now have all the pieces we need to add redirection to our MVC application. Our Configure method should now look something like this:

public void Configure(IApplicationBuilder app, RequestLocalizationOptions localizationOptions)  
{
    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{culture:culturecode}/{controller=Home}/{action=Index}/{id?}");
        routes.MapGet("{culture:culturecode}/{*path}", appBuilder => { });
        routes.MapGet("{*path}", (RequestDelegate)(ctx =>
        {
            var defaultCulture = localizationOptions.DefaultRequestCulture.Culture.Name;
            var path = ctx.GetRouteValue("path") ?? string.Empty;
            var culturedPath = $"/{defaultCulture}/{path}";
            ctx.Response.Redirect(culturedPath);
            return Task.CompletedTask;
        }));
    });
}

Now when we hit our homepage at localhost/ we are redirected to localhost/en-GB/ - a much nicer experience for the user than the 404 we received previously!

Using a culture constraint and redirecting 404s with the url culture provider

If we consider the url I described earlier, localhost/gibberish/Home/Index/, we will still receive a 404, just as before. Note however that the user is redirected to a correctly cultured route first:

Using a culture constraint and redirecting 404s with the url culture provider

The first time the url is hit, it skips the first and second routes as it does not have a culture, and is redirected to its cultured equivalent, localhost/en-GB/gibberish/Home/Index/.

When this url is hit, it matches the first route, but attempts to find a GibberishController, which obviously does not exist. It therefore matches our second, cultured catch-all route, which returns a 404. The purpose of this second route becomes clear here: it prevents an infinite redirect loop, and ensures we return a 404 for urls which genuinely should be returning Not Found.

Summary

In this post I showed how you could extend the global conventions for culture I described in my previous post to handle the case when a user does not provide the culture in the url.

Using a custom routing constraint and two catch-all routes it is possible to have a single 'correct' route which contains a culture, and to re-map culture-less requests onto this route.

For more details on creating and testing custom route constraints, I recommend you check out this post by Scott Hanselman.


Dominick Baier: IdentityServer4 and ASP.NET Core 1.1

aka RC5 – last RC – promised!

The update from ASP.NET Core 1.0 (aka LTS – long term support) to ASP.NET Core 1.1 (aka Current) didn’t go so well (at least IMHO).

There were a couple of breaking changes, both in the APIs and in behaviour - especially around challenge/response based authentication middleware and EF Core.

Long story short – it was not possible for us to make IdentityServer support both versions. That’s why we decided to move to 1.1, which includes a bunch of bug fixes, and will also most probably be the version that ships with the new Visual Studio.

To be more specific – we build against ASP.NET Core 1.1 and the 1.0.0-preview2-003131 SDK.

Here’s a guide that describes how to update your host to 1.1. Our docs and samples have been updated.




Anuraj Parameswaran: View Components as Tag Helpers in ASP.NET Core

This post is about using View Components as Tag Helpers in ASP.NET Core, a feature available from ASP.NET Core 1.1 onwards. In ASP.NET Core, View Components are similar to partial views, but they are much more powerful. View components do not use model binding, and only depend on the data you provide when calling into them. View components can be used for things like a login panel, dynamic navigation menus, a tag cloud and so on.

A view component consists of two parts: the class (typically derived from ViewComponent) and the result it returns (typically a view). Similar to controllers, you can create a view component by adding the [ViewComponent] attribute, or by ending the class name with the suffix ViewComponent. A view component defines its logic in an InvokeAsync method that returns an IViewComponentResult. The recommended location for the views of a ViewComponent is Views/Shared/Components/<view_component_name>/. The default view name for a view component is Default, which means your view file will typically be named Default.cshtml. If you are using a different file name, you can specify it in the return statement.

In this post I am creating a simple hello world view component, which simply displays Hello World inside an H1 tag.

I have created a HelloWorldViewComponent class inside ViewComponents folder in my asp.net core project.

[ViewComponent(Name = "HelloWorld")]
public class HelloWorldViewComponent : ViewComponent
{
    public async Task<IViewComponentResult> InvokeAsync()
    { 
        return View();
    }
}

The view part of the ViewComponent contains only the HTML elements that display Hello World.
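
For reference, the view at Views/Shared/Components/HelloWorld/Default.cshtml can be as simple as:

<h1>Hello World</h1>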

In ASP.NET Core 1.0, you can invoke a ViewComponent by calling the Component.InvokeAsync() method. So if you want to invoke the HelloWorld ViewComponent, you can do it like this.

@await Component.InvokeAsync("HelloWorld")

This syntax looks out of place among the HTML. The ASP.NET Core team addressed this in ASP.NET Core 1.1 with the View Components as Tag Helpers feature, which lets you invoke a ViewComponent as a Tag Helper. This gives developers the same rich IntelliSense and editor support in the razor template editor as TagHelpers. With the Component.InvokeAsync syntax, there is no obvious way to add CSS classes or get tooltips to assist in configuring the component. Finally, the Tag Helper syntax keeps developers in HTML editing mode while using View Components.

To use ViewComponents as TagHelpers, you first need to register your view components as TagHelpers using the @addTagHelper directive (an example registration is shown at the end of this section). Then you can use the ViewComponent in your razor views like this.

<vc:hello-world></vc:hello-world>

As it is a ViewComponent, you need to prefix the element with vc:, similar to asp: in the built-in TagHelpers, and the class name HelloWorld becomes kebab-case: hello-world.
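
For completeness, the registration in _ViewImports.cshtml looks something like this, where the assembly name (here MyWebApp) is an assumption - use your own project's assembly:

@addTagHelper *, MyWebApp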

Happy Programming :)


Andrew Lock: Applying the RouteDataRequest CultureProvider globally with middleware as filters

Applying the RouteDataRequest CultureProvider globally with middleware as filters

In my last post I showed how your could use the middleware as filters feature of ASP.NET Core 1.1.0 along with the RouteDataRequestCultureProvider to set the culture of your application from the url. This allowed you to distinguish between different cultures from a url segment, for example www.microsoft.com/en-GB/ and www.microsoft.com/fr-FR/.

The main downside to that approach was that it required inserting an additional {culture} route segment into all your routes, so that the RouteDataRequestCultureProvider could extract the culture, and adding a MiddlewareFilter to every applicable controller. I only showed an example for when you are using attribute routing, but it would also be necessary to add {culture} to all your convention-based routes too (if you're using them).

In this post, I'll show the various ways you can configure your routes globally, so that all your urls will have a culture prefix by default.

Adding a global MiddlewareFilter

I'm going to be continuing where I left off in the last post, with a ValuesController I am using for displaying the current culture:

[Route("{culture}/[controller]")]
[MiddlewareFilter(typeof(LocalizationPipeline))]
public class ValuesController : Controller  
{
    [Route("ShowMeTheCulture")]
    public string GetCulture()
    {
        return $"CurrentCulture:{CultureInfo.CurrentCulture.Name}, CurrentUICulture:{CultureInfo.CurrentUICulture.Name}";
    }
}

Hitting the url /fr-FR/Values/ShowMeTheCulture for example would show that the current culture was set to fr-FR, which was our goal. The downside to using this approach more generally is that we would need to add the MiddlewareFilter to all our controllers, and add the {culture} url segment to all our routes. Ideally, we want to be able to define our routes and controllers just as we did before we were thinking about localisation.

The first of these problems is easily fixed by adding the MiddlewareFilter as a Global filter to MVC. You can do this by updating the call to AddMvc in ConfigureServices of your Startup class:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMvc(opts =>
    {
        opts.Filters.Add(new MiddlewareFilterAttribute(typeof(LocalizationPipeline)));
    });

    // other service configuration
}

By adding the filter here, we can remove the MiddlewareFilter attribute from our ValuesController; it will be automatically applied to all our action methods. That's the first step done!

Using a convention to globally add a culture prefix to attribute routes

Now we've dealt with that, we can take a look at our RouteAttribute based routes. We want to avoid having to explicitly add the {culture} segment to every route we define.

Luckily, in ASP.NET Core MVC, you can register custom conventions on application startup which specify additional conventions that can be applied to the url. For example, you could ensure all your url paths are prefixed with /api, or you could specify the current environment (live/test) in the url, or rename your action methods completely.

In this case, we are going to prefix all our attribute routes with {culture} so we don't have to do it manually. I'm not going to go extensively into how the convention works, so I strongly suggest checking out the above links for more details!

First we create our convention by implementing IApplicationModelConvention:

public class LocalizationConvention : IApplicationModelConvention  
{
    public void Apply(ApplicationModel application)
    {
        var culturePrefix = new AttributeRouteModel(new RouteAttribute("{culture}"));

        foreach (var controller in application.Controllers)
        {
            var matchedSelectors = controller.Selectors.Where(x => x.AttributeRouteModel != null).ToList();
            if (matchedSelectors.Any())
            {
                foreach (var selectorModel in matchedSelectors)
                {
                    selectorModel.AttributeRouteModel = AttributeRouteModel.CombineAttributeRouteModel(culturePrefix,
                        selectorModel.AttributeRouteModel);
                }
            }

            var unmatchedSelectors = controller.Selectors.Where(x => x.AttributeRouteModel == null).ToList();
            if (unmatchedSelectors.Any())
            {
                foreach (var selectorModel in unmatchedSelectors)
                {
                    selectorModel.AttributeRouteModel = culturePrefix;
                }
            }
        }
    }
}

This convention is pretty much identical to the one presented by Filip from StrathWeb. It works by looping through all the Controllers in the application, and checking whether the controller has an attribute route. If it does, it combines the existing route template with the {culture} prefix; otherwise it adds a new one.

After this convention has run, every controller should effectively have a RouteAttribute that is prefixed with {culture}. The next thing to do is to let MVC know about our new convention. We can do this by adding it in the call to AddMvc:

public void ConfigureServices(IServiceCollection services)  
{
    // Add framework services.
    services.AddMvc(opts =>
    {
        opts.Conventions.Insert(0, new LocalizationConvention());
        opts.Filters.Add(new MiddlewareFilterAttribute(typeof(LocalizationPipeline)));
    });
}

With that in place, we can update our ValuesController to remove the {culture} prefix from the RouteAttribute, and can delete the MiddlewareFilterAttribute entirely:

[Route("[controller]")]
public class ValuesController : Controller  
{
    //overall route /{culture}/Values/ShowMeTheCulture
    [Route("ShowMeTheCulture")]
    public string GetCulture()
    {
        return $"CurrentCulture:{CultureInfo.CurrentCulture.Name}, CurrentUICulture:{CultureInfo.CurrentUICulture.Name}";
    }
}

And we're done! We don't need to reference the {culture} directly in our route attributes, but our urls will still require it:

Applying the RouteDataRequest CultureProvider globally with middleware as filters

Caveats

There's a couple of points to be aware of with this method. First off, it's important to understand we have replaced the previous route; so the previous route of /Values/ShowMeTheCulture is no longer accessible - you must provide the culture, just as if you had added the {culture} segment to the RouteAttribute directly:

Applying the RouteDataRequest CultureProvider globally with middleware as filters

The other point to be aware of, is that we specified the {culture} prefix on the controller RouteAttribute in the convention. That means using an action RouteAttribute that specifies a path relative to root (which ignores the controller RouteAttribute) will not contain the {culture} prefix.

For example using [Route("~/ShowMeTheCulture")] on an action, will correspond to the url /ShowMeTheCulture - not /{culture}/ShowMeTheCulture. This may or may not be desirable for your use case, but it's likely you want these routes to be localised too, so it's worth keeping an eye out for. There's probably a different way of writing the convention to handle this, but I haven't dug into it too far yet, so please let me know below if you know a way!

Updating the default route handler

We have covered adding a convention for attribute routing, but what if you're using global route handling conventions? In the default templates, ASP.NET Core MVC is configured with the following route:

app.UseMvc(routes =>  
{
    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}");
});

This allows you to create controllers without using a RouteAttribute. Instead, the controller and action will be inferred. This lets you create Controllers like this:

public class HomeController : Controller
{
    public string Index()
    {
        return $"CurrentCulture:{CultureInfo.CurrentCulture.Name}, CurrentUICulture:{CultureInfo.CurrentUICulture.Name}";
    }
}

The default values in our routing convention mean that this action method will be hit for the urls /Home/Index/, /Home/ and just /.

If we update the default convention with a {culture} segment, then we can continue to have this behaviour, but with the culture prefixed to the url, so that the urls map to /en-GB/Home/Index/ or /fr-FR/ for example. It is as simple as updating the template in UseMvc:

app.UseMvc(routes =>  
{
    routes.MapRoute(
        name: "default",
        template: "{culture}/{controller=Home}/{action=Index}/{id?}");
});

Now when we browse our website, we will get the desired result:

Applying the RouteDataRequest CultureProvider globally with middleware as filters

Caveats

The biggest caveat here is that the IApplicationModelConvention we added previously will break this global route by adding a RouteAttribute to controllers that do not have one. Generally speaking, if you're using an IApplicationModelConvention I'd recommend using either the global routes or RouteAttributes rather than trying to combine both. Again, there's probably a way to write the convention to work with both attribute and global routes but I haven't dug too deep yet.

Also, as before, you won't be able to access the controller at /Home/Index anymore - you always have to specify the culture in the url.

Other considerations

With both of these routes, one major issue is that you always need to specify the culture in the url. This may be fine for an API, but for a website this could give a poor user experience - hitting the base url / would return a 404 with this setup!

It's important to setup additional routes to handle this, most likely redirecting to a route containing the default culture. For example, if you hit the url www.microsoft.com/ you will be redirected to www.microsoft.com/en-GB/ or something similar. This may require adding additional conventions or global routes depending on your setup. I will cover some approaches for doing this in a couple of upcoming posts.

Summary

This post led on from my previous post in which I showed how you could use the middleware as filters feature of ASP.NET Core to set the culture for a request using a URL segment. This post showed how to extend that setup to avoid having to add explicit {culture} segments to all of your controllers by adding a global convention (for route attributes) or by amending your global route configuration.

There are still a number of limitations to be aware of in this setup as I highlighted, but it brings you closer to a complete url localisation solution!


Ben Foster: Bare metal APIs with ASP.NET Core MVC

ASP.NET Core MVC now provides a true "one asp.net" framework that can be used for building both APIs and websites. But what if you only want to build an API?

Most of the ASP.NET Core MVC tutorials I've seen advise using the Microsoft.AspNetCore.Mvc package. While this does indeed give you what you need to build APIs, it also gives you a lot more:

  • Microsoft.AspNetCore.Mvc.ApiExplorer
  • Microsoft.AspNetCore.Mvc.Cors
  • Microsoft.AspNetCore.Mvc.DataAnnotations
  • Microsoft.AspNetCore.Mvc.Formatters.Json
  • Microsoft.AspNetCore.Mvc.Localization
  • Microsoft.AspNetCore.Mvc.Razor
  • Microsoft.AspNetCore.Mvc.TagHelpers
  • Microsoft.AspNetCore.Mvc.ViewFeatures
  • Microsoft.Extensions.Caching.Memory
  • Microsoft.Extensions.DependencyInjection
  • NETStandard.Library

A few of these packages are still needed if you're building APIs but many are specific to building full websites.

After installing the above package we typically register MVC in Startup.ConfigureServices like so:

services.AddMvc();

This code is responsible for wiring up the necessary MVC services with the application container. Let's look at what this actually does:

public static IMvcBuilder AddMvc(this IServiceCollection services)
{
    var builder = services.AddMvcCore();

    builder.AddApiExplorer();
    builder.AddAuthorization();

    AddDefaultFrameworkParts(builder.PartManager);

    // Order added affects options setup order

    // Default framework order
    builder.AddFormatterMappings();
    builder.AddViews();
    builder.AddRazorViewEngine();
    builder.AddCacheTagHelper();

    // +1 order
    builder.AddDataAnnotations(); // +1 order

    // +10 order
    builder.AddJsonFormatters();

    builder.AddCors();

    return new MvcBuilder(builder.Services, builder.PartManager);
}

Again most of the service registration refers to the components used for rendering web pages.

Bare Metal APIs

It turns out that the ASP.NET team anticipated that developers may only want to build APIs and nothing else, so they gave us the ability to do just that.

First of all, rather than installing Microsoft.AspNetCore.Mvc, only install Microsoft.AspNetCore.Mvc.Core. This will give you the bare MVC middleware (routing, controllers, HTTP results) and not a lot else.

In order to process JSON requests and return JSON responses we also need the Microsoft.AspNetCore.Mvc.Formatters.Json package.

Then, to add both the core MVC middleware and JSON formatter, add the following code to ConfigureServices:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvcCore()
        .AddJsonFormatters();
}

The final thing to do is to change your controllers to derive from ControllerBase instead of Controller. This provides a base class for MVC controllers without any View support.
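
As a minimal sketch of what such a controller looks like (the names here are illustrative):

using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class ValuesController : ControllerBase
{
    // JSON is produced by the formatter registered via AddJsonFormatters()
    [HttpGet]
    public IActionResult Get()
    {
        return Ok(new[] { "value1", "value2" });
    }
}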

Looking at the final list of packages in project.json, you can see we really don't need that much after all, especially given most of these are related to configuration and logging:

"Microsoft.AspNetCore.Mvc.Core": "1.1.0",
"Microsoft.AspNetCore.Mvc.Formatters.Json": "1.1.0",
"Microsoft.AspNetCore.Server.IISIntegration": "1.1.0",
"Microsoft.AspNetCore.Server.Kestrel": "1.1.0",
"Microsoft.Extensions.Configuration.EnvironmentVariables": "1.1.0",
"Microsoft.Extensions.Configuration.FileExtensions": "1.1.0",
"Microsoft.Extensions.Configuration.Json": "1.1.0",
"Microsoft.Extensions.Configuration.CommandLine": "1.1.0",
"Microsoft.Extensions.Logging": "1.1.0",
"Microsoft.Extensions.Logging.Console": "1.1.0",
"Microsoft.Extensions.Logging.Debug": "1.1.0"

You can find the complete code on GitHub.


Anuraj Parameswaran: Using Automapper in ASP.NET Core project

This post is about using AutoMapper in an ASP.NET Core project. AutoMapper is an object-object mapper which solves the problem of manually mapping each property of a class to the same properties of another class. Without AutoMapper, if you want to map the properties of one object to another, you have to assign each property manually, which is tedious and error-prone when an object has a lot of properties. If you are an MVC developer, most of the time you manually map model objects to viewmodel objects. AutoMapper helps to do this in a clean and readable way. To use AutoMapper, you first set up the mappings between classes using the CreateMap method; to actually map one object to another, you use the Map method.

In ASP.NET Core, you first need to include AutoMapper in the project.json file as a dependency. Once you have done that, run dotnet restore to download AutoMapper. With AutoMapper added to the project, you need to configure the mappings. You can do this either directly in the ConfigureServices() method of the Startup class, or by creating a profile class that contains all the mappings and registering it in ConfigureServices(). In this post I am mapping the User and UserViewModel classes. Here are the Model and ViewModel classes.

public class User
{
    [Key]
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
    public string Password { get; set; }
    public DateTime CreatedOn { get; set; } = DateTime.UtcNow;
}

public class UserViewModel
{
    [Required]
    public string Name { get; set; }
    [Required, DataType(DataType.EmailAddress)]
    public string Email { get; set; }
    [Required, DataType(DataType.Password)]
    public string Password { get; set; }
    [Required, DataType(DataType.Password), Compare("Password")]
    public string ConfirmPassword { get; set; }
    [Required]
    public bool AgreedToTerms { get; set; }
}

And here is the code in ConfigureServices method.

var config = new AutoMapper.MapperConfiguration(cfg =>
{
    cfg.CreateMap<UserViewModel, User>();
});

var mapper = config.CreateMapper();

You can map between the classes using the Map method of the mapper object. To use the mapper object in controllers, register the instance with the container (services.AddSingleton(mapper), as in the Startup example below) and inject it using ASP.NET Core dependency injection.

private readonly IMapper _mapper;
public HomeController(IMapper mapper)
{
    _mapper = mapper;
}
public IActionResult Index(UserViewModel uservm)
{
    if (ModelState.IsValid)
    {
        var user = _mapper.Map<User>(uservm);
    }

    return View();
}

This is the screenshot of mapping using Automapper.

AutoMapper running in ASP.NET Core Web app

Here is the profile class implementation, which is quite similar to the previous implementation, except that all the mappings live inside this class.

public class AutoMapperProfileConfiguration : Profile
{
    public AutoMapperProfileConfiguration()
    : this("MyProfile")
    {
    }
    protected AutoMapperProfileConfiguration(string profileName)
    : base(profileName)
    {
        CreateMap<UserViewModel, User>();
    }
}

And you can use it like this in the Startup class.

public void ConfigureServices(IServiceCollection services)
{
    var config = new AutoMapper.MapperConfiguration(cfg =>
    {
        cfg.AddProfile(new AutoMapperProfileConfiguration());
    });

    var mapper = config.CreateMapper();
    services.AddSingleton(mapper);
    services.AddMvc();
}

And, as mentioned, you need to add the AutoMapper package to project.json as a dependency.
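
For example, the dependency entry might look like this (the version number is an assumption - use the latest stable release):

"dependencies": {
  "AutoMapper": "5.2.0"
}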

Happy Programming :)


Dominick Baier: New in IdentityServer4: Resource-based Configuration

For RC4 we decided to re-design our configuration object model for resources (formerly known as scopes).

I know, I know – we are not supposed to make fundamental breaking changes once reaching the RC status – but hey – we kind of had our “DNX” moment, and realized that we either change this now – or never.

Why did we do that?
We spent the last couple of years explaining OpenID Connect and OAuth 2.0 based architectures to hundreds of students in training classes, attendees at conferences, fellow developers, and customers from all types of industries.

While most concepts are pretty clear and make total sense – scopes were the most confusing part for most people. The abstract nature of a scope as well as the fact that the term scope has a somewhat different meaning in OpenID Connect and OAuth 2.0, made this concept really hard to grasp.

Maybe it’s also partly our fault that we stayed very close to the spec-speak with our object model and abstraction level, and that we forced that concept onto every user of IdentityServer.

Long story short – every time I needed to explain scope, I said something like “A scope is a resource a client wants to access” … and “there are two types of scopes: identity related and APIs…”.

This got us thinking if it would make more sense to introduce the notion of resources in IdentityServer, and get rid of scopes.

What did we do?
Before RC4 – our configuration object model had three main parts: users, clients, and scopes (and there were two types of scopes – identity and resource – and some overlapping settings between them).

Starting with RC4 – the configuration model does not have scope anymore as a top-level concept, but rather identity resources and API resources.

terminology

We think this is a more natural way (and language) to model a typical token-based system.

From our new docs:

User
A user is a human that is using a registered client to access resources.

Client
A client is a piece of software that requests tokens from IdentityServer – either for authenticating a user (requesting an identity token)
or for accessing a resource (requesting an access token). A client must be first registered with IdentityServer before it can request tokens.

Resources
Resources are something you want to protect with IdentityServer – either identity data of your users (like user id, name, email..), or APIs.

Enough talk, show me the code!
Pre-RC4, you would have used a scope store to return a flat list of scopes. Now the new resource store deals with two different resource types: IdentityResource and ApiResource.

Let’s start with identity – standard scopes used to be defined like this:

public static IEnumerable<Scope> GetScopes()
{
    return new List<Scope>
    {
        StandardScopes.OpenId,
        StandardScopes.Profile
    };
}

..and now:

public static IEnumerable<IdentityResource> GetIdentityResources()
{
    return new List<IdentityResource>
    {
        new IdentityResources.OpenId(),
        new IdentityResources.Profile()
    };
}

Not very different. Now let’s define a custom identity resource with associated claims:

var customerProfile = new IdentityResource(
    name:        "profile.customer",
    displayName: "Customer profile",
    claimTypes:  new[] { "name", "status", "location" });

This is all that’s needed for 90% of all identity resources you will ever define. If you need to tweak details, you can set various properties on the IdentityResource class.
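
For instance, you might tweak the consent behaviour of the customerProfile resource defined above (a sketch - Required and Emphasize are properties on the resource model, but verify the exact names against your IdentityServer4 version):

// assumption: these properties exist on IdentityResource in this release
customerProfile.Required = true;   // users cannot deselect the scope on the consent screen
customerProfile.Emphasize = true;  // scope is highlighted on the consent screen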

Let’s have a look at the API resources. You used to define a resource-scope like this:

public static IEnumerable<Scope> GetScopes()
{
    return new List<Scope>
    {
        new Scope
        {
            Name = "api1",
            DisplayName = "My API #1",
 
            Type = ScopeType.Resource
        }
    };
}

..and the new way:

public static IEnumerable<ApiResource> GetApis()
{
    return new[]
    {
        new ApiResource("api1""My API #1")
    };
}

Again – for the simple case there is not a huge difference. The ApiResource object model starts to become more powerful when you have advanced requirements like APIs with multiple scopes (and maybe different claims based on the scope) and support for introspection, e.g.:

public static IEnumerable<ApiResource> GetApis()
{
    return new[]
    {
        new ApiResource
        {
            Name = "calendar",
 
            // secret for introspection endpoint
            ApiSecrets =
            {
                new Secret("secret".Sha256())
            },
 
            // claims to include in access token
            UserClaims =
            {
                JwtClaimTypes.Name,
                JwtClaimTypes.Email
            },
 
            // API has multiple scopes
            Scopes =
            {
                new Scope
                {
                    Name = "calendar.read_only",
                    DisplayName = "Read only access to the calendar"
                },
                new Scope
                {
                    Name = "calendar.full_access",
                    DisplayName = "Full access to the calendar",
                    Emphasize = true,
 
                    // include additional claim for that scope
                    UserClaims =
                    {
                        "status"
                    }
                }
            }
        }
    };
}

IOW – We reversed the configuration approach, and you now model APIs (which might have scopes) – and not scopes (that happen to represent an API).

We like the new model much better, as it reflects how you architect a token-based system. We hope you like it too – and sorry for moving the cheese ;)

As always – give us feedback on the issue tracker. RTM is very close.




Andrew Lock: Url culture provider using middleware as filters in ASP.NET Core 1.1.0

Url culture provider using middleware as filters in ASP.NET Core 1.1.0

In this post, I show how you can use the 'middleware as filters' feature of ASP.NET Core 1.1.0 to easily add request localisation based on url segments.

The end goal we are aiming for is to easily specify the culture in the url, similar to the way Microsoft handles it on their public website. If you navigate to https://microsoft.com, then you'll be redirected to https://www.microsoft.com/en-gb/ (or similar for your culture).

Url culture provider using middleware as filters in ASP.NET Core 1.1.0

Using URL parameters is one of the approaches to localisation Google suggests as it is more user and SEO friendly than some of the other options.

Localisation in ASP.NET Core 1.0.0

The first step to localising your application is to associate the current request with a culture. Once you have that, you can customise the strings in your request to match the culture as required.

Localisation is already perfectly possible in ASP.NET Core 1.0.0 (and the subsequent patch versions). You can localise your application using the RequestLocalizationMiddleware, and you can use a variety of providers to obtain the culture from cookies, querystrings or the Accept-Language header out of the box.

It is also perfectly possible to write your own provider to obtain the culture from somewhere else, from the url for example. You could use the RoutingMiddleware to fork the pipeline, and extract a culture segment from it, and then run your MVC pipeline inside that fork, but you would still need to be sure to handle the other fork, where the cultured url pattern is not matched and a culture can't be extracted.

While possible, this is a little bit messy, and doesn't necessarily correspond to the desired behaviour. Luckily, in ASP.NET Core 1.1.0, Microsoft have added two features that make the process far simpler: middleware as filters, and the RouteDataRequestCultureProvider.

In my previous post, I looked at the middleware as filters feature in detail, showing how it is implemented; in this post I'll show how you can put the feature to use.

The other piece of the puzzle, the RouteDataRequestCultureProvider, does exactly what you would expect - it attempts to identify the current culture based on RouteData segments. You can use this as a drop-in provider if you are using the RoutingMiddleware approach mentioned previously, but I will show how to use it in the MVC pipeline in combination with the middleware as filters feature. To see how the provider can be used in a normal middleware pipeline, check out the tests in the localisation repository on GitHub.

Setting up the project

As I mentioned, these features are all available in the ASP.NET Core 1.1.0 release, so you will need to install the preview version of the .NET core framework. Just follow the instructions in the announcement blog post.

After installing (and fighting with a couple of issues), I started by scaffolding a new web project using

dotnet new -t web  

which creates a new MVC web application. For simplicity I stripped out most of the web pieces and added a single ValuesController that simply writes out the current culture when you hit /Values/ShowMeTheCulture:

public class ValuesController : Controller  
{
    [Route("ShowMeTheCulture")]
    public string GetCulture()
    {
        return $"CurrentCulture:{CultureInfo.CurrentCulture.Name}, CurrentUICulture:{CultureInfo.CurrentUICulture.Name}";
    }
}

Adding localisation

The next step was to add the necessary localisation services and options to the project. This is the same as for version 1.0.0 so you can follow the same steps from the docs or my previous posts. The only difference is that we will add a new RequestCultureProvider.

First, add the Microsoft.AspNetCore.Localization.Routing package to your project.json. You may need to update some other packages too, to ensure the versions align. Note that not all the packages will necessarily be 1.1.0; it depends on the latest version of each package that has shipped.

{
  "dependencies": {
    "Microsoft.NETCore.App": {
      "version": "1.1.0",
      "type": "platform"
    },
    "Microsoft.AspNetCore.Mvc": "1.1.0",
    "Microsoft.AspNetCore.Routing": "1.1.0",
    "Microsoft.AspNetCore.Server.Kestrel": "1.0.1",
    "Microsoft.Extensions.Configuration.EnvironmentVariables": "1.0.0",
    "Microsoft.Extensions.Configuration.Json": "1.0.0",
    "Microsoft.Extensions.Options": "1.1.0",
    "Microsoft.Extensions.Logging": "1.0.0",
    "Microsoft.Extensions.Logging.Console": "1.0.0",
    "Microsoft.Extensions.Logging.Debug": "1.0.0",
    "Microsoft.AspNetCore.Localization.Routing": "1.1.0"
  },

You can now configure the RequestLocalizationOptions in the ConfigureServices method of your Startup class:

public void ConfigureServices(IServiceCollection services)  
{
    // Add framework services.
    services.AddMvc();

    var supportedCultures = new[]
    {
        new CultureInfo("en-US"),
        new CultureInfo("en-GB"),
        new CultureInfo("de"),
        new CultureInfo("fr-FR"),
    };

    var options = new RequestLocalizationOptions()
    {
        DefaultRequestCulture = new RequestCulture(culture: "en-GB", uiCulture: "en-GB"),
        SupportedCultures = supportedCultures,
        SupportedUICultures = supportedCultures
    };
    options.RequestCultureProviders = new[] 
    { 
         new RouteDataRequestCultureProvider() { Options = options } 
    };

    services.AddSingleton(options);
}

This is all pretty standard up to this point. I have added the cultures I support, and defined the default culture to be en-GB. Finally, I have added the RouteDataRequestCultureProvider as the only provider I will support at this point, and registered the options in the DI container.

Adding localisation to the urls

Now we've set up our localisation options, we just need to actually try and extract the culture from the url. As a reminder, we are trying to add a culture prefix to our urls, so that /controller/action becomes /en-gb/controller/action or /fr/controller/action. There are a number of ways to achieve this, but if you are using attribute routing, one possibility is to add a {culture} routing parameter to your route:

[Route("{culture}/[controller]")]
public class ValuesController : Controller  
{
    [Route("ShowMeTheCulture")]
    public string GetCulture()
    {
        return $"CurrentCulture:{CultureInfo.CurrentCulture.Name}, CurrentUICulture:{CultureInfo.CurrentUICulture.Name}";
    }
}

With the addition of this route, we can now hit the urls defined above, but we're not yet doing anything with the {culture} segment, so all our requests use the default culture:

Url culture provider using middleware as filters in ASP.NET Core 1.1.0

To actually convert that value to a culture we need the middleware as filters feature.

Adding localisation using a MiddlewareFilter

In order to extract the culture from the RouteData we need to run the RequestLocalizationMiddleware, which will use the RouteDataRequestCultureProvider. However, in this case, we can't run it as part of the normal middleware pipeline.

Middleware can only use data that has been added by preceding components in the pipeline, but we need access to routing information (the RouteData segments), and routing doesn't happen until the MVC middleware runs. Therefore, we need request localisation to happen after action selection, but before the action executes; in other words, in the MVC filter pipeline.

To use a MiddlewareFilter, you first need to create a pipeline. This is like a mini Startup file, in which you Configure an IApplicationBuilder to define the middleware that should run as part of the pipeline. You can configure several middleware to run in this way.

In this case, the pipeline is very simple, as we literally just need to run the RequestLocalizationMiddleware:

public class LocalizationPipeline  
{
    public void Configure(IApplicationBuilder app, RequestLocalizationOptions options)
    {
        app.UseRequestLocalization(options);
    }
}

We can then apply this pipeline using a MiddlewareFilterAttribute to our ValuesController:

[Route("{culture}/[controller]")]
[MiddlewareFilter(typeof(LocalizationPipeline))]
public class ValuesController : Controller  
{
    [Route("ShowMeTheCulture")]
    public string GetCulture()
    {
        return $"CurrentCulture:{CultureInfo.CurrentCulture.Name}, CurrentUICulture:{CultureInfo.CurrentUICulture.Name}";
    }
}

Now if we run the application, you can see the culture is resolved correctly from the url:

Url culture provider using middleware as filters in ASP.NET Core 1.1.0

And there you have it. You can now localise your application using urls instead of querystrings or cookie values. There is obviously more to getting a working solution together here. For example you need to provide an obvious route for the user to easily switch cultures. You also need to consider how this will affect your existing routes, as clearly your urls have changed!

Optional RouteDataRequestCultureProvider configuration

By default, the RouteDataRequestCultureProvider will look for a RouteData key with the name culture when determining the current culture. It also looks for a ui-culture key for setting the UI culture, but if that's missing then it will fall back to culture, as you can see in the previous screenshots. If we tweak the ValuesController's RouteAttribute to be

Route("{culture}/{ui-culture}/[controller]")]  

then we can specify the two separately:

Url culture provider using middleware as filters in ASP.NET Core 1.1.0

When configuring the provider, you can change the RouteData keys to something other than culture and ui-culture if you prefer. It will have no effect on the final result; it will just change the route tokens that are used to identify the culture. For example, we could change the culture RouteData parameter to be lang when configuring the provider:

options.RequestCultureProviders = new[]
{
    new RouteDataRequestCultureProvider()
    {
        RouteDataStringKey = "lang",
        Options = options
    }
};

We could then write our attribute routes as

Route("{lang}/[controller]")]  

Summary

In this post I showed how you could use the url to localise your application by making use of the MiddlewareFilter and RouteDataRequestCultureProvider that are provided in ASP.NET Core 1.1.0. I will write a couple more posts on using this approach in practical applications.

If you're interested in how the ASP.NET team implemented the feature, then check out my previous post. You can also see an example usage on the announcement page and on Hisham's blog.


Damien Bowden: Contributing to OSS projects on gitHub using fork and upstreams

This article is a simple guideline on how you can contribute to gitHub OSS projects using a fork and an upstream. This is not the only way to do it. git Extensions is used for this demo, but any git client can be used. In this example, aspnet/AspLabs from Microsoft is used as the target repository.

So you have something to contribute, cool, that’s the hard part.

Before you can contribute, you need to create a fork of the repository you want to contribute to. Open the project on gitHub, and click the fork button in the top right corner.

githuboss_01

Now clone your forked repository

githuboss_02

In git Extensions, click the clone repository and select a folder somewhere on your computer.

githuboss_03

Now you have a master branch and also a server master branch of your forked repository. The next step is to configure the remote upstream branch. This is required to synchronize with the parent repository, as you might not be the only person contributing to the repository. Click the Repository menu in git Extensions and add a new remote repository with the url from the parent repository.

githuboss_04

Now you can pull from the upstream repository: you pull from the upstream/master branch to your local master branch. Because of this, you should NEVER work on your master branch. You can also configure your git to rebase the local master with the upstream master if preferred.

githuboss_05

Once you have pulled from the upstream, you can push to your remote master, ie the forked master. Just to mention it again, NEVER WORK ON YOUR LOCAL FORKED MASTER, and you will save yourself hassle.
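
If you prefer the command line to git Extensions, the same setup looks roughly like this (assuming the aspnet/AspLabs example and a gitHub username of damienbod):

git clone https://github.com/damienbod/AspLabs.git
cd AspLabs
git remote add upstream https://github.com/aspnet/AspLabs.git

# synchronize the local master with the parent repository, then update the fork
git checkout master
git fetch upstream
git merge upstream/master   # or: git rebase upstream/master
git push origin master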

Now you’re ready to work. Create a new branch. A good recommendation is to use the following pattern for naming:

<gitHub username>/<reason-for-the-branch-in-lowercase>

Here’s an example:

damienbod/add-urls-check

Using your gitHub username makes it easier for the person reviewing the pull request.

When your work is finished on the branch, you are ready to create a pull request. Go to the parent repository and click on the ‘New pull request’ button:

githuboss_06

Choose your working branch and select the target branch on the parent repository, usually the master.

NOTE: if your branch was created from an older master commit than the actual master on the parent, you need to pull from the upstream and rebase your branch to the latest commit. This is easy as you do not work on the local master.

If you are contributing to an aspnet repository, you will need to sign an electronic agreement before you can contribute.

If you are working together with a maintainer of the repository, or your pull request is the result of an issue, you could add a comment with the github name of the person that will review and merge, so that he or she will be notified that you are ready. They will receive a notification on gitHub.

Now just wait, and fix the issues as required. Once the pull request is merged, you need to pull from the upstream on your local forked repository, and rebase if necessary, to continue with your next pull request.

And who knows, you might even get a coin from Microsoft.

Thanks to Andrew Stanton-Nurse for his tips.

I am also grateful for tips from anyone on how to improve this guideline.

Links:

http://asp.net-hacker.rocks/2016/12/07/contributing-to-oss-projects-on-github-using-fork-and-upstreams.html

https://gitextensions.github.io/

gist from CristinaSolana



Andrew Lock: Exploring Middleware as MVC Filters in ASP.NET Core 1.1

Exploring Middleware as MVC Filters in ASP.NET Core 1.1

One of the new features released in ASP.NET Core 1.1 is the ability to use middleware as an MVC Filter. In this post I'll take a look at how the feature is implemented by peering into the source code, rather than focusing on how you can use it. In the next post I'll look at how you can use the feature to allow greater code reuse.

Middleware vs Filters

The first step is to consider why you would choose to use middleware over filters, or vice versa. Both are designed to handle cross-cutting concerns of your application and both are used in a 'pipeline', so in some cases you could choose either successfully.

The main difference between them is their scope. Filters are a part of MVC, so they are scoped entirely to the MVC middleware. Middleware only has access to the HttpContext and anything added by preceding middleware. In contrast, filters have access to the wider MVC context, so can access routing data and model binding information for example.

Generally speaking, if you have a cross-cutting concern that is independent of MVC, then using middleware makes sense; if your cross-cutting concern relies on MVC concepts, or must run midway through the MVC pipeline, then filters make sense.

Exploring Middleware as MVC Filters in ASP.NET Core 1.1

So why would you want to use middleware as filters then? A couple of reasons come to mind for me.

First, you have some middleware that already does what you want, but you now need the behaviour to occur midway through the MVC middleware. You could rewrite your middleware as a filter, but it would be nicer to just be able to plug it in as-is. This is especially true if you are using a piece of third-party middleware and you don't have access to the source code.

Second, you have functionality that needs to logically run as both middleware and a filter. In that case you can just have the one implementation that is used in both places.

Using the MiddlewareFilterAttribute

In the announcement post, you will find an example of how to use middleware as filters. Here I'll show a cut-down example, in which I want to run MyCustomMiddleware when a specific MVC action is called.

There are two parts to the process, the first is to create a middleware pipeline object:

public class MyPipeline  
{
    public void Configure(IApplicationBuilder applicationBuilder) 
    {
        var options = new MyCustomMiddlewareOptions(); // any additional configuration (hypothetical options type)

        applicationBuilder.UseMyCustomMiddleware(options);
    }
}

and the second is to use an instance of the MiddlewareFilterAttribute on an action or a controller, wherever it is needed.

[MiddlewareFilter(typeof(MyPipeline))]
public IActionResult ActionThatNeedsCustomfilter()  
{
    return View();
}

With this setup, MyCustomMiddleware will run each time the action method ActionThatNeedsCustomfilter is called.

It's worth noting that the MiddlewareFilterAttribute on the action method does not take the type of the middleware component itself (MyCustomMiddleware); it takes the type of a pipeline class which configures that middleware. Don't worry about this too much, as we'll come back to it again later.

For the rest of this post, I'll dip into the MVC repository and show how the feature is implemented.

The MiddlewareFilterAttribute

As we've already seen, the middleware filter feature starts with the MiddlewareFilterAttribute applied to a controller or action method. This attribute implements the IFilterFactory interface, which is useful for injecting services into MVC filters. The implementation of this interface just requires one method, CreateInstance(IServiceProvider provider):

public class MiddlewareFilterAttribute : Attribute, IFilterFactory, IOrderedFilter  
{
    public MiddlewareFilterAttribute(Type configurationType)
    {
        ConfigurationType = configurationType;
    }

    public Type ConfigurationType { get; }

    public IFilterMetadata CreateInstance(IServiceProvider serviceProvider)
    {
        var middlewarePipelineService = serviceProvider.GetRequiredService<MiddlewareFilterBuilder>();
        var pipeline = middlewarePipelineService.GetPipeline(ConfigurationType);

        return new MiddlewareFilter(pipeline);
    }
}

The implementation of the attribute is fairly self-explanatory. First, a MiddlewareFilterBuilder object is obtained from the dependency injection container. Next, GetPipeline is called on the builder, passing in the ConfigurationType that was supplied when creating the attribute (MyPipeline in the previous example).

GetPipeline returns a RequestDelegate, which represents a middleware pipeline that takes in an HttpContext and returns a Task:

public delegate Task RequestDelegate(HttpContext context);  

Finally, the delegate is used to create a new MiddlewareFilter, which is returned by the method. This pattern of using an IFilterFactory attribute to create an actual filter instance is very common in the MVC code base, and works around the problems of service injection into attributes, as well as ensuring each component sticks to the single responsibility principle.

Building the pipeline with the MiddlewareFilterBuilder

In the last snippet we saw the MiddlewareFilterBuilder being used to turn our MyPipeline type into an actual, runnable piece of middleware. Taking a look inside the MiddlewareFilterBuilder, you will see an interesting use of Lazy<> with a ConcurrentDictionary, ensuring that each pipeline Type passed to the service is only ever created once. This is the usage I wrote about in my last post.

The call to GetPipeline initialises a pipeline for the provided type using the BuildPipeline method, shown below in abbreviated form:

private RequestDelegate BuildPipeline(Type middlewarePipelineProviderType)  
{
    var nestedAppBuilder = ApplicationBuilder.New();

    // Get the 'Configure' method from the user provided type.
    var configureDelegate = _configurationProvider.CreateConfigureDelegate(middlewarePipelineProviderType);
    configureDelegate(nestedAppBuilder);

    nestedAppBuilder.Run(async (httpContext) =>
    {
        // additional end-middleware, covered later
    });

    return nestedAppBuilder.Build();
}

This method creates a new IApplicationBuilder and uses the custom pipeline class supplied earlier (MyPipeline) to configure a middleware pipeline on it. It then adds an additional piece of 'end-middleware' at the end of the pipeline, which I'll come back to later, and builds the pipeline into a RequestDelegate.

Creating the pipeline from MyPipeline is performed by a MiddlewareFilterConfigurationProvider, which attempts to find an appropriate Configure method on it.

You can think of the MyPipeline class as a mini-Startup class. Just like the Startup class you need a Configure method to add middleware to an IApplicationBuilder, and just like in Startup, you can inject additional services into the method. One of the big differences is that you can't have environment-specific Configure methods like ConfigureDevelopment here - your class must have one, and only one, configuration method called Configure.
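As a sketch, a pipeline class that injects a service into its Configure method might look like this (IMyService, MyCustomMiddlewareOptions and GetSetting are hypothetical names used for illustration, not part of the MVC API):

public class MyPipeline
{
    // Configure is discovered by the MiddlewareFilterConfigurationProvider.
    // Services registered in the DI container can be injected alongside
    // the IApplicationBuilder, just like in Startup.Configure.
    public void Configure(IApplicationBuilder applicationBuilder, IMyService myService)
    {
        var options = new MyCustomMiddlewareOptions
        {
            Setting = myService.GetSetting() // hypothetical configuration value
        };
        applicationBuilder.UseMyCustomMiddleware(options);
    }
}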

The MiddlewareFilter

So just to recap, you add a MiddlewareFilterAttribute to one of your action methods or controllers, passing in a pipeline to use as a filter, e.g. MyPipeline. This uses a MiddlewareFilterBuilder to create a RequestDelegate, which in turn is used to create a MiddlewareFilter. This is the object actually added to the MVC filter pipeline.

The MiddlewareFilter implements IAsyncResourceFilter, so it runs early in the filter pipeline - after AuthorizationFilters have run, but before Model Binding and Action filters. This allows you to potentially short-circuit requests completely should you need to.

The MiddlewareFilter implements the single required method, OnResourceExecutionAsync. The execution is very simple. First, it stores the filter's ResourceExecutingContext, along with the ResourceExecutionDelegate pointing to the next filter to execute, in a new MiddlewareFilterFeature. This feature is then stored against the HttpContext itself, so it can be accessed elsewhere. The middleware pipeline we created previously is then invoked using the HttpContext.

public class MiddlewareFilter : IAsyncResourceFilter  
{
    private readonly RequestDelegate _middlewarePipeline;
    public MiddlewareFilter(RequestDelegate middlewarePipeline)
    {
        _middlewarePipeline = middlewarePipeline;
    }

    public Task OnResourceExecutionAsync(ResourceExecutingContext context, ResourceExecutionDelegate next)
    {
        var httpContext = context.HttpContext;

        var feature = new MiddlewareFilterFeature()
        {
            ResourceExecutionDelegate = next,
            ResourceExecutingContext = context
        };
        httpContext.Features.Set<IMiddlewareFilterFeature>(feature);

        return _middlewarePipeline(httpContext);
    }
}

From the point of view of the middleware pipeline we created, it is as though it was called as part of the normal pipeline; it just receives an HttpContext to work with. If need be, though, it can access the MVC context via the MiddlewareFilterFeature.

If you have written any filters previously, something may seem a bit off with this code. Normally, you would call await next() to execute the next filter in the pipeline before returning, but here we are just returning the Task from our RequestDelegate invocation. How does the pipeline continue? To see how, we'll skip back to the 'end-middleware' I glossed over in BuildPipeline.

Using the end-middleware to continue the filter pipeline

The middleware added at the end of the BuildPipeline method is responsible for continuing the execution of the filter pipeline. An abbreviated form looks like this:

nestedAppBuilder.Run(async (httpContext) =>  
{
    var feature = httpContext.Features.Get<IMiddlewareFilterFeature>();

    var resourceExecutionDelegate = feature.ResourceExecutionDelegate;
    var resourceExecutedContext = await resourceExecutionDelegate();

    if (!resourceExecutedContext.ExceptionHandled && resourceExecutedContext.Exception != null)
    {
        throw resourceExecutedContext.Exception;
    }
});

There are two main functions of this middleware. The primary goal is to ensure the filter pipeline continues after the MiddlewareFilter has executed. This is achieved by loading the IMiddlewareFilterFeature that was saved to the HttpContext when the filter began executing. The middleware can then access the next filter via the ResourceExecutionDelegate and await its execution as usual.

The second goal is to behave like a middleware pipeline rather than a filter pipeline when exceptions are thrown. That is, if a later filter or action method throws an exception, and no filter handles it, the end-middleware re-throws it, so that the middleware used in the filter can handle it as middleware normally would (with a try-catch).

Note that Get<IMiddlewareFilterFeature>() will be called before the end of each MiddlewareFilter. If you have multiple MiddlewareFilters in the pipeline, each one sets a new instance of IMiddlewareFilterFeature, overwriting the value saved earlier. I haven't dug into it, but this could potentially cause an issue if the middleware in your custom pipeline both operates on the response flowing back through the pipeline after later middleware has executed, and also tries to load the IMiddlewareFilterFeature - in that case, it would get the IMiddlewareFilterFeature associated with a different MiddlewareFilter. It's a pretty unlikely scenario I suspect, but still, just watch out for it.

Wrapping up

That brings us to the end of this look under the covers of middleware filters. Hopefully you found it interesting; personally, I just enjoy looking at the repos as a source of inspiration, should I ever need to implement something similar in the future.


Ben Foster: Using .NET Core Configuration with legacy projects

In .NET Core, configuration has been re-engineered, throwing away the System.Configuration model that relied on XML-based configuration files and introducing a number of new configuration components offering more flexibility and better extensibility.

At its lowest level, the new configuration system still provides access to key/value based settings. However, it also supports multiple configuration sources such as JSON files, and probably my favourite feature, strongly typed binding to configuration classes.

Whilst the new configuration system sits under the ASP.NET repository on GitHub, it doesn't actually depend on any of the new ASP.NET components, meaning it can be used in your non-.NET Core projects too.

In this post I'll cover how to use .NET Core Configuration in an ASP.NET Web API application.

Install the packages

The new .NET Core configuration components are published under Microsoft.Extensions.Configuration.* packages on NuGet. For this demo I've installed the following packages (install commands are shown after the list):

  • Microsoft.Extensions.Configuration
  • Microsoft.Extensions.Configuration.Json (support for JSON configuration files)
  • Microsoft.Extensions.Configuration.Binder (strongly-typed binding of configuration settings)
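If you're using the NuGet Package Manager Console, the installs would look something like this:

Install-Package Microsoft.Extensions.Configuration
Install-Package Microsoft.Extensions.Configuration.Json
Install-Package Microsoft.Extensions.Configuration.Binder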

Initialising configuration

To initialise the configuration system we use ConfigurationBuilder. When you install additional configuration sources, the builder is extended with new methods for adding those sources. Finally, call Build() to create a configuration instance:

IConfigurationRoot configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json.config", optional: true)
    .Build();

Accessing configuration settings

Once you have the configuration instance, settings can be accessed using their key:

var applicationName = configuration["ApplicationName"];

If your configuration settings have a hierarchical structure (likely if you're using JSON or XML files) then each level in the hierarchy is separated with a :.

To demonstrate, I've added an appsettings.json.config file containing a few configuration settings:

{
  "connectionStrings": {
    "MyDb": "server=localhost;database=mydb;integrated security=true"
  },
  "apiSettings": {
    "url": "http://localhost/api",
    "apiKey": "sk_1234566",
    "useCache":  true
  }
}

Note: I'm using the .config extension as a simple way to prevent IIS serving these files directly. Alternatively you can set up IIS request filtering to prevent access to your JSON config files.

I've then wired up an endpoint in my controller to return the configuration, using keys to access my values:

public class ConfigurationController : ApiController
{
    public HttpResponseMessage Get()
    {
        var config = new
        {
            MyDbConnectionString = Startup.Config["ConnectionStrings:MyDb"],
            ApiSettings = new
            {
                Url = Startup.Config["ApiSettings:Url"],
                ApiKey = Startup.Config["ApiSettings:ApiKey"],
                UseCache = Startup.Config["ApiSettings:UseCache"],
            }
        };

        return Request.CreateResponse(config);
    }
}

When I hit my /configuration endpoint I get the following JSON response:

{
    "MyDbConnectionString": "server=localhost;database=mydb;integrated security=true",
    "ApiSettings": {
        "Url": "http://localhost/api",
        "ApiKey": "sk_1234566",
        "UseCache": "True"
    }
}

Strongly-typed configuration

Of course, accessing settings in this way isn't a vast improvement over using ConfigurationManager, and as you'll notice above, we're not getting the correct types for all of our settings (UseCache comes back as the string "True" rather than a boolean).

Fortunately the new .NET Core configuration system supports strongly-typed binding of your configuration, using Microsoft.Extensions.Configuration.Binder.

I created the following class to bind my configuration to:

public class AppConfig
{
    public ConnectionStringsConfig ConnectionStrings { get; set; }
    public ApiSettingsConfig ApiSettings { get; set; }

    public class ConnectionStringsConfig
    {
        public string MyDb { get; set; }
    }   

    public class ApiSettingsConfig
    {
        public string Url { get; set; }
        public string ApiKey { get; set; }
        public bool UseCache { get; set; }
    }
}

To bind to this class directly, use the Get<T> extensions provided by the binder package. Here's my updated controller:

public HttpResponseMessage Get()
{
    var config = Startup.Config.Get<AppConfig>();
    return Request.CreateResponse(config);
}

The response:

{
   "ConnectionStrings":{
      "MyDb":"server=localhost;database=mydb;integrated security=true"
   },
   "ApiSettings":{
      "Url":"http://localhost/api",
      "ApiKey":"sk_1234566",
      "UseCache":true
   }
}

Now I can access my application configuration in a much nicer way:

if (config.ApiSettings.UseCache)
{

}

What about Web/App.config?

So far I've demonstrated how to use some of the new configuration features in a legacy application. But what if you still rely on traditional XML-based configuration files like web.config or app.config?

In my application I still have a few settings in app.config (I'm self-hosting the API) that I need. Ideally I'd like to use the .NET Core configuration system to bind these to my AppConfig class too:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <connectionStrings>
    <add name="MyLegacyDb" connectionString="server=localhost;database=legacy" />
  </connectionStrings>
  <appSettings>
    <add key="ApplicationName" value="CoreConfigurationDemo"/>
  </appSettings>
</configuration>

There is an XML configuration source for .NET Core. However, if you try to use it for the appSettings or connectionStrings elements, you'll find the generated keys are not really suitable for strongly-typed binding:

IConfigurationRoot configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json.config", optional: true)
    .AddXmlFile("app.config")
    .Build();

If we inspect the configuration after calling Build() we get the following key/value for the MyLegacyDb connection string:

[connectionStrings:add:MyLegacyDb:connectionString, server=localhost;database=legacy]

This is due to how the XML source binds XML attributes.

Given that we still have access to the older System.Configuration system, it makes sense to use it to read our XML config files and plug the values into the new .NET Core configuration system. We can do this by creating a custom configuration provider.

Creating a custom configuration provider

To implement a custom configuration provider you implement the IConfigurationProvider and IConfigurationSource interfaces. You can also derive from the abstract ConfigurationProvider class, which saves you writing some boilerplate code.

For a more advanced implementation that requires reading file contents and supports multiple files of the same type, check out Andrew Lock's write-up on how to add a YAML configuration provider.

Since I'm relying on System.Configuration.ConfigurationManager to read app.config and do not need to support multiple files, my implementation is quite simple:

public class LegacyConfigurationProvider : ConfigurationProvider, IConfigurationSource
{
    public override void Load()
    {
        foreach (ConnectionStringSettings connectionString in ConfigurationManager.ConnectionStrings)
        {
            Data.Add($"ConnectionStrings:{connectionString.Name}", connectionString.ConnectionString);
        }

        foreach (var settingKey in ConfigurationManager.AppSettings.AllKeys)
        {
            Data.Add(settingKey, ConfigurationManager.AppSettings[settingKey]);
        }
    }

    public IConfigurationProvider Build(IConfigurationBuilder builder)
    {
        return this;
    }
}

When ConfigurationBuilder.Build() is called, the Build method of each configured source is executed, returning an IConfigurationProvider that is used to get the configuration data. Since we're deriving from ConfigurationProvider, we can simply override Load, adding each of the connection strings and application settings from app.config.

I updated my AppConfig to include the new setting and connection string as below:

public class AppConfig
{
    public string ApplicationName { get; set; }
    public ConnectionStringsConfig ConnectionStrings { get; set; }
    public ApiSettingsConfig ApiSettings { get; set; }

    public class ConnectionStringsConfig
    {
        public string MyDb { get; set; }
        public string MyLegacyDb { get; set; }
    }   

    public class ApiSettingsConfig
    {
        public string Url { get; set; }
        public string ApiKey { get; set; }
        public bool UseCache { get; set; }
    }
}

The only change I need to make to my application is to add the configuration provider to my configuration builder:

IConfigurationRoot configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json.config", optional: true)
    .Add(new LegacyConfigurationProvider())
    .Build();

Then, when I hit my /configuration endpoint I get my complete configuration, bound from both my JSON file and app.config:

{
   "ApplicationName":"CoreConfigurationDemo",
   "ConnectionStrings":{
      "MyDb":"server=localhost;database=mydb;integrated security=true",
      "MyLegacyDb":"server=localhost;database=legacy"
   },
   "ApiSettings":{
      "Url":"http://localhost/api",
      "ApiKey":"sk_1234566",
      "UseCache":true
   }
}


Damien Bowden: Extending Identity in IdentityServer4 to manage users in ASP.NET Core

This article shows how Identity can be extended and used together with IdentityServer4 to implement application-specific requirements. The application allows users to register, and registered users can access the application for 7 days; after this, the user cannot log in. Any admin can activate or deactivate a user using a custom user management API. Extra properties are added to the Identity user model to support this. Identity is persisted using EFCore and SQLite. The SPA application is implemented using Angular 2, Webpack 2 and TypeScript 2.

Code: github

2017.01.07: Updated to IdentityServer4 1.0.0, webpack 2.2.0-rc.3, angular 2.4.1
2016.12.18: Updated to IdentityServer4 rc5, ASP.NET Core 1.1
2016.12.04: Updated to IdentityServer4 rc4

Other posts in this series:

Updating Identity

Updating Identity is pretty easy. The package provides the IdentityUser class, which the ApplicationUser extends. You can add any extra required properties to this class. This requires the Microsoft.AspNetCore.Identity.EntityFrameworkCore package, which is included in the project as a NuGet package.

using System;
using Microsoft.AspNetCore.Identity.EntityFrameworkCore;

namespace IdentityServerWithAspNetIdentity.Models
{
    public class ApplicationUser : IdentityUser
    {
        public bool IsAdmin { get; set; }
        public string DataEventRecordsRole { get; set; }
        public string SecuredFilesRole { get; set; }
        public DateTime AccountExpires { get; set; }
    }
}

Identity needs to be added to the application. This is done in the Startup class in the ConfigureServices method, using the AddIdentity extension. SQLite is used to persist the data. The ApplicationDbContext, which uses SQLite, is then used as the store for Identity.

services.AddDbContext<ApplicationDbContext>(options =>
	options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));

services.AddIdentity<ApplicationUser, IdentityRole>()
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders();

The connection string for the SQLite database is read from the appsettings configuration, using the ConfigurationBuilder in the Startup constructor (a sketch of the constructor follows the snippet below).

"ConnectionStrings": {
        "DefaultConnection": "Data Source=C:\\git\\damienbod\\AspNet5IdentityServerAngularImplicitFlow\\src\\ResourceWithIdentityServerWithClient\\usersdatabase.sqlite"
    },
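The constructor itself isn't shown in this post; a minimal sketch, assuming the standard ASP.NET Core 1.x Startup pattern, might look something like this:

public Startup(IHostingEnvironment env)
{
    // Build the configuration from the JSON file in the content root
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true);

    Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }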
   

The Identity store is then created using the EFCore migrations.

dotnet ef migrations add testMigration

dotnet ef database update

The new Identity properties are used in three ways: when creating a new user, when creating a token for a user, and when validating the token on a resource using policies.

Using Identity to create a new user

The Identity ApplicationUser is created in the Register method in the AccountController. The new extended properties which were added to the ApplicationUser can be set as required. In this example, a new user has access for 7 days. If users should be able to set these custom properties themselves, the RegisterViewModel and the corresponding view need to be extended.

[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Register(RegisterViewModel model, string returnUrl = null)
{
	ViewData["ReturnUrl"] = returnUrl;
	if (ModelState.IsValid)
	{
		var dataEventsRole = "dataEventRecords.user";
		var securedFilesRole = "securedFiles.user";
		if (model.IsAdmin)
		{
			dataEventsRole = "dataEventRecords.admin";
			securedFilesRole = "securedFiles.admin";
		}

		var user = new ApplicationUser {
			UserName = model.Email,
			Email = model.Email,
			IsAdmin = model.IsAdmin,
			DataEventRecordsRole = dataEventsRole,
			SecuredFilesRole = securedFilesRole,
			AccountExpires = DateTime.UtcNow.AddDays(7.0)
		};

		var result = await _userManager.CreateAsync(user, model.Password);
		if (result.Succeeded)
		{
			// For more information on how to enable account confirmation and password reset please visit http://go.microsoft.com/fwlink/?LinkID=532713
			// Send an email with this link
			//var code = await _userManager.GenerateEmailConfirmationTokenAsync(user);
			//var callbackUrl = Url.Action("ConfirmEmail", "Account", new { userId = user.Id, code = code }, protocol: HttpContext.Request.Scheme);
			//await _emailSender.SendEmailAsync(model.Email, "Confirm your account",
			//    $"Please confirm your account by clicking this link: <a href='{callbackUrl}'>link</a>");
			await _signInManager.SignInAsync(user, isPersistent: false);
			_logger.LogInformation(3, "User created a new account with password.");
			return RedirectToLocal(returnUrl);
		}
		AddErrors(result);
	}

	// If we got this far, something failed, redisplay form
	return View(model);
}

Using Identity when creating a token in IdentityServer4

The Identity properties need to be added to the claims so that the client SPA, or whatever client it is, can use them. In IdentityServer4, the IProfileService interface is used for this. Each custom ApplicationUser property is added as a claim as required.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Claims;
using System.Threading.Tasks;
using IdentityModel;
using IdentityServer4.Extensions;
using IdentityServer4.Models;
using IdentityServer4.Services;
using IdentityServerWithAspNetIdentity.Models;
using Microsoft.AspNetCore.Identity;

namespace IdentityServerWithAspNetIdentitySqlite
{
    using IdentityServer4;

    public class IdentityWithAdditionalClaimsProfileService : IProfileService
    {
        private readonly IUserClaimsPrincipalFactory<ApplicationUser> _claimsFactory;
        private readonly UserManager<ApplicationUser> _userManager;

        public IdentityWithAdditionalClaimsProfileService(UserManager<ApplicationUser> userManager,  IUserClaimsPrincipalFactory<ApplicationUser> claimsFactory)
        {
            _userManager = userManager;
            _claimsFactory = claimsFactory;
        }

        public async Task GetProfileDataAsync(ProfileDataRequestContext context)
        {
            var sub = context.Subject.GetSubjectId();
            var user = await _userManager.FindByIdAsync(sub);
            var principal = await _claimsFactory.CreateAsync(user);

            var claims = principal.Claims.ToList();
            claims = claims.Where(claim => context.RequestedClaimTypes.Contains(claim.Type)).ToList();
            claims.Add(new Claim(JwtClaimTypes.GivenName, user.UserName));

            if (user.IsAdmin)
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "admin"));
            }
            else
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "user"));
            }

            if (user.DataEventRecordsRole == "dataEventRecords.admin")
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "dataEventRecords.admin"));
                claims.Add(new Claim(JwtClaimTypes.Role, "dataEventRecords.user"));
                claims.Add(new Claim(JwtClaimTypes.Role, "dataEventRecords"));
                claims.Add(new Claim(JwtClaimTypes.Scope, "dataEventRecords"));
            }
            else
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "dataEventRecords.user"));
                claims.Add(new Claim(JwtClaimTypes.Role, "dataEventRecords"));
                claims.Add(new Claim(JwtClaimTypes.Scope, "dataEventRecords"));
            }

            if (user.SecuredFilesRole == "securedFiles.admin")
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "securedFiles.admin"));
                claims.Add(new Claim(JwtClaimTypes.Role, "securedFiles.user"));
                claims.Add(new Claim(JwtClaimTypes.Role, "securedFiles"));
                claims.Add(new Claim(JwtClaimTypes.Scope, "securedFiles"));
            }
            else
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "securedFiles.user"));
                claims.Add(new Claim(JwtClaimTypes.Role, "securedFiles"));
                claims.Add(new Claim(JwtClaimTypes.Scope, "securedFiles"));
            }

            claims.Add(new Claim(IdentityServerConstants.StandardScopes.Email, user.Email));

            context.IssuedClaims = claims;
        }

        public async Task IsActiveAsync(IsActiveContext context)
        {
            var sub = context.Subject.GetSubjectId();
            var user = await _userManager.FindByIdAsync(sub);
            context.IsActive = user != null;
        }
    }
}

Using the Identity properties when validating a token

The IsAdmin property is used to define whether a logged-on user has the admin role. This was added to the token using the admin claim in the IProfileService. This can now be used by defining a policy and validating that policy in a controller. The policies are added in the Startup class in the ConfigureServices method.

services.AddAuthorization(options =>
{
	options.AddPolicy("dataEventRecordsAdmin", policyAdmin =>
	{
		policyAdmin.RequireClaim("role", "dataEventRecords.admin");
	});
	options.AddPolicy("admin", policyAdmin =>
	{
		policyAdmin.RequireClaim("role", "admin");
	});
	options.AddPolicy("dataEventRecordsUser", policyUser =>
	{
		policyUser.RequireClaim("role", "dataEventRecords.user");
	});
});

The policy can then be used, for example, in an MVC Controller using the Authorize attribute. The admin policy is used in the UserManagementController.

[Authorize("admin")]
[Produces("application/json")]
[Route("api/UserManagement")]
public class UserManagementController : Controller
{

Now that users can be admins and accounts expire after 7 days, the application requires a UI to manage this. This UI is implemented in the Angular 2 SPA. The UI requires a user management API to get all the users and to update them. The Identity EFCore ApplicationDbContext context is used directly in the controller to keep things simple, but usually this would be separated from the controller, and if you have a lot of users, some type of search logic with a filtered result list would be needed. I like to have no logic in the MVC controller.

using System;
using System.Collections.Generic;
using System.Linq;
using IdentityServerWithAspNetIdentity.Data;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using ResourceWithIdentityServerWithClient.Model;

namespace ResourceWithIdentityServerWithClient.Controllers
{
    [Authorize("admin")]
    [Produces("application/json")]
    [Route("api/UserManagement")]
    public class UserManagementController : Controller
    {
        private readonly ApplicationDbContext _context;

        public UserManagementController(ApplicationDbContext context)
        {
            _context = context;
        }

        [HttpGet]
        public IActionResult Get()
        {
            var users = _context.Users.ToList();
            var result = new List<UserDto>();

            foreach(var applicationUser in users)
            {
                var user = new UserDto
                {
                    Id = applicationUser.Id,
                    Name = applicationUser.Email,
                    IsAdmin = applicationUser.IsAdmin,
                    IsActive = applicationUser.AccountExpires > DateTime.UtcNow
                };

                result.Add(user);
            }

            return Ok(result);
        }
        
        [HttpPut("{id}")]
        public void Put(string id, [FromBody]UserDto userDto)
        {
            var user = _context.Users.First(t => t.Id == id);

            user.IsAdmin = userDto.IsAdmin;
            if(userDto.IsActive)
            {
                if(user.AccountExpires < DateTime.UtcNow)
                {
                    user.AccountExpires = DateTime.UtcNow.AddDays(7.0);
                }
            }
            else
            {
                // deactivate user
                user.AccountExpires = new DateTime();
            }

            _context.Users.Update(user);
            _context.SaveChanges();
        }   
    }
}

Angular 2 User Management Component

The Angular 2 SPA is built using Webpack 2 with TypeScript. See https://github.com/damienbod/Angular2WebpackVisualStudio for how to set up an Angular 2, Webpack 2 app with ASP.NET Core.

The Angular 2 app requires a service to access the ASP.NET Core MVC API. This is implemented in the UserManagementService, which then needs to be added to app.module.

import { Injectable } from '@angular/core';
import { Http, Response, Headers, RequestOptions } from '@angular/http';
import 'rxjs/add/operator/map';
import { Observable } from 'rxjs/Observable';
import { Configuration } from '../app.constants';
import { SecurityService } from '../services/SecurityService';
import { User } from './models/User';

@Injectable()
export class UserManagementService {

    private actionUrl: string;
    private headers: Headers;

    constructor(private _http: Http, private _configuration: Configuration, private _securityService: SecurityService) {
        this.actionUrl = `${_configuration.Server}/api/UserManagement/`;   
    }

    private setHeaders() {

        console.log("setHeaders started");

        this.headers = new Headers();
        this.headers.append('Content-Type', 'application/json');
        this.headers.append('Accept', 'application/json');

        var token = this._securityService.GetToken();
        if (token !== "") {
            let tokenValue = 'Bearer ' + token;
            console.log("tokenValue:" + tokenValue);
            this.headers.append('Authorization', tokenValue);
        }
    }

    public GetAll = (): Observable<User[]> => {
        this.setHeaders();
        let options = new RequestOptions({ headers: this.headers, body: '' });

        return this._http.get(this.actionUrl, options).map(res => res.json());
    }

    public Update = (id: string, itemToUpdate: User): Observable<Response> => {
        this.setHeaders();
        return this._http
            .put(this.actionUrl + id, JSON.stringify(itemToUpdate), { headers: this.headers });
    }
}

The UserManagementComponent uses the service to display all the users, and provides a way of updating each one.

import { Component, OnInit } from '@angular/core';
import { SecurityService } from '../services/SecurityService';
import { Observable } from 'rxjs/Observable';
import { Router } from '@angular/router';

import { UserManagementService } from '../user-management/UserManagementService';
import { User } from './models/User';

@Component({
    selector: 'user-management',
    templateUrl: 'user-management.component.html'
})

export class UserManagementComponent implements OnInit {

    public message: string;
    public Users: User[];

    constructor(
        private _userManagementService: UserManagementService,
        public securityService: SecurityService,
        private _router: Router) {
        this.message = "user-management";
    }
    
    ngOnInit() {
        this.getData();
    }

    private getData() {
        console.log('User Management:getData starting...');
        this._userManagementService
            .GetAll()
            .subscribe(data => this.Users = data,
            error => this.securityService.HandleError(error),
            () => console.log('User Management Get all completed'));
    }

    public Update(user: User) {
        this._userManagementService.Update(user.id, user)
            .subscribe((() => console.log("subscribed")),
            error => this.securityService.HandleError(error),
            () => console.log("update request sent!"));
    }

}

The UserManagementComponent template uses the Users data to display and update the users.

<div class="col-md-12" *ngIf="securityService.IsAuthorized">
    <div class="panel panel-default">
        <div class="panel-heading">
            <h3 class="panel-title">{{message}}</h3>
        </div>
        <div class="panel-body"  *ngIf="Users">
            <table class="table">
                <thead>
                    <tr>
                        <th>Name</th>
                        <th>IsAdmin</th>
                        <th>IsActive</th>
                        <th></th>
                    </tr>
                </thead>
                <tbody>
                    <tr style="height:20px;" *ngFor="let user of Users">
                        <td>{{user.name}}</td>
                        <td>
                            <input type="checkbox" [(ngModel)]="user.isAdmin" class="form-control" style="box-shadow:none" />
                        </td>
                        <td>
                            <input type="checkbox" [(ngModel)]="user.isActive" class="form-control" style="box-shadow:none" />
                        </td>
                        <td>
                            <button (click)="Update(user)" class="form-control">Update</button>
                        </td>
                    </tr>
                </tbody>
            </table>

        </div>
    </div>
</div>

The user-management component and the service need to be added to the module.

import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { BrowserModule } from '@angular/platform-browser';

import { AppComponent } from './app.component';
import { Configuration } from './app.constants';
import { routing } from './app.routes';
import { HttpModule, JsonpModule } from '@angular/http';

import { SecurityService } from './services/SecurityService';
import { DataEventRecordsService } from './dataeventrecords/DataEventRecordsService';
import { DataEventRecord } from './dataeventrecords/models/DataEventRecord';

import { ForbiddenComponent } from './forbidden/forbidden.component';
import { HomeComponent } from './home/home.component';
import { UnauthorizedComponent } from './unauthorized/unauthorized.component';

import { DataEventRecordsListComponent } from './dataeventrecords/dataeventrecords-list.component';
import { DataEventRecordsCreateComponent } from './dataeventrecords/dataeventrecords-create.component';
import { DataEventRecordsEditComponent } from './dataeventrecords/dataeventrecords-edit.component';

import { UserManagementComponent } from './user-management/user-management.component';


import { HasAdminRoleAuthenticationGuard } from './guards/hasAdminRoleAuthenticationGuard';
import { HasAdminRoleCanLoadGuard } from './guards/hasAdminRoleCanLoadGuard';
import { UserManagementService } from './user-management/UserManagementService';

@NgModule({
    imports: [
        BrowserModule,
        FormsModule,
        routing,
        HttpModule,
        JsonpModule
    ],
    declarations: [
        AppComponent,
        ForbiddenComponent,
        HomeComponent,
        UnauthorizedComponent,
        DataEventRecordsListComponent,
        DataEventRecordsCreateComponent,
        DataEventRecordsEditComponent,
        UserManagementComponent
    ],
    providers: [
        SecurityService,
        DataEventRecordsService,
        UserManagementService,
        Configuration,
        HasAdminRoleAuthenticationGuard,
        HasAdminRoleCanLoadGuard
    ],
    bootstrap:    [AppComponent],
})

export class AppModule {}

Now the Identity users can be managed from the Angular 2 UI.

extendingidentity_01

Links

https://github.com/IdentityServer/IdentityServer4

http://docs.identityserver.io/en/dev/

https://github.com/IdentityServer/IdentityServer4.Samples

https://docs.asp.net/en/latest/security/authentication/identity.html

https://github.com/IdentityServer/IdentityServer4/issues/349

https://damienbod.com/2016/06/12/asp-net-core-angular2-with-webpack-and-visual-studio/



Andrew Lock: Troubleshooting ASP.NET Core 1.1.0 install problems

Troubleshooting ASP.NET Core 1.1.0 install problems

I was planning on playing with the latest .NET Core 1.1.0 preview recently, but I ran into a few issues getting it working on my Mac. As I suspected, this was entirely down to my mistakes and my machine's setup, but I'm documenting it here in case anyone else runs into similar problems!

Note that as of yesterday the RTM release of 1.1.0 is out, so while not strictly applicable, I would probably have run into the same problems! I've updated the post to reflect the latest version numbers.

TL;DR: There were two issues I ran into. First, the global.json file I used specified an older version of the tooling. Second, I had an older version of the tooling installed that was, according to SemVer, newer than the version I had just installed!

Installing ASP.NET Core 1.1.0

I began by downloading the .NET Core 1.1.0 installer for macOS from the downloads page, following the instructions from the announcement blog post. The installation was quick and went smoothly, installing side-by side with the existing .NET Core 1.0 RTM install.

Troubleshooting ASP.NET Core 1.1.0 install problems

Creating a new 1.1.0 project

According to the blog post, once you've run the installer you should be able to start creating 1.1.0 applications. Running dotnet new with the .NET CLI should create a new 1.1.0 application, with a project.json that contains an updated Microsoft.NETCore.App dependency, looking something like:

"frameworks": {
  "netcoreapp1.0": {
    "dependencies": {
      "Microsoft.NETCore.App": {
        "type": "platform",
        "version": "1.1.0"
      }
    },
    "imports": "dnxcore50"
  }
}

So I created a sub folder for a test project, ran dotnet new and eagerly checked the project.json:

"frameworks": {
  "netcoreapp1.0": {
    "dependencies": {
      "Microsoft.NETCore.App": {
        "type": "platform",
        "version": "1.0.0"
      }
    },
    "imports": "dnxcore50"
  }
}

Hmmm, that doesn't look right, we still seem to be getting a 1.0.0 project instead of 1.1.0…

Check the global.json

My first thought was that the install hadn't worked correctly - it is a preview after all (it was when I originally tried it!), so that wouldn't be completely unheard of. Running dotnet --version to check the version of the CLI being run returned:

$ dotnet --version
1.0.0-preview2-003121  

So the preview 2 tooling is being used, which corresponds to the .NET Core 1.0 RTM release - definitely the wrong version.

It was then I remembered a similar issue I had when moving from RC2 to the RTM release - check the global.json! When I had created my sub folder for testing dotnet new, I had automatically copied across a global.json from a previous project. Looking inside, this was what I found:

{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.0-preview2-003121"
  }
}

Bingo! If an SDK version is specified in global.json then it is used in preference to the latest tooling. Updating the sdk section with the appropriate value, or removing the section entirely, means the latest tooling will be used, which should let me create my 1.1.0 project.
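For example, to target the 1.1 RTM tooling mentioned later in this post, the global.json would look something like this (the exact version string should match an SDK actually installed on your machine):

{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.0-preview2-1-003177"
  }
}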

Take two - preview fun

After removing the sdk section of the global.json, I ran dotnet new again, and checked the project.json:

"frameworks": {
  "netcoreapp1.0": {
    "dependencies": {
      "Microsoft.NETCore.App": {
        "type": "platform",
        "version": "1.0.0"
      }
    },
    "imports": "dnxcore50"
  }
}

D'oh, that's still not right! At this point, I started to think something must have gone wrong with the installation, as I couldn't think of any other explanation. Luckily, it's easy to see which versions of the SDK are installed by checking the file system. On a Mac, you can see them at:

/usr/local/share/dotnet/sdk/

Checking the folder, this is what I saw, notice anything odd?

Troubleshooting ASP.NET Core 1.1.0 install problems

There's quite a few different versions of the SDK in there, including the 1.0.0 RTM version (1.0.0-preview2-003121) and also the 1.1.0 Preview 1 version (1.0.0-preview2.1-003155). However there's also a slightly odd one that stands out - 1.0.0-preview3-003213. (Note, with the 1.1 RTM there is a whole new version, 1.0.0-preview2-1-003177)

Most people installing the .NET Core SDK will not run into this issue, as they likely won't have this additional preview3 version. I only have it installed (I think) because I created a couple of pull requests to the ASP.NET Core repositories recently. The way versioning works for development builds of the ASP.NET Core repositories means that although there is a preview3 version of the tooling, it is actually older than the preview2.1 version just released, and generates 1.0.0 projects.

When you run dotnet new, and in the absence of a global.json with an sdk section, the CLI will use the most recent version of the tooling as determined by SemVer. Consequently, it had been using the preview3 version and generating 1.0.0 projects!

The simple solution was to delete the 1.0.0-preview3-003213 folder, and re-run dotnet new:

"frameworks": {
  "netcoreapp1.0": {
    "dependencies": {
      "Microsoft.NETCore.App": {
        "type": "platform",
        "version": "1.1.0-preview1-001100-00"
      }
    },
    "imports": "dnxcore50"
  }
}

Lo and behold, a 1.1.0 project!

Summary

The final issue I ran into is not something that general users have to worry about. The only reason it was a problem for me was due to working directly with the GitHub repo, and the slightly screwy SemVer versions when using development packages.

The global.json issue is one that you might run into when upgrading projects. It's well documented that you need to update it when upgrading, but it's easy to overlook.

Anyway, the issues I experienced were entirely down to my setup and stupidity rather than the installer or documentation, so hopefully things go smoother for you. Now time to play with new features!


Andrew Lock: Making ConcurrentDictionary GetOrAdd thread safe using Lazy

Making ConcurrentDictionary GetOrAdd thread safe using Lazy

I was browsing the ASP.NET Core MVC GitHub repo the other day, checking out the new 1.1.0 Preview 1 code, when I spotted a usage of ConcurrentDictionary that I thought was interesting. This post explores the GetOrAdd function, the level of thread safety it provides, and ways to add additional threading constraints.

I was looking at the code that enables using middleware as MVC filters where they are building up a filter pipeline. This needs to be thread-safe, so they sensibly use a ConcurrentDictionary<>, but instead of a dictionary of RequestDelegate, they are using a dictionary of Lazy<RequestDelegate>. Along with the initialisation is this comment:

// 'GetOrAdd' call on the dictionary is not thread safe and we might end up creating the pipeline more
// once. To prevent this Lazy<> is used. In the worst case multiple Lazy<> objects are created for multiple
// threads but only one of the objects succeeds in creating a pipeline.
private readonly ConcurrentDictionary<Type, Lazy<RequestDelegate>> _pipelinesCache  
    = new ConcurrentDictionary<Type, Lazy<RequestDelegate>>();

This post will explore the pattern they are using and why you might want to use it in your code.

tl;dr; To make a ConcurrentDictionary only call a delegate once when using GetOrAdd, store your values as Lazy<T>, and use by calling GetOrAdd(key, valueFactory).Value.

The GetOrAdd function

The ConcurrentDictionary is a dictionary that allows you to add, fetch and remove items in a thread-safe way. If you're going to be accessing a dictionary from multiple threads, then it should be your go-to class.

The vast majority of methods it exposes are thread safe, with the notable exception of one of the GetOrAdd overloads:

TValue GetOrAdd(TKey key, Func<TKey, TValue> valueFactory);  

This overload takes a key value, and checks whether the key already exists in the dictionary. If the key already exists, then the associated value is returned; if the key does not exist, the provided delegate is run, and the resulting value is stored in the dictionary and returned to the caller.

For example, consider the following little program.

public static void Main(string[] args)  
{
    var dictionary = new ConcurrentDictionary<string, string>();

    var value = dictionary.GetOrAdd("key", x => "The first value");
    Console.WriteLine(value);

    value = dictionary.GetOrAdd("key", x => "The second value");
    Console.WriteLine(value);
}

The first time GetOrAdd is called, the dictionary is empty, so the value factory runs and returns the string "The first value", storing it against the key. On the second call, GetOrAdd finds the saved value and uses that instead of calling the factory. The output gives:

The first value  
The first value  

GetOrAdd and thread safety.

Internally, the ConcurrentDictionary uses locking to make it thread-safe for most methods, but GetOrAdd does not lock while valueFactory is running. This is done to prevent unknown code from blocking all the threads, but it means that valueFactory might run more than once if it is called simultaneously from multiple threads. However, thread safety kicks in when saving the returned value to the dictionary and when returning the generated value back to the caller, so you will always get the same value back from each call.

For example, consider the program below, which uses tasks to run threads simultaneously. It works very similarly to before, but runs the GetOrAdd function on two separate threads. It also increments a counter every time the valueFactory is run.

public class Program  
{
    private static int _runCount = 0;
    private static readonly ConcurrentDictionary<string, string> _dictionary
        = new ConcurrentDictionary<string, string>();

    public static void Main(string[] args)
    {
        var task1 = Task.Run(() => PrintValue("The first value"));
        var task2 = Task.Run(() => PrintValue("The second value"));
        Task.WaitAll(task1, task2);

        PrintValue("The third value");

        Console.WriteLine($"Run count: {_runCount}");
    }

    public static void PrintValue(string valueToPrint)
    {
        var valueFound = _dictionary.GetOrAdd("key",
                    x =>
                    {
                        Interlocked.Increment(ref _runCount);
                        Thread.Sleep(100);
                        return valueToPrint;
                    });
        Console.WriteLine(valueFound);
    }
}

The PrintValue function again calls GetOrAdd on the ConcurrentDictionary, passing in a Func<> that increments the counter and returns a string. Running this program produces one of two outputs, depending on the order the threads are scheduled; either

The first value  
The first value  
The first value  
Run count: 2  

or

The second value  
The second value  
The second value  
Run count: 2  

As you can see, you always get the same value back from each call to GetOrAdd, depending on which thread returns first. However, the delegate runs on both of the simultaneous calls, as shown by _runCount = 2, because the value had not been stored by the first call before the second call ran. Stepping through, the interactions could look something like this:

  1. Thread A calls GetOrAdd on the dictionary for the key "key" but does not find it, so starts to invoke the valueFactory.

  2. Thread B also calls GetOrAdd on the dictionary for the key "key". Thread A has not yet completed, so no existing value is found, and Thread B also starts to invoke the valueFactory.

  3. Thread A completes its invocation, and returns the value "The first value" back to the concurrent dictionary. The dictionary checks there is still no value for "key", and inserts the new KeyValuePair. Finally, it returns "The first value" to the caller.

  4. Thread B completes its invocation and returns the value "The second value" back to the concurrent dictionary. The dictionary sees the value for "key" stored by Thread A, so it discards the value it created and uses that one instead, returning the value back to the caller.

  5. Thread C calls GetOrAdd and finds the value already exists for "key", so returns the value, without having to invoke valueFactory

In this case, running the delegate more than once has no adverse effects - all we care about is that the same value is returned from each call to GetOrAdd. But what if the delegate has side effects such that we need to ensure it is only run once?

Ensuring the delegate only runs once with Lazy

As we've seen, there are no guarantees made by ConcurrentDictionary about the number of times the Func<> will be called. When building a middleware pipeline, however, we need to be sure that the middleware is only built once, as it could be doing some bootstrapping that is expensive or not thread safe. The solution that the ASP.NET Core team used is to use Lazy<> initialisation.

The output we are aiming for is

The first value  
The first value  
The first value  
Run count: 1  

or similarly for "The second value" - it doesn't matter which wins out, the important points are that the same value is returned every time, and that _runCount is always 1.

Looking back at our previous example, instead of using a ConcurrentDictionary<string, string>, we create a ConcurrentDictionary<string, Lazy<string>>, and we update the PrintValue() method to create a lazy object instead:

public static void PrintValueLazy(string valueToPrint)  
{
    var valueFound = _lazyDictionary.GetOrAdd("key",
                x => new Lazy<string>(
                    () =>
                        {
                            Interlocked.Increment(ref _runCount);
                            Thread.Sleep(100);
                            return valueToPrint;
                        }));
    Console.WriteLine(valueFound.Value);
}

There are only two changes we have made here. We have updated the GetOrAdd call to return a Lazy<string> rather than a string directly, and we are calling valueFound.Value to get the actual string value to write to the console. To see why this solves the problem, let's step through what happens when we run the whole program.

  1. Thread A calls GetOrAdd on the dictionary for the key "key" but does not find it, so starts to invoke the valueFactory.

  2. Thread B also calls GetOrAdd on the dictionary for the key "key". Thread A has not yet completed, so no existing value is found, and Thread B also starts to invoke the valueFactory.

  3. Thread A completes its invocation, returning an uninitialised Lazy<string> object. The delegate inside the Lazy<string> has not been run at this point, we've just created the Lazy<string> container. The dictionary checks there is still no value for "key", and so inserts the Lazy<string> against it, and finally, returns the Lazy<string> back to the caller.

  4. Thread B completes its invocation, similarly returning an uninitialised Lazy<string> object. As before, the dictionary sees the Lazy<string> object for "key" stored by Thread A, so it discards the Lazy<string> it just created and uses that one instead, returning it back to the caller.

  5. Thread A calls Lazy<string>.Value. This invokes the provided delegate in a thread safe way, such that if it is called simultaneously by two threads, it will only run the delegate once.

  6. Thread B calls Lazy<string>.Value. This is the same Lazy<string> object that Thread A just initialised (remember the dictionary ensures you always get the same value back.) If Thread A is still running the initialisation delegate, then Thread B just blocks until it finishes and it can access the result. We just get the final return string, without invoking the delegate for a second time. This is what gives us the run-once behaviour we need.

  7. Thread C calls GetOrAdd and finds the Lazy<string> object already exists for "key", so returns the value, without having to invoke valueFactory. The Lazy<string> has already been initialised, so the resulting value is returned directly.

We still get the same behaviour from the ConcurrentDictionary in that we might run the valueFactory more than once, but now we are just calling new Lazy<>() inside the factory. In the worst case, we create multiple Lazy<> objects, which get discarded by the ConcurrentDictionary when consolidating inside the GetOrAdd method.

It is the Lazy<> object which enforces that we only run our expensive delegate once. By calling Lazy<>.Value we trigger the delegate to run in a thread safe way, such that we can be sure it will only be run by one thread at a time. Other threads which call Lazy<>.Value simultaneously will be blocked until the first call completes, and then will use the same result.
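This run-once behaviour is the default for Lazy<T> when you pass just a value factory; it corresponds to LazyThreadSafetyMode.ExecutionAndPublication (from System.Threading), which you can also specify explicitly if you want the intent to be visible in the code (ComputeExpensiveValue is a hypothetical delegate):

var lazy = new Lazy<string>(
    () => ComputeExpensiveValue(),                 // hypothetical expensive delegate
    LazyThreadSafetyMode.ExecutionAndPublication); // only one thread ever runs the delegate

string value = lazy.Value; // concurrent callers block until the first initialisation completes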

Summary

When using GetOrAdd, if your valueFactory is idempotent and not expensive, then there is no need for this trick. You can be sure you will always get the same value with each call, but you need to be aware the valueFactory may run multiple times.

If you have an expensive operation that must be run only once as part of a call to GetOrAdd, then using Lazy<> is a great solution. The only caveat to be aware of is that Lazy<>.Value will block other threads trying to access the value until the first call completes. Depending on your use case, this may or may not be a problem, and is the reason GetOrAdd does not have these semantics by default.


Andrew Lock: Fixing a bug: when concatenated strings turn into numbers in JavaScript

Fixing a bug: when concatenated strings turn into numbers in JavaScript

This is a very quick post about trying to fix a JavaScript bug that plagued me for an hour this morning.

tl;dr I was tripped up by a rogue unary operator and slap-dash copy-pasting.

The setup

On an existing web page, there was some JavaScript that builds up a string to insert into the DOM:

function GetTemplate(url, html)  
{
   // other details removed
   var template = '<div class="something"><a href="'
                  + url
                  + '" target="_blank"><strong>Details: </strong><span>'
                  + html
                  + '</span></a></div>';
  return template;
}

Ignore for now the fact that this code is an abomination, the potential vulnerabilities, etc. - it is what it is.

The requirement was simple: insert an optional additional <span> tag before the <strong>, only if the value of a variable provided is 'truthy'. Seems pretty easy right? It should have been.

The first attempt

I set about quickly rolling out the fix and came up with code something like this:

function GetTemplate(url, html, summary) {  
   // other details removed
   var template = '<div class="something"><a href="'
                  + url
                  + '" target="_blank">';

   if(summary) {
       template += '<span class="summary">' 
           + summary 
           + '</span>';
   }

   template +=
       +'<strong>Details: </strong><span>'
       + html
       + '</span></a></div>';

  return template;
}

All looked ok to me, F5 to reload the page, and… oh dear, that doesn't look right…

Fixing a bug: when concatenated strings turn into numbers in JavaScript

Can you spot what I did wrong?

The HTML that was generated looked like this:

<div class="something"><a href="https://thewebsite.com" target="_blank">  
    <span class="summary">The summary</span>NaNThis is the inner message</span></a>
</div>  

Spotted it yet?

Concatenation vs addition

Looking at the generated HTML, there appears to be a rogue "NaN" string that has found its way into the output, and there's also no sign of the <strong> tag. The presence of the NaN was a pretty clear indicator that there was some conversion to numbers going on, but I couldn't for the life of me see where!

As I'm sure you all know, in JavaScript the + symbol can be used both for numeric addition and string concatenation, depending on the variables either side. For example,

console.log('value:' + 3);           // 'value:3'  
console.log(3 + 1);                   // 4  
console.log('value:' + 3 + '+' + 1); // 'value:3+1'  
console.log('value:' + 3 + 1);       // 'value:31'  
console.log('value:' + (3 + 1));     // 'value:4'  
console.log(3 + ' is the value');    // '3 is the value'  

In these examples, when either the left or right operands of the + symbol are a string, the other operand is coerced to a string and a concatenation is performed. Otherwise the operator is treated as an addition.

The presence of the NaN in the output string indicated there must be something going on where a string was being treated as a number. But given the concatenation rules above, and the fact we weren't using parseInt() or similar anywhere, it just didn't make any sense!

The culprit

Narrowing the problem down, the issue appeared to be in the third block of string concatenation, in which the strong tag is added:

template +=  
       +'<strong>Details: </strong><span>'
       + html
       + '</span></a></div>';

If you still haven't spotted it, writing it all one line may do the trick for you:

template += +'<strong>Details: </strong><span>' + html + '</span></a></div>';  

Right at the beginning of that statement I am calling 'string' += +'string'. See the extra + that crept in through a copy-and-paste error? That was the source of all my woes - a unary operation. To quote the You Don't Know JS book by Kyle Simpson:

+c here is showing the unary operator form (operator with only one operand) of the + operator. Instead of performing mathematic addition (or string concatenation -- see below), the unary + explicitly coerces its operand (c) to a number value.

This has an interesting effect on the subsequent string, in that it tries to convert it to a number.

This was the exact problem I had. The rogue + was attempting to convert the string <strong>Details: </strong><span> to a number, was failing and returning NaN. This was then coerced to a string as a result of the subsequent concatenations, and broke my HTML! Removing that + fixed everything.

Bonus

As an interesting side point, I was using gulp-uglify to minify the resulting JavaScript as part of the build. As part of that minification, the 'unary operator plus value' combination (+'<strong>Details: </strong><span>') was actually being stored in the minified JavaScript as an explicit NaN. Gulp had seen my error and set it in stone for me!

I'm sure there's a lesson to be learnt here about not rushing and copying and pasting, but my immediate thought was for a small gulp plugin that warns you about unexpected NaNs in your minified code! I wouldn't be surprised if that already exists…


Andrew Lock: Using dependency injection in a .Net Core console application

Using dependency injection in a .Net Core console application

One of the key features of ASP.NET Core is its baked-in dependency injection. There may be various disagreements about the way it is implemented, but in general, encouraging a good practice by default seems like a win to me.

Whether you choose to use the built-in container or a third-party container will likely come down to whether the built-in container is powerful enough for your given project. For small projects it may be fine, but if you need convention-based registration, logging/debugging tools, or more esoteric approaches like property injection, then you'll need to look elsewhere. Luckily, third-party containers are pretty easy to integrate, and are going to be getting easier.

Why use the built-in container?

One question that's come up a few times is whether you can use the built-in provider in a .NET Core console application. The short answer is: not out-of-the-box, but adding it in is pretty simple. Having said that, whether it is worth using in this case is another question.

One of the advantages of the built-in container in ASP.NET Core is that the framework libraries themselves register their dependencies with it. When you call the AddMvc() extension method in your Startup.ConfigureServices method, the framework registers a whole plethora of services with the container. If you later add a third-party container, those dependencies are passed across to be re-registered, so they are available when resolved via the third-party container.

If you are writing a console app, then you likely don't need MVC or other ASP.NET Core specific services. In that case, it may be just as easy to start right off the bat using StructureMap or AutoFac instead of the limited built-in provider.

Having said that, most common services designed for use with ASP.NET Core will have extensions for registering with the built in container via IServiceCollection, so if you are using services such as logging, or the Options pattern, then it is certainly easier to use the provided extensions, and plug a third party on top of that if required.
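
For example, here's a minimal sketch of what that might look like for the Options pattern in a console app. MyOptions is a hypothetical class, and this assumes the Microsoft.Extensions.Options package is referenced; the statements would live inside Main:

// a minimal sketch - MyOptions is a hypothetical options class
public class MyOptions
{
    public string Name { get; set; }
}

// inside Main; requires 'using Microsoft.Extensions.DependencyInjection;'
// and 'using Microsoft.Extensions.Options;'
var services = new ServiceCollection();
services.AddOptions();
services.Configure<MyOptions>(options => options.Name = "console-app");

var provider = services.BuildServiceProvider();
Console.WriteLine(provider.GetService<IOptions<MyOptions>>().Value.Name); // "console-app"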

Adding DI to a console app

If you decide the built-in container is the right approach, then adding it to your application is very simple using the Microsoft.Extensions.DependencyInjection package. To demonstrate the approach, I'm going to create a simple application that has two services:

public interface IFooService  
{
    void DoThing(int number);
}

public interface IBarService  
{
    void DoSomeRealWork();
}

Each of these services will have a single implementation. The BarService depends on an IFooService, and the FooService uses an ILoggerFactory to log some work:

public class BarService : IBarService  
{
    private readonly IFooService _fooService;
    public BarService(IFooService fooService)
    {
        _fooService = fooService;
    }

    public void DoSomeRealWork()
    {
        for (int i = 0; i < 10; i++)
        {
            _fooService.DoThing(i);
        }
    }
}

public class FooService : IFooService  
{
    private readonly ILogger<FooService> _logger;
    public FooService(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<FooService>();
    }

    public void DoThing(int number)
    {
        _logger.LogInformation($"Doing the thing {number}");
    }
}

As you can see above, I'm using the new logging infrastructure in my app, so I will need to add the appropriate package to my project.json. I'll also add the DependencyInjection package and the Microsoft.Extensions.Logging.Console package so I can see the results of my logging:

{
  "dependencies": {
    "Microsoft.Extensions.Logging": "1.0.0",
    "Microsoft.Extensions.Logging.Console": "1.0.0",
    "Microsoft.Extensions.DependencyInjection": "1.0.0"
  }
}

Finally, I'll update my static void Main to put all the pieces together. We'll walk through it in a second.

using Microsoft.Extensions.DependencyInjection;  
using Microsoft.Extensions.Logging;

public class Program  
{
    public static void Main(string[] args)
    {
        //setup our DI
        var serviceProvider = new ServiceCollection()
            .AddLogging()
            .AddSingleton<IFooService, FooService>()
            .AddSingleton<IBarService, BarService>()
            .BuildServiceProvider();

        //configure console logging
        serviceProvider
            .GetService<ILoggerFactory>()
            .AddConsole(LogLevel.Debug);

        var logger = serviceProvider.GetService<ILoggerFactory>()
            .CreateLogger<Program>();
        logger.LogDebug("Starting application");

        //do the actual work here
        var bar = serviceProvider.GetService<IBarService>();
        bar.DoSomeRealWork();

        logger.LogDebug("All done!");

    }
}

The first thing we do is configure the dependency injection container by creating a ServiceCollection, adding our dependencies, and finally building an IServiceProvider. This process is equivalent to the ConfigureServices method in an ASP.NET Core project, and is pretty much what happens behind the scenes. You can see we are using the IServiceCollection extension method to add the logging services to our application, and then registering our own services. The serviceProvider is our container, which we can use to resolve services in our application.

In the next step, we need to configure the logging infrastructure with a provider, so the results are output somewhere. We first fetch an instance of ILoggerFactory from our newly constructed serviceProvider, and add a console logger.

The remainder of the program shows more dependency-injection in progress. We first fetch an ILogger<T> from the container, and then fetch an instance of IBarService. As per our registrations, the IBarService is an instance of BarService, which will have an instance of FooService injected in it.

We can then run our application and see all our beautifully resolved dependencies!

Using dependency injection in a .Net Core console application

Adding StructureMap to your console app

As described previously, the built-in container is useful for adding framework libraries using the extension methods, like we saw with AddLogging above. However it is much less fully featured than many third-party containers.

For completeness, I'll show how easy it is to update the application to use a hybrid approach: using the built-in container to easily add any framework dependencies, and using StructureMap for your own code. If you want a more detailed description of adding StructureMap to an ASP.NET Core application, see the post here.

First you need to add StructureMap to your project.json dependencies:

{
  "dependencies": {
    "StructureMap.Microsoft.DependencyInjection": "1.2.0"
  }
}

Now we'll update our static void Main to use StructureMap for registering our custom dependencies:

public static void Main(string[] args)  
{
    // requires 'using StructureMap;' - the config.Populate call below comes
    // from the StructureMap.Microsoft.DependencyInjection adapter package

    // add the framework services
    var services = new ServiceCollection()
        .AddLogging();

    // add StructureMap
    var container = new Container();
    container.Configure(config =>
    {
        // Register stuff in container, using the StructureMap APIs...
        config.Scan(_ =>
                    {
                        _.AssemblyContainingType(typeof(Program));
                        _.WithDefaultConventions();
                    });
        // Populate the container using the service collection
        config.Populate(services);
    });

    var serviceProvider = container.GetInstance<IServiceProvider>();

    // rest of method as before
}

At first glance this may seem more complicated than the previous version, and it is, but it is also far more powerful. In the StructureMap example, we didn't have to explicitly register our IFooService or IBarService services - they were automatically registered by convention. When your apps start to grow, this sort of convention-based registration becomes enormously powerful, especially when coupled with the error handling and debugging capabilities available to you.

In this example I showed how to use StructureMap with the adapter so that it works with the IServiceCollection extension methods, but there's obviously no requirement to do that. Using StructureMap as your only registration source is perfectly valid; you'll just have to manually register any services added as part of the AddPLUGIN extension methods directly, as in the sketch below.
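
A StructureMap-only configuration might look something like the following sketch, registering the example services from earlier explicitly. The manual ILoggerFactory and ILogger<> registrations stand in for part of what AddLogging() would normally do, and are assumptions rather than a complete replacement:

var container = new Container(config =>
{
    // our own services, registered with StructureMap's native API
    config.For<IFooService>().Use<FooService>();
    config.For<IBarService>().Use<BarService>();

    // services normally registered by AddLogging() now need adding by hand
    config.For<ILoggerFactory>().Use<LoggerFactory>().Singleton();
    config.For(typeof(ILogger<>)).Use(typeof(Logger<>));
});

var bar = container.GetInstance<IBarService>();
bar.DoSomeRealWork();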

Summary

In this post I discussed why you might want to use the built-in container for dependency injection in a .NET Core application. I showed how you could add a new ServiceCollection to your project, register and configure the logging framework, and retrieve configured instances of services from it.

Finally, I showed how you could use a third-party container in combination with the built-in container to allow you to use more powerful registration features, such as convention based registration.

You can find an example project, along with many others for posts on my blog, on GitHub here.


Pedro Félix: Accessing the HTTP Context on ASP.NET Core

TL;DR

On ASP.NET Core, the access to the request context can be done via the new IHttpContextAccessor interface, which provides a HttpContext property with this information. The IHttpContextAccessor is obtained via dependency injection or directly from the service locator. However, it requires an explicit service collection registration, mapping the IHttpContextAccessor interface to the HttpContextAccessor concrete class, with singleton scope.

Not so short version

System.Web

On classical ASP.NET, the current HTTP context, containing both request and response information, can be accessed anywhere via the omnipresent System.Web.HttpContext.Current static property. Internally, this property uses information stored in the CallContext object representing the current call flow. This CallContext is preserved even when the same flow crosses multiple threads, so it can handle async methods.

ASP.NET Web API

On ASP.NET Web API, obtaining the current HTTP context without having to flow it explicitly on every call is typically achieved with the help of the dependency injection container.
For instance, Autofac provides the RegisterHttpRequestMessage extension method on the ContainerBuilder, which allows classes to have HttpRequestMessage constructor dependencies.
This extension method configures a delegating handler that registers the input HttpRequestMessage instance into the current lifetime scope.

ASP.NET Core

ASP.NET Core uses a different approach. The access to the current context is provided via a IHttpContextAccessor service, containing a single HttpContext property with both a getter and a setter. So, instead of directly injecting the context, the solution is based on injecting an accessor to the context.
This apparently superfluous indirection provides one benefit: the accessor can have singleton scope, meaning that it can be injected into singleton components.
Notice that injecting a per HTTP request dependency, such as the request message, directly into another component is only possible if the component has the same lifetime scope.

In the current ASP.NET Core 1.0.0 implementation, the IHttpContextAccessor service is implemented by the HttpContextAccessor concrete class and must be configured as a singleton.
 

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
}

Notice that this registration is not done by default and must be explicitly performed. If not, any IHttpContextAccessor dependency will result in an activation exception.
On the other hand, no additional configuration is needed to capture the context at the beginning of each request, because this is done automatically.

The following implementation details shed some light on this behavior:

  • Each time a new request starts to be handled, a common IHttpContextFactory reference is used to create the HttpContext. This common reference is obtained by the WebHost during startup and used for all requests.

  • The HttpContextFactory concrete implementation in use is initialized with an optional IHttpContextAccessor implementation. When available, this accessor is assigned each created context. This means that if an accessor is registered on the services, then it will automatically be used to set all created contexts.

  • How can the same accessor instance hold different contexts, one for each call flow? The answer lies in the HttpContextAccessor concrete implementation and its use of AsyncLocal to store the context separately for each logical call flow. It is this characteristic that allows a singleton scoped accessor to provide request scoped contexts.

To conclude:

  • Everywhere the HTTP context is needed, declare an IHttpContextAccessor dependency and use it to fetch the context (see the sketch after this list).

  • Don’t forget to explicitly register the IHttpContextAccessor interface on the service collection, mapping it to the concrete HttpContextAccessor type.

  • Also, don’t forget to make this registration with singleton scope.
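
As a minimal sketch of the first point (the class name and the use of TraceIdentifier are illustrative, not from the original post), a component can declare the dependency and fetch the context whenever it needs it:

public class RequestInfoService
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public RequestInfoService(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    public string GetCurrentTraceIdentifier()
    {
        // HttpContext is null when called outside the scope of a request
        return _httpContextAccessor.HttpContext?.TraceIdentifier;
    }
}

Because the accessor has singleton scope, a component like this can itself be registered as a singleton, which is exactly the benefit described above.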



Andrew Lock: Adding Cache-Control headers to Static Files in ASP.NET Core

Adding Cache-Control headers to Static Files in ASP.NET Core

Thanks to the ASP.NET Core middleware pipeline, it is relatively simple to add additional HTTP headers to your application by using custom middleware. One common use case for this is to add caching headers.

Allowing clients and CDNs to cache your content can have a massive effect on your application's performance. By allowing caching, your application never sees these additional requests and never has to allocate resources to process them, so it is more available for requests that cannot be cached.

In most cases you will find that a significant proportion of the requests to your site can be cached. A typical site serves both dynamically generated content (e.g. in ASP.NET Core, the HTML generated by your Razor templates) and static files (CSS stylesheets, JS, images etc). The static files are typically fixed at the time of publish, and so are perfect candidates for caching.

In this post I'll show how you can add headers to the files served by the StaticFileMiddleware to increase your site's performance. I'll also show how you can add a version tag to your file links, to ensure you don't inadvertently serve stale data.

Note that this is not the only way to add cache headers to your site. You can also use the ResponseCacheAttribute in MVC to decorate Controllers and Actions if you are returning data which is safe to cache.

You could also consider adding caching at the reverse proxy level (e.g. in IIS or Nginx), or use a third party provider like CloudFlare.

Adding Caching to the StaticFileMiddleware

When you create a new ASP.NET Core project from the default template, you will find the StaticFileMiddleware is added early in the middleware pipeline, with a call to UseStaticFiles() in Startup.Configure():

public void Configure(IApplicationBuilder app)  
{
    // logging and exception handler removed for clarity

    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

This enables serving files from the wwwroot folder in your application. The default template contains a number of static files (site.css, bootstrap.css, banner1.svg) which are all served by the middleware when running in development mode. It is these we wish to cache.

Don't we get caching by default?

Before we get to adding caching, let's investigate the default behaviour. The first time you load your application, your browser will fetch the default page and download all the linked assets. Assuming everything is configured correctly, these should all return a 200 - OK response with the file data:

Adding Cache-Control headers to Static Files in ASP.NET Core

As well as the file data, by default the response header will contain ETag and Last-Modified values:

HTTP/1.1 200 OK  
Date: Sat, 15 Oct 2016 14:15:52 GMT  
Content-Type: image/svg+xml  
Last-Modified: Sat, 15 Oct 2016 13:43:34 GMT  
Accept-Ranges: bytes  
ETag: "1d226ea1f827703"  
Server: Kestrel  

The second time a resource is requested from your site, your browser will send the ETag and Last-Modified values in the If-None-Match and If-Modified-Since request headers. This tells the server that it doesn't need to send the data again if the file hasn't changed. If it hasn't changed, the server will send a 304 - Not Modified response, and the browser will use the data it received previously instead.
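
As a sketch, the conditional exchange looks something like this (the header values are illustrative, based on the response above):

GET /images/banner1.svg HTTP/1.1  
If-None-Match: "1d226ea1f827703"  
If-Modified-Since: Sat, 15 Oct 2016 13:43:34 GMT  

HTTP/1.1 304 Not Modified  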

This level of caching comes out-of-the-box with the StaticFileMiddleware, and gives improved performance by reducing the amount of bandwidth required. However, it is important to note that the client is still sending a request to your server - the response has just been optimised. This becomes particularly noticeable with high-latency connections or pages with many files - the browser still has to wait for each response to come back as a 304:

Adding Cache-Control headers to Static Files in ASP.NET Core

The image above uses Chrome's built-in network throttling to emulate a GPRS connection with a very large latency of 500ms. You can see that the first Index page loads in 1.59s, after which the remaining static files are requested. Even though they all return 304 responses using only 250 bytes, the page doesn't actually finish loading until an additional 2.5s have passed!

Adding cache headers to static files

Rather than requiring the browser to always check if a file has changed, we now want it to assume that the file is the same, for a predetermined length of time. This is the purpose of the Cache-Control header.

In ASP.NET Core, you can easily add this header when you configure the StaticFileMiddleware:

using Microsoft.Net.Http.Headers;

app.UseStaticFiles(new StaticFileOptions  
{
    OnPrepareResponse = ctx =>
    {
        const int durationInSeconds = 60 * 60 * 24;
        ctx.Context.Response.Headers[HeaderNames.CacheControl] =
            "public,max-age=" + durationInSeconds;
    }
});

One of the overloads of UseStaticFiles takes a StaticFileOptions parameter, which contains the property OnPrepareResponse. This action can be used to specify any additional processing that should occur before a response is sent. It is passed a single parameter, a StaticFileResponseContext, which contains the current HttpContext and also an IFileInfo property representing the current file.

If set, the Action<StaticFileResponseContext> is called before each successful response, whether a 200 or 304 response, but it won't be called if the file was not found (and instead returns a 404).

In the example provided above, we are setting the Cache-Control header (using the constant values defined in Microsoft.Net.Http.Headers) to cache our files for 24 hours. You can read up on the details of the various associated cache headers here. In this case, we marked the response as public as we want intermediate caches between our server and the user to store the cached file too.
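
With this in place, the response for a static file will include the new header. A sketch of what that might look like - 86,400 seconds being the 24 hours configured above; the other headers are illustrative:

HTTP/1.1 200 OK  
Cache-Control: public,max-age=86400  
Content-Type: text/css  
ETag: "1d226ea1f827703"  
Server: Kestrel  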

If we run our high-latency scenario again, we can see our results in action:

Adding Cache-Control headers to Static Files in ASP.NET Core

Our index page still takes 1.58s to load, but as you can see, all our static files are loaded from the cache, which means no requests to our server, and consequently no latency! We're all done in 1.61s instead of the 4.17s we had previously.

Once the max-age duration we specified has expired, or after the browser evicts the files from its cache, we'll be back to making requests to the server, but until then we can see a massive improvement. What's more, if we use a CDN or there are intermediate cache servers between the user's browser and our server, then they will also be able to serve the cached content, rather than the request having to make it all the way to your server.

Note: Chrome is a bit funny with respect to cache behaviour - if you reload a page using F5 or the Reload button, it will generally not use cached assets. Instead it will pull them down fresh from the server. If you are struggling to see the fruits of your labour, navigate to a different page by clicking a link - you should see the correct caching behaviour then.

Cache busting for file changes

Before we added caching we saw that we return an ETag whenever we serve a static file. This is calculated based on the properties of the file such that if the file changes, the ETag will change. For those interested, this is the snippet of code that is used in ASP.NET Core:

_length = _fileInfo.Length;

DateTimeOffset last = _fileInfo.LastModified;  
// Truncate to the second.
_lastModified = new DateTimeOffset(last.Year, last.Month, last.Day, last.Hour, last.Minute, last.Second, last.Offset).ToUniversalTime();

long etagHash = _lastModified.ToFileTime() ^ _length;  
_etag = new EntityTagHeaderValue('\"' + Convert.ToString(etagHash, 16) + '\"');  

This works great before we add caching - if the ETag hasn't changed we return a 304, otherwise we return a 200 response with the new data.

Unfortunately, once we add caching, we are no longer making a request to the server. The file could have completely changed or been deleted entirely, but if the browser doesn't ask, the server can't tell it!

One common solution around this is to append a querystring to the url when you reference the static file in your markup. As the browser determines uniqueness of requests including the querystring, it treats https://localhost/css/site.css?v=1 as a different file to https://localhost/css/site.css?v=2. You can use this approach by updating any references to the file in your markup whenever you change the file.

While this works, it requires you to find every reference to your static file anywhere on your site whenever you change the file, so it can be a burden to manage. A simpler technique is to have the querystring be calculated based on the content of the file itself, much like an ETag. That way, when the file changes, the querystring will automatically change.

This is not a new technique - Mads Kristensen describes one method of achieving it with ASP.NET 4.X here but with ASP.NET Core we can use the link, script and image Tag Helpers to do the work for us.

It is highly likely that you are actually already using these tag helpers, as they are used in the default templates for exactly this purpose! For example, in _Layout.cshtml, you will find the following link:

<script src="~/js/site.js" asp-append-version="true"></script>  

The Tag Helper is added with the markup asp-append-version="true" and ensures that, when rendered, the link will include a hash of the file as a querystring:

<script src="/js/site.js?v=Ynfdc1vuMNOWZfqTj4N3SPcebazoGXiIPgtfE-b2TO4"></script>  

If the file changes, the SHA256 hash will also change, and the cache will be automatically bypassed! You can add this Tag Helper to img, script and link elements, though there is obviously a degree of overhead as a hash of the file has to be calculated on first request. For files which are very unlikely to ever change (e.g. some images) it may not be worth the overhead to add the helper, but for others it will no doubt prevent quirky behaviour once you add caching!

Summary

In this post we saw the built in caching using ETags provided out of the box with the StaticFileMiddleware. I then showed how to add caching to the requests to prevent unnecessary requests to the server. Finally, I showed how to break out of the cache when the file changes, by using Tag Helpers to add a version querystring to the file request.


Damien Bowden: Angular2 search with ASP.NET Core and Elasticsearch

This article shows how a website search could be implemented using Angular 2, ASP.NET Core and Elasticsearch. Most users expect autocomplete and a flexible search like those of well-known search websites. When the user enters a character in the search input field, an autocomplete request using a shingle token filter with a terms aggregation is used to suggest possible search terms. When a term is selected, a match query request is sent which uses an edge ngram indexed field to search for hits or matches. Server-side paging is then implemented to iterate through the results.

Code: https://github.com/damienbod/Angular2AutoCompleteAspNetCoreElasticsearch

2017.01.07: Updated to csproj, webpack 2.2.0-rc.3, angular 2.4.1

ASP.NET Core server side search

The Elasticsearch index and queries were built using the ideas from these 2 excellent blogs, bilyachat and qbox.io. ElasticsearchCrud is used as the dotnet core client for Elasticsearch. To set up the index, a mapping needs to be defined, as well as the index itself with the required analysis settings: filters, analyzers and tokenizers. See the Elasticsearch documentation for detailed information.

In this example, 2 custom analyzers are defined, one for the autocomplete and one for the search. The autocomplete analyzer uses a custom shingle token filter called autocompletefilter, a stopwords token filter, lowercase token filter and a stemmer token filter. The edge_ngram_search analyzer uses an edge ngram token filter and a lowercase filter.

private IndexDefinition CreateNewIndexDefinition()
{
	return new IndexDefinition
	{
		IndexSettings =
		{
			Analysis = new Analysis
			{
				Filters =
				{
					CustomFilters = new List<AnalysisFilterBase>
					{
						new StemmerTokenFilter("stemmer"),
						new ShingleTokenFilter("autocompletefilter")
						{
							MaxShingleSize = 5,
							MinShingleSize = 2
						},
						new StopTokenFilter("stopwords"),
						new EdgeNGramTokenFilter("edge_ngram_filter")
						{
							MaxGram = 20,
							MinGram = 2
						}
					}
				},
				Analyzer =
				{
					Analyzers = new List<AnalyzerBase>
					{
						new CustomAnalyzer("edge_ngram_search")
						{
							Tokenizer = DefaultTokenizers.Standard,
							Filter = new List<string> {DefaultTokenFilters.Lowercase, "edge_ngram_filter"},
							CharFilter = new List<string> {DefaultCharFilters.HtmlStrip}
						},
						new CustomAnalyzer("autocomplete")
						{
							Tokenizer = DefaultTokenizers.Standard,
							Filter = new List<string> {DefaultTokenFilters.Lowercase, "autocompletefilter", "stopwords", "stemmer"},
							CharFilter = new List<string> {DefaultCharFilters.HtmlStrip}
						},
						new CustomAnalyzer("default")
						{
							Tokenizer = DefaultTokenizers.Standard,
							Filter = new List<string> {DefaultTokenFilters.Lowercase, "stopwords", "stemmer"},
							CharFilter = new List<string> {DefaultCharFilters.HtmlStrip}
						}
						
					   
					}
				}
			}
		},
	};
}

The PersonCity class is used to add and search for documents in Elasticsearch. The default index and type for this class using ElasticsearchCrud are personcitys and personcity.

public class PersonCity
{
	public long Id { get; set; }
	public string Name { get; set; }
	public string FamilyName { get; set; }
	public string Info { get; set; }
	public string CityCountry { get; set; }
	public string Metadata { get; set; }
	public string Web { get; set; }
	public string Github { get; set; }
	public string Twitter { get; set; }
	public string Mvp { get; set; }
}

A PersonCityMapping class is defined so that the required mapping for the PersonCityMappingDto class can be applied to the personcitys index and the personcity type. This class overrides ElasticsearchMapping to define the index and type.

using System;
using ElasticsearchCRUD;

namespace SearchComponent
{
    public class PersonCityMapping : ElasticsearchMapping
    {
        public override string GetIndexForType(Type type)
        {
            return "personcitys";
        }

        public override string GetDocumentType(Type type)
        {
            return "personcity";
        }
    }
}

The PersonCityMapping class is then used to map the C# type PersonCityMappingDto to the same default index used by the PersonCity class.

public PersonCitySearchProvider()
{
	_elasticsearchMappingResolver.AddElasticSearchMappingForEntityType(typeof(PersonCityMappingDto), new PersonCityMapping());
	_context = new ElasticsearchContext(ConnectionString, new ElasticsearchSerializerConfiguration(_elasticsearchMappingResolver))
	{
		TraceProvider = new ConsoleTraceProvider()
	};
}

A specific mapping DTO class is used to define the mapping in Elasticsearch. This class is required if a non-default mapping is needed in Elasticsearch. The class uses the ElasticsearchString attribute to define a copy mapping: the field in Elasticsearch should be copied to the autocomplete and searchfield fields when adding a new document. The searchfield and autocomplete fields use the two analyzers which were defined in the index settings. This class is only used to define the type mapping in Elasticsearch.

using ElasticsearchCRUD.ContextAddDeleteUpdate.CoreTypeAttributes;

namespace SearchComponent
{
    public class PersonCityMappingDto
    {
        public long Id { get; set; }

        [ElasticsearchString(CopyToList = new[] { "autocomplete", "searchfield" })]
        public string Name { get; set; }

        [ElasticsearchString(CopyToList = new[] { "autocomplete", "searchfield" })]
        public string FamilyName { get; set; }

        [ElasticsearchString(CopyToList = new[] { "autocomplete", "searchfield" })]
        public string Info { get; set; }

        [ElasticsearchString(CopyToList = new[] { "autocomplete", "searchfield" })]
        public string CityCountry { get; set; }

        [ElasticsearchString(CopyToList = new[] { "autocomplete", "searchfield" })]
        public string Metadata { get; set; }

        public string Web { get; set; }

        public string Github { get; set; }

        public string Twitter { get; set; }

        public string Mvp { get; set; }

        [ElasticsearchString(Analyzer = "edge_ngram_search", SearchAnalyzer = "standard", TermVector = TermVector.yes)]
        public string searchfield { get; set; }

        [ElasticsearchString(Analyzer = "autocomplete")]
        public string autocomplete { get; set; }
    }
}

The CreateIndex method creates a new index and mapping in Elasticsearch.

public void CreateIndex()
{			
	_context.IndexCreate<PersonCityMappingDto>(CreateNewIndexDefinition());
}

The Elasticsearch settings can be viewed using the HTTP GET:

http://localhost:9200/_settings

{
	"personcitys": {
		"settings": {
			"index": {
				"creation_date": "1477642409728",
				"analysis": {
					"filter": {
						"stemmer": {
							"type": "stemmer"
						},
						"autocompletefilter": {
							"max_shingle_size": "5",
							"min_shingle_size": "2",
							"type": "shingle"
						},
						"stopwords": {
							"type": "stop"
						},
						"edge_ngram_filter": {
							"type": "edgeNGram",
							"min_gram": "2",
							"max_gram": "20"
						}
					},
					"analyzer": {
						"edge_ngram_search": {
							"filter": ["lowercase",
							"edge_ngram_filter"],
							"char_filter": ["html_strip"],
							"type": "custom",
							"tokenizer": "standard"
						},
						"autocomplete": {
							"filter": ["lowercase",
							"autocompletefilter",
							"stopwords",
							"stemmer"],
							"char_filter": ["html_strip"],
							"type": "custom",
							"tokenizer": "standard"
						},
						"default": {
							"filter": ["lowercase",
							"stopwords",
							"stemmer"],
							"char_filter": ["html_strip"],
							"type": "custom",
							"tokenizer": "standard"
						}
					}
				},
				"number_of_shards": "5",
				"number_of_replicas": "1",
				"uuid": "TxS9hdy7SmGPr4FSSNaPiQ",
				"version": {
					"created": "2040199"
				}
			}
		}
	}
}

The Elasticsearch mapping can be viewed using the HTTP GET:

http://localhost:9200/_mapping

{
	"personcitys": {
		"mappings": {
			"personcity": {
				"properties": {
					"autocomplete": {
						"type": "string",
						"analyzer": "autocomplete"
					},
					"citycountry": {
						"type": "string",
						"copy_to": ["autocomplete",
						"searchfield"]
					},
					"familyname": {
						"type": "string",
						"copy_to": ["autocomplete",
						"searchfield"]
					},
					"github": {
						"type": "string"
					},
					"id": {
						"type": "long"
					},
					"info": {
						"type": "string",
						"copy_to": ["autocomplete",
						"searchfield"]
					},
					"metadata": {
						"type": "string",
						"copy_to": ["autocomplete",
						"searchfield"]
					},
					"mvp": {
						"type": "string"
					},
					"name": {
						"type": "string",
						"copy_to": ["autocomplete",
						"searchfield"]
					},
					"searchfield": {
						"type": "string",
						"term_vector": "yes",
						"analyzer": "edge_ngram_search",
						"search_analyzer": "standard"
					},
					"twitter": {
						"type": "string"
					},
					"web": {
						"type": "string"
					}
				}
			}
		}
	}
}

Now documents can be added using the PersonCity class which has no Elasticsearch definitions.

Autocomplete search

A terms aggregation search is used for the autocomplete request. The terms aggregation uses the autocomplete field which only exists in Elasticsearch. A list of strings is returned to the user from this request.

public IEnumerable<string> AutocompleteSearch(string term)
{
	var search = new Search
	{
		Size = 0,
		Aggs = new List<IAggs>
		{
			new TermsBucketAggregation("autocomplete", "autocomplete")
			{
				Order= new OrderAgg("_count", OrderEnum.desc),
				Include = new IncludeExpression(term + ".*")
			}
		}
	};

	var items = _context.Search<PersonCity>(search);
	var aggResult = items.PayloadResult.Aggregations.GetComplexValue<TermsBucketAggregationsResult>("autocomplete");
	IEnumerable<string> results = aggResult.Buckets.Select(t =>  t.Key.ToString());
	return results;
}

The request is sent to Elasticsearch as follows:

POST http://localhost:9200/personcitys/personcity/_search HTTP/1.1
Content-Type: application/json
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
Content-Length: 124
Host: localhost:9200

{
	"size": 0,
	"aggs": {
		"autocomplete": {
			"terms": {
				"field": "autocomplete",
				"order": {
					"_count": "desc"
				},
				"include": {
					"pattern": "as.*"
				}
			}
		}
	}
}

Search Query

When an autocomplete string is selected, a search request is sent to Elasticsearch using a Match Query on the searchfield field, which returns 10 hits starting from document 0. When a paging request is sent, the from value is a multiple of 10, depending on the page.

public PersonCitySearchResult Search(string term, int from)
{
	var personCitySearchResult = new PersonCitySearchResult();
	var search = new Search
	{
		Size = 10,
		From = from,
		Query = new Query(new MatchQuery("searchfield", term))
	};

	var results = _context.Search<PersonCity>(search);

	personCitySearchResult.PersonCities = results.PayloadResult.Hits.HitsResult.Select(t => t.Source);
	personCitySearchResult.Hits = results.PayloadResult.Hits.Total;
	personCitySearchResult.Took = results.PayloadResult.Took;
	return personCitySearchResult;
}

The search query is sent as follows:

POST http://localhost:9200/personcitys/personcity/_search HTTP/1.1
Content-Type: application/json
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
Content-Length: 74
Host: localhost:9200

{
	"from": 0,
	"size": 10,
	"query": {
		"match": {
			"searchfield": {
				"query": "asp.net"
			}
		}
	}
}

Angular 2 client side search

The Angular 2 client uses an autocomplete input control and an ngFor to display all the search results. Bootstrap paging is used if more than 10 results are found for the search term.

<div class="panel-group">

    <personcitysearch 
      *ngIf="IndexExists"
      (onTermSelectedEvent)="onTermSelectedEvent($event)"
      [disableAutocomplete]="!IndexExists">
    </personcitysearch>
    
    <em *ngIf="PersonCitySearchData.took > 0" style="font-size:smaller; color:lightgray;">
      <span>Hits: {{PersonCitySearchData.hits}}</span>
    </em><br /> 
    <br />

    <div  *ngFor="let personCity of PersonCitySearchData.personCities">  
        <b><span>{{personCity.name}} {{personCity.familyName}} </span></b> 
        <a *ngIf="personCity.twitter"  href="{{personCity.twitter}}">
          <img src="assets/socialTwitter.png" />
        </a>
        <a *ngIf="personCity.github" href="{{personCity.github}}">
          <img src="assets/github.png" />
        </a>
        <a *ngIf="personCity.mvp" href="{{personCity.mvp}}">
          <img src="assets/mvp.png" width="24" />
        </a><br />

        <em style="font-size:large"><a href="{{personCity.web}}">{{personCity.web}}</a></em><br />  
        <em><span>{{personCity.metadata}}</span></em><br />      
        <span>{{personCity.info}}</span><br />
        <br />
        <br />

    </div>

    <ul class="pagination" *ngIf="ShowPaging">
        <li><a (click)="PreviousPage()" >&laquo;</a></li>

        <li><a *ngFor="let page of Pages" (click)="LoadDataForPage(page)">{{page}}</a></li>

        <li><a (click)="NextPage()">&raquo;</a></li>
    </ul>
</div>

The personcitysearch Angular 2 component implements the autocomplete functionality using the ng2-completer component. When a character is entered into the input, an HTTP request is sent to the server, which in turn sends a request to the Elasticsearch server.

import { Component, Inject, EventEmitter, Input, Output, OnInit, AfterViewInit, ElementRef } from '@angular/core';
import { Http, Response } from "@angular/http";

import { Subscription } from 'rxjs/Subscription';
import { Observable } from 'rxjs/Observable';
import { Router } from  '@angular/router';

import { Configuration } from '../app.constants';
import { PersoncityautocompleteDataService } from './personcityautocompleteService';
import { PersonCity } from '../model/personCity';

import { CompleterService, CompleterItem } from 'ng2-completer';

import './personcityautocomplete.component.scss';

@Component({
    selector: 'personcityautocomplete',
  template: `
<ng2-completer [dataService]="dataService" (selected)="onPersonCitySelected($event)" [minSearchLength]="0" [disableInput]="disableAutocomplete"></ng2-completer>

`
})
    
export class PersoncityautocompleteComponent implements OnInit    {

    constructor(private completerService: CompleterService, private http: Http, private _configuration: Configuration) {

        this.dataService = new PersoncityautocompleteDataService(http, _configuration); ////completerService.local("name, info, familyName", 'name');
    }

    @Output() bindModelPersonCityChange = new EventEmitter<PersonCity>();
    @Input() bindModelPersonCity: PersonCity;
    @Input() disableAutocomplete: boolean = false;

    private searchStr: string;
    private dataService: PersoncityautocompleteDataService;

    ngOnInit() {
        console.log("ngOnInit PersoncityautocompleteComponent");
    }

    public onPersonCitySelected(selected: CompleterItem) {
        console.log(selected);
        this.bindModelPersonCityChange.emit(selected.originalObject);
    }
}

And the data service for the CompleterService which is used by the ng2-completer component:

import { Http, Response } from "@angular/http";
import { Subject } from "rxjs/Subject";

import { CompleterData, CompleterItem } from 'ng2-completer';
import { Configuration } from '../app.constants';

export class PersoncityautocompleteDataService extends Subject<CompleterItem[]> implements CompleterData {
    constructor(private http: Http, private _configuration: Configuration) {
        super();

        this.actionUrl = _configuration.Server + 'api/personcity/querystringsearch/';
    }

    private actionUrl: string;

    public search(term: string): void {
        this.http.get(this.actionUrl + term)
            .map((res: Response) => {
                // Convert the result to CompleterItem[]
                let data = res.json();
                let matches: CompleterItem[] = data.map((personcity: any) => {
                    return {
                        title: personcity.name,
                        description: personcity.familyName + ", " + personcity.cityCountry,
                        originalObject: personcity
                    }
                });
                this.next(matches);
            })
            .subscribe();
    }

    public cancel() {
        // Handle cancel
    }
}
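
For context, the server-side endpoint behind the actionUrl above might look something like the following sketch. The controller and the QueryStringSearch provider method are assumptions for illustration - the real implementation is in the linked repository:

[Route("api/[controller]")]
public class PersonCityController : Controller
{
    private readonly PersonCitySearchProvider _searchProvider;

    public PersonCityController(PersonCitySearchProvider searchProvider)
    {
        _searchProvider = searchProvider;
    }

    [HttpGet("querystringsearch/{term}")]
    public IActionResult Search(string term)
    {
        // returns matching PersonCity documents, which the Angular 2
        // data service above maps to CompleterItem objects
        return Ok(_searchProvider.QueryStringSearch(term));
    }
}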

The HomeSearchComponent implements the paging for the search results and also displays the data. The SearchDataService implements the API calls to the MVC ASP.NET Core API service. The paging CSS uses Bootstrap to display the data.

import { Observable } from 'rxjs/Observable';
import { Component, OnInit } from '@angular/core';
import { Http } from '@angular/http';

import { SearchDataService } from '../services/searchDataService';
import { PersonCity } from '../model/personCity';
import { PersonCitySearchResult } from '../model/personCitySearchResult';
import { PersoncitysearchComponent } from '../personcitysearch/personcitysearch.component';

@Component({
    selector: 'homesearchcomponent',
    templateUrl: 'homesearch.component.html',
    providers: [SearchDataService]
})

export class HomeSearchComponent implements OnInit {

    public message: string;
    public PersonCitySearchData: PersonCitySearchResult;
    public SelectedTerm: string;
    public IndexExists: boolean = false;

    constructor(private _dataService: SearchDataService, private _personcitysearchComponent: PersoncitysearchComponent) {
        this.message = "Hello from HomeSearchComponent constructor";
        this.SelectedTerm = "none";
        this.PersonCitySearchData = new PersonCitySearchResult();
    }

    public onTermSelectedEvent(term: string) {
        this.SelectedTerm = term; 
        this.findDataForSearchTerm(term, 0)
    }

    private findDataForSearchTerm(term: string, from: number) {
        console.log("findDataForSearchTerm:" + term);
        this._dataService.FindAllForTerm(term, from)
            .subscribe((data) => {
                console.log(data)
                this.PersonCitySearchData = data;
                this.configurePagingDisplay(this.PersonCitySearchData.hits);
            },
            error => console.log(error),
            () => {
                console.log('PersonCitySearch:findDataForSearchTerm completed');
            }
            );
    }

    ngOnInit() {
        this._dataService
            .IndexExists()
            .subscribe(data => this.IndexExists = data,
            error => console.log(error),
            () => console.log('Get IndexExists complete'));
    }

    public ShowPaging: boolean = false;
    public CurrentPage: number = 0;
    public TotalHits: number = 0;
    public PagesCount: number = 0;
    public Pages: number[] = [];

    public LoadDataForPage(page: number) {
        var from = page * 10;
        this.findDataForSearchTerm(this.SelectedTerm, from)
        this.CurrentPage = page;
    }

    public NextPage() {
        var page = this.CurrentPage;
        console.log("TotalHits" + this.TotalHits + "NextPage: " + ((this.CurrentPage + 1) * 10) + "CurrentPage" + this.CurrentPage );

        if (this.TotalHits > ((this.CurrentPage + 1) * 10)) {
            page = this.CurrentPage + 1;
        }

        this.LoadDataForPage(page);
    }

    public PreviousPage() {
        var page = this.CurrentPage;

        if (this.CurrentPage > 0) {
            page = this.CurrentPage - 1;
        }

        this.LoadDataForPage(page);
    }

    private configurePagingDisplay(hits: number) {
        this.PagesCount = Math.floor(hits / 10);

        this.Pages = [];
        for (let i = 0; i <= this.PagesCount; i++) {
            this.Pages.push((i));
        }
        
        this.TotalHits = hits;

        if (this.PagesCount <= 1) {
            this.ShowPaging = false;
        } else {
            this.ShowPaging = true;
        }
    }
}

Now when characters are entered into the search input, records are searched for and returned with the number of hits for the term.

searchaspnetcoreangular2_01

The paging controls can also be used to do the server-side paging.

searchaspnetcoreangular2_02

The search functions like the web searches we have come to expect. If different results or searches are required, the server-side index creation and query types can be changed as needed. For example, the autocomplete suggestions could be replaced with a fuzzy search, or a query string search; a fuzzy variant is sketched below.
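
A sketch of what a fuzzy variant of the earlier match query might look like - the fuzziness parameter is standard Elasticsearch, and the rest mirrors the request shown above:

{
	"from": 0,
	"size": 10,
	"query": {
		"match": {
			"searchfield": {
				"query": "aspnet",
				"fuzziness": "AUTO"
			}
		}
	}
}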

Links:

https://github.com/oferh/ng2-completer

https://github.com/damienbod/Angular2WebpackVisualStudio

https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html

https://www.elastic.co/products/elasticsearch

https://www.nuget.org/packages/ElasticsearchCRUD/

https://github.com/damienbod/ElasticsearchCRUD

http://www.bilyachat.com/2015/07/search-like-google-with-elasticsearch.html

http://stackoverflow.com/questions/29753971/elasticsearch-completion-suggest-search-with-multiple-word-inputs

http://rea.tech/implementing-autosuggest-in-elasticsearch/

https://qbox.io/blog/an-introduction-to-ngrams-in-elasticsearch

https://www.elastic.co/guide/en/elasticsearch/guide/current/_ngrams_for_partial_matching.html



Andrew Lock: Accessing services when configuring MvcOptions in ASP.NET Core

Accessing services when configuring MvcOptions in ASP.NET Core

This post is a follow-on to an article by Steve Gordon I read the other day on how to HTML encode deserialized JSON content from a request body. It's an interesting post, but it spurred me to think about a tangential issue - using injected services when configuring MvcOptions.

The setting - Steve's post in brief

I recommend you read Steve's post first, but the key points to this discussion are described below.

Steve wanted to ensure that HTML POSTed inside a JSON string property was automatically HTML encoded, so that potentially malicious script couldn't be stored in the database. This wouldn't necessarily be something you'd always want to do, but it worked for his use case. It ensured that a string such as

{
  "text": "<script>alert('got you!')</script>" 
}

was automatically converted to

{
  "text": "&lt;script&gt;alert(&#x27;got you!&#x27;)&lt;/script&gt;" 
}

by the time it was received in an Action method. He describes creating a custom ContractResolver and ValueProvider to override the CreateProperties method and automatically encode any string properties.

The section I am interested in is where he wires up his new resolver and provider using a small extension method UseHtmlEncodeJsonInputFormatter. This requires providing a number of services in order to correctly create the JsonInputFormatter. I have reproduced his extension method below:

public static class MvcOptionsExtensions  
{
    public static void UseHtmlEncodeJsonInputFormatter(this MvcOptions opts, ILogger<MvcOptions> logger, ObjectPoolProvider objectPoolProvider)
    {
        opts.InputFormatters.RemoveType<JsonInputFormatter>();

        var serializerSettings = new JsonSerializerSettings
        {
            ContractResolver = new HtmlEncodeContractResolver()
        };

        var jsonInputFormatter = new JsonInputFormatter(logger, serializerSettings, ArrayPool<char>.Shared, objectPoolProvider);

        opts.InputFormatters.Add(jsonInputFormatter);
    }
}

For the full details of this method, check out his post. For our discussion, all that's necessary is to appreciate that we are modifying the MvcOptions by adding a new JsonInputFormatter, and that to do so we need instances of an ILogger<T> and ObjectPoolProvider.

The need for these services is a little problematic - we will be calling this extension method when we are first configuring MVC, within the ConfigureServices method, but at that point we don't have an easy way of accessing other configured services.

The approach Steve used was to build a service provider, and then create the required services using it, as shown below:

public void ConfigureServices(IServiceCollection services)  
{
    var sp = services.BuildServiceProvider();
    var logger = sp.GetService<ILoggerFactory>();
    var objectPoolProvider = sp.GetService<ObjectPoolProvider>();

    services
        .AddMvc(options =>
        {
            options.UseHtmlEncodeJsonInputFormatter(
                logger.CreateLogger<MvcOptions>(), 
                objectPoolProvider);
        });
}

This approach works, but it's not the cleanest, and luckily there's a handy alternative!

What does AddMvc actually do?

Before I get into the cleaned-up approach, I just want to take a quick diversion into what the AddMvc method does. In particular, I'm interested in the overload that takes an Action<MvcOptions> setup action.

Taking a look at the source code, you can see that it is actually pretty simple:

public static IMvcBuilder AddMvc(this IServiceCollection services, Action<MvcOptions> setupAction)  
{
    // precondition checks removed for brevity
    var builder = services.AddMvc();
    builder.Services.Configure(setupAction);

    return builder;
}

This overload calls AddMvc() without an action, which returns an IMvcBuilder. We then call Configure with the Action<> to configure an instance of MvcOptions.

ConfigureOptions to the rescue!

When I saw the Configure call, I immediately thought of a post I wrote previously, about using ConfigureOptions to inject services when configuring IOptions implementations.

Using this technique, we can avoid having to call BuildServiceProvider inside the ConfigureServices method, and can leverage dependency injection instead by creating an instance of IConfigureOptions<MvcOptions>.

Implementing the interface is simply a case of calling our already defined extension method, from within the required Configure method:

public class ConfigureMvcOptions : IConfigureOptions<MvcOptions>  
{
    private readonly ILogger<MvcOptions> _logger;
    private readonly ObjectPoolProvider _objectPoolProvider;
    public ConfigureMvcOptions(ILogger<MvcOptions> logger, ObjectPoolProvider objectPoolProvider)
    {
        _logger = logger;
        _objectPoolProvider = objectPoolProvider;
    }

    public void Configure(MvcOptions options)
    {
        options.UseHtmlEncodeJsonInputFormatter(_logger, _objectPoolProvider);
    }
}

We can then update our configuration method to use the basic AddMvc() method and inject our new configuration class:

public void ConfigureServices(IServiceCollection services)  
{
    // Add framework services.
    services.AddMvc();
    services.AddSingleton<IConfigureOptions<MvcOptions>, ConfigureMvcOptions>();
}

With this configuration in place, we have the same behaviour as before, just with some nicer wiring in the Startup class! For a more detailed explanation of why this works, check out my previous post.

Summary

This post was a short follow-up to a post by Steve Gordon in which he showed how to create a custom JsonInputFormatter. I showed how you can use IConfigureOptions<> to use dependency injection when adding MvcOptions as part of your MVC configuration.


Andrew Lock: Resource-based authorisation in ASP.NET Core

Resource-based authorisation in ASP.NET Core

In this next post on authorisation in ASP.NET Core, we look at how you can secure resources based on properties of that resource itself.

In a previous post, we saw how you could create a policy that protects a resource based on properties of the user trying to access it. We used Claims-based identity to verify whether they had the appropriate claim values, and granted or denied access based on these as appropriate.

In some cases, it may not be possible to decide whether access is appropriate based on the current user's claims alone. For example, we may allow users to edit documents that they created, but only access a read-only view of documents created by others. In that case, we not only need an authenticated user, we also need to know who created the document we are attempting to access.

In this post I'll show how we can use the IAuthorizationService to take into account the resource we are accessing when determining whether a user is authorised to access it.

Resource-based Authorisation

As an example, we will consider the authorisation policy from a previous post, "CanAccessVIPArea", in which we created a policy using a custom AuthorizationRequirement with multiple AuthorizationHandlers to determine if you were allowed access to the protected action.

One of the handlers we created satisfied the requirement that employees of the airline that owns the lounge are allowed to use the VIP area. In order to verify this, we had to provide a fixed string to our policy when it was initially configured:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMvc();

    services.AddAuthorization(options =>
    {
        options.AddPolicy(
            "CanAccessVIPArea",
            policyBuilder => policyBuilder.AddRequirements(
                new IsVipRequirement("British Airways"));
    });
}

An obvious problem with this is that our policy only works for a single airline. We now need a separate policy for each new airline, and the 'Lounge' method must be secured by the correct policy. This may be acceptable if there are only a few airlines, but it is an obvious source of potential errors.

Instead, a better solution would be to take into consideration the Lounge that is being accessed when determining whether a particular employee can access it. This is a perfect use case for resource-based authorisation.

Defining the resource

First of all, we will need to define the 'Lounge' resource that we are attempting to protect:

public class Lounge  
{
    public string AirlineName {get; set;}
    public bool IsOpen {get; set;}
    public int SeatingCapacity {get; set;}
    public int NumberOfOccupants {get; set;}
}

This is a fairly self-explanatory example - the Lounge belongs to the single airline defined in AirlineName.

Authorising using IAuthorizationService

Now we have a resource, we need some way of passing it to the authorisation handlers. Previously, we decorated our Controllers and Actions with [Authorize("CanAccessVIPArea")] to declaratively authorise the Action being executed. Unfortunately, we have no way of passing a Lounge object to the AuthorizeAttribute. Instead, we will use imperative authorisation by calling the IAuthorizationService directly.

In the previous post on UI modification I showed how you can inject the IAuthorizationService into your Views, to dynamically authorise a User for the purpose of hiding inaccessible UI elements. We can use a similar technique in our controllers whenever we need to do resource-based authorisation:

public class VIPLoungeController : Controller  
{
    private readonly IAuthorizationService _authorizationService;

    public VIPLoungeController(IAuthorizationService authorizationService)
    {
        _authorizationService = authorizationService;
    }

    [HttpGet]
    public async Task<IActionResult> ViewTheFancySeatsInTheLounge(int loungeId)
    {
       // get the lounge object from somewhere
       var lounge = LoungeRepository.Find(loungeId);

       if (await _authorizationService.AuthorizeAsync(User, lounge, "CanAccessVIPArea"))
       {
           return View();
       }
       else
       {
           return new ChallengeResult();
       }
    }
}

We use dependency injection to inject an instance of the IAuthorizationService into our controller for use in our action method. Next we obtain an instance of our resource from somewhere (e.g. loaded from the database based on an id) and provide the Lounge object as a parameter to AuthorizeAsync, along with the policy we wish to apply. If the authorisation is successful, we display the View, otherwise we return a ChallengeResult. The ChallengeResult can return a 401 or 403 response, depending on the authentication state of the user, which in turn may be captured further down the pipeline and turned into a 302 redirect to the login page. For more details on authentication, check out my previous posts.

Note that we are no longer using the AuthorizeAttribute on our method; the authorisation is part of the execution of our action, rather than occurring before it can run.

Updating the AuthorizationPolicy

Now that we have switched to resource-based authorisation, we no longer need to define an airline name on our AuthorizationRequirement, or when we configure the policy. We can simplify our requirement to a simple marker class:

public class IsVipRequirement : IAuthorizationRequirement  { }  

and update our policy definition accordingly:

services.AddAuthorization(options =>  
    {
        options.AddPolicy(
            "CanAccessVIPArea",
            policyBuilder => policyBuilder.AddRequirements(
                new IsVipRequirement()));
    });

Resource-based Authorisation Handlers

The last things we need to update are our AuthorizationHandlers, which can now make use of the provided resource. The only handler from the previous post that needs updating is the IsAirlineEmployeeAuthorizationHandler, which we can now modify to use the AirlineName defined on our Lounge object, instead of it being hardcoded in the AuthorizationRequirement at startup:

public class IsAirlineEmployeeAuthorizationHandler : AuthorizationHandler<IsVipRequirement, Lounge>  
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context, 
        IsVipRequirement requirement, 
        Lounge lounge)
    {
        if (context.User.HasClaim(claim =>
            claim.Type == "EmployeeNumber" && claim.Issuer == lounge.AirlineName))
        {
            context.Succeed(requirement);
        }
        return Task.FromResult(0);
    }
}

Two things have changed here from our previous implementation. First, we now inherit from AuthorizationHandler<IsVipRequirement, Lounge>, instead of AuthorizationHandler<IsVipRequirement>; the base class handles extracting the provided resource from the authorisation context. Secondly, the HandleRequirementAsync method now takes a Lounge parameter, which the base AuthorizationHandler<,> automatically provides from the context. We are then free to use the lounge's details in the handler to authorise the employee.
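One detail worth repeating from the earlier post, as it is easy to miss: the handler still needs to be registered with the DI container so the authorisation service can discover it. A minimal sketch - the singleton lifetime here is a typical choice, not something mandated by the original post:

services.AddSingleton<IAuthorizationHandler, IsAirlineEmployeeAuthorizationHandler>();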

Now that we have access to the resource object, we could also add handlers to check whether the Lounge is currently open, or whether it has reached seating capacity, but I'll leave that as an exercise for the dedicated!
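As a starting point for that exercise, here is a minimal sketch of what such a handler might look like. The class name and the decision to call context.Fail() are my assumptions, not from the original post; Fail() is used because multiple handlers for the same requirement combine with OR semantics, and a closed or full lounge should veto access regardless of what the other handlers decide:

public class IsLoungeOpenAuthorizationHandler : AuthorizationHandler<IsVipRequirement, Lounge>  
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context, 
        IsVipRequirement requirement, 
        Lounge lounge)
    {
        // Fail always wins over Succeed, so this handler acts as a veto
        // when the lounge is closed or already at seating capacity.
        if (!lounge.IsOpen || lounge.NumberOfOccupants >= lounge.SeatingCapacity)
        {
            context.Fail();
        }
        return Task.FromResult(0);
    }
}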

When to use resource-based authorisation?

Now you have seen two techniques for performing authorisation - declarative, attribute-based authorisation, and imperative, IAuthorizationService-based authorisation - you may be wondering which approach to use and when. The simple answer is to use resource-based authorisation only when you have to.

In our case, with the attribute-based approach, we had to hard-code the name of each airline into our AuthorizationRequirement, which was not very scalable and in practice meant that we couldn't correctly protect our endpoint. In this case resource-based authorisation was pretty much required.

However, moving to the resource-based approach has some downsides. The code in our Action is more complicated and it is less obvious what it is doing. Also, when you use an AuthorizeAttribute, the authorisation code is run right at the beginning of the pipeline, before all other filters and model binding occurs. In contrast, the whole of the pipeline runs when using resource-based auth, even if it turns out the user is ultimately not authorised. This may not be a problem, but it is something to bear in mind and be aware of when choosing your approach.

In general, your application will probably need to use both techniques; it is just a matter of choosing the correct one in each instance.

Summary

In this post I updated an existing authorisation example to use resource-based authorisation. I showed how to call the IAuthorizationService to perform authorisation based on a document or resource that is being protected. Finally, I updated an AuthorizationHandler to derive from the generic AuthorizationHandler<,> to access the resource at runtime.


Pedro Félix: Should I PUT or should I POST? (Darling you gotta let me know)

(yes, it doesn’t rhyme, however I couldn’t resist the association)

Selecting the proper methods (e.g. GET, POST, PUT, …) to use when designing HTTP-based APIs is typically a subject of much debate, and eventually some bike-shedding. In this post I briefly present the rules that I normally follow when faced with this design task.

Don’t go against the HTTP specification

First and foremost, make sure the properties of the chosen methods aren’t violated in the scenario under analysis. The typical offender is using GET for an interaction that requests a state change on the server.
This is because GET is required to have the safe property, which RFC 7231 defines as

Request methods are considered “safe” if their defined semantics are essentially read-only; i.e., the client does not request, and does not expect, any state change on the origin server as a result of applying a safe method to a target resource.

Another example is choosing PUT for requests that aren’t idempotent, such as appending an item to a collection.
The idempotent property is defined by RFC 7231 as

A request method is considered “idempotent” if the intended effect on the server of multiple identical requests with that method is the same as the effect for a single such request.

Violating these properties is harmful because there may exist system components whose correct behavior depends on them being true. An example is a crawler program that freely follows all GET links in a document, assuming that no state change will be performed by these requests, and that ends up changing the system state.

Another example is an intermediary (e.g. reverse proxy) that automatically retries any failed PUT request (e.g. timeout), assuming they are idempotent. If the PUT is appending items to a collection (append is not idempotent), and the first PUT request was successfully performed and only the response message was lost, then the retry will end up adding two replicated items to the collection.

This violation can also have security implications. For instance, most server frameworks don’t protect GET requests against CSRF (Cross-Site Request Forgery) because this method is not supposed to change state and reads are already protected by the same-origin browser policy.

Take advantage of the method properties

After ensuring correctness, i.e. that requests don’t violate any property of the chosen methods, we can invert the analysis and check whether another method better fits the intended functionality. With correctness ensured, at this stage the main concern is optimization.

For instance, if a request defines the complete state for a resource and is idempotent, perhaps a PUT is a better fit than a POST. This is not because a POST would produce incorrect behavior, but because using a PUT may induce better system properties. For instance, an intermediary (e.g. reverse proxy or framework middleware) may automatically retry failed requests, thereby providing some fault recovery.
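To make this concrete, here is a sketch in ASP.NET Core terms - the Document resource and the DocumentStore helper are invented purely for illustration:

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

// Hypothetical resource and storage, stubbed for illustration only.
public class Document
{
    public string Title { get; set; }
    public string Body { get; set; }
}

public static class DocumentStore
{
    private static readonly Dictionary<int, Document> Docs = new Dictionary<int, Document>();
    public static void Save(int id, Document doc) { Docs[id] = doc; }
    public static int Add(Document doc) { var id = Docs.Count + 1; Docs[id] = doc; return id; }
}

[Route("api/documents")]
public class DocumentsController : Controller
{
    // PUT: the request carries the complete state of the resource at this URI.
    // Replaying the request leaves the server in the same state, so an
    // intermediary may safely retry it on failure (idempotent).
    [HttpPut("{id}")]
    public IActionResult Replace(int id, [FromBody] Document document)
    {
        DocumentStore.Save(id, document);
        return NoContent();
    }

    // POST: appends a new item to the collection. Retrying a request whose
    // response was lost would create a duplicate, so it must not be
    // treated as idempotent.
    [HttpPost]
    public IActionResult Append([FromBody] Document document)
    {
        var id = DocumentStore.Add(document);
        return CreatedAtAction(nameof(Replace), new { id }, document);
    }
}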

When nothing else fits, use POST

Contrary to some HTTP myths, POST is not solely intended to create resources. In fact, the new RFC 7231 states

The POST method requests that the target resource process the representation enclosed in the request according to the resource’s own specific semantics

The “according to the resource’s own specific semantics” effectively allows us to use POST for requests with any semantics. However, the fact that it is allowed doesn’t mean we always should do it. Again, if another method (e.g. GET or PUT) better fits the purpose of the request, not choosing it may mean throwing away interesting properties, such as caching or fault recovery.

Does my API look RESTful in this method?

One thing that I always avoid is deciding based on the apparent “RESTfulness” of the method – for instance, an API doesn’t have to use PUT to be RESTful.

First and foremost we should think in terms of system properties and use HTTP accordingly. That implies:

  • Not violating its rules – what can go wrong if I choose PUT for this request?
  • Taking advantage of its benefits – what do I lose if I don’t choose PUT for this request?

Hope this helps.
Cheers.



Damien Bowden: Angular2 autocomplete with ASP.NET Core and Elasticsearch

This article shows how autocomplete could be implemented in Angular 2 using ASP.NET Core MVC as a data service. The API uses Elasticsearch to run the search queries. ng2-completer is used to implement the Angular 2 autocomplete functionality.

Code: https://github.com/damienbod/Angular2AutoCompleteAspNetCoreElasticsearch

2017.01.07: Updated to csproj, webpack 2.2.0-rc.3, angular 2.4.1

To use autocomplete in the Angular 2 application, the ng2-completer package needs to be added to the dependencies in the npm package.json file.

"ng2-completer": "^0.2.2"

This project uses Webpack to build the Angular 2 application, and all vendor packages are added to the vendor.ts file, which can then be used throughout the application. The ng2-completer package is added to the vendor.ts file, which is then built using Webpack.

import '@angular/platform-browser-dynamic';
import '@angular/platform-browser';
import '@angular/core';
import '@angular/http';
import '@angular/router';

import 'ng2-completer';

import 'bootstrap/dist/js/bootstrap';

import './css/bootstrap.css';
import './css/bootstrap-theme.css';

PersonCity is used as the data model for the autocomplete. The server side of the application uses the PersonCity model to store and search for data.

export class PersonCity {
    public id: number;
    public name: string;
    public info: string;
    public familyName: string;
}
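The post doesn't show the server-side C# model, but given the TypeScript class above - and the cityCountry field that the data service below reads - it presumably looks something like the following sketch:

public class PersonCity
{
    public long Id { get; set; }
    public string Name { get; set; }
    public string Info { get; set; }
    public string FamilyName { get; set; }
    public string CityCountry { get; set; }
}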

The ng2-completer autocomplete is used within the PersoncityautocompleteComponent. This component returns a PersonCity object to the consuming component. When a new search request finishes, the @Output bindModelPersonCityChange is updated; the @Output is chained to the onPersonCitySelected event from ng2-completer.

A custom data service, PersoncityautocompleteDataService, is used to request the data from the server.

import { Component, Inject, EventEmitter, Input, Output, OnInit, AfterViewInit, ElementRef } from '@angular/core';
import { Http, Response } from "@angular/http";

import { Subscription } from 'rxjs/Subscription';
import { Observable } from 'rxjs/Observable';
import { Router } from  '@angular/router';

import { Configuration } from '../app.constants';
import { PersoncityautocompleteDataService } from './personcityautocompleteService';
import { PersonCity } from '../model/personCity';

import { CompleterService, CompleterItem } from 'ng2-completer';

import './personcityautocomplete.component.scss';

@Component({
    selector: 'personcityautocomplete',
  template: `
<ng2-completer [dataService]="dataService" (selected)="onPersonCitySelected($event)" [minSearchLength]="0" [disableInput]="disableAutocomplete"></ng2-completer>

`
})
    
export class PersoncityautocompleteComponent implements OnInit    {

    constructor(private completerService: CompleterService, private http: Http, private _configuration: Configuration) {

        this.dataService = new PersoncityautocompleteDataService(http, _configuration); ////completerService.local("name, info, familyName", 'name');
    }

    @Output() bindModelPersonCityChange = new EventEmitter<PersonCity>();
    @Input() bindModelPersonCity: PersonCity;
    @Input() disableAutocomplete: boolean = false;

    private searchStr: string;
    private dataService: PersoncityautocompleteDataService;

    ngOnInit() {
        console.log("ngOnInit PersoncityautocompleteComponent");
    }

    public onPersonCitySelected(selected: CompleterItem) {
        console.log(selected);
        this.bindModelPersonCityChange.emit(selected.originalObject);
    }
}


The PersoncityautocompleteDataService extends Subject<CompleterItem[]> and implements CompleterData, as described in the ng2-completer documentation. When PersonCity items are returned from the service, the results are mapped to CompleterItem items as required. This could also be done on the server, and then the default remote service could be used. By using a custom service, it can easily be extended, for example to add the security headers for the data request as required.

import { Http, Response } from "@angular/http";
import { Subject } from "rxjs/Subject";

import { CompleterData, CompleterItem } from 'ng2-completer';
import { Configuration } from '../app.constants';

export class PersoncityautocompleteDataService extends Subject<CompleterItem[]> implements CompleterData {
    constructor(private http: Http, private _configuration: Configuration) {
        super();

        this.actionUrl = _configuration.Server + 'api/personcity/querystringsearch/';
    }

    private actionUrl: string;

    public search(term: string): void {
        this.http.get(this.actionUrl + term)
            .map((res: Response) => {
                // Convert the result to CompleterItem[]
                let data = res.json();
                let matches: CompleterItem[] = data.map((personcity: any) => {
                    return {
                        title: personcity.name,
                        description: personcity.familyName + ", " + personcity.cityCountry,
                        originalObject: personcity
                    }
                });
                this.next(matches);
            })
            .subscribe();
    }

    public cancel() {
        // Handle cancel
    }
}

The PersoncityautocompleteComponent also implements its specific styles using the personcityautocomplete.component.scss file. The ng2-completer component comes with CSS classes which can be extended or overwritten.


.completer-input {
    width: 500px;
    display: block;
    height: 34px;
    padding: 6px 12px;
    font-size: 14px;
    line-height: 1.42857143;
    color: #555;
    background-color: #fff;
    background-image: none;
    border: 1px solid #ccc;
    border-radius: 4px;
  -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075);
          box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075);
  -webkit-transition: border-color ease-in-out .15s, -webkit-box-shadow ease-in-out .15s;
       -o-transition: border-color ease-in-out .15s, box-shadow ease-in-out .15s;
          transition: border-color ease-in-out .15s, box-shadow ease-in-out .15s;
}

.completer-dropdown {
    width: 480px !important;
}

ASP.NET Core MVC API

The PersonCityController MVC Controller implements the service which is used by the Angular 2 application. This service implements the Search action method which uses the IPersonCitySearchProvider to search for the data. Helper methods to create and add some documents to Elasticsearch are also implemented so that the search service can be tested.

using Microsoft.AspNetCore.Mvc;

namespace Angular2AutoCompleteAspNetCoreElasticsearch.Controllers
{
    [Route("api/[controller]")]
    public class PersonCityController : Controller
    {
        private readonly IPersonCitySearchProvider _personCitySearchProvider;

        public PersonCityController(IPersonCitySearchProvider personCitySearchProvider)
        {
            _personCitySearchProvider = personCitySearchProvider;
        }

        [HttpGet("search/{searchtext}")]
        public IActionResult Search(string searchtext)
        {
            return Ok(_personCitySearchProvider.QueryString(searchtext));
        }

        [HttpGet("createindex")]
        public IActionResult CreateIndex()
        {
            _personCitySearchProvider.CreateIndex();
            return Created("http://localhost:5000/api/PersonCity/createindex/", "index created");
        }

        [HttpGet("createtestdata")]
        public IActionResult CreateTestData()
        {
            _personCitySearchProvider.CreateTestData();
            return Created("http://localhost:5000/api/PersonCity/createtestdata/", "test data created");
        }

        [HttpGet("indexexists")]
        public IActionResult GetElasticsearchStatus()
        {
            return Ok(_personCitySearchProvider.GetStatus());
        }
    }
}

The ElasticsearchCRUD NuGet package is used to access Elasticsearch. The PersonCitySearchProvider implements this logic. NEST could also be used; only the PersonCitySearchProvider implementation would need to change to support this.

"ElasticsearchCRUD":  "2.4.1.1"

The PersonCitySearchProvider class implements the IPersonCitySearchProvider interface which is used in the MVC controller. The IPersonCitySearchProvider needs to be added to the services in the Startup class. The search uses a QueryStringQuery search with wildcards. Any other query or aggregation could be used here, depending on the search requirements.
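Neither the interface nor the Startup registration is shown in the post; based on the controller above and the provider below, they presumably look like this (the singleton lifetime is an assumption):

public interface IPersonCitySearchProvider
{
    IEnumerable<PersonCity> QueryString(string term);
    bool GetStatus();
    void CreateIndex();
    void CreateTestData();
}

// In Startup.ConfigureServices:
services.AddSingleton<IPersonCitySearchProvider, PersonCitySearchProvider>();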

using System.Collections.Generic;
using System.Linq;
using ElasticsearchCRUD;
using ElasticsearchCRUD.ContextAddDeleteUpdate.IndexModel.SettingsModel;
using ElasticsearchCRUD.Model.SearchModel;
using ElasticsearchCRUD.Model.SearchModel.Queries;
using ElasticsearchCRUD.Tracing;

namespace Angular2AutoCompleteAspNetCoreElasticsearch
{
    public class PersonCitySearchProvider : IPersonCitySearchProvider
    {
        private readonly IElasticsearchMappingResolver _elasticsearchMappingResolver = new ElasticsearchMappingResolver();
        private const string ConnectionString = "http://localhost:9200";
        private readonly ElasticsearchContext _context;

        public PersonCitySearchProvider()
        {
            _context = new ElasticsearchContext(ConnectionString, new ElasticsearchSerializerConfiguration(_elasticsearchMappingResolver))
            {
                TraceProvider = new ConsoleTraceProvider()
            };
        }

        public IEnumerable<PersonCity> QueryString(string term)
        {
            var results = _context.Search<PersonCity>(BuildQueryStringSearch(term));

            return results.PayloadResult.Hits.HitsResult.Select(t => t.Source);
        }

        /// <summary>
        /// TODO protect against injection!
        /// </summary>
        /// <param name="term"></param>
        /// <returns></returns>
        private Search BuildQueryStringSearch(string term)
        {
            var names = "";
            if (term != null)
            {
                names = term.Replace("+", " OR *");
            }

            var search = new Search
            {
                Query = new Query(new QueryStringQuery(names + "*"))
            };

            return search;
        }

        public bool GetStatus()
        {
            return _context.IndexExists<PersonCity>();
        }

        public void CreateIndex()
        {
            _context.IndexCreate<PersonCity>(new IndexDefinition());
        }

        public void CreateTestData()
        {
            PersonCityData.CreateTestData();

            foreach (var item in PersonCityData.Data)
            {
                _context.AddUpdateDocument(item, item.Id);
            }

            _context.SaveChanges();
        }
    }
}

When the application is started, the autocomplete is deactivated as no index exists.

[screenshot: angular2autocompleteaspnetcoreelasticsearch_01]

Once the index exists, data can be added to the Elasticsearch index.
[screenshot: angular2autocompleteaspnetcoreelasticsearch_02]

And the autocomplete can be used.

[screenshot: angular2autocompleteaspnetcoreelasticsearch_03]

Links:

https://github.com/oferh/ng2-completer

https://github.com/damienbod/Angular2WebpackVisualStudio

https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html

https://www.elastic.co/products/elasticsearch

https://www.nuget.org/packages/ElasticsearchCRUD/

https://github.com/damienbod/ElasticsearchCRUD



Damien Bowden: Using SASS with Webpack, Angular2 and Visual Studio

This post shows how to use SASS with Webpack and Angular 2 in Visual Studio. I had various problems trying to get this to work from Visual Studio using a Webpack build. The following is one solution that works, but it is not the only one.

Code: https://github.com/damienbod/Angular2WebpackVisualStudio

2017.01.07: Updated webpack 2.2.0-rc.3, angular 2.4.1

Install node-sass globally, so that the package is available everywhere. The latest installed node-sass will then be available on the path.

npm install node-sass -g

Add the SASS packages as required in the project's npm package.json file.

 "devDependencies": {
        "node-sass": "3.10.1",
        "sass-loader": "^3.1.2",
        "style-loader": "^0.13.0",
        ...
}

The SASS configuration can then be added to the Webpack config file(s). The SASS scss files are built as part of the Webpack build.

{
  test: /\.scss$/,
  loaders: ["style-loader", "css-loader", "sass-loader"]
},

Now a SASS file can be created and attached to any Angular 2 component.

body {
    padding-top: 50px;
}

.starter-template {
    padding: 40px 15px;
    text-align: center;
}

.navigationLinkButton:hover {
    cursor: pointer;
}

a {
    color: #03A9F4;
}

The scss file or files can be used in the Angular 2 component's TypeScript file via the @Component decorator. The styles property defines an array of strings, so each scss require needs to be converted to a string, otherwise it will not work. Thanks to Jackie Gleason for this solution.

import { Component, OnInit } from '@angular/core';
import { Router } from '@angular/router';

import './app.component.scss';
import '../style/app.scss';

@Component({
    selector: 'my-app',
    templateUrl: 'app.component.html',
})

export class AppComponent {

    constructor(private router: Router) {
    }
}

If you see an error message like "binding.node. Try reinstalling `node-sass`" somewhere in your Webpack build, try the following fixes:

1: Reinstall node-sass

npm install node-sass -g

2: Edit the Visual Studio project and settings

You can use the node from your path, rather than the node installed with Visual Studio, by changing Tools > Options > Projects and Solutions > External Web Tools and moving the path option to the top. This solution is from Mads Kristensen and is explained in the "Customize external web tools" link below.

m1les gave this solution on Stack Overflow.

You can then view the CSS styles created from SASS by the Visual Studio Webpack build.

Links

http://stackoverflow.com/questions/31301582/task-runner-explorer-cant-load-tasks/31444245

https://github.com/webpack/style-loader/issues/123

Customize external web tools in Visual Studio 2015

http://sass-lang.com/

https://www.bensmithett.com/smarter-css-builds-with-webpack/

https://github.com/jtangelder/sass-loader

http://eng.localytics.com/faster-sass-builds-with-webpack/



Dominick Baier: New in IdentityServer4: Multiple allowed Grant Types

In OAuth 2 some grant type combinations are insecure, which is why for IdentityServer3 we decided to be defensive and allow only a single grant type per client.

During the last two years of implementing OAuth 2, it turned out that certain combinations of grant types actually do make sense, and we adjusted IdentityServer3 to accommodate a couple of those scenarios. But there were still some common cases that either required you to create multiple client configurations for the same logical client, or where configuration became a bit messy.

We fixed that in IdentityServer4 – we now allow almost all combinations of grant types for a single client – including the standard ones and extension grants that you add yourself.

We still check that the combination you choose will not result in a security problem – so we haven’t compromised security. Just made the configuration more flexible and easier to use.
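As an illustration - this sketch is mine, not from the linked post, and the client details are invented - a single IdentityServer4 client can now combine, for example, the authorization code and client credentials grants:

var client = new Client
{
    ClientId = "mobile.and.service.client",
    // one logical client, two grant types - previously this needed
    // two separate client configurations
    AllowedGrantTypes = GrantTypes.CodeAndClientCredentials,
    ClientSecrets = { new Secret("secret".Sha256()) },
    RedirectUris = { "https://myapp.example/callback" },
    AllowedScopes = { "api1" }
};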

See all the details here.


Filed under: ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI

