Taiseer Joudeh: Integrate Azure AD B2C with ASP.NET MVC Web App – Part 3

This is the third part of the tutorial which will cover Using Azure AD B2C tenant with ASP.NET Web API 2 and various front-end clients.

The source code for this tutorial is available on GitHub.

The MVC Web App has been published on Azure App Services, so feel free to try it out using the Base URL (https://aadb2cmvcapp.azurewebsites.net/)

I promise you that I won’t share your information with anyone, feel free to try the experience 🙂

Integrate Azure AD B2C with ASP.NET MVC Web App

In the previous post, we configured our Web API to rely on our Azure AD B2C IdP for security, so that only calls containing a token issued by our IdP are accepted by the Web API.

In this post we will build our first front-end application (an ASP.NET MVC 5 Web App) which will consume the API endpoints by sending a valid token obtained from the Azure AD B2C tenant. It will also allow anonymous users to create profiles and sign in against the Azure AD B2C tenant. The MVC Web App itself will be protected by the same Azure AD B2C tenant, as we share the same tenant between the Web API and the MVC Web App.

So let’s start building the MVC Web App.

Step 1: Creating the MVC Web App Project

Let’s add a new ASP.NET Web Application named “AADB2C.WebClientMvc” to the solution named “WebApiAzureAcitveDirectoryB2C.sln”. Select the “MVC” template for the project, and do not forget to change the “Authentication Mode” to “No Authentication”; check the image below:

Azure B2C Web Mvc Template

Once the project has been created, open its properties and set “SSL Enabled” to “True”, then copy the “SSL URL” value. Right click on the project, select “Properties”, choose the “Web” tab on the left side, paste the “SSL URL” value into the “Project Url” text field, and click “Save”. We need to use the https scheme locally when we debug the application. Check the image below:

MvcWebSSLEnable

Step 2: Install the needed NuGet Packages to Configure the MVC App

We need to add a bunch of NuGet packages, so open the NuGet Package Manager Console and install the packages below:

Install-Package Microsoft.Owin.Security.OpenIdConnect -Version 3.0.1
Install-Package Microsoft.Owin.Security.Cookies -Version 3.0.1
Install-Package Microsoft.Owin.Host.SystemWeb -Version 3.0.1
Update-package Microsoft.IdentityModel.Protocol.Extensions

The package “Microsoft.Owin.Security.OpenIdConnect” contains the middleware used to protect web apps with OpenID Connect; it contains the logic for the heavy lifting that happens when our MVC App talks to the Azure AD B2C tenant to request tokens and validate them.

The package “Microsoft.IdentityModel.Protocol.Extensions” contains classes which represent OpenID Connect constants and messages. Lastly, the package “Microsoft.Owin.Security.Cookies” will be used to create a cookie-based session after obtaining a valid token from our Azure AD B2C tenant. This cookie will be sent from the browser to the server with each subsequent request and validated by the cookie middleware.

Step 3: Configure Web App to use Azure AD B2C tenant IDs and Policies

Now we need to modify the web.config of our MVC App, so open Web.config and add the AppSettings keys below:

<add key="ida:Tenant" value="BitofTechDemo.onmicrosoft.com" />
    <add key="ida:ClientId" value="bc348057-3c44-42fc-b4df-7ef14b926b78" />
    <add key="ida:AadInstance" value="https://login.microsoftonline.com/{0}/v2.0/.well-known/openid-configuration?p={1}" />
    <add key="ida:SignUpPolicyId" value="B2C_1_Signup" />
    <add key="ida:SignInPolicyId" value="B2C_1_Signin" />
    <add key="ida:UserProfilePolicyId" value="B2C_1_Editprofile" />
    <add key="ida:RedirectUri" value="https://localhost:44315/" />
    <add key="api:OrdersApiUrl" value="https://localhost:44339/" />

The usage of each setting was outlined in the previous post; the only two new setting keys are “ida:RedirectUri” and “api:OrdersApiUrl”. The “ida:RedirectUri” key will be used to set the OpenID Connect “redirect_uri” property. The value of this URI should be registered in the Azure AD B2C tenant (we will do this next); this redirect URI will be used by the OpenID Connect middleware to return token responses or failures after the authentication process, as well as after the sign-out process. The second setting key, “api:OrdersApiUrl”, will be used as the base URI for our Web API.

Now let’s register the new redirect URI in the Azure AD B2C tenant. To do so, log in to the Azure Portal and navigate to the app “Bit of Tech Demo App” we already registered in the previous post, then add the value “https://localhost:44315/” to the Reply URL settings as in the image below. Note that I have already published the MVC Web App to Azure App Services at the URL (https://aadb2cmvcapp.azurewebsites.net/), so I’ve included this URL too.

B2C Mvc Reply URL

Step 4: Add Owin “Startup” Class

The default MVC template comes without a “Startup” class, but we need to configure our OWIN OpenID Connect middleware at the start of our Web App, so add a new class named “Startup” and paste the code below. There is a lot of code here, so jump to the next paragraph where I will do my best to explain what we have included in this class.

public class Startup
    {
        // App config settings
        private static string clientId = ConfigurationManager.AppSettings["ida:ClientId"];
        private static string aadInstance = ConfigurationManager.AppSettings["ida:AadInstance"];
        private static string tenant = ConfigurationManager.AppSettings["ida:Tenant"];
        private static string redirectUri = ConfigurationManager.AppSettings["ida:RedirectUri"];

        // B2C policy identifiers
        public static string SignUpPolicyId = ConfigurationManager.AppSettings["ida:SignUpPolicyId"];
        public static string SignInPolicyId = ConfigurationManager.AppSettings["ida:SignInPolicyId"];
        public static string ProfilePolicyId = ConfigurationManager.AppSettings["ida:UserProfilePolicyId"];

        public void Configuration(IAppBuilder app)
        {
            ConfigureAuth(app);
        }

        public void ConfigureAuth(IAppBuilder app)
        {
            app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);

            app.UseCookieAuthentication(new CookieAuthenticationOptions());

            // Configure OpenID Connect middleware for each policy
            app.UseOpenIdConnectAuthentication(CreateOptionsFromPolicy(SignUpPolicyId));
            app.UseOpenIdConnectAuthentication(CreateOptionsFromPolicy(ProfilePolicyId));
            app.UseOpenIdConnectAuthentication(CreateOptionsFromPolicy(SignInPolicyId));
        }

        // Used for avoiding yellow-screen-of-death
        private Task AuthenticationFailed(AuthenticationFailedNotification<OpenIdConnectMessage, OpenIdConnectAuthenticationOptions> notification)
        {
            notification.HandleResponse();
            if (notification.Exception.Message == "access_denied")
            {
                notification.Response.Redirect("/");
            }
            else
            {
                notification.Response.Redirect("/Home/Error?message=" + notification.Exception.Message);
            }

            return Task.FromResult(0);
        }

        private OpenIdConnectAuthenticationOptions CreateOptionsFromPolicy(string policy)
        {
            return new OpenIdConnectAuthenticationOptions
            {
                // For each policy, give OWIN the policy-specific metadata address, and
                // set the authentication type to the id of the policy
                MetadataAddress = String.Format(aadInstance, tenant, policy),
                AuthenticationType = policy,
              
                // These are standard OpenID Connect parameters, with values pulled from web.config
                ClientId = clientId,
                RedirectUri = redirectUri,
                PostLogoutRedirectUri = redirectUri,
                Notifications = new OpenIdConnectAuthenticationNotifications
                {
                    AuthenticationFailed = AuthenticationFailed
                },
                Scope = "openid",
                ResponseType = "id_token",

                // This piece is optional - it is used for displaying the user's name in the navigation bar.
                TokenValidationParameters = new TokenValidationParameters
                {
                    NameClaimType = "name",
                    SaveSigninToken = true //important to save the token in boostrapcontext
                }
            };
        }
    }

What we have implemented here is the following:

  • At the top of the class we read the app settings for the keys we added to the MVC App web.config; they represent the Azure AD B2C tenant and the policy names. Note that the access modifiers for the policy name fields are set to public as they will be referenced in another class.
  • Inside the method “ConfigureAuth” we have done the following:
    • The line app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType) configures the OWIN security pipeline and informs the OpenID Connect middleware that the default authentication type we will use is “Cookies”. This means that the claims encoded in the token we receive from the Azure AD B2C tenant will be stored in a cookie (the session for the authenticated user).
    • The line app.UseCookieAuthentication(new CookieAuthenticationOptions()) registers a cookie authentication middleware instance with the default options, so the authentication type here is the same one we set in the previous step, i.e. “Cookies” too.
    • The app.UseOpenIdConnectAuthentication lines configure the OWIN security pipeline to use the authentication provider (Azure AD B2C) per policy; in our case there are 3 different policies we already defined.
  • The method CreateOptionsFromPolicy takes the policy name as an input parameter and returns an object of type “OpenIdConnectAuthenticationOptions”. This object is responsible for controlling the OpenID Connect middleware. The properties used to configure the instance of “OpenIdConnectAuthenticationOptions” are as below:
    • The MetadataAddress property accepts the address of the discovery document endpoint for our Azure AD B2C tenant per policy, so for example the discovery endpoint for the policy “B2C_1_Signup” will be “https://login.microsoftonline.com/BitofTechDemo.onmicrosoft.com/v2.0/.well-known/openid-configuration?p=B2C_1_Signup”. This discovery document will be used to get information from Azure AD B2C on how to generate authentication requests and validate incoming token responses.
    • The AuthenticationType property informs the middleware that the authentication operations used are the policies we already defined, so for example if you defined a fourth policy and did not register it with the OpenID Connect middleware, the tokens issued by that policy would be rejected.
    • The ClientId property tells Azure AD B2C which ID to use to match the requests originating from the Web App. This represents the Azure AD B2C application we registered in the previous posts.
    • The RedirectUri property informs Azure AD B2C where your app wants the requested token response to be returned to; the value of this URL should already be registered in the “Reply URLs” of the Azure AD B2C app we defined earlier.
    • The PostLogoutRedirectUri property informs Azure AD B2C where to redirect the browser after a sign-out operation has completed successfully.
    • The Scope property is used to inform our Azure AD B2C tenant that our Web App uses the “OpenID Connect” protocol for authentication.
    • The ResponseType property indicates what our Web App needs from the Azure AD B2C tenant after the authentication process; in our case we only need an id_token.
    • The TokenValidationParameters property is used to store the information needed to validate the tokens. We only need to change 2 settings here, NameClaimType and SaveSigninToken. Setting the “NameClaimType” value to “name” allows us to read the display name of the user by calling User.Identity.Name, and setting “SaveSigninToken” to “true” allows us to save the token we received from the authentication process in the claims created (inside the session cookie). This will be useful for retrieving the token from the claims when we want to call the Web API. Keep in mind that the cookie size will get larger as we are storing the token inside it.
    • Lastly, the Notifications property allows us to inject custom code during certain phases of the authentication process. The phase we are interested in here is the AuthenticationFailed phase: we want to redirect the user to the root of the Web App in case he/she clicked cancel on the sign-up or sign-in form, and we need to redirect to the error view if we received any other exception during the authentication process (a minimal sketch of such an error action follows this list).
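
The AuthenticationFailed notification above redirects to “/Home/Error?message=…”, and the Orders controller shown later redirects to an error path as well, but the corresponding action is not shown in this post. A minimal sketch of what such an action could look like (purely illustrative; the action name, parameter, and view are assumptions) is below:

public class HomeController : Controller
{
    // Hypothetical error action (not part of the original post): displays the
    // message passed by the OpenID Connect AuthenticationFailed notification.
    public ActionResult Error(string message)
    {
        ViewBag.Message = message;
        return View();
    }
}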

This was the most complicated part of configuring our Web App to use our Azure AD B2C tenant. The next steps should be simpler: we will modify some views and add some new actions to issue requests to our Web API and call the Azure AD B2C policies.

Step 5: Call the Azure AD B2C Policies

Now we need to configure our Web App to invoke the policies we created. To do so, add a new controller named “AccountController” and paste the code below:

public class AccountController : Controller
    {
        public void SignIn()
        {
            if (!Request.IsAuthenticated)
            {
                // To execute a policy, you simply need to trigger an OWIN challenge.
                // You can indicate which policy to use by specifying the policy id as the AuthenticationType
                HttpContext.GetOwinContext().Authentication.Challenge(
                    new AuthenticationProperties() { RedirectUri = "/" }, Startup.SignInPolicyId);
            }
        }

        public void SignUp()
        {
            if (!Request.IsAuthenticated)
            {
                HttpContext.GetOwinContext().Authentication.Challenge(
                    new AuthenticationProperties() { RedirectUri = "/" }, Startup.SignUpPolicyId);
            }
        }

        public void Profile()
        {
            if (Request.IsAuthenticated)
            {
                HttpContext.GetOwinContext().Authentication.Challenge(
                    new AuthenticationProperties() { RedirectUri = "/" }, Startup.ProfilePolicyId);
            }
        }

        public void SignOut()
        {
            // To sign out the user, you should issue an OpenIDConnect sign out request
            if (Request.IsAuthenticated)
            {
                IEnumerable<AuthenticationDescription> authTypes = HttpContext.GetOwinContext().Authentication.GetAuthenticationTypes();
                HttpContext.GetOwinContext().Authentication.SignOut(authTypes.Select(t => t.AuthenticationType).ToArray());
            }
        }
    }

What we have implemented here is simple, and it is the same for the SignIn, SignUp, and Profile actions: each one calls the Challenge method and specifies the related policy name.

The “Challenge” method in the OWIN pipeline accepts an instance of AuthenticationProperties, which is used to set the settings of the action we want to perform (sign in, sign up, edit profile). We only set the “RedirectUri” here to the root path of our Web App; keep in mind that this “RedirectUri” has nothing to do with the “RedirectUri” we defined in Azure AD B2C. It can be a different URI where you want the browser to redirect the user after a successful operation takes place.

Regarding the SignOut action, we need to sign the user out in two places: once by removing the app’s local session we created using the “Cookies” authentication, and once by informing the OpenID Connect middleware to send a sign-out request to our Azure AD B2C tenant so the user is signed out there too. That’s why we retrieve all the authentication types available for our Web App and pass them to the “SignOut” method.

Now let’s add a partial view which renders the links to call those actions, so add a new partial view named “_LoginPartial.cshtml” under the “Shared” folder and paste the code below:

@if (Request.IsAuthenticated)
{
    <text>
        <ul class="nav navbar-nav navbar-right">
            <li>
                <a id="profile-link">@User.Identity.Name</a>
                <div id="profile-options" class="nav navbar-nav navbar-right">
                    <ul class="profile-links">
                        <li class="profile-link">
                            @Html.ActionLink("Edit Profile", "Profile", "Account")
                        </li>
                    </ul>
                </div>
            </li>
            <li>
                @Html.ActionLink("Sign out", "SignOut", "Account")
            </li>
        </ul>
    </text>
}
else
{
    <ul class="nav navbar-nav navbar-right">
        <li>@Html.ActionLink("Sign up", "SignUp", "Account", routeValues: null, htmlAttributes: new { id = "signUpLink" })</li>
        <li>@Html.ActionLink("Sign in", "SignIn", "Account", routeValues: null, htmlAttributes: new { id = "loginLink" })</li>
    </ul>
}

Notice that the first part of the partial view is rendered only if the user is authenticated, and notice how we display the user’s display name from the claim named “name” simply by calling @User.Identity.Name.

Now we need to reference this partial view in the “_Layout.cshtml” view; we just need to replace the last div in the body section with the section below:

<div class="navbar-collapse collapse">
	<ul class="nav navbar-nav">
		<li>@Html.ActionLink("Home", "Index", "Home")</li>
		<li>@Html.ActionLink("Orders List", "Index", "Orders")</li>
	</ul>
	@Html.Partial("_LoginPartial")
</div>

Step 6: Call the Web API from the MVC App

Now we want to add actions to start invoking the protected API we’ve created, passing the token obtained from the Azure AD B2C tenant in the “Authorization” header of each protected request. We will add support for creating a new order and listing all the orders related to the authenticated user. If you recall from the previous post, we depend on the claim named “objectidentifier” to read the User ID value encoded in the token as a claim.

To do so we will add a new controller named “OrdersController” under the “Controllers” folder with two action methods named “Index” and “Create”. Add the file and paste the code below:

[Authorize]
    public class OrdersController : Controller
    {
        private static string serviceUrl = ConfigurationManager.AppSettings["api:OrdersApiUrl"];

        // GET: Orders
        public async Task<ActionResult> Index()
        {
            try
            {

                var bootstrapContext = ClaimsPrincipal.Current.Identities.First().BootstrapContext as System.IdentityModel.Tokens.BootstrapContext;

                HttpClient client = new HttpClient();

                client.BaseAddress = new Uri(serviceUrl);

                client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", bootstrapContext.Token);

                HttpResponseMessage response = await client.GetAsync("api/orders");

                if (response.IsSuccessStatusCode)
                {

                    var orders = await response.Content.ReadAsAsync<List<OrderModel>>();

                    return View(orders);
                }
                else
                {
                    // If the call failed with access denied, show the user an error indicating they might need to sign-in again.
                    if (response.StatusCode == System.Net.HttpStatusCode.Unauthorized)
                    {
                        return new RedirectResult("/Error?message=Error: " + response.ReasonPhrase + " You might need to sign in again.");
                    }
                }

                return new RedirectResult("/Error?message=An Error Occurred Reading Orders List: " + response.StatusCode);
            }
            catch (Exception ex)
            {
                return new RedirectResult("/Error?message=An Error Occurred Reading Orders List: " + ex.Message);
            }
        }

        public ActionResult Create()
        {
            return View();
        }

        [HttpPost]
        public async Task<ActionResult> Create([Bind(Include = "ShipperName,ShipperCity")]OrderModel order)
        {

            try
            {
                var bootstrapContext = ClaimsPrincipal.Current.Identities.First().BootstrapContext as System.IdentityModel.Tokens.BootstrapContext;

                HttpClient client = new HttpClient();

                client.BaseAddress = new Uri(serviceUrl);

                client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", bootstrapContext.Token);

                HttpResponseMessage response = await client.PostAsJsonAsync("api/orders", order);

                if (response.IsSuccessStatusCode)
                {
                    return RedirectToAction("Index");
                }
                else
                {
                    // If the call failed with access denied, show the user an error indicating they might need to sign-in again.
                    if (response.StatusCode == System.Net.HttpStatusCode.Unauthorized)
                    {
                        return new RedirectResult("/Error?message=Error: " + response.ReasonPhrase + " You might need to sign in again.");
                    }
                }

                return new RedirectResult("/Error?message=An Error Occurred Creating Order: " + response.StatusCode);
            }
            catch (Exception ex)
            {
                return new RedirectResult("/Error?message=An Error Occurred Creating Order: " + ex.Message);
            }

        }

    }

    public class OrderModel
    {
        public string OrderID { get; set; }
        [Display(Name = "Shipper")]
        public string ShipperName { get; set; }
        [Display(Name = "Shipper City")]
        public string ShipperCity { get; set; }
        public DateTimeOffset TS { get; set; }
    }

What we have implemented here is the following:

  • We have added an [Authorize] attribute on the controller, so any unauthenticated (anonymous) request (i.e. no session cookie exists) to any of the actions in this controller will result in a redirect to the sign-in policy we have configured.
  • Notice how we read the BootstrapContext from the current “ClaimsPrincipal” object; this context contains a property named “Token” which we send in the “Authorization” header to the Web API. Note that if you forgot to set the “SaveSigninToken” property of the “TokenValidationParameters” to “true”, this will return “null”.
  • We are using HttpClient to craft the requests and call the Web API endpoints we defined earlier. There is no need to handle the User ID in the MVC App, as this value is encoded in the token itself and the Web API takes responsibility for decoding it and storing it in Azure Table Storage along with the order information. Both actions repeat the same HttpClient setup; a small helper that builds the client and attaches the bearer token is sketched below.
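
The helper below is only an illustration of how the duplicated setup in “Index” and “Create” could be centralized inside “OrdersController”; it is based on the code above and is not part of the original post:

private static HttpClient CreateApiClient()
{
    // Read the token the OpenID Connect middleware saved because SaveSigninToken = true
    var bootstrapContext = ClaimsPrincipal.Current.Identities.First().BootstrapContext
        as System.IdentityModel.Tokens.BootstrapContext;

    var client = new HttpClient { BaseAddress = new Uri(serviceUrl) };

    if (bootstrapContext != null)
    {
        // Attach the Azure AD B2C token as a Bearer token for the protected Web API
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", bootstrapContext.Token);
    }

    return client;
}

With this in place, each action would simply call CreateApiClient() before issuing its GET or POST request.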

Step 7: Add views for the Orders Controller

I will not dive into the details here; we need to add two views to support rendering the list of orders and creating a new order. For the sake of completeness I will paste the cshtml for each view: add a new folder named “Orders” under the “Views” folder, then add two new views named “Index.cshtml” and “Create.cshtml” and paste the code below:

@model IEnumerable<AADB2C.WebClientMvc.Controllers.OrderModel>
@{
    ViewBag.Title = "Orders";
}
<h2>Orders</h2>
<br />
<p>
    @Html.ActionLink("Create New", "Create")
</p>

<table class="table table-bordered table-striped table-hover table-condensed" style="table-layout: auto">
    <thead>
        <tr>
            <td>Order Id</td>
            <td>Shipper</td>
            <td>Shipper City</td>
            <td>Date</td>
        </tr>
    </thead>
    @foreach (var item in Model)
    {
        <tr>
            <td>
                @Html.DisplayFor(modelItem => item.OrderID)
            </td>
            <td>
                @Html.DisplayFor(modelItem => item.ShipperName)
            </td>
            <td>
                @Html.DisplayFor(modelItem => item.ShipperCity)
            </td>
            <td>
                @Html.DisplayFor(modelItem => item.TS)
            </td>
        </tr>
    }
</table>

@model AADB2C.WebClientMvc.Controllers.OrderModel
@{
    ViewBag.Title = "New Order";
}
<h2>Create Order</h2>
@using (Html.BeginForm())
{
    <div class="form-horizontal">
        <hr />

        <div class="form-group">
            @Html.LabelFor(model => model.ShipperName, htmlAttributes: new { @class = "control-label col-md-2" })
            <div class="col-md-10">
                @Html.EditorFor(model => model.ShipperName, new { htmlAttributes = new { @class = "form-control" } })
            </div>
        </div>

        <div class="form-group">
            @Html.LabelFor(model => model.ShipperCity, htmlAttributes: new { @class = "control-label col-md-2" })
            <div class="col-md-10">
                @Html.EditorFor(model => model.ShipperCity, new { htmlAttributes = new { @class = "form-control" } })
            </div>
        </div>

        <div class="form-group">
            <div class="col-md-offset-2 col-md-10">
                <input type="submit" value="Save Order" class="btn btn-default" />
            </div>
        </div>
    </div>

    <div>
        @Html.ActionLink("Back to Orders", "Index")
    </div>
}

Step 8: Lastly, let’s test out the complete flow

To test this out, the user clicks the “Orders List” link in the top navigation menu and is redirected to the Azure AD B2C tenant, where they can enter their app-local credentials. If the credentials provided are valid, authentication succeeds, a token is obtained and stored in the claims identity of the authenticated user, and the orders view is displayed; the token is sent in the Authorization header to get all the orders for this user. It should look something like the animated image below:

Azure AD B2C animation

That’s it for now folks, I hope you find it useful 🙂 In the next post, I will cover how to integrate MSAL with Azure AD B2C and use it in a desktop application. If you find the post useful, do not forget to share it 🙂

The Source code for this tutorial is available on GitHub.

The MVC Web App has been published on Azure App Services, so feel free to try it out using the Base URL (https://aadb2cmvcapp.azurewebsites.net/)

Follow me on Twitter @tjoudeh




Damien Bowden: Implementing UNDO, REDO in ASP.NET Core

The article shows how to implement UNDO, REDO functionality in an ASP.NET Core application using EFCore and MS SQL Server.

This is the first blog in a 3 part series. The second blog will implement the UI using Angular 2 and the third article will improve the concurrent stacks with max limits to prevent memory leaks etc.

Code: https://github.com/damienbod/Angular2AutoSaveCommands

The application was created using the ASP.NET Core Web API template. The CommandDto class is used for all commands sent from the UI. The class is used for the create, update and delete requests. The class has 4 properties. The CommandType property defines the types of commands which can be sent. The supported CommandType values are defined as constants in the CommandTypes class. The PayloadType is used to define the type for the Payload JObject. The server application can then use this, to convert the JObject to a C# object. The ActualClientRoute is required to support the UNDO and REDO logic. Once the REDO or UNDO is executed, the client needs to know where to navigate to. The values are strings and are totally controlled by the client SPA application. The server just persists these for each command.

using Newtonsoft.Json.Linq;

namespace Angular2AutoSaveCommands.Models
{
    public class CommandDto
    {
        public string CommandType { get; set; }
        public string PayloadType { get; set; }
        public JObject Payload { get; set; }
        public string ActualClientRoute { get; set;}
    }
	
    public static class CommandTypes
    {
        public const string ADD = "ADD";
        public const string UPDATE = "UPDATE";
        public const string DELETE = "DELETE";
        public const string UNDO = "UNDO";
        public const string REDO = "REDO";
    }
	
    public static class PayloadTypes
    {
        public const string Home = "HOME";
        public const string ABOUT = "ABOUT";
        public const string NONE = "NONE";
    }
}

The CommandController is used to provide the Execute, UNDO and REDO support for the UI, or any other client which will use the service. The controller injects the ICommandHandler which implements the logic for the HTTP POST requests.

using Angular2AutoSaveCommands.Models;
using Angular2AutoSaveCommands.Providers;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json.Linq;

namespace Angular2AutoSaveCommands.Controllers
{
    [Route("api/[controller]")]
    public class CommandController : Controller
    {
        private readonly ICommandHandler _commandHandler;
        public CommandController(ICommandHandler commandHandler)
        {
            _commandHandler = commandHandler;
        }

        [HttpPost]
        [Route("Execute")]
        public IActionResult Post([FromBody]CommandDto value)
        {
            if (!ModelState.IsValid)
            {
                return BadRequest("Model is invalid");
            }

            if (!validateCommandType(value))
            {
                return BadRequest($"CommandType: {value.CommandType} is invalid");
            }

            if (!validatePayloadType(value))
            {
                return BadRequest($"PayloadType: {value.CommandType} is invalid");
            }

            _commandHandler.Execute(value);
            return Ok(value);
        }

        [HttpPost]
        [Route("Undo")]
        public IActionResult Undo()
        {
            var commandDto = _commandHandler.Undo();
            return Ok(commandDto);
        }

        [HttpPost]
        [Route("Redo")]
        public IActionResult Redo()
        {
            var commandDto = _commandHandler.Redo();
            return Ok(commandDto);
        }

        private bool validateCommandType(CommandDto value)
        {
            return true;
        }

        private bool validatePayloadType(CommandDto value)
        {
            return true;
        }
    }
}

The ICommandHandler has three methods, Execute, Undo and Redo. The Undo and the Redo methods return a CommandDto class. This class contains the actual data and the URL for the client routing.

using Angular2AutoSaveCommands.Models;

namespace Angular2AutoSaveCommands.Providers
{
    public interface ICommandHandler 
    {
        void Execute(CommandDto commandDto);
        CommandDto Undo();
        CommandDto Redo();
    }
}

The CommandHandler class implements the ICommandHandler interface. This class provides the two ConcurrentStack fields for the REDO and the UNDO stacks. The stacks are static and so need to be thread safe. The Undo and the Redo methods return a CommandDto which contains the relevant data after the operation has been executed.

The Execute method just calls the execution depending on the payload. This method then creates the appropriate command, adds the command to the database for the history, executes the logic and adds the command to the UNDO stack.

The undo method pops a command from the undo stack, calls the Unexecute method, adds the command to the redo stack, and saves everything to the database.

The redo method pops a command from the redo stack, calls the Execute method, adds the command to the undo stack, and saves everything to the database.

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using Angular2AutoSaveCommands.Models;
using Angular2AutoSaveCommands.Providers.Commands;
using Microsoft.Extensions.Logging;

namespace Angular2AutoSaveCommands.Providers
{
    public class CommandHandler : ICommandHandler
    {
        private readonly ICommandDataAccessProvider _commandDataAccessProvider;
        private readonly DomainModelMsSqlServerContext _context;
        private readonly ILoggerFactory _loggerFactory;
        private readonly ILogger _logger;

        // TODO remove these and use persistent stacks
        private static ConcurrentStack<ICommand> _undocommands = new ConcurrentStack<ICommand>();
        private static ConcurrentStack<ICommand> _redocommands = new ConcurrentStack<ICommand>();

        public CommandHandler(ICommandDataAccessProvider commandDataAccessProvider, DomainModelMsSqlServerContext context, ILoggerFactory loggerFactory)
        {
            _commandDataAccessProvider = commandDataAccessProvider;
            _context = context;
            _loggerFactory = loggerFactory;
            _logger = loggerFactory.CreateLogger("CommandHandler");
        }

        public void Execute(CommandDto commandDto)
        {
            if (commandDto.PayloadType == PayloadTypes.ABOUT)
            {
                ExecuteAboutDataCommand(commandDto);
                return;
            }

            if (commandDto.PayloadType == PayloadTypes.Home)
            {
                ExecuteHomeDataCommand(commandDto);
                return;
            }

            if (commandDto.PayloadType == PayloadTypes.NONE)
            {
                ExecuteNoDataCommand(commandDto);
                return;
            }
        }

        // TODO add return object for UI
        public CommandDto Undo()
        {  
            var commandDto = new CommandDto();
            commandDto.CommandType = CommandTypes.UNDO;
            commandDto.PayloadType = PayloadTypes.NONE;
            commandDto.ActualClientRoute = "NONE";

            if (_undocommands.Count > 0)
            {
                ICommand command;
                if (_undocommands.TryPop(out command))
                {
                    _redocommands.Push(command);
                    command.UnExecute(_context);
                    commandDto.Payload = command.ActualCommandDtoForNewState(CommandTypes.UNDO).Payload;
                    _commandDataAccessProvider.AddCommand(CommandEntity.CreateCommandEntity(commandDto));
                    _commandDataAccessProvider.Save();
                    return command.ActualCommandDtoForNewState(CommandTypes.UNDO);
                }   
            }

            return commandDto;
        }

        // TODO add return object for UI
        public CommandDto Redo()
        {
            var commandDto = new CommandDto();
            commandDto.CommandType = CommandTypes.REDO;
            commandDto.PayloadType = PayloadTypes.NONE;
            commandDto.ActualClientRoute = "NONE";

            if (_redocommands.Count > 0)
            {
                ICommand command;
                if(_redocommands.TryPop(out command))
                { 
                    _undocommands.Push(command);
                    command.Execute(_context);
                    commandDto.Payload = command.ActualCommandDtoForNewState(CommandTypes.REDO).Payload;
                    _commandDataAccessProvider.AddCommand(CommandEntity.CreateCommandEntity(commandDto));
                    _commandDataAccessProvider.Save();
                    return command.ActualCommandDtoForNewState(CommandTypes.REDO);
                }
            }

            return commandDto;
        }

        private void ExecuteHomeDataCommand(CommandDto commandDto)
        {
            if (commandDto.CommandType == CommandTypes.ADD)
            {
                ICommandAdd command = new AddHomeDataCommand(_loggerFactory, commandDto);
                command.Execute(_context);
                _commandDataAccessProvider.AddCommand(CommandEntity.CreateCommandEntity(commandDto));
                _commandDataAccessProvider.Save();
                command.UpdateIdforNewItems();
                _undocommands.Push(command);
            }

            if (commandDto.CommandType == CommandTypes.UPDATE)
            {
                ICommand command = new UpdateHomeDataCommand(_loggerFactory, commandDto);
                command.Execute(_context);
                _commandDataAccessProvider.AddCommand(CommandEntity.CreateCommandEntity(commandDto));
                _commandDataAccessProvider.Save();
                _undocommands.Push(command);
            }

            if (commandDto.CommandType == CommandTypes.DELETE)
            {
                ICommand command = new DeleteHomeDataCommand(_loggerFactory, commandDto);
                command.Execute(_context);
                _commandDataAccessProvider.AddCommand(CommandEntity.CreateCommandEntity(commandDto));
                _commandDataAccessProvider.Save();
                _undocommands.Push(command);
            }
        }

        private void ExecuteAboutDataCommand(CommandDto commandDto)
        {
            if(commandDto.CommandType == CommandTypes.ADD)
            {
                ICommandAdd command = new AddAboutDataCommand(_loggerFactory, commandDto);
                command.Execute(_context);
                _commandDataAccessProvider.AddCommand(CommandEntity.CreateCommandEntity(commandDto));
                _commandDataAccessProvider.Save();
                command.UpdateIdforNewItems();
                _undocommands.Push(command);
            }

            if (commandDto.CommandType == CommandTypes.UPDATE)
            {
                ICommand command = new UpdateAboutDataCommand(_loggerFactory, commandDto);
                command.Execute(_context);
                _commandDataAccessProvider.AddCommand(CommandEntity.CreateCommandEntity(commandDto));
                _commandDataAccessProvider.Save();
                _undocommands.Push(command);
            }

            if (commandDto.CommandType == CommandTypes.DELETE)
            {
                ICommand command = new DeleteAboutDataCommand(_loggerFactory, commandDto);
                command.Execute(_context);
                _commandDataAccessProvider.AddCommand(CommandEntity.CreateCommandEntity(commandDto));
                _commandDataAccessProvider.Save();
                _undocommands.Push(command);
            }
        }

        private void ExecuteNoDataCommand(CommandDto commandDto)
        {
            _commandDataAccessProvider.AddCommand(CommandEntity.CreateCommandEntity(commandDto));
            _commandDataAccessProvider.Save();
        }

    }
}
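
The CommandEntity class and the ICommandDataAccessProvider interface used above come from the sample repository and are not listed in this post. Based purely on how they are called here, a minimal sketch might look like the following (the property names and namespaces are assumptions; check the repository for the real implementation):

using Angular2AutoSaveCommands.Models;

namespace Angular2AutoSaveCommands.Providers
{
    // Sketch only: a persisted history record built from a CommandDto.
    public class CommandEntity
    {
        public long Id { get; set; }
        public string CommandType { get; set; }
        public string PayloadType { get; set; }
        public string Payload { get; set; }
        public string ActualClientRoute { get; set; }

        public static CommandEntity CreateCommandEntity(CommandDto dto)
        {
            return new CommandEntity
            {
                CommandType = dto.CommandType,
                PayloadType = dto.PayloadType,
                Payload = dto.Payload?.ToString(),
                ActualClientRoute = dto.ActualClientRoute
            };
        }
    }

    // Sketch only: persists the command history through the EF Core context.
    public interface ICommandDataAccessProvider
    {
        void AddCommand(CommandEntity entity);
        void Save();
    }
}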

The ICommand interface contains the public methods required for the commands in this application. The DbContext is passed as a parameter to the Execute and UnExecute methods because the context from the current HTTP request is used, not the context from the original Execute HTTP request.

using Angular2AutoSaveCommands.Models;

namespace Angular2AutoSaveCommands.Providers.Commands
{
    public interface ICommand
    {
        void Execute(DomainModelMsSqlServerContext context);
        void UnExecute(DomainModelMsSqlServerContext context);

        CommandDto ActualCommandDtoForNewState(string commandType);
    }
}

The UpdateAboutDataCommand class implements the ICommand interface. This command supplies the logic to update and also to undo an update in the execute and the unexecute methods. For the undo, the previous state of the entity is saved in the command.

 
using System;
using System.Linq;
using Angular2AutoSaveCommands.Models;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json.Linq;

namespace Angular2AutoSaveCommands.Providers.Commands
{
    public class UpdateAboutDataCommand : ICommand
    {
        private readonly ILogger _logger;
        private readonly CommandDto _commandDto;
        private AboutData _previousAboutData;

        public UpdateAboutDataCommand(ILoggerFactory loggerFactory, CommandDto commandDto)
        {
            _logger = loggerFactory.CreateLogger("UpdateAboutDataCommand");
            _commandDto = commandDto;
        }

        public void Execute(DomainModelMsSqlServerContext context)
        {
            _previousAboutData = new AboutData();

            var aboutData = _commandDto.Payload.ToObject<AboutData>();
            var entity = context.AboutData.First(t => t.Id == aboutData.Id);

            _previousAboutData.Description = entity.Description;
            _previousAboutData.Deleted = entity.Deleted;
            _previousAboutData.Id = entity.Id;

            entity.Description = aboutData.Description;
            entity.Deleted = aboutData.Deleted;
            _logger.LogDebug("Executed");
        }

        public void UnExecute(DomainModelMsSqlServerContext context)
        {
            var aboutData = _commandDto.Payload.ToObject<AboutData>();
            var entity = context.AboutData.First(t => t.Id == aboutData.Id);

            entity.Description = _previousAboutData.Description;
            entity.Deleted = _previousAboutData.Deleted;
            _logger.LogDebug("Unexecuted");
        }

        public CommandDto ActualCommandDtoForNewState(string commandType)
        {
            if (commandType == CommandTypes.UNDO)
            {
                var commandDto = new CommandDto();
                commandDto.ActualClientRoute = _commandDto.ActualClientRoute;
                commandDto.CommandType = _commandDto.CommandType;
                commandDto.PayloadType = _commandDto.PayloadType;
            
                commandDto.Payload = JObject.FromObject(_previousAboutData);
                return commandDto;
            }
            else
            {
                return _commandDto;
            }
        }
    }
}

The Startup class adds the interface/class pairs to the built-in IoC container. The MS SQL Server provider is configured here, using the appsettings to read the database connection string. EF Core migrations are used to create the database.

using System;
using System.Linq;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Angular2AutoSaveCommands.Providers;
using Microsoft.EntityFrameworkCore;

namespace Angular2AutoSaveCommands
{
    public class Startup
    {
        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        public IConfigurationRoot Configuration { get; }

        public void ConfigureServices(IServiceCollection services)
        {
            var sqlConnectionString = Configuration.GetConnectionString("DataAccessMsSqlServerProvider");

            services.AddDbContext<DomainModelMsSqlServerContext>(options =>
                options.UseSqlServer(  sqlConnectionString )
            );

            services.AddMvc();

            services.AddScoped<ICommandDataAccessProvider, CommandDataAccessProvider>();
            services.AddScoped<ICommandHandler, CommandHandler>();
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole(Configuration.GetSection("Logging"));
            loggerFactory.AddDebug();

            var angularRoutes = new[] {
                 "/home",
                 "/about"
             };

            app.Use(async (context, next) =>
            {
                if (context.Request.Path.HasValue && null != angularRoutes.FirstOrDefault(
                    (ar) => context.Request.Path.Value.StartsWith(ar, StringComparison.OrdinalIgnoreCase)))
                {
                    context.Request.Path = new PathString("/");
                }

                await next();
            });

            app.UseDefaultFiles();

            app.UseStaticFiles();

            app.UseMvc(routes =>
            {
                routes.MapRoute(
                    name: "default",
                    template: "{controller=Home}/{action=Index}/{id?}");
            });
        }
    }
}
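
The connection string name read in ConfigureServices is "DataAccessMsSqlServerProvider", so appsettings.json needs a matching entry. The values below are placeholders rather than the ones from the repository:

{
  "ConnectionStrings": {
    "DataAccessMsSqlServerProvider": "Server=(localdb)\\mssqllocaldb;Database=Angular2AutoSaveCommands;Trusted_Connection=True;"
  }
}

The database can then be created from the EF Core migrations, assuming the EF Core CLI tools are referenced in the project:

dotnet ef migrations add InitialCreate
dotnet ef database update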

The application API can be tested using Fiddler. The following HTTP POST requests are sent in this order: Execute(ADD), Execute(UPDATE), Undo, Undo, Redo.

http://localhost:5000/api/command/execute
User-Agent: Fiddler
Host: localhost:5000
Content-Type: application/json

{
  "commandType":"ADD",
  "payloadType":"ABOUT",
  "payload":
   { 
      "Id":0,
      "Description":"add a new about item",
      "Deleted":false
    },
   "actualClientRoute":"https://damienbod.com/add"
}

http://localhost:5000/api/command/execute
User-Agent: Fiddler
Host: localhost:5000
Content-Type: application/json

{
  "commandType":"UPDATE",
  "payloadType":"ABOUT",
  "payload":
   { 
      "Id":10003,
      "Description":"update the existing about item",
      "Deleted":false
    },
   "actualClientRoute":"https://damienbod.com/update"
}

http://localhost:5000/api/command/undo
http://localhost:5000/api/command/undo
http://localhost:5000/api/command/redo

The data is sent in this order, and the undo and redo operations work as required.
undoRedofiddler_01

The data can also be validated in the database using the CommandEntity table.

undoRedosql_02




Andrew Lock: An introduction to OAuth 2.0 using Facebook in ASP.NET Core


This is the next post in a series on authentication and authorisation in ASP.NET Core. In this post I look in moderate depth at the OAuth 2.0 protocol as it pertains to ASP.NET Core applications, walking through the protocol as seen by the user of your website as well as the application itself. Finally, I show how you can configure your application to use a Facebook social login when you are using ASP.NET Core Identity.

OAuth 2.0

OAuth 2.0 is an open standard for authorisation. It is commonly used as a way for users to login to a particular website (say, catpics.com) using a third party account such as a Facebook or Google account, without having to provide catpics.com the password for their Facebook account.

While it is often used for authentication, being used to log a user in to a site, it is actually an authorisation protocol. We'll discuss the detail of the flow of requests in the next sections, but in essence, you as a user are providing permission for the catpics.com website to access some sort of personal information from the OAuth provider website (Facebook). So catpics.com is able to access your personal Facebook cat pictures, without having full access to your account, and without requiring you to provide your password directly.

There are a number of different ways you can use OAuth 2.0, each of which require different parameters and different user interactions. Which one you should use depends on the nature of the application you are developing, for example:

  • Resource Owner Grant - Requires the user to directly enter their username and password to the application. Useful when you are developing a 1st party application to authenticate with your own servers, e.g. the Facebook mobile app might use a Resource Owner Grant to authenticate with Facebook's servers.
  • Implicit Grant - Authenticating with a server returns an access token to the browser which can then be used to access resources. Useful for Single Page Applications (SPA) where communication cannot be private.
  • Authorisation Code Grant - The typical OAuth grant used by web applications, such as you would use in your ASP.NET apps. This is the flow I will focus on for the rest of the article.

The Authorisation Code Grant

Before explaining the flow fully, we need to clarify some of the terminology. This is where I often see people getting confused with the use of overloaded terms like 'Client'. Unfortunately, these are taken from the official spec, so I will use them here as well, but for the remainder of the article I'll try and use disambiguated names instead.

We will consider an ASP.NET application that finds cats in your Facebook photos by using Facebook's OAuth authorisation.

  • Resource owner (e.g. the user) - This technically doesn't need to be a person as OAuth allows machine-to-machine authorisation, but for our purposes it is the end-user who is using your application.
  • Resource service (e.g. the Facebook API server) - This is the endpoint your ASP.NET application will call to access Facebook photos once it has been given an access token.
  • Client (e.g. your app) - This is the application which is actually making the requests to the Resource service. So in this case it is the ASP.NET application.
  • Authorisation server (e.g. the Facebook authorisation server) - This is the server that allows the user to login to their Facebook account.
  • Browser (e.g. Chrome, Safari) - Not required by OAuth in general, but for our example, the browser is the user-agent that the resource owner/user is using to navigate your ASP.NET application.

The flow

Now we have nailed some of the terminology, we can think about the actual flow of events and data when OAuth 2.0 is in use. The image below gives a detailed overview of the various interactions, from the user first requesting access to a protected resource, to them finally gaining access to it. The flow looks complicated, but the key points to notice are the three calls to Facebook's servers.

An introduction to OAuth 2.0 using Facebook in ASP.NET Core

As we go through the flow, we'll illustrate it from a user's point of view, using the default MVC template with ASP.NET Core Identity, configured to use Facebook as an external authentication mechanism.

Before you can use OAuth in your application, you first need to register your application with the Authorisation server (Facebook). There you will need to provide a REDIRECT_URI and you will be provided a CLIENT_ID and CLIENT_SECRET. The process is different for each Authorisation server so it is best to consult their developer docs for how to go about this. I'll cover how to register your application with Facebook later in this article.

Authorising to obtain an authorisation code

When the user requests a page on your app that requires authorisation, they will be redirected to the login page. Here they can either login using a username and password to create an account directly with the site, or they can choose to login with an external provider - in this case just Facebook.

An introduction to OAuth 2.0 using Facebook in ASP.NET Core

When the user clicks on the Facebook button, the ASP.NET application sends a 302 to the user's browser, with a url similar to the following:

https://www.facebook.com/v2.6/dialog/oauth?client_id=CLIENT_ID&scope=public_profile,email&response_type=code&redirect_uri=REDIRECT_URI&state=STATE_TOKEN  

This url points to the Facebook Authorisation server, and contains a number of replacement fields. The CLIENT_ID and REDIRECT_URI are the ones we registered and were provided when we registered our app in Facebook. The STATE_TOKEN is a CSRF token generated automatically by our application for security reasons (that I won't go into). Finally, the scope field indicates what resources we have requested access to - namely public_profile and their email.

Following this link, the user is directed in their browser to their Facebook login page. Once they have logged in, or if they are already logged in, they must grant authorisation to our registered ASP.NET application to access the requested fields:

An introduction to OAuth 2.0 using Facebook in ASP.NET Core

If the user clicks OK, then Facebook sends another 302 response to the browser, with a url similar to the following:

http://localhost:5000/signin-facebook?code=AUTH_CODE&state=STATE_TOKEN  

Facebook has provided an AUTH_CODE, along with the STATE_TOKEN we supplied with the initial redirect. The state can be verified to ensure that requests are not being forged by comparing it to the version stored in our session state in the ASP.NET application. The AUTH_CODE however is only temporary, and cannot be directly used to access the user details we need. Instead, we need to exchange it for an access token with the Facebook Authorisation server.

Exchanging for an access token

This next portion of the flow occurs entirely server side - communication occurs directly between our ASP.NET application and the Facebook authorisation server.

Our ASP.NET application constructs a POST request to the Facebook Authorisation server, to an access token endpoint. The request sends our app's registered details, including the CLIENT_SECRET and the AUTH_CODE, to the Facebook endpoint:

POST /v2.6/oauth/access_token HTTP/1.1  
Host: graph.facebook.com  
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code&  
code=AUTH_CODE&  
redirect_uri=REDIRECT_URI&  
client_id=CLIENT_ID&  
client_secret=CLIENT_SECRET  

If the token is accepted by Facebook's Authorisation server, then it will respond with (among other things) an ACCESS_TOKEN. This access token allows our ASP.NET application to access the resources (scopes) we requested at the beginning of the flow, but we don't actually have the details we need in order to create the Claims for our user yet.

Accessing the protected resource

After receiving and storing the access token, our app can now contact Facebook's Resource server. We are still completely server-side at this point, communicating directly with Facebook's user information endpoint.

Our application constructs a GET request, providing the ACCESS_TOKEN and a comma separated (and URL encoded) list of requested fields in the querystring:

GET /v2.6/me?access_token=ACCESS_TOKEN&fields=name%2Cemail%2Cfirst_name%2Clast_name  
Host: graph.facebook.com  

Assuming all is good, Facebook's resource server should respond with the requested fields. Your application can then add the appropriate Claims to the ClaimsIdentity and your user is authenticated!

An introduction to OAuth 2.0 using Facebook in ASP.NET Core
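
To make the last step concrete, the snippet below shows a hand-rolled version of turning the Graph API JSON into claims. This is purely illustrative - the Facebook middleware performs this mapping for you, and the exact response shape depends on the fields and Graph API version requested:

// Illustrative only (uses Newtonsoft.Json.Linq and System.Security.Claims).
// "json" is the body returned by the /v2.6/me request above, e.g.
// { "id": "...", "name": "...", "email": "...", "first_name": "...", "last_name": "..." }
var payload = JObject.Parse(json);

var identity = new ClaimsIdentity("Facebook");
identity.AddClaim(new Claim(ClaimTypes.NameIdentifier, (string)payload["id"]));
identity.AddClaim(new Claim(ClaimTypes.Name, (string)payload["name"]));
if (payload["email"] != null)
{
    identity.AddClaim(new Claim(ClaimTypes.Email, (string)payload["email"]));
}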

The description provided here omits a number of things such as handling expiration and refresh tokens, as well as the ASP.NET Core Identity process or associating the login to an email, but hopefully it provides an intermediate view of what is happening as part of a social login.

Example usage in ASP.NET Core

If you're anything like me, when you first start looking at how to implement OAuth in your application, it all seems a bit daunting. There's so many moving parts, different grants and backchannel communication that it seems like it will be a chore to setup.

Luckily, the ASP.NET Core team have solved a massive amount of the headache for you! If you are using ASP.NET Core Identity, then adding external providers is a breeze. The ASP.NET Core documentation provides a great walkthrough to creating your application and getting it all setup.

Essentially, if you have an app that uses ASP.NET Core Identity, all that is required to add Facebook authentication is to install the package in your project.json:

{
  "dependencies": {
    "Microsoft.AspNetCore.Authentication.Facebook": "1.0.0"
  }
}

and configure the middleware in your Startup.Configure method:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)  
{

    app.UseStaticFiles();

    app.UseIdentity();

    app.UseFacebookAuthentication(new FacebookOptions
    {
        AppId = Configuration["facebook:appid"],
        AppSecret = Configuration["facebook:appsecret"],
        Scope = { "email" },
        Fields = { "name", "email" },
        SaveTokens = true,
    });

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

You can see we are loading the AppId and AppSecret (our CLIENT_ID and CLIENT_SECRET) from configuration. On a development machine, these should be stored using the user secrets manager or environment variables (never commit them directly to your repository).
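For example, with the user secrets tool you might store them like this (the key names simply need to match whatever your Startup reads from configuration; facebook:appid and facebook:appsecret match the snippet above):

dotnet user-secrets set facebook:appid <your-app-id>
dotnet user-secrets set facebook:appsecret <your-app-secret>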

If you want to use a different external OAuth provider then you have several options. Microsoft provide a number of packages similar to the Facebook package shown which make integrating external logins simple. There are currently providers for Google, Twitter and (obviously) Microsoft accounts.

In addition, there are a number of open source libraries that provide similar handling of common providers. In particular, the AspNet.Security.OAuth.Providers repository has middleware for providers like GitHub, Foursquare, Dropbox and many others.

Alternatively, if a direct provider is not available, you can use the generic Microsoft.AspNetCore.Authentication.OAuth package on which these all build. For example Jerrie Pelser has an excellent post on configuring your ASP.NET Core application to use LinkedIn.

Registering your application with Facebook Graph API

As discussed previously, before you can use an OAuth provider, you must register your application with the provider to obtain the CLIENT_ID and CLIENT_SECRET, and to register your REDIRECT_URI. I will briefly show how to go about doing this for Facebook.

First, navigate to https://developers.facebook.com and login. If you have not already registered as a developer, you will need to register and agree to Facebook's policies.


Once registered as a developer, you can create a new web application by following the prompts or navigating to https://developers.facebook.com/quickstarts/?platform=web. Here you will be prompted to provide a name for your web application, and then to configure some basic details about it.


Once created, navigate to https://developers.facebook.com/apps and click on your application's icon. You will be taken to your app's basic details. Here you can obtain the App Id and App Secret you will need in your application. Make a note of them (store them using your secrets manager).


The last step is to configure the redirect URI for your application. Click on '+ Add Product' at the bottom of the menu and choose Facebook Login. This will enable OAuth for your application, and allow you to set the REDIRECT_URI for your application.

The redirect path for the Facebook middleware is /signin-facebook. In my case, I was only running the app locally, so my full redirect url was http://localhost:5000/signin-facebook.


Assuming everything is set up correctly, you should now be able to use OAuth 2.0 to log in to your ASP.NET Core application with Facebook!

Final thoughts

In this post I showed how you could use OAuth 2.0 to allow users to login to your ASP.NET Core application with Facebook and other OAuth 2.0 providers.

One point which is often overlooked is the fact that OAuth 2.0 is a protocol for performing authorisation, not authentication. The whole process is aimed at providing access to protected resources, rather than proving the identity of a user, which has some subtle security implications.

Luckily there is another protocol, OpenId Connect, which deals with many of these issues; it essentially provides an additional layer on top of the OAuth 2.0 protocol. I'll be doing a post on OpenId Connect soon, but if you want to learn more, I've provided some additional details below.

In the meantime, enjoy your social logins!


Pedro Félix: On contracts and HTTP APIs

Reading the twitter conversation started by this tweet

made me put into written words some of the ideas that I have about HTTP APIs, contracts and “out-of-band” information.
Since it’s vacation time, I’ll be brief and incomplete.

  • On any interface, it is impossible to avoid having contracts (i.e. shared “out-of-band” information) between provider and consumer. On an HTTP API, the syntax and semantics of HTTP itself is an example of this shared information. If JSON is used as a base for the representation format, then its syntax and semantics rules are another example of shared “out-of-band” information.
  • However, not all contracts are equal in the generality, flexibility and evolvability they allow. Having the contract include a fixed resource URI is very different from having the contract define a link relation. The former prohibits any change to the URI structure (e.g. host name, HTTP vs HTTPS, embedded information), while the latter enables it (see the small sketch after this list). Therefore, designing the contract is a very important task when creating HTTP APIs. And since the transfer contract is already rather well defined by HTTP, most of the design emphasis should be on the representation contract, including the hypermedia components.
  • Also, not all contracts have the same cost to implement (e.g. having hardcoded URIs is probably simpler than having to find links on representations), so (as usual) trade-offs have to be taken into account.
  • When implementing HTTP APIs, it is also very important to have the contract-related areas clearly identified. For me, this typically involves being able to easily answer questions such as: will I be breaking the contract if
    • I change this property name on this model?
    • I add a new property to this model?
    • I change the routing rules (e.g. adding a new path segment)?
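To make the link relation point concrete, here is a small illustrative sketch (mine, not from the original discussion): an order representation that exposes related resources as links instead of forcing clients to build URIs themselves:

{
  "id": "1234",
  "status": "shipped",
  "_links": {
    "self": { "href": "https://api.example.com/orders/1234" },
    "payments": { "href": "https://api.example.com/orders/1234/payments" }
  }
}

A client that hardcodes the /orders/{id}/payments URI template breaks if that path ever changes; a client that follows the payments link relation keeps working.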

Hope this helps.
Looking forward to feedback.

 



Damien Bowden: ASP.NET Core 1.0 with MySQL and Entity Framework Core

This article shows how to use MySQL with ASP.NET Core 1.0 using Entity Framework Core.

Code: https://github.com/damienbod/AspNet5MultipleProject

Thanks to Noah Potash for creating this example and adding his code to this code base.

The Entity Framework MySQL package can be downloaded using the NuGet package SapientGuardian.EntityFrameworkCore.MySql. At present no official provider from MySQL exists for Entity Framework Core which can be used in an ASP.NET Core application.

The SapientGuardian.EntityFrameworkCore.MySql package can be added to the project.json file.

{
  "dependencies": {
    "Microsoft.NETCore.App": {
      "version": "1.0.0",
      "type": "platform"
    },
    "DomainModel": "*",
    "SapientGuardian.EntityFrameworkCore.MySql": "7.1.4"
  },

  "frameworks": {
    "netcoreapp1.0": {
      "imports": [
        "dotnet5.6",
        "dnxcore50",
        "portable-net45+win8"
      ]
    }
  }
}

An EfCore DbContext can be added like any other context supported by Entity Framework Core.

using System;
using System.Linq;
using DomainModel.Model;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;

namespace DataAccessMySqlProvider
{ 
    // >dotnet ef migration add testMigration
    public class DomainModelMySqlContext : DbContext
    {
        public DomainModelMySqlContext(DbContextOptions<DomainModelMySqlContext> options) :base(options)
        { }
        
        public DbSet<DataEventRecord> DataEventRecords { get; set; }

        public DbSet<SourceInfo> SourceInfos { get; set; }

        protected override void OnModelCreating(ModelBuilder builder)
        {
            builder.Entity<DataEventRecord>().HasKey(m => m.DataEventRecordId);
            builder.Entity<SourceInfo>().HasKey(m => m.SourceInfoId);

            // shadow properties
            builder.Entity<DataEventRecord>().Property<DateTime>("UpdatedTimestamp");
            builder.Entity<SourceInfo>().Property<DateTime>("UpdatedTimestamp");

            base.OnModelCreating(builder);
        }

        public override int SaveChanges()
        {
            ChangeTracker.DetectChanges();

            updateUpdatedProperty<SourceInfo>();
            updateUpdatedProperty<DataEventRecord>();

            return base.SaveChanges();
        }

        private void updateUpdatedProperty<T>() where T : class
        {
            var modifiedSourceInfo =
                ChangeTracker.Entries<T>()
                    .Where(e => e.State == EntityState.Added || e.State == EntityState.Modified);

            foreach (var entry in modifiedSourceInfo)
            {
                entry.Property("UpdatedTimestamp").CurrentValue = DateTime.UtcNow;
            }
        }
    }
}

In an ASP.NET Core web application, the DbContext is added to the application in the startup class. In this example, the DbContext is defined in a different class library. The MigrationsAssembly needs to be defined, so that the migrations will work. If the context and the migrations are defined in the same assembly, this is not required.

public Startup(IHostingEnvironment env)
{
	var builder = new ConfigurationBuilder()
		.SetBasePath(env.ContentRootPath)
		.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
		.AddJsonFile("config.json", optional: true, reloadOnChange: true);

	Configuration = builder.Build();
}
		
public void ConfigureServices(IServiceCollection services)
{	
	var sqlConnectionString = Configuration.GetConnectionString("DataAccessMySqlProvider");

	services.AddDbContext<DomainModelMySqlContext>(options =>
		options.UseMySQL(
			sqlConnectionString,
			b => b.MigrationsAssembly("AspNet5MultipleProject")
		)
	);
}

The application uses the configuration from the config.json. This file is used to get the MySQL connection string, which is used in the Startup class.

{
    "ConnectionStrings": {
        "DataAccessMySqlProvider": "server=localhost;userid=damienbod;password=1234;database=damienbod;"
    }
}

MySQL workbench can be used to add the schema ‘damienbod’ to the MySQL database. The user ‘damienbod’ is also required, which must match the defined user in the connection string. If you configure the MySQL database differently, then you need to change the connection string in the config.json file.

mySql_ercore_aspnetcore_01
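If you prefer to set this up from a SQL prompt instead of MySQL Workbench, a rough equivalent would be something like the following (the names and password simply mirror the connection string above; adjust them to your own setup):

CREATE DATABASE damienbod;
CREATE USER 'damienbod'@'localhost' IDENTIFIED BY '1234';
GRANT ALL PRIVILEGES ON damienbod.* TO 'damienbod'@'localhost';
FLUSH PRIVILEGES;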

Now the database migrations can be created and the database can be updated.

> dotnet ef migrations add testMySql
> dotnet ef database update

If successful, the tables are created.

mySql_ercore_aspnetcore_02

The MySQL provider can be used in an MVC 6 controller using constructor injection.

using System.Collections.Generic;
using DomainModel;
using DomainModel.Model;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json;

namespace AspNet5MultipleProject.Controllers
{
    [Route("api/[controller]")]
    public class DataEventRecordsController : Controller
    {
        private readonly IDataAccessProvider _dataAccessProvider;

        public DataEventRecordsController(IDataAccessProvider dataAccessProvider)
        {
            _dataAccessProvider = dataAccessProvider;
        }

        [HttpGet]
        public IEnumerable<DataEventRecord> Get()
        {
            return _dataAccessProvider.GetDataEventRecords();
        }

        [HttpGet]
        [Route("SourceInfos")]
        public IEnumerable<SourceInfo> GetSourceInfos(bool withChildren)
        {
            return _dataAccessProvider.GetSourceInfos(withChildren);
        }

        [HttpGet("{id}")]
        public DataEventRecord Get(long id)
        {
            return _dataAccessProvider.GetDataEventRecord(id);
        }

        [HttpPost]
        public void Post([FromBody]DataEventRecord value)
        {
            _dataAccessProvider.AddDataEventRecord(value);
        }

        [HttpPut("{id}")]
        public void Put(long id, [FromBody]DataEventRecord value)
        {
            _dataAccessProvider.UpdateDataEventRecord(id, value);
        }

        [HttpDelete("{id}")]
        public void Delete(long id)
        {
            _dataAccessProvider.DeleteDataEventRecord(id);
        }
    }
}

The controller API can be called using Fiddler:

POST http://localhost:5000/api/dataeventrecords HTTP/1.1
User-Agent: Fiddler
Host: localhost:5000
Content-Length: 135
Content-Type: application/json;
 
{
  "DataEventRecordId":3,
  "Name":"Funny data",
  "Description":"yes",
  "Timestamp":"2015-12-27T08:31:35Z",
   "SourceInfo":
  { 
    "SourceInfoId":0,
    "Name":"Beauty",
    "Description":"second Source",
    "Timestamp":"2015-12-23T08:31:35+01:00",
    "DataEventRecords":[]
  },
 "SourceInfoId":0 
}

The data is added to the database as required.

mySql_ercore_aspnetcore_03

Links:

https://github.com/SapientGuardian/SapientGuardian.EntityFrameworkCore.MySql

http://dev.mysql.com/downloads/mysql/

Experiments with Entity Framework Core and ASP.NET Core 1.0 MVC

https://docs.efproject.net/en/latest/miscellaneous/connection-strings.html



Anuraj Parameswaran: Using MySql in ASP.NET Core

This post is about using MySql in ASP.NET Core. A few days back the MySQL team announced the release of an official MySql driver for ASP.NET Core. You can find more details about the announcement here. In this post we will explore how to use the MySql driver and EF Migrations for MySql. Here I have created a Web API project using the yeoman aspnet generator, and you need to add the MySql drivers for ASP.NET Core in the project.json file. Here is the project.json file.


Andrew Lock: An introduction to Session storage in ASP.NET Core


A common requirement of web applications is the need to store temporary state data. In this article I discuss the use of Session storage for storing data related to a particular user or browser session.

Options for storing application state

When building ASP.NET Core applications, there are a number of options available to you when you need to store data that is specific to a particular request or session.

One of the simplest methods is to use querystring parameters or post data to send state to subsequent requests. However doing so requires sending that data to the user's browser, which may not be desirable, especially for sensitive data. For that reason, extra care must be taken when using this approach.

Cookies can also be used to store small bits of data, though again, these make a roundtrip to the user's browser, so must be kept small, and if sensitive, must be secured.

For each request there exists a property Items on HttpContext. This is an IDictionary<string, object> which can be used to store arbitrary objects against a string key. The data stored here lasts for just a single request, so can be useful for communicating between middleware components and storing state related to just a single request.

Files and database storage can obviously be used to store state data, whether related to a particular user or the application in general. However they are typically slower to store and retrieve data than other available options.

Session state relies on a cookie identifier to identify a particular browser session, and stores data related to the session on the server. This article focuses on how and when to use Session in your ASP.NET Core application.

Session in ASP.NET Core

ASP.NET Core supports the concept of a Session out of the box - the HttpContext object contains a Session property of type ISession. The get and set portion of the interface is shown below (see the full interface here):

public interface ISession  
{
    bool TryGetValue(string key, out byte[] value);
    void Set(string key, byte[] value);
    void Remove(string key);
}

As you can see, it provides a dictionary-like wrapper over the byte[] data, accessing state via string keys. Generally speaking, each user will have an individual session, so you can store data related to a single user in it. However you cannot technically consider the data secure as it may be possible to hijack another user's session, so it is not advisable to store user secrets in it. As the documentation states:

You can’t necessarily assume that a session is restricted to a single user, so be careful what kind of information you store in Session.

Another point to consider is that the session in ASP.NET Core is non-locking, so if multiple requests modify the session, the last action will win. This is an important point to consider, but should provide a significant performance increase over the locking session management used in the previous ASP.NET 4.X framework.

Under the hood, Session is built on top of IDistributedCache, which can be used as a more generalised cache in your application. ASP.NET Core ships with a number of IDistributedCache implementations, the simplest of which is an in-memory implementation, MemoryCache, which can be found in the Microsoft.Extensions.Caching.Memory package.

MVC also exposes a TempData property on a Controller which is an additional wrapper around Session. This can be used for storing transient data that only needs to be available for a single request after the current one.
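As a quick sketch of what that looks like in practice (assuming session has been configured as described in the next section), a controller might pass a one-off message across a redirect like this:

using Microsoft.AspNetCore.Mvc;

public class OrdersController : Controller
{
    public IActionResult Create()
    {
        // Survives the redirect; removed once it is read on a later request
        TempData["StatusMessage"] = "Your order was created";
        return RedirectToAction(nameof(Index));
    }

    public IActionResult Index()
    {
        // Reading the value marks it for deletion
        ViewData["StatusMessage"] = TempData["StatusMessage"] as string;
        return View();
    }
}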

Configuring your application to use Session

In order to be able to use Session storage in your application, you must configure the required Session services, the Session middleware, and an IDistributedCache implementation. In this example I will be using the in-memory distributed cache as it is simple to set up and use, but the documentation states that this should only be used for development and testing sites. I suspect this reticence is due to it not actually being distributed, and the fact that app restarts will clear the session.

First, add the IDistributedCache implementation and Session state packages to your project.json:

"dependencies": {
  "Microsoft.Extensions.Caching.Memory" : "1.0.0",
  "Microsoft.AspNetCore.Session": "1.0.0"
}

Next, add the required services to Startup in ConfigureServices:

public void ConfigureServices(IServiceCollection services)  
{
    services.AddMvc();

    services.AddDistributedMemoryCache();
    services.AddSession();
}

Finally, configure the session middleware in the Startup.Configure method. As with all middleware, order is important in this method, so you will need to enable the session before you try and access it, e.g. in your MVC middleware:

public void Configure(IApplicationBuilder app)  
{
    app.UseStaticFiles();

    //enable session before MVC
    app.UseSession();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

With all this in place, the Session object can be used to store our data.

Storing data in Session

As shown previously, objects must be stored in Session as a byte[], which is obviously not overly convenient. To alleviate the need to work directly with byte arrays, a number of extensions exist for fetching and setting int and string. Storing more complex objects requires serialising the data.

As an example, consider the simple usage of session below.

public IActionResult Index()  
{
    const string sessionKey = "FirstSeen";
    DateTime dateFirstSeen;
    var value = HttpContext.Session.GetString(sessionKey);
    if (string.IsNullOrEmpty(value))
    {
        dateFirstSeen = DateTime.Now;
        var serialisedDate = JsonConvert.SerializeObject(dateFirstSeen);
        HttpContext.Session.SetString(sessionKey, serialisedDate);
    }
    else
    {
        dateFirstSeen = JsonConvert.DeserializeObject<DateTime>(value);
    }

    var model = new SessionStateViewModel
    {
        DateSessionStarted = dateFirstSeen,
        Now = DateTime.Now
    };

    return View(model);
}

This action simply returns a view with a model that shows the current time, and the time the session was initialised.

First, the Session is queried using GetString(key). If this is the first time that action has been called, the method will return null. In that case, we record the current date, serialise it to a string using Newtonsoft.Json, and store it in the session using SetString(key, value).

On subsequent requests, the call to GetString(key) will return our serialised DateTime which we can set on our view model for display. After the first request to our action, the DateSessionStarted property will differ from the Now property on our model:

An introduction to Session storage in ASP.NET Core

This was a very trivial example, but you can store any data that is serialisable to a byte[] in the Session. The JSON serialisation used here is an easy option as it is likely already used in your project. Obviously, serialising and deserialising large objects on every request could be a performance concern, so be sure to think about the implications of using Session storage in your application.
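If you find yourself doing this a lot, a common convenience (just a sketch, reusing Json.NET as in the example above) is to wrap the serialisation in a pair of extension methods:

using Microsoft.AspNetCore.Http;
using Newtonsoft.Json;

public static class SessionObjectExtensions
{
    public static void SetObject<T>(this ISession session, string key, T value)
    {
        session.SetString(key, JsonConvert.SerializeObject(value));
    }

    public static T GetObject<T>(this ISession session, string key)
    {
        var value = session.GetString(key);
        // Returns default(T) when the key has not been set
        return value == null ? default(T) : JsonConvert.DeserializeObject<T>(value);
    }
}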

Customising Session configuration

When configuring your session in Startup, you can provide an instance of SessionOptions or a configuration lambda to the UseSession or AddSession calls respectively. This allows you to customise details about the session cookie that is used to track the session in the browser. For example, you can customise the cookie name, domain, path and how long the session may be idle before it expires. You will likely not need to change the defaults, but it may be necessary in some cases:

services.AddSession(opts =>  
    {
        opts.CookieName = ".NetEscapades.Session";
        opts.IdleTimeout = TimeSpan.FromMinutes(5);
    });

Note the cookie name is not the default .AspNetCore.Session:

An introduction to Session storage in ASP.NET Core

It's also worth noting that in ASP.NET Core 1.0, you cannot currently mark the cookie as Secure. This has been fixed here so should be in the 1.1.0 release (probably Q4 2016 / Q1 2017).

Summary

In this post we saw an introduction to using Session storage in an ASP.NET Core application. We saw how to configure the required services and middleware, and how to use it to store and retrieve simple strings to share state across requests.

As mentioned previously, it's important to not store sensitive user details in Session due to potential security issues, but otherwise it is a useful location for storage of serialisable data.

Further Reading


Taiseer Joudeh: Azure Active Directory B2C Overview and Policies Management – Part 1

Prior to joining Microsoft I was heavily involved in architecting and building a large scale HTTP API which would be consumed by a large number of mobile application consumers on multiple platforms (iOS, Android, and Windows Phone). Securing the API and architecting the Authentication and Authorization part for the API was one of the large and challenging features which we built from scratch, as we needed only to support local database accounts (allowing users to login using their own existing email/username and password). As well, writing proprietary code for each platform to consume the Authentication and Authorization endpoints, storing the tokens, and refreshing them silently was a bit challenging and required skilled mobile apps developers to implement it securely on the different platforms. Don’t ask me why we didn’t use Xamarin for cross-platform development, it is a long story 🙂 During developing the back-end API I have learned that building an identity management solution is not a trivial feature, and it is better to outsource it to a cloud service provider if this is a feasible option and you want your dev team to focus on building what matters; your business features!

Recently Microsoft announced the general availability, in North America data centers, of a service named “Azure Active Directory B2C”, which in my humble opinion will fill the gap of having a cloud identity and access management service targeted especially at mobile app and web developers who need to build apps for consumers; consumers who want to sign in with their existing email/usernames, create new app-specific local accounts, or use their existing social accounts (Facebook, Google, LinkedIn, Amazon, Microsoft account) to sign in to the mobile/web app.

Azure Active Directory B2C

The Azure Active Directory B2C will allow backend developers to focus on the core business of their services while they outsource the identity management to Azure Active Directory B2C, including signing in, signing up, password reset, editing profiles, etc. One important feature to mention here is that the service can run on the Azure cloud while your HTTP API is hosted on-premise; there is no need to have everything in the cloud if your use case requires hosting your services on-premise. You can read more about all the features of Azure Active Directory B2C by visiting their official page.

The Azure Active Directory B2C can integrate seamlessly with the new unified authentication library named MSAL (Microsoft Authentication Library); this library will help developers to obtain tokens from Active Directory, Azure Active Directory B2C, and MSA for accessing protected resources. The library will support different platforms covering: .NET 4.5+ (desktop apps and web apps), Windows Universal Apps, Windows Store apps (Windows 8 and above), iOS (via Xamarin), Android (via Xamarin), and .NET Core. The library is still in preview, so it should not be used in production applications yet.

So during this series of posts, I will be covering different aspects of Azure Active Directory B2C as well as integrating it with MSAL (Microsoft Authentication Library) on different front-end platforms (Desktop Application and Web Application).

Azure Active Directory B2C Overview and Policies Management

The source code for this tutorial is available on GitHub.

The MVC APP has been published on Azure App Services, so feel free to try it out using the Base URL (https://aadb2cmvcapp.azurewebsites.net)

I broke down this series into multiple posts which I’ll be posting gradually, posts are:

What we’ll build in this tutorial?

During this post we will build a Web API 2 HTTP API which will be responsible for managing shipping orders (i.e. listing orders, adding new ones, etc…); the orders data will be stored in Azure Table Storage, while we will outsource all the identity management to Azure Active Directory B2C, where service users/consumers will rely on AAD B2C to sign up for new accounts using their app-specific email/password, and then be allowed to login using those app-specific accounts.

That said, we need front-end apps to manipulate orders and communicate with the HTTP API. We will build different types of apps during the series of posts, and some of them will use MSAL.

So the components that all the tutorials will be built from are:

  • Azure Active Directory B2C tenant for identity management, it will act as our IdP (Identity Provider).
  • ASP.NET Web API 2 acting as HTTP API Service and secured by the Azure Active Directory B2C tenant.
  • Different front end apps which will communicate with Azure Active Directory B2C to sign-in users, obtain tokens, send them to the protected HTTP API, and retrieve results from the HTTP API and project it on the front end applications.

So let’s get our hands dirty and start building the tutorial.

Building the Back-end Resource (Web API)

Step 1: Creating the Web API Project

In this tutorial, I’m using Visual Studio 2015 and .NET Framework 4.5.2. To get started, create an empty solution and name it “WebApiAzureAcitveDirectoryB2C.sln”, then add a new empty ASP.NET Web application named “AADB2C.Api”; the selected template for the project will be the “Empty” template with no core dependencies, check the image below:

VS2015 Web Api Template

Once the project has been created, click on its properties and set “SSL Enabled” to “True”, copy the “SSL URL” value and right-click on the project, select “Properties”, then select the “Web” tab from the left side and paste the “SSL URL” value in the “Project Url” text field and click “Save”. We need to allow the https scheme locally once we debug the application. Check the image below:

Web Api SSL Enable

Note: If this is the first time you enable SSL locally, you might get prompted to install local IIS Express Certificate, click “Yes”.

Step 2: Install the needed NuGet Packages to bootstrap the API

This project is empty so we need to install the NuGet packages needed to set up our Owin server and configure ASP.NET Web API 2 to be hosted within an Owin server, so open the NuGet Package Manager Console and install the below packages:

Install-Package Microsoft.AspNet.WebApi -Version 5.2.3
Install-Package Microsoft.AspNet.WebApi.Owin -Version 5.2.3
Install-Package Microsoft.Owin.Host.SystemWeb -Version 3.0.1

Step 3: Add Owin “Startup” Class

We need to build the API components ourselves because we didn’t use a ready-made template; this way is cleaner and you understand the need and use for each component you install in your solution. So add a new class named “Startup”. It will contain the code below; please note that the method “ConfigureOAuth” is left empty intentionally as we will visit this class many times after we create our Azure Active Directory B2C tenant. What I need to do now is to build the API without any protection, then protect it with our new Azure Active Directory B2C IdP:

public class Startup
    {

        public void Configuration(IAppBuilder app)
        {
            HttpConfiguration config = new HttpConfiguration();

            // Web API routes
            config.MapHttpAttributeRoutes();

            ConfigureOAuth(app);

            app.UseWebApi(config);

        }

        public void ConfigureOAuth(IAppBuilder app)
        {
           
        }
    }

Step 4: Add support to store data on Azure Table Storage

Note: I have decided to store the fictitious data about customers’ orders in Azure Table Storage as this service will be published online and I need to demonstrate how to distinguish users’ data based on the signed-in user. Feel free to use whatever permanent storage you like to complete this tutorial; the implementation here is simple so you can replace it with SQL Server, MySQL, or any other NoSQL store.

So let’s add the needed NuGet packages which allow us to access Azure Table Storage from a .NET client. I recommend referring to the official documentation if you need to read more about Azure Table Storage.

Install-Package WindowsAzure.Storage
Install-Package Microsoft.WindowsAzure.ConfigurationManager

Step 5: Add Web API Controller responsible for orders management

Now we want to add a controller which is responsible for orders management (adding orders, listing all orders which belong to a certain user). So add a new controller named “OrdersController” inside a folder named “Controllers” and paste the code below:

[RoutePrefix("api/Orders")]
    public class OrdersController : ApiController
    {
        CloudTable cloudTable = null;

        public OrdersController()
        {
            // Retrieve the storage account from the connection string.
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));

            // Create the table client.
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();

            // Retrieve a reference to the table.
            cloudTable = tableClient.GetTableReference("orders");

            // Create the table if it doesn't exist.
            // Uncomment the line below if you are not sure whether the table has already been created;
            // there is no need to keep checking that the table exists.
            //cloudTable.CreateIfNotExists();
        }

        [Route("")]
        public IHttpActionResult Get()
        {
         
            //This will be read from the access token claims.
            var userId = "TaiseerJoudeh";

            TableQuery<OrderEntity> query = new TableQuery<OrderEntity>()
                .Where(TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, userId));

            var orderEntities = cloudTable.ExecuteQuery(query).Select(
                o => new OrderModel() {
                OrderID = o.RowKey,
                ShipperName = o.ShipperName,
                ShipperCity = o.ShipperCity,
                TS = o.Timestamp
                });

            return Ok(orderEntities);
        }

        [Route("")]
        public IHttpActionResult Post (OrderModel order)
        {
            //This will be read from the access token claims.
            var userId = "TaiseerJoudeh";

            OrderEntity orderEntity = new OrderEntity(userId);

            orderEntity.ShipperName = order.ShipperName;
            orderEntity.ShipperCity = order.ShipperCity;

            TableOperation insertOperation = TableOperation.Insert(orderEntity);

            // Execute the insert operation.
            cloudTable.Execute(insertOperation);

            order.OrderID = orderEntity.RowKey;

            order.TS = orderEntity.Timestamp;

            return Ok(order);
        }
    }

    #region Classes

    public class OrderModel
    {
        public string OrderID { get; set; }
        public string ShipperName { get; set; }
        public string ShipperCity { get; set; }
        public DateTimeOffset TS { get; set; }
    }

    public class OrderEntity : TableEntity
    {
        public OrderEntity(string userId)
        {
            this.PartitionKey = userId;
            this.RowKey = Guid.NewGuid().ToString("N");
        }

        public OrderEntity() { }

        public string ShipperName { get; set; }

        public string ShipperCity { get; set; }

    }

    #endregion

What we have implemented above is very straightforward: in the constructor of the controller, we have read the connection string for the Azure Table Storage from the web.config and created a cloud table instance which references the table named “orders”. This table will hold the orders data.

The structure of the table (if you are thinking in a SQL context, even though Azure Table Storage is a NoSQL store) is simple and is represented by the class named “OrderEntity”: the “PartitionKey” will represent the “UserId”, and the “RowKey” will represent the “OrderId”. The “OrderId” will always contain an auto-generated value.

Please note the following: a) You should not store the connection string for the table storage in web.config; it is better to use Azure Key Vault as a secure way to store your keys, or you can set it from Azure App Settings if you are going to host the API on Azure. b) The “UserId” is fixed for now, but eventually it will be read from the authenticated user’s access token claims once we establish the IdP and configure our API to rely on Azure Active Directory B2C to protect it.
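For local development, the constructor above reads that connection string from an appSettings key; a minimal sketch of the entry (using the local storage emulator purely as an illustration) would be:

<appSettings>
  <!-- For local testing only; keep real account keys in Azure Key Vault or App Settings -->
  <add key="StorageConnectionString" value="UseDevelopmentStorage=true" />
</appSettings>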

By taking a look at the “POST” action, you will notice that we are adding a new record to the table storage, and the “UserId” is fixed for now; we will revisit this and fix it. The same applies to the “GET” action where we read the data from Azure Table Storage for a fixed user.

Now the API is ready for testing; you can issue a GET request or POST request and the data will be stored under the fixed “UserId” which is “TaiseerJoudeh”. Note that there is no Authorization header set as the API is still publicly available to anyone. Below is a reference for the POST request:

POST Request:

POST /api/orders HTTP/1.1
Host: localhost:44339
Content-Type: application/json
Cache-Control: no-cache
Postman-Token: 6f1164fa-8560-98fd-6566-892517f1003e

{
    "shipperName" :"Nike",
    "shipperCity": "Clinton"
}

Configuring the Azure Active Directory B2C Tenant

Step 5: Create an Azure Active Directory B2C tenant

Now we need to create the Azure Active Directory B2C tenant; for the time being you can create it from the Azure Classic Portal, and you will be able to manage all the settings from the new Azure Preview Portal.

  • To start the creation process login to the classic portal and navigate to: New > App Services > Active Directory > Directory > Custom Create as the image below:

Azure AD B2C Directory

  • A new popup will appear as in the image below asking you to fill in some information. Note that if you selected one of the following countries (United States, Canada, Costa Rica, Dominican Republic, El Salvador, Guatemala, Mexico, Panama, Puerto Rico and Trinidad and Tobago) your Azure AD B2C will be a Production-Scale tenant, as Azure AD B2C is GA only in the countries listed (North America). This will change in the coming months and more countries will be announced as GA. You can read more about the road map of Azure AD B2C here. Do not forget to check “This is a B2C directory” for sure 🙂

Azure AD B2C New Directory

  • After your tenant has been created, it will appear in the Active Directory extension bar as in the image below; select the tenant, click on the “Configure” tab, then click on “Manage B2C Settings”. This will open the new Azure Preview Portal where we will start registering the App and managing policies there.

Azure AD B2C Manage Settings

Step 6: Register our application in Azure AD B2C tenant

Now we need to register the application under the tenant we’ve created; this will allow us to add the sign-in, sign-up and edit profile features to our app. To do so, follow the steps below:

  • Select “Applications” from the “Settings” blade for the B2C tenant we’ve created, then click on the “Add” Icon on the top
  • A new blade will open asking you to fill the following information
    • Name: This will be the application name that will describe your application to consumers. In our case I have used “BitofTech Demo App”
    • Web API/Web APP: we need to turn this on as we are protecting a Web Api and Web app.
    • Allow implicit flow: We will turn this on as well as we need to use OpenId connect protocol to obtain an id token
    • Reply URL: these are the registered URLs where Azure Active Directory B2C will send the authentication response (tokens) or error responses. The client applications calling the API can specify the Reply URL, but it should be registered in the tenant by the administrator in order to work. In our case I will set the Reply URL now to the Web API URL which is “https://localhost:44339/”; this will be good for testing purposes, but in the next post I will add another URL for the Web application we will build to consume the API. As you notice, you can register many Reply URLs so you can support different environments (Dev, staging, production, etc…)
    • Native Client: You need to turn this on if you are building mobile application or desktop application client, for the mean time there is no need to turn it on as we are building web application (Server side app) but we will visit this again in the coming posts and enable this once we build a desktop app to consume the API.
    • App key or App secret: This will be used to generate a “Client Secret” for the App which is needed to authenticate the App in the Authorization/Hybrid OAuth 2.0 flow. We will need this in the future posts once I describe how we can obtain access tokens, open id tokens and refresh tokens using Raw HTTP requests. For the mean time, there is no need to generate an App key.
  • Once you fill in all the information, click “Save” and the application will be created and an Application ID will be generated; copy this value and keep it in Notepad as we will use it later on.
  • Below is an image which shows the App after filling in the needed information:

Azure AD B2C New App

Step 7: Selecting Identity Providers

Azure Active Directory B2C offers multiple social identity providers (Microsoft, Google, Amazon, LinkedIn and Facebook) in addition to the local app-specific accounts. The local account can be configured to use a “Username” or “Email” as a unique attribute for the account; we will use the “Email”, and we will use only the local accounts in this tutorial to keep things simple and straightforward.

You can change the “Identity Providers” by selecting the “Identity providers” blade. This link will be helpful if you need to configure it.

Step 8: Add custom attributes

Azure AD B2C directory comes with a set of “built-in” attributes that represent information about the user, attributes such as Email, First name, Last name, etc… Those attributes can be extended in case you need to add extra information about the user upon signing up (creating a profile) or editing it.

At the moment you can only create an attribute and set its data type as “String”; I believe this limitation will be resolved in the coming releases.

To do so, select the “User attributes” blade and click on the “Add” icon; a new blade will open asking you to fill in the attribute name, data type and description. In our case, I’ve added an attribute named “Gender” to capture the gender of the user during the registration process (profile creation or sign up). Below is an image which represents this process:

B2C Custom Attribute

We will see in the next steps how we can retrieve this custom attribute value in our application. There are 2 ways to do so: the first is to include it in the claims encoded in the token, and the second is to use the Azure AD Graph API. We will use the first method.

In the next step, I will show you how to include this custom attribute in the sign-up policy.

Step 9: Creating different policies

The unique thing about Azure Active Directory B2C is its extensible policy framework, which allows developers to define an easy and reusable way to build the identity experience that they want to provide for application consumers (end users). So for example, to enroll a new user in your app and create an app-specific local account, you need to create a Signup policy where you configure the attributes you need to capture from the user, you configure the attributes (claims) you need to retrieve after successfully executing the policy, you can configure which identity providers consumers are allowed to use, and you can configure the look and feel of the signup page by doing simple modifications such as changing label names, the order of the fields, or replacing the UI entirely (more about this in a future post). All this applies to the other policies used to implement identity features such as signing in and editing the profile.

As well by using the extensible policies framework we can create multiple policies of different types in our tenant and use them in our applications as needed. Policies can be reused across applications, as well they can be exported and uploaded for easier management. This allows us to define and modify identity experiences with minimal or no changes to application code.

Now let’s create the first policy, the “Signup” policy, which will build the experience for users during the signup process, and I will show you how to test it out. To do so, follow the steps below:

  • Select the “Sign-up” policies.
  • Click on the “Add” icon at the top of the blade.
  • Select a name for the policy, picking up a clear name is important as we will reference the name in our application, in our case I’ve used “signup”.
  • Select the “Identity providers” and select “Email signup”. In our case this is the only provider we have configured for this tenant so far.
  • Select the “Sign-up” attributes. Now we have the chance to choose the attributes we want to collect from the user during the signup process. I have selected 6 attributes as in the image below.
  • Select the “Application claims”. Now we have the chance to choose the claims we want to return in the tokens sent back to our application after a successful signup process; remember that those claims are encoded within the token, so do not go crazy adding many claims as the token size will increase. I have selected 9 claims as in the image below.
  • Finally, click on “Create” button.

Signup Policy Attribute

Notes:

  • The policy that will be created will be named “B2C_1_signup”; all the policies will be prefixed by the “B2C_1_” fragment, do not ask me why but it seems it’s an implementation detail 🙂
  • You can change the attribute label names (Surname -> Last Name) as well as change the order of the fields by dragging the attributes, and set whether the field is mandatory or not. Notice how I changed the custom attribute “Gender” to display as a drop down list and have fixed items such as “Male” and “Female”. All this can be done by selecting the “Page UI customization” section.
  • Once the policy has been created you can configure the ID token and refresh token expiration date/time by selecting the section “Token, session & SSO config”. I will cover this in the coming posts; for now we will keep the defaults for all policies we will create, and you can read more about this here.
  • Configuring the ID token and refresh token expiration times is done per policy, not per tenant; IMO I do not know why this was not done per tenant. Per-policy configuration for sure gives you better flexibility and finer grained control on how to manage policies, but I can not think of a use case where you would want different expiration dates for different policies. We will keep them the same for all policies we will create unless we are testing out something.
  • Below is an image showing how to change the order of the custom attribute “Gender” between the other fields, as well as how to set the “User input type” to use a drop down list:

Azure B2C Edit Attribute

Step 10: Creating the Sign in and Edit Profile policies

I won’t bore you with the repeated details for creating the other 2 policies which we will be using during this tutorial; they all follow the same approach I have illustrated in the previous step. Please note the below about the newly created policies:

  • The policy which will be used to sign in the user (login) will be named “Signin“, so after creating it will be named “B2C_1_Signin“.
  • The policy which will be used to edit the created profile will be named “Editprofile“, so after creating it will be named “B2C_1_Editprofile“.
  • Do not forget to configure the Gender custom attribute for the “Editprofile” policy, as we need to display the values in the drop-down list instead of a text box.
  • Select the same claims we have already selected for the signup policy (8 claims)
  • You can click “Run now” button and test the new policies using a user that you already created from the sign up policy (Jump to next step before).
  • For the time being, the only way to execute those policies and test them out in this post and the coming one is to use the “Run now” button, until I build a web application which communicates with the Web API and the Azure Active Directory B2C tenant.

Step 11: Testing the created signup policy in Azure AD B2C tenant

Azure Active Directory B2C provides us with the ability to test the policies locally without leaving the Azure portal. To do so, all you need to click on is the “Run now” button and select the preferred Reply URL in case you registered many Reply URLs when you registered the App; in our case we will have only a single app and a single Reply URL. The Id token will be returned as a hash fragment to the selected Reply URL.

Once you click the “Run now” button a new window will open and you will be able to test the sign up policy by filling in the needed information. Notice that you need to use a real email in order to receive an activation code and verify that you own this email; I believe the Azure AD team implemented email verification before creating the account to avoid creating many accounts with unreal emails that will never get verified. Smart decision.

Once you receive the verification email with the six digit code, you need to enter it in the verification code text box and click on “verify”; if all is good the “Create” button is enabled and you can complete filling in the profile. You can change the content of the email by following this link.

The password policy (complexity) used here is the same one used in Azure Active Directory, you can read more about it here.

After you fill in all the mandatory attributes as in the image below, click create and you will notice that a redirect takes place to the Reply URL and there is an Id token returned as a hash fragment. This Id token contains all the claims specified in the policy; you can test it out by using a JWT debugging tool such as calebb.net. If we debug the token we’ve received after running the sign up policy, you will see all the claims we asked for encoded in this JWT token.

Azure AD B2C Signup Test

Notes about the claims:

  • The new “Gender” custom attribute we have added is returned under a claim named “extension_Gender“. It seems that all the custom attributes are prefixed by the phrase “extension_”; I need to validate this with the Azure AD team.
  • The globally user unique identifier is returned in the claim named “oid”, we will depend on this claim value to distinguish between registered users.
  • This token is generated based on the policy named “B2C_1_signup”, note the claim named “tfp”.

To have a better understanding of each claim meaning, please check this link.

{
  "exp": 1471954089,
  "nbf": 1471950489,
  "ver": "1.0",
  "iss": "https://login.microsoftonline.com/tfp/3d960283-c08d-4684-b378-2a69fa63966d/b2c_1_signup/v2.0/",
  "sub": "Not supported currently. Use oid claim.",
  "aud": "bc348057-3c44-42fc-b4df-7ef14b926b78",
  "nonce": "defaultNonce",
  "iat": 1471950489,
  "auth_time": 1471950489,
  "oid": "31ef9c5f-6416-48b8-828d-b6ce8db77d61",
  "emails": [
    "ahmad.hasan@gmail.com"
  ],
  "newUser": true,
  "given_name": "Ahmad",
  "family_name": "Hasan",
  "extension_Gender": "M",
  "name": "Ahmad Hasan",
  "country": "Jordan",
  "tfp": "B2C_1_signup"
}
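Looking ahead, once the Web API is configured to validate these tokens, the values above surface as claims on the current principal. As a rough sketch only (the claim type names can differ depending on how the JWT handler maps them, so treat the names below as assumptions to verify):

// requires: using System.Security.Claims;
var principal = ClaimsPrincipal.Current;

// The unique user identifier ("oid") may also appear under a mapped claim type.
var userId = principal.FindFirst("oid")?.Value
    ?? principal.FindFirst("http://schemas.microsoft.com/identity/claims/objectidentifier")?.Value;

// Custom attributes come back prefixed with "extension_".
var gender = principal.FindFirst("extension_Gender")?.Value;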

This post turned out to be longer than anticipated, so I will continue in the next post, where I will show you how to reconfigure our Web API project to rely on our Azure AD B2C IdP and validate those tokens.

The source code for this tutorial is available on GitHub.

The MVC APP has been published on Azure App Services, so feel free to try it out using the Base URL (https://aadb2cmvcapp.azurewebsites.net)

Follow me on Twitter @tjoudeh

Resources

The post Azure Active Directory B2C Overview and Policies Management – Part 1 appeared first on Bit of Technology.


Andrew Lock: A look behind the JWT bearer authentication middleware in ASP.NET Core


This is the next in a series of posts about Authentication and Authorisation in ASP.NET Core. In the first post we had a general introduction to authentication in ASP.NET Core, and then in the previous post we looked in more depth at the cookie middleware, to try and get to grips with the process under the hood of authenticating a request.

In this post, we take a look at another middleware, the JwtBearerAuthenticationMiddleware, again looking at how it is implemented in ASP.NET Core as a means to understanding authentication in the framework in general.

What is Bearer Authentication?

The first concept to understand is Bearer authentication itself, which uses bearer tokens. According to the specification, a bearer token is:

A security token with the property that any party in possession of the token (a "bearer") can use the token in any way that any other party in possession of it can. Using a bearer token does not require a bearer to prove possession of cryptographic key material (proof-of-possession).

In other words, by presenting a valid token you will be automatically authenticated, without having to match or present any additional signature or details to prove it was granted to you. It is often used in the OAuth 2.0 authorisation framework, such as you might use when signing in to a third-party site using your Google or Facebook accounts for example.

In practice, a bearer token is usually presented to the remote server using the HTTP Authorization header:

Authorization: Bearer BEARER_TOKEN  

where BEARER_TOKEN is the actual token. An important point to bear in mind is that bearer tokens entitle whoever is in their possession to access the resource they protect. That means you must be sure to only use tokens over SSL/TLS to ensure they cannot be intercepted and stolen.

What is a JWT?

A JSON Web Token (JWT) is a web standard that defines a method for transferring claims as a JSON object in such a way that they can be cryptographically signed or encrypted. It is used extensively on the internet today, in particular in many OAuth 2 implementations.

JWTs consist of 3 parts:

  1. Header: A JSON object which indicates the type of the token (JWT) and the algorithm used to sign it
  2. Payload: A JSON object with the asserted Claims of the entity
  3. Signature: A string created using a secret and the combined header and payload. Used to verify the token has not been tampered with.

These are then base64Url encoded and separated with a '.'. Using JSON Web Tokens allows you to send claims in a relatively compact way, and to protect them against modification using the signature. One of their main advantages is that they can allow stateless applications by storing the required claims in the token, rather than server side in a session store.

I won't go into all the details of JWT tokens, or the OAuth framework here, as that is a huge topic on its own. In this post I'm more interested in how the middleware and handlers interact with the ASP.NET Core authentication framework. If you want to find out more about JSON web tokens, I recommend you check out jwt.io and auth0.com as they have some great information and tutorials.

Just to give a vague idea of what a JSON Web Token looks like in practice, the header and payload given below:

{
  "alg": "HS256",
  "typ": "JWT"
}
{
  "name": "Andrew Lock"
}

could be encoded in the following header:

Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJuYW1lIjoiQW5kcmV3IExvY2sifQ.RJJq5u9ITuNGeQmWEA4S8nnzORCpKJ2FXUthuCuCo0I

JWT bearer authentication in ASP.NET Core

You can add JWT bearer authentication to your ASP.NET Core application using the Microsoft.AspNetCore.Authentication.JwtBearer package. This provides middleware to allow validating and extracting JWT bearer tokens from a header. There is currently no built-in mechanism for generating the tokens from your application, but if you need that functionality, there are a number of possible projects and solutions to enable that such as IdentityServer 4. Alternatively, you could create your own token middleware as is shown in this post.

Once you have added the package to your project.json, you need to add the middleware to your Startup class. This will allow you to validate the token and, if valid, create a ClaimsPrincipal from the claims it contains.

You can add the middleware to your application using the UseJwtBearerAuthentication extension method in your Startup.Configure method, passing in a JwtBearerOptions object:

app.UseJwtBearerAuthentication(new JwtBearerOptions  
{
    AutomaticAuthenticate = true,
    AutomaticChallenge = true,
    TokenValidationParameters = new TokenValidationParameters
    {
        ValidateIssuer = true,
        ValidIssuer = "https://issuer.example.com",

        ValidateAudience = true,
        ValidAudience = "https://yourapplication.example.com",

        ValidateLifetime = true,
    }
});

There are many options available on the JwtBearerOptions - we'll cover some of these in more detail later.

The JwtBearerMiddleware

In the previous post we saw that the CookieAuthenticationMiddleware inherits from the base AuthenticationMiddleware<T>, and the JwtBearerMiddleware is no different. When created, the middleware performs various precondition checks, and initialises some default values. The most important check is to initialise the ConfigurationManager, if it has not already been set.

The ConfigurationManager object is responsible for retrieving, refreshing and caching the configuration metadata required to validate JWTs, such as the issuer and signing keys. These can either be provided directly to the ConfigurationManager by configuring the JwtBearerOptions.Configuration property, or by using a back channel to fetch the required metadata from a remote endpoint. The details of this configuration are outside the scope of this article.
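Without going into those details, the most common setup is simply to set the Authority on the options and let the middleware construct the ConfigurationManager for you; a minimal sketch (with placeholder URLs) looks like:

app.UseJwtBearerAuthentication(new JwtBearerOptions
{
    // The middleware combines the Authority with the standard
    // /.well-known/openid-configuration path to discover the issuer and signing keys
    Authority = "https://issuer.example.com",
    Audience = "https://yourapplication.example.com"
});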

As in the cookie middleware, the middleware implements the only required method from the base class, CreateHandler(), and returns a newly instantiated JwtBearerHandler.

The JwtBearerHandler HandleAuthenticateAsync method

Again, as with the cookie authentication middleware, the handler is where all the work really takes place. JwtBearerHandler derives from AuthenticationHandler<JwtBearerOptions>, overriding the required HandleAuthenticateAsync() method.

This method is responsible for deserialising the JSON Web Token, validating it, and creating an appropriate AuthenticateResult with an AuthenticationTicket (if the validation was successful). We'll walk through the bulk of it in this section, but it is pretty long, so I'll gloss over some of it!

On MessageReceived

The first section of the HandleAuthenticateAsync method allows you to customise the whole bearer authentication method.

// Give application opportunity to find from a different location, adjust, or reject token
var messageReceivedContext = new MessageReceivedContext(Context, Options);

// event can set the token
await Options.Events.MessageReceived(messageReceivedContext);  
if (messageReceivedContext.CheckEventResult(out result))  
{
    return result;
}

// If application retrieved token from somewhere else, use that.
token = messageReceivedContext.Token;  

This section calls out to the MessageReceived event handler on the JwtBearerOptions object. You are provided the full HttpContext, as well as the JwtBearerOptions object itself. This allows you a great deal of flexibility in how your application uses tokens. You could validate the token yourself, using any other side information you may require, and set the AuthenticateResult explicitly. If you take this approach and handle the authentication yourself, the method will just directly return the AuthenticateResult after the call to messageReceivedContext.CheckEventResult.

Alternatively, you could obtain the token from somewhere else, such as a different header, or even a cookie. In that case, the handler will use the provided token for all further processing.
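As a rough sketch (the "access_token" query string parameter here is just an example, not something the middleware looks for by default), pulling the token from the query string might look like this:

app.UseJwtBearerAuthentication(new JwtBearerOptions
{
    // ... other options as shown earlier
    Events = new JwtBearerEvents
    {
        OnMessageReceived = context =>
        {
            // read the token from a query string value instead of the Authorization header
            var token = context.HttpContext.Request.Query["access_token"];
            if (!string.IsNullOrEmpty(token))
            {
                context.Token = token;
            }
            return Task.FromResult(0);
        }
    }
});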

Read Authorization header

In the next section, assuming a token was not provided by the messageReceivedContext, the method tries to read the token from the Authorization header:

if (string.IsNullOrEmpty(token))  
{
    string authorization = Request.Headers["Authorization"];

    // If no authorization header found, nothing to process further
    if (string.IsNullOrEmpty(authorization))
    {
        return AuthenticateResult.Skip();
    }

    if (authorization.StartsWith("Bearer ", StringComparison.OrdinalIgnoreCase))
    {
        token = authorization.Substring("Bearer ".Length).Trim();
    }

    // If no token found, no further work possible
    if (string.IsNullOrEmpty(token))
    {
        return AuthenticateResult.Skip();
    }
}

As you can see, if the header is not found, or it does not start with the string "Bearer ", then the remainder of the authentication is skipped. Authentication would pass to the next handler until it finds a middleware to handle it.

Update TokenValidationParameters

At this stage we have a token, but we still need to validate it and deserialise it to a ClaimsPrincipal. The next section of HandleAuthenticateAsync uses the ConfigurationManager object created when the middleware was instantiated to update the issuer and signing keys that will be used to validate the token:

if (_configuration == null && Options.ConfigurationManager != null)  
{
    _configuration = await Options.ConfigurationManager.GetConfigurationAsync(Context.RequestAborted);
}

var validationParameters = Options.TokenValidationParameters.Clone();  
if (_configuration != null)  
{
    if (validationParameters.ValidIssuer == null && !string.IsNullOrEmpty(_configuration.Issuer))
    {
        validationParameters.ValidIssuer = _configuration.Issuer;
    }
    else
    {
        var issuers = new[] { _configuration.Issuer };
        validationParameters.ValidIssuers = (validationParameters.ValidIssuers == null ? issuers : validationParameters.ValidIssuers.Concat(issuers));
    }

    validationParameters.IssuerSigningKeys = (validationParameters.IssuerSigningKeys == null ? _configuration.SigningKeys : validationParameters.IssuerSigningKeys.Concat(_configuration.SigningKeys));
}

First _configuration, a private field, is updated with the latest (cached) configuration details from the ConfigurationManager. The TokenValidationParameters specified when configuring the middleware are then cloned for this request, and augmented with the additional configuration. Any other validation specified when the middleware was added will also be validated (for example, we included ValidateIssuer, ValidateAudience and ValidateLifetime requirements in the example above).

Validating the token

Everything is now set for validating the provided token. The JwtBearerOptions object contains a list of ISecurityTokenValidator, so you can potentially use custom token validators, but the default is to use the built-in JwtSecurityTokenHandler. This will validate the token, confirm it meets all the requirements and has not been tampered with, and then return a ClaimsPrincipal.

List<Exception> validationFailures = null;  
SecurityToken validatedToken;  
foreach (var validator in Options.SecurityTokenValidators)  
{
    if (validator.CanReadToken(token))
    {
        ClaimsPrincipal principal;
        try
        {
            principal = validator.ValidateToken(token, validationParameters, out validatedToken);
        }
        catch (Exception ex)
        {
            //... Logging etc

            validationFailures = validationFailures ?? new List<Exception>(1);
            validationFailures.Add(ex);
            continue;
        }

        // See next section - returning a success result.
    }
}

So for each ISecurityTokenValidator in the list, we check whether it can read the token, and if so attempt to validate and deserialise the principal. If that is successful, we continue on to the next section, if not, the call to ValidateToken will throw.

Thankfully, the built-in JwtSecurityTokenHandler handles all the complicated details of implementing the JWT specification correctly, so as long as the ConfigurationManager is correctly set up, you should be able to validate most types of token.
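If you did want to swap in your own validation, a minimal sketch might look like the following (MyTokenValidator is a hypothetical class implementing ISecurityTokenValidator):

var options = new JwtBearerOptions();
// remove the default JwtSecurityTokenHandler and use a custom validator instead
options.SecurityTokenValidators.Clear();
options.SecurityTokenValidators.Add(new MyTokenValidator());

app.UseJwtBearerAuthentication(options);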

I've glossed over the catch block somewhat, but we log the error, add it to the validationFailures error collection, potentially refresh the configuration from ConfigurationManager and try the next handler.

When validation is successful

If we successfully validate a token in the loop above, then we can create an authentication ticket from the principal provided.

Logger.TokenValidationSucceeded();

var ticket = new AuthenticationTicket(principal, new AuthenticationProperties(), Options.AuthenticationScheme);  
var tokenValidatedContext = new TokenValidatedContext(Context, Options)  
{
    Ticket = ticket,
    SecurityToken = validatedToken,
};

await Options.Events.TokenValidated(tokenValidatedContext);  
if (tokenValidatedContext.CheckEventResult(out result))  
{
    return result;
}
ticket = tokenValidatedContext.Ticket;

if (Options.SaveToken)  
{
    ticket.Properties.StoreTokens(new[]
    {
        new AuthenticationToken { Name = "access_token", Value = token }
    });
}

return AuthenticateResult.Success(ticket);  

Rather than returning a success result straight away, the handler first calls the TokenValidated event handler. This allows us to fully customise the extracted ClaimsPrincipal, even replacing it completely, or rejecting it at this stage by creating a new AuthenticateResult.

Finally the handler optionally stores the extracted token in the AuthenticationProperties of the AuthenticationTicket for use elsewhere in the framework, and returns the authenticated ticket using AuthenticateResult.Success.
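As a sketch of that extensibility point, the TokenValidated event could be used to augment the principal before the ticket is returned (the claim added here is purely illustrative):

Events = new JwtBearerEvents
{
    OnTokenValidated = context =>
    {
        // add an extra, application-specific claim to the principal extracted from the token
        var identity = context.Ticket.Principal.Identity as ClaimsIdentity;
        identity?.AddClaim(new Claim("token_source", "jwt-bearer"));
        return Task.FromResult(0);
    }
}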

When validation fails

If the security token could not be validated by any of the ISecurityTokenValidators, the handler gives one more chance to customise the result.

if (validationFailures != null)  
{
    var authenticationFailedContext = new AuthenticationFailedContext(Context, Options)
    {
        Exception = (validationFailures.Count == 1) ? validationFailures[0] : new AggregateException(validationFailures)
    };

    await Options.Events.AuthenticationFailed(authenticationFailedContext);
    if (authenticationFailedContext.CheckEventResult(out result))
    {
        return result;
    }

    return AuthenticateResult.Fail(authenticationFailedContext.Exception);
}

return AuthenticateResult.Fail("No SecurityTokenValidator available for token: " + token ?? "[null]");  

The AuthenticationFailed event handler is invoked, and again can set the AuthenticateResult directly. If the handler does not directly handle the event, or if there were no configured ISecurityTokenValidators that could handle the token, then authentication has failed.

Also worth noting is that any unexpected exceptions thrown from event handlers etc will result in a similar call to Options.Events.AuthenticationFailed before the exception bubbles up the stack.
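A hedged sketch of hooking that event purely for diagnostics (without changing the result) might look like this:

Events = new JwtBearerEvents
{
    OnAuthenticationFailed = context =>
    {
        // observe the validation failure; the context also lets you set the
        // AuthenticateResult yourself if you want to handle the failure differently
        Console.WriteLine("JWT authentication failed: " + context.Exception.Message);
        return Task.FromResult(0);
    }
}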

The JwtBearerHandler HandleUnauthorizedAsync method

The other significant method in the JwtBearerHandler is HandleUnauthorizedAsync, which is called when a request requires authorisation but is unauthenticated. In the CookieAuthenticationMiddleware, this method redirects to a logon page, while in the JwtBearerHandler, a 401 will be returned, with the WWW-Authenticate header indicating the nature of the error, as per the specification.

Prior to returning a 401, the Options.Event handler gets one more attempt to handle the request with a call to Options.Events.Challenge. As before, this provides a great extensibility point should you need it, allowing you to customise the behaviour to your needs.

SignIn and SignOut

The last two methods in the JwtBearerHandler, HandleSignInAsync and HandleSignOutAsync simply throw a NotSupportedException when called. This makes sense when you consider that the tokens have to come from a different source.

To effectively 'sign in', a client must request a token from the (remote) issuer and provide it when making requests to your application. Signing out from the handler's point of view would just require you to discard the token, and not send it with future requests.

Summary

In this post we looked in detail at the JwtBearerHandler as a means to further understanding how authentication works in the ASP.NET Core framework. It is rare you would need to dive into this much detail when simply using the middleware, but hopefully it will help you get to grips with what is going on under the hood when you add it to your application.


Pedro Félix: Focus on the representation semantics, leave the transfer semantics to HTTP

A couple of days ago I was reading the latest OAuth 2.0 Authorization Server Metadata document version and my eye got caught on one sentence. On section 3.2, the document states

A successful response MUST use the 200 OK HTTP status code and return a JSON object using the “application/json” content type (…)

My first reaction was thinking that this specification was being redundant: of course a 200 OK HTTP status should be returned on a successful response. However, that “MUST” in the text made me think: is a 200 really the only acceptable response status code for a successful response? In my opinion, the answer is no.

For instance, if caching and ETags are being used, the client can send a conditional GET request (see Hypertext Transfer Protocol (HTTP/1.1): Conditional Requests) using the If-None-Match header, for which a 304 (Not Modified) status code is perfectly acceptable. Another example is if the metadata location changes and the server responds with a 301 (Moved Permanently) or a 302 (Found) status code. Does that mean the request was unsuccessful? In my opinion, no. It just means that the request should be followed by a subsequent request to another location.
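To make the conditional GET case concrete, a client might do something along these lines (the metadata URL and ETag value are just examples):

var request = new HttpRequestMessage(HttpMethod.Get,
    "https://issuer.example.com/.well-known/oauth-authorization-server");
request.Headers.IfNoneMatch.Add(new EntityTagHeaderValue("\"abc123\""));

var response = await new HttpClient().SendAsync(request);
if (response.StatusCode == HttpStatusCode.NotModified)
{
    // the cached metadata document is still valid - the request was still successful
}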

So, why does this little observation deserve a blog post?
Well, mainly because it reflects two common tendencies when designing HTTP APIs (or HTTP interfaces):

  • First, the tendency to redefine transfer semantics that are already defined by HTTP.
  • Secondly, a very simplistic view of HTTP, ignoring parts such as caching and optimistic concurrency.

The HTTP specification already defines a quite rich set of mechanisms for representation transfer, and HTTP related specifications should take advantage of that. What HTTP does not define is the semantics of the representation itself. That should be the focus of specifications such as the OAuth 2.0 Authorization Server Metadata.

When defining HTTP APIs, focus on the representation semantics. The transfer semantics are already defined by the HTTP protocol.

 



Dominick Baier: Why does my Authorize Attribute not work?

Sad title, isn’t it? The alternative would have been “The complicated relationship between claim types, ClaimsPrincipal, the JWT security token handler and the Authorize attribute role checks” – but that wasn’t very catchy.

But the reality is that many people are struggling with getting role-based authorization (e.g. [Authorize(Roles = "foo")]) to work – especially with external authentication like IdentityServer or other identity providers.

To fully understand the internals I have to start at the beginning…

IPrincipal
When .NET 1.0 shipped, it had a very rudimentary authorization API based on roles. Microsoft created the IPrincipal interface which specified a bool IsInRole(string roleName). They also created a couple of implementations for doing role-based checks against Windows groups (WindowsPrincipal) and custom data stores (GenericPrincipal).

The idea behind putting that authorization primitive into a formal interface was to create higher level functionality for doing role-based authorization. Examples of that are the PrincipalPermissionAttribute, the good old web.config Authorization section…and the [Authorize] attribute.

Moving to Claims
In .NET 4.5 the .NET team made a radical change and injected a new base class into all existing principal implementations – ClaimsPrincipal. While claims were much more powerful than just roles, they needed to maintain backwards compatibility. In other words, what was supposed to happen if someone moved a pre-4.5 application to 4.5 and called IsInRole? Which claim will represent roles?

To make the behaviour configurable they introduced the RoleClaimType (and also NameClaimType) property on ClaimsIdentity. So practically speaking, when you call IsInRole, ClaimsPrincipal checks its identities for a claim of whatever type you set on RoleClaimType with the given value. As a default value they decided on re-using a WS*/SOAP-era proprietary type they introduced with WIF (as part of the ClaimTypes class): http://schemas.microsoft.com/ws/2008/06/identity/claims/role.

So to summarize, if you call IsInRole, by default the assumption is that your claims representing roles have the type mentioned above – otherwise the role check will not succeed.

When you are staying within the Microsoft world and their guidance, you will probably always use the ClaimTypes class which has a Role member that maps to the above claim type. This will make role checks automagically work.
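A minimal sketch of that default behaviour (the claim values are illustrative):

var id = new ClaimsIdentity(new[]
{
    new Claim(ClaimTypes.Role, "admin"), // http://schemas.microsoft.com/ws/2008/06/identity/claims/role
    new Claim("role", "operator")        // a "modern" role claim type
}, "demo");
var principal = new ClaimsPrincipal(id);

Console.WriteLine(principal.IsInRole("admin"));    // True  - matches the default RoleClaimType
Console.WriteLine(principal.IsInRole("operator")); // False - wrong claim type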

Fast forward to modern Applications and OpenID Connect
When you are working with external identity providers, the chance is quite low that they will use the Microsoft legacy claim types. They will rather use the more modern standard OpenID Connect claim types.

In that case you need to be aware of the default behaviour of ClaimsPrincipal – and either set the NameClaimType and RoleClaimType to the right values manually – or transform the external claims types to Microsoft’s claim types.

The latter approach is what Microsoft implemented (of course) in their JWT validation library. The JWT handler tries to map all kinds of external claim types to the corresponding values on the ClaimTypes class – e.g. role to http://schemas.microsoft.com/ws/2008/06/identity/claims/role.

I personally don’t like that, because I think that claim types are an explicit contract in your application, and changing them should be part of application logic and claims transformation – and not a “smart” feature of token validation. That’s why you will always see the following line in my code:

JwtSecurityTokenHandler.InboundClaimTypeMap.Clear();

..which turns the mapping off. Newer versions of the handler call it DefaultInboundClaimTypeMap.

Setting the claim types manually
The constructor of ClaimsIdentity allows setting the claim types explicitly:

var id = new ClaimsIdentity(claims, "authenticationType", "name", "role");
var p = new ClaimsPrincipal(id);

Also the token validation parameters object used by the JWT library has that feature. It bubbles up to e.g. the OpenID Connect authentication middleware like this:

var oidcOptions = new OpenIdConnectOptions
{
    AuthenticationScheme = "oidc",
    SignInScheme = "cookies",
 
    Authority = Clients.Constants.BaseAddress,
    ClientId = "mvc.implicit",
    ResponseType = "id_token",
    SaveTokens = true,
 
    TokenValidationParameters = new TokenValidationParameters
    {
        NameClaimType = "name",
        RoleClaimType = "role",
    }
};

Other JWT related libraries have the same capabilities – just have a look around.

Summary
Role checks are legacy – they only exist in the (Microsoft) claims world because of backwards compatibility with IPrincipal. There’s no need for them anymore – and you shouldn’t do role checks. If you want to check for the existence of specific claims – simply query the claims collection for what you are looking for.

If you need to bring old code that uses role checks forward, either let the JWT handler do some magic for you, or take control over the claim types yourself. You probably know by now what I would do ;)

 

…oh – and just in case you were looking for some practical advice here. The next time your [Authorize] attribute does not behave as expected – bring up the debugger, inspect your ClaimsPrincipal (e.g. Controller.User) and compare the RoleClaimType property with the claim type that holds your roles. If they are different – there’s your answer.
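In code, the comparison boils down to something like this (a sketch, assuming you are inside a controller):

var identity = (ClaimsIdentity)User.Identity;     // e.g. Controller.User
var roleClaimType = identity.RoleClaimType;       // what IsInRole will look for
var roleClaims = identity.Claims
    .Where(c => c.Type == "role" || c.Type == ClaimTypes.Role)
    .ToList();                                    // what the principal actually contains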


 

 


Filed under: .NET Security, OAuth, OpenID Connect, WebAPI


Damien Bowden: ASP.NET Core logging with NLog and Elasticsearch

This article shows how to log to Elasticsearch using NLog in an ASP.NET Core application. NLog is a free, open-source logging framework for .NET.

Code: https://github.com/damienbod/AspNetCoreNlog

NLog posts in this series:

  1. ASP.NET Core logging with NLog and Microsoft SQL Server
  2. ASP.NET Core logging with NLog and Elasticsearch

NLog.Extensions.Logging is required to use NLog in an ASP.NET Core application. This is added to the dependencies of the project. NLog.Targets.ElasticSearch is also added to the dependencies. At present this is NOT the NuGet package from ReactiveMarkets, but the source code from ReactiveMarkets updated to .NET Core. Thanks to ReactiveMarkets for this library; hopefully the NuGet package will be updated so that it can be used directly.

The NLog configuration file also needs to be added to the publishOptions in the project.json file.

"dependencies": {
	"Microsoft.NETCore.App": {
		"version": "1.0.0",
		"type": "platform"
	},
	"Microsoft.AspNetCore.Mvc": "1.0.0",
	"Microsoft.AspNetCore.Server.IISIntegration": "1.0.0",
	"Microsoft.AspNetCore.Diagnostics": "1.0.0",
	"Microsoft.AspNetCore.Server.Kestrel": "1.0.0",
	"Microsoft.Extensions.Configuration.EnvironmentVariables": "1.0.0",
	"Microsoft.Extensions.Configuration.FileExtensions": "1.0.0",
	"Microsoft.Extensions.Configuration.Json": "1.0.0",
	"Microsoft.Extensions.Logging": "1.0.0",
	"Microsoft.Extensions.Logging.Console": "1.0.0",
	"Microsoft.Extensions.Logging.Debug": "1.0.0",
	"Microsoft.Extensions.Options.ConfigurationExtensions": "1.0.0",
	"NLog.Extensions.Logging": "1.0.0-rtm-alpha4",
	"NLog.Targets.ElasticSearch": "1.0.0-*"
},

"publishOptions": {
    "include": [
        "wwwroot",
        "Views",
        "Areas/**/Views",
        "appsettings.json",
        "web.config",
        "nlog.config"
    ]
},

The NLog configuration is added to the Startup.cs class in the Configure method.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	loggerFactory.AddNLog();

	var configDir = "C:\\git\\damienbod\\AspNetCoreNlog\\Logs";

	if (configDir != string.Empty)
	{
		var logEventInfo = NLog.LogEventInfo.CreateNullEvent();


		foreach (FileTarget target in LogManager.Configuration.AllTargets.Where(t => t is FileTarget))
		{
			var filename = target.FileName.Render(logEventInfo).Replace("'", "");
			target.FileName = Path.Combine(configDir, filename);
		}

		LogManager.ReconfigExistingLoggers();
	}

	//env.ConfigureNLog("nlog.config");

	//loggerFactory.AddConsole(Configuration.GetSection("Logging"));
	//loggerFactory.AddDebug();

	app.UseMvc();
}

The nlog.config target and rules can be configured to log to Elasticsearch. NLog.Targets.ElasticSearch is an extension and needs to be added using the extensions tag.

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      internalLogLevel="Warn"
      internalLogFile="C:\git\damienbod\AspNetCoreNlog\Logs\internal-nlog.txt">
    
    <extensions>
        <add assembly="NLog.Targets.ElasticSearch"/>
    </extensions>
            
  <targets>

    <target name="ElasticSearch" xsi:type="BufferingWrapper" flushTimeout="5000">
      <target xsi:type="ElasticSearch"/>
    </target>
   
  </targets>

  <rules>
    <logger name="*" minlevel="Trace" writeTo="ElasticSearch" />
      
  </rules>
</nlog>

The Elasticsearch URL used by the NLog.Targets.ElasticSearch package can be configured using the ElasticsearchUrl property. This can be defined in the appsettings configuration file.

{
    "Logging": {
        "IncludeScopes": false,
        "LogLevel": {
            "Default": "Debug",
            "System": "Information",
            "Microsoft": "Information"
        }
    },
    "ElasticsearchUrl": "http://localhost:9200"
}

NLog.Targets.ElasticSearch (ReactiveMarkets)

The existing NLog.Targets.ElasticSearch project from ReactiveMarkets is updated to a NETStandard Library. This class library requires Elasticsearch.Net, NLog and Newtonsoft.Json. The dependencies are added to the project.json file. The library supports both netstandard1.6 and also net451.

{
  "version": "1.0.0-*",

    "dependencies": {
        "NETStandard.Library": "1.6.0",
        "NLog": "4.4.0-betaV15",
        "Newtonsoft.Json": "9.0.1",
        "Elasticsearch.Net": "2.4.3",
        "Microsoft.Extensions.Configuration": "1.0.0",
        "Microsoft.Extensions.Configuration.FileExtensions": "1.0.0",
        "Microsoft.Extensions.Configuration.Json": "1.0.0"
    },

    "frameworks": {
        "netstandard1.6": {
            "imports": "dnxcore50"
        },
        "net451": {
            "frameworkAssemblies": {
                "System.Runtime.Serialization": "",
                "System.Runtime": ""
            }
        }
    }
}

The StringExtensions class is extended to make it possible to define the Elasticsearch URL in a configuration file.
( original code from ReactiveMarkets )

using System;
using System.IO;
#if NET45
#else
using Microsoft.Extensions.Configuration;
#endif

namespace NLog.Targets.ElasticSearch
{
    internal static class StringExtensions
    {
        public static object ToSystemType(this string field, Type type)
        {
            switch (type.FullName)
            {
                case "System.Boolean":
                    return Convert.ToBoolean(field);
                case "System.Double":
                    return Convert.ToDouble(field);
                case "System.DateTime":
                    return Convert.ToDateTime(field);
                case "System.Int32":
                    return Convert.ToInt32(field);
                case "System.Int64":
                    return Convert.ToInt64(field);
                default:
                    return field;
            }
        }

        public static string GetConnectionString(this string name)
        {
            var value = GetEnvironmentVariable(name);
            if (!string.IsNullOrEmpty(value))
                return value;
#if NET45
            var connectionString = ConfigurationManager.ConnectionStrings[name];
            return connectionString?.ConnectionString;
#else
            IConfigurationRoot configuration;
            var builder = new Microsoft.Extensions.Configuration.ConfigurationBuilder()
                .SetBasePath(Directory.GetCurrentDirectory())
                .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true);

            configuration = builder.Build();
            return configuration["ElasticsearchUrl"];
#endif

        }

        private static string GetEnvironmentVariable(this string name)
        {
            return string.IsNullOrEmpty(name) ? null : Environment.GetEnvironmentVariable(name);
        }
    }
}

When the application is started, the logs are written to Elasticsearch. These logs can be viewed in Elasticsearch using a search such as:

http://localhost:9200/logstash-‘date’/_search

{
	"took": 2,
	"timed_out": false,
	"_shards": {
		"total": 5,
		"successful": 5,
		"failed": 0
	},
	"hits": {
		"total": 18,
		"max_score": 1.0,
		"hits": [{
			"_index": "logstash-2016.08.19",
			"_type": "logevent",
			"_id": "AVaiJHPycDWw4BKmTWqP",
			"_score": 1.0,
			"_source": {
				"@timestamp": "2016-08-19T09:31:44.5790894Z",
				"level": "Debug",
				"message": "2016-08-19 11:31:44.5790|DEBUG|Microsoft.AspNetCore.Hosting.Internal.WebHost|Hosting starting"
			}
		},
		{
			"_index": "logstash-2016.08.19",
			"_type": "logevent",
			"_id": "AVaiJHPycDWw4BKmTWqU",
			"_score": 1.0,
			"_source": {
				"@timestamp": "2016-08-19T09:31:45.4788003Z",
				"level": "Info",
				"message": "2016-08-19 11:31:45.4788|INFO|Microsoft.AspNetCore.Hosting.Internal.WebHost|Request starting HTTP/1.1 DEBUG http://localhost:55423/  0"
			}
		},
		{
			"_index": "logstash-2016.08.19",
			"_type": "logevent",
			"_id": "AVaiJHPycDWw4BKmTWqW",
			"_score": 1.0,
			"_source": {
				"@timestamp": "2016-08-19T09:31:45.6248512Z",
				"level": "Debug",
				"message": "2016-08-19 11:31:45.6248|DEBUG|Microsoft.AspNetCore.Server.Kestrel|Connection id \"0HKU82EHFC0S9\" completed keep alive response."
			}
		},

Links

https://github.com/NLog/NLog.Extensions.Logging

https://github.com/ReactiveMarkets/NLog.Targets.ElasticSearch

https://github.com/NLog

https://docs.asp.net/en/latest/fundamentals/logging.html

https://msdn.microsoft.com/en-us/magazine/mt694089.aspx

https://github.com/nlog/NLog/wiki/Database-target

https://www.elastic.co/products/elasticsearch

https://github.com/elastic/logstash

https://github.com/elastic/elasticsearch-net

https://www.nuget.org/packages/Elasticsearch.Net/

https://github.com/nlog/NLog/wiki/File-target#size-based-file-archival

http://www.danesparza.net/2014/06/things-your-dad-never-told-you-about-nlog/



Anuraj Parameswaran: Using NancyFx in ASP.NET Core

This post is about using NancyFx in ASP.NET Core. NancyFx is a lightweight, low-ceremony framework for building HTTP based services on .NET and Mono. The goal of the framework is to stay out of the way as much as possible and provide a super-duper-happy-path to all interactions. Nancy is designed to handle DELETE, GET, HEAD, OPTIONS, POST, PUT and PATCH requests and provides a simple, elegant Domain Specific Language (DSL) for returning a response with just a couple of keystrokes. Integration with NancyFx was available in the early days of ASP.NET Core (the "k" days), but it lost support in the DNX days, and now it is back.
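As a rough sketch (assuming the Nancy.Owin and Microsoft.AspNetCore.Owin packages), wiring Nancy into the ASP.NET Core pipeline looks something like this:

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // hand all requests over to Nancy via the OWIN bridge
        app.UseOwin(x => x.UseNancy());
    }
}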


Andrew Lock: How to set the hosting environment in ASP.NET Core


When running ASP.NET Core apps, the WebHostBuilder will automatically attempt to determine which environment it is running in. By convention, this will be one of Development, Staging or Production but you can set it to any string value you like.

The IHostingEnvironment allows you to programmatically retrieve the current environment so you can have environment-specific behaviour. For example, you could enable bundling and minification of assets when in the Production environment, while serving files unchanged in the Development environment.
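As a quick sketch of that kind of environment-specific behaviour (using exception handling as the example, with a typical "/Home/Error" path):

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        // show full error details while developing
        app.UseDeveloperExceptionPage();
    }
    else
    {
        // use a friendly error page in Staging and Production
        app.UseExceptionHandler("/Home/Error");
    }
}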

In this post I'll show how to change the current hosting environment used by ASP.NET Core using environment variables on Windows and OS X, using Visual Studio and Visual Studio Code, or by using command line arguments.

Changing the hosting environment

ASP.NET Core uses the ASPNETCORE_ENVIRONMENT environment variable to determine the current environment. By default, if you run your application without setting this value, it will automatically default to the Production environment.

When you run your application using dotnet run, the console output lists the current hosting environment in the output:

> dotnet run
Project TestApp (.NETCoreApp,Version=v1.0) was previously compiled. Skipping compilation.

Hosting environment: Production  
Content root path: C:\Projects\TestApp  
Now listening on: http://localhost:5000  
Application started. Press Ctrl+C to shut down.  

There are a number of ways to set this environment variable, the method that is best depends on how you are building and running your applications.

Setting the environment variable in Windows

The most obvious way to change the environment is to update the environment variable on your machine. This is useful if you know, for example, that applications run on that machine will always be in a given environment, whether that is Development, Staging or Production.

On Windows, there are a number of ways to change the environment variables, depending on what you are most comfortable with.

At the command line

You can set an environment variable from a command prompt using the setx.exe command, which has been included in Windows since Vista. You can use it to set a user variable:

>setx ASPNETCORE_ENVIRONMENT "Development"

SUCCESS: Specified value was saved.

Note that the environment variable is not set in the current open window. You will need to open a new command prompt to see the updated environment. It is also possible to set system variables (rather than just user variables) if you open an administrative command prompt and add the /M switch:

>setx ASPNETCORE_ENVIRONMENT "Development" /M

SUCCESS: Specified value was saved.

Using PowerShell

Alternatively, you can use PowerShell to set the variable. In PowerShell, as well as the normal user and system variables, you can also create a temporary variable using the $Env: command:

$Env:ASPNETCORE_ENVIRONMENT = "Development"

The variable created lasts just for the duration of your PowerShell session - once you close the window the environment reverts back to its default value.

Alternatively, you could set the user or system environment variables directly. This method does not change the environment variables in the current session, so you will need to open a new PowerShell window to see your changes. As before, changing the system (Machine) variables will require administrative access.

[Environment]::SetEnvironmentVariable("ASPNETCORE_ENVIRONMENT", "Development", "User")
[Environment]::SetEnvironmentVariable("ASPNETCORE_ENVIRONMENT", "Development", "Machine")

Using the windows control panel

If you're not a fan of the command prompt, you can easily update your variables using your mouse! Click the Windows Start menu button (or press the Windows key), search for environment variables, and choose Edit environment variables for your account:


Selecting this option will open the System Properties dialog:


Click Environment Variables to view the list of current environment variables on your system.


Assuming you do not already have a variable called ASPNETCORE_ENVIRONMENT, click the New... button and add a new account environment variable:


Click OK to save all your changes. You will need to re-open any command windows to ensure the new environment variables are loaded.

Setting the environment variables on OS X

You can set an environment variable on OS X by editing or creating the .bash_profile file in your favourite editor (I'm using nano):

$ nano ~/.bash_profile

You can then export the ASPNETCORE_ENVIRONMENT variable. The variable will not be set in the current session, but will be updated when you open a new terminal window:

export ASPNETCORE_ENVIRONMENT=development  

Important: the command must be exactly as written above - there must be no spaces on either side of the =. Also note that my bash knowledge is pretty poor, so if this approach doesn't work for you, I encourage you to go googling for one that does :)

Configuring the hosting environment using your IDE

Instead of updating the user or system environment variables, you can also configure the environment from your IDE, so that when you run or debug the application from there, it will use the correct environment.

Visual studio launchSettings.json

When you create an ASP.NET Core application using the Visual Studio templates, it automatically creates a launchSettings.json file. This file serves as the provider for the Debug targets when debugging with F5 in Visual Studio:


When running with one of these options, Visual Studio will set the environment variables specified. In the file below, you can see the ASPNETCORE_ENVIRONMENT variable is set to Development.

{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:53980/",
      "sslPort": 0
    }
  },
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "TestApp": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}

You can also edit this file using the project Properties window. Just double click the Properties node in your solution, and select the Debug tab:


Visual Studio Code launch.json

If you are using Visual Studio Code, there is a similar file, launch.json which is added when you first debug your application. This file contains a number of configurations one of which should be called ".NET Core Launch (web)". You can set additional environment variables when launching with this command by adding keys to the env property:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": ".NET Core Launch (web)",
            "type": "coreclr",
            "request": "launch",
            "preLaunchTask": "build",
            "program": "${workspaceRoot}/bin/Debug/netcoreapp1.0/TestApp.dll",
            "args": [],
            "cwd": "${workspaceRoot}",
            "stopAtEntry": false,
            "launchBrowser": {
                "enabled": true,
                "args": "${auto-detect-url}",
                "windows": {
                    "command": "cmd.exe",
                    "args": "/C start ${auto-detect-url}"
                },
                "osx": {
                    "command": "open"
                },
                "linux": {
                    "command": "xdg-open"
                }
            },
            "env": {
                "ASPNETCORE_ENVIRONMENT": "Development"
            },
            "sourceFileMap": {
                "/Views": "${workspaceRoot}/Views"
            }
        }
    ]
}

Setting hosting environment using command args

Depending on how you have configured your WebHostBuilder, you may also be able to specify the environment by providing a command line argument. To do so, you need to use a ConfigurationBuilder which uses the AddCommandLine() extension method from the Microsoft.Extensions.Configuration.CommandLine package. You can then pass your configuration to the WebHostBuilder using UseConfiguration(config):

var config = new ConfigurationBuilder()  
    .AddCommandLine(args)
    .Build();

var host = new WebHostBuilder()  
    .UseConfiguration(config)
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseKestrel()
    .UseIISIntegration()
    .UseStartup<Startup>()
    .Build();

This allows you to specify the hosting environment at run time using the --environment argument:

> dotnet run --environment "Staging"

Project TestApp (.NETCoreApp,Version=v1.0) was previously compiled. Skipping compilation.

Hosting environment: Staging  
Content root path: C:\Projects\Repos\Stormfront.Support\src\Stormfront.Support  
Now listening on: http://localhost:5000  
Application started. Press Ctrl+C to shut down.  

Summary

In this post I showed a number of ways you can specify which environment you are currently running in. Which method is best will depend on your setup and requirements. However you choose, if you change the environment variable you will need to restart the Kestrel server, as the environment is determined as part of the server start up.

Altering the hosting environment allows you to configure your application differently at run time, enabling debugging tools in a development setting or optimisations in a production environment. For details on using the IHostingEnvironment service, check out the documentation here.

One final point - environment variables are case insensitive, so you can use "Development", "development" or "DEVELOPMENT" to your heart's content.


Damien Bowden: ASP.NET Core logging with NLog and Microsoft SQL Server

This article shows how to setup logging in an ASP.NET Core application which logs to a Microsoft SQL Server using NLog.

Code: https://github.com/damienbod/AspNetCoreNlog

NLog posts in this series:

  1. ASP.NET Core logging with NLog and Microsoft SQL Server
  2. ASP.NET Core logging with NLog and Elasticsearch

The NLog.Extensions.Logging package is required to add NLog to an ASP.NET Core application. This package, as well as System.Data.SqlClient, is added to the dependencies in the project.json file.

 "dependencies": {
        "Microsoft.NETCore.App": {
            "version": "1.0.0",
            "type": "platform"
        },
        "Microsoft.AspNetCore.Mvc": "1.0.0",
        "Microsoft.AspNetCore.Server.IISIntegration": "1.0.0",
        "Microsoft.AspNetCore.Diagnostics": "1.0.0",
        "Microsoft.AspNetCore.Server.Kestrel": "1.0.0",
        "Microsoft.Extensions.Configuration.EnvironmentVariables": "1.0.0",
        "Microsoft.Extensions.Configuration.FileExtensions": "1.0.0",
        "Microsoft.Extensions.Configuration.Json": "1.0.0",
        "Microsoft.Extensions.Logging": "1.0.0",
        "Microsoft.Extensions.Logging.Console": "1.0.0",
        "Microsoft.Extensions.Logging.Debug": "1.0.0",
        "Microsoft.Extensions.Options.ConfigurationExtensions": "1.0.0",
        "NLog.Extensions.Logging": "1.0.0-rtm-alpha4",
        "System.Data.SqlClient": "4.1.0"
  },

Now a nlog.config file is created and added to the project. This file contains the configuration for NLog. In the file, the targets for the logs are defined as well as the rules. An internal log file is also defined, so that if something is wrong with the logging configuration, you can find out why.

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      internalLogLevel="Warn"
      internalLogFile="C:\git\damienbod\AspNetCoreNlog\Logs\internal-nlog.txt">
    
  <targets>
    <target xsi:type="File" name="allfile" fileName="nlog-all.log"
                layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|${message} ${exception}" />

    <target xsi:type="File" name="ownFile-web" fileName="nlog-own.log"
             layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|  ${message} ${exception}" />

    <target xsi:type="Null" name="blackhole" />

    <target name="database" xsi:type="Database" >

    <connectionString>
        Data Source=N275\MSSQLSERVER2014;Initial Catalog=Nlogs;Integrated Security=True;
    </connectionString>
<!--
  Remarks:
    The appsetting layouts require the NLog.Extended assembly.
    The aspnet-* layouts require the NLog.Web assembly.
    The Application value is determined by an AppName appSetting in Web.config.
    The "NLogDb" connection string determines the database that NLog write to.
    The create dbo.Log script in the comment below must be manually executed.

  Script for creating the dbo.Log table.

  SET ANSI_NULLS ON
  SET QUOTED_IDENTIFIER ON
  CREATE TABLE [dbo].[Log] (
      [Id] [int] IDENTITY(1,1) NOT NULL,
      [Application] [nvarchar](50) NOT NULL,
      [Logged] [datetime] NOT NULL,
      [Level] [nvarchar](50) NOT NULL,
      [Message] [nvarchar](max) NOT NULL,
      [Logger] [nvarchar](250) NULL,
      [Callsite] [nvarchar](max) NULL,
      [Exception] [nvarchar](max) NULL,
    CONSTRAINT [PK_dbo.Log] PRIMARY KEY CLUSTERED ([Id] ASC)
      WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
  ) ON [PRIMARY]
-->

          <commandText>
              insert into dbo.Log (
              Application, Logged, Level, Message,
              Logger, CallSite, Exception
              ) values (
              @Application, @Logged, @Level, @Message,
              @Logger, @Callsite, @Exception
              );
          </commandText>

          <parameter name="@application" layout="AspNetCoreNlog" />
          <parameter name="@logged" layout="${date}" />
          <parameter name="@level" layout="${level}" />
          <parameter name="@message" layout="${message}" />

          <parameter name="@logger" layout="${logger}" />
          <parameter name="@callSite" layout="${callsite:filename=true}" />
          <parameter name="@exception" layout="${exception:tostring}" />
      </target>
      
  </targets>

  <rules>
    <!--All logs, including from Microsoft-->
    <logger name="*" minlevel="Trace" writeTo="allfile" />

    <logger name="*" minlevel="Trace" writeTo="database" />
      
    <!--Skip Microsoft logs and so log only own logs-->
    <logger name="Microsoft.*" minlevel="Trace" writeTo="blackhole" final="true" />
    <logger name="*" minlevel="Trace" writeTo="ownFile-web" />
  </rules>
</nlog>

The nlog.config also needs to be added to the publishOptions in the project.json file.

 "publishOptions": {
    "include": [
        "wwwroot",
        "Views",
        "Areas/**/Views",
        "appsettings.json",
        "web.config",
        "nlog.config"
    ]
  },

Now the database can be set up. You can create a new database, or use an existing one and add the dbo.Log table to it using the script below.

  SET ANSI_NULLS ON
  SET QUOTED_IDENTIFIER ON
  CREATE TABLE [dbo].[Log] (
      [Id] [int] IDENTITY(1,1) NOT NULL,
      [Application] [nvarchar](50) NOT NULL,
      [Logged] [datetime] NOT NULL,
      [Level] [nvarchar](50) NOT NULL,
      [Message] [nvarchar](max) NOT NULL,
      [Logger] [nvarchar](250) NULL,
      [Callsite] [nvarchar](max) NULL,
      [Exception] [nvarchar](max) NULL,
    CONSTRAINT [PK_dbo.Log] PRIMARY KEY CLUSTERED ([Id] ASC)
      WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
  ) ON [PRIMARY]

The table in the database must match the configuration defined in the nlog.config file. The database target defines the connection string, the command used to add a log and also the parameters required.

You can change this as required. As yet, most of the NLog parameters do not work with ASP.NET Core, but this will certainly change as it is in early development. The NLog.Web NuGet package, when completed, will contain the ASP.NET Core parameters.

Now NLog can be added to the application in the Startup class in the Configure method. The AddNLog extension method is used and the logging directory can be defined.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddNLog();

    var configDir = "C:\\git\\damienbod\\AspNetCoreNlog\\Logs";

    if (configDir != string.Empty)
    {
        var logEventInfo = NLog.LogEventInfo.CreateNullEvent();


        foreach (FileTarget target in LogManager.Configuration.AllTargets.Where(t => t is FileTarget))
        {
            var filename = target.FileName.Render(logEventInfo).Replace("'", "");
            target.FileName = Path.Combine(configDir, filename);
        }

        LogManager.ReconfigExistingLoggers();
    }

    //env.ConfigureNLog("nlog.config");

    //loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    //loggerFactory.AddDebug();

    app.UseMvc();
}

Now the logging can be used, using the default logging framework from ASP.NET Core.

An example of an ActionFilter

using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Extensions.Logging;

namespace AspNetCoreNlog
{
    public class LogFilter : ActionFilterAttribute
    {
        private readonly ILogger _logger;

        public LogFilter(ILoggerFactory loggerFactory)
        {
            _logger = loggerFactory.CreateLogger("LogFilter");
        }

        public override void OnActionExecuting(ActionExecutingContext context)
        {
            _logger.LogInformation("OnActionExecuting");
            base.OnActionExecuting(context);
        }

        public override void OnActionExecuted(ActionExecutedContext context)
        {
            _logger.LogInformation("OnActionExecuted");
            base.OnActionExecuted(context);
        }

        public override void OnResultExecuting(ResultExecutingContext context)
        {
            _logger.LogInformation("OnResultExecuting");
            base.OnResultExecuting(context);
        }

        public override void OnResultExecuted(ResultExecutedContext context)
        {
            _logger.LogInformation("OnResultExecuted");
            base.OnResultExecuted(context);
        }
    }
}

The action filter is added in the Startup class in the ConfigureServices method.

public void ConfigureServices(IServiceCollection services)
{

    // Add framework services.
    services.AddMvc();

    services.AddScoped<LogFilter>();
}

And some logging can be added to an MVC controller.

using System;
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

namespace AspNetCoreNlog.Controllers
{

    [ServiceFilter(typeof(LogFilter))]
    [Route("api/[controller]")]
    public class ValuesController : Controller
    {
        private  ILogger<ValuesController> _logger;

        public ValuesController(ILogger<ValuesController> logger)
        {
            _logger = logger;
        }

        [HttpGet]
        public IEnumerable<string> Get()
        {
            _logger.LogCritical("nlog is working from a controller");
            throw new ArgumentException("way wrong");
            return new string[] { "value1", "value2" };
        }
    }
}

When the application is started, the logs are written to a local file in the Logs folder and also to the database.


Notes

NLog for ASP.NET Core is in early development, and the documentation is for .NET and not for .NET Core, so a lot of parameters, layouts, targets, etc. do not work. This project is open source, so you can extend it and contribute to it if you want.

Links

https://github.com/NLog/NLog.Extensions.Logging

https://github.com/NLog

https://docs.asp.net/en/latest/fundamentals/logging.html

https://msdn.microsoft.com/en-us/magazine/mt694089.aspx

https://github.com/nlog/NLog/wiki/Database-target



Dominick Baier: Trying IdentityServer

We have a demo instance of IdentityServer3 on https://demo.identityserver.io.

I already used this for various samples (e.g. the OpenID Connect native clients) – and it makes it easy to try IdentityServer with your clients without having to deploy and configure anything yourself.

The Auth0 guys just released a nice OpenID Connect playground website that allows you to interact with arbitrary spec compliant providers. If you want to try it yourself with IdentityServer – click on the configuration link and use these settings:


In essence you only need to provide the URL of the discovery document, the client ID and the secret. The rest gets configured automatically for you.

Pressing Start will bring you to our standard login page:


You can either use bob / bob (or alice / alice) to log in – or use your Google account.

Logging in will bring you to the consent screen – and then back to the playground:


Now you can exercise the code to token exchange as well as the validation. As a last step you can even jump directly to jwt.io for inspecting the identity token:


The source code for the IdentityServer demo web site can be found here.

We also have more client types preconfigured, e.g. OpenID Connect hybrid flow and implicit flow, as well as clients using PKCE. You can see the full list here.

You can request the typical OpenID Connect scopes – as well as a scope called api. The resulting access token can then be used to call https://demo.identityserver.io/api/identity which in turn will echo back the token claims as a JSON document.
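For example, calling the API with an access token obtained from the playground might look something like this (accessToken is whatever token you received):

var client = new HttpClient();
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", accessToken);

var json = await client.GetStringAsync("https://demo.identityserver.io/api/identity");
Console.WriteLine(json); // the token claims echoed back as JSON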


Have fun!

 


Filed under: ASP.NET, IdentityServer, OpenID Connect, OWIN, Uncategorized, WebAPI


Dominick Baier: Commercial Support Options for IdentityServer

Many customers have asked us for production support for IdentityServer. While this is something we would love to provide, Brock and I can’t do that on our own because we can’t guarantee the response times.

I am happy to announce that we have now partnered with our good friends at Rock Solid Knowledge to provide commercial support for IdentityServer!

RSK has excellent people with deep IdentityServer knowledge and Brock and I will help out as 2nd level support if needed.

Head over to https://www.identityserver.com/ and get in touch with them!


Filed under: ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI


Andrew Lock: Access services inside ConfigureServices using IConfigureOptions in ASP.NET Core


In a recent post I showed how you could populate an IOptions<T> object from the database for the purposes of caching the query result. It wasn't the most flexible solution or really recommended but it illustrated the point.

However one of the issues I had with the solution was the need to access configured services from within the IOptions<T> configuration lambda, inside ConfigureServices itself.

The solution I came up with was to use the injected IServiceCollection to build an IServiceProvider to get the configured service I needed. As I pointed out at the time, this service-locator pattern felt icky and wrong, but I couldn't see any other way of doing it.

Thankfully, and inspired by this post from Ben Collins, there is a much better solution to be had by utilising the IConfigureOptions<T> interface.

The previous version

In my post, I had this (abbreviated) code, which was trying to access an Entity Framework Core DbContext in the Configure method to setup the MultitenancyOptions class:

public class Startup  
{
    public Startup(IHostingEnvironment env) { /* ... build configuration */ }

    public IConfigurationRoot Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // add MVC, connection string etc

        services.Configure<MultitenancyOptions>(  
            options =>
            {
                var scopeFactory = services
                    .BuildServiceProvider()
                    .GetRequiredService<IServiceScopeFactory>();

                using (var scope = scopeFactory.CreateScope())
                {
                    var provider = scope.ServiceProvider;
                    using (var dbContext = provider.GetRequiredService<ApplicationDbContext>())
                    {
                        options.AppTenants = dbContext.AppTenants.ToList();
                    }
                }
            });

        // add other services
    }

    public void Configure(IApplicationBuilder app) { /* ... configure pipeline */ }
}

Yuk. As you can see, the call to Configure is a mess. In order to obtain a scoped lifetime DbContext it has to build the service collection to produce an IServiceProvider, to then obtain an IServiceScopeFactory. From there it can create the correct scoping, create another IServiceProvider, and finally find the DbContext we actually need. This lambda has way too much going on, and 90% of it is plumbing.

If you're wondering why you shouldn't just fetch a DbContext directly from the first service provider, check out this twitter discussion between Julie Lerman, David Fowler and Shawn Wildermuth.

The new improved answer

So, now we know what we're working with, how do we improve it? Luckily, the ASP.NET team anticipated this issue - instead of providing a lambda for configuring the MultitenancyOptions object, we implement the IConfigureOptions<TOptions> interface, where TOptions: MultitenancyOptions. This interface has a single method, Configure, which is passed a constructed MultitenancyOptions object for you to update:

public class ConfigureMultitenancyOptions : IConfigureOptions<MultitenancyOptions>  
{
    private readonly IServiceScopeFactory _serviceScopeFactory;
    public ConfigureMultitenancyOptions(IServiceScopeFactory serviceScopeFactory)
    {
        _serviceScopeFactory = serviceScopeFactory;
    }

    public void Configure(MultitenancyOptions options)
    {
        using (var scope = _serviceScopeFactory.CreateScope())
        {
            var provider = scope.ServiceProvider;
            using (var dbContext = provider.GetRequiredService<ApplicationDbContext>())
            {
                options.AppTenants = dbContext.AppTenants.ToList();
            }
        }
    }
}

We then just need to register our configuration class in the normal ConfigureServices method, which becomes:

public void ConfigureServices(IServiceCollection services)  
{
    // add MVC, connection string etc

    services.AddSingleton<IConfigureOptions<MultitenancyOptions>, ConfigureMultitenancyOptions>();

    // add other services
}

The advantage of this approach is that the configuration class is created through the usual DI container, so can have dependencies injected simply through the constructor. There is still a slight complexity introduced by the fact we want MultitenancyOptions to have a singleton lifecycle. To prevent leaking a lifetime scope, we must inject an IServiceScopeFactory and create an explicit scope before retrieving our DbContext. Again, check out Julie Lerman's twitter conversation and associated post for more details on this.

The most important point here is that we are no longer calling BuildServiceProvider() in our Configure method, just to get a service we need. So just try and forget that I ever mentioned doing that ;)

Under the hood

In hindsight, I really should have guessed that this approach was possible, as the lambda approach is really just a specialised version of the IConfigureOptions approach.

Taking a look at the Options source code really shows how these two methods tie together. The Configure extension method on IServiceCollection that takes a lambda looks like the following (with precondition checks etc removed)

public static IServiceCollection Configure<TOptions>(  
    this IServiceCollection services, Action<TOptions> configureOptions)
{
    services.AddSingleton<IConfigureOptions<TOptions>>(new ConfigureOptions<TOptions>(configureOptions));
    return services;
}

All this method is doing is creating an instance of the ConfigureOptions<TOptions> class, passing in the configuration lambda, and registering that as a singleton. That looks suspiciously like our tidied up approach, the difference being that we left the instantiation of our ConfigureMultitenancyOptions to the DI system, instead of new-ing it up directly.

As is to be expected, the ConfigureOptions<TOptions>, which implements IConfigureOptions<TOptions>, just calls the provided lambda in its Configure method:

public class ConfigureOptions<TOptions> : IConfigureOptions<TOptions> where TOptions : class  
{
    public ConfigureOptions(Action<TOptions> action)
    {
        Action = action;
    }

    public Action<TOptions> Action { get; }

    public virtual void Configure(TOptions options)
    {
        Action.Invoke(options);
    }
}

So again, the only substantive difference between using the lambda approach and the IConfigureOptions approach is that the latter allows you to inject services into your options class to be used during configuration.

One final useful point to be aware of: you can register multiple instances of IConfigureOptions<TOptions> for the same TOptions. They will all be applied, and in the order they were added to the service collection in ConfigureServices. That allows you to do simple configuration in ConfigureServices using the Configure lambda, while using a separate implementation of IConfigureOptions elsewhere, if you're so inclined.
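To make the ordering concrete, here is a small sketch (reusing the AppTenant and MultitenancyOptions types from this series; the tenant values are purely illustrative) that registers both a lambda and the class for the same TOptions. Both are applied when the options are first built, in registration order - so ConfigureMultitenancyOptions runs second and, because it assigns the whole AppTenants collection, replaces whatever the lambda set up:

public void ConfigureServices(IServiceCollection services)  
{
    // Applied first - simple inline configuration via the lambda overload
    services.Configure<MultitenancyOptions>(options =>
        options.AppTenants.Add(new AppTenant { Name = "Default", Hostname = "localhost:5000" }));

    // Applied second - our class-based configuration, loading tenants from the database
    services.AddSingleton<IConfigureOptions<MultitenancyOptions>, ConfigureMultitenancyOptions>();
}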


Andrew Lock: Exploring the cookie authentication middleware in ASP.NET Core

Exploring the cookie authentication middleware in ASP.NET Core

This is the second in a series of posts looking at authentication and authorisation in ASP.NET Core. In the previous post, I talked about authentication in general and how claims-based authentication works. In this post I'm going to go into greater detail about how an AuthenticationMiddleware is implemented in ASP.NET Core, using the CookieAuthenticationMiddleware as a case study. Note that it focuses on 'how the middleware is built' rather than 'how to use it in your application'.

Authentication in ASP.NET Core

Just to recap, authentication is the process of determining who a user is, while authorisation revolves around what they are allowed to do. In this post we are dealing solely with the authentication side of the pipeline.

Hopefully you have an understanding of claims-based authentication in ASP.NET Core at a high level. If not I recommend you check out my previous post. We ended that post by signing in a user with a call to AuthenticationManager.SignInAsync, in which I stated that this would call down to the cookie middleware in our application.

The Cookie Authentication Middleware

In this post we're going to take a look at some of that code in the CookieAuthenticationMiddleware, to see how it works under the hood and to get a better understanding of the authentication pipeline in ASP.NET Core. We're only looking at the authentication side of security at the moment, and just trying to show the basic mechanics of what's happening, rather than look in detail at how cookies are built and how they're encrypted etc. We're just looking at how the middleware and handlers interact with the ASP.NET Core framework.

So first of all, we need to add the CookieAuthenticationMiddleware to our pipeline, as per the documentation. As always, middleware order is important, so you should include it before you need to authenticate a user:

app.UseCookieAuthentication(new CookieAuthenticationOptions()  
{
    AuthenticationScheme = "MyCookieMiddlewareInstance",
    LoginPath = new PathString("/Account/Unauthorized/"),
    AccessDeniedPath = new PathString("/Account/Forbidden/"),
    AutomaticAuthenticate = true,
    AutomaticChallenge = true
});

As you can see, we set a number of properties on the CookieAuthenticationOptions when configuring our middleware, most of which we'll come back to later.

So what does the cookie middleware actually do? Well, looking through the code, surprisingly little actually - it sets up some default options and it derives from the base class AuthenticationMiddleware<T>. This class just requires that you return an AuthenticationHandler<T> from the overridden method CreateHandler(). It's in this handler where all the magic happens. We'll come back to the middleware itself later and focus on the handler for now.
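In the cookie middleware's case, that override does very little - roughly speaking, it just news up the handler:

protected override AuthenticationHandler<CookieAuthenticationOptions> CreateHandler()  
{
    return new CookieAuthenticationHandler();
}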

AuthenticateResult and AuthenticationTicket

Before we get in to the meaty stuff, there are a couple of supporting classes we will use in the authentication handler which we should understand: AuthenticateResult and AuthenticationTicket, outlined below:

public class AuthenticationTicket  
{
    public string AuthenticationScheme { get; }
    public ClaimsPrincipal Principal { get; }
    public AuthenticationProperties Properties { get; }
}

AuthenticationTicket is a simple class that is returned when authentication has been successful. It contains the authenticated ClaimsPrincipal, the AuthenticationScheme indicating which middleware was used to authenticate the request, and an AuthenticationProperties object containing optional additional state values for the authentication session.

public class AuthenticateResult  
{
    public bool Succeeded
    {
        get
        {
            return Ticket != null;
        }
    }
    public AuthenticationTicket Ticket { get; private set; }
    public Exception Failure { get; private set; }
    public bool Skipped { get; private set; }

    public static AuthenticateResult Success(AuthenticationTicket ticket)
    {
        return new AuthenticateResult() { Ticket = ticket };
    }

    public static AuthenticateResult Skip()
    {
        return new AuthenticateResult() { Skipped = true };
    }

    public static AuthenticateResult Fail(Exception failure)
    {
        return new AuthenticateResult() { Failure = failure };
    }
}

An AuthenticateResult holds the result of an attempted authentication and is created by calling one of the static methods Success, Skip or Fail. If the authentication was successful, then a successful AuthenticationTicket must be provided.

The CookieAuthenticationHandler

The CookieAuthenticationHandler is where all the authentication work is actually done. It derives from the AuthenticationHandler base class, and so in principle only a single method needs implementing - HandleAuthenticateAsync():

protected abstract Task<AuthenticateResult> HandleAuthenticateAsync();  

This method is responsible for actually authenticating a given request, i.e. determining if the given request contains an identity of the expected type, and if so, returning an AuthenticateResult containing the authenticated ClaimsPrincipal. As is to be expected, the CookieAuthenticationHandler implementation depends on a number of other methods, but we'll run through each of those shortly:

protected override async Task<AuthenticateResult> HandleAuthenticateAsync()  
{
    var result = await EnsureCookieTicket();
    if (!result.Succeeded)
    {
        return result;
    }

    var context = new CookieValidatePrincipalContext(Context, result.Ticket, Options);
    await Options.Events.ValidatePrincipal(context);

    if (context.Principal == null)
    {
        return AuthenticateResult.Fail("No principal.");
    }

    if (context.ShouldRenew)
    {
        RequestRefresh(result.Ticket);
    }

    return AuthenticateResult.Success(new AuthenticationTicket(context.Principal, context.Properties, Options.AuthenticationScheme));
}

So first of all, the handler calls EnsureCookieTicket() which tries to create an AuthenticateResult from a cookie in the HttpContext. Three things can happen here, depending on the state of the cookie:

  1. If the cookie doesn't exist, i.e. the user has not yet signed in, the method will return AuthenticateResult.Skip(), indicating this status.
  2. If the cookie exists and is valid, it returns a deserialised AuthenticationTicket using AuthenticateResult.Success(ticket).
  3. If the cookie cannot be decrypted (e.g. it is corrupt or has been tampered with), if it has expired, or if session state is used and no corresponding session can be found, it returns AuthenticateResult.Fail().

At this point, if we don't have a valid AuthenticationTicket, then the method just bails out. Otherwise, we are theoretically happy that a request is authenticated. However at this point we have literally just taken the word of an encrypted cookie. It's possible that things may have changed in the back end of your application since the cookie was issued - the user may have been deleted for instance! To handle this, the CookieHandler calls ValidatePrincipal, which should set the ClaimsPrincipal to null if it is no longer valid. If you are using the CookieAuthenticationMiddleware in your own apps and are not using ASP.NET Core Identity, you should take a look at the documentation for handling back-end changes during authentication.
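If you do need to hook into that check, a rough sketch of wiring up the ValidatePrincipal event is shown below. The IUserRepository service and its UserExistsAsync method are hypothetical placeholders for whatever back-end check your application needs:

app.UseCookieAuthentication(new CookieAuthenticationOptions  
{
    AuthenticationScheme = "MyCookieMiddlewareInstance",
    AutomaticAuthenticate = true,
    AutomaticChallenge = true,
    Events = new CookieAuthenticationEvents
    {
        OnValidatePrincipal = async context =>
        {
            // Hypothetical back-end check - replace with whatever makes sense for your app
            var userRepository = context.HttpContext.RequestServices
                .GetRequiredService<IUserRepository>();
            var userName = context.Principal?.Identity?.Name;

            if (userName == null || !await userRepository.UserExistsAsync(userName))
            {
                // The cookie decrypted fine, but the user behind it is gone - reject the
                // principal and remove the cookie so the request is treated as anonymous
                context.RejectPrincipal();
                await context.HttpContext.Authentication.SignOutAsync("MyCookieMiddlewareInstance");
            }
        }
    }
});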

SignIn and SignOut

For the simplest authentication, implementing HandleAuthenticateAsync is all that is required. In reality however, you will need to override other methods of AuthenticationHandler in order to have usable behaviour. The CookieAuthenticationHandler needs more behaviour than just this method - HandleAuthenticateAsync means we can read and deserialise an authentication ticket to a ClaimsPrincipal, but we also need the ability to set a cookie when the user signs in, and to remove the cookie when the user signs out.

The HandleSignInAsync(SignInContext signin) method builds up a new AuthenticationTicket, encrypts it, and writes the cookie to the response. It is called internally as part of a call to SignInAsync(), which in turn is called by AuthenticationManager.SignInAsync(). I won't cover this aspect in detail in this post, but it is the AuthenticationManager which you would typically invoke from your AccountController after a user has successfully logged in. As shown in my previous post, you would construct a ClaimsPrincipal with the appropriate claims and pass that in to AuthenticationManager, which eventually would reach the CookieAuthenticationMiddleware and allow you to set the cookie. Finally, if the user is currently on the login page, it redirects the user to the return url.

At the other end of the process, HandleSignOutAsync deletes the authentication cookie from the context, and if the user is on the logout page, redirects the user to the return url.

Unauthorised vs Forbidden

The final two methods of AuthenticationHandler which are overridden in the CookieAuthenticationHandler deal with the case where authentication or authorisation has failed. These two cases are distinct but easy to confuse.

A user is unauthorised if they have not yet signed in. This corresponds to a 401 when thinking about HTTP requests. A user is forbidden if they have already signed in, but the identity they are using does not have permission to view the requested resource, which corresponds to a 403 in HTTP.

The default implementations of HandleUnauthorizedAsync and HandleForbiddenAsync in the base AuthenticationHandler are very simple, and look like this (for the forbidden case):

protected virtual Task<bool> HandleForbiddenAsync(ChallengeContext context)  
{
    Response.StatusCode = 403;
    return Task.FromResult(true);
}

As you can see, they just set the status code of the response and leave it at that. While perfectly valid from an HTTP and security point of view, leaving the methods like that would give a poor experience for users, as they would simply see a blank screen.


Instead, the CookieAuthenticationHandler overrides these methods and redirects the users to a different page. For the unauthorised response, the user is automatically redirected to the LoginPath which we specified when setting up the middleware. The user can then hopefully login and continue where they left off.

Similarly, for the Forbidden response, the user is redirected to the path specified in AccessDeniedPath when we added the middleware to our pipeline. We don't redirect to the login path in this case, as the user is already authenticated, they just don't have the correct claims or permissions to view the requested resource.

Customising the CookieAuthenticationHandler using CookieAuthenticationOptions

We've already covered a couple of the properties on the CookieAuthenticationOptions we passed when creating the middleware, namely LoginPath and AccessDeniedPath, but it's worth looking at some of the other common properties too.

First up is AuthenticationScheme. In the previous post on authentication we said that when you create an authenticated ClaimsIdentity you must provide an AuthenticationScheme. The AuthenticationScheme provided when configuring the middleware is passed down into the ClaimsIdentity when it is created, as well as into a number of other fields. It becomes particularly important when you have multiple middleware for authentication and authorisation (which I'll go into on a later post).

Next up is the AutomaticAuthenticate property, but that requires us to back-pedal slightly to think about how the authentication middleware works. I'm going to assume you understand ASP.NET Core middleware in general; if not, it's probably worth reading up on it first!

The AuthenticationHandler Middleware

The CookieAuthenticationMiddleware is typically configured to run relatively early in the pipeline. The abstract base class AuthenticationMiddleware<T> from which it derives has a simple Invoke method, which just creates a new handler of the appropriate type, initialises it, runs the remaining middleware in the pipeline, and then tears down the handler:

 public abstract class AuthenticationMiddleware<TOptions> 
    where TOptions : AuthenticationOptions, new()
{
    private readonly RequestDelegate _next;

    public string AuthenticationScheme { get; set; }
    public TOptions Options { get; set; }
    public ILogger Logger { get; set; }
    public UrlEncoder UrlEncoder { get; set; }

    public async Task Invoke(HttpContext context)
    {
        var handler = CreateHandler();
        await handler.InitializeAsync(Options, context, Logger, UrlEncoder);
        try
        {
            if (!await handler.HandleRequestAsync())
            {
                await _next(context);
            }
        }
        finally
        {
            try
            {
                await handler.TeardownAsync();
            }
            catch (Exception)
            {
                // Don't mask the original exception, if any
            }
        }
    }

    protected abstract AuthenticationHandler<TOptions> CreateHandler();
}

As part of the call to InitializeAsync, the handler verifies whether AutomaticAuthenticate is true. If it is, then the handler will immediately run the method HandleAuthenticateAsync, so all subsequent middleware in the pipeline will see an authenticated ClaimsPrincipal. In contrast, if you do not set AutomaticAuthenticate to true, then authentication will only occur at the point authorisation is required, e.g. when you hit an [Authorize] attribute or similar.

Similarly, during the return path through the middleware pipeline, if AutomaticChallenge is true and the response code is 401, then the handler will call HandleUnauthorizedAsync. In the case of the CookieAuthenticationHandler, as discussed, this will automatically redirect you to the login page specified.

The key points here are that when the Automatic properties are set, the authentication middleware always runs at its configured place in the pipeline. If not, the handlers are only run in response to direct authentication or challenge requests. If you are having problems where you are returning a 401 from a controller, and you are not getting redirected to the login page, then check the value of AutomaticChallenge and make sure it's true.

In cases where you only have a single piece of authentication middleware, it makes sense to have both values set to true. Where it gets more complicated is if you have multiple authentication handlers. In that case, as explained in the docs, you must set AutomaticAuthenticate to false. I'll cover the specifics of using multiple authentication handlers in a subsequent post, but the docs give a good starting point.
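As a rough sketch of what a multi-handler setup might look like (the scheme names here are purely illustrative), you could have one 'application' cookie that reacts automatically, while a second instance is only used when its scheme is requested explicitly:

// Reacts automatically: authenticates every request and handles 401 challenges
app.UseCookieAuthentication(new CookieAuthenticationOptions  
{
    AuthenticationScheme = "ApplicationCookie",
    LoginPath = new PathString("/Account/Login/"),
    AutomaticAuthenticate = true,
    AutomaticChallenge = true
});

// Passive: only used when "ExternalCookie" is requested explicitly by name
app.UseCookieAuthentication(new CookieAuthenticationOptions  
{
    AuthenticationScheme = "ExternalCookie",
    AutomaticAuthenticate = false,
    AutomaticChallenge = false
});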

Summary

In this post we used the CookieAuthenticationMiddleware as an example of how to implement an AuthenticationMiddleware. We showed some of the methods which must be handled in order to implement an AuthenticationHandler, the methods called to sign a user in and out, and how unauthorised and forbidden requests are handled.

Finally, we showed some of the common options available when configuring the CookieAuthenticationOptions, and the effects they have.

In later posts I will cover how to configure your application to use multiple authentication handlers, how authorisation works and the various ways to use it, and how ASP.NET Core Identity pulls all of these aspects together to do the hard work for you.


Andrew Lock: Introduction to Authentication with ASP.NET Core

Introduction to Authentication with ASP.NET Core

This is the first in a series of posts looking at authentication and authorisation in ASP.NET Core. In this post, I'm going to talk about authentication in general and how claims-based authentication works in ASP.NET Core.

The difference between Authentication and Authorisation

First of all, we should clarify the difference between these two dependent facets of security. The simple answer is that Authentication is the process of determining who you are, while Authorisation revolves around what you are allowed to do, i.e. permissions. Obviously before you can determine what a user is allowed to do, you need to know who they are, so when authorisation is required, you must also first authenticate the user in some way.

Authentication in ASP.NET Core

The fundamental properties associated with identity have not really changed in ASP.NET Core - although they are different, they should be familiar to ASP.NET developers in general. For example, in ASP.NET 4.x, there is a property called User on HttpContext, which is of type IPrincipal, which represents the current user for a request. In ASP.NET Core there is a similar property named User, the difference being that this property is of type ClaimsPrincipal, which implements IPrincipal.

The move to use ClaimsPrincipal highlights a fundamental shift in the way authentication works in ASP.NET Core compared to ASP.NET 4.x. Previously, authorisation was typically Role-based, so a user may belong to one or more roles, and different sections of your app may require a user to have a particular role in order to access it. In ASP.NET Core this kind of role-based authorisation can still be used, but that is primarily for backward compatibility reasons. The route they really want you to take is claims-based authentication.

Claims-based authentication

The concept of claims-based authentication can be a little confusing when you first come to it, but in practice it is probably very similar to approaches you are already using. You can think of claims as being a statement about, or a property of, a particular identity. That statement consists of a name and a value. For example you could have a DateOfBirth claim, FirstName claim, EmailAddress claim or IsVIP claim. Note that these statements are about what or who the identity is, not what they can do.

The identity itself represents a single declaration that may have many claims associated with it. For example, consider a driving license. This is a single identity which contains a number of claims - FirstName, LastName, DateOfBirth, Address and which vehicles you are allowed to drive. Your passport would be a different identity with a different set of claims.

So let's take a look at that in the context of ASP.NET Core. Identities in ASP.NET Core are represented by the ClaimsIdentity class. A simplified version of the class might look like this (the actual class is a lot bigger!):

public class ClaimsIdentity: IIdentity  
{
    public string AuthenticationType { get; }
    public bool IsAuthenticated { get; }
    public IEnumerable<Claim> Claims { get; }

    public Claim FindFirst(string type) { /*...*/ }
    public bool HasClaim(string type, string value) { /*...*/ }
}

I have shown three of the main properties in this outline, including Claims which consists of all the claims associated with an identity. There are a number of utility methods for working with the Claims, two of which I have shown here. These are useful when you come to authorisation, and you are trying to determine whether a particular Identity has a given Claim you are interested in.
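For example, inside an MVC controller (where User is the current ClaimsPrincipal) you might interrogate the identity something like this - the IsVIP claim is just the illustrative one from earlier:

var identity = User.Identity as ClaimsIdentity;

// FindFirst returns the first matching Claim (or null), HasClaim returns a bool
var firstName = identity?.FindFirst(ClaimTypes.GivenName)?.Value;  
var isVip = identity != null && identity.HasClaim("IsVIP", "true");  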

The AuthenticationType property is fairly self-explanatory. In our practical example previously, this might be the string Passport or DriversLicense, but in ASP.NET it is more likely to be Cookies, Bearer, or Google etc. It's simply the method that was used to authenticate the user, and to determine the claims associated with an identity.

Finally, the property IsAuthenticated indicates whether an identity is authenticated or not. This might seem redundant - how could you have an identity with claims when it is not authenticated? One scenario may be where you allow guest users on your site, e.g. on a shopping cart. You still have an identity associated with the user, and that identity may still have claims associated with it, but they will not be authenticated. This is an important distinction to bear in mind.

As an adjunct to that, in ASP.NET Core if you create a ClaimsIdentity and provide an AuthenticationType in the constructor, IsAuthenticated will always be true. So an authenticated user must always have an AuthenticationType, and, conversely, you cannot have an unauthenticated user which has an AuthenticationType.
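A quick sketch to illustrate - the same claims, authenticated or not, depending on whether an AuthenticationType is supplied to the constructor:

var claims = new List<Claim> { new Claim(ClaimTypes.Name, "Andrew") };

var anonymousIdentity = new ClaimsIdentity(claims);                 // IsAuthenticated == false  
var authenticatedIdentity = new ClaimsIdentity(claims, "Passport"); // IsAuthenticated == true  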

Multiple Identities

Hopefully at this point you have a conceptual handle on claims and how they relate to an Identity. I said at the beginning of this section that the User property on HttpContext is a ClaimsPrincipal, not a ClaimsIdentity, so let's take a look at a simplified version of it:

public class ClaimsPrincipal : IPrincipal  
{
    public IIdentity Identity { get; }
    public IEnumerable<ClaimsIdentity> Identities { get; }
    public IEnumerable<Claim> Claims { get; }

    public bool IsInRole(string role) { /*...*/ }
    public Claim FindFirst(string type) { /*...*/ }
    public bool HasClaim(string type, string value) { /*...*/ }
}

The important point to take from this class is that there is an Identities property which returns IEnumerable<ClaimsIdentity>. So a single ClaimsPrincipal can consist of multiple Identities. There is also an Identity property that is there in order to implement the IPrincipal interface - in .NET Core it just selects the first identity in Identities.

Going back to our previous example of the passport and driving license, multiple identities actually makes sense - those documents are both forms of identity, each of which contain a number of claims. In this case you are the principal, and you have two forms of identity. When you have those two pieces of identity in your possession, you as the principal inherit all the claims from all your identities.

Consider another practical example - you are taking a flight. First you will be asked at the booking desk to prove the claims you make about your FirstName and LastName etc. Luckily, you remembered your passport, which is an identity that verifies those claims, so you receive your boarding pass and you're on your way to the next step.

At security you are asked to prove the claim that you are booked on to a flight. This time you need the other form of identity you are carrying, the boarding pass, which has the FlightNumber claim, so you are allowed to continue on your way.

Finally, once you are through security, you make your way to the VIP lounge, and are asked to prove your VIP status with the VIP Number claim. This could be in the form of a VIP card, which would be another form of identity and would verify the claim requested. If you did not have a card, you could not present the requested claim, you would be denied access, and so would be asked to leave and stop making a scene.


Again, the key points here are that a principal can have multiple identities, these identities can have multiple claims, and the ClaimsPrincipal inherits all the claims of its Identities.

As mentioned previously, role based authorisation is mostly around for backwards compatibility reasons, so the IsInRole method will generally be unneeded if you adhere to the claims-based authentication emphasised in ASP.NET Core. Under the hood, this is also just implemented using claims, where the claim type defaults to RoleClaimType, or ClaimTypes.Role.

Thinking in terms of ASP.NET Core again, multiple identities and claims could be used for securing different parts of your application, just as they were at the airport. For example, you may login with a username and password, and be granted a set of claims based on the identity associated with that, which allows you to browse the site. But say you have a particularly sensitive section in your app that you want to secure further. This could require that you present an additional identity, with additional associated claims, for example by using two factor authentication, or requiring you to re-enter your password. That would allow the current principal to have multiple identities, and to assume the claims of all the provided identities.
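To make that concrete, here is a small sketch of a principal built from two identities, continuing the airport analogy (the claim types are made up for illustration):

var passport = new ClaimsIdentity(new[]
{
    new Claim(ClaimTypes.GivenName, "Andrew"),
    new Claim(ClaimTypes.Surname, "Lock")
}, "Passport");

var boardingPass = new ClaimsIdentity(new[]
{
    new Claim("FlightNumber", "BA123")
}, "BoardingPass");

var principal = new ClaimsPrincipal(passport);  
principal.AddIdentity(boardingPass);

// The principal exposes the union of the claims from both identities
var canBoard = principal.HasClaim("FlightNumber", "BA123");     // true  
var surname = principal.FindFirst(ClaimTypes.Surname)?.Value;   // "Lock"  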

Creating a new principal

So now we've seen how principals work in ASP.NET Core, how would we go about actually creating one? A simple example, such as you might see in a normal web page login, might contain code similar to the following:

public async Task<IActionResult> Login(string returnUrl = null)  
{
    const string Issuer = "https://gov.uk";

    var claims = new List<Claim> {
        new Claim(ClaimTypes.Name, "Andrew", ClaimValueTypes.String, Issuer),
        new Claim(ClaimTypes.Surname, "Lock", ClaimValueTypes.String, Issuer),
        new Claim(ClaimTypes.Country, "UK", ClaimValueTypes.String, Issuer),
        new Claim("ChildhoodHero", "Ronnie James Dio", ClaimValueTypes.String)
    };

    var userIdentity = new ClaimsIdentity(claims, "Passport");

    var userPrincipal = new ClaimsPrincipal(userIdentity);

    await HttpContext.Authentication.SignInAsync("Cookie", userPrincipal,
        new AuthenticationProperties
        {
            ExpiresUtc = DateTime.UtcNow.AddMinutes(20),
            IsPersistent = false,
            AllowRefresh = false
        });

    return RedirectToLocal(returnUrl);
}

This method currently hard-codes the claims in, but obviously you would obtain the claim values from a database or some other source. The first thing we do is build up a list of claims, populating each with a string for its name, a string for its value, and optional ClaimValueType and Issuer fields. The ClaimTypes class is a helper which exposes a number of common claim types. Each of these is a URL, for example http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name, but you do not have to use a URL, as shown in the last claim added.

Once you have built up your claims you can create a new ClaimsIdentity, passing in your claim list, and specifying the AuthenticationType (to ensure that your identity has IsAuthenticated = true). Finally you can create a new ClaimsPrincipal using your identity and sign the user in. In this case we are telling the AuthenticationManager to use the "Cookie" authentication handler, which we must have configured as part of our middleware pipeline.

Summary

In this post, I described how claims-based authentication works and how it applies to ASP.NET Core. In the next post, I will look at the next stage of the authentication process - how the cookie middleware actually goes about signing you in with the provided principal. Subsequent posts will cover how you can use multiple authentication handlers, how authorisation works, and how ASP.NET Core Identity ties it all together.


Dominick Baier: Fixing OAuth 2.0 with OpenID Connect?

I didn’t like Nat’s Fixing OAuth? post.

“For protecting a resource with low value, current RFC6749 and RFC6750 with an appropriate constraint should be good enough…For protecting a resource whose value is higher than a certain level, e.g., the write access to the Financial API, then it would be more appropriate to use a modified protocol.”

I agree that write access to a financial API is a high value operation (and security measures will go far beyond authentication and token requests) – but most users and implementers of OAuth 2.0 based systems would surely disagree that their resources only have a low value.

Then on the other hand I agree that OAuth 2.0 (or rather RFC6749 and 6750) on its own indeed has its issues and I would advise against using it (important part “on its own”).

Instead I would recommend using OpenID Connect – all of the OAuth 2.0 problems regarding client to provider communication are already fixed in OIDC – metadata, signed protocol responses, sender authentication, nonces etc.

When we designed identity server, we always saw OpenID Connect as a “super-set” of OAuth 2.0 and always recommended against using OAuth without the OIDC parts. Some people didn’t like that – but applying sound constraints definitely helped security.

I really don’t understand why this is not the official messaging? Maybe it’s political?

[screenshot of a tweet (no response)]

Wrt the issues around bearer tokens – well – I really, really don’t understand why proof of possession and HTTP signing are taking that long and seem to be such a low priority. We successfully implemented PoP tokens in IdentityServer and customers are using it. Of course there are issues – there will always be issues. But sitting on a half done spec for years will definitely not solve them.

So my verdict is – for interactive applications, don’t use OAuth 2.0 on its own. Just use OpenID Connect and identity tokens in addition to access tokens – you don’t need to be a financial API to have proper security.

 


Filed under: IdentityServer, OAuth, OpenID Connect, WebAPI


Andrew Lock: Forking the pipeline - adding tenant-specific files with SaasKit in ASP.NET Core

Forking the pipeline - adding tenant-specific files with SaasKit in ASP.NET Core

This is another in a series of posts looking at how to add multi-tenancy to ASP.NET Core applications using SaasKit. SaasKit is an open source project, created by Ben Foster, to make adding multi-tenancy to your application easier.

In the last two posts I looked at how you can load your tenants from the database, and cache the TenantContext<AppTenant> between requests. Once you have a tenant context being correctly resolved as part of your middleware pipeline, you can start to add additional tenant-specific features on top of this.

Theming and static files

One very common feature in multi-tenant applications is the ability to add theming, so that different tenants can have a custom look and feel, while keeping the same overall functionality. Ben described a way to do this on his blog using custom Views per tenant, and a custom IViewLocationExpander for resolving them at run time.

This approach works well for what it is trying to achieve - a tenant can have a highly customised view of the same underlying functionality by customising the view templates per tenant. Similarly, the custom _layout.cshtml files reference different css files located at, for example /themes/THEME_NAME/assets, so the look of the site can be customised per tenant. However this is relatively complicated if all you want to do is, for example, serve a different file for each tenant - it requires you to create a custom theme and view for each tenant.

Also, in this approach there is no isolation between the different themes; the templates just reference different files. It is perfectly possible to reference the files of one theme from another, by just including the appropriate path. This approach assumes there is no harm in a tenant using theme A accessing files from theme B. This is a safe bet when just used for theming, but what if we were serving some semi-sensitive file, say a site logo? It may be that we don't want Tenant A to be able to view the logo of Tenant B, without explicitly being within the Tenant B context.

To demonstrate the problem, I created a simple MVC multi-tenant application using the default template and added SaasKit. I added my AppTenant model shown below, and configured the tenant to be loaded by hostname from configuration for simplicity. You can find the full code on GitHub.

public class AppTenant  
{
    public string Name { get; set; }
    public string Hostname { get; set; }
    public string Folder { get; set; }
}

Note that the AppTenant class has a Folder property. This will be the name of the subfolder in which tenant specific assets live. Static files are served by default from the wwwroot folder; we will store our tenant specific files in a sub folder of this as indicated by the Folder property. For example, for Tenant 1, we store our files in /wwwroot/tenants/tenant1:

[screenshot: the tenant-specific folders under wwwroot/tenants]

Inside each of the tenant-specific folders I have created an images/banner.svg file which we will show on the homepage for each tenant. The key thing to keep in mind is we don't want tenants to be able to access the banner of another tenant.

First attempt - direct serving of static files

The easiest way to show the tenant specific banner on the homepage is to just update the image path to include AppTenant.Folder. To do this we first inject the current AppTenant into our View as described in a previous post, and use the property directly in the image path:

@inject AppTenant Tenant;
@{
    ViewData["Title"] = "Home Page";
}

<div id="myCarousel" class="carousel slide">  
    <div class="carousel-inner" role="listbox">
        <div class="item active">
            <img src="~/tenant/@Tenant.Folder/images/banner.svg" alt="ASP.NET" class="img-responsive" />
        </div>
    </div>
</div>  

Here you can see we are creating a banner header containing just one image, and injecting the AppTenant.Folder property to ensure we get the right banner. The result is that different images are displayed per tenant:

Tenant 1 (localhost:5001):
[screenshot: the Tenant 1 banner]

Tenant 2 (localhost:5002):
[screenshot: the Tenant 2 banner]

This satisfies our first requirement of having tenant-specific files, but it fails at the second - we can access the Tenant 2 banner from the Tenant 1 hostname (localhost:5001):

[screenshot: the Tenant 2 banner loaded from the Tenant 1 host]

This is the specific problem we are trying to address, so we will need a new approach.

Forking the middleware pipeline

The technique we are going to use here is to fork the middleware pipeline. As explained in my previous post on creating custom middleware, middleware is essentially everything that sits between the raw request constructed by the web server and your application behaviour.

In ASP.NET Core the middleware effectively sits in a sequential pipe. Each piece of middleware can perform some operation on the HttpContext, and then either return, or call the next middleware in the pipe. Finally it gets another chance to modify the HttpContext on the way 'back through'.

[diagram: the standard sequential middleware pipeline]

When you use SaasKit in your application, you add a piece of TenantResolutionMiddleware into the pipeline. It is also possible, as described in Ben Foster's post, to split the middleware pipeline per tenant. In that way you can have different middleware for each tenant, before the pipeline merges again, to continue with the remainder of the middleware:

[diagram: the middleware pipeline branching per tenant and merging again]

To achieve our requirements, we are going to be doing something slightly different again - we are going to fork the pipeline completely such that requests to our tenant specific files go down one branch, while all other requests continue down the pipeline as usual.

[diagram: the pipeline forked completely for tenant-specific file requests]

Building the middleware

Before we go about building the required custom middleware, it's worth noting that there are actually lots of different ways to achieve what I'm aiming for here. The approach I'm going to show is just one of them, and it works like this:

  • Tenant resolution should happen at the start of the pipeline
  • Requests for tenant specific static files should arrive at the static file path, with the AppTenant.Folder segment removed. e.g. from the example above, a request for the banner image for tenant 1 should go to /tenant/images/banner.svg.
  • Register a route which matches paths starting with the /tenant/ segment.
  • If the route is not matched, continue on the pipeline as usual.
  • If the route is matched, fork the pipeline. Insert the appropriate AppTenant.Folder segment into the path and serve the file using the standard static file middleware.

UseRouter to match path and fork the pipeline

The first step in processing a tenant-specific file, is identifying when a tenant-specific static file is requested. We can achieve this using the IRouter interface from the ASP.NET Core library, and configuring it to look for our path prefix.

We know that any requests to our files should start with the folder name /tenant/ so we configure our router to fork the pipeline whenever it is matched. We can do this using a RouteBuilder and MapRoute in the Startup.Configure method:

var routeBuilder = new RouteBuilder(app);  
var routeTemplate = "tenant/{*filePath}";  
routeBuilder.MapRoute(routeTemplate, (IApplicationBuilder fork) =>  
    {
        //Add middleware to rewrite our path for tenant specific files
        fork.UseMiddleware<TenantSpecificPathRewriteMiddleware>();
        fork.UseStaticFiles();
    });
var router = routeBuilder.Build();  
app.UseRouter(router);  

We are mapping a single route as required, and also specifying a catch-all route parameter which will match everything after the first segment, and assign it to the filePath route parameter.

It is also here that the middleware pipeline is forked when the route is matched. We have added the static file middleware to the end of the pipeline fork, and our custom middleware just before that. As the static file middleware just sees a path that contains our tenant-specific files, it acts exactly like normal - if the file exists, it serves it, otherwise it returns a 404.

Rewriting the path for tenant-specific files

In order to rewrite the path we will use a small piece of middleware which is called before we attempt to resolve our tenant-specific static files.

public class TenantSpecificPathRewriteMiddleware  
{
    private readonly RequestDelegate _next;

    public TenantSpecificPathRewriteMiddleware(
        RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        var tenantContext = context.GetTenantContext<AppTenant>();

        if (tenantContext != null)
        {
            //remove the prefix portion of the path
            var originalPath = context.Request.Path;
            var tenantFolder = tenantContext.Tenant.Folder;
            var filePath = context.GetRouteValue("filePath");
            var newPath = new PathString($"/tenant/{tenantFolder}/{filePath}");

            context.Request.Path = newPath;

            await _next(context);

            //replace the original url after the remaining middleware has finished processing
            context.Request.Path = originalPath;
        }
    }
}

This middleware just does one thing - it inserts the AppTenant.Folder segment into the path, and replaces the value of HttpContext.Request.Path. It then calls the remaining downstream middleware (in our case, just the static file handler). Once the remaining middleware has finished processing, it restores the original request path. That way, any upstream middleware which looks at the path on the return journey through will be unaware any change happened.

It is worth noting that this setup makes it impossible to access files from another tenant's folder. For example, if I am Tenant 1, attempting to access the banner of Tenant 2, I might try a path like /tenant/tenant2/images/banner.svg. However, our rewriting middleware will alter the path to be /tenant/tenant1/tenant2/images/banner.svg - which likely does not exist, but in any case resides in the tenant1 folder and so is by definition acceptable for serving to Tenant 1.

Referencing a tenant specific file

Now we have the relevant infrastructure in place we just need to reference the tenant-specific banner file in our view:

@{
    ViewData["Title"] = "Home Page";
}

<div id="myCarousel" class="carousel slide">  
    <div class="carousel-inner" role="listbox">
        <div class="item active">
            <img src="~/tenant/images/banner.svg" alt="ASP.NET" class="img-responsive" />
        </div>
    </div>
</div>  

As an added bonus, we no longer need to inject the tenant into the view in order to build the full path to the tenant-specific file. We just reference the path without the AppTenant.Folder segment in the knowledge it'll be added later.

Testing it out

And that's it, we're all done! To test it out we verify that localhost:5001 and localhost:5002 return their appropriate banners as before.

Tenant 1 (localhost:5001):
[screenshot: the Tenant 1 banner]

Tenant 2 (localhost:5002):
[screenshot: the Tenant 2 banner]

So that still works, but what about if we try and access the purple banner of Tenant 2 from Tenant 1?

[screenshot: the request for Tenant 2's banner returning a 404 on the Tenant 1 host]

Success - looking at the developer tools we can see that the request returned a 404. This was because the actual path tested by the static file middleware, /tenant/tenant1/tenant2/images/banner.svg, does not exist.

Tidying things up

Now we've seen that our implementation works, we can tidy things up a little. As a convention, middleware is typically added to the pipeline with a Use extension method, in the same way UseStaticFiles was added to our fork earlier. We can easily wrap our router in an extension method to give the same effect:

public static IApplicationBuilder UsePerTenantStaticFiles<TTenant>(  
    this IApplicationBuilder app,
    string pathPrefix,
    Func<TTenant, string> tenantFolderResolver)
{
    var routeBuilder = new RouteBuilder(app);
    var routeTemplate = pathPrefix + "/{*filePath}";
    routeBuilder.MapRoute(routeTemplate, (IApplicationBuilder fork) =>
        {
            fork.UseMiddleware<TenantSpecificPathRewriteMiddleware<TTenant>>(pathPrefix, tenantFolderResolver);
            fork.UseStaticFiles();
        });
    var router = routeBuilder.Build();
    app.UseRouter(router);

    return app;
}

As well as wrapping the route builder in an IApplicationBuilder extension method, I've done a couple of extra things too. First, I've made the method (and our TenantSpecificPathRewriteMiddleware) generic, so that we can reuse it in apps with other AppTenant implementations. As part of that, you need to pass in a Func<TTenant, string> to indicate how to obtain the tenant-specific folder name. Finally, you can pass in the tenant/ routing template prefix, so you can name the tenant-specific folder in wwwroot anything you like.
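For reference, a rough sketch of what the generic version of the rewrite middleware might look like is shown below (the complete sample is in the GitHub repository linked earlier):

public class TenantSpecificPathRewriteMiddleware<TTenant> where TTenant : class  
{
    private readonly RequestDelegate _next;
    private readonly string _pathPrefix;
    private readonly Func<TTenant, string> _tenantFolderResolver;

    public TenantSpecificPathRewriteMiddleware(
        RequestDelegate next,
        string pathPrefix,
        Func<TTenant, string> tenantFolderResolver)
    {
        _next = next;
        _pathPrefix = pathPrefix;
        _tenantFolderResolver = tenantFolderResolver;
    }

    public async Task Invoke(HttpContext context)
    {
        var tenantContext = context.GetTenantContext<TTenant>();

        if (tenantContext != null)
        {
            //insert the tenant-specific folder into the path
            var originalPath = context.Request.Path;
            var tenantFolder = _tenantFolderResolver(tenantContext.Tenant);
            var filePath = context.GetRouteValue("filePath");
            context.Request.Path = new PathString($"/{_pathPrefix}/{tenantFolder}/{filePath}");

            await _next(context);

            //restore the original path once the downstream middleware has finished
            context.Request.Path = originalPath;
        }
    }
}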

To use the extension method, we just call it in Startup.Configure, after the tenant resolution middleware:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)  
{
    //other configuration
    app.UseMultitenancy<AppTenant>();

    app.UsePerTenantStaticFiles<AppTenant>("tenant", x => x.Folder);

    app.UseStaticFiles();
    //app.UseMvc(); etc
}

Considerations

As always with middleware, the order is important. Obviously we cannot use tenant specific static files if we have not yet run the tenant resolution middleware. Also, it's critical for this design that the UseStaticFiles call comes after both UseMultitenancy and UsePerTenantStaticFiles. This is in contrast to the usual pattern where you would have UseStaticFiles very early in the pipeline.

The reason for this is that we need to make sure we fork the pipeline as early as possible when resolving paths of the form /tenant/REST_OF_THE_PATH. If the static file handler was first in the pipeline then we would be back to square one in serving files from other tenants!

Another point I haven't addressed is how we handle the case when the tenant context cannot be resolved. There are many different ways to handle this, which Ben covers in detail in his post on handling unresolved tenants. These include adding a default tenant (so a context always exists), adding additional middleware to redirect, or returning a 404 if the tenant cannot be resolved.

With respect to our fork of the pipeline, we are explicitly checking for a tenant context in the TenantSpecificPathRewriteMiddleware, and if one is not found, we are just returning immediately. Note however that we are not setting a status code, which means that the response sent to the browser will be the default 200, but with no content. The result is essentially undefined at this point, so it is probably wise to handle the unresolved context issue immediately after the call to UseMultitenancy, before calling our tenant-specific static file middleware.
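As a rough sketch, one of the simplest options is to short-circuit with a 404 straight after tenant resolution, so that the rest of the pipeline never runs without a tenant context:

app.UseMultitenancy<AppTenant>();

// Return a 404 for any host we don't recognise, before the rest of the pipeline runs
app.Use(async (context, next) =>  
{
    if (context.GetTenantContext<AppTenant>() == null)
    {
        context.Response.StatusCode = 404;
        return;
    }

    await next();
});

app.UsePerTenantStaticFiles<AppTenant>("tenant", x => x.Folder);  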

As I mentioned previously, there are a number of different ways we could achieve the end result we're after here. For example, we could have used the Map extension on IApplicationBuilder to fork the pipeline instead of using an IRouter. The Map method looks for a path prefix (/tenant in our case) and forks the pipeline at this point, in a similar way to the IRouter implementation shown. It's worth noting there's also a basic url-rewriting middleware in development which may be useful for this sort of requirement in the near future.

Summary

Adding multi-tenancy to an ASP.NET Core application is made a lot simpler thanks to the open source SaasKit. Depending on your requirements, it can be used to enable data partitioning by using different databases per client, to provide different themes and styling across tenants, or to wholesale swap out portions of the middleware pipeline depending on the tenant.

In this post I showed how we can create a fork of the ASP.NET Core middleware pipeline and to use it to map generic urls of the form PREFIX/path/to/file.txt, to a tenant-specific folder such as PREFIX/TENANT/path/to/file.txt. This allows us to isolate static files between tenants where necessary.


Andrew Lock: Loading tenants from the database with SaasKit - Part 2, Caching

Loading tenants from the database with SaasKit - Part 2, Caching

In my previous post, I showed how you could add multi-tenancy to an ASP.NET Core application using the open source SaasKit library. SaasKit requires you to register an ITenantResolver<TTenant> which is used to identify and resolve the applicable tenant (if any) for a given HttpContext. In my post I showed how you could resolve tenants stored in a database using Entity Framework Core.

One of the advantages of loading tenants from the database is that the available tenants can be configured at runtime, as opposed to loaded once at startup. That means we can bring new tenants online or take others down while our app is still running, without any downtime. Also, the tenant details are completely decoupled from our application settings - there's no risk of tenant details being uploaded to our source control system, as the tenant details are production data that just lives in our production database.

The main disadvantage is that every single request (which makes it through the middleware pipeline to the TenantResolutionMiddleware) will be hitting the database to try and resolve the current tenant. We will always be getting the freshest data that way, but it will become more of a problem as our app scales.

In this post, I'm going to show a couple of ways you can get around this problem, while still storing your tenants in the database. You can find the source code for the examples on GitHub.

1. Loading tenants from the database into IOptions<T>

One of the simplest ways around the problem is to go back to storing our AppTenant models in an IOptions<T> backed setting class. In the simplest configuration-based implementation, the AppTenants themselves are loaded from appsettings.json (for example) and used directly in the ITenantResolver. This is the approach demonstrated in one of the SaasKit samples.

First we create an Options object containing our tenants:

public class MultitenancyOptions  
{
    public ICollection<AppTenant> AppTenants { get; set; } = new List<AppTenant>();
}

Then we update the app tenant resolver to resolve the tenants from our MultitenancyOptions object using the IOptions pattern:

public class AppTenantResolver : ITenantResolver<AppTenant>  
{
    private readonly ICollection<AppTenant> _tenants;

    public AppTenantResolver(IOptions<MultitenancyOptions> appTenantSettings)
    {
        _tenants = appTenantSettings.Value.AppTenants;
    }

    public Task<TenantContext<AppTenant>> ResolveAsync(HttpContext context)
    {
        TenantContext<AppTenant> tenantContext = null;

        var tenant = _tenants.FirstOrDefault(
            t => t.Hostname.Equals(context.Request.Host.Value.ToLower()));

        if (tenant != null)
        {
            tenantContext = new TenantContext<AppTenant>(tenant);
        }

        return Task.FromResult(tenantContext);
    }
}

Finally, in the ConfigureServices method of Startup, you can configure the MultitenancyOptions. In the SaasKit sample application, this is loaded directly from the IConfigurationRoot using:

services.Configure<MultitenancyOptions>(Configuration.GetSection("Multitenancy"));  

We can use a similar technique in our application, using the same AppTenantResolver and MultitenancyOptions, but instead of configuring them directly from IConfigurationRoot, we will load them from the database. Our full ConfigureServices method, including configuring our Entity Framework DbContext and adding the multi-tenancy services, is shown below:

Warning, while the code below works, I don't recommend you use this specific version of it, as explained below. For a better implementation, see my subsequent post here.

public void ConfigureServices(IServiceCollection services)  
{
    // Add framework services.
    services.AddMvc();

    var connectionString = Configuration["ApplicationDbContext:ConnectionString"];
    services.AddDbContext<ApplicationDbContext>(
        opts => opts.UseNpgsql(connectionString)
    );

    services.Configure<MultitenancyOptions>(
        options =>
        {
            var provider = services.BuildServiceProvider();
            using (var dbContext = provider.GetRequiredService<ApplicationDbContext>())
            {
                options.AppTenants = dbContext.AppTenants.ToList();
            }
        });

    services.AddMultitenancy<AppTenant, AppTenantResolver>();
}

The key here is that we are configuring our MultitenancyOptions to be loaded from the database. This lambda will be run the first time that IOptions<MultitenancyOptions> is required and will be cached for the lifetime of the app.

When the first request comes in, the TenantResolutionMiddleware will attempt to resolve a TenantContext<AppTenant>. This requires creating an instance of the AppTenantResolver, which in turn has a dependency on IOptions<MultitenancyOptions>. At this point, the AppTenants are loaded from the database as per our configuration and added to the MultitenancyOptions. The remainder of the request then processes as normal.

On subsequent requests, the previously configured IOptions<MultitenancyOptions> is injected immediately into the AppTenantResolver, so the configuration code and our database are hit only once.

Obviously this approach has a significant drawback - any changes to the AppTenants table in the database are ignored by the application; the available tenants are fixed after the first request. However it does still have the advantage of tenant details being stored in the database, so it may fit your needs.

One final thing to point out is the way we resolved the Entity Framework ApplicationDbContext while still in the ConfigureServices method. To do this, we had to call IServiceCollection.BuildServiceProvider in order to get an IServiceProvider, from which we could then retrieve an ApplicationDbContext.

While this works perfectly well in this example, I am not 100% sure this is a great idea - explicitly having to call BuildServiceProvider just feels wrong! Also, I believe it could lead to some subtle bugs if you are using a third party container (like Autofac or StructureMap) that uses its own implementation of IServiceProvider; the code above would bypass the third-party container. Just some things to be aware of if you decide to use it in your application.

Update - After seeing this tweet from David Fowler, it looks like you should probably use an IServiceScopeFactory to create a scope before obtaining an IServiceProvider, something similar to this:

services.Configure<MultitenancyOptions>(  
    options =>
    {
        var scopeFactory = services
            .BuildServiceProvider()
            .GetRequiredService<IServiceScopeFactory>();

        using (var scope = scopeFactory.CreateScope())
        {
            var provider = scope.ServiceProvider;
            using (var dbContext = provider.GetRequiredService<ApplicationDbContext>())
            {
                options.AppTenants = dbContext.AppTenants.ToList();
            }
        }
    });

Update 2 - A much cleaner approach to this can be achieved by using IConfigureOptions<T>, as explained in my post here. Also see this post by Ben Collins for the inspiration.

2. Caching tenants using MemoryCacheTenantResolver

So the configuration based approach works well enough but it has some caveats. We no longer hit the database on every request, but we've lost the ability to add new tenants at runtime.

Luckily, SaasKit comes with an ITenantResolver<TTenant> implementation base class which will give us the best of both worlds - the MemoryCacheTenantResolver<TTenant>. This class adds a wrapper around an IMemoryCache, allowing you to easily cache TenantContexts between requests.

To make use of it we need to implement some abstract methods:

public class CachingAppTenantResolver : MemoryCacheTenantResolver<AppTenant>  
{
    private readonly ApplicationDbContext _dbContext;

    public CachingAppTenantResolver(ApplicationDbContext dbContext, IMemoryCache cache, ILoggerFactory loggerFactory)
        : base(cache, loggerFactory)
    {
        _dbContext = dbContext;
    }

    protected override string GetContextIdentifier(HttpContext context)
    {
        return context.Request.Host.Value.ToLower();
    }

    protected override IEnumerable<string> GetTenantIdentifiers(TenantContext<AppTenant> context)
    {
        return new[] { context.Tenant.Hostname };
    }

    protected override Task<TenantContext<AppTenant>> ResolveAsync(HttpContext context)
    {
        TenantContext<AppTenant> tenantContext = null;
        var hostName = context.Request.Host.Value.ToLower();

        var tenant = _dbContext.AppTenants.FirstOrDefault(
            t => t.Hostname.Equals(hostName));

        if (tenant != null)
        {
            tenantContext = new TenantContext<AppTenant>(tenant);
        }

        return Task.FromResult(tenantContext);
    }
}

The first method, GetContextIdentifier() returns the unique identifier for a tenant. It is used as the key for the IMemoryCache, so it must be unique and resolvable from the HttpContext.

GetTenantIdentifiers() is called after a tenant has been resolved. We return all the applicable identifiers for the given tenant, which allows us to resolve the provided context when any of these identifiers are found in the HttpContext. That allows you to have multiple hostnames which resolve to the same tenant, for example.

Finally, ResolveAsync() is the method where the actual resolution for a tenant occurs, which is called if a tenant cannot be found in the IMemoryCache. This method call is identical to the one in my previous post, where we are finding the first tenant with the provided hostname in the database. If the tenant can be resolved, we create a new context and return it, whereupon it will be cached for future requests.

It's worth noting that if the tenant can not be resolved from the HttpContext (no tenant exists in the database with the provided hostname), then ResolveAsync returns null. However, this value is not cached in the IMemoryCache. This means every request with the missing hostname will require a call to ResolveAsync and consequently a hit against the database. Depending on your setup that may or may not be an issue. If necessary you could create your own version of MemoryCacheTenantResolver which also caches null results.
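As a rough sketch of that idea - not using the SaasKit base class, with an arbitrary five-minute expiry and an illustrative class name - you could cache misses as well as hits by using IMemoryCache directly:

public class NullCachingAppTenantResolver : ITenantResolver<AppTenant>  
{
    private readonly ApplicationDbContext _dbContext;
    private readonly IMemoryCache _cache;

    public NullCachingAppTenantResolver(ApplicationDbContext dbContext, IMemoryCache cache)
    {
        _dbContext = dbContext;
        _cache = cache;
    }

    public Task<TenantContext<AppTenant>> ResolveAsync(HttpContext context)
    {
        var hostName = context.Request.Host.Value.ToLower();

        TenantContext<AppTenant> tenantContext;
        if (!_cache.TryGetValue(hostName, out tenantContext))
        {
            var tenant = _dbContext.AppTenants.FirstOrDefault(
                t => t.Hostname.Equals(hostName));

            // Cache the result even when no tenant was found, so unknown hosts
            // don't hit the database on every request
            tenantContext = tenant == null ? null : new TenantContext<AppTenant>(tenant);
            _cache.Set(hostName, tenantContext, TimeSpan.FromMinutes(5));
        }

        return Task.FromResult(tenantContext);
    }
}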

To see the caching in effect we can just check out the logs generated by SaasKit when we make a request, thanks to the logging that is wired up throughout the framework via dependency injection.

On the first request to a new host, where I have my tenants stored in a PostgreSQL database, we can see the MemoryCacheTenantResolver attempting to resolve the tenant using the hostname, getting a miss, and so hitting the database:

dbug: SaasKit.Multitenancy.Internal.TenantResolutionMiddleware[0]  
      Resolving TenantContext using CachingAppTenantResolver.
dbug: SaasKit.Multitenancy.MemoryCacheTenantResolver[0]  
      TenantContext not present in cache with key "localhost:5000". Attempting to resolve.
dbug: Npgsql.NpgsqlConnection[3]  
      Opening connection to database 'DbTenantswithSaaskit' on server 'tcp://localhost:5432'.
info: Microsoft.EntityFrameworkCore.Storage.Internal.RelationalCommandBuilderFactory[1]  
      Executed DbCommand (2,233ms) [Parameters=[@__ToLower_0='?'], CommandType='Text', CommandTimeout='30']
      SELECT "t"."AppTenantId", "t"."Hostname", "t"."Name"
      FROM "AppTenants" AS "t"
      WHERE "t"."Hostname" = @__ToLower_0
      LIMIT 1
dbug: Npgsql.NpgsqlConnection[4]  
      Closing connection to database 'DbTenantswithSaaskit' on server 'tcp://localhost:5432'.
dbug: SaasKit.Multitenancy.MemoryCacheTenantResolver[0]  
      TenantContext:131d4739-0447-47f6-a0b3-f8a8656a946f resolved. Caching with keys "localhost:5000".
dbug: SaasKit.Multitenancy.Internal.TenantResolutionMiddleware[0]  
      TenantContext Resolved. Adding to HttpContext.

On the second request to the same tenant, the MemoryCacheTenantResolver gets a hit from the cache, so immediately returns the TenantContext from the first request.

dbug: SaasKit.Multitenancy.Internal.TenantResolutionMiddleware[0]  
      Resolving TenantContext using CachingAppTenantResolver.
dbug: SaasKit.Multitenancy.MemoryCacheTenantResolver[0]  
      TenantContext:131d4739-0447-47f6-a0b3-f8a8656a946f retrieved from cache with key "localhost:5000".
dbug: SaasKit.Multitenancy.Internal.TenantResolutionMiddleware[0]  
      TenantContext Resolved. Adding to HttpContext.

When we call a different host (localhost:5001), the MemoryCacheTenantResolver again hits the database, and stores the result in the cache.

dbug: SaasKit.Multitenancy.Internal.TenantResolutionMiddleware[0]  
      Resolving TenantContext using CachingAppTenantResolver.
dbug: SaasKit.Multitenancy.MemoryCacheTenantResolver[0]  
      TenantContext not present in cache with key "localhost:5001". Attempting to resolve.
dbug: Npgsql.NpgsqlConnection[3]  
      Opening connection to database 'DbTenantswithSaaskit' on server 'tcp://localhost:5432'.
info: Microsoft.EntityFrameworkCore.Storage.Internal.RelationalCommandBuilderFactory[1]  
      Executed DbCommand (118ms) [Parameters=[@__ToLower_0='?'], CommandType='Text', CommandTimeout='30']
      SELECT "t"."AppTenantId", "t"."Hostname", "t"."Name"
      FROM "AppTenants" AS "t"
      WHERE "t"."Hostname" = @__ToLower_0
      LIMIT 1
dbug: Npgsql.NpgsqlConnection[4]  
      Closing connection to database 'DbTenantswithSaaskit' on server 'tcp://localhost:5432'.
dbug: SaasKit.Multitenancy.MemoryCacheTenantResolver[0]  
      TenantContext:3915c2b9-8210-47ad-a22c-193e23f2d552 resolved. Caching with keys "localhost:5001".
dbug: SaasKit.Multitenancy.Internal.TenantResolutionMiddleware[0]  
      TenantContext Resolved. Adding to HttpContext.

With our new CachingAppTenantResolver we now have the best of both worlds - we can happily add new tenants to the database and they will be resolved in subsequent requests, but we are not hitting the database for every subsequent request to a known host. Obviously this approach can be extended - as with any sort of caching, we may well need to be able to invalidate certain tenant contexts if for example a tenant is removed. And there is the question of whether failed tenant resolutions should be cached. Again, just things to think about when you come to adding it to your application!

Summary

Multi-tenancy can be a tricky thing to get right, and SaasKit is a great open source project for providing the basics to get up and running. As before, I recommend you check out the project on GitHub and also check out Ben Foster's blog as he has a whole bunch of posts on it. In this post we showed a couple of approaches for caching TenantContexts between requests, to reduce the traffic to the database.

Whether either of these approaches will work for you will depend on your exact use case, but hopefully they will give you a start in the right direction. Thanks to the design of the SaasKit TenantResolutionMiddleware it is easy to just plug in a new ITenantResolver if your requirements change down the line.


Anuraj Parameswaran: How to execute Stored Procedure in EF Core

This post is on using stored procedures in EF Core. The support for stored procedures in EF Core is similar to the earlier versions of EF Code First. In this post I am using the Northwind database for demo purposes, and the database first approach to generate the model classes. First I created three stored procedures: one selects all the rows in the Products table, another takes a parameter, and the third inserts data into the table. Here is the implementation.
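
As a rough sketch of the general approach (assuming a scaffolded context with a Products DbSet; the stored procedure names here are illustrative, not taken from the post), FromSql maps the results of a procedure onto an entity set, while ExecuteSqlCommand runs a procedure that doesn't return entities:

using System.Linq;
using Microsoft.EntityFrameworkCore;

// select all rows via a stored procedure
var products = context.Products
    .FromSql("EXECUTE dbo.GetAllProducts")
    .ToList();

// stored procedure with a parameter
var beverages = context.Products
    .FromSql("EXECUTE dbo.GetProductsByCategory {0}", "Beverages")
    .ToList();

// stored procedure that inserts data
context.Database.ExecuteSqlCommand(
    "EXECUTE dbo.InsertProduct {0}, {1}", "Chai", 18.00m);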


Anuraj Parameswaran: Building a custom dotnet cli tool

This post is about building a custom dotnet CLI tool; using this you can extend the dotnet CLI for various operations like minifying images, scripts, CSS etc. The tools used by the .NET CLI are just console applications, so you can create a .NET Core console application and use it. In this post I am building a tool to optimize images in the web application. I am using the ImageProcessorCore package.
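
As a rough illustration of the mechanism (the tool name and the optimisation logic are placeholders, not the code from the post), a console application named dotnet-imgopt that is available on the PATH can be invoked as "dotnet imgopt":

using System;
using System.IO;

public class Program
{
    public static int Main(string[] args)
    {
        // "dotnet imgopt <folder>" ends up here, with <folder> in args[0]
        var folder = args.Length > 0 ? args[0] : Directory.GetCurrentDirectory();

        foreach (var file in Directory.EnumerateFiles(folder, "*.png", SearchOption.AllDirectories))
        {
            Console.WriteLine($"Optimising {file}...");
            // hand the file off to an image library (e.g. ImageProcessorCore) here
        }

        return 0;
    }
}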


Anuraj Parameswaran: Consuming WCF Services in ASP.NET Core

This post is about consuming WCF Services in ASP.NET Core. With the availability of .NET Core RC2 and ASP.NET Core RC2, Microsoft introduced an update to the WCF Connected Service Preview for ASP.NET 5, a Visual Studio extension tool for generating SOAP service references for clients built on top of WCF for .NET Core RC2. To consume a WCF service, first you need to install the WCF Connected Service extension, which can be downloaded and installed using the Extensions and Updates feature from Tools, or you can download it from the Visual Studio Gallery. Please make sure you're installing the required prerequisites, otherwise it may not install successfully. Once the installation has completed successfully, you can create a new ASP.NET project and consume the service. This tool retrieves metadata from a WCF service in the current solution, locally or on a network, and generates a .NET Core 1.0.0 compatible source code file for a WCF client proxy that you can use to access the service.


Anuraj Parameswaran: Slack Authentication with ASP.NET Core

This post is about implementing authentication with Slack. Similar to LinkedIn or GitHub, Slack also supports the OAuth 2 protocol for authentication. In this post, for authenticating a user against Slack, the generic OAuth middleware is used. To use the OAuth middleware you require a few details about the OAuth provider.
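
As a rough sketch of what that configuration might look like in an ASP.NET Core 1.x Startup.Configure method (the configuration keys, callback path and scope are illustrative; the endpoints are Slack's documented OAuth endpoints):

// cookie middleware holds the signed-in session, the generic OAuth middleware talks to Slack
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationScheme = "Cookies",
    AutomaticAuthenticate = true
});

app.UseOAuthAuthentication(new OAuthOptions
{
    AuthenticationScheme = "Slack",
    SignInScheme = "Cookies",
    ClientId = Configuration["Slack:ClientId"],
    ClientSecret = Configuration["Slack:ClientSecret"],
    CallbackPath = new PathString("/signin-slack"),
    AuthorizationEndpoint = "https://slack.com/oauth/authorize",
    TokenEndpoint = "https://slack.com/api/oauth.access",
    Scope = { "identity.basic" }
});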


Damien Bowden: Import, Export ASP.NET Core localized data as CSV

This article shows how localized data can be imported and exported using Localization.SqlLocalizer. The data is exported as CSV using the Formatter defined in the WebApiContrib.Core.Formatter.Csv package. The data can be imported using a file upload.

This makes it possible to export the applications localized data to a CSV format. A translation company can then translate the data, and it can be imported back into the application.

Code: https://github.com/damienbod/AspNet5Localization

The two required packages are added to the project.json file. The Localization.SqlLocalizer package is used for the ASP.NET Core localization. The WebApiContrib.Core.Formatter.Csv package defines the CSV InputFormatter and OutputFormatter.

"Localization.SqlLocalizer": "1.0.3",
"WebApiContrib.Core.Formatter.Csv": "1.0.0"

The packages are then added in the Startup class. The DbContext LocalizationModelContext is added, as well as the ASP.NET Core localization middleware. The InputFormatter and the OutputFormatter are also added to the MVC service.

using System;
using System.Collections.Generic;
using System.Globalization;
using Localization.SqlLocalizer.DbStringLocalizer;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Localization;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
using Microsoft.Net.Http.Headers;
using WebApiContrib.Core.Formatter.Csv;

namespace ImportExportLocalization
{
    public class Startup
    {
        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        public IConfigurationRoot Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            var sqlConnectionString = Configuration["DbStringLocalizer:ConnectionString"];

            services.AddDbContext<LocalizationModelContext>(options =>
                options.UseSqlite(
                    sqlConnectionString,
                    b => b.MigrationsAssembly("ImportExportLocalization")
                )
            );

            // Requires that LocalizationModelContext is defined
            services.AddSqlLocalization(options => options.UseTypeFullNames = true);

            services.AddMvc()
                .AddViewLocalization()
                .AddDataAnnotationsLocalization();

            services.Configure<RequestLocalizationOptions>(
                options =>
                {
                    var supportedCultures = new List<CultureInfo>
                        {
                            new CultureInfo("en-US"),
                            new CultureInfo("de-CH"),
                            new CultureInfo("fr-CH"),
                            new CultureInfo("it-CH")
                        };

                    options.DefaultRequestCulture = new RequestCulture(culture: "en-US", uiCulture: "en-US");
                    options.SupportedCultures = supportedCultures;
                    options.SupportedUICultures = supportedCultures;
                });

            var csvFormatterOptions = new CsvFormatterOptions();

            services.AddMvc(options =>
            {
                options.InputFormatters.Add(new CsvInputFormatter(csvFormatterOptions));
                options.OutputFormatters.Add(new CsvOutputFormatter(csvFormatterOptions));
                options.FormatterMappings.SetMediaTypeMappingForFormat("csv", MediaTypeHeaderValue.Parse("text/csv"));
            });

            services.AddScoped<ValidateMimeMultipartContentFilter>();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole(Configuration.GetSection("Logging"));
            loggerFactory.AddDebug();

            var locOptions = app.ApplicationServices.GetService<IOptions<RequestLocalizationOptions>>();
            app.UseRequestLocalization(locOptions.Value);

            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
                app.UseBrowserLink();
            }
            else
            {
                app.UseExceptionHandler("/Home/Error");
            }

            app.UseStaticFiles();

            app.UseMvc(routes =>
            {
                routes.MapRoute(
                    name: "default",
                    template: "{controller=Home}/{action=Index}/{id?}");
            });
        }
    }
}

The ImportExportController makes it possible to download all the localized data as a csv file. This is implemented in the GetDataAsCsv method. This file can then be sent to a translation company. When the updated file is returned, it can be imported using the ImportCsvFileForExistingData method. The method accepts the file and updates the data in the database. It is also possible to add new CSV data, but care has to be taken as the key has to match the configuration of the Localization.SqlLocalizer middleware.

using System;
using System.Collections.Generic;
using System.IO;
using System.Reflection;
using System.Threading.Tasks;
using Localization.SqlLocalizer.DbStringLocalizer;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Net.Http.Headers;
using Newtonsoft.Json;

namespace ImportExportLocalization.Controllers
{
    [Route("api/ImportExport")]
    public class ImportExportController : Controller
    {
        private IStringExtendedLocalizerFactory _stringExtendedLocalizerFactory;

        public ImportExportController(IStringExtendedLocalizerFactory stringExtendedLocalizerFactory)
        {
            _stringExtendedLocalizerFactory = stringExtendedLocalizerFactory;
        }

        // http://localhost:6062/api/ImportExport/localizedData.csv
        [HttpGet]
        [Route("localizedData.csv")]
        [Produces("text/csv")]
        public IActionResult GetDataAsCsv()
        {
            return Ok(_stringExtendedLocalizerFactory.GetLocalizationData());
        }

        [Route("update")]
        [HttpPost]
        [ServiceFilter(typeof(ValidateMimeMultipartContentFilter))]
        public IActionResult ImportCsvFileForExistingData(CsvImportDescription csvImportDescription)
        {
            // TODO validate that data is a csv file.
            var contentTypes = new List<string>();

            if (ModelState.IsValid)
            {
                foreach (var file in csvImportDescription.File)
                {
                    if (file.Length > 0)
                    {
                        var fileName = ContentDispositionHeaderValue.Parse(file.ContentDisposition).FileName.Trim('"');
                        contentTypes.Add(file.ContentType);

                        var inputStream = file.OpenReadStream();
                        var items = readStream(file.OpenReadStream());
                        _stringExtendedLocalizerFactory.UpdatetLocalizationData(items, csvImportDescription.Information);
                    }
                }
            }

            return RedirectToAction("Index", "Home");
        }

        [Route("new")]
        [HttpPost]
        [ServiceFilter(typeof(ValidateMimeMultipartContentFilter))]
        public IActionResult ImportCsvFileForNewData(CsvImportDescription csvImportDescription)
        {
            // TODO validate that data is a csv file.
            var contentTypes = new List<string>();

            if (ModelState.IsValid)
            {
                foreach (var file in csvImportDescription.File)
                {
                    if (file.Length > 0)
                    {
                        var fileName = ContentDispositionHeaderValue.Parse(file.ContentDisposition).FileName.Trim('"');
                        contentTypes.Add(file.ContentType);

                        var inputStream = file.OpenReadStream();
                        var items = readStream(file.OpenReadStream());
                        _stringExtendedLocalizerFactory.AddNewLocalizationData(items, csvImportDescription.Information);
                    }
                }
            }

            return RedirectToAction("Index", "Home");
        }

        private List<LocalizationRecord> readStream(Stream stream)
        {
            bool skipFirstLine = true;
            string csvDelimiter = ";";

            List<LocalizationRecord> list = new List<LocalizationRecord>();
            var reader = new StreamReader(stream);


            while (!reader.EndOfStream)
            {
                var line = reader.ReadLine();
                var values = line.Split(csvDelimiter.ToCharArray());
                if (skipFirstLine)
                {
                    skipFirstLine = false;
                }
                else
                {
                    var itemTypeInGeneric = list.GetType().GetTypeInfo().GenericTypeArguments[0];
                    var item = new LocalizationRecord();
                    var properties = item.GetType().GetProperties();
                    for (int i = 0; i < values.Length; i++)
                    {
                        properties[i].SetValue(item, Convert.ChangeType(values[i], properties[i].PropertyType), null);
                    }

                    list.Add(item);
                }

            }

            return list;
        }
    }
}

The index razor view has a download link and also 2 upload buttons to manage the localization data.


<fieldset>
    <legend style="padding-top: 10px; padding-bottom: 10px;">Download existing translations</legend>

    <a href="http://localhost:6062/api/ImportExport/localizedData.csv" target="_blank">localizedData.csv</a>

</fieldset>

<hr />

<div>
    <form enctype="multipart/form-data" method="post" action="http://localhost:6062/api/ImportExport/update" id="ajaxUploadForm" novalidate="novalidate">
        <fieldset>
            <legend style="padding-top: 10px; padding-bottom: 10px;">Upload existing CSV data</legend>

            <div class="col-xs-12" style="padding: 10px;">
                <div class="col-xs-4">
                    <label>Upload Information</label>
                </div>
                <div class="col-xs-7">
                    <textarea rows="2" placeholder="Information" class="form-control" name="information" id="information"></textarea>
                </div>
            </div>

            <div class="col-xs-12" style="padding: 10px;">
                <div class="col-xs-4">
                    <label>Upload CSV data</label>
                </div>
                <div class="col-xs-7">
                    <input type="file" name="file" id="fileInput">
                </div>
            </div>

            <div class="col-xs-12" style="padding: 10px;">
                <div class="col-xs-4">
                    <input type="submit" value="Upload Updated Data" id="ajaxUploadButton" class="btn">
                </div>
                <div class="col-xs-7">

                </div>
            </div>

        </fieldset>
    </form>
</div>

<div>
    <form enctype="multipart/form-data" method="post" action="http://localhost:6062/api/ImportExport/new" id="ajaxUploadForm" novalidate="novalidate">
        <fieldset>
            <legend style="padding-top: 10px; padding-bottom: 10px;">Upload new CSV data</legend>

            <div class="col-xs-12" style="padding: 10px;">
                <div class="col-xs-4">
                    <label>Upload Information</label>
                </div>
                <div class="col-xs-7">
                    <textarea rows="2" placeholder="Information" class="form-control" name="information" id="information"></textarea>
                </div>
            </div>

            <div class="col-xs-12" style="padding: 10px;">
                <div class="col-xs-4">
                    <label>Upload CSV data</label>
                </div>
                <div class="col-xs-7">
                    <input type="file" name="file" id="fileInput">
                </div>
            </div>

            <div class="col-xs-12" style="padding: 10px;">
                <div class="col-xs-4">
                    <input type="submit" value="Upload New Data" id="ajaxUploadButton" class="btn">
                </div>
                <div class="col-xs-7">

                </div>
            </div>

        </fieldset>
    </form>
</div>

The data can then be managed as required.

localizedDataCsvImportExport_01

The IStringExtendedLocalizerFactory offers all the import export functionality which is supported by Localization.SqlLocalizer. If anything else is required, please create an issue or use the source code and extend it yourself.

public interface IStringExtendedLocalizerFactory : IStringLocalizerFactory
{
	void ResetCache();

	void ResetCache(Type resourceSource);

	IList GetImportHistory();

	IList GetExportHistory();

	IList GetLocalizationData(string reason = "export");

	IList GetLocalizationData(DateTime from, string culture = null, string reason = "export");

	void UpdatetLocalizationData(List<LocalizationRecord> data, string information);

	void AddNewLocalizationData(List<LocalizationRecord> data, string information);
}

Links:

https://www.nuget.org/packages/Localization.SqlLocalizer/

https://www.nuget.org/packages/WebApiContrib.Core.Formatter.Csv/



Andrew Lock: Loading tenants from the database with SaasKit in ASP.NET Core


Building a multi-tenant application can be a difficult thing to get right - it's normally critical that there is no leakage between tenants, where one tenant sees details from another. In the previous version of ASP.NET this problem was complicated by the multiple extension points you needed to hook into to inject your custom behaviour.

With the advent of ASP.NET Core and the concept of the 'Middleware pipeline', modelled after the OWIN interface, this process becomes a little easier. The excellent open source project SaasKit, by Ben Foster, makes adding multi-tenancy to your application a breeze. He has a number of posts on building multi-tenant applications on his blog, which I recommend checking out. In particular, his post here gave me the inspiration to try out SaasKit, and write this post.

In his post, Ben describes how to add middleware to your application to resolve the tenant for a given hostname from a list provided in appsettings.json. As an extension to this, rather than having a fixed set of tenants loaded at start up, I wanted to be able to resolve the tenants at runtime from a database.

In this post I'll show how to add multi-tenancy to an ASP.NET Core application where the tenant mapping is stored in a database. I'll be using the cross-platform PostgreSQL database (see my previous post for configuring PostgreSQL on OS X), but you can easily use a different database provider.

The Setup

First create a new ASP.NET Core application. We are going to be loading our tenants from the database using Entity Framework Core, so you will need to add a database provider and your connection string.

Once your database is all configured, we will create our AppTenant entity. Tenants can be split in multiple ways, basically based on any consistent property of a request (e.g. hostname, headers etc). We will be using a hostname per tenant, so our AppTenant class looks like this:

namespace DatabaseMultiTenancyWithSaasKit.Models  
{
    public class AppTenant
    {
        public int AppTenantId { get; set; }
        public string Name { get; set; }
        public string Hostname { get; set; }
    }
}

We can add a migration and update our database with our new entity using the Entity Framework tools:

$ dotnet ef migrations add AddAppTenantEntity
$ dotnet ef database update

Resolving Tenants

We are using SaasKit to simplify our tenant handling so we will need to add SaasKit.Multitenancy to our project.json:

{
  "dependencies": {
    ...
    "SaasKit.Multitenancy": "1.1.4",
    ...
  },
}

Next we can add an implementation of an ITenantResolver<AppTenant>. This class will be used to identify the associated tenant for a given request. If a tenant is found it returns a TenantContext<AppTenant>, if no tenant can be resolved it returns null.

using System.Linq;  
using System.Threading.Tasks;  
using DatabaseMultiTenancyWithSaasKit.Models;  
using Microsoft.AspNetCore.Http;  
using SaasKit.Multitenancy;

namespace DatabaseMultiTenancyWithSaasKit.Services  
{
    public class AppTenantResolver : ITenantResolver<AppTenant>
    {
        private readonly ApplicationDbContext _dbContext;

        public AppTenantResolver(ApplicationDbContext dbContext)
        {
            _dbContext = dbContext;
        }

        public Task<TenantContext<AppTenant>> ResolveAsync(HttpContext context)
        {
            TenantContext<AppTenant> tenantContext = null;
            var hostName = context.Request.Host.Value.ToLower();

            var tenant = _dbContext.AppTenants.FirstOrDefault(
                t => t.Hostname.Equals(hostName));

            if (tenant != null)
            {
                tenantContext = new TenantContext<AppTenant>(tenant);
            }

            return Task.FromResult(tenantContext);
        }
    }
}

In this implementation we just find the first AppTenant in the database with the provided HostName - you can obviously match on any parameter here depending on your AppTenant definition.
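
For example, a rough sketch of resolving by a custom request header instead of the hostname (the X-Tenant header name is made up for illustration) only changes the lookup:

// resolve the tenant from a custom header rather than the hostname
var tenantName = context.Request.Headers["X-Tenant"].ToString().ToLower();

var tenant = _dbContext.AppTenants.FirstOrDefault(
    t => t.Name.ToLower() == tenantName);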

Configuring the services and Middleware

Now we have defined our AppTenant and a way of resolving a tenant from the database, we just need to wire this all up into our application.

As with most ASP.NET Core components we need to register the dependent services and the middleware in our Startup class. First we configure our tenant class and resolver with the AddMultitenancy extension method on the IServiceCollection:

public void ConfigureServices(IServiceCollection services)  
{
    //Add other services e.g. MVC, connection string, IOptions<T> etc
    services.AddMultitenancy<AppTenant, AppTenantResolver>();
}

Finally we setup our middleware to resolve our tenant. The order of middleware components is important - we add the SaasKit middleware early in the pipeline, just after the static file middleware.

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)  
{
    // if a static file is requested, serve that without needing to resolve a tenant from the db first.
    app.UseStaticFiles();
    app.UseMultitenancy<AppTenant>();
    // other middleware
}

Setup app to listen on multiple urls

In order to have a multi-tenant app based on hostname, we need to update our app to actually listen on multiple urls. How to do this will depend on how you are hosting your app.

For now I will configure Kestrel to listen on three urls by specifying them directly in our WebHostBuilder. In production you would definitely want to configure this using a different approach so the urls are not hard coded - otherwise we will not be getting any benefit of storing tenants in the database!

public class Program  
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseUrls(
                "http://localhost:5000",
                "http://localhost:5001",
                "http://localhost:5002")
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}

We also need to add the tenants to the database. As I'm using PostgreSQL, I did this from the command line using psql, inserting a new AppTenant into the DbTenantswithSaaskit database:

$ psql -d DbTenantswithSaaskit -c "INSERT INTO \"AppTenants\"(\"AppTenantId\", \"Hostname\", \"Name\") Values(1, 'localhost:5000', 'First Tenant')"

Once they are all added, we have 3 tenants in our database:

$ psql -d DbTenantswithSaaskit -c "SELECT * FROM \"AppTenants\""

 AppTenantId |    Hostname    |     Name      
-------------+----------------+---------------
           1 | localhost:5000 | First Tenant
           2 | localhost:5001 | Second Tenant
           3 | localhost:5002 | Third Tenant
(3 rows)

If we run the app now, the AppTenant is resolved from the database based on the current hostname. However currently we aren't actually using the tenant anywhere so running the app just gives us the default view no matter which url we hit:

Loading tenants from the database with SaasKit in ASP.NET Core

Injecting the current tenant

To prove that our AppTenant has been resolved, we will inject it into _Layout.cshtml and use it to change the title in the navigation bar.

The ability to inject arbitrary services directly into view pages is new in ASP.NET Core and can be useful for injecting view specific services. An AppTenant is not view-specific, so it is more likely to be required in the Controller rather than the View, however view injection is a useful mechanism for demonstration here.

First we add the @inject statement to the top of _Layout.cshtml:

@inject DatabaseMultiTenancyWithSaasKit.Models.AppTenant Tenant;

We now effectively have a property Tenant we can reference later in the layout page:

<a asp-area="" asp-controller="Home" asp-action="Index" class="navbar-brand">@Tenant.Name</a>  

Now when we navigate to the various registered urls, we can see the current AppTenant has been loaded from the database, and its Name displayed in the navigation bar:

Loading tenants from the database with SaasKit in ASP.NET Core

Loading tenants from the database with SaasKit in ASP.NET Core

Loading tenants from the database with SaasKit in ASP.NET Core

Summary

One of the first steps in any multi-tenant application is identifying which tenant the current request is related to. In this post we used the open source SaasKit to resolve tenants based on the current request hostname. We designed the resolver to load the tenants from a database, so that we could dynamically add new tenants at runtime. We then used service injection to show that the AppTenant object representing our current tenant can be sourced from the dependency injection container. The source code for the above example can be found here.

If you are interested in building multi-tenancy apps with SaasKit I highly recommend you check out Ben Foster's blog at benfoster.io for more great examples.


Dominick Baier: .NET Core 1.0 is released, but where is IdentityServer?

In short: we are working on it.

Migrating the code from Katana to ASP.NET Core was actually mostly mechanical. But obviously new approaches and patterns have been introduced which might, or might not align directly with how we used to do things in IdentityServer3.

We also wanted to take the time to do some re-work and re-thinking, as well as doing some breaking changes that we couldn’t easily do before.

For a roadmap – in essence we will release a beta including the new UI interaction next week. Then we will have an RC by August and an RTM before the final ASP.NET/.NET Core tooling ships later this year.

Meanwhile we encourage you to try the current bits and give us feedback. The more the better.

Stay tuned.


Filed under: ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI


Anuraj Parameswaran: Entity Framework Core Scaffold DbContext from Existing Database

This post is about reverse engineering model classes from an existing database using Entity Framework Core. This is more useful in Database First scenarios than in Code First scenarios. In order to scaffold a DbContext from an existing database, you first have to set up the project.json file. You need to add a reference to the Entity Framework tools in the tools section of the project.json file. For this post I am generating the DbContext and model classes from a Sqlite database, so I am using the EF Sqlite references as well.
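
The scaffolding itself then comes down to a single command; as a sketch (the connection string and database file name are illustrative), with the tools referenced you can run:

$ dotnet ef dbcontext scaffold "Data Source=northwind.db" Microsoft.EntityFrameworkCore.Sqlite

This generates a DbContext and a model class per table in the project.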


Damien Bowden: Injecting Configurations in Razor Views in ASP.NET Core

This article shows how application configurations can be injected and used directly in razor views in an ASP.NET Core MVC application. This is useful when an SPA requires application URLs which are different with each deployment and need to be deployed as configurations in a json or xml file.

Code: https://github.com/damienbod/AspNetCoreInjectConfigurationRazor

The required configuration properties are added to the ApplicationConfigurations configuration section in the appsettings.json file.

{
    "Logging": {
        "IncludeScopes": false,
        "LogLevel": {
            "Default": "Debug",
            "System": "Information",
            "Microsoft": "Information"
        }
    },
    "ApplicationConfigurations": {
        "ApplicationHostUrl": "https://damienbod.com/",
        "RestServiceTwo": "http://someapp/api/"
    }
}

An ApplicationConfigurations class is created which will be used to read the configuration properties.

namespace AspNetCoreInjectConfigurationRazor.Configuration
{
    public class ApplicationConfigurations
    {
        public string ApplicationHostUrl { get; set; }

        public string RestServiceTwo { get; set; }
    }
}

In the Startup class, the appsettings.json file is loaded into the IConfigurationRoot in the constructor.

public Startup(IHostingEnvironment env)
{
	var builder = new ConfigurationBuilder()
		.SetBasePath(env.ContentRootPath)
		.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
		.AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
		.AddEnvironmentVariables();
	Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }

The ApplicationConfigurations section is then added to the default ASP.NET Core IoC in the ConfigureServices method in the startup class.

public void ConfigureServices(IServiceCollection services)
{
	services.Configure<ApplicationConfigurations>(
          Configuration.GetSection("ApplicationConfigurations"));

	services.AddMvc();
}

The IOptions object is injected directly into the cshtml razor view using the @inject directive. The values can then be used, for example, added to hidden HTML inputs, which can then be read from any JavaScript framework.

@using Microsoft.Extensions.Options;
@using AspNetCoreInjectConfigurationRazor.Configuration;

@inject IOptions<ApplicationConfigurations> OptionsApplicationConfiguration

@{
    ViewData["Title"] = "Home Page";
}

<h2>Injected properties direct from configuration file:</h2>
<ol>
    <li>@OptionsApplicationConfiguration.Value.ApplicationHostUrl</li>
    <li>@OptionsApplicationConfiguration.Value.RestServiceTwo</li>
</ol>

@*Could be used in an SPA app using hidden inputs*@
<input id="ApplicationHostUrl" 
       name="ApplicationHostUrl" 
       type="hidden" 
       value="@OptionsApplicationConfiguration.Value.ApplicationHostUrl"/>
<input id="RestServiceTwo" 
       name="id="RestServiceTwo" 
       type="hidden" 
       value="@OptionsApplicationConfiguration.Value.RestServiceTwo" />

When the application is started, the view is returned with the configuration properties from the json file, and the hidden inputs can be viewed using the browser's F12 developer tools.

AspNetCoreInjectConfigurationRazor_01

Links:

https://docs.asp.net/en/latest/mvc/views/dependency-injection.html



Andrew Lock: Adding EF Core and PostgreSQL to an ASP.NET Core project on OS X


One of the great selling points of the .NET Core framework is its cross-platform credentials. Similar to most .NET developers I imagine, the vast majority of my development time has been on Windows. As a Mac user, however, I have been experimenting with creating ASP.NET applications directly in OS X.

In this post I'll describe the process of installing PostgreSQL, adding Entity Framework (EF) Core to your application, building a model, and running migrations to create your database. You can find the source code for the final application on GitHub.

Prerequisites

There are a number of setup steps I'm going to assume here in order to keep the post to a sensible length.

  1. Install the .NET Core SDK for OS X from dot.net. This will also encourage you to install Homebrew in order to update your openssl installation. I recommend you do this as we'll be using Homebrew again later.
  2. Install Visual Studio Code. This great cross platform editor is practically a requirement when doing .NET development where Visual Studio isn't available. You should also install the C# extension.
  3. (Optional) Install Yeoman ASP.NET templates (and npm). Although not required, installing Yeoman and the .NET Core templates can get you up and running with a new web application faster. Yeoman uses npm and can be directly integrated into VS Code using an extension.
  4. Create a new application. In this post I have created a basic MVC application without any authentication/identity or entity framework models in it.

Installing PostgreSQL

There are a number of Database Providers you can use with Entity Framework core today, with more on the way. I chose to go with PostgreSQL as it's a mature, cross-platform database (and I want to play with Marten later!)

The easiest way to install PostgreSQL on OS X is to use Homebrew. Hopefully you already have it installed as part of installing the .NET Core SDK. If you'd rather use a graphical installer there are a number of possibilities listed on their downloads page.

Running the following command will download and install PostgreSQL along with any dependencies.

$ brew install postgresql

Assuming all goes well, the database manager should be installed. You have a couple of options for running it; you can either run the database on demand in the foreground of a terminal tab, or you can have it run automatically on restart as a background service. To run as a service, use:

$ brew services start postgresql

I chose to run in the foreground, as I'm just using it for experimental development at the moment. You can do so with:

$ postgres -D /usr/local/var/postgres

In order to use Entity Framework migrations, you need a user with the createdb permission. When PostgreSQL is installed, a new super-user role should be created automatically with your current user's login details. We can check this by querying the pg_roles table.

To run queries against the database you can use the psql interactive terminal. In a new tab, run the following command to view the existing roles, which should show your username and that you have the createdb permission.

$ psql postgres -c "SELECT rolname, rolcreatedb::text FROM pg_roles"

 rolname | rolcreatedb 
---------+-------------
 Sock    | true
(1 row)

Installing EF Core into your project

Now we have PostgreSQL installed, we can go about adding Entity Framework Core to our ASP.NET Core application. First we need to install the required libraries into our project.json. The only NuGet package directly required to use PostgreSQL is the Npgsql provider, but we need the additional EF Core libraries in order to run migrations against the database. Note that the Tools library should go in the tools section of your project.json, while the others should go in the dependencies section.

{
  "dependencies": {
    "Npgsql.EntityFrameworkCore.PostgreSQL": "1.0.0",
    "Microsoft.EntityFrameworkCore.Design": "1.0.0-preview2-final"
  },

  "tools": {
    "Microsoft.EntityFrameworkCore.Tools": {
      "version": "1.0.0-preview2-final",
      "imports": "portable-net45+win8+dnxcore50"
    }
  }
}

Migrations allow us to use a code-first approach to creating the database. This means we can create our models in code, generate a migration, and run that against PostgreSQL. The migrations will then create the database if it doesn't already exist and update the tables to match our model as required.

The first step is to create our entity models and db context. In this post I am using a simple model consisting of an Author which may have many Articles.

using System.Collections.Generic;
using Microsoft.EntityFrameworkCore;

public class Author  
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public List<Article> Articles { get; set; } = new List<Article>();
}

public class Article  
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Url { get; set; }
    public string Body { get; set; }

    public int AuthorId { get; set; }
    public Author Author { get; set; }
}

public class ArticleContext : DbContext  
{
    public ArticleContext(DbContextOptions<ArticleContext> options)
        : base(options)
    { }

    public DbSet<Article> Articles { get; set; }
    public DbSet<Author> Authors { get; set; }
}

Our entity models are just simple POCO objects, which use the default conventions for the Primary Key and relationships. The DbContext can be used to customise your model, and to expose the DbSet<T>s which are used to query the database.
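
As a small illustrative aside (this customisation isn't needed for the example to work), the fluent API in OnModelCreating can override those conventions, for instance making the article title required:

public class ArticleContext : DbContext
{
    public ArticleContext(DbContextOptions<ArticleContext> options)
        : base(options)
    { }

    public DbSet<Article> Articles { get; set; }
    public DbSet<Author> Authors { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // override the default conventions: Title is required and capped at 200 characters
        modelBuilder.Entity<Article>()
            .Property(a => a.Title)
            .IsRequired()
            .HasMaxLength(200);
    }
}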

Now our model is designed, we need to set up our app to use the ArticleContext and our database. Add a section in your appsettings.json, or use any other configuration method to set up a connection string. The connection string should contain the name of the database to be created, in this case, DemoArticlesApp. The username and password will be your local OS X account's details.

{
  "DbContextSettings" :{
    "ConnectionString" : "User ID=Sock;Password=password;Host=localhost;Port=5432;Database=DemoArticlesApp;Pooling=true;"
  }
}

Finally, update the ConfigureServices method of your Startup class to inject your ArticleContext when requested, and to use the connection string specified in your configuration.

public void ConfigureServices(IServiceCollection services)  
{
    // Add framework services.
    services.AddMvc();

    var connectionString = Configuration["DbContextSettings:ConnectionString"];
    services.AddDbContext<ArticleContext>(
        opts => opts.UseNpgsql(connectionString)
    );
}

Generate Migrations using EF Core Tools

Now we have our instance of PostgreSQL started and our models built, we can use the EF Core tools to scaffold our migrations and update our database! When using Visual Studio, you would typically run entity framework migration code from the Package Manager Console, and that is still possible. However now we have the dotnet CLI we are also able to hook into the command support and run our migrations directly on the command line.

Note, before running these commands you must make sure you are in the root of your project, i.e. the same folder that contains your project.json.

We add our first migration and give it a descriptive name, InitialMigration using the ef migrations add command:

$ dotnet ef migrations add InitialMigration

Project adding-ef-core-on-osx (.NETCoreApp,Version=v1.0) will be compiled because inputs were modified  
Compiling adding-ef-core-on-osx for .NETCoreApp,Version=v1.0  
Compilation succeeded.  
    0 Warning(s)
    0 Error(s)
Time elapsed 00:00:01.5865396

Done. To undo this action, use 'dotnet ef migrations remove'  

This first builds your project, and then generates the migration files. As this is the first migration, it will also create the Migrations folder in your project, and add the new migrations to it.

Adding EF Core and PostgreSQL to an ASP.NET Core project on OS X

You are free to look at the scaffolding code that was generated to make sure you are happy with what will be executed on the database. If you want to change something, you can remove the last migration with the command:

$ dotnet ef migrations remove

You can then fix any bugs, and add the initial migration again.

The previous step generated the code necessary to create our migrations, but it didn't touch the database itself. We can apply the generated migration to the database using the command ef database update:

$ dotnet ef database update 

Project adding-ef-core-on-osx (.NETCoreApp,Version=v1.0) will be compiled because Input items added from last build  
Compiling adding-ef-core-on-osx for .NETCoreApp,Version=v1.0  
Compilation succeeded.  
    0 Warning(s)
    0 Error(s)
Time elapsed 00:00:01.9422901

Done.  

All done! Our database has been created (as it didn't previously exist) and the tables for our entities have been created. To prove it for yourself run the following command, replacing DemoArticlesApp with the database name you specified earlier in your connection string:

$ psql DemoArticlesApp -c "SELECT table_name FROM Information_Schema.tables where table_schema='public'"

      table_name       
-----------------------
 __EFMigrationsHistory
 Authors
 Articles
(3 rows)

Here we can see the Authors and Articles tables which correspond to their model equivalents. There is also an __EFMigrationsHistory which is used by Entity Framework core to keep track of which migrations have been applied.

Injecting your DbContext into MVC Controllers

Now we have both our app and our database configured, lets put the two to use. I've created a couple of simple WebApi controllers to allow getting and posting Authors and Articles. To hook this up to the database, we inject an instance of our ArticlesContext to use for querying and updates. Only the AuthorsController is shown below, but the Articles controller is very similar.

using System.Collections.Generic;  
using System.Linq;  
using AddingEFCoreOnOSX.Models;  
using Microsoft.AspNetCore.Mvc;

namespace AddingEFCoreOnOSX.Controllers  
{
    [Route("api/[controller]")]
    public class AuthorsController : Controller
    {
        private readonly ArticleContext _context;
        public AuthorsController(ArticleContext context)
        {
            _context = context;
        }

        // GET: api/authors
        public IEnumerable<Author> Get()
        {
            return _context.Authors.ToList();
        }

        // GET api/authors/5
        [HttpGet("{id}")]
        public Author Get(int id)
        {
            return _context.Authors.FirstOrDefault(x => x.Id == id);
        }

        // POST api/authors
        [HttpPost]
        public IActionResult Post([FromBody]Author value)
        {
            _context.Authors.Add(value);
            _context.SaveChanges();
            return StatusCode(201, value);
        }
    }
}

This is a very simple controller. We can create a new Author by POSTing appropriate data to /api/authors (created using PostMan):

Adding EF Core and PostgreSQL to an ASP.NET Core project on OS X

We can then fetch our list of authors with a GET to /api/authors:

Adding EF Core and PostgreSQL to an ASP.NET Core project on OS X

Similarly, we can create and list a new Article with a POST and GET to /api/articles:

Adding EF Core and PostgreSQL to an ASP.NET Core project on OS X

Adding EF Core and PostgreSQL to an ASP.NET Core project on OS X

Summary

In this post I showed how to install PostgreSQL on OS X. I then built an Entity Framework Core entity model in my project, and added the required DbContext and settings. I used the dotnet CLI to generate migrations for my model and then applied these to the database. Finally, I injected the DbContext into my MVC Controllers to query the newly created database.


Anuraj Parameswaran: Using nuget packages in ASP.NET Core

While developing ASP.NET Core applications you might face some situations where you have the source code with you, but the NuGet package is not available. One example is ImageProcessorCore, where the source code is available but a NuGet package is not. If you want to use this library in your project, you first need to create a package out of it and host it locally.
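
As a sketch of the general approach (the output folder and feed name are illustrative), you can pack the library with the dotnet CLI:

$ dotnet pack --configuration Release --output ./local-packages

The resulting folder can then be registered as a package source, for example in a NuGet.config alongside your solution, so the package restores like any other:

<configuration>
  <packageSources>
    <add key="local" value="./local-packages" />
  </packageSources>
</configuration>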


Dominick Baier: Update for authentication & API access for native applications and IdentityModel.OidcClient

The most relevant spec for authentication and API access for native apps has been recently updated.

If you are “that kind of person” that enjoys looking at diffs of pre-release RFCs – you would have spotted a new way of dealing with the system browser for desktop operating systems (e.g. Windows or MacOS).

Quoting section 7.3:

“More applicable to desktop operating systems, some environments allow apps to create a local HTTP listener on a random port, and receive URI redirects that way.  This is an acceptable redirect URI choice for native apps on compatible platforms.”

IOW – your application launches a local “web server”, starts the system browser with a local redirect URI and waits for the response to come back (either a code or an error). This is much easier than trying to fiddle with custom URL monikers and such on desktop operating systems.

William Denniss – one of the authors of the above spec and the corresponding reference implementations – also created a couple of samples that show the usage of that technique for Windows desktop apps.

Inspired by that, I created a sample showing how to do OpenID Connect authentication from a console application using IdentityModel.OidcClient.

In a nutshell – it works like this:

Open a local listener

// create a redirect URI using an available port on the loopback address.
string redirectUri = string.Format("http://127.0.0.1:7890/");
Console.WriteLine("redirect URI: " + redirectUri);
 
// create an HttpListener to listen for requests on that redirect URI.
var http = new HttpListener();
http.Prefixes.Add(redirectUri);
Console.WriteLine("Listening..");
http.Start();

 

Construct the start URL, open the system browser and wait for a response

var options = new OidcClientOptions(
    "https://demo.identityserver.io",
    "native.code",
    "secret",
    "openid profile api",
    redirectUri);
options.Style = OidcClientOptions.AuthenticationStyle.AuthorizationCode;
 
var client = new OidcClient(options);
var state = await client.PrepareLoginAsync();
 
Console.WriteLine($"Start URL: {state.StartUrl}");
            
// open system browser to start authentication
Process.Start(state.StartUrl);
 
// wait for the authorization response.
var context = await http.GetContextAsync();

 

Process the response and access the claims and tokens

var result = await client.ValidateResponseAsync(context.Request.Url.AbsoluteUri, state);
 
if (result.Success)
{
    Console.WriteLine("\n\nClaims:");
    foreach (var claim in result.Claims)
    {
        Console.WriteLine("{0}: {1}", claim.Type, claim.Value);
    }
 
    Console.WriteLine();
    Console.WriteLine("Access token:\n{0}", result.AccessToken);
 
    if (!string.IsNullOrWhiteSpace(result.RefreshToken))
    {
        Console.WriteLine("Refresh token:\n{0}", result.RefreshToken);
    }
}
else
{
    Console.WriteLine("\n\nError:\n{0}", result.Error);
}
 
http.Stop();

 

Sample can be found here – have fun ;)

 

 


Filed under: IdentityModel, OAuth, OpenID Connect, WebAPI


Andrew Lock: How to configure urls for Kestrel, WebListener and IIS express in ASP.NET Core


In this post I describe how to configure the urls your application binds to when using the Kestrel or WebListener HTTP servers that come with ASP.NET Core. I'll also cover how to set the url when you are developing locally with Visual Studio using IIS Express, and how this relates to Kestrel and WebListener.

Background

In ASP.NET Core the hosting model has completely changed from ASP.NET 4.x. Previously your application was inextricably bound to IIS and System.Web, but in ASP.NET Core, your application is essentially just a console app. You then create and configure your own lightweight HTTP server within your application itself. This provides a larger range of hosting options than just hosting in IIS - in particular self-hosting in your own process.

ASP.NET Core comes with two HTTP servers which you can plug straight in out of the box. If you have been following the development of ASP.NET Core at all, you will no doubt have heard of Kestrel, the new high performance, cross-platform web server built specifically for ASP.NET Core.

The other server is WebListener. Kestrel gets all the attention, and probably rightly so given its performance and cross-platform credentials, but WebListener is actually more fully featured, in particular regarding platform-specific features such as Windows Authentication.

While you can directly self-host your ASP.NET Core applications using Kestrel or WebListener, that's generally not the recommended approach when you come to deploy your application. According to the documentation:

If you intend to deploy your application on a Windows server, you should run IIS as a reverse proxy server that manages and proxies requests to Kestrel. If deploying on Linux, you should run a comparable reverse proxy server such as Apache or Nginx to proxy requests to Kestrel.

For self-hosting scenarios, such as running in Service Fabric, we recommend using Kestrel without IIS. However, if you require Windows Authentication in a self-hosting scenario, you should choose WebListener.

Using a reverse-proxy generally brings a whole raft of advantages. IIS can for example handle restarting your app if it crashes, it can manage the SSL layer and certificates for you, it can filter requests, as well as handle hosting multiple applications on the same server.

Configuring Urls in Kestrel and WebListener

Now we have some background on where Kestrel and WebListener fit, we'll dive into the mechanics of configuring the servers to listen at the correct urls. To be clear, we are talking about setting the url to which our application is bound e.g. http://localhost:5000, or http://myfancydomain:54321, which you would navigate to in your browser to view your app.

There is an excellent post from Ben Foster which shows the most common ways to configure Urls when you are using Kestrel as the web server. It's worth noting that all these techniques apply to WebListener too.

These methods assume you are working with an ASP.NET Core application created using one of the standard templates. Once created, there will be a file Program.cs which contains the static void main() entry point for your application:

public class Program  
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseKestrel()
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}

It is the WebHostBuilder class which configures your application and HTTP server (in the example, Kestrel), and starts your app listening for requests with host.Run().
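
As an aside, swapping the server is just a case of changing that one call. A rough sketch using WebListener instead (assuming the Microsoft.AspNetCore.Server.WebListener package is referenced) looks like this, and the url configuration techniques below apply to it in exactly the same way:

var host = new WebHostBuilder()
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseWebListener()   // host on WebListener rather than Kestrel
    .UseStartup<Startup>()
    .Build();

host.Run();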

UseUrls()

The first, and easiest, option to specify the binding URLs is to hard code them into the WebHostBuilder using UseUrls():

var host = new WebHostBuilder()  
    .UseKestrel()
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseUrls("http://localhost:5100", "http://localhost:5101", http://*:5102)
    .UseIISIntegration()
    .UseStartup<Startup>()
    .Build();

Simply adding this one line will allow you to call your application at any of the provided urls, and even at the wildcard host. However, hard-coding the urls never feels like a particularly clean or extensible solution. Luckily, you can also load the urls from an external configuration file, from environment variables, from command line arguments, or any source supported by the Configuration system.

External file - hosting.json

To load from a file, create hosting.json in the root of your project, and set the server.urls key as appropriate, separating each url with a semicolon. You can actually use any name now; the name hosting.json is no longer assumed, but it's probably best to continue to use it by convention.

{
  "server.urls": "http://localhost:5100;http://localhost:5101;http://*:5102"
}

Update your WebHostBuilder to load hosting.json as part of the initial configuration. It's important to set the base path so that the ConfigurationBuilder knows where to look for your hosting.json file.

var config = new ConfigurationBuilder()  
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("hosting.json", optional: true)
    .Build();

var host = new WebHostBuilder()  
    .UseConfiguration(config)
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseKestrel()
    .UseIISIntegration()
    .UseStartup<Startup>()
    .Build();

Note that the ConfigurationBuilder we use here is distinct from the ConfigurationBuilder typically used to read appsettings.json etc as part of your Startup.cs configuration. It is an instance of the same class, but the WebHostBuilder and app configuration are built separately - values from hosting.json will not pollute your appsettings.json configuration.

Command line arguments

As mentioned previously, and as shown in the previous snippet, you can configure your WebHostBuilder using any mechanism available to the ConfigurationBuilder. If you prefer to configure your application using command line arguments, add the Microsoft.Extensions.Configuration.CommandLine package to your project.json and update your ConfigurationBuilder to the following:

var config = new ConfigurationBuilder()  
    .AddCommandLine(args)
    .Build();

You can then specify the urls to use at runtime, again passing in the urls separated by semicolons:

> dotnet run --server.urls "http://localhost:5100;http://localhost:5101;http://*:5102"

Environment Variables

There are a couple of subtleties to using environment variables to configure your WebHostBuilder. The first approach is to just set your own environment variables and load them as usual with the ConfigurationBuilder. For example, using PowerShell you could set the variable "MYVALUES_SERVER.URLS" using:

[Environment]::SetEnvironmentVariable("MYVALUES_SERVER.URLS", "http://localhost:5100")

which can be loaded in our configuration builder using the prefix "MYVALUES_", allowing us to again set the urls at runtime:

var config = new ConfigurationBuilder()  
    .AddEnvironmentVariables("MYVALUES_")
    .Build();

ASPNETCORE_URLS

The other option to be aware of is the special environment variable, "ASPNETCORE_URLS". If this is set, it will overwrite any other values that have been set by UseConfiguration, whether from hosting.json or command line arguments etc. The only way (that I found) to overwrite this value is with an explicit UseUrls() call.
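
If you do find yourself needing to override ASPNETCORE_URLS, a minimal sketch (the port here is illustrative) would be:

var config = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("hosting.json", optional: true)
    .Build();

var host = new WebHostBuilder()
    .UseConfiguration(config)
    // Per the behaviour described above, an explicit UseUrls() call wins over ASPNETCORE_URLS
    .UseUrls("http://localhost:5200")
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseKestrel()
    .UseIISIntegration()
    .UseStartup<Startup>()
    .Build();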

Using the ConfigurationBuilder approach to configuring your server gives the most flexibility at runtime for specifying your urls, so I would definitely encourage you to use this approach any time you find the need to specify your urls. However, you may well find that the need to configure your Kestrel/WebListener urls at all is surprisingly rare…

Configuring IIS Express and Visual Studio

In the previous section I demonstrated how to configure the urls used by your ASP.NET Core hosting server, whether Kestrel or WebListener. While you might directly expose those self-hosted servers in some cases (the docs cite Service Fabric as a possible use case), in most cases you will be reverse-proxied behind IIS on Windows, or Apache/Nginx on Linux. This means that the urls you have configured will not be the actual urls exposed to the outside world.

You can see this effect for yourself if you are developing using Visual Studio and IIS. When you create a new ASP.NET Core project in Visual Studio 2015 (Update 3), a launchSettings.json file is created inside the Properties folder of your project. This file contains settings for when you host your application in IIS:

{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:55862/",
      "sslPort": 0
    }
  },
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "ExampleWebApplication": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}

This file contains two sections, iisSettings and profiles. When running in Visual Studio, the profiles section provides the hooks required to launch and debug your application using F5. In this case we have two profiles; IIS Express, which fairly obviously runs the application using IIS Express; and ExampleWebApplication, the name of the web project, which runs the application using dotnet run.

How to configure urls for Kestrel, WebListener and IIS express in ASP.NET Core

These two profiles run your application in two distinct ways. The project profile, ExampleWebApplication, runs your application as though you had run the application directly using dotnet run from the command line - it even opens the console window that is running the application. In contrast, IIS Express hosts your application, acting as a reverse-proxy in the same way as IIS would in production.

If you were following along with updating your urls, using the methods described in the previous section and running using F5, you may have found that things weren't running as smoothly as expected.

When running using IIS Express or IIS, the urls you configure as part of Kestrel/WebListener are essentially unused - it is the url configured in IIS that is publicly exposed and which you can navigate to in your browser. Therefore when developing locally with IIS Express, you must update the iisSettings:applicationUrl key in launchSettings.json to change the url to which you are bound. You can also update the url in the Debug tab of Properties (Right click your project -> Properties -> Debug).

How to configure urls for Kestrel, WebListener and IIS express in ASP.NET Core

In contrast, when you run the project profile, the urls you configured using the previous section are directly exposed. However, by default the profile opens a browser window - which requires a url - so a default url of http://localhost:5000 is specified in launchSettings.json. If you are running directly on Kestrel/WebListener and are customising your urls, don't forget to update this setting to load the correct url, or set "launchBrowser": false!
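
As a minimal sketch (the port is illustrative), the project profile might end up looking like this once you have customised your urls:

"ExampleWebApplication": {
  "commandName": "Project",
  "launchBrowser": true,
  "launchUrl": "http://localhost:5100",
  "environmentVariables": {
    "ASPNETCORE_ENVIRONMENT": "Development"
  }
}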

Summary

This post gave a brief summary of the changes to the server hosting model in ASP.NET Core. It described the various ways in which you can specify the binding urls when self-hosting your application using the Kestrel or WebListener servers. Finally it described things to look out for related to changing your urls when developing locally using Visual Studio and IIS Express.


Anuraj Parameswaran: Using PostgreSQL with ASP.NET Core

This post is about using PostgreSQL with ASP.NET Core. PostgreSQL is an object-relational database management system (ORDBMS) with an emphasis on extensibility and standards-compliance. Recently in the ASP.NET forums, someone asked about using PostgreSQL with ASP.NET Core. Since I don’t have a local installation available, I thought I would use PostgreSQL as a service from elephantsql.com. They are offering a free tier PostgreSQL database. You can register yourself and create databases. In this post I am using EF migrations for creating the database. I am using an ASP.NET Core Web API project, created with the ASP.NET yo generator. To connect to a PostgreSQL server, you require the “Npgsql.EntityFrameworkCore.PostgreSQL” NuGet package, and for EF migrations you require the “Microsoft.EntityFrameworkCore.Tools” package. The required packages are referenced in the project.json file.
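
As a rough sketch (the version numbers are illustrative, not taken from the original post), the relevant entries look something like this:

{
  "dependencies": {
    "Npgsql.EntityFrameworkCore.PostgreSQL": "1.0.0",
    "Microsoft.EntityFrameworkCore.Tools": "1.0.0-preview2-final"
  },
  "tools": {
    "Microsoft.EntityFrameworkCore.Tools": "1.0.0-preview2-final"
  }
}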


Anuraj Parameswaran: Bundling and Minification in ASP.NET Core

This post is about Bundling and Minification in ASP.NET Core. Bundling and minification are two techniques you can use in ASP.NET to improve page load performance for your web application. Bundling combines multiple files into a single file. Minification performs a variety of different code optimizations to scripts and CSS, which results in smaller payloads. In the ASP.NET Core RTM release, Microsoft introduced the “BundlerMinifier.Core” tool, which will help you bundle and minify JavaScript and style sheet files. Unlike previous versions of MVC, the bundling and minification happens at development time, not at runtime. To use “BundlerMinifier.Core”, you first need to add a reference to BundlerMinifier.Core in the project.json tools section.
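
A minimal sketch of that tools section (the version number is illustrative):

{
  "tools": {
    "BundlerMinifier.Core": "2.0.238"
  }
}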


Andrew Lock: Getting started with StructureMap in ASP.NET Core

Getting started with StructureMap in ASP.NET Core

ASP.NET Core 1.0 was released today, and if you haven't already, I really urge you to check it out and have a play. There is a whole raft of features that make it a very enjoyable development experience compared to the venerable ASP.NET 4.x. Plus, upgrading from RC2 to RTM is a breeze.

Among those features is first-class support for dependency injection (DI) and inversion of control (IoC) containers. While this was possible in ASP.NET 4.x, it often felt like a bit of an afterthought, with many different extensibility points that you had to hook into to get complete control. In this post I will show how to integrate StructureMap into your ASP.NET Core apps to use as your dependency injection container.

Why choose a different container?

With ASP.NET Core, Microsoft have designed the framework to use dependency injection as standard everywhere. ASP.NET Core apps use a minimal feature set container by default, and require you to register all your services for injection throughout your app. Pretty much every Hello World app you see shows how to configure a services container.

This works great for small apps and demos, but as anyone who has built a web app of a reasonable size knows, your container configuration can end up being somewhat of a code smell. Each new class/interface also requires a tweak to your container configuration, in what often feels like a redundant step. If you find yourself writing a lot of code like this:

services.AddTransient<IMyService, MyService>();  

then it may be time to start thinking about a more fully featured IoC container.

There are many possibilities (e.g. Autofac or Ninject) but my personal container of choice is StructureMap. StructureMap strongly emphasises convention over configuration, which helps to minimise a lot of the repeated mappings, allowing you to DRY up your container configuration.

To give a flavour of the benefits this can bring I'll show an example service configuration which includes a variety of different mappings. We'll then update the configuration to use StructureMap which will hopefully make the advantages evident. You can find the code for the project using the built in container here and for the project using StructureMap here.

As an aside, for those of you who may have looked at StructureMap a while back but were discouraged by the lack and/or age of the documentation - you have nothing to fear, it is really awesome now! Plus it's kept permanently in sync with the code base thanks to StoryTeller.

The test project using the built in container

First of all, I'll present the Startup configuration for the example app, using the built in ASP.NET Core container. All of the interfaces and classes are just for example, so they don't have any actual members, but that's not really important in this case. However, the inter-dependencies are such that an instance of every configured class is required when constructing the default ValuesController.

public class Startup  
{
    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables();
        Configuration = builder.Build();
    }

    public IConfigurationRoot Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Add framework services.
        services.AddMvc();

        // Configure the IoC container
        ConfigureIoC(services);
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        loggerFactory.AddConsole(Configuration.GetSection("Logging"));
        loggerFactory.AddDebug();

        app.UseMvc();
    }

    public void ConfigureIoC(IServiceCollection services)
    {
        services.AddTransient<IPurchasingService, PurchasingService>();
        services.AddTransient<ConcreteService, ConcreteService>();
        services.AddTransient<IGamingService, CrosswordService>();
        services.AddTransient<IGamingService, SudokuService>();
        services.AddScoped<IUnitOfWork, UnitOfWork>(provider => new UnitOfWork(priority: 3));

        services.Add(ServiceDescriptor.Transient(typeof(ILeaderboard<>), typeof(Leaderboard<>)));
        services.Add(ServiceDescriptor.Transient(typeof(IValidator<>), typeof(DefaultValidator<>))); 
        services.AddTransient<IValidator<UserModel>, UserModelValidator>();
    }
}

The Configure call and the constructor are just the default from the ASP.NET Core template. In ConfigureServices we first call AddMvc(), as required to register all our MVC services in the container, and then call out to a method ConfigureIoC, which encapsulates configuring all our app-specific services. I'll run through each of these registrations briefly to explain their intent.

services.AddTransient<IPurchasingService, PurchasingService>();  
services.AddTransient<ConcreteService, ConcreteService>();  

These are the simplest registrations - whenever an IPurchasingService is requested, a new PurchasingService should be created and used. Similarly for the specific concrete class ConcreteService, a new instance should be created and used whenever it is requested.

services.AddTransient<IGamingService, CrosswordService>();  
services.AddTransient<IGamingService, SudokuService>();  

These next two calls are registering all of our instances of IGamingService. These will both be injected when we have a constructor that requires IEnumerable<IGamingService> for example.
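
As a quick illustration, a consumer like the following (a hypothetical class, not part of the sample project) would receive both services:

public class GamingServiceConsumer
{
    private readonly IEnumerable<IGamingService> _gamingServices;

    // Both CrosswordService and SudokuService are injected here
    public GamingServiceConsumer(IEnumerable<IGamingService> gamingServices)
    {
        _gamingServices = gamingServices;
    }
}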

services.AddScoped<IUnitOfWork, UnitOfWork>(provider => new UnitOfWork(priority: 3));  

This is our first registration with a different lifetime - in this case, we are explicitly creating a new UnitOfWork for each http request that is made (instead of every time an IUnitOfWork is required, which may be more than once per Http request for Transient lifetimes.)

services.Add(ServiceDescriptor.Transient(typeof(ILeaderboard<>), typeof(Leaderboard<>)));  

This next registration is our first use of generics and allows us to do some pretty handy things using open generics. For example, if I request a type of ILeaderboard<UserModel> in my ValuesController constructor, the container knows that it should inject a concrete type of Leaderboard<UserModel>, without having to somehow register each and every generic mapping.

services.Add(ServiceDescriptor.Transient(typeof(IValidator<>), typeof(DefaultValidator<>)));  
services.AddTransient<IValidator<UserModel>, UserModelValidator>();  

Finally, we have another aspect of generic registrations. We have a slightly different situation here however - we have a specific UserModelValidator that we want to use whenever an IValidator<UserModel> is requested, and an open DefaultValidator<T> that we want to use for every other IValidator<T> request. We can specify the default IValidator<T> cleanly, but we must also explicitly register every specific implementation that we need - a list which will no doubt get longer as our app grows.

With our services all registered what are the key things we note? Well the main thing is that pretty much every concrete service we want to use has to be registered somewhere in the container. Every new class we add will probably result in another line in our Startup file, which could easily get out of hand. Which brings us to…

The test project using StructureMap

In order to use StructureMap in your ASP.NET Core app, you'll need to install the StructureMap.DNX library into your project.json, which as of writing is at version 0.5.1-rc2-final:

{
  "dependencies": {
    "StructureMap.Dnx": "0.5.1-rc2-final"
  }
}

I'll present the Startup configuration for the same app, but this time using StructureMap in place of the built-in container. Those functions which are unchanged are elided for brevity:

public class Startup  
{
    public Startup(IHostingEnvironment env) { /* Unchanged */}

    public IConfigurationRoot Configuration { get; }

    public IServiceProvider ConfigureServices(IServiceCollection services)
    {
        // Add framework services.
        services.AddMvc()
            .AddControllersAsServices();

        return ConfigureIoC(services);
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory) { /* Unchanged */}

    public IServiceProvider ConfigureIoC(IServiceCollection services)
    {
        var container = new Container();

        container.Configure(config =>
        {
            // Register stuff in container, using the StructureMap APIs...
            config.Scan(_ =>
            {
                _.AssemblyContainingType(typeof(Startup));
                _.WithDefaultConventions();
                _.AddAllTypesOf<IGamingService>();
                _.ConnectImplementationsToTypesClosing(typeof(IValidator<>));
            });

            config.For(typeof(IValidator<>)).Add(typeof(DefaultValidator<>));
            config.For(typeof(ILeaderboard<>)).Use(typeof(Leaderboard<>));
            config.For<IUnitOfWork>().Use(_ => new UnitOfWork(3)).ContainerScoped();

            //Populate the container using the service collection
            config.Populate(services);
        });

        return container.GetInstance<IServiceProvider>();

    }
}

On first glance it may seem like there is more configuration, not less! However, this configuration is far more generalised, emphasises convention over explicit class registrations, and will require less modification as the app grows. I'll run through each of the steps again and then we can compare the two approaches.

First of all, you should note that the ConfigureServices (and ConfigureIoC) call returns an instance of an IServiceProvider. This is the extensibility point that allows the built in container to be swapped out for StructureMap, and is implemented by the StructureMap.DNX library.

Within the ConfigureIoC method we create our StructureMap Container - this is the DI container against which mappings are registered - and configure it using a number of different approaches.

config.Scan(_ =>  
            {
                _.AssemblyContainingType(typeof(Startup));
                _.WithDefaultConventions();
                // ...remainder of scan method
            }

The first technique we use is the assembly scanner using Scan. This is one of the most powerful features of StructureMap as it allows you to automatically register classes against interfaces without having to configure them explicitly. In this method we have asked StructureMap to scan our assembly (the assembly containing our Startup class) and to look for candidate classes WithDefaultConventions. The default convention will register concrete classes to interfaces where the names match, for example IMyService and MyService. From personal experience, the number of these cases will inevitably grow with the size of your application, so the ability to have the simple cases like this automatically handled is invaluable.
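
To make the convention concrete, a pair like the following (hypothetical types, echoing the example names above) is mapped automatically without any explicit registration:

public interface IMyService { }

// Registered against IMyService by WithDefaultConventions() because the names match
public class MyService : IMyService { }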

_.AddAllTypesOf<IGamingService>();  

Within the scanner we also automatically register all our implementations of IGamingService using AddAllTypesOf. This will automatically find and register CrosswordService and SudokuService against the IGamingService interface. If later we add WordSearchService as an additional implementation of IGamingService, we don't have to remember to head back to our Startup class and configure it - StructureMap will seamlessly handle it.

_.ConnectImplementationsToTypesClosing(typeof(IValidator<>));  

The final auto-registration calls ConnectImplementationsToTypesClosing. This method looks for any concrete IValidator<T> implementations that close the interface, and registers them. For our app, we just have the one - UserModelValidator. However if you add new ones to your app, an AvatarModelValidator for example, StructureMap will automatically pick them up.
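
To make "closing the interface" concrete, this is roughly what such a validator looks like (members omitted):

// Closes IValidator<T> for T = UserModel, so ConnectImplementationsToTypesClosing
// registers it against IValidator<UserModel> automatically
public class UserModelValidator : IValidator<UserModel>
{
}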

config.For(typeof(IValidator<>)).Add(typeof(DefaultValidator<>));  

For those IValidator<T> without an explicit implementation, we register the open generic DefaultValidator<T>, which will be used when there is no concrete class that closes the generic for T. So requests to IValidator<Something> will be resolved with DefaultValidator<Something>.

config.For(typeof(ILeaderboard<>)).Use(typeof(Leaderboard<>));  
config.For<IUnitOfWork>().Use(_ => new UnitOfWork(3)).ContainerScoped();  

The next two registrations work very similarly to their equivalents using the built in DI container. The first call registers the Leaderboard<T> type to be used wherever an ILeaderboard<T> is requested. The final call describes an expression that can be used to create a new UnitOfWork, and specifies that it should be ContainerScoped, i.e. per Http request.

config.Populate(services);  

The final call in our StructureMap configuration comes from the StructureMap.DNX library. This call takes all those services which were previously registered in the container (e.g. all the MVC services etc registered by the framework itself), and registers them with StructureMap.

Note: I won't go in to why here, (this issue covers it pretty well), but if you run into problems with your MVC controllers not being created correctly it is probably because you need to call AddControllersAsServices after calling AddMvc in ConfigureServices. This ensures that StructureMap is used to create the instances of your controllers instead of an internal DefaultControllerActivator, which will bypass a lot of your StructureMap configuration. In this example app, AddControllersAsServices is required for ConcreteService to be automatically resolved correctly.

What's the point?

Given that we wrote just as much configuration code for this small app using StructureMap as we did for the built in container, you may be thinking "why bother?" And for very small apps you may well be right, the extra dependency may not be worthwhile. The real value appears when your app starts to grow. Just consider how many of our concrete services were automatically registered without us needing to explicitly configure them:

  • PurchasingService - default convention
  • CrosswordService - registered as implements IGamingService
  • SudokuService - registered as implements IGamingService
  • ConcreteService - concrete services are automatically registered
  • UserModelValidator - closes the IValidator<T> generic

That's a whopping 5 services out of the 8 we registered using the built in container that we didn't need to mention with StructureMap. As your app grows and more services are added, you'll find you have to touch your ConfigureIoC method far less than if you had stuck with the built in container.

If you're still not convinced there are a whole host of other benefits and features StructureMap can provide, some of which are:

  • Creating child/nested containers e.g. for multi tenancy support
  • Multiple profiles, similarly for tenancy support
  • Setter Injection
  • Constructor selection
  • Conventional "Auto" Registration
  • Automatic Lazy<T>/Func<T> resolution
  • Auto resolution of concrete types
  • Interception and Configurable Decorators
  • Amazing debugging/testing tools for viewing inside your container
  • Configurable assembly scanning

Conclusion

In this post I tried to highlight the benefits of using StructureMap as your dependency injection container in your ASP.NET Core applications. While the built-in container is a great start, using a more fully featured container will really simplify configuration as your app grows. I highly recommend checking out the documentation at http://structuremap.github.io to learn more. Give it a try in your ASP.NET Core applications using StructureMap.DNX.


Dominick Baier: Identity Videos, Podcasts and Slides from Conference Season 2016/1

My plan was to cut down on conferences and travelling in general – this didn’t work out ;) I did more conferences in the first 6 months of 2016 than I did in total last year. weird.

Here are some of the digital artefacts:

NDC Oslo 2016: Authentication & secure API access for native & mobile Applications

DevSum 2016: What’s new in ASP.NET Core Security

DevSum 2016: Buzzfrog Podcast with Dag König

DevWeek 2016: Modern Applications need modern Identity

DevWeek 2016: Implementing OpenID Connect and OAuth 2.0 with IdentityServer

All my slides are on speakerdeck.


Filed under: .NET Security, ASP.NET, Conferences & Training, IdentityModel, IdentityServer, OAuth, OpenID Connect, Uncategorized, WebAPI


Filip Woj: Inheriting route attributes in ASP.NET Web API

I was recently working on a project, where I had a need to inherit routes from a generic base Web API controller. This is not supported by Web API out of the box, but can be enabled with a tiny configuration tweak. Let’s have a look.

The problem with inheriting attribute routes

If you look at the definition of the RouteAttribute in ASP.NET Web API, you will see that it’s marked as an “inheritable” attribute. As such, it’s reasonable to assume that if you use that attribute on a base controller, it will be respected in a child controller you create off the base one.

However, in reality, that is not the case, and that’s due to the internal logic in DefaultDirectRouteProvider – which is the default implementation of the way how Web API discovers attribute routes.

We discussed this class (and the entire extensibility point, as the direct route provider can be replaced) before – for example when implementing a centralized route prefix for Web API.

So if this is your generic Web API code, it will not work out of the box:

public abstract class GenericController<TEntity> : ApiController where TEntity : class, IMyEntityDefinition, new()
{
    private readonly IGenericRepository<TEntity> _repo;

    protected GenericController(IGenericRepository<TEntity> repo)
    {
        _repo = repo;
    }

    [Route("{id:int}")]
    public virtual async Task<IHttpActionResult> Get(int id)
    {
        var result = await _repo.FindAsync(id);
        if (result == null)
        {
            return NotFound();
        }

        return Ok(result);
    }
}

[RoutePrefix("api/items")]public class ItemController : GenericController<Item>
{
    public GenericController(IGenericRepository<Item> repo) : base(repo)
    {}
}

Ignoring the implementation details of the repository pattern here, and assuming all your dependency injection is configured already - with the above controller, trying to hit api/items/{id} is going to produce a 404.

The solution for inheriting attribute routes

One of the methods that this default direct route provider exposes as overridable is the one shown below. It is responsible for extracting route attributes from an action descriptor:

protected virtual IReadOnlyList<IDirectRouteFactory> GetActionRouteFactories(HttpActionDescriptor actionDescriptor)
{
    // Ignore the Route attributes from inherited actions.
    ReflectedHttpActionDescriptor reflectedActionDescriptor = actionDescriptor as ReflectedHttpActionDescriptor;
    if (reflectedActionDescriptor != null &&
        reflectedActionDescriptor.MethodInfo != null &&
        reflectedActionDescriptor.MethodInfo.DeclaringType != actionDescriptor.ControllerDescriptor.ControllerType)
    {
        return null;
    }

    Collection<IDirectRouteFactory> newFactories = actionDescriptor.GetCustomAttributes<IDirectRouteFactory>(inherit: false);

    Collection<IHttpRouteInfoProvider> oldProviders = actionDescriptor.GetCustomAttributes<IHttpRouteInfoProvider>(inherit: false);

    List<IDirectRouteFactory> combined = new List<IDirectRouteFactory>();
    combined.AddRange(newFactories);

    foreach (IHttpRouteInfoProvider oldProvider in oldProviders)
    {
        if (oldProvider is IDirectRouteFactory)
        {
            continue;
        }

        combined.Add(new RouteInfoDirectRouteFactory(oldProvider));
    }

    return combined;
}

Without going into too much detail about this code - it's clearly visible that it specifically ignores inherited route attributes (route attributes implement the IDirectRouteFactory interface).

So in order to make our initial sample generic controller work, we need to override the above method and read all inherited routes. This is extremely simple and is shown below:

public class InheritanceDirectRouteProvider : DefaultDirectRouteProvider
{
    protected override IReadOnlyList<IDirectRouteFactory> GetActionRouteFactories(HttpActionDescriptor actionDescriptor)
    {
        return actionDescriptor.GetCustomAttributes<IDirectRouteFactory>(true);
    }
}

This can now be registered at the application startup against your HttpConfiguration – which is shown in the next snippet as an extension method + OWIN Startup class.

public static class HttpConfigurationExtensions
{
    public static void MapInheritedAttributeRoutes(this HttpConfiguration config)
    {
        config.MapHttpAttributeRoutes(new InheritanceDirectRouteProvider());
    }
}

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();
        config.MapInheritedAttributeRoutes();
        app.UseWebApi(config);
    }
}

And that’s it!


Andrew Lock: Reloading strongly typed Options on file changes in ASP.NET Core RC2

Reloading strongly typed Options on file changes in ASP.NET Core RC2

In the previous version of ASP.NET, configuration was typically stored in the <appSettings> section of web.config. Touching the web.config file would cause the application to restart with the new settings. Generally speaking this worked well enough, but triggering a full application reload every time you want to tweak a setting can sometimes create a lot of friction during development.

ASP.NET Core has a new configuration system that is designed to aggregate settings from multiple sources, and expose them via strongly typed classes using the Options pattern. You can load your configuration from environment variables, user secrets, in-memory collections, JSON and other file types, or even your own custom providers.

When loading from files, you may have noticed the reloadOnChange parameter in some of the file provider extension method overloads. You'd be right in thinking that it does exactly what it sounds like - it reloads the configuration file if it changes. However, it probably won't work as you expect without some additional effort.

In this article I'll describe the process I went through trying to reload Options when appsettings.json changes. Note that the final solution is currently only applicable for RC2 - it has been removed from the RTM release, but will be back post-1.0.0.

Trying to reload settings

To demonstrate the default behaviour, I've created a simple ASP.NET Core WebApi project using Visual Studio. To this I have added a MyValues class:

public class MyValues  
{
    public string DefaultValue { get; set; }
}

This is a simple class that will be bound to the configuration data, and injected using the options pattern into consuming classes. I bind the DefaultValue property by adding a Configure call in Startup.ConfigureServices:

public class Startup  
{
    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables();
        Configuration = builder.Build();
    }

    public void ConfigureServices(IServiceCollection services)
    {
        // Configure our options values
        services.Configure<MyValues>(Configuration.GetSection("MyValues"));
        services.AddMvc();
    }
}

I have included the configuration building step so you can see that appsettings.json is configured with reloadOnChange: true. Our MyValues class needs a default value, so I added the required configuration to appsettings.json:

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    }
  },
  "MyValues": {
    "DefaultValue" : "first"
  }
}

Finally, the default ValuesController is updated to have an IOptions<MyValues> instance injected in to the constructor, and the Get action just prints out the DefaultValue.

[Route("api/[controller]")]
public class ValuesController : Controller  
{
    private readonly MyValues _myValues;
    public ValuesController(IOptions<MyValues> values)
    {
        _myValues = values.Value;
    }

    // GET api/values
    [HttpGet]
    public string Get()
    {
        return _myValues.DefaultValue;
    }
}

Debugging our application using F5, and navigating to http://localhost:5123/api/values, gives us the following output:

Reloading strongly typed Options on file changes in ASP.NET Core RC2

Perfect, so we know our values are being loaded and bound correctly. So what happens if we change appsettings.json? While still debugging, I updated the appsettings.json as below, and hit refresh in the browser…

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    }
  },
  "MyValues": {
    "DefaultValue": "I'm new!"
  }
}

Reloading strongly typed Options on file changes in ASP.NET Core RC2

Hmmm… That's the same as before… I guess it doesn't work.

Overview of configuration providers

Before we dig in to why this didn't work, and how to update it to give our expected behaviour, I'd like to take a step back to cover the basics of how the configuration providers work.

After creating a ConfigurationBuilder in our Startup class constructor, we can add a number of sources to it. These can be file-based providers, user secrets, environment variables or a wide variety of other sources. Once all your sources are added, a call to Build will cause each source's provider to load their configuration settings internally, and returns a new ConfigurationRoot.

This ConfigurationRoot contains a list of providers with the values loaded, and functions for retrieving particular settings. The settings themselves are stored internally by each provider in an IDictionary<string, string>. Considering the first appsettings.json in this post, once loaded the JsonConfigurationProvider would contain a dictionary similar to the following:

new Dictionary<string, string> {  
  { "Logging:IncludeScopes", "False" },
  { "Logging:LogLevel:Default", "Debug" },
  { "Logging:LogLevel:System", "Information" },
  { "Logging:LogLevel:Microsoft", "Information" },
  { "MyValues:DefaultValue", "first" }
}

When retrieving a setting from the ConfigurationRoot, the list of sources is inspected in reverse to see if it has a value for the string key provided; if it does, it returns the value, otherwise the search continues up the stack of providers until it is found, or all providers have been searched.
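
In code terms, a lookup like the following walks the providers in that reverse order (a sketch using the keys above and the Configuration property from the Startup class):

// Returns "first" with the appsettings.json shown earlier; a later provider
// (e.g. environment variables) defining the same key would win instead
var defaultValue = Configuration["MyValues:DefaultValue"];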

Overview of model binding

Now we understand how the configuration values are built, let's take a quick look at how our IOptions<> instances get created. There are a number of gotchas to be aware of when model binding (I discuss some in a previous post), but essentially it allows you to bind the flat string dictionary that IConfigurationRoot receives to simple POCO classes that can be injected.

When you setup one of your classes (e.g. MyValues above) to be used as an IOptions<> class, and you bind it to a configuration section, a number of things happen.

First of all, the binding occurs. This takes the ConfigurationRoot we were supplied previously, and interrogates it for settings which map to properties on the model. So, again considering the MyValues class, the binder first creates an instance of the class. It then uses reflection to loop over each of the properties in the class (in this case it only finds DefaultValue) and tries to populate it. Once all the properties that can be bound are set, the instantiated MyValues object is cached and returned.

Secondly, it configures the IoC dependency injection container to inject the IOptions<MyValues> class whenever it is required.
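
Conceptually, the binding step boils down to something like the following sketch (using the public ConfigurationBinder API rather than the exact internals):

// Creates a MyValues instance and populates DefaultValue from "MyValues:DefaultValue"
var myValues = new MyValues();
Configuration.GetSection("MyValues").Bind(myValues);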

Exploring the reload problem

Lets recap. We have an appsettings.json file which is used to provide settings for an IOptions<MyValues> class which we are injecting into our ValuesController. The JSON file is configured with reloadOnChange: true. When we run the app, we can see the values load correctly initially, but if we edit appsettings.json then our injected IOptions<MyValues> object does not change.

Let's try and get to the bottom of this...

The reloadOnChange: true parameter

We need to establish at which point the reload is failing, so we'll start at the bottom of the stack and see if the configuration provider is noticing the file change. We can test this by updating our ConfigureServices call to inject the IConfigurationRoot directly into our ValuesController, so we can directly access the values. This is generally discouraged in favour of the strongly typed configuration available through the IOptions<> pattern, but it lets us bypass the model binding for now.

First we add the configuration to our IoC container:

public class Startup  
{
    public void ConfigureServices(IServiceCollection services)
    {
        // inject the configuration directly
        services.AddSingleton(Configuration);

        // Configure our options values
        services.Configure<MyValues>(Configuration.GetSection("MyValues"));
        services.AddMvc();
    }
}

And we update our ValuesController to receive and display the MyValues section of the IConfigurationRoot.

[Route("api/[controller]")]
public class ValuesController : Controller  
{
    private readonly IConfigurationRoot _config;
    public ValuesController(IConfigurationRoot config)
    {
        _config = config;
    }

    // GET api/values
    [HttpGet]
    public string Get()
    {
        return _config.GetValue<string>("MyValues:DefaultValue");
    }
}

Performing the same operation as before - debugging, then changing appsettings.json to our new values - gives:

Reloading strongly typed Options on file changes in ASP.NET Core RC2

Excellent, we can see the new value is returned! This demonstrates that the appsettings.json file is being reloaded when it changes, and that it is being propagated to the IConfigurationRoot.

Enabling trackConfigChanges

Given we know that the underlying IConfigurationRoot is reloading as required, there must be an issue with the binding configuration of IOptions<>. We bound the configuration to our MyValues class using services.Configure<MyValues>(Configuration.GetSection("MyValues"));, however there is another extension method available to us:

services.Configure<MyValues>(Configuration.GetSection("MyValues"), trackConfigChanges: true);  

This overload has a trackConfigChanges parameter, which looks to be exactly what we're after! Unfortunately, updating our configuration in Startup.ConfigureServices to use this overload doesn't appear to have any effect - our injected IOptions<> still isn't updated when the underlying config file changes.

Using IOptionsMonitor

Clearly we're missing something. Diving in to the aspnet/Options library on GitHub we can see that as well as IOptions<> there is also an IOptionsMonitor<> interface.

Note, a word of warning here - the rest of this post is applicable to RC2, but has since been removed from RTM. It will be back post-1.0.0.

using System;

namespace Microsoft.Extensions.Options  
{
    public interface IOptionsMonitor<out TOptions>
    {
        TOptions CurrentValue { get; }
        IDisposable OnChange(Action<TOptions> listener);
    }
}

You can inject this class in much the same way as you do IOptions<MyValues> - we can retrieve our setting value from the CurrentValue property.

We can test our appsettings.json modification routine again by injecting into our ValuesController:

private readonly MyValues _myValues;  
public ValuesController(IOptionsMonitor<MyValues> values)  
{
    _myValues = values.CurrentValue;
}

Unfortunately, we have the exact same behaviour as before, no reloading for us yet:

Reloading strongly typed Options on file changes in ASP.NET Core RC2

Which, finally, brings us to…

The Solution

So again, this solution comes with the caveat that it only works in RC2, but it will most likely be back in a similar way post 1.0.0.

The key to getting reloads to propagate is to register a listener using the OnChange function of an OptionsMonitor<>. Doing so will retrieve a change token from the IConfigurationRoot and register the listener against it. You can see the exact details here. Whenever a change occurs, the OptionsMonitor<> will reload the IOptions<> value using the original configuration method, and then invoke the listener.

So to finally get reloading of our configuration-bound IOptionsMonitor<MyValues>, we can do something like this:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory, IOptionsMonitor<MyValues> monitor)  
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();

    monitor.OnChange(
        vals =>
        {
            loggerFactory
                .CreateLogger<IOptionsMonitor<MyValues>>()
                .LogDebug($"Config changed: {string.Join(", ", vals)}");
        });

    app.UseMvc();
}

In our configure method we inject an instance of IOptionsMonitor<MyValues> (this is automatically registered as a singleton in the services.Configure<MyValues> method). We can then add a listener using OnChange - we can do anything here, a noop function is fine. In this case we create a logger that writes out the full configuration.

We are already injecting IOptionsMonitor<MyValues> into our ValuesController so we can give one last test by running with F5, viewing the output, then modifying our appsettings.json and checking again:

Reloading strongly typed Options on file changes in ASP.NET Core RC2

Success!

Summary

In this post I discussed how to get changes to configuration files to be automatically detected and propagated to the rest of the application via the Options pattern.

It is simple to detect configuration file changes if you inject the IConfigurationRoot object into your classes. However, this is not the recommended approach to configuration - a strongly typed approach is considered better practice.

In order to use both strongly typed configuration and have the ability to respond to changes we need to use the IOptionsMonitor<> implementations in Microsoft.Extensions.Options. We must register a callback using the OnChange method and then inject IOptionsMonitor<> in our classes. With this setup, the CurrentValue property will always represent the latest configuration values.

As stated earlier, this setup works currently in the RC2 version of ASP.NET Core, but has been subsequently postponed till a post 1.0.0 release.


Darrel Miller: Back to my core

I've spent a large part of the last two years playing the role of a technical marketeer.  Call it developer advocate, API Evangelist, or my favourite title, API Concierge, my role was to engage with developers and help them, in any way I could, to build better HTTP APIs.  I have really enjoyed the experience and had the opportunity to meet many great people.  However, the more you hear yourself talk about what people should do, the more you are reminded that you aren't actually doing the stuff you are talking about any more.  The time has come for me to stop just talking about building production systems and start doing it again.

Badge2

Code is the answer

Starting this month, I am joining Microsoft as a full time software developer.   I am rejoining the Azure API Management team, this time to actually help build the product.  I am happy to be working on a product that is all about helping people build better HTTP based applications in a shorter amount of time.  I'm also really happy to being on a team that really cares about how HTTP should be used and are determined to make these capabilities available to the widest possible audience.

API Management is one of those dreadfully named product categories that actually save developers real time and money when building APIs.  Do you really want to implement rate limiting, API token issuing and geolocated HTTP caching yourself?

As a platform for middleware, API Management products can help you solve all kinds of challenges related to security, deployment, scaling and versioning of HTTP based systems.  It’s definitely my cup of tea.

I am hoping to still have chance to do a few conferences a year and I definitely want to keep on blogging.  Perhaps you'll see some deeper technical content from me in the near future.  It's time to recharge those technical batteries and demonstrate that I can still walk the walk.

Interviews

Having just gone through the process of interviewing, I have some thoughts on the whole process.  I think it is fair to say that Microsoft have a fairly traditional interview process.  You spend a day talking to people from the hiring team and related teams.  You get the usual personal questions and questions about past experiences. When applying for a developer role you get a bunch of technical questions that usually require whiteboard coding on topics that are covered in college level courses.  I haven’t been in university for a very long time.  I can count the number of times I have had to reverse a linked list professionally on one hand.

These types of interview questions are typically met with scorn by experienced developers.  I have heard numerous people suggest alternative interview techniques that I believe would be more effective at determining if someone is a competent developer.

However, these are the hoops that candidates are asked to jump through.  It isn’t a surprise.  It is easy to find this out.  It is fairly easy to practice doing whiteboard coding and there are plenty of resources out there that demonstrate how to achieve many of these comp sci challenges.

I’ve heard developers say that if they were asked to perform such an irrelevant challenge on an interview that they would walk out.  I don’t look at it that way.  I consider it an arbitrary challenge and if I can do the necessary prep work to pass, then it is a reflection on my ability to deal with other challenges I may face.  Maybe these interviews are an artificial test, but I would argue so was university. I certainly didn’t learn how to write code while doing an engineering degree.

Remote Work

I’m not going to be moving to Redmond.  I’m going to continue living in Montreal and working for a Redmond based team.  We have one other developer who is remote, but is in the same timezone as the team.  It would be easier to do the job if I were in Redmond, but I can’t move for family reasons.  I’m actually glad that I can’t move, because I honestly think that remote work is the future for the tech industry.  Once a team gets used to working with remote team members, there really isn’t a downside and there are lots of upsides.

The tech industry has a major shortage of talent and a ridiculous tendency to congregate in certain geographic locations, which causes significant economic problems.  Tech people don’t have any need to be physically close to collaborate.  We should take advantage of that.

But Microsoft?

There is lots of doom and gloom commentary around Microsoft in the circles that I frequent.  Lots of it is related to the issues around ASP.NET Core and .NET Core.  If you look a little into Microsoft’s history you will see that whenever they attempt to make major changes that enable the next generation of products, they get beaten up for it.  Windows Vista is a classic example.  It was perceived as a huge failure, but it made the big changes that allowed Windows 7 to be successful.

The Core stuff is attempting to do a major reset on 15 years of history.  Grumpiness is guaranteed.  It doesn’t worry me particularly.  Could they have done stuff better?  Sure.  Did I ever think that a few teams in Microsoft could have instigated such a radical amount of change? Nope, never. But it is going to take time.  Way more time than those who like living on the bleeding edge are going to be happy about.

There is a whole lot of change happening at Microsoft.  The majority of what I see is really encouraging.  The employees I have met so far are consistently enthusiastic about the company and many of the employees who have left the company will describe their time there very favourably.

Historically, Microsoft was notorious for its hated stack ranking performance review system.  I had heard that the system had been abolished but I had no idea what the replacement system was until last week.  Only time will tell whether the new system will actually work, but my initial impression is that it is going to have an extremely positive impact on Microsoft culture.  The essence of the system is that you are measured on your contributions to your team, the impact you have had on helping other employees succeed and how you have built on the work of others.  The system, as I understand it, is designed to reward collaboration within the company.  If that doesn’t have an impact on the infamous Microsoft org chart comic, I don’t know what will.

Building stuff is fun

I got hooked on the creative endeavour of writing code 34 years ago and I hope to still be doing it for many more years to come.


Pedro Félix: Client-side development on OS X using Windows hosted HTTP Web APIs

In a recent post I described my Android development environment, based on a OS X host, the Genymotion Android emulator, and a Windows VM to run the back-end HTTP APIs.
In this post I’ll describe a similar environment but now for browser-side applications, once again using Windows hosted HTTP APIs.

Recently I had to do some prototyping involving browser-based applications, using ES6 and React, that interact with IdentityServer3 and a HTTP API.
Both the IdentityServer3 server and the ASP.NET HTTP APIs are running on a Windows VM, however I prefer to use the host OS X environment for the client side development (node, npm, webpack, babel, …).
Another requirement is that the server side uses HTTPS and multiple name hosts (e.g. id.example.com, app1.example.com, app2.example.com), as described in this previous post.

The solution that I ended up using for this environment is the following:

  • On the Windows VM side I have Fiddler running on port 8888 with “Allow remote computer to connect” enabled. This means that Fiddler will act as a proxy even for requests originating from outside the Windows VM.
  • On the OS X host I launch Chrome with open -a "/Applications/Google Chrome.app" --args --proxy-server=10.211.55.3:8888 --proxy-bypass-list=localhost, where 10.211.55.3 is the Windows VM address. To automate this procedure I use the Automator tool to create a shell script based workflow.

The end result, depicted in the following diagram, is that all requests (except for localhost) will be forwarded to the Fiddler instance running on the Windows VM, which will use the Windows hosts file to direct the request to the multiple IIS sites.

hosting
As a bonus, I also have full visibility on the HTTP messages.

And that’s it. I hope it helps.



Pedro Félix: Using multiple IIS server certificates on Windows 7

Nowadays I do most of my Windows development on a Windows 7 VM running on OS X (now macOS); Windows 8 and Windows Server 2012 left some scars, so I’m very reluctant to move to Windows 10. On this development environment I like to mimic some production environment characteristics, namely:

  • Using IIS based hosting
  • Having each site using different host names
  • Using HTTPS

For the site names I typically use example.com subdomains (e.g. id.example.com, app1.example.com, app2.example.com), which are reserved by IANA for documentation purposes (see RFC 6761). I associate these names to local addresses via the hosts file.

For generating the server certificates I use makecert and the scripts published at Appendix G of the Designing Evolvable Web APIs with ASP.NET.

However, having multiple sites using distinct certificates hosted on the same IP and port address presents some challenges. This is because IIS/HTTP.SYS uses the Host header to demultiplex the incoming requests to the different sites bound to the same IP and port.
However, when using TLS, the server certificate must be provided during the TLS handshake, well before the HTTP request is sent and the Host header is received. Since at this time HTTP.SYS does not know the target site, it also cannot select the appropriate certificate.

Server Name Indication (SNI) is a TLS extension (see RFC 3546) that addresses this issue, by letting the client send the host name in the TLS handshake, allowing the server to identity the target site and use the corresponding certificate.

Unfortunately, HTTP.SYS on Windows 7 does not support SNI (that’s what I get for using 2009 operating systems). To circumvent this I took advantage of the fact that there are more loopback addresses than just 127.0.0.1. So, what I do is use a different loopback IP address for each site on my machine, as illustrated by the following hosts file excerpt:

127.0.0.2 app1.example.com
127.0.0.3 app2.example.com
127.0.0.4 id.example.com

When I configure the HTTPS IIS bindings I explicitly configure the listening IP addresses using these different values for each site, which allows me to use different certificates.

And that’s it. Hope it helps.



Damien Bowden: Import and Export CSV in ASP.NET Core

This article shows how to import and export csv data in an ASP.NET Core application. The InputFormatter and the OutputFormatter classes are used to convert the csv data to the C# model classes.

Code: https://github.com/damienbod/AspNetCoreCsvImportExport

2016.06.29: Updated to ASP.NET Core RTM

The LocalizationRecord class is used as the model class to import and export to and from csv data.

using System;

namespace AspNetCoreCsvImportExport.Model
{
    public class LocalizationRecord
    {
        public long Id { get; set; }
        public string Key { get; set; }
        public string Text { get; set; }
        public string LocalizationCulture { get; set; }
        public string ResourceKey { get; set; }
    }
}

The MVC Controller CsvTestController makes it possible to import and export the data. The Get method exports the data using the Accept header in the HTTP request. By default, JSON will be returned. If the Accept header is set to ‘text/csv’, the data will be returned as csv. The GetDataAsCsv method always returns csv data because the Produces attribute is used to force this. This makes it easy to download the csv data in a browser.

The Import method uses the Content-Type HTTP request header to decide how to handle the request body. If ‘text/csv’ is defined, the custom csv input formatter will be used.
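
For example, assuming the API is listening on localhost:5000 (the port is illustrative), the endpoints can be exercised with curl like this:

# Export: request csv via the Accept header
curl -H "Accept: text/csv" http://localhost:5000/api/csvtest

# Import: post csv data using the Content-Type header
curl -X POST -H "Content-Type: text/csv" --data-binary @data.csv http://localhost:5000/api/csvtest/import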

using System.Collections.Generic;
using AspNetCoreCsvImportExport.Model;
using Microsoft.AspNetCore.Mvc;

namespace AspNetCoreCsvImportExport.Controllers
{
    [Route("api/[controller]")]
    public class CsvTestController : Controller
    {
        // GET api/csvtest
        [HttpGet]
        public IActionResult Get()
        {
            return Ok(DummyData());
        }

        [HttpGet]
        [Route("data.csv")]
        [Produces("text/csv")]
        public IActionResult GetDataAsCsv()
        {
            return Ok( DummyData());
        }

        private static IEnumerable<LocalizationRecord> DummyData()
        {
            var model = new List<LocalizationRecord>
            {
                new LocalizationRecord
                {
                    Id = 1,
                    Key = "test",
                    Text = "test text",
                    LocalizationCulture = "en-US",
                    ResourceKey = "test"

                },
                new LocalizationRecord
                {
                    Id = 2,
                    Key = "test",
                    Text = "test2 text de-CH",
                    LocalizationCulture = "de-CH",
                    ResourceKey = "test"

                }
            };

            return model;
        }

        // POST api/csvtest/import
        [HttpPost]
        [Route("import")]
        public IActionResult Import([FromBody]List<LocalizationRecord> value)
        {
            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }
            else
            {
                List<LocalizationRecord> data = value;
                return Ok();
            }
        }

    }
}

The csv input formatter derives from the InputFormatter class. It checks whether the context ModelType property implements IList and, if so, converts the csv data to a list of objects of type T using reflection. This is implemented in the readStream method. The implementation is very basic and will not work if you have more complex structures in your model class.

using System;
using System.Collections;
using System.Collections.Generic;
using System.IO;
using System.Reflection;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Formatters;
using Microsoft.Net.Http.Headers;

namespace AspNetCoreCsvImportExport.Formatters
{
    /// <summary>
    /// ContentType: text/csv
    /// </summary>
    public class CsvInputFormatter : InputFormatter
    {
        private readonly CsvFormatterOptions _options;

        public CsvInputFormatter(CsvFormatterOptions csvFormatterOptions)
        {
            if (csvFormatterOptions == null)
            {
                throw new ArgumentNullException(nameof(csvFormatterOptions));
            }

            _options = csvFormatterOptions;
        }

        public override Task<InputFormatterResult> ReadRequestBodyAsync(InputFormatterContext context)
        {
            var type = context.ModelType;
            var request = context.HttpContext.Request;
            MediaTypeHeaderValue requestContentType = null;
            MediaTypeHeaderValue.TryParse(request.ContentType, out requestContentType);


            var result = readStream(type, request.Body);
            return InputFormatterResult.SuccessAsync(result);
        }

        public override bool CanRead(InputFormatterContext context)
        {
            var type = context.ModelType;
            if (type == null)
                throw new ArgumentNullException("type");

            return isTypeOfIEnumerable(type);
        }

        private bool isTypeOfIEnumerable(Type type)
        {

            foreach (Type interfaceType in type.GetInterfaces())
            {

                if (interfaceType == typeof(IList))
                    return true;
            }

            return false;
        }

        private object readStream(Type type, Stream stream)
        {
            Type itemType;
            var typeIsArray = false;
            IList list;
            if (type.GetGenericArguments().Length > 0)
            {
                itemType = type.GetGenericArguments()[0];
                // Construct a List<T> of the item type; creating an instance of the
                // item type itself here would not cast to IList.
                var genericListType = typeof(List<>).MakeGenericType(itemType);
                list = (IList)Activator.CreateInstance(genericListType);
            }
            else
            {
                typeIsArray = true;
                itemType = type.GetElementType();

                var listType = typeof(List<>);
                var constructedListType = listType.MakeGenericType(itemType);

                list = (IList)Activator.CreateInstance(constructedListType);
            }


            var reader = new StreamReader(stream);

            bool skipFirstLine = _options.UseSingleLineHeaderInCsv;
            while (!reader.EndOfStream)
            {
                var line = reader.ReadLine();
                var values = line.Split(_options.CsvDelimiter.ToCharArray());
                if(skipFirstLine)
                {
                    skipFirstLine = false;
                }
                else
                {
                    var itemTypeInGeneric = list.GetType().GetTypeInfo().GenericTypeArguments[0];
                    var item = Activator.CreateInstance(itemTypeInGeneric);
                    var properties = item.GetType().GetProperties();
                    for (int i = 0;i<values.Length; i++)
                    {
                        properties[i].SetValue(item, Convert.ChangeType(values[i], properties[i].PropertyType), null);
                    }

                    list.Add(item);
                }

            }

            if(typeIsArray)
            {
                Array array = Array.CreateInstance(itemType, list.Count);

                for(int t = 0; t < list.Count; t++)
                {
                    array.SetValue(list[t], t);
                }
                return array;
            }
            
            return list;
        }
    }
}

The csv output formatter is implemented using the code from Tugberk Ugurlu’s blog with some small changes. Thanks for this. This formatter uses ‘;’ to separate the properties and a new line for each object. The headers are added to the first line.

using System;
using System.Collections;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Formatters;

namespace AspNetCoreCsvImportExport.Formatters
{
    /// <summary>
    /// Original code taken from
    /// http://www.tugberkugurlu.com/archive/creating-custom-csvmediatypeformatter-in-asp-net-web-api-for-comma-separated-values-csv-format
    /// Adapted for ASP.NET Core and uses ; instead of , for delimiters
    /// </summary>
    public class CsvOutputFormatter :  OutputFormatter
    {
        private readonly CsvFormatterOptions _options;

        public string ContentType { get; private set; }

        public CsvOutputFormatter(CsvFormatterOptions csvFormatterOptions)
        {
            ContentType = "text/csv";
            SupportedMediaTypes.Add(Microsoft.Net.Http.Headers.MediaTypeHeaderValue.Parse("text/csv"));

            if (csvFormatterOptions == null)
            {
                throw new ArgumentNullException(nameof(csvFormatterOptions));
            }

            _options = csvFormatterOptions;

            //SupportedEncodings.Add(Encoding.GetEncoding("utf-8"));
        }

        protected override bool CanWriteType(Type type)
        {

            if (type == null)
                throw new ArgumentNullException("type");

            return isTypeOfIEnumerable(type);
        }

        private bool isTypeOfIEnumerable(Type type)
        {

            foreach (Type interfaceType in type.GetInterfaces())
            {

                if (interfaceType == typeof(IList))
                    return true;
            }

            return false;
        }

        public async override Task WriteResponseBodyAsync(OutputFormatterWriteContext context)
        {
            var response = context.HttpContext.Response;

            Type type = context.Object.GetType();
            Type itemType;

            if (type.GetGenericArguments().Length > 0)
            {
                itemType = type.GetGenericArguments()[0];
            }
            else
            {
                itemType = type.GetElementType();
            }

            StringWriter _stringWriter = new StringWriter();

            if (_options.UseSingleLineHeaderInCsv)
            {
                _stringWriter.WriteLine(
                    string.Join<string>(
                        _options.CsvDelimiter, itemType.GetProperties().Select(x => x.Name)
                    )
                );
            }


            foreach (var obj in (IEnumerable<object>)context.Object)
            {

                var vals = obj.GetType().GetProperties().Select(
                    pi => new {
                        Value = pi.GetValue(obj, null)
                    }
                );

                string _valueLine = string.Empty;

                foreach (var val in vals)
                {

                    if (val.Value != null)
                    {

                        var _val = val.Value.ToString();

                        //Check if the value contains a comma and place it in quotes if so
                        if (_val.Contains(","))
                            _val = string.Concat("\"", _val, "\"");

                        //Replace any \r or \n special characters from a new line with a space
                        if (_val.Contains("\r"))
                            _val = _val.Replace("\r", " ");
                        if (_val.Contains("\n"))
                            _val = _val.Replace("\n", " ");

                        _valueLine = string.Concat(_valueLine, _val, _options.CsvDelimiter);

                    }
                    else
                    {

                        _valueLine = string.Concat(string.Empty, _options.CsvDelimiter);
                    }
                }

                _stringWriter.WriteLine(_valueLine.TrimEnd(_options.CsvDelimiter.ToCharArray()));
            }

            var streamWriter = new StreamWriter(response.Body);
            await streamWriter.WriteAsync(_stringWriter.ToString());
            await streamWriter.FlushAsync();
        }
    }
}

The custom formatters need to be added to the MVC middleware, so that it knows how to handle the media type ‘text/csv’.

public void ConfigureServices(IServiceCollection services)
{
  var csvFormatterOptions = new CsvFormatterOptions();
  
  services.AddMvc(options =>
  {
     options.InputFormatters.Add(new CsvInputFormatter(csvFormatterOptions));
     options.OutputFormatters.Add(new CsvOutputFormatter(csvFormatterOptions));
     options.FormatterMappings.SetMediaTypeMappingForFormat("csv", MediaTypeHeaderValue.Parse("text/csv"));
  });
}
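
The CsvFormatterOptions class used above is not shown in the snippets. A minimal sketch of what it could look like, assuming only the delimiter and header flag that the formatters read, is:

namespace AspNetCoreCsvImportExport.Formatters
{
    // Minimal options class consumed by both formatters. The property names match the
    // usages above (CsvDelimiter, UseSingleLineHeaderInCsv); the default values are assumptions.
    public class CsvFormatterOptions
    {
        public bool UseSingleLineHeaderInCsv { get; set; } = true;

        public string CsvDelimiter { get; set; } = ";";
    }
}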

When the data.csv link is requested, a csv response is returned to the client, which can be saved. The data contains the header texts and the value of each property of each object. It can then be opened in Excel.

http://localhost:10336/api/csvtest/data.csv

Id;Key;Text;LocalizationCulture;ResourceKey
1;test;test text;en-US;test
2;test;test2 text de-CH;de-CH;test

This data can then be used to upload csv data to the server, where it is converted back to a C# object. I use Fiddler, but Postman or curl can also be used, or any HTTP client where you can set the Content-Type header.


 http://localhost:10336/api/csvtest/import 

 User-Agent: Fiddler 
 Content-Type: text/csv 
 Host: localhost:10336 
 Content-Length: 110 


 Id;Key;Text;LocalizationCulture;ResourceKey 
 1;test;test text;en-US;test 
 2;test;test2 text de-CH;de-CH;test 

The following image shows that the data is imported correctly.

importExportCsv
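
The same request can also be sent from code. A small HttpClient sketch (the URL and payload are just the values from the example above) might look like this:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class CsvImportClient
{
    public static async Task ImportAsync()
    {
        // The csv payload from the example above, including the header line.
        var csv = "Id;Key;Text;LocalizationCulture;ResourceKey\n"
                + "1;test;test text;en-US;test\n"
                + "2;test;test2 text de-CH;de-CH;test\n";

        using (var client = new HttpClient())
        {
            // The text/csv Content-Type triggers the CsvInputFormatter on the server.
            var content = new StringContent(csv, Encoding.UTF8, "text/csv");
            var response = await client.PostAsync("http://localhost:10336/api/csvtest/import", content);
            Console.WriteLine(response.StatusCode);
        }
    }
}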

Notes

The implementations of the InputFormatter and the OutputFormatter classes are specific to a list of simple classes with only properties. If you require or use more complex classes, these implementations need to be changed.

Links

http://www.tugberkugurlu.com/archive/creating-custom-csvmediatypeformatter-in-asp-net-web-api-for-comma-separated-values-csv-format

ASP.NET Core 1.0 MVC 6 Custom Protobuf Formatters

http://www.strathweb.com/2014/11/formatters-asp-net-mvc-6/

https://wildermuth.com/2016/03/16/Content_Negotiation_in_ASP_NET_Core



Andrew Lock: Creating a custom ConfigurationProvider in ASP.NET Core to parse YAML

Creating a custom ConfigurationProvider in ASP.NET Core to parse YAML

In the previous incarnation of ASP.NET, configuration was primarily handled by the ConfigurationManager in System.Configuration, which obtained its values from web.config. In ASP.NET Core there is a new, lightweight configuration system that is designed to be highly extensible. It lets you aggregate many configuration values from multiple different sources, and then access those in a strongly typed fashion using the new Options pattern.

Microsoft have written a number of packages for loading configuration from a variety of sources. Currently, using packages in the Microsoft.Extensions.Configuration namespace, you can read values from:

  • Console command line arguments
  • Environment variables
  • User Secrets stored using the Secrets Manager
  • In memory collections
  • JSON files
  • XML files
  • INI files

I recently wanted to use a YAML file as a configuration source, so I decided to write my own provider to support it. In this article I'm going to describe the process of creating a custom configuration provider. I will outline the provider I created, but you could easily adapt it to read any other sort of structured file you need to.

If you are just looking for the YAML provider itself, rather than how to create your own custom provider, you can find the code on GitHub and on NuGet.

Introduction to the ASP.NET Core configuration system

For those unfamiliar with it, the code below shows a somewhat typical File - New Project configuration for an ASP.NET Core application. It shows the constructor for the Startup class which is called when your app is just starting up.

public Startup(IHostingEnvironment env)  
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
        .AddEnvironmentVariables();
    Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }  

This version was scaffolded by the Yeoman generator, so it may differ from the Visual Studio template, but both are similar. Configuration is performed using a ConfigurationBuilder which is used to aggregate settings from various sources. Before adding anything else, you should be sure to set the ContentRootPath, so the builder knows where to look for your files.

We are then adding two JSON files - the appsettings.json file (which is typically where you would store settings you previously stored in web.config), and an environment-specific JSON file (when in development, it would look for an appsettings.development.json file). Any settings with the same key in the latter file will overwrite settings read from the first.

Finally, the environment variables are added to the settings collection, again overwriting any identical values, and the configuration is built into an IConfigurationRoot, which essentially exposes a key-value store of setting keys and values.

Under the hood

There are a few important points to note in this setup.

  1. Settings discovered later in the pipeline overwrite any settings found previously.
  2. The setting keys are case insensitive.
  3. Setting keys are a string representation of the whole context of a setting, with a context delimited by the : character.

Hopefully the first two points make sense but what about that third one? Essentially we need to 'flatten' all our configuration files so that they have a single string key for every value. Taking a simple JSON example:

{
  "Outer" : { 
    "Middle" : { 
      "Inner": "value1",
      "HasValue": true
    }
  }
}

This example contains nested objects, but only two values that are actually being exposed as settings. The JsonConfigurationProvider takes this representation and ultimately converts it into an IDictionary<string, string> with the following values:

new Dictionary<string, string> {  
  {"Outer:Middle:Inner", "value1"},
  {"Outer:Middle:HasValue", "true"}
}
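
Once the configuration is built, these flattened keys can be read directly from the IConfigurationRoot. A short illustrative snippet (not from the original post), assuming the configuration built in the Startup constructor above:

using Microsoft.Extensions.Configuration;

public static class ConfigurationExamples
{
    public static void ReadExampleValues(IConfigurationRoot configuration)
    {
        // Values are looked up with the full ':' delimited key; keys are case insensitive.
        var inner = configuration["Outer:Middle:Inner"];       // "value1"
        var hasValue = configuration["Outer:Middle:HasValue"]; // "true" (always a string)

        // Or fetch a whole section and use keys relative to it.
        var middle = configuration.GetSection("Outer:Middle");
        var sameValue = middle["Inner"];                        // "value1"
    }
}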

YAML basics

YAML stands for "YAML Ain't Markup Language", and according to the official YAML website:

YAML is a human friendly data serialization standard for all programming languages.

It is a popular format for configuration files as it is easy to read and write, and it is used by continuous integration tools like AppVeyor and Travis. For example, an appveyor.yml file might look something like the following:

version: '{build}'  
pull_requests:  
  do_not_increment_build_number: true
branches:  
  only:
  - master
nuget:  
  disable_publish_on_pr: true
build_script:  
- ps: .\Build.ps1
test: off  
artifacts:  
- path: .\artifacts\**\*.nupkg
  name: NuGet
deploy:  
- provider: NuGet
  server: https://www.myget.org/F/andrewlock-ci/api/v2/package
  skip_symbols: true
  on:
    branch: master
- provider: NuGet
  name: production
  on:
    branch: master
    appveyor_repo_tag: true

Whitespace and case are important in YAML, so the indents all have meaning. If you are used to working with JSON, it may help to think of an indented YAML section as being surrounded by {}.

There are essentially 3 primary structures in YAML, which correspond quite nicely to JSON equivalents. I'll go over these briefly as we will need to understand how each should be converted to produce the key-value pairs we need for the configuration system.

YAML Scalar

A scalar is just a value - this might be the property key on the left, or the property value on the right. All of the identifiers in the snippet below are scalars.

key1: value  
key2: 23  
key3: false  

The scalar corresponds fairly obviously with the simple types in JavaScript (string, number, boolean etc - not arrays or objects), whether they are used as keys or values.

YAML Mapping

The YAML mapping structure is essentially a dictionary, with a unique identifier and a value. It corresponds to an object in JSON. Within a mapping, all the keys must be unique; YAML is case sensitive. The example below shows a simple mapping structure, and two nested mappings:

mapping1:  
  prop1: val1
  prop2: val2
mapping2:  
  mapping3:
    prop1: otherval1
    prop2: otherval2
  mapping4: 
    prop1: finalval

YAML Sequence

Finally, we have the sequence, which is equivalent to a JSON array. Again, nested sequences are possible - the example shows a sequence of mappings, equivalent to a JSON array of objects:

sequence1:  
- map1:
   prop1: value1
- map2:
   prop2: value2

Creating a custom configuration provider

Now that we have an understanding of what we are working with, we can dive into the fun bit, creating our configuration provider!

In order to create a custom provider, you only need to implement two interfaces from the Microsoft.Extensions.Configuration.Abstractions package - IConfigurationProvider and IConfigurationSource.

In reality, it's unlikely you will need to implement these directly - there are a number of base classes you can use which contain partial implementations to get you started.

The ConfigurationSource

The first interface to implement is the IConfigurationSource. This has a single method that needs implementing, but there is also a base FileConfigurationSource which is more appropriate for our purposes:

public class YamlConfigurationSource : FileConfigurationSource  
{
    public override IConfigurationProvider Build(IConfigurationBuilder builder)
    {
        FileProvider = FileProvider ?? builder.GetFileProvider();
        return new YamlConfigurationProvider(this);
    }
}

If not already set, this calls the extension method GetFileProvider on IConfigurationBuilder to obtain an IFileProvider which is used later to load files from disk. It then creates a new instance of a YamlConfigurationProvider (described next), and returns it to the caller.

The ConfigurationProvider

There are a couple of possibilities for implementing IConfigurationProvider but we will be implementing the base class FileConfigurationProvider. This base class handles all the additional requirements of loading files for us, handling missing files, reloads, setting key management etc. All that is required is to implement a single Load method. The YamlConfigurationProvider (elided for brevity) is shown below:

using System;  
using System.IO;  
using Microsoft.Extensions.Configuration;

public class YamlConfigurationProvider : FileConfigurationProvider  
{
    public YamlConfigurationProvider(YamlConfigurationSource source) : base(source) { }

    public override void Load(Stream stream)
    {
        var parser = new YamlConfigurationFileParser();

        Data = parser.Parse(stream);
    }
}

Easy, we're all done! We just create an instance of the YamlConfigurationFileParser, parse the stream, and set the output string dictionary to the Data property.

Ok, so we're not quite there. While we have implemented the only required interfaces, we have a couple of support classes we need to set up.

The FileParser

The YamlConfigurationProvider above didn't really do much - it's our YamlConfigurationFileParser that contains the meat of our provider, converting the stream of characters provided to it into a string dictionary.

In order to parse the stream, I turned to YamlDotNet, a great open source library for parsing YAML files into a representational format. I also took a peek at the source code behind the JsonConfigurationFileParser in the aspnet/Configuration project on GitHub. In fact, given how close the YAML and JSON formats are, most of the code I wrote was inspired either by the Microsoft source code, or examples from YamlDotNet.

The parser we create must take a stream input from a file, and convert it into an IDictionary<string, string>. To do this, we make use of the visitor pattern, visiting each of the YAML nodes we discover in turn. I'll break down the basic outline of the YamlConfigurationFileParser below:

using System;  
using System.Collections.Generic;  
using System.IO;  
using System.Linq;  
using Microsoft.Extensions.Configuration;  
using YamlDotNet.RepresentationModel;

internal class YamlConfigurationFileParser  
{
    private readonly IDictionary<string, string> _data = 
        new SortedDictionary<string, string>(StringComparer.OrdinalIgnoreCase);
    private readonly Stack<string> _context = new Stack<string>();
    private string _currentPath;

    public IDictionary<string, string> Parse(Stream input)
    {
        _data.Clear();
        _context.Clear();

        var yaml = new YamlStream();
        yaml.Load(new StreamReader(input));

        // Examine the stream and fetch the top level node
        var mapping = (YamlMappingNode)yaml.Documents[0].RootNode;

        // The document node is a mapping node
        VisitYamlMappingNode(mapping);

        return _data;
    }

    // Implementation details elided for brevity
    private void VisitYamlMappingNode(YamlMappingNode node) { }

    private void VisitYamlMappingNode(YamlScalarNode yamlKey, YamlMappingNode yamlValue) { }

    private void VisitYamlNodePair(KeyValuePair<YamlNode, YamlNode> yamlNodePair) { }

    private void VisitYamlSequenceNode(YamlScalarNode yamlKey, YamlSequenceNode yamlValue) { }

    private void VisitYamlSequenceNode(YamlSequenceNode node) { }

    private void EnterContext(string context) { }

    private void ExitContext() { }

    // Final 'leaf' call for each tree which records the setting's value 
    private void VisitYamlScalarNode(YamlScalarNode yamlKey, YamlScalarNode yamlValue)
    {
        EnterContext(yamlKey.Value);
        var currentKey = _currentPath;

        if (_data.ContainsKey(currentKey))
        {
            throw new FormatException(Resources.FormatError_KeyIsDuplicated(currentKey));
        }

        _data[currentKey] = yamlValue.Value;
        ExitContext();
    }

}

I've hidden most of the visitor functions as they're really just implementation details, but if you're interested you can find the full YamlConfigurationFileParser code on GitHub.

First, we have our private fields - Dictionary<string, string> _data which will contain all our settings once parsing is complete, Stack<string> _context which keeps track of the level of nesting we have, and string _currentPath which will be set to the current setting key when _context changes. Note that the dictionary is created with StringComparer.OrdinalIgnoreCase (remember we said setting keys are case insensitive).

The processing is started by calling Parse(stream) with the open file stream. We clear any previous data or context we have, create an instance of YamlStream, and load our provided stream into it. We then retrieve the document level RootNode which you can think of as sitting just outside the YAML document, pointing to the document contents.


Now we have a reference to the document structures, we can visit each of these in sequence, looping over all of the children until we have visited every node. For each node, we call the appropriate 'visit' method depending on the node type.

I have only shown the body of the VisitYamlScalarNode(keyNode, valueNode) for brevity but the other 'visit' methods are relatively simple. For every level you go into a mapping structure, the mapping 'key' node gets pushed onto the context stack. For a sequence structure, the 0 based index of the item is pushed on to the stack before it is processed.

Every visitation context will ultimately terminate in a call to VisitYamlScalarNode. This method adds the final key to the context, and fetches the combined setting key path in _currentPath. It checks that the key has not been previously added (in this file), and then saves the setting key and final scalar value into the dictionary.
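
To give an idea of how the context stack produces the combined key, a possible implementation of EnterContext and ExitContext inside the YamlConfigurationFileParser could be the following sketch (the actual code is in the GitHub repository); it relies on the usings already shown above:

    private void EnterContext(string context)
    {
        _context.Push(context);
        // Rebuild the full key, e.g. "mapping1:mapping2a:inside", from the stack.
        _currentPath = ConfigurationPath.Combine(_context.Reverse());
    }

    private void ExitContext()
    {
        _context.Pop();
        _currentPath = ConfigurationPath.Combine(_context.Reverse());
    }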

Once all the nodes have been visited, the final Dictionary is returned, and we're done! To give a concrete example, consider the following YAML file:

key1: value1  
mapping1:  
  mapping2a: 
    inside: value2
  mapping2b:
  - seq1
  - seq2
a_sequence:  
- a_mapping: 
    inner: value3

Once every node has been visited, we would have a dictionary with the following entries:

new Dictionary<string, string> {  
  {"key1", "value1"},
  {"mapping1:mapping2a:inside", "value2"},
  {"mapping1:mapping2b:0", "seq1"},
  {"mapping1:mapping2b:1", "seq2"},
  {"a_sequence:0:a_mapping:inner", "value3"},
}

The builder extension methods

We now have all the pieces that are required to load and provide configuration values from a YAML file. However, the new configuration system makes heavy use of extension methods to enable a fluent configuration experience. In keeping with this, we will add a few extension methods to IConfigurationBuilder to allow you to easily add a YAML source.

using System;  
using System.IO;  
using Microsoft.Extensions.FileProviders;  
using Microsoft.Extensions.Configuration;

public static class YamlConfigurationExtensions  
{
    public static IConfigurationBuilder AddYamlFile(this IConfigurationBuilder builder, string path)
    {
        return AddYamlFile(builder, provider: null, path: path, optional: false, reloadOnChange: false);
    }

    public static IConfigurationBuilder AddYamlFile(this IConfigurationBuilder builder, string path, bool optional)
    {
        return AddYamlFile(builder, provider: null, path: path, optional: optional, reloadOnChange: false);
    }

    public static IConfigurationBuilder AddYamlFile(this IConfigurationBuilder builder, string path, bool optional, bool reloadOnChange)
    {
        return AddYamlFile(builder, provider: null, path: path, optional: optional, reloadOnChange: reloadOnChange);
    }

    public static IConfigurationBuilder AddYamlFile(this IConfigurationBuilder builder, IFileProvider provider, string path, bool optional, bool reloadOnChange)
    {
        if (provider == null && Path.IsPathRooted(path))
        {
            provider = new PhysicalFileProvider(Path.GetDirectoryName(path));
            path = Path.GetFileName(path);
        }
        var source = new YamlConfigurationSource
        {
            FileProvider = provider,
            Path = path,
            Optional = optional,
            ReloadOnChange = reloadOnChange
        };
        builder.Add(source);
        return builder;
    }
}

These overloads all mirror the AddJsonFile equivalents you will likely have already used. The first three overloads of AddYamlFile all just delegate to the final overload, passing in default values for the various optional parameters. In the final overload, we first create a PhysicalFileProvider which is used to load files from disk, if one was not provided. We then setup our YamlConfigurationSource with the provided options, add it to the collection of IConfigurationSource in IConfigurationBuilder, and return the builder itself to allow the fluent configuration style.

Putting it all together

We now have all the pieces required to load application settings from YAML files! If you have created your own custom configuration provider in a class library, you need to include a reference to it in the project.json of your web application. If you just want to use the public YamlConfigurationProvider described here, you can pull it from NuGet using:

{
  "dependencies": {
    "NetEscapades.Configuration.Yaml": "1.0.3"
  }
}

Finally, use the extension method in your Startup configuration!

public Startup(IHostingEnvironment env)  
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddYamlFile("my_required_settings.yml", optional: false)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
        .AddEnvironmentVariables();
    Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }  

In the configuration above, you can see we have added a YAML file to the start of our configuration pipeline, in which we load a required my_required_settings.yml file. This can be used to give us default setting values which can then be overwritten by our JSON files if required.
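
As with any other configuration source, the loaded YAML values can then be bound to a strongly typed class using the Options pattern mentioned earlier. A short sketch, assuming a hypothetical MySettings class and a matching MySettings mapping in my_required_settings.yml:

using Microsoft.Extensions.DependencyInjection;

// Hypothetical POCO matching a 'MySettings' mapping in my_required_settings.yml.
public class MySettings
{
    public string ConnectionString { get; set; }
    public int TimeoutSeconds { get; set; }
}

// In Startup:
public void ConfigureServices(IServiceCollection services)
{
    services.AddOptions();
    // Binds the flattened "MySettings:*" keys to the MySettings class.
    services.Configure<MySettings>(Configuration.GetSection("MySettings"));
}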

As mentioned before, all the code for this setup is on GitHub and NuGet so feel free to check it out. If you find any bugs, or issues, please do let me know.

Happy coding!



Damien Bowden: ASP.NET Core, Angular2 with Webpack and Visual Studio

This article shows how Webpack could be used together with Visual Studio ASP.NET Core and Angular2. Both the client and the server side of the application are implemented inside one ASP.NET Core project, which makes it easier to deploy.

vs_webpack_angular2

Code: https://github.com/damienbod/Angular2WebpackVisualStudio

Authors

Fabian Gosebrink, Damien Bowden.
This post is hosted on both http://damienbod.com and http://offering.solutions/ and will be hosted on http://blog.noser.com afterwards.

2016.08.12: Updated to Angular2 rc5 and split webpack file.
2016.07.02: Updated to Angular2 rc4
2016.06.29: Updated to ASP.NET Core RTM
2016.06.26: Updated to Angular 2 rc3 and new routing
2016.06.17: Updated to Angular 2 rc2

Setting up the application

The ASP.NET Core application contains both the server side API services and also hosts the Angular 2 client application. The source code for the Angular 2 application is implemented in the angular2App folder. Webpack is then used to deploy the application, using the development build or a production build, which deploys the application to the wwwroot folder. This makes it easy to deploy the application using the standard tools from Visual Studio with the standard configurations.
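
For the Webpack output in wwwroot to be served by the ASP.NET Core application, the static files middleware has to be registered in Startup. A minimal sketch, assuming the index.html produced by the build is used as the default document (this is not the full Configure method from the repository):

// Requires the Microsoft.AspNetCore.StaticFiles package and
// using Microsoft.AspNetCore.Builder;
public void Configure(IApplicationBuilder app)
{
    // Serve index.html from wwwroot for the root URL,
    // then serve the bundled js/css/assets produced by Webpack.
    app.UseDefaultFiles();
    app.UseStaticFiles();

    // The API controllers (see below) are handled by MVC attribute routing.
    app.UseMvc();
}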

npm configuration

The npm package.json configuration loads all the required packages for Angular 2 and Webpack. The Webpack packages are all added to the devDependencies. A “build” script and a “buildProduction” script are also configured, so that the client application can be built using Webpack from the command line using “npm run build” or “npm run buildProduction”. These two scripts just call the same cmd as the Webpack task runner.

{
  "version": "1.0.0",
  "description": "",
  "main": "wwwroot/index.html",
  "author": "",
  "license": "ISC",
  "scripts": {
    "start": "webpack-dev-server --inline --progress --port 8080",
    "build": "webpack -d --color",
    "buildProduction": "webpack -d --color --config webpack.prod.js",
    "tsc": "tsc",
    "tsc:w": "tsc -w",
    "typings": "typings",
    "postinstall": "typings install"
  },
  "dependencies": {
    "@angular/common": "2.0.0-rc.5",
    "@angular/compiler": "2.0.0-rc.5",
    "@angular/core": "2.0.0-rc.5",
    "@angular/forms": "0.3.0",
    "@angular/http": "2.0.0-rc.5",
    "@angular/platform-browser": "2.0.0-rc.5",
    "@angular/platform-browser-dynamic": "2.0.0-rc.5",
    "@angular/router": "3.0.0-rc.1",
    "@angular/upgrade": "2.0.0-rc.5",
    "core-js": "^2.4.0",
    "reflect-metadata": "^0.1.3",
    "rxjs": "5.0.0-beta.6",
    "zone.js": "^0.6.12",

    "bootstrap": "^3.3.6",
        "extract-text-webpack-plugin": "^1.0.1"
  },
  "devDependencies": {
    "autoprefixer": "^6.3.2",
    "clean-webpack-plugin": "^0.1.9",
    "copy-webpack-plugin": "^2.1.3",
    "css-loader": "^0.23.0",
    "extract-text-webpack-plugin": "^1.0.1",
    "file-loader": "^0.8.4",
    "html-loader": "^0.4.0",
    "html-webpack-plugin": "^2.8.1",
    "jquery": "^2.2.0",
    "json-loader": "^0.5.3",
    "node-sass": "^3.4.2",
    "null-loader": "0.1.1",
    "postcss-loader": "^0.9.1",
    "raw-loader": "0.5.1",
    "rimraf": "^2.5.1",
    "sass-loader": "^3.1.2",
    "style-loader": "^0.13.0",
    "ts-helpers": "^1.1.1",
    "ts-loader": "0.8.2",
    "typescript": "1.8.10",
    "typings": "1.0.4",
    "url-loader": "^0.5.6",
    "webpack": "1.13.1",
    "webpack-dev-server": "^1.14.1"
  }
}

typings configuration

The typings are configured for webpack builds.

{
    "globalDependencies": {
        "core-js": "registry:dt/core-js#0.0.0+20160602141332",
        "node": "registry:dt/node#6.0.0+20160807145350"
    }
}

tsconfig configuration

The tsconfig is configured to use commonjs as the module.

{
    "compilerOptions": {
        "target": "es5",
        "module": "commonjs",
        "moduleResolution":  "node",
        "removeComments": true,
        "emitDecoratorMetadata": true,
        "experimentalDecorators": true,
        "noEmitHelpers": false,
        "sourceMap": true
    },
    "exclude": [
        "node_modules"
    ],
    "compileOnSave": false,
    "buildOnSave": false
}

Webpack build

The Webpack development build “webpack -d” just uses the source files and creates outputs for development. The production build copies everything required for the client application to the wwwroot folder, and uglifies the js files. The “webpack -d --watch” command can be used to automatically build the dist files if a source file is changed.

The Webpack config file was created using the excellent GitHub repository https://github.com/preboot/angular2-webpack. Thanks for this. Small changes were made to it, such as the process.env.NODE_ENV switch, and Webpack uses different source and output folders to match the ASP.NET Core project. If you decide to use two different projects, one for the server and one for the client, preboot or angular-cli, or both together, would be a good choice for the client application.

webpack.config.js

/// <binding ProjectOpened='Run - Development' />

var isProd = (process.env.NODE_ENV === 'production');

if (!isProd) {
    module.exports = require('./webpack.dev.js');
} else
{
    module.exports = require('./webpack.prod.js');
}

webpack.dev.js

var path = require('path');
var webpack = require('webpack');

var CommonsChunkPlugin = webpack.optimize.CommonsChunkPlugin;
var Autoprefixer = require('autoprefixer');
var HtmlWebpackPlugin = require('html-webpack-plugin');
var ExtractTextPlugin = require('extract-text-webpack-plugin');
var CopyWebpackPlugin = require('copy-webpack-plugin');
var CleanWebpackPlugin = require('clean-webpack-plugin');
var helpers = require('./webpack.helpers');

module.exports = {

    debug: true,
    //watch: true,
    devtool: 'eval-source-map',

    entry: {
        'polyfills': './angular2App/polyfills.ts',
        'vendor': './angular2App/vendor.ts',
        'app': './angular2App/boot.ts' // our angular app
    },

    output: {
        path: "./wwwroot/",
        filename: 'dist/[name].bundle.js',
        publicPath: "/"
    },

    resolve: {
        extensions: ['', '.ts', '.js', '.json', '.css', '.scss', '.html']
    },

    devServer: {
        historyApiFallback: true,
        stats: 'minimal',
        outputPath: path.join(__dirname, 'wwwroot/')
    },

    module: {
        loaders: [
            {
                test: /\.ts$/,
                loader: 'ts',
                query: {
                    'ignoreDiagnostics': [
                        2403, // 2403 -> Subsequent variable declarations
                        2300, // 2300 -> Duplicate identifier
                        2374, // 2374 -> Duplicate number index signature
                        2375, // 2375 -> Duplicate string index signature
                        2502 // 2502 -> Referenced directly or indirectly
                    ]
                },
                exclude: [/node_modules\/(?!(ng2-.+))/]
            },

            // copy those assets to output
            {
                test: /\.(png|jpg|gif|ico|woff|woff2|ttf|svg|eot)$/,
                exclude: /node_modules/,
                loader: "file?name=assets/[name]-[hash:6].[ext]",
            },

            // Load css files which are required in vendor.ts
            {
                test: /\.css$/,
                exclude: /node_modules/,
                loader: "style-loader!css-loader"
            },

            {
                test: /\.scss$/,
                exclude: /node_modules/,
                loader: 'raw-loader!style-loader!css-loader!sass-loader'
            },

            {
                test: /\.html$/,
                loader: 'raw'
            }
        ],
        noParse: [/.+zone\.js\/dist\/.+/, /.+angular2\/bundles\/.+/, /angular2-polyfills\.js/]
    },

    plugins: [
        new CleanWebpackPlugin(
            [
                './wwwroot/dist',
                './wwwroot/fonts',
                './wwwroot/assets'
            ]
        ),

        new CommonsChunkPlugin({
            name: ['vendor', 'polyfills']
        }),

        new HtmlWebpackPlugin({
            filename: 'index.html',
            inject: 'body',
            chunksSortMode: helpers.packageSort(['polyfills', 'vendor', 'app']),
            template: 'angular2App/index.html'
        }),

        new CopyWebpackPlugin([
            { from: './angular2App/images/*.*', to: "assets/", flatten: true }
        ])
    ]
};


webpack.prod.js

var path = require('path');
var webpack = require('webpack');

var CommonsChunkPlugin = webpack.optimize.CommonsChunkPlugin;
var Autoprefixer = require('autoprefixer');
var HtmlWebpackPlugin = require('html-webpack-plugin');
var ExtractTextPlugin = require('extract-text-webpack-plugin');
var CopyWebpackPlugin = require('copy-webpack-plugin');
var CleanWebpackPlugin = require('clean-webpack-plugin');
var helpers = require('./webpack.helpers');

module.exports = {

    entry: {
        'polyfills': './angular2App/polyfills.ts',
        'vendor': './angular2App/vendor.ts',
        'app': './angular2App/boot.ts' // our angular app
    },

    output: {
        path: "./wwwroot/",
        filename: 'dist/[name].bundle.js',
        publicPath: "/"
    },

    resolve: {
        extensions: ['', '.ts', '.js', '.json', '.css', '.scss', '.html']
    },

    devServer: {
        historyApiFallback: true,
        stats: 'minimal',
        outputPath: path.join(__dirname, 'wwwroot/')
    },

    module: {
        loaders: [
            {
                test: /\.ts$/,
                loader: 'ts',
                query: {
                    'ignoreDiagnostics': [
                        2403, // 2403 -> Subsequent variable declarations
                        2300, // 2300 -> Duplicate identifier
                        2374, // 2374 -> Duplicate number index signature
                        2375, // 2375 -> Duplicate string index signature
                        2502 // 2502 -> Referenced directly or indirectly
                    ]
                },
                exclude: [/node_modules\/(?!(ng2-.+))/]
            },

            // copy those assets to output
            {
                test: /\.(png|jpg|gif|ico|woff|woff2|ttf|svg|eot)$/,
                exclude: /node_modules/,
                loader: "file?name=assets/[name]-[hash:6].[ext]",
            },

            // Load css files which are required in vendor.ts
            {
                test: /\.css$/,
                exclude: /node_modules/,
                loader: "style-loader!css-loader"
            },

            {
                test: /\.scss$/,
                exclude: /node_modules/,
                loader: 'raw-loader!style-loader!css-loader!sass-loader'
            },

            {
                test: /\.html$/,
                loader: 'raw'
            }
        ],
        noParse: [/.+zone\.js\/dist\/.+/, /.+angular2\/bundles\/.+/, /angular2-polyfills\.js/]
    },

    plugins: [
        new CleanWebpackPlugin(
            [
                './wwwroot/dist',
                './wwwroot/fonts',
                './wwwroot/assets'
            ]
        ),
        new webpack.NoErrorsPlugin(),
        new webpack.optimize.DedupePlugin(),
        new webpack.optimize.UglifyJsPlugin(),
        new CommonsChunkPlugin({
            name: ['vendor', 'polyfills']
        }),

        new HtmlWebpackPlugin({
            filename: 'index.html',
            inject: 'body',
            chunksSortMode: helpers.packageSort(['polyfills', 'vendor', 'app']),
            template: 'angular2App/index.html'
        }),

        new CopyWebpackPlugin([
            { from: './angular2App/images/*.*', to: "assets/", flatten: true }
        ])
    ]
};

webpack.helpers.js

var path = require('path');

module.exports = {
    // Helper functions
    root: function (args) {
        args = Array.prototype.slice.call(arguments, 0);
        return path.join.apply(path, [__dirname].concat(args));
    },

    packageSort: function (packages) {
        // packages = ['polyfills', 'vendor', 'app']
        var len = packages.length - 1;
        var first = packages[0];
        var last = packages[len];
        return function sort(a, b) {
            // polyfills always first
            if (a.names[0] === first) {
                return -1;
            }
            // main always last
            if (a.names[0] === last) {
                return 1;
            }
            // vendor before app
            if (a.names[0] !== first && b.names[0] === last) {
                return -1;
            } else {
                return 1;
            }
        };
    }
};

Let’s dive into this a bit:

Firstly, all plugins are loaded which are required to process all the js, ts, … files which are included, or used in the project.

var path = require('path');
var webpack = require('webpack');

var CommonsChunkPlugin = webpack.optimize.CommonsChunkPlugin;
var Autoprefixer = require('autoprefixer');
var HtmlWebpackPlugin = require('html-webpack-plugin');
var ExtractTextPlugin = require('extract-text-webpack-plugin');
var CopyWebpackPlugin = require('copy-webpack-plugin');
var CleanWebpackPlugin = require('clean-webpack-plugin');

var isProd = (process.env.NODE_ENV === 'production');

The npm environment variable NODE_ENV is used to define the type of build, either a development build or a production build. The entries are configured depending on this parameter.

    config.entry = {
        'polyfills': './angular2App/polyfills.ts',
        'vendor': './angular2App/vendor.ts',
        'app': './angular2App/boot.ts' // our angular app
    };

The entries provide Webpack with the required information about where to start from, or where to hook in. Three entry points are defined in this configuration. These strings point to the files required in the solution. The starting point for the app itself is boot.ts, and all the vendor scripts are bundled into one file via vendor.ts.

// RxJS.
import 'rxjs';

// Angular 2.
import '@angular/common';
import '@angular/compiler';
import '@angular/core';
import '@angular/http';
import '@angular/platform-browser';
import '@angular/platform-browser-dynamic';
import '@angular/router';

// Reflect Metadata.
import 'reflect-metadata';

// Other libraries.
import 'jquery/src/jquery';
import 'bootstrap/dist/js/bootstrap';

import './css/bootstrap.css';
import './css/bootstrap-theme.css';

Webpack knows which paths to run and includes the corresponding files and packages.

The “loaders” array inside the “module” section of the configuration provides Webpack with the following information: which files it needs to get and how to read them. The loaders tell Webpack exactly what to do with the files, for example transpiling or minifying them.

In this project configuration, if a production node parameter is set, different plugins are pushed into the sections because the files should be treated differently.

Angular 2 index.html

The index.html contains all the references required for the Angular 2 client. The scripts are added as part of the build and not manually. The developer only needs to use the imports.

Source index.html file in the angular2App/public folder:

<!doctype html>
<html>
<head>
    <base href="./">

    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Angular 2 Webpack Demo</title>

    <meta http-equiv="content-type" content="text/html; charset=utf-8" />

    <meta name="viewport" content="width=device-width, initial-scale=1.0" />

</head>
<body>
    <my-app>Loading...</my-app>
</body>
</html>


And the produced build file in the wwwroot folder. The scripts for the app, vendor and boot have been added using Webpack. Hashes are used in a production build for cache busting.

<!doctype html>
<html>
<head>
    <base href="./">

    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Angular 2 Webpack Demo</title>

    <meta http-equiv="content-type" content="text/html; charset=utf-8" />

    <meta name="viewport" content="width=device-width, initial-scale=1.0" />

    <link rel="stylesheet" href="css/bootstrap.css">
</head>
<body>
    <my-app>Loading...</my-app>
<script type="text/javascript" src="http://localhost:5000/dist/polyfills.js"></script><script type="text/javascript" src="http://localhost:5000/dist/vendor.js"></script><script type="text/javascript" src="http://localhost:5000/dist/app.js"></script></body>
</html>

Visual Studio tools

Webpack task runner from Mads Kristensen can be downloaded and used to send Webpack commands using the webpack.config.js file. The node NODE_ENV parameter is used to define the build type. The parameter can be set to “development”, or “production”.

vs_webpack_angular2_02

The Webpack task runner can also be used by double clicking the task. The execution results are then displayed in the task runner console.

vs_webpack_angular2_03

This runner provides a number of useful commands which can be activated automatically. These tasks can be attached to Visual Studio events by right clicking the task and selecting a binding. This adds a binding tag to the webpack.config.js file.

/// <binding ProjectOpened='Run - Development' />

Webpack SASS

SASS is used to style the SPA application. The SASS files can be built using the SASS loader. Webpack can build all the styles inline or as an external file, depending on your Webpack config.

{
  test: /\.scss$/,
  exclude: root('angular2App', 'app'),
  loader: ExtractTextPlugin.extract('style', 'css?sourceMap!postcss!sass')
},

Webpack Clean

clean-webpack-plugin is used to clean up the deployment folder inside the wwwroot. This ensures that the application uses the latest files.

The clean task can be configured as follows:

var CleanWebpackPlugin = require('clean-webpack-plugin');

And used in Webpack.

  new CleanWebpackPlugin(['./wwwroot/dist']),

Angular 2 component files

The Angular 2 components are slightly different to the standard example components. The templates and the styles use require, which adds the html, css or scss to the file directly using Webpack, or as an external link, depending on the Webpack config.

import { Observable } from 'rxjs/Observable';
import { Component, OnInit } from '@angular/core';
import { CORE_DIRECTIVES } from '@angular/common';
import { Http } from '@angular/http';
import { DataService } from '../services/DataService';


@Component({
    selector: 'homecomponent',
    template: require('./home.component.html'),
    directives: [CORE_DIRECTIVES],
    providers: [DataService]
})

export class HomeComponent implements OnInit {

    public message: string;
    public values: any[];

    constructor(private _dataService: DataService) {
        this.message = "Hello from HomeComponent constructor";
    }

    ngOnInit() {
        this._dataService
            .GetAll()
            .subscribe(data => this.values = data,
            error => console.log(error),
            () => console.log('Get all complete'));
    }
}

The ASP.NET Core API

The ASP.NET Core API is quite small. It just provides a demo CRUD service.

 [Route("api/[controller]")]
    public class ValuesController : Microsoft.AspNetCore.Mvc.Controller
    {
        // GET: api/values
        [HttpGet]
        public IActionResult Get()
        {
            return new JsonResult(new string[] { "value1", "value2" });
        }

        // GET api/values/5
        [HttpGet("{id}")]
        public IActionResult Get(int id)
        {
            return new JsonResult("value");
        }

        // POST api/values
        [HttpPost]
        public IActionResult Post([FromBody]string value)
        {
            return new CreatedAtRouteResult("anyroute", null);
        }

        // PUT api/values/5
        [HttpPut("{id}")]
        public IActionResult Put(int id, [FromBody]string value)
        {
            return new OkResult();
        }

        // DELETE api/values/5
        [HttpDelete("{id}")]
        public IActionResult Delete(int id)
        {
            return new NoContentResult();
        }
    }

The Angular2 Http-Service

Note that in a normal environment, you should always return the typed classes and never the plain HTTP response like here. This application only has strings to return, and this is enough for the demo.

import { Injectable } from '@angular/core';
import { Http, Response, Headers } from '@angular/http';
import 'rxjs/add/operator/map'
import { Observable } from 'rxjs/Observable';
import { Configuration } from '../app.constants';

@Injectable()
export class DataService {

    private actionUrl: string;
    private headers: Headers;

    constructor(private _http: Http, private _configuration: Configuration) {

        this.actionUrl = _configuration.Server + 'api/values/';

        this.headers = new Headers();
        this.headers.append('Content-Type', 'application/json');
        this.headers.append('Accept', 'application/json');
    }

    public GetAll = (): Observable<any> => {
        return this._http.get(this.actionUrl).map((response: Response) => response.json());
    }

    public GetSingle = (id: number): Observable<any> => {
        return this._http.get(this.actionUrl + id).map(res => res.json());
    }

    public Add = (itemName: string): Observable<any> => {
        var toAdd = JSON.stringify({ ItemName: itemName });

        return this._http.post(this.actionUrl, toAdd, { headers: this.headers }).map(res => res.json());
    }

    public Update = (id: number, itemToUpdate: any): Observable<any> => {
        return this._http
            .put(this.actionUrl + id, JSON.stringify(itemToUpdate), { headers: this.headers })
            .map(res => res.json());
    }

    public Delete = (id: number): Observable<any> => {
        return this._http.delete(this.actionUrl + id);
    }
}

Notes:

The Webpack configuration could also build all of the scss and css files to a separate app.css or app.”hash”.css which could be loaded as a single file in the distribution. Some of the vendor js and css could also be loaded directly in the html header using the index.html file and not included in the Webpack build.

If you are building both the client application and the server application in separate projects, you could also consider angular-cli or angular2-webpack for the client application.

Debugging the Angular 2 application in Visual Studio with breakpoints is not possible with this setup. The SPA can be debugged in Chrome.

Links:

https://github.com/preboot/angular2-webpack

https://webpack.github.io/docs/

https://github.com/jtangelder/sass-loader

https://github.com/petehunt/webpack-howto/blob/master/README.md

http://www.sochix.ru/how-to-integrate-webpack-into-visual-studio-2015/

http://sass-lang.com/

WebPack Task Runner from Mads Kristensen

http://blog.thoughtram.io/angular/2016/06/08/component-relative-paths-in-angular-2.html

https://angular.io/docs/ts/latest/guide/webpack.html

https://angular.io/docs/ts/latest/tutorial/toh-pt5.html

http://angularjs.blogspot.ch/2016/06/improvements-coming-for-routing-in.html?platform=hootsuite



Damien Bowden: Adding SQL localization data using an Angular 2 form and ASP.NET Core

This article shows how SQL localized data can be added to a database using Angular 2 forms which can then be displayed without restarting the application. The ASP.NET Core localization is implemented using Localization.SqlLocalizer. This NuGet package is used to save and retrieve the dynamic localized data. This makes it possible to add localized data at run-time.

Code: https://github.com/damienbod/Angular2LocalizationAspNetCore

2016.08.14: Updated to Angular2 rc5 and angular2localization 0.8.10
2016.06.28: Updated to Angular2 rc3, angular2localization 0.8.5 and dotnet RTM

Posts in this series

The ASP.NET Core API provides an HTTP POST action method which allows the user to add a new ProductCreateEditDto object to the application. The view model adds both the product data and the localization data to the SQLite database using Entity Framework Core.

[HttpPost]
public IActionResult Post([FromBody]ProductCreateEditDto value)
{
	_productCudProvider.AddProduct(value);
	return Created("http://localhost:5000/api/ShopAdmin/", value);
}
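
The _productCudProvider itself is not shown here. A purely hypothetical sketch of what AddProduct could do with Entity Framework Core follows; the ProductContext, Products and LocalizationRecords names are assumptions for illustration, not the names used in the repository:

public class ProductCudProvider
{
    private readonly ProductContext _context; // hypothetical EF Core DbContext

    public ProductCudProvider(ProductContext context)
    {
        _context = context;
    }

    public void AddProduct(ProductCreateEditDto value)
    {
        // Persist the product data itself.
        _context.Products.Add(new Product
        {
            Name = value.Name,
            Description = value.Description,
            ImagePath = value.ImagePath,
            PriceEUR = value.PriceEUR,
            PriceCHF = value.PriceCHF
        });

        // Persist each localized record so the SQL localizer can serve it at run-time.
        foreach (var record in value.LocalizationRecords)
        {
            _context.LocalizationRecords.Add(new LocalizationRecord
            {
                Key = record.Key,
                Text = record.Text,
                LocalizationCulture = record.LocalizationCulture
            });
        }

        _context.SaveChanges();
    }
}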

The Angular 2 app uses the ProductService to send the HTTP POST request to the ShopAdmin service. The post method sends the payload as a JSON object in the body of the request.

public CreateProduct = (product: ProductCreateEdit): Observable<ProductCreateEdit> => {
	let item: string = JSON.stringify(product);
	this.setHeaders();
	return this._http.post(this.actionUrlShopAdmin, item, {
		headers: this.headers,
                body: '',
	}).map((response: Response) => <ProductCreateEdit>response.json())
	.catch(this.handleError);
}

The client model is the same as the server side view model. The ProductCreateEdit class has an array of localized records.

import { LocalizationRecord } from './LocalizationRecord';

export class ProductCreateEdit {
    Id: number;
    Name: string;
    Description: string;
    ImagePath: string;
    PriceEUR: number;
    PriceCHF: number;
    LocalizationRecords: LocalizationRecord[];
} 

export class LocalizationRecord {
    Key: string;
    Text: string;
    LocalizationCulture: string;
} 

The shop-admin.component.html template contains the form which is used to enter the data, which is then sent to the server using the product service. Forms in Angular 2 have changed a lot compared to Angular 1 forms. The form uses the FormsModule to define the Angular 2 form specifics. These control items need to be defined in the corresponding ts file.

<div class="container">
    <div [hidden]="submitted">
        <h1>New Product</h1>
        <form *ngIf="active" (ngSubmit)="onSubmit()" #productForm="ngForm">
            <div class="form-group">
                <label class="control-label" for="name">{{ 'ADD_PRODUCT_NAME' | translate:lang }}</label>
                <input type="text" class="form-control" id="name" required  [(ngModel)]="Product.Name" name="name" placeholder="name" #name="ngModel">
                <div [hidden]="name.valid || name.pristine" class="alert alert-danger">
                    Name is required
                </div>
            </div>
          
            <div class="form-group">
                <label class="control-label" for="description">{{ 'ADD_PRODUCT_DESCRIPTION' | translate:lang }}</label>
                <input type="text" class="form-control" id="description" required [(ngModel)]="Product.Description" name="description" placeholder="description" #description="ngModel">
                <div [hidden]="description.valid || description.pristine" class="alert alert-danger">
                    description is required
                </div>
            </div>

            <div class="form-group">
                <label class="control-label" for="priceEUR">{{ 'ADD_PRODUCT_PRICE_EUR' | translate:lang }}</label>
                <input type="number" class="form-control" id="priceEUR" required [(ngModel)]="Product.PriceEUR" name="priceEUR" placeholder="priceEUR" #priceEUR="ngModel">
                <div [hidden]="priceEUR.valid || priceEUR.pristine" class="alert alert-danger">
                    priceEUR is required
                </div>
            </div>

            <div class="form-group">
                <label class="control-label" for="priceCHF">{{ 'ADD_PRODUCT_PRICE_CHF' | translate:lang }}</label>
                <input type="number" class="form-control" id="priceCHF" required [(ngModel)]="Product.PriceCHF" name="priceCHF" placeholder="priceCHF" #priceCHF="ngModel">
                <div [hidden]="priceCHF.valid || priceCHF.pristine" class="alert alert-danger">
                    priceCHF is required
                </div>
            </div>

            <div class="form-group">
                <label class="control-label" >{{ 'ADD_PRODUCT_LOCALIZED_NAME' | translate:lang }}</label>

                <div class="row">
                    <div class="col-md-3"><em>de</em></div>
                    <div class="col-md-9">
                        <input type="text" class="form-control" id="Namede" required [(ngModel)]="Name_de"  name="Namede" #Namede="ngModel">
                        <div [hidden]="Namede.valid || Namede.pristine" class="alert alert-danger">
                            Name_de is required
                        </div>
                    </div>
                </div>
                <div class="row">
                    <div class="col-md-3"><em>fr</em></div>
                    <div class="col-md-9">
                        <input type="text" class="form-control" id="Namefr" required  [(ngModel)]="Name_fr"  name="Namefr" #Namefr="ngModel">
                        <div [hidden]="Namefr.valid || Namefr.pristine" class="alert alert-danger">
                            Name_fr is required
                        </div>
                    </div>
                </div>
                <div class="row">
                    <div class="col-md-3"><em>it</em></div>
                    <div class="col-md-9">
                        <input type="text" class="form-control" id="Nameit" required [(ngModel)]="Name_it" name="Nameit" #Nameit="ngModel">
                        <div [hidden]="Nameit.valid || Nameit.pristine" class="alert alert-danger">
                            Name_it is required
                        </div>
                    </div>
                </div>
                <div class="row">
                    <div class="col-md-3"><em>en</em></div>
                    <div class="col-md-9">
                        <input type="text" class="form-control" id="Nameen" required [(ngModel)]="Name_en"  name="Nameen" #Nameen="ngModel">
                        <div [hidden]="Nameen.valid || Nameen.pristine" class="alert alert-danger">
                            Name_en is required
                        </div>
                    </div>
                </div>

            </div>

            <div class="form-group">
                <label class="control-label">{{ 'ADD_PRODUCT_LOCALIZED_DESCRIPTION' | translate:lang }}</label>

                <div class="row">
                    <div class="col-md-3"><em>de</em></div>
                    <div class="col-md-9">
                        <input type="text" class="form-control" id="Descriptionde" required [(ngModel)]="Description_de" name="Descriptionde" #Descriptionde="ngModel">
                        <div [hidden]="Descriptionde.valid || Descriptionde.pristine" class="alert alert-danger">
                            Description DE is required
                        </div>
                    </div>
                </div>
                <div class="row">
                    <div class="col-md-3"><em>fr</em></div>
                    <div class="col-md-9">
                        <input type="text" class="form-control" id="Descriptionfr" required [(ngModel)]="Description_fr" name="Descriptionfr" #Descriptionfr="ngModel">
                        <div [hidden]="Descriptionfr.valid || Descriptionfr.pristine" class="alert alert-danger">
                            Description FR is required
                        </div>
                    </div>
                </div>
                <div class="row">
                    <div class="col-md-3"><em>it</em></div>
                    <div class="col-md-9">
                        <input type="text" class="form-control" id="Descriptionit" required [(ngModel)]="Description_it" name="Descriptionit" #Descriptionit="ngModel">
                        <div [hidden]="Descriptionit.valid || Descriptionit.pristine" class="alert alert-danger">
                            Description IT is required
                        </div>
                    </div>
                </div>
                <div class="row">
                    <div class="col-md-3"><em>en</em></div>
                    <div class="col-md-9">
                        <input type="text" class="form-control" id="Descriptionen" required [(ngModel)]="Description_en" name="Descriptionen" #Descriptionen="ngModel">
                        <div [hidden]="Descriptionen.valid || Descriptionen.pristine" class="alert alert-danger">
                            Description EN is required
                        </div>
                    </div>
                </div>

            </div>
            <button type="submit" class="btn btn-default" [disabled]="!productForm.form.valid">Submit</button>

        </form>
    </div>
</div>

The built-in Angular 2 form directives such as ngModel and ngForm are provided by the '@angular/forms' library. The standard Validators, or your own custom Validators, can be added here. The Create method uses the control values and the Product model to build the full product item and sends the data to the Shop Admin Controller on the server. When the product has been created successfully, the user is redirected to the Shop component, which shows all products in the selected language.

import { Component, OnInit } from '@angular/core';
import { CORE_DIRECTIVES, CommonModule, FORM_PROVIDERS }   from '@angular/common';
import { FormsModule }    from '@angular/forms';
import { Router, ROUTER_DIRECTIVES} from '@angular/router';
import { Observable } from 'rxjs/Observable';
import { Http } from '@angular/http';
import { Product } from '../services/Product';
import { ProductCreateEdit } from  '../services/ProductCreateEdit';
import { Locale, LocaleService, LocalizationService} from 'angular2localization/angular2localization';
import { ProductService } from '../services/ProductService';
import { TranslatePipe } from 'angular2localization/angular2localization';

@Component({
    selector: 'shopadmincomponent',
    template: require('./shop-admin.component.html'),
    directives: [CORE_DIRECTIVES, ROUTER_DIRECTIVES],
    pipes: [TranslatePipe]
})

export class ShopAdminComponent extends Locale implements OnInit  {

    public message: string;
    public Product: ProductCreateEdit = new ProductCreateEdit();
    public Currency: string;

    public Name_de: string;
    public Name_fr: string;
    public Name_it: string;
    public Name_en: string;
    public Description_de: string;
    public Description_fr: string;
    public Description_it: string;
    public Description_en: string;

    submitted = false;

    onSubmit() {
        this.submitted = true;
        this.Create();
    }

    // Reset the form with a new product AND restore 'pristine' class state
    // by toggling 'active' flag which causes the form
    // to be removed/re-added in a tick via NgIf
    // TODO: Workaround until NgForm has a reset method (#6822)
    active = true;
    saving: boolean = false;

    constructor(
        private router: Router,
        public _localeService: LocaleService,
        public localization: LocalizationService,
        private _productService: ProductService
    ) {

        super(null, localization);

        this.message = "shop-admin.component";

        this._localeService.languageCodeChanged.subscribe(item => this.onLanguageCodeChangedDataReceived(item));
        
    }

    ngOnInit() {
        console.log("ngOnInit ShopAdminComponent");
        // TODO Get product if Id exists
        this.initProduct();

        this.Currency = this._localeService.getCurrentCurrency();
        if (!(this.Currency === "CHF" || this.Currency === "EUR")) {
            this.Currency = "CHF";
        }
    }

    public Create() {

        this.submitted = true;

        this.saving = true;

        this.Product.LocalizationRecords = [];
        this.Product.LocalizationRecords.push({ Key: this.Product.Name, LocalizationCulture: "de-CH", Text: this.Name_de });
        this.Product.LocalizationRecords.push({ Key: this.Product.Name, LocalizationCulture: "fr-CH", Text: this.Name_fr });
        this.Product.LocalizationRecords.push({ Key: this.Product.Name, LocalizationCulture: "it-CH", Text: this.Name_it });
        this.Product.LocalizationRecords.push({ Key: this.Product.Name, LocalizationCulture: "en-US", Text: this.Name_en });

        this.Product.LocalizationRecords.push({ Key: this.Product.Description, LocalizationCulture: "de-CH", Text: this.Description_de });
        this.Product.LocalizationRecords.push({ Key: this.Product.Description, LocalizationCulture: "fr-CH", Text: this.Description_fr });
        this.Product.LocalizationRecords.push({ Key: this.Product.Description, LocalizationCulture: "it-CH", Text: this.Description_it });
        this.Product.LocalizationRecords.push({ Key: this.Product.Description, LocalizationCulture: "en-US", Text: this.Description_en });

        this._productService.CreateProduct(this.Product)
            .subscribe(data => {
                this.saving = false;
                this.router.navigate(['/shop']);
            }, error => {
                this.saving = false;
                console.log(error)
            },
            () => this.saving = false);
    }
 

    private onLanguageCodeChangedDataReceived(item) {
        console.log("onLanguageCodeChangedDataReceived Shop Admin");
        console.log(item + " : " + this._localeService.getCurrentLanguage());
    }


    private initProduct() {
        this.Product = new ProductCreateEdit();      
    }

}

The form can then be used and the data is sent to the server.
localizedAngular2Form_01

And then displayed in the Shop component.

localizedAngular2Form_02
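
For illustration, the JSON body that CreateProduct posts for such a product looks roughly like the object below (the values are made up); each localization record uses the invariant Name or Description as the key and carries the translated text for one culture:

// Illustrative payload only; the values do not come from the post.
const examplePayload: ProductCreateEdit = {
    Id: 0,
    Name: 'Marzipan',
    Description: 'Sweet almond paste',
    ImagePath: '',
    PriceEUR: 3.5,
    PriceCHF: 4.0,
    LocalizationRecords: [
        { Key: 'Marzipan', LocalizationCulture: 'de-CH', Text: 'Marzipan' },
        { Key: 'Marzipan', LocalizationCulture: 'fr-CH', Text: 'Massepain' },
        { Key: 'Sweet almond paste', LocalizationCulture: 'de-CH', Text: 'Süsse Mandelmasse' }
        // ... one record per culture for both the Name and the Description
    ]
};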

Notes

Angular 2 forms have a few validation issues which make me uncomfortable using them.

Links

https://angular.io/docs/ts/latest/guide/forms.html

https://auth0.com/blog/2016/05/03/angular2-series-forms-and-custom-validation/

http://odetocode.com/blogs/scott/archive/2016/05/02/the-good-and-the-bad-of-programming-forms-in-angular.aspx

http://blog.thoughtram.io/angular/2016/03/14/custom-validators-in-angular-2.html

Implementing Angular2 forms – Beyond basics (part 1)

https://docs.asp.net/en/latest/fundamentals/localization.html

https://www.nuget.org/profiles/damienbod

https://github.com/robisim74/angular2localization

https://angular.io



Pedro Félix: The OpenID Connect Cast of Characters

Introduction

The OpenID Connect protocol provides support for both delegated authorization and federated authentication, unifying features that traditionally were provided by distinct protocols. As a consequence, the OpenID Connect protocol parties play multiple roles at the same time, which can sometimes be hard to grasp. This post aims to clarify this, describing how the OpenID Connect parties relate to each other and to the equivalent parties in previous protocols, namely OAuth 2.0.

OAuth 2.0

The OAuth 2.0 authorization framework introduced a new set of characters into the distributed access control story.

oauth2-1

  • The User (aka Resource Owner) is a human with the capability to authorize access to a set of protected resources (i.e. the user is the resource owner).
  • The Resource Server is the HTTP server exposing access to the protected resources via an HTTP API. This access is dependent on the presence and validation of access tokens in the HTTP request.
  • The Client Application is an HTTP client that accesses user resources on the Resource Server. To perform these accesses, the client application needs to obtain access tokens issued by the Authorization Server.
  • The Authorization Server is the party issuing the access tokens used by the Client Application on the requests to the Resource Server.
  • Access Tokens are strings created by the Authorization Server and targeted to the Resource Server. They are opaque to the Client Application, which just obtains them from the Authorization Server and uses them on the Resource Server without any further processing.

To make things a little bit more concrete, let's look at an example:

  • The User is Alice and the protected resources are her repositories at GitHub.
  • The Resource Server is GitHub’s API.
  • The Client Application is a third-party application, such as Huboard or Travis CI, that needs to access Alice’s repositories.
  • The Authorization Server is also GitHub, providing the OAuth 2.0 protocol “endpoints” for the client application to obtain the access tokens.

OAuth 2.0 models the Resource Server and the Authorization Server as two distinct parties; however, they can be run by the same organization (GitHub, in the previous example).

oauth2-2

An important characteristic to emphasise is that the access token does not directly provide any information about the User to the Client Application; it simply provides access to a set of protected resources. The fact that some of these protected resources may be used to provide information about the User's identity is out of scope of OAuth 2.0.
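
To make this opacity concrete, here is a minimal sketch (the endpoint and token value are illustrative, not from the post) of how a client application uses an access token: it attaches the token to the request as a bearer credential and never parses its contents.

// Illustrative only: the access token is an opaque string to the client.
const accessToken = 'token-obtained-from-the-authorization-server';

// The client simply attaches the token to requests sent to the Resource Server
// (for example GitHub's API); it never inspects what is inside the token.
fetch('https://api.github.com/user/repos', {
    headers: { Authorization: `Bearer ${accessToken}` }
})
    .then(response => response.json())
    .then(repos => console.log(repos.length));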

Delegated Authentication and Identity Federation

However, delegated authentication and identity federation protocols, such as the SAML protocols or the WS-Federation protocol, use different terminology.

federation

  • The Relying Party (or Service Provider in the SAML protocol terminology) is typically a Web application that delegates user authentication to an external Identity Provider.
  • The Identity Provider is the entity authenticating the user and communicating her identity claims to the Relying Party.
  • The identity claims communication between these two parties is made via identity tokens, which are protected containers for identity claims:
    • The Identity Provider creates the identity token.
    • The Relying Party consumes the identity token by validating it and using the contained identity claims.

Sometimes the same entity can play both roles. For instance, an Identity Provider can re-delegate the authentication process to another Identity Provider:

  • An Organisational Web application (e.g. order management) delegates the user authentication process to the Organisational Identity Provider.
  • However, this Organisational Identity Provider re-delegates user authentication to a Partner Identity Provider.
  • In this case, the Organisational Identity Provider is simultaneously:
    • A Relying Party for the authentication made by the Partner Identity Provider.
    • An Identity Provider, providing identity claims to the Organisational Web Application.

federation-2

In these protocols, the main goal of the identity token is to provide identity information about the User to the Relying Party. In particular, the identity token is not meant to provide access to a set of protected resources. This characteristic sharply contrasts with OAuth 2.0 access tokens.

OpenID Connect

The OpenID Connect protocol is “a simple identity layer on top of the OAuth 2.0 protocol”, providing delegated authorization as well as authentication delegation and identity federation. It unifies in a single protocol the functionalities that previously were provided by distinct protocols. As a consequence, there are now multiple parties that play more than one role:

  • The OpenID Provider (a new term introduced by the OpenID Connect specification) is both an Identity Provider and an Authorization Server, simultaneously issuing identity tokens and access tokens.
  • The Relying Party is also a Client Application. It receives both identity tokens and access tokens from the OpenID Provider. However, there is a significant difference in how this party uses each of these tokens, as sketched in the code after this list:
    • The identity tokens are consumed by the Relying Party/Client Application to obtain the user’s identity.
    • The access tokens are not directly consumed by the Relying Party. Instead they are attached to requests made to the Resource Server, without ever being opened at the Relying Party.
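
A small sketch of that difference (illustrative code, not from the post; a real Relying Party would validate the identity token's signature, issuer, audience and expiry with an OpenID Connect library rather than just decoding it):

// The identity token is a JWT consumed by the Relying Party itself:
// after validation, its payload yields the user's identity claims.
function readIdentityClaims(idToken: string): any {
    const payload = idToken.split('.')[1]
        .replace(/-/g, '+')
        .replace(/_/g, '/'); // base64url -> base64
    return JSON.parse(Buffer.from(payload, 'base64').toString('utf8'));
}

// The access token is never opened by the Relying Party: it is forwarded
// as-is to the Resource Server, exactly as in the OAuth 2.0 sketch above.
function callResourceServer(accessToken: string) {
    return fetch('https://api.example.org/resource', {
        headers: { Authorization: `Bearer ${accessToken}` }
    });
}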

oidc

I hope this post sheds some light on the dual nature of the parties in the OpenID Connect protocol.

Please feel free to use the comments section to ask any questions.


