Anuraj Parameswaran: Bundling and Minification in ASP.NET Core

This post is about Bundling and Minification in ASP.NET Core. Bundling and minification are two techniques you can use in ASP.NET to improve page load performance for your web application. Bundling combines multiple files into a single file. Minification performs a variety of code optimizations on scripts and CSS, which results in smaller payloads. In the ASP.NET Core RTM release, Microsoft introduced the “BundlerMinifier.Core” tool, which helps you bundle and minify JavaScript and style sheet files. Unlike previous versions of MVC, bundling and minification now happen at development time, not at runtime. To use “BundlerMinifier.Core”, first you need to add a reference to BundlerMinifier.Core in the project.json tools section.
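
As a rough sketch (the version number is illustrative, not taken from the post), the tools entry looks like this; the bundles defined in a bundleconfig.json file can then be built from the command line with dotnet bundle:

"tools": {
    "BundlerMinifier.Core": "2.0.238"
}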


Anuraj Parameswaran: ASP.NET Core with Nginx as reverse proxy

This post is about running your ASP.NET Core application with Nginx as a reverse proxy on Windows. Nginx is a web server. It can act as a reverse proxy server for the HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer and an HTTP cache. Nginx runs on Unix, Linux, BSD variants, OS X, Solaris, AIX, HP-UX, and Windows. Released under the terms of a BSD-like license, Nginx is free and open source software. A few months back at a K-MUG Techday NodeJS session, I asked a question about using NodeJS in enterprise projects, and that is how I got introduced to Nginx and the reverse proxy concept in NodeJS. Similar to Node, ASP.NET Core also supports Kestrel hosting in addition to IIS, so it can be used along with Nginx and hosted on Linux as well.
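
As a minimal sketch of the idea (server name and port are illustrative), an Nginx server block forwarding traffic to a Kestrel server listening on port 5000 looks roughly like this:

server {
    listen 80;
    server_name example.com;

    location / {
        # forward all requests to the Kestrel server
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}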


Anuraj Parameswaran: Running ASP.NET Core 1.0 in Docker

This post is about running your ASP.NET Core application on Docker for Windows. Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. Recently Docker introduced Docker for Windows, and the ASP.NET team came up with Docker support for ASP.NET Core as well. To deploy an ASP.NET Core application, first you need to download Docker for Windows. You can get it from here.
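
For illustration, a minimal Dockerfile for a published ASP.NET Core 1.0 application might look like the following sketch (the image tag and assembly name are assumptions, not taken from the post):

# base image and app name are illustrative
FROM microsoft/dotnet:latest
WORKDIR /app
# copy the output of "dotnet publish" into the image
COPY ./publish .
ENV ASPNETCORE_URLS http://+:5000
EXPOSE 5000
ENTRYPOINT ["dotnet", "MyWebApp.dll"]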


Dominick Baier: Identity Videos, Podcasts and Slides from Conference Season 2016/1

My plan was to cut down on conferences and travelling in general – this didn’t work out ;) I did more conferences in the first 6 months of 2016 than I did in total last year. Weird.

Here are some of the digital artefacts:

NDC Oslo 2016: Authentication & secure API access for native & mobile Applications

DevSum 2016: What’s new in ASP.NET Core Security

DevSum 2016: Buzzfrog Podcast with Dag König

DevWeek 2016: Modern Applications need modern Identity

DevWeek 2016: Implementing OpenID Connect and OAuth 2.0 with IdentityServer

All my slides are on speakerdeck.




Filip Woj: Inheriting route attributes in ASP.NET Web API

I was recently working on a project, where I had a need to inherit routes from a generic base Web API controller. This is not supported by Web API out of the box, but can be enabled with a tiny configuration tweak. Let’s have a look.

The problem with inheriting attribute routes

If you look at the definition of the RouteAttribute in ASP.NET Web API, you will see that it’s marked as an “inheritable” attribute. As such, it’s reasonable to assume that if you use that attribute on a base controller, it will be respected in a child controller you create off the base one.
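
For reference, the attribute is declared roughly like this in the Web API source (paraphrased; note the Inherited = true):

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method,
    Inherited = true, AllowMultiple = true)]
public sealed class RouteAttribute : Attribute, IDirectRouteFactory
{
    // ...
}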

However, in reality, that is not the case, and that’s due to the internal logic in DefaultDirectRouteProvider – which is the default implementation of the way how Web API discovers attribute routes.

We discussed this class (and the entire extensibility point, as the direct route provider can be replaced) before – for example when implementing a centralized route prefix for Web API.

So if this is your generic Web API code, it will not work out of the box:

public abstract class GenericController<TEntity> : ApiController where TEntity : class, IMyEntityDefinition, new()
{
    private readonly IGenericRepository<TEntity> _repo;

    protected GenericController(IGenericRepository<TEntity> repo)
    {
        _repo = repo;
    }

    [Route("{id:int}")]
    public virtual async Task<IHttpActionResult> Get(int id)
    {
        var result = await _repo.FindAsync(id);
        if (result == null)
        {
            return NotFound();
        }

        return Ok(result);
    }
}

[RoutePrefix("api/items")]
public class ItemController : GenericController<Item>
{
    public ItemController(IGenericRepository<Item> repo) : base(repo)
    {}
}

Ignoring the implementation details of the repository pattern here, assuming all your dependency injection is configured already – with the above controller, trying to hit api/items/{id} is going to produce 404.

The solution for inheriting attribute routes

One of the methods that this default direct route provider exposes as overrideable, is the one shown below. It is responsible for extracting route attributes from an action descriptor:

protected virtual IReadOnlyList<IDirectRouteFactory> GetActionRouteFactories(HttpActionDescriptor actionDescriptor)
{
    // Ignore the Route attributes from inherited actions.
    ReflectedHttpActionDescriptor reflectedActionDescriptor = actionDescriptor as ReflectedHttpActionDescriptor;
    if (reflectedActionDescriptor != null &&
        reflectedActionDescriptor.MethodInfo != null &&
        reflectedActionDescriptor.MethodInfo.DeclaringType != actionDescriptor.ControllerDescriptor.ControllerType)
    {
        return null;
    }

    Collection<IDirectRouteFactory> newFactories = actionDescriptor.GetCustomAttributes<IDirectRouteFactory>(inherit: false);

    Collection<IHttpRouteInfoProvider> oldProviders = actionDescriptor.GetCustomAttributes<IHttpRouteInfoProvider>(inherit: false);

    List<IDirectRouteFactory> combined = new List<IDirectRouteFactory>();
    combined.AddRange(newFactories);

    foreach (IHttpRouteInfoProvider oldProvider in oldProviders)
    {
        if (oldProvider is IDirectRouteFactory)
        {
            continue;
        }

        combined.Add(new RouteInfoDirectRouteFactory(oldProvider));
    }

    return combined;
}

Without going into too much detail about this code – it’s clearly visible that it specifically ignores inherited route attributes (route attributes implement the IDirectRouteFactory interface).

So in order to make our initial sample generic controller work, we need to override the above method and read all inherited routes. This is extremely simple and is shown below:

public class InheritanceDirectRouteProvider : DefaultDirectRouteProvider
{
    protected override IReadOnlyList<IDirectRouteFactory> GetActionRouteFactories(HttpActionDescriptor actionDescriptor)
    {
        return actionDescriptor.GetCustomAttributes<IDirectRouteFactory>(true);
    }
}

This can now be registered at the application startup against your HttpConfiguration – which is shown in the next snippet as an extension method + OWIN Startup class.

public static class HttpConfigurationExtensions
{
    public static void MapInheritedAttributeRoutes(this HttpConfiguration config)
    {
        config.MapHttpAttributeRoutes(new InheritanceDirectRouteProvider());
    }
}

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();
        config.MapInheritedAttributeRoutes();
        app.UseWebApi(config);
    }
}

And that’s it!


Darrel Miller: Back to my core

I've spent a large part of the last two years playing the role of a technical marketeer.  Call it developer advocate, API Evangelist, or my favourite title, API Concierge, my role was to engage with developers and help them, in any way I could, to build better HTTP APIs.  I have really enjoyed the experience and had the opportunity to meet many great people.  However, the more you hear yourself talk about what people should do, the more you are reminded that you aren't actually doing the stuff you are talking about any more.  The time has come for me to stop just talking about building production systems and start doing it again.


Code is the answer

Starting this month, I am joining Microsoft as a full time software developer. I am rejoining the Azure API Management team, this time to actually help build the product. I am happy to be working on a product that is all about helping people build better HTTP based applications in a shorter amount of time. I'm also really happy to be on a team that really cares about how HTTP should be used and is determined to make these capabilities available to the widest possible audience.

API Management is one of those dreadfully named product categories that actually save developers real time and money when building APIs. Do you really want to implement rate limiting, API token issuing and geolocated HTTP caching yourself?

As a platform for middleware, API Management products can help you solve all kinds of challenges related to security, deployment, scaling and versioning of HTTP based systems.  It’s definitely my cup of tea.

I am hoping to still have chance to do a few conferences a year and I definitely want to keep on blogging.  Perhaps you'll see some deeper technical content from me in the near future.  It's time to recharge those technical batteries and demonstrate that I can still walk the walk.

Interviews

Having just gone through the process of interviewing, I have some thoughts on the whole process. I think it is fair to say that Microsoft has a fairly traditional interview process. You spend a day talking to people from the hiring team and related teams. You get the usual personal questions and questions about past experiences. When applying for a developer role you get a bunch of technical questions that usually require whiteboard coding on topics that are covered in college level courses. I haven’t been in university for a very long time. I can count the number of times I have had to reverse a linked list professionally on one hand.

These types of interview questions are typically met with scorn by experienced developers.  I have heard numerous people suggest alternative interview techniques that I believe would be more effective at determining if someone is a competent developer.

However, these are the hoops that candidates are asked to jump through.  It isn’t a surprise.  It is easy to find this out.  It is fairly easy to practice doing whiteboard coding and there are plenty of resources out there that demonstrate how to achieve many of these comp sci challenges.

I’ve heard developers say that if they were asked to perform such an irrelevant challenge on an interview that they would walk out.  I don’t look at it that way.  I consider it an arbitrary challenge and if I can do the necessary prep work to pass, then it is a reflection on my ability to deal with other challenges I may face.  Maybe these interviews are an artificial test, but I would argue so was university. I certainly didn’t learn how to write code while doing an engineering degree.

Remote Work

I’m not going to be moving to Redmond. I’m going to continue living in Montreal and working for a Redmond based team. We have one other developer who is remote, but he is in the same timezone as the team. It would be easier to do the job if I were in Redmond, but I can’t move for family reasons. I’m actually glad that I can’t move, because I honestly think that remote work is the future for the tech industry. Once a team gets used to working with remote team members, there really isn’t a downside and there are lots of upsides.

The tech industry has a major shortage of talent and a ridiculous tendency to congregate in certain geographic locations, which causes significant economic problems. Tech people don’t have any need to be physically close to collaborate. We should take advantage of that.

But Microsoft?

There is lots of doom and gloom commentary around Microsoft in the circles that I frequent. Lots of it is related to the issues around ASP.NET Core and .NET Core. If you look a little into Microsoft’s history you will see that whenever they attempt to make major changes that enable the next generation of products, they get beaten up for it. Windows Vista is a classic example. It was perceived as a huge failure, but it made the big changes that allowed Windows 7 to be successful.

The Core stuff is attempting to do a major reset on 15 years of history.  Grumpiness is guaranteed.  It doesn’t worry me particularly.  Could they have done stuff better?  Sure.  Did I ever think that a few teams in Microsoft could have instigated such a radical amount of change? Nope, never. But it is going to take time.  Way more time than those who like living on the bleeding edge are going to be happy about.

There is a whole lot of change happening at Microsoft.  The majority of what I see is really encouraging.  The employees I have met so far are consistently enthusiastic about the company and many of the employees who have left the company will describe their time there very favourably.

Historically, Microsoft was notorious for its hated stack ranking performance review system. I had heard that the system had been abolished but I had no idea what the replacement system was until last week. Only time will tell whether the new system will actually work, but my initial impression is that it is going to have an extremely positive impact on Microsoft culture. The essence of the system is that you are measured on your contributions to your team, the impact you have had on helping other employees succeed, and how you have built on the work of others. The system, as I understand it, is designed to reward collaboration within the company. If that doesn’t have an impact on the infamous Microsoft org chart comic, I don’t know what will.

Building stuff is fun

I got hooked on the creative endeavour of writing code 34 years ago and I hope to still be doing it for many more years to come.


Anuraj Parameswaran: How to configure Kestrel URLs in ASP.NET Core RC2

This post is about configuring Kestrel URLs. Prior to RC2, you could configure the Kestrel URLs in project.json using the --server.urls option, inside the web command section. If nothing was specified, it would use the default binding, http://localhost:5000. As of RC2 we have a new unified toolchain (the .NET Core CLI) and ASP.NET Core applications are effectively just .NET Core console applications, so commands are no longer relevant. Instead, you can modify the Main method to change the URLs using the UseUrls method.
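
A minimal Program.Main sketch (the URLs are illustrative, and the usual Startup class is assumed):

using System.IO;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            // bind to one or more URLs instead of the default http://localhost:5000
            .UseUrls("http://localhost:5001", "http://localhost:5002")
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}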


Anuraj Parameswaran: Using Application Insights in ASP.NET Core

This post is about using Application Insights in ASP.NET Core. Application Insights is an extensible analytics platform that monitors the performance and usage of your live ASP.NET Core web applications. To use Application Insights, you first need to create an Application Insights resource. It is still in preview; you can create one using the portal.azure.com website.
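
Wiring it up in Startup looks roughly like the following sketch, assuming the instrumentation key is available via a Configuration property (API surface from the 1.0-era Microsoft.ApplicationInsights.AspNetCore package):

public void ConfigureServices(IServiceCollection services)
{
    // reads the instrumentation key from configuration
    services.AddApplicationInsightsTelemetry(Configuration);
    services.AddMvc();
}

public void Configure(IApplicationBuilder app)
{
    // register request tracking as early as possible in the pipeline
    app.UseApplicationInsightsRequestTelemetry();
    // tracks unhandled exceptions thrown by the following middleware
    app.UseApplicationInsightsExceptionTelemetry();
    app.UseMvc();
}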


Pedro Félix: Client-side development on OS X using Windows hosted HTTP Web APIs

In a recent post I described my Android development environment, based on an OS X host, the Genymotion Android emulator, and a Windows VM to run the back-end HTTP APIs.
In this post I’ll describe a similar environment, but now for browser-side applications, once again using Windows hosted HTTP APIs.

Recently I had to do some prototyping involving browser-based applications, using ES6 and React, that interact with IdentityServer3 and an HTTP API.
Both the IdentityServer3 server and the ASP.NET HTTP APIs are running on a Windows VM; however, I prefer to use the host OS X environment for the client side development (node, npm, webpack, babel, …).
Another requirement is that the server side uses HTTPS and multiple host names (e.g. id.example.com, app1.example.com, app2.example.com), as described in this previous post.

The solution that I ended up using for this environment is the following:

  • On the Windows VM side I have Fiddler running on port 8888 with “Allow remote computers to connect” enabled. This means that Fiddler will act as a proxy even for requests originating from outside the Windows VM.
  • On the OS X host I launch Chrome with open -a "/Applications/Google Chrome.app" --args --proxy-server=10.211.55.3:8888 --proxy-bypass-list=localhost, where 10.211.55.3 is the Windows VM address. To automate this procedure I use the Automator tool to create a shell script based workflow.

The end result, depicted in the following diagram, is that all requests (except for localhost) will be forwarded to the Fiddler instance running on the Windows VM, which will use the Windows hosts file to direct the requests to the multiple IIS sites.

[Diagram: hosting]
As a bonus, I also have full visibility on the HTTP messages.

And that’s it. I hope it helps.



Pedro Félix: Using multiple IIS server certificates on Windows 7

Nowadays I do most of my Windows development on a Windows 7 VM running on macOS (Windows 8 and Windows Server 2012 left some scars, so I’m very reluctant to move to Windows 10). On this development environment I like to mimic some production environment characteristics, namely:

  • Using IIS based hosting
  • Having each site using different host names
  • Using HTTPS

For the site names I typically use example.com subdomains (e.g. id.example.com, app1.example.com, app2.example.com), which are reserved by IANA for documentation purposes (see RFC 6761). I associate these names to local addresses via the hosts file.

For generating the server certificates I use makecert and the scripts published in Appendix G of the Designing Evolvable Web APIs with ASP.NET book.

However, having multiple sites using distinct certificates hosted on the same IP and port presents some challenges. This is because IIS/HTTP.SYS uses the Host header to demultiplex the incoming requests to the different sites bound to the same IP and port.
However, when using TLS, the server certificate must be provided during the TLS handshake, well before the HTTP request and its Host header are received. Since at this time HTTP.SYS does not know the target site, it also cannot select the appropriate certificate.

Server Name Indication (SNI) is a TLS extension (see RFC 3546) that addresses this issue by letting the client send the host name in the TLS handshake, allowing the server to identify the target site and use the corresponding certificate.

Unfortunately, HTTP.SYS on Windows 7 does not support SNI (that’s what I get for using a 2009 operating system). To circumvent this I took advantage of the fact that there are more loopback addresses than just 127.0.0.1. So, what I do is use a different loopback IP address for each site on my machine, as illustrated by the following hosts file excerpt:

127.0.0.2 app1.example.com
127.0.0.3 app2.example.com
127.0.0.4 id.example.com

When I configure the HTTPS IIS bindings I explicitly configure the listening IP addresses using these different values for each site, which allows me to use different certificates.

And that’s it. Hope it helps.



Anuraj Parameswaran: Using WebSockets in ASP.NET Core

This post is about using WebSockets in your ASP.NET Core application. WebSockets is an advanced technology that makes it possible to open an interactive communication session between the user’s browser and a server. With this API, you can send messages to a server and receive event-driven responses without having to poll the server for a reply. In ASP.NET Core, WebSockets support is implemented as middleware, so to use WebSockets you need to add a reference to the WebSockets server package in the references section, add the WebSockets middleware to the Configure method, and handle the web socket requests.
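
A minimal echo sketch of that setup (buffer size and pipeline order are illustrative):

using System;
using System.Net.WebSockets;
using System.Threading;
using Microsoft.AspNetCore.Builder;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        app.UseWebSockets();

        app.Use(async (context, next) =>
        {
            if (context.WebSockets.IsWebSocketRequest)
            {
                // accept the socket and echo every received message back
                var socket = await context.WebSockets.AcceptWebSocketAsync();
                var buffer = new byte[4096];
                var received = await socket.ReceiveAsync(
                    new ArraySegment<byte>(buffer), CancellationToken.None);

                while (!received.CloseStatus.HasValue)
                {
                    await socket.SendAsync(
                        new ArraySegment<byte>(buffer, 0, received.Count),
                        received.MessageType, received.EndOfMessage, CancellationToken.None);

                    received = await socket.ReceiveAsync(
                        new ArraySegment<byte>(buffer), CancellationToken.None);
                }

                await socket.CloseAsync(received.CloseStatus.Value,
                    received.CloseStatusDescription, CancellationToken.None);
            }
            else
            {
                await next();
            }
        });
    }
}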


Damien Bod: Import and Export CSV in ASP.NET Core

This article shows how to import and export csv data in an ASP.NET Core application. The InputFormatter and the OutputFormatter classes are used to convert the csv data to and from the C# model classes.

Code: https://github.com/damienbod/AspNetCoreCsvImportExport

2016.06.29: Updated to ASP.NET Core RTM

The LocalizationRecord class is used as the model class to import and export to and from csv data.

using System;

namespace AspNetCoreCsvImportExport.Model
{
    public class LocalizationRecord
    {
        public long Id { get; set; }
        public string Key { get; set; }
        public string Text { get; set; }
        public string LocalizationCulture { get; set; }
        public string ResourceKey { get; set; }
    }
}

The MVC controller CsvTestController makes it possible to import and export the data. The Get method exports the data using the Accept header in the HTTP request. By default, JSON will be returned. If the Accept header is set to ‘text/csv’, the data will be returned as csv. The GetDataAsCsv method always returns csv data, because the Produces attribute is used to force this. This makes it easy to download the csv data in a browser.

The Import method uses the Content-Type HTTP request header to decide how to handle the request body. If ‘text/csv’ is defined, the custom csv input formatter will be used.

using System.Collections.Generic;
using AspNetCoreCsvImportExport.Model;
using Microsoft.AspNetCore.Mvc;

namespace AspNetCoreCsvImportExport.Controllers
{
    [Route("api/[controller]")]
    public class CsvTestController : Controller
    {
        // GET api/csvtest
        [HttpGet]
        public IActionResult Get()
        {
            return Ok(DummyData());
        }

        [HttpGet]
        [Route("data.csv")]
        [Produces("text/csv")]
        public IActionResult GetDataAsCsv()
        {
            return Ok( DummyData());
        }

        private static IEnumerable<LocalizationRecord> DummyData()
        {
            var model = new List<LocalizationRecord>
            {
                new LocalizationRecord
                {
                    Id = 1,
                    Key = "test",
                    Text = "test text",
                    LocalizationCulture = "en-US",
                    ResourceKey = "test"

                },
                new LocalizationRecord
                {
                    Id = 2,
                    Key = "test",
                    Text = "test2 text de-CH",
                    LocalizationCulture = "de-CH",
                    ResourceKey = "test"

                }
            };

            return model;
        }

        // POST api/csvtest/import
        [HttpPost]
        [Route("import")]
        public IActionResult Import([FromBody]List<LocalizationRecord> value)
        {
            if (!ModelState.IsValid)
            {
                return BadRequest(ModelState);
            }
            else
            {
                List<LocalizationRecord> data = value;
                return Ok();
            }
        }

    }
}

The csv input formatter derives from the InputFormatter class. It checks if the context ModelType property is a type of IList and, if so, converts the csv data to a List of objects of type T using reflection. This is implemented in the readStream method. The implementation is very basic and will not work if you have more complex structures in your model class.

using System;
using System.Collections;
using System.Collections.Generic;
using System.IO;
using System.Reflection;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Formatters;
using Microsoft.Net.Http.Headers;

namespace AspNetCoreCsvImportExport.Formatters
{
    /// <summary>
    /// ContentType: text/csv
    /// </summary>
    public class CsvInputFormatter : InputFormatter
    {
        private readonly CsvFormatterOptions _options;

        public CsvInputFormatter(CsvFormatterOptions csvFormatterOptions)
        {
            if (csvFormatterOptions == null)
            {
                throw new ArgumentNullException(nameof(csvFormatterOptions));
            }

            _options = csvFormatterOptions;
        }

        public override Task<InputFormatterResult> ReadRequestBodyAsync(InputFormatterContext context)
        {
            var type = context.ModelType;
            var request = context.HttpContext.Request;
            MediaTypeHeaderValue requestContentType = null;
            MediaTypeHeaderValue.TryParse(request.ContentType, out requestContentType);


            var result = readStream(type, request.Body);
            return InputFormatterResult.SuccessAsync(result);
        }

        public override bool CanRead(InputFormatterContext context)
        {
            var type = context.ModelType;
            if (type == null)
                throw new ArgumentNullException("type");

            return isTypeOfIEnumerable(type);
        }

        private bool isTypeOfIEnumerable(Type type)
        {

            foreach (Type interfaceType in type.GetInterfaces())
            {

                if (interfaceType == typeof(IList))
                    return true;
            }

            return false;
        }

        private object readStream(Type type, Stream stream)
        {
            Type itemType;
            var typeIsArray = false;
            IList list;
            if (type.GetGenericArguments().Length > 0)
            {
                itemType = type.GetGenericArguments()[0];

                // Create a List<T> of the item type - creating an instance of
                // the item type itself would not be castable to IList
                var genericListType = typeof(List<>).MakeGenericType(itemType);
                list = (IList)Activator.CreateInstance(genericListType);
            }
            else
            {
                typeIsArray = true;
                itemType = type.GetElementType();

                var listType = typeof(List<>);
                var constructedListType = listType.MakeGenericType(itemType);

                list = (IList)Activator.CreateInstance(constructedListType);
            }


            var reader = new StreamReader(stream);

            bool skipFirstLine = _options.UseSingleLineHeaderInCsv;
            while (!reader.EndOfStream)
            {
                var line = reader.ReadLine();
                var values = line.Split(_options.CsvDelimiter.ToCharArray());
                if(skipFirstLine)
                {
                    skipFirstLine = false;
                }
                else
                {
                    var itemTypeInGeneric = list.GetType().GetTypeInfo().GenericTypeArguments[0];
                    var item = Activator.CreateInstance(itemTypeInGeneric);
                    var properties = item.GetType().GetProperties();
                    for (int i = 0; i < values.Length; i++)
                    {
                        properties[i].SetValue(item, Convert.ChangeType(values[i], properties[i].PropertyType), null);
                    }

                    list.Add(item);
                }

            }

            if(typeIsArray)
            {
                Array array = Array.CreateInstance(itemType, list.Count);

                for(int t = 0; t < list.Count; t++)
                {
                    array.SetValue(list[t], t);
                }
                return array;
            }
            
            return list;
        }
    }
}

The csv output formatter is implemented using the code from Tugberk Ugurlu’s blog with some small changes. Thanks for this. This formatter uses ‘;’ to separate the properties and a new line for each object. The headers are added to the first line.

using System;
using System.Collections;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Formatters;

namespace AspNetCoreCsvImportExport.Formatters
{
    /// <summary>
    /// Original code taken from
    /// http://www.tugberkugurlu.com/archive/creating-custom-csvmediatypeformatter-in-asp-net-web-api-for-comma-separated-values-csv-format
    /// Adapted for ASP.NET Core and uses ; instead of , for delimiters
    /// </summary>
    public class CsvOutputFormatter :  OutputFormatter
    {
        private readonly CsvFormatterOptions _options;

        public string ContentType { get; private set; }

        public CsvOutputFormatter(CsvFormatterOptions csvFormatterOptions)
        {
            ContentType = "text/csv";
            SupportedMediaTypes.Add(Microsoft.Net.Http.Headers.MediaTypeHeaderValue.Parse("text/csv"));

            if (csvFormatterOptions == null)
            {
                throw new ArgumentNullException(nameof(csvFormatterOptions));
            }

            _options = csvFormatterOptions;

            //SupportedEncodings.Add(Encoding.GetEncoding("utf-8"));
        }

        protected override bool CanWriteType(Type type)
        {

            if (type == null)
                throw new ArgumentNullException("type");

            return isTypeOfIEnumerable(type);
        }

        private bool isTypeOfIEnumerable(Type type)
        {

            foreach (Type interfaceType in type.GetInterfaces())
            {

                if (interfaceType == typeof(IList))
                    return true;
            }

            return false;
        }

        public async override Task WriteResponseBodyAsync(OutputFormatterWriteContext context)
        {
            var response = context.HttpContext.Response;

            Type type = context.Object.GetType();
            Type itemType;

            if (type.GetGenericArguments().Length > 0)
            {
                itemType = type.GetGenericArguments()[0];
            }
            else
            {
                itemType = type.GetElementType();
            }

            StringWriter _stringWriter = new StringWriter();

            if (_options.UseSingleLineHeaderInCsv)
            {
                _stringWriter.WriteLine(
                    string.Join<string>(
                        _options.CsvDelimiter, itemType.GetProperties().Select(x => x.Name)
                    )
                );
            }


            foreach (var obj in (IEnumerable<object>)context.Object)
            {

                var vals = obj.GetType().GetProperties().Select(
                    pi => new {
                        Value = pi.GetValue(obj, null)
                    }
                );

                string _valueLine = string.Empty;

                foreach (var val in vals)
                {

                    if (val.Value != null)
                    {

                        var _val = val.Value.ToString();

                        // Check if the value contains a comma and place it in quotes if so
                        if (_val.Contains(","))
                            _val = string.Concat("\"", _val, "\"");

                        //Replace any \r or \n special characters from a new line with a space
                        if (_val.Contains("\r"))
                            _val = _val.Replace("\r", " ");
                        if (_val.Contains("\n"))
                            _val = _val.Replace("\n", " ");

                        _valueLine = string.Concat(_valueLine, _val, _options.CsvDelimiter);

                    }
                    else
                    {

                        _valueLine = string.Concat(string.Empty, _options.CsvDelimiter);
                    }
                }

                _stringWriter.WriteLine(_valueLine.TrimEnd(_options.CsvDelimiter.ToCharArray()));
            }

            var streamWriter = new StreamWriter(response.Body);
            await streamWriter.WriteAsync(_stringWriter.ToString());
            await streamWriter.FlushAsync();
        }
    }
}

The custom formatters need to be added to the MVC middleware, so that it knows how to handle the media type ‘text/csv’.

public void ConfigureServices(IServiceCollection services)
{
  var csvFormatterOptions = new CsvFormatterOptions();
  
  services.AddMvc(options =>
  {
     options.InputFormatters.Add(new CsvInputFormatter(csvFormatterOptions));
     options.OutputFormatters.Add(new CsvOutputFormatter(csvFormatterOptions));
     options.FormatterMappings.SetMediaTypeMappingForFormat("csv", MediaTypeHeaderValue.Parse("text/csv"));
  });
}

When the data.csv link is requested, a csv type response is returned to the client, which can be saved. This data contains the header texts and the value of each property of each object. This can then be opened in Excel.

http://localhost:10336/api/csvtest/data.csv

Id;Key;Text;LocalizationCulture;ResourceKey
1;test;test text;en-US;test
2;test;test2 text de-CH;de-CH;test

This data can then be used to upload csv data to the server, which is then converted back to C# objects. I use Fiddler; Postman or curl can also be used, or any HTTP client where you can set the Content-Type header.


 http://localhost:10336/api/csvtest/import 

 User-Agent: Fiddler 
 Content-Type: text/csv 
 Host: localhost:10336 
 Content-Length: 110 


 Id;Key;Text;LocalizationCulture;ResourceKey 
 1;test;test text;en-US;test 
 2;test;test2 text de-CH;de-CH;test 

The following image shows that the data is imported correctly.

[Image: importExportCsv]

Notes

The implementation of the InputFormatter and the OutputFormatter classes are specific for a list of simple classes with only properties. If you require or use more complex classes, these implementations need to be changed.

Links

http://www.tugberkugurlu.com/archive/creating-custom-csvmediatypeformatter-in-asp-net-web-api-for-comma-separated-values-csv-format

ASP.NET Core 1.0 MVC 6 Custom Protobuf Formatters

http://www.strathweb.com/2014/11/formatters-asp-net-mvc-6/

https://wildermuth.com/2016/03/16/Content_Negotiation_in_ASP_NET_Core



Damien Bod: ASP.NET Core, Angular2 with Webpack and Visual Studio

This article shows how Webpack can be used together with Visual Studio, ASP.NET Core and Angular2. Both the client and the server side of the application are implemented inside one ASP.NET Core project, which makes it easier to deploy.

[Image: vs_webpack_angular2]

Code: https://github.com/damienbod/Angular2WebpackVisualStudio

Authors Fabian Gosebrink, Damien Bowden.
This post is hosted on both http://damienbod.com and http://offering.solutions/ and will be hosted on http://blog.noser.com afterwards.

2016.06.29: Updated to ASP.NET Core RTM
2016.06.26: Updated to Angular 2 rc3 and new routing
2016.06.17: Updated to Angular 2 rc2

Setting up the application

The ASP.NET Core application contains both the server side API services and also hosts the Angular 2 client application. The source code for the Angular 2 application is implemented in the angular2App folder. Webpack is then used to deploy the application, using the development build or a production build, which deploys the application to the wwwroot folder. This makes it easy to deploy the application using the standard tools from Visual Studio with the standard configurations.

npm configuration

The npm package.json configuration loads all the required packages for Angular 2 and Webpack. The Webpack packages are all added to the devDependencies. A “build” script and a “buildProduction” script are also configured, so that the client application can be built using Webpack from the command line with “npm run build” or “npm run buildProduction”. These two scripts just call the same command as the Webpack task runner.

{
    "version": "1.0.0",
    "description": "",
    "main": "wwwroot/index.html",
    "author": "",
    "license": "ISC",
    "scripts": {
        "build": "SET NODE_ENV=development && webpack -d --color",
        "buildProduction": "SET NODE_ENV=production && webpack -d --color",
        "tsc": "tsc",
        "tsc:w": "tsc -w",
        "typings": "typings",
        "postinstall": "typings install"
    },
    "dependencies": {

        "@angular/common": "2.0.0-rc.3",
        "@angular/compiler": "2.0.0-rc.3",
        "@angular/core": "2.0.0-rc.3",
        "@angular/forms": "0.1.1",
        "@angular/http": "2.0.0-rc.3",
        "@angular/platform-browser": "2.0.0-rc.3",
        "@angular/platform-browser-dynamic": "2.0.0-rc.3",
        "@angular/router": "3.0.0-alpha.8",
        "@angular/upgrade": "2.0.0-rc.3",
        "core-js": "^2.4.0",
        "reflect-metadata": "^0.1.3",
        "rxjs": "5.0.0-beta.6",
        "zone.js": "^0.6.12",

        "bootstrap": "^3.3.6",
        "extract-text-webpack-plugin": "^1.0.1"
    },
    "devDependencies": {
        "autoprefixer": "^6.3.2",
        "clean-webpack-plugin": "^0.1.9",
        "copy-webpack-plugin": "^2.1.3",
        "css-loader": "^0.23.0",
        "extract-text-webpack-plugin": "^1.0.1",
        "file-loader": "^0.8.4",
        "html-loader": "^0.4.0",
        "html-webpack-plugin": "^2.8.1",
        "jquery": "^2.2.0",
        "json-loader": "^0.5.3",
        "node-sass": "^3.4.2",
        "null-loader": "0.1.1",
        "postcss-loader": "^0.9.1",
        "raw-loader": "0.5.1",
        "rimraf": "^2.5.1",
        "sass-loader": "^3.1.2",
        "style-loader": "^0.13.0",
        "ts-helpers": "^1.1.1",
        "ts-loader": "0.8.2",
        "typescript": "1.8.10",
        "typings": "1.0.4",
        "url-loader": "^0.5.6",
        "webpack": "1.13.0"
    }
}

typings configuration

The typings are configured for webpack builds.

{
    "globalDependencies": {
        "core-js": "registry:dt/core-js#0.0.0+20160602141332",
        "node": "registry:dt/node#6.0.0+20160621231320"
    }
}

tsconfig configuration

The tsconfig is configured to use commonjs as the module.

{
    "compilerOptions": {
        "target": "es5",
        "module": "commonjs",
        "moduleResolution":  "node",
        "removeComments": true,
        "emitDecoratorMetadata": true,
        "experimentalDecorators": true,
        "noEmitHelpers": false,
        "sourceMap": true
    },
    "exclude": [
        "node_modules"
    ],
    "compileOnSave": false,
    "buildOnSave": false
}

Webpack build

The Webpack development build, webpack -d, just uses the source files and creates outputs for development. The production build copies everything required for the client application to the wwwroot folder, and uglifies the js files. The webpack -d --watch command can be used to automatically build the dist files whenever a source file is changed.

The Webpack config file was created using the excellent GitHub repository https://github.com/preboot/angular2-webpack. Thanks for this. Small changes were made to it, such as the process.env.NODE_ENV handling, and Webpack uses different source and output folders to match the ASP.NET Core project. If you decide to use two different projects, one for the server and one for the client, preboot or angular-cli, or both together, would be a good choice for the client application.

Full webpack.config file

/// <binding ProjectOpened='Run - Development' />
var path = require('path');
var webpack = require('webpack');

var CommonsChunkPlugin = webpack.optimize.CommonsChunkPlugin;
var Autoprefixer = require('autoprefixer');
var HtmlWebpackPlugin = require('html-webpack-plugin');
var ExtractTextPlugin = require('extract-text-webpack-plugin');
var CopyWebpackPlugin = require('copy-webpack-plugin');
var CleanWebpackPlugin = require('clean-webpack-plugin');

var isProd = (process.env.NODE_ENV === 'production');

module.exports = function makeWebpackConfig() {

    var config = {};

    // add debug messages
    config.debug = !isProd;

    // clarify output filenames
    var outputfilename = 'dist/[name].js';
    if (isProd) {
        //config.devtool = 'source-map';
        outputfilename = 'dist/[name].[hash].js';
    }

    if (!isProd) {
        config.devtool = 'eval-source-map';
    }


    config.entry = {
        'polyfills': './angular2App/polyfills.ts',
        'vendor': './angular2App/vendor.ts',
        'app': './angular2App/boot.ts' // our angular app
    };


    config.output = {
        path: root('./wwwroot'),
        publicPath: isProd ? '' : 'http://localhost:5000/',
        filename: outputfilename,
        chunkFilename: isProd ? '[id].[hash].chunk.js' : '[id].chunk.js'
    };

    config.resolve = {
        cache: true,
        root: root(),
        extensions: ['', '.ts', '.js', '.json', '.css', '.scss', '.html'],
        alias: {
            'app': 'angular2App/app'
        }
    };

    config.module = {
        loaders: [
            {
                test: /\.ts$/,
                loader: 'ts',
                query: {
                    'ignoreDiagnostics': [
                        2403, // 2403 -> Subsequent variable declarations
                        2300, // 2300 -> Duplicate identifier
                        2374, // 2374 -> Duplicate number index signature
                        2375, // 2375 -> Duplicate string index signature
                        2502 // 2502 -> Referenced directly or indirectly
                    ]
                },
                exclude: [/node_modules\/(?!(ng2-.+))/]
            },

            // copy those assets to output
            {
                test: /\.(png|jpe?g|gif|svg|woff|woff2|ttf|eot|ico)$/,
                loader: 'file?name=fonts/[name].[hash].[ext]?'
            },

            // Support for *.json files.
            {
                test: /\.json$/,
                loader: 'json'
            },

            // Load css files which are required in vendor.ts
            {
                test: /\.css$/,
                exclude: root('angular2App', 'app'),
                loader: "style!css"
            },

            // Extract all files without the files for specific app components
            {
                test: /\.scss$/,
                exclude: root('angular2App', 'app'),
                loader: 'raw!postcss!sass'
            },

            // Extract all files for specific app components
            {
                test: /\.scss$/,
                exclude: root('angular2App', 'style'),
                loader: 'raw!postcss!sass'
            },

            {
                test: /\.html$/,
                loader: 'raw'
            }
        ],
        postLoaders: [],
        noParse: [/.+zone\.js\/dist\/.+/, /.+angular2\/bundles\/.+/, /angular2-polyfills\.js/]
    };


    config.plugins = [
        new CleanWebpackPlugin(['./wwwroot/dist']),
       
        new webpack.DefinePlugin({
            'process.env': {
                NODE_ENV: JSON.stringify("production")
            }
        }),

        new CommonsChunkPlugin({
            name: ['vendor', 'polyfills']
        }),

        new HtmlWebpackPlugin({
            template: './angular2App/index.html',
            inject: 'body',
            chunksSortMode: packageSort(['polyfills', 'vendor', 'app'])
        }),

        new CopyWebpackPlugin([

            // copy all images to [rootFolder]/images
            { from: root('angular2App/images'), to: 'images' },

            // copy all fonts to [rootFolder]/fonts
            { from: root('angular2App/fonts'), to: 'fonts' }
        ])
    ];


    // Add build specific plugins
    if (isProd) {
        config.plugins.push(
            new webpack.NoErrorsPlugin(),
            new webpack.optimize.DedupePlugin(),
            new webpack.optimize.UglifyJsPlugin()
        );
    }

    config.postcss = [
        Autoprefixer({
            browsers: ['last 2 version']
        })
    ];

    config.sassLoader = {
        //includePaths: [path.resolve(__dirname, "node_modules/foundation-sites/scss")]
    };

    return config;
}();

// Helper functions
function root(args) {
    args = Array.prototype.slice.call(arguments, 0);
    return path.join.apply(path, [__dirname].concat(args));
}

function rootNode(args) {
    args = Array.prototype.slice.call(arguments, 0);
    return root.apply(path, ['node_modules'].concat(args));
}

function packageSort(packages) {
    // packages = ['polyfills', 'vendor', 'app']
    var len = packages.length - 1;
    var first = packages[0];
    var last = packages[len];
    return function sort(a, b) {
        // polyfills always first
        if (a.names[0] === first) {
            return -1;
        }
        // main always last
        if (a.names[0] === last) {
            return 1;
        }
        // vendor before app
        if (a.names[0] !== first && b.names[0] === last) {
            return -1;
        } else {
            return 1;
        }
    }
}

Let’s dive into this a bit:

Firstly, all the plugins required to process the js, ts and other files included or used in the project are loaded.

var path = require('path');
var webpack = require('webpack');

var CommonsChunkPlugin = webpack.optimize.CommonsChunkPlugin;
var Autoprefixer = require('autoprefixer');
var HtmlWebpackPlugin = require('html-webpack-plugin');
var ExtractTextPlugin = require('extract-text-webpack-plugin');
var CopyWebpackPlugin = require('copy-webpack-plugin');
var CleanWebpackPlugin = require('clean-webpack-plugin');

var isProd = (process.env.NODE_ENV === 'production');

The npm environment variable NODE_ENV is used to define the type of build, either a development build or a production build. The entries are configured depending on this parameter.

    config.entry = {
        'polyfills': './angular2App/polyfills.ts',
        'vendor': './angular2App/vendor.ts',
        'app': './angular2App/boot.ts' // our angular app
    };

The entries provide Webpack with the required information about where to start from, or where to hook in. Three entry points are defined in this configuration. These strings point to the files required in the solution: boot.ts is the starting point for the app itself, and all vendor scripts are bundled into one file through vendor.ts.

// Polyfill(s) for older browsers.
import 'core-js/client/core';

// Reflect Metadata.
import 'reflect-metadata';
// RxJS.
import 'rxjs';
// Zone.
import 'zone.js/dist/zone';

// Angular 2.
import '@angular/common';
import '@angular/compiler';
import '@angular/core';
import '@angular/http';
import '@angular/platform-browser';
import '@angular/platform-browser-dynamic';
import '@angular/router';

// Other libraries.
import 'jquery/src/jquery';
import 'bootstrap/dist/js/bootstrap';


import './css/bootstrap.css';
import './css/bootstrap-theme.css';

Webpack knows which paths to run and includes the corresponding files and packages.

The “loaders” section inside the “module” section of the configuration provides Webpack with the following information: which files it needs to get and how to read them. The loaders tell Webpack exactly what to do with each kind of file, for example transpiling or minifying it.

In this project configuration, if a production node parameter is set, different plugins are pushed into the sections because the files should be treated differently.

Angular 2 index.html

The index.html contains all the references required for the Angular 2 client. The scripts are added as part of the build and not manually. The developer only needs to use the imports.

Source index.html file in the angular2App/public folder:

<!doctype html>
<html>
<head>
    <base href="./">

    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Angular 2 Webpack Demo</title>

    <meta http-equiv="content-type" content="text/html; charset=utf-8" />

    <meta name="viewport" content="width=device-width, initial-scale=1.0" />

</head>
<body>
    <my-app>Loading...</my-app>
</body>
</html>


And this is the produced build file in the wwwroot folder. The scripts for the polyfills, vendor and app bundles have been added by Webpack. Hashes are used in a production build for cache busting.

<!doctype html>
<html>
<head>
    <base href="./">

    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Angular 2 Webpack Demo</title>

    <meta http-equiv="content-type" content="text/html; charset=utf-8" />

    <meta name="viewport" content="width=device-width, initial-scale=1.0" />

    <link rel="stylesheet" href="css/bootstrap.css">
</head>
<body>
    <my-app>Loading...</my-app>
<script type="text/javascript" src="http://localhost:5000/dist/polyfills.js"></script><script type="text/javascript" src="http://localhost:5000/dist/vendor.js"></script><script type="text/javascript" src="http://localhost:5000/dist/app.js"></script></body>
</html>

Visual Studio tools

The Webpack Task Runner from Mads Kristensen can be downloaded and used to run Webpack commands using the webpack.config.js file. The node NODE_ENV parameter is used to define the build type. The parameter can be set to “development” or “production”.

[Image: vs_webpack_angular2_02]

The Webpack task runner can also be used by double clicking the task. The execution results are then displayed in the task runner console.

[Image: vs_webpack_angular2_03]

This runner provides a number of useful commands which can be activated automatically. These tasks can be attached to Visual Studio events by right clicking the task and selecting a binding. This adds a binding tag to the webpack.config.js file.

/// <binding ProjectOpened='Run - Development' />

Webpack SASS

SASS is used to style the SPA application. The SASS files can be built using the SASS loader. Webpack can build all the styles inline or as an external file, depending on your Webpack config.

{
  test: /\.scss$/,
  exclude: root('angular2App', 'app'),
  loader: ExtractTextPlugin.extract('style', 'css?sourceMap!postcss!sass')
},

Webpack Clean

clean-webpack-plugin is used to clean up the deployment folder inside the wwwroot. This ensures that the application uses the latest files.

The clean task can be configured as follows:

var CleanWebpackPlugin = require('clean-webpack-plugin');

And used in Webpack.

  new CleanWebpackPlugin(['./wwwroot/dist']),

Angular 2 component files

The Angular 2 components are slightly different to the standard example components. The templates and the styles use require, which adds the html or the css/scss directly to the file using Webpack, or as an external link, depending on the Webpack config.

import { Observable } from 'rxjs/Observable';
import { Component, OnInit } from '@angular/core';
import { CORE_DIRECTIVES } from '@angular/common';
import { Http } from '@angular/http';
import { DataService } from '../services/DataService';


@Component({
    selector: 'homecomponent',
    template: require('./home.component.html'),
    directives: [CORE_DIRECTIVES],
    providers: [DataService]
})

export class HomeComponent implements OnInit {

    public message: string;
    public values: any[];

    constructor(private _dataService: DataService) {
        this.message = "Hello from HomeComponent constructor";
    }

    ngOnInit() {
        this._dataService
            .GetAll()
            .subscribe(data => this.values = data,
            error => console.log(error),
            () => console.log('Get all complete'));
    }
}

The ASP.NET Core API

The ASP.NET Core API is quite small. It just provides a demo CRUD service.

    [Route("api/[controller]")]
    public class ValuesController : Microsoft.AspNetCore.Mvc.Controller
    {
        // GET: api/values
        [HttpGet]
        public IActionResult Get()
        {
            return new JsonResult(new string[] { "value1", "value2" });
        }

        // GET api/values/5
        [HttpGet("{id}")]
        public IActionResult Get(int id)
        {
            return new JsonResult("value");
        }

        // POST api/values
        [HttpPost]
        public IActionResult Post([FromBody]string value)
        {
            return new CreatedAtRouteResult("anyroute", null);
        }

        // PUT api/values/5
        [HttpPut("{id}")]
        public IActionResult Put(int id, [FromBody]string value)
        {
            return new OkResult();
        }

        // DELETE api/values/5
        [HttpDelete("{id}")]
        public IActionResult Delete(int id)
        {
            return new NoContentResult();
        }
    }

The Angular2 Http-Service

Note that in a normal environment, you should always return typed classes and never the plain HTTP response as here. This application only has strings to return, and that is enough for the demo.

import { Injectable } from '@angular/core';
import { Http, Response, Headers } from '@angular/http';
import 'rxjs/add/operator/map'
import { Observable } from 'rxjs/Observable';
import { Configuration } from '../app.constants';

@Injectable()
export class DataService {

    private actionUrl: string;
    private headers: Headers;

    constructor(private _http: Http, private _configuration: Configuration) {

        this.actionUrl = _configuration.Server + 'api/values/';

        this.headers = new Headers();
        this.headers.append('Content-Type', 'application/json');
        this.headers.append('Accept', 'application/json');
    }

    public GetAll = (): Observable<any> => {
        return this._http.get(this.actionUrl).map((response: Response) => response.json());
    }

    public GetSingle = (id: number): Observable<any> => {
        return this._http.get(this.actionUrl + id).map(res => res.json());
    }

    public Add = (itemName: string): Observable<any> => {
        var toAdd = JSON.stringify({ ItemName: itemName });

        return this._http.post(this.actionUrl, toAdd, { headers: this.headers }).map(res => res.json());
    }

    public Update = (id: number, itemToUpdate: any): Observable<any> => {
        return this._http
            .put(this.actionUrl + id, JSON.stringify(itemToUpdate), { headers: this.headers })
            .map(res => res.json());
    }

    public Delete = (id: number): Observable<any> => {
        return this._http.delete(this.actionUrl + id);
    }
}

Notes:

The Webpack configuration could also build all of the scss and css files into a separate app.css or app.[hash].css file, which could be loaded as a single file in the distribution. Some of the vendor js and css could also be loaded directly in the html header using the index.html file and not included in the Webpack build.

If you are building both the client application and the server application in separate projects, you could also consider angular-cli or angular2-webpack for the client application.

Debugging the Angular 2 code in Visual Studio with breakpoints is not possible with this setup. The SPA app can be debugged in Chrome.

Links:

https://github.com/preboot/angular2-webpack

https://webpack.github.io/docs/

https://github.com/jtangelder/sass-loader

https://github.com/petehunt/webpack-howto/blob/master/README.md

http://www.sochix.ru/how-to-integrate-webpack-into-visual-studio-2015/

http://sass-lang.com/

WebPack Task Runner from Mads Kristensen

http://blog.thoughtram.io/angular/2016/06/08/component-relative-paths-in-angular-2.html

https://angular.io/docs/ts/latest/guide/webpack.html

https://angular.io/docs/ts/latest/tutorial/toh-pt5.html

http://angularjs.blogspot.ch/2016/06/improvements-coming-for-routing-in.html?platform=hootsuite



Anuraj Parameswaran: How to host your ASP.NET Core in a Windows Service

This post is about hosting your ASP.NET Core application as a Windows Service. This implementation is not relevant on .NET Core, since the Windows Service feature is only available on Windows. First you need references to “Microsoft.AspNetCore.Hosting” and “Microsoft.AspNetCore.Hosting.WindowsServices” in the project.json file.
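
The host is then started with the RunAsService extension from that package instead of Run – a minimal sketch (the content root path is illustrative; a service does not start in its own directory, so the path must be set explicitly):

using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Hosting.WindowsServices;

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(@"C:\MyService")
            .UseStartup<Startup>()
            .Build();

        // RunAsService comes from Microsoft.AspNetCore.Hosting.WindowsServices
        host.RunAsService();
    }
}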


Anuraj Parameswaran: Creating your first ASP.NET Core Web API with Swashbuckle

This post is to help developers create an interactive interface that represents their RESTful API, providing a rich discovery, documentation and playground experience to their API consumers in ASP.NET Core Web API. First you need to create an ASP.NET Core Web API project. In this post I am using the default project; I didn’t modify any code. Once you have created the project you need to add a reference to Swashbuckle. For RC2 you need to add a reference to the 6.0.0-beta9 version.
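
The Startup wiring then looks roughly like the following sketch (method names are from the pre-release Swashbuckle 6.x packages and changed between betas, so treat them as indicative):

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddSwaggerGen();   // registers the Swagger document generator
}

public void Configure(IApplicationBuilder app)
{
    app.UseMvc();
    app.UseSwagger();     // serves the generated swagger.json
    app.UseSwaggerUi();   // serves the interactive swagger-ui page
}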


Damien Bod: Adding SQL localization data using an Angular 2 form and ASP.NET Core

This article shows how SQL localized data can be added to a database using Angular 2 forms which can then be displayed without restarting the application. The ASP.NET Core localization is implemented using Localization.SqlLocalizer. This NuGet package is used to save and retrieve the dynamic localized data. This makes it possible to add localized data at run-time.

Code: https://github.com/damienbod/Angular2LocalizationAspNetCore

2016.06.28: Updated to Angular2 rc3, angular2localization 0.8.5 and dotnet RTM

Posts in this series

The ASP.NET Core API provides an HTTP POST action method which allows the user to add a new ProductCreateEditDto object to the application. The view model adds both product data and also localization data to the SQLite database using Entity Framework Core.

[HttpPost]
public IActionResult Post([FromBody]ProductCreateEditDto value)
{
	_productCudProvider.AddProduct(value);
	return Created("http://localhost:5000/api/ShopAdmin/", value);
}

The Angular 2 app uses the ProductService to send the HTTP POST request to the ShopAdmin service. The post method sends the payload as a JSON object in the body of the request.

public CreateProduct = (product: ProductCreateEdit): Observable<ProductCreateEdit> => {
	let item: string = JSON.stringify(product);
	this.setHeaders();
	return this._http.post(this.actionUrlShopAdmin, item, {
		headers: this.headers
	}).map((response: Response) => <ProductCreateEdit>response.json())
	.catch(this.handleError);
}

The client model is the same as the server side view model. The ProductCreateEdit class has an array of localized records.

import { LocalizationRecord } from './LocalizationRecord';

export class ProductCreateEdit {
    Id: number;
    Name: string;
    Description: string;
    ImagePath: string;
    PriceEUR: number;
    PriceCHF: number;
    LocalizationRecords: LocalizationRecord[];
} 

export class LocalizationRecord {
    Key: string;
    Text: string;
    LocalizationCulture: string;
} 

The shop-admin.component.html template contains the form which is used to enter the data and this is then sent to the server using the product service. The forms in Angular 2 have changed a lot compared to Angular 1 forms. The form uses the ngFormModel and the ngControl to define the Angular 2 form specifics. These control items need to be defined in the corresponding ts file.

<form [ngFormModel]="productForm" (ngSubmit)="Create(productForm.value)">

    <div class="form-group" [ngClass]="{ 'has-error' : !name.valid && submitted }">
        <label class="control-label" for="name">{{ 'ADD_PRODUCT_NAME' | translate:lang }}</label>
        <em *ngIf="!name.valid && submitted">Required</em>
        <input id="name" type="text" class="form-control" placeholder="name" ngControl="name" [(ngModel)]="Product.Name">
    </div>

    <div class="form-group" [ngClass]="{ 'has-error' : !description.valid && submitted }">
        <label class="control-label" for="description">{{ 'ADD_PRODUCT_DESCRIPTION' | translate:lang }}</label>
        <em *ngIf="!description.valid && submitted">Required</em>
        <input id="description" type="text" class="form-control" placeholder="description" ngControl="description" [(ngModel)]="Product.Description">
    </div>

    <div class="form-group" [ngClass]="{ 'has-error' : !priceEUR.valid && submitted }">
        <label class="control-label" for="priceEUR">{{ 'ADD_PRODUCT_PRICE_EUR' | translate:lang }}</label>
        <em *ngIf="!priceEUR.valid && submitted">Required</em>
        <input id="priceEUR" type="number" class="form-control" placeholder="priceEUR" ngControl="priceEUR" [(ngModel)]="Product.PriceEUR">
    </div>

    <div class="form-group" [ngClass]="{ 'has-error' : !priceCHF.valid && submitted }">
        <label class="control-label" for="priceCHF">{{ 'ADD_PRODUCT_PRICE_CHF' | translate:lang }}</label>
        <em *ngIf="!priceCHF.valid && submitted">Required</em>
        <input id="priceCHF" type="number" class="form-control" placeholder="priceCHF" ngControl="priceCHF" [(ngModel)]="Product.PriceCHF">
    </div>
  

    <div class="form-group" [ngClass]="{ 'has-error' : !namede.valid && !namefr.valid && !nameit.valid && !nameen.valid && submitted }">
        <label>{{ 'ADD_PRODUCT_LOCALIZED_NAME' | translate:lang }}</label>
        <div class="row">
            <div class="col-md-3"><em>de</em></div>
            <div class="col-md-9">
                <input class="form-control" type="text" [(ngModel)]="Name_de" ngControl="namede" #name="ngForm" />
            </div>
        </div>
        <div class="row">
            <div class="col-md-3"><em>fr</em></div>
            <div class="col-md-9">
                <input class="form-control" type="text" [(ngModel)]="Name_fr" ngControl="namefr" #name="ngForm" />
            </div>
        </div>
        <div class="row">
            <div class="col-md-3"><em>it</em></div>
            <div class="col-md-9">
                <input class="form-control" type="text" [(ngModel)]="Name_it" ngControl="nameit" #name="ngForm" />
            </div>
        </div>
        <div class="row">
            <div class="col-md-3"><em>en</em></div>
            <div class="col-md-9">
                <input class="form-control" type="text" [(ngModel)]="Name_en" ngControl="nameen" #name="ngForm" />
            </div>
        </div>
    
    </div>

    <div class="form-group" [ngClass]="{ 'has-error' : !descriptionde.valid && !descriptionfr.valid && !descriptionit.valid && !descriptionen.valid && submitted }">
        <label>{{ 'ADD_PRODUCT_LOCALIZED_DESCRIPTION' | translate:lang }}</label>
        <div class="row">
            <div class="col-md-3"><em>de</em></div>
            <div class="col-md-9">
                <input class="form-control" type="text" [(ngModel)]="Description_de" ngControl="descriptionde" #name="ngForm" />
            </div>
        </div>
        <div class="row">
            <div class="col-md-3"><em>fr</em></div>
            <div class="col-md-9">
                <input class="form-control" type="text" [(ngModel)]="Description_fr" ngControl="descriptionfr" #name="ngForm" />
            </div>
        </div>
        <div class="row">
            <div class="col-md-3"><em>it</em></div>
            <div class="col-md-9">
                <input class="form-control" type="text" [(ngModel)]="Description_it" ngControl="descriptionit" #name="ngForm" />
            </div>
        </div>
        <div class="row">
            <div class="col-md-3"><em>en</em></div>
            <div class="col-md-9">
                <input class="form-control" type="text" [(ngModel)]="Description_en" ngControl="descriptionen" #name="ngForm" />
            </div>
        </div>

    </div>
    <div class="form-group">
        <button type="submit" [disabled]="saving" class="btn btn-primary">{{ 'ADD_PRODUCT_CREATE_NEW_PRODUCT' | translate:lang }}</button>
    </div>

</form>


The built-in Angular 2 form components are imported from the ‘@angular/common’ library. The ControlGroup and the different Control items which are used in the HTML template need to be defined in the ts component file. The Control items are also added to the group in the builder function. The different Validators, or your own custom Validators, can be added here. The Create method uses the Control items and the Product model to create the full product item and send the data to the Shop Admin Controller on the server. When successfully created, the user is redirected to the Shop component showing all products in the selected language.

import { Component, OnInit } from '@angular/core';
import { CORE_DIRECTIVES, NgForm, FORM_DIRECTIVES, FormBuilder, Control, ControlGroup, Validators } from '@angular/common';
import { Observable } from 'rxjs/Observable';
import { Http } from '@angular/http';
import { Product } from '../services/Product';
import { ProductCreateEdit } from  '../services/ProductCreateEdit';
import { Locale, LocaleService, LocalizationService} from 'angular2localization/angular2localization';
import { ProductService } from '../services/ProductService';
import { TranslatePipe } from 'angular2localization/angular2localization';
import { Router} from '@angular/router';

@Component({
    selector: 'shopadmincomponent',
    template: require('./shop-admin.component.html'),
    directives: [CORE_DIRECTIVES],
    pipes: [TranslatePipe]
})

export class ShopAdminComponent extends Locale implements OnInit  {

    public message: string;
    public Product: ProductCreateEdit;
    public Currency: string;

    public Name_de: string;
    public Name_fr: string;
    public Name_it: string;
    public Name_en: string;
    public Description_de: string;
    public Description_fr: string;
    public Description_it: string;
    public Description_en: string;

    productForm: ControlGroup;
    name: Control;
    description: Control;
    priceEUR: Control;
    priceCHF: Control;
    namede: Control;
    namefr: Control;
    nameit: Control;
    nameen: Control;
    descriptionde: Control;
    descriptionfr: Control;
    descriptionit: Control;
    descriptionen: Control;
    submitted: boolean = false; // the template refers to this flag as 'submitted'
    saving: boolean = false;

    constructor(
        private router: Router,
        public _localeService: LocaleService,
        public localization: LocalizationService,
        private _productService: ProductService,
        private builder: FormBuilder) {

        super(null, localization);

        this.message = "shop-admin.component";

        this._localeService.languageCodeChanged.subscribe(item => this.onLanguageCodeChangedDataRecieved(item));

        this.buildForm();
        
    }

    ngOnInit() {
        console.log("ngOnInit ShopAdminComponent");
        // TODO Get product if Id exists
        this.initProduct();

        this.Currency = this._localeService.getCurrentCurrency();
        if (!(this.Currency === "CHF" || this.Currency === "EUR")) {
            this.Currency = "CHF";
        }
    }

    buildForm(): void {
        this.name = new Control('', Validators.required);
        this.description = new Control('', Validators.required);
        this.priceEUR = new Control('', Validators.required);
        this.priceCHF = new Control('', Validators.required);

        this.namede = new Control('', Validators.required);
        this.namefr = new Control('', Validators.required);
        this.nameit = new Control('', Validators.required);
        this.nameen = new Control('', Validators.required);

        this.descriptionde = new Control('', Validators.required);
        this.descriptionfr = new Control('', Validators.required);
        this.descriptionit = new Control('', Validators.required);
        this.descriptionen = new Control('', Validators.required);

        // reuse the Control instances created above, so that template
        // references like 'name.valid' point at the real form controls
        this.productForm = this.builder.group({
            name: this.name,
            description: this.description,
            priceEUR: this.priceEUR,
            priceCHF: this.priceCHF,
            namede: this.namede,
            namefr: this.namefr,
            nameit: this.nameit,
            nameen: this.nameen,
            descriptionde: this.descriptionde,
            descriptionfr: this.descriptionfr,
            descriptionit: this.descriptionit,
            descriptionen: this.descriptionen
        });
    }

    public Create() {

        this.submitted = true;

        if (this.productForm.valid) {
            this.saving = true;

            this.Product.LocalizationRecords = [];
            this.Product.LocalizationRecords.push({ Key: this.Product.Name, LocalizationCulture: "de-CH", Text: this.Name_de });
            this.Product.LocalizationRecords.push({ Key: this.Product.Name, LocalizationCulture: "fr-CH", Text: this.Name_fr });
            this.Product.LocalizationRecords.push({ Key: this.Product.Name, LocalizationCulture: "it-CH", Text: this.Name_it });
            this.Product.LocalizationRecords.push({ Key: this.Product.Name, LocalizationCulture: "en-US", Text: this.Name_en });

            this.Product.LocalizationRecords.push({ Key: this.Product.Description, LocalizationCulture: "de-CH", Text: this.Description_de });
            this.Product.LocalizationRecords.push({ Key: this.Product.Description, LocalizationCulture: "fr-CH", Text: this.Description_fr });
            this.Product.LocalizationRecords.push({ Key: this.Product.Description, LocalizationCulture: "it-CH", Text: this.Description_it });
            this.Product.LocalizationRecords.push({ Key: this.Product.Description, LocalizationCulture: "en-US", Text: this.Description_en });

            this._productService.CreateProduct(this.Product)
                .subscribe(data => {
                    this.saving = false;
                    this.router.navigate(['/shop']);
                }, error => {
                    this.saving = false;
                    console.log(error)
                },
                () => this.saving = false);
        } 
  
    }

    private onLanguageCodeChangedDataRecieved(item) {
        console.log("onLanguageCodeChangedDataRecieved Shop Admin");
        console.log(item + " : "+ this._localeService.getCurrentLanguage());
    }

    private initProduct() {
        this.Product = new ProductCreateEdit();      
    }

}

The form can then be used and the data is sent to the server.
localizedAngular2Form_01

And then displayed in the Shop component.

localizedAngular2Form_02

Notes

Angular 2 forms have a few validation issues which make me uncomfortable using them.

Links

https://angular.io/docs/ts/latest/guide/forms.html

https://auth0.com/blog/2016/05/03/angular2-series-forms-and-custom-validation/

http://odetocode.com/blogs/scott/archive/2016/05/02/the-good-and-the-bad-of-programming-forms-in-angular.aspx

http://blog.thoughtram.io/angular/2016/03/14/custom-validators-in-angular-2.html

Implementing Angular2 forms – Beyond basics (part 1)

https://docs.asp.net/en/latest/fundamentals/localization.html

https://www.nuget.org/profiles/damienbod

https://github.com/robisim74/angular2localization

https://angular.io




Anuraj Parameswaran: Unit test ASP.NET Core Applications with MSTest

This post is about running ASP.NET Core unit tests with MS Test. A few days back Microsoft announced support of MS Test for ASP.NET Core RC2 applications, and released two NuGet packages to support the development as well as the tooling. Similar to XUnit, MS Test also requires a reference to the MS Test framework and a tool to run the unit tests, aka a test runner. So in the project.json file add a reference to “dotnet-test-mstest”, which is the test runner, and “MSTest.TestFramework”, which is the framework; a rough sketch of the wiring is shown below.
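
The relevant project.json pieces could look roughly like this (the version numbers are illustrative for the RC2 timeframe, not authoritative):

{
  "testRunner": "mstest",
  "dependencies": {
    "dotnet-test-mstest": "1.0.1-preview",
    "MSTest.TestFramework": "1.0.0-preview"
  }
}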


Anuraj Parameswaran: Using ASP.NET Core RC2 in Appveyor

This post is about using ASP.NET Core RC2 in AppVeyor for continuous integration. Recently Microsoft released the RC2 version of ASP.NET Core, but for Windows there is only an installer exe available (unlike DNX, there are no command-line / PowerShell install options). In this post I am downloading the binaries and extracting them to a known location on the build machine; an illustrative build configuration follows.
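
An appveyor.yml along these lines could do the download-and-extract dance – the SDK zip URL is a placeholder, to be substituted with the real RC2 download link:

install:
  - ps: Invoke-WebRequest "<dotnet-sdk-rc2-zip-url>" -OutFile dotnet.zip  # placeholder URL
  - cmd: 7z x dotnet.zip -oC:\dotnet > nul
build_script:
  - cmd: C:\dotnet\dotnet.exe restore
  - cmd: C:\dotnet\dotnet.exe build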


Pedro Félix: The OpenID Connect Cast of Characters

Introduction

The OpenID Connect protocol provides support for both delegated authorization and federated authentication, unifying features that traditionally were provided by distinct protocols. As a consequence, the OpenID Connect protocol parties play multiple roles at the same time, which can sometimes be hard to grasp. This post aims to clarify this, describing how the OpenID Connect parties relate to each other and to the equivalent parties in previous protocols, namely OAuth 2.0.

OAuth 2.0

The OAuth 2.0 authorization framework introduced a new set of characters into the distributed access control story.

oauth2-1

  • The User (aka Resource Owner) is a human with the capability to authorize access to a set of protected resources (i.e. the user is the resource owner).
  • The Resource Server is the HTTP server exposing access to the protected resources via an HTTP API. This access is dependent on the presence and validation of access tokens in the HTTP request.
  • The Client Application is an HTTP client that accesses user resources on the Resource Server. To perform these accesses, the client application needs to obtain access tokens issued by the Authorization Server.
  • The Authorization Server is the party issuing the access tokens used by the Client Application on requests to the Resource Server.
  • Access Tokens are strings created by the Authorization Server and targeted to the Resource Server. They are opaque to the Client Application, which just obtains them from the Authorization Server and uses them on the Resource Server without any further processing.

To make things a little bit more concrete, let’s look at an example:

  • The User is Alice and the protected resources are her repositories at GitHub.
  • The Resource Server is GitHub’s API.
  • The Client Application is a third-party application, such as Huboard or Travis CI, that needs to access Alice’s repositories.
  • The Authorization Server is also GitHub, providing the OAuth 2.0 protocol “endpoints” for the client application to obtain the access tokens.

OAuth 2.0 models the Resource Server and the Authorization Server as two distinct parties; however, they can be run by the same organization (GitHub, in the previous example).

oauth2-2

An important characteristic to emphasise is that the access token does not directly provide any information about the User to the Client Application – it simply provides access to a set of protected resources. The fact that some of these protected resources may be used to provide information about the User’s identity is out of scope of OAuth 2.0.

Delegated Authentication and Identity Federation

However, delegated authentication and identity federation protocols, such as the SAML protocols or the WS-Federation protocol, use a different terminology.

federation

  • The Relying Party (or Service Provider in the SAML protocol terminology) is typically a Web application that delegates user authentication into an external Identity Provider.
  • The Identity Provider is the entity authenticating the user and communicating her identity claims to the Relying Party.
  • The identity claims communication between these two parties is made via identity tokens, which are protected containers for identity claims
    • The Identity Provider creates the identity token.
    • The Relying Party consumes the identity token by validating it and using the contained identity claims.

Sometimes the same entity can play both roles. For instance, an Identity Provider can re-delegate the authentication process to another Identity Provider:

  • An Organisational Web application (e.g. order management) delegates the user authentication process to the Organisational Identity Provider.
  • However, this Organisational Identity Provider re-delegates user authentication to a Partner Identity Provider.
  • In this case, the Organisational Identity Provider is simultaneously
    • A Relying Party for the authentication made by the Partner Identity Provider.
    • An Identity Provider, providing identity claims to the Organisational Web Application.

federation-2

In these protocols, the main goal of the identity token is to provide identity information about the User to the Relying Party. Namely, the identity token is not aimed to provide access to a set of protected resources. This characteristic sharply contrasts with OAuth 2.0 access tokens.

OpenID Connect

The OpenID Connect protocol is “a simple identity layer on top of the OAuth 2.0 protocol”, providing both delegated authorisation as well as authentication delegation and identity federation. It unifies in a single protocol the functionalities that previously were provided by distinct protocols. As a consequence, there are now multiple parties that play more than one role:

  • The OpenID Provider (new term introduced by the OpenID Connect specification) is an Identity Provider and an Authorization Server, simultaneously issuing identity tokens and access tokens.
  • The Relying Party is also a Client Application. It receives both identity tokens and access tokens from the OpenID Provider. However, there is a significant difference in how these tokens are used by this party:
    • The identity tokens are consumed by the Relying Party/Client Application to obtain the user’s identity.
    • The access tokens are not directly consumed by the Relying Party. Instead they are attached to requests made to the Resource Server, without ever being opened at the Relying Party, as in the illustrative request below.
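
For example, a call to the Resource Server could carry the access token in the Authorization header, using the Bearer scheme from RFC 6750 (an illustrative request, not taken from the GitHub documentation):

GET /user/repos HTTP/1.1
Host: api.github.com
Authorization: Bearer <access-token>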

oidc

I hope this post shed some light on the dual nature of the parties in the OpenID Connect protocol.

Please, feel free to use the comments section to place any question.



Ben Foster: How to configure Kestrel URLs in ASP.NET Core RC2

Prior to the release of ASP.NET Core RC2 Kestrel would be configured as part of the command bindings in project.json:

"commands": {
  "web": "Microsoft.AspNet.Server.Kestrel --server.urls=http://localhost:60000;http://localhost:60001;"
},

If no URLs were specified, a default binding of http://localhost:5000 would be used.

As of RC2 we have a new unified toolchain (the .NET Core CLI) and ASP.NET Core applications are effectively just .NET Core Console Applications. They have a single entry point where we programmatically configure and run the web host:

public static void Main(string[] args)
{
    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseIISIntegration()
        .UseStartup<Startup>()
        .Build();

    host.Run();
}

Here we're adding support for both Kestrel and IIS hosts via the appropriate extension methods.

When we upgraded SaasKit to RC2 we used the UseUrls extension to configure the URLs Kestrel would bind to:

var host = new WebHostBuilder()
    .UseKestrel()
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseUrls("http://localhost:60000", "http://localhost:60001")
    .UseIISIntegration()
    .UseStartup<Startup>()
    .Build();

I didn't really like this approach as we're hard-coding URLs. Fortunately it's still possible to load the Kestrel configuration from an external file.

First create a hosting.json file in the root of your application with your required bindings. Separate multiple URLs with a semi-colon:

{
  "server.urls": "http://localhost:60000;http://localhost:60001"
}

Next update Program.cs to load your hosting configuration, then use the UseConfiguration extension to pass the configuration to the WebHostBuilder:

public static void Main(string[] args)
{
    var config = new ConfigurationBuilder()
        .SetBasePath(Directory.GetCurrentDirectory())
        .AddJsonFile("hosting.json", optional: true)
        .Build();

    var host = new WebHostBuilder()
        .UseKestrel()
        .UseConfiguration(config)
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseIISIntegration()
        .UseStartup<Startup>()
        .Build();

    host.Run();
}

If you're launching Kestrel with Visual Studio you may also need to update launchSettings.json with the correct launchUrl:

"RC2HostingDemo": {
  "commandName": "Project",
  "launchBrowser": true,
  "launchUrl": "http://localhost:60000/api/values",
  "environmentVariables": {
    "ASPNETCORE_ENVIRONMENT": "Development"
  }
}

Now the web application will listen on the URLs configured in hosting.json:

Hosting environment: Development
Content root path: C:\Users\ben\Source\RC2HostingDemo\src\RC2HostingDemo
Now listening on: http://localhost:60000
Now listening on: http://localhost:60001
Application started. Press Ctrl+C to shut down.


Damien Bod: Creating and requesting SQL localized data in ASP.NET Core

This article shows how localized data can be created and used in a running ASP.NET Core application without restarting. The Localization.SqlLocalizer package is used to get and localize the data, and also to save the resources to a database. Any database which is supported by Entity Framework Core can be used.

Code: https://github.com/damienbod/Angular2LocalizationAspNetCore

2016.06.28: Updated to Angular2 rc3, angular2localization 0.8.5 and dotnet RTM

Posts in this series

Configuring the localization

The Localization.SqlLocalizer package is configured in the Startup class in the ConfigureServices method. In this example, a SQLite database is used to store and retrieve the data. The LocalizationModelContext DbContext needs to be configured for the SQL Localization. The LocalizationModelContext class is defined inside the Localization.SqlLocalizer package.

The AddSqlLocalization extension method is used to define the services and initialize the SQL localization when required. The UseTypeFullNames option is set to true, so that the full type names are used to retrieve the localized data. The different supported cultures are also defined as required.

public void ConfigureServices(IServiceCollection services)
{
	services.AddTransient<IProductRequestProvider, ProductRequestProvider>();
	services.AddTransient<IProductCudProvider, ProductCudProvider>();
	
	// init database for localization
	var sqlConnectionString = Configuration["DbStringLocalizer:ConnectionString"];

	services.AddDbContext<LocalizationModelContext>(options =>
		options.UseSqlite(
			sqlConnectionString,
			b => b.MigrationsAssembly("Angular2LocalizationAspNetCore")
		)
	);

	services.AddDbContext<ProductContext>(options =>
	  options.UseSqlite( sqlConnectionString )
	);

	// Requires that LocalizationModelContext is defined
	services.AddSqlLocalization(options => options.UseTypeFullNames = true);
	
	services.Configure<RequestLocalizationOptions>(
		options =>
			{
				var supportedCultures = new List<CultureInfo>
				{
					new CultureInfo("en-US"),
					new CultureInfo("de-CH"),
					new CultureInfo("fr-CH"),
					new CultureInfo("it-CH")
				};

				options.DefaultRequestCulture = new RequestCulture(culture: "en-US", uiCulture: "en-US");
				options.SupportedCultures = supportedCultures;
				options.SupportedUICultures = supportedCultures;
			});
			
	services.AddMvc()
		.AddViewLocalization()
		.AddDataAnnotationsLocalization();
}

UseRequestLocalization is used to enable the request localization in the Startup Configure method.

var locOptions = app.ApplicationServices.GetService<IOptions<RequestLocalizationOptions>>();
app.UseRequestLocalization(locOptions.Value);

The database also needs to be created. This can be done using Entity Framework Core migrations.

dotnet ef migrations add LocalizationMigrations --context LocalizationModelContext

dotnet ef database update --context LocalizationModelContext

Now the SQL Localization is ready to use.

Saving the localized data

The application creates products with localized data using the ShopAdmin API. A test method AddTestData is used to add dummy data to the database and call the provider logic. This will later be replaced by an Angular 2 form component in the third part of this series.

using Angular2LocalizationAspNetCore.Models;
using Angular2LocalizationAspNetCore.Providers;
using Angular2LocalizationAspNetCore.ViewModels;
using Microsoft.AspNetCore.Mvc;

namespace Angular2LocalizationAspNetCore.Controllers
{
    [Route("api/[controller]")]
    public class ShopAdminController : Controller
    {
        private readonly IProductCudProvider _productCudProvider;

        public ShopAdminController(IProductCudProvider productCudProvider)
        {
            _productCudProvider = productCudProvider;
        }

        //[HttpGet("{id}")]
        //public IActionResult Get(long id)
        //{
        //    return Ok(_productCudProvider.GetProductCudProvider(id));
        //}

        [HttpPost]
        public void Post([FromBody]ProductCreateEditDto value)
        {
            _productCudProvider.AddProduct(value);
        }

        // Test method to add data
        // http://localhost:5000/api/ShopAdmin/AddTestData/description/name
        [HttpGet]
        [Route("AddTestData/{description}/{name}")]
        public IActionResult AddTestData(string description, string name)
        {
            var product = new ProductCreateEditDto
            {
                Description = description,
                Name = name,
                ImagePath = "",
                PriceCHF = 2.40,
                PriceEUR = 2.20,
                LocalizationRecords = new System.Collections.Generic.List<Models.LocalizationRecordDto>
                {
                    new LocalizationRecordDto { Key= description, LocalizationCulture = "de-CH", Text = $"{description} de-CH" },
                    new LocalizationRecordDto { Key= description, LocalizationCulture = "it-CH", Text = $"{description} it-CH" },
                    new LocalizationRecordDto { Key= description, LocalizationCulture = "fr-CH", Text = $"{description} fr-CH" },
                    new LocalizationRecordDto { Key= description, LocalizationCulture = "en-US", Text = $"{description} en-US" },
                    new LocalizationRecordDto { Key= name, LocalizationCulture = "de-CH", Text = $"{name} de-CH" },
                    new LocalizationRecordDto { Key= name, LocalizationCulture = "it-CH", Text = $"{name} it-CH" },
                    new LocalizationRecordDto { Key= name, LocalizationCulture = "fr-CH", Text = $"{name} fr-CH" },
                    new LocalizationRecordDto { Key= name, LocalizationCulture = "en-US", Text = $"{name} en-US" }
                }
            };
            _productCudProvider.AddProduct(product);
            return Ok("completed");
        }
    }
}

The ProductCudProvider uses the LocalizationModelContext and the ProductContext to save the data to the database. The class creates the entities from the view model DTO and adds them to the database. Once saved, the IStringExtendedLocalizerFactory interface method ResetCache is used to reset the cache of the localized data. The cache could also be reset per type if required.

using Angular2LocalizationAspNetCore.Models;
using Angular2LocalizationAspNetCore.Resources;
using Angular2LocalizationAspNetCore.ViewModels;
using Localization.SqlLocalizer.DbStringLocalizer;

namespace Angular2LocalizationAspNetCore.Providers
{
    public class ProductCudProvider : IProductCudProvider
    {
        private LocalizationModelContext _localizationModelContext;
        private ProductContext _productContext;
        private IStringExtendedLocalizerFactory _stringLocalizerFactory;

        public ProductCudProvider(ProductContext productContext, 
            LocalizationModelContext localizationModelContext,
            IStringExtendedLocalizerFactory stringLocalizerFactory)
        {
            _productContext = productContext;
            _localizationModelContext = localizationModelContext;
            _stringLocalizerFactory = stringLocalizerFactory;
        }

        public void AddProduct(ProductCreateEditDto product)
        {
            var productEntity = new Product
            {
                Description = product.Description,
                ImagePath = product.ImagePath,
                Name = product.Name,
                PriceCHF = product.PriceCHF,
                PriceEUR = product.PriceEUR
            };
            _productContext.Products.Add(productEntity);

            _productContext.SaveChanges();

            foreach(var record in product.LocalizationRecords)
            {
                _localizationModelContext.Add(new LocalizationRecord
                {
                    Key = $"{productEntity.Id}.{record.Key}",
                    Text = record.Text,
                    LocalizationCulture = record.LocalizationCulture,
                    ResourceKey = typeof(ShopResource).FullName
                });
            }

            _localizationModelContext.SaveChanges();
            _stringLocalizerFactory.ResetCache();
        }
    }
}

Requesting the localized data

The Shop API is used to request the product data with the localized fields. The GetAvailableProducts method returns all products localized in the current culture.

using Angular2LocalizationAspNetCore.Providers;
using Microsoft.AspNetCore.Mvc;

namespace Angular2LocalizationAspNetCore.Controllers
{
    [Route("api/[controller]")]
    public class ShopController : Controller
    {
        private readonly IProductRequestProvider _productRequestProvider;

        public ShopController(IProductRequestProvider productProvider)
        {
            _productRequestProvider = productProvider;
        }

        // http://localhost:5000/api/shop/AvailableProducts
        [HttpGet("AvailableProducts")]
        public IActionResult GetAvailableProducts()
        {
            return Ok(_productRequestProvider.GetAvailableProducts());
        }
    }
}

The ProductRequestProvider is used to get the data from the database. Each product description and name is localized. The localization data is retrieved from the database for the first request, and then read from the cache, unless the localization data was updated. The IStringLocalizer is used to localize the data.

using System;
using System.Collections.Generic;
using System.Linq;
using Angular2LocalizationAspNetCore.Models;
using Angular2LocalizationAspNetCore.Resources;
using Angular2LocalizationAspNetCore.ViewModels;
using Localization.SqlLocalizer.DbStringLocalizer;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Localization;

namespace Angular2LocalizationAspNetCore.Providers
{
    public class ProductRequestProvider : IProductRequestProvider
    {
        private IStringLocalizer _stringLocalizer;
        private IStringExtendedLocalizerFactory _stringLocalizerFactory;
        private ProductContext _productContext;

        public ProductRequestProvider(IStringExtendedLocalizerFactory stringLocalizerFactory,
            ProductContext productContext)
        {
            _stringLocalizerFactory = stringLocalizerFactory;
            _stringLocalizer = _stringLocalizerFactory.Create(typeof(ShopResource));
            _productContext = productContext;
        }

        public List<ProductDto> GetAvailableProducts()
        {
            var products = _productContext.Products.OrderByDescending(dataEventRecord => EF.Property<DateTime>(dataEventRecord, "UpdatedTimestamp")).ToList(); 
            List<ProductDto> data = new List<ProductDto>();
            foreach(var t in products)
            {
                data.Add(new ProductDto() {
                    Id = t.Id,
                    Description = _stringLocalizer[$"{t.Id}.{t.Description}"],
                    Name = _stringLocalizer[$"{t.Id}.{t.Name}"],
                    ImagePath = t.ImagePath,
                    PriceCHF = t.PriceCHF,
                    PriceEUR = t.PriceEUR
                });
            }

            return data;
        }
    }
}

The products with localized data can now be added and updated without restarting the application, using the standard ASP.NET Core localization.

Any suggestions, pull requests or ways of improving the Localization.SqlLocalizer NuGet package are very welcome. Please contact me or create issues.

Links

https://docs.asp.net/en/latest/fundamentals/localization.html

https://www.nuget.org/profiles/damienbod

https://github.com/robisim74/angular2localization

https://angular.io

http://docs.asp.net/en/1.0.0-rc2/fundamentals/localization.html



Damien Bod: Released SQL Localization NuGet package for ASP.NET Core, dotnet

I have released a simple SQL Localization NuGet package which can be used with ASP.NET Core and any database supported by Entity Framework Core. The localization can be used like the default ASP.NET Core localization.

I would be grateful for feedback, new feature requests, pull requests, or ways of improving this package.

NuGet | Issues | Code

Examples:

https://github.com/damienbod/AspNet5Localization/tree/master/AspNet5Localization/src/AspNet5Localization

https://github.com/damienbod/Angular2LocalizationAspNetCore

Release History

Version 1.0.2

  • Updated to dotnet RTM

Version 1.0.1

  • Added Unique constraint for key, culture
  • Fixed type full name cache bug

Version 1.0.0

  • Initial release
  • Runtime localization updates
  • Cache support, reset cache
  • ASP.NET DI support
  • Supports any Entity Framework Core database

Basic Usage ASP.NET Core

Add the NuGet package to the project.json file

"dependencies": {
        "Localization.SqlLocalizer": "1.0.0.0",

Add the DbContext and use the AddSqlLocalization extension method to add the SQL Localization package.

public void ConfigureServices(IServiceCollection services)
{
	// init database for localization
	var sqlConnectionString = Configuration["DbStringLocalizer:ConnectionString"];

	services.AddDbContext<LocalizationModelContext>(options =>
		options.UseSqlite(
			sqlConnectionString,
			b => b.MigrationsAssembly("Angular2LocalizationAspNetCore")
		)
	);

	// Requires that LocalizationModelContext is defined
	services.AddSqlLocalization(options => options.UseTypeFullNames = true);

Create your database

dotnet ef migrations add Localization --context LocalizationModelContext

dotnet ef database update Localization --context LocalizationModelContext

And now it can be used like the default localization.
See Microsoft ASP.NET Core Documentation for Globalization and localization

Add the standard localization configuration to your Startup ConfigureServices method:

services.Configure<RequestLocalizationOptions>(
	options =>
		{
			var supportedCultures = new List<CultureInfo>
			{
				new CultureInfo("en-US"),
				new CultureInfo("de-CH"),
				new CultureInfo("fr-CH"),
				new CultureInfo("it-CH")
			};

			options.DefaultRequestCulture = new RequestCulture(culture: "en-US", uiCulture: "en-US");
			options.SupportedCultures = supportedCultures;
			options.SupportedUICultures = supportedCultures;
		});
		
services.AddMvc()
	.AddViewLocalization()
	.AddDataAnnotationsLocalization();

And also in the configure method:

var locOptions = app.ApplicationServices.GetService<IOptions<RequestLocalizationOptions>>();
app.UseRequestLocalization(locOptions.Value);

Use like the standard localization.

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Localization;

namespace AspNet5Localization.Controllers
{
    [Route("api/[controller]")]
    public class AboutController : Controller
    {
        private readonly IStringLocalizer<SharedResource> _localizer;
        private readonly IStringLocalizer<AboutController> _aboutLocalizerizer;

        public AboutController(IStringLocalizer<SharedResource> localizer, IStringLocalizer<AboutController> aboutLocalizerizer)
        {
            _localizer = localizer;
            _aboutLocalizerizer = aboutLocalizerizer;
        }

        [HttpGet]
        public string Get()
        {
            // _localizer["Name"] 
            return _aboutLocalizerizer["AboutTitle"];
        }
    }
}

Links:

Microsoft ASP.NET Core Documentation for Globalization and localization



Filip Woj: Running multiple ASP.NET Web API pipelines side by side

Over the past 4 years or so, I have worked on many Web API projects, for a lot of different clients, and I thought I had seen almost everything.

Last week I came across an interesting new (well, at least to me) scenario though – with the requirement to run two Web API pipelines side by side, in the same process. Imagine having /api as one Web API “instance”, and then having /dashboard as a completely separate one, with its own completely custom configuration (such as formatter settings, authentication or exception handling). And all of that running in the same process.

More after the jump.

The problem

On IIS, you would normally solve this by having two separate web application assemblies and deploying them into different virtual directories. It gets more interesting if you try to do this in the same process though.

Sure, this is what OWIN allows you to do – run multiple frameworks or “branches” side by side. But it turns out this is not that easy with the Web API OWIN integration.

Normally you’d go about doing this the following way, right?

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.Map("/api", map =>
        {
            var config = new HttpConfiguration();
            config.MapHttpAttributeRoutes();


            // configure the API 1 here

            map.UseWebApi(config);
        });

        app.Map("/dashboard", map =>
        {
            var config = new HttpConfiguration();
            config.MapHttpAttributeRoutes();


            // configure the API 2 here

            map.UseWebApi(config);
        });
    }
}

Seems reasonable, typical usage of the OWIN middleware architecture.

The problem with this setup is that Web API doesn’t necessarily lend itself very well to it. This is mainly related to the way Web API discovers controllers and routes.

Since you want to have two pipelines side by side in the same assembly, you want only specific controllers to be visible to each pipeline.

By default Web API will look throughout the entire AppDomain for controllers and attribute routes, meaning that both of your pipelines will discover everything and you will end up with a huge mess.

The solution

In order to address that, you can define a convention to divide your controllers into specific “areas” or, as we have called them from the beginning, pipelines.

Then you can tell the Web API configuration object to only deal with this specific subset of types when discovering controllers and mapping attribute routes. This can be achieved in several ways – namespaces, attributes and so on – the simplest of which is probably using a base marker type.

Let’s introduce two base controller types representing two different Web API pipelines that we defined earlier – /api and /dashboard.

public abstract class MyApiController : ApiController { }

public abstract class DashboardApiController : ApiController { }

Now we need to be able to set up HttpConfiguration, that’s used to define our Web API server, to respect these base controllers when looking for types that should be used as controllers and when discovering attribute routes.

This can be done by introducing a custom IHttpControllerTypeResolver and IDirectRouteProvider, that would only work with controllers that subclass a specific base class. These implementations are shown below.

public class TypedHttpControllerTypeResolver<TBaseController> : DefaultHttpControllerTypeResolver where TBaseController : IHttpController
{
    public TypedHttpControllerTypeResolver()
        : base(IsMatchingController)
    { }

    internal static bool IsMatchingController(Type t)
    {
        if (t == null) throw new ArgumentNullException("t");

        // same defaults as Web API (public, non-abstract class),
        // plus the requirement to subclass the marker base controller
        return
            t.IsClass &&
            t.IsVisible &&
            !t.IsAbstract &&
            typeof(TBaseController).IsAssignableFrom(t);
    }
}

public class TypedDirectRouteProvider<TBaseController> : DefaultDirectRouteProvider where TBaseController : IHttpController
{
    public override IReadOnlyList<RouteEntry> GetDirectRoutes(HttpControllerDescriptor controllerDescriptor, IReadOnlyList<HttpActionDescriptor> actionDescriptors,
        IInlineConstraintResolver constraintResolver)
    {
        if (typeof(TBaseController).IsAssignableFrom(controllerDescriptor.ControllerType))
        {
            var routes = base.GetDirectRoutes(controllerDescriptor, actionDescriptors, constraintResolver);
            return routes;
        }

        return new RouteEntry[0];
    }
}

In both cases, we are extending the default implementations – DefaultHttpControllerTypeResolver and DefaultDirectRouteProvider.

While Web API will still treat all AppDomain types as potential “controller” candidates, in TypedHttpControllerTypeResolver we use the same default constraint to discover controllers (must be class, must be public, must not be abstract) that Web API uses, but additionally we throw in the requirement to subclass our predefined base controller.

Similarly, for TypedDirectRouteProvider, Web API will inspect all controllers in the AppDomain, but we can define that routes should be picked up only from those controllers that extend our predefined base.

The final step is just to wire this in, which is very easy – the revised Startup class is shown below.

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.Map("/api", map =>
        {
            var config = CreateTypedConfiguration<MyApiController>();
            config.MapHttpAttributeRoutes();


            // configure the API 1 here

            map.UseWebApi(config);
        });

        app.Map("/dashboard", map =>
        {
            var config = CreateTypedConfiguration<DashboardApiController>();
            config.MapHttpAttributeRoutes();


            // configure the API 2 here

            map.UseWebApi(config);
        });
    }

    private static HttpConfiguration CreateTypedConfiguration<TBaseController>() where TBaseController : IHttpController
    {
        var config = new HttpConfiguration();

        config.Services.Replace(typeof(IHttpControllerTypeResolver), new TypedHttpControllerTypeResolver<TBaseController>());
        config.MapHttpAttributeRoutes(new TypedDirectRouteProvider<TBaseController>());

        return config;
    }       

}

And that’s it! No more cross-pipeline conflicts, and two Web API instances running side by side.


Dominick Baier: IdentityServer4 on ASP.NET Core RC2

This week was quite busy ;) Besides doing a couple of talks and workshops at SDD in London – we also updated all the IdentityServer4 bits to RC2.

Many thanks to all the people in the community that were part of this effort!

Here are the relevant links:

IdentityServer4 repo / nuget
AccessTokenValidation repo / nuget
Samples repo

Now that RC2 is finally released, we will continue our work on IdentityServer4. Expect more changes and frequent updates soon. Stay tuned!


Filed under: ASP.NET, IdentityServer, OAuth, OpenID Connect, WebAPI


Filip Woj: IP Filtering in ASP.NET Web API

One of the functionalities I had to use fairly often on different ASP.NET Web API projects that I was involved in in the past was IP filtering – restricting access to the whole API, or to parts of it, based on the caller’s IP address.

I thought it might be useful to share this here. More after the jump.

Configuration

Whenever you build a functionality like that, there are two roads you might wanna take:
– as a whitelist – meaning deny the majority of the callers, and only let some predefined ones through
– as a blacklist – meaning allow the majority of the callers, and only block some predefined ones

While you could approach the task from many different angles, for the purpose of this post, let’s assume you want to have it configurable from the web.config file, and that you will use a whitelist approach (reject everyone, unless they are in the config).

Configuration in ASP.NET is far from the friendliest (at least until we can get ASP.NET Core), and there is some ugly configuration code we will have to write – to deal with configuration elements, sections and so on. We will be extending the types from System.Configuration to provide a reasonably friendly user experience.

We could probably achieve the same with just appSettings but having a dedicated configuration section would be a bit more elegant for the end user.

So let’s imagine we will want to have configuration like this:

<configSections>
  <section name="ipFiltering" type="Strathweb.IpFiltering.Configuration.IpFilteringSection, Strathweb.IpFiltering" />
</configSections>
<ipFiltering>
  <ipAddresses>
    <add address="192.168.0.11" />
  </ipAddresses>
</ipFiltering>

To achieve this, here are our nasty ASP.NET configuration components. First the section:

public class IpFilteringSection : ConfigurationSection
{
    [ConfigurationProperty("ipAddresses", IsDefaultCollection = true)]
    public IpAddressElementCollection IpAddresses
    {
        get { return (IpAddressElementCollection)this["ipAddresses"]; }
        set { this["ipAddresses"] = value; }
    }
}

Next, the addresses collections:

[ConfigurationCollection(typeof(IpAddressElement))]
public class IpAddressElementCollection : ConfigurationElementCollection
{
    protected override ConfigurationElement CreateNewElement()
    {
        return new IpAddressElement();
    }

    protected override object GetElementKey(ConfigurationElement element)
    {
        return ((IpAddressElement)element).Address;
    }
}

And finally the individual entries:

public class IpAddressElement : ConfigurationElement
{
    [ConfigurationProperty("address", IsKey = true, IsRequired = true)]
    public string Address
    {
        get { return (string)this["address"]; }
        set { this["address"] = value; }
    }

    [ConfigurationProperty("denied", IsRequired = false)]
    public bool Denied
    {
        get { return (bool)this["denied"]; }
        set { this["denied"] = value; }
    }
}

Implementation of IP filtering

Once we got the configuration out of the way, the usage can take four forms:

  • a message handler (wrapping your entire Web API)
  • a filter (applied on a specific action only)
  • an HttpRequestMessage extension methods (so that it can be called anywhere, i.e. inside a controller)
  • and, as a bonus, an OWIN middleware (wrapping your entire OWIN pipeline, if you are using it)

We should actually start with the extension method, as it’s going to be the base for everything – in the other components (except for OWIN middleware), we will just call that.

The extension method is shown below:

public static bool IsIpAllowed(this HttpRequestMessage request)
{
    if (!request.GetRequestContext().IsLocal)
    {
        var ipAddress = request.GetClientIpAddress();
        var ipFiltering = ConfigurationManager.GetSection("ipFiltering") as IpFilteringSection;
        if (ipFiltering != null && ipFiltering.IpAddresses != null && ipFiltering.IpAddresses.Count > 0)
        {
            if (ipFiltering.IpAddresses.Cast<IpAddressElement>().Any(ip => (ipAddress == ip.Address && !ip.Denied)))
            {
                return true;
            }

            return false;
        }
    }

    return true;
}

So in the extension method, we check if the request is local, and if it isn’t we proceed to grab the IP address from the request.

Then we consult our configuration, and if the caller’s address is found there, we check whether the IP should be allowed or not. This check could be done in a more elaborate way (see for example here) – but for our use case it’s enough to just compare the addresses directly, without worrying about stuff like IP ranges.
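
If you ever do need ranges, a minimal CIDR-style check could look like the sketch below – a hypothetical helper, not part of the article’s components:

using System;
using System.Net;

public static class IpRangeHelper
{
    // Returns true if the address falls inside the given CIDR block, e.g. "192.168.0.0/24".
    public static bool IsInCidrRange(string address, string cidr)
    {
        var parts = cidr.Split('/');
        var network = IPAddress.Parse(parts[0]).GetAddressBytes();
        var candidate = IPAddress.Parse(address).GetAddressBytes();
        var prefix = int.Parse(parts[1]);

        // different address families (IPv4 vs IPv6) can never match
        if (network.Length != candidate.Length) return false;

        // compare the full bytes of the prefix first
        var fullBytes = prefix / 8;
        for (var i = 0; i < fullBytes; i++)
        {
            if (network[i] != candidate[i]) return false;
        }

        // then compare the remaining bits, if any
        var remainingBits = prefix % 8;
        if (remainingBits > 0)
        {
            var mask = (byte)(0xFF << (8 - remainingBits));
            if ((network[fullBytes] & mask) != (candidate[fullBytes] & mask)) return false;
        }

        return true;
    }
}

With this helper, IsInCidrRange("192.168.0.42", "192.168.0.0/24") returns true, so a configuration entry could hold a range instead of a single address.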

We made use of another extension method in the IsIpAllowed snippet – request.GetClientIpAddress(). I blogged about it a while back and it has also been added to WebApiContrib, but for the record, here it is:

public static class HttpRequestMessageExtensions
{
    private const string HttpContext = "MS_HttpContext";
    private const string RemoteEndpointMessage = "System.ServiceModel.Channels.RemoteEndpointMessageProperty";
    private const string OwinContext = "MS_OwinContext";

    public static string GetClientIpAddress(this HttpRequestMessage request)
    {
        //Web-hosting
        if (request.Properties.ContainsKey(HttpContext))
        {
            dynamic ctx = request.Properties[HttpContext];
            if (ctx != null)
            {
                return ctx.Request.UserHostAddress;
            }
        }
        //Self-hosting
        if (request.Properties.ContainsKey(RemoteEndpointMessage))
        {
            dynamic remoteEndpoint = request.Properties[RemoteEndpointMessage];
            if (remoteEndpoint != null)
            {
                return remoteEndpoint.Address;
            }
        }
        //Owin-hosting
        if (request.Properties.ContainsKey(OwinContext))
        {
            dynamic ctx = request.Properties[OwinContext];
            if (ctx != null)
            {
                return ctx.Request.RemoteIpAddress;
            }
        }
        return null;
    }
}

So now we can proceed towards building our individual components, as they will all rely on the extension method that we created.

First a filter:

public class IpFilterAttribute : AuthorizeAttribute
{
    protected override bool IsAuthorized(HttpActionContext actionContext)
    {
        return actionContext.Request.IsIpAllowed();
    }
}

And now a handler:

public class IpFilterHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request,
        CancellationToken cancellationToken)
    {
        if (request.IsIpAllowed())
        {
            return await base.SendAsync(request, cancellationToken);
        }

        return request.CreateErrorResponse(HttpStatusCode.Forbidden, "Cannot view this resource");
    }
}

We could make an OWIN middleware too. For this we will need a new extension method – working directly off the OWIN dictionary and grabbing the IP information from there.

For Project Katana purposes, we can make the middleware Microsoft specific, and as such we could build the extension method off IOwinContext instead of the raw OWIN dictionary (since Web API on OWIN relies on Katana anyway).

That code is shown below, and is quite similar to the HttpRequestMessage extension code we wrote a bit earlier.

public static class OwinContextExtensions
{
    public static bool IsIpAllowed(this IOwinContext ctx)
    {
        var ipAddress = ctx.Request.RemoteIpAddress;
        var ipFiltering = ConfigurationManager.GetSection("ipFiltering") as IpFilteringSection;
        if (ipFiltering != null && ipFiltering.IpAddresses != null && ipFiltering.IpAddresses.Count > 0)
        {
            if (ipFiltering.IpAddresses.Cast<IpAddressElement>().Any(ip => (ipAddress == ip.Address && !ip.Denied)))
            {
                return true;
            }

            return false;
        }

        return true;
    }
}

And finally, let’s add the middleware itself (again – Katana specific middleware to be precise):

public class IpFilterMiddleware : OwinMiddleware
{
    public IpFilterMiddleware(OwinMiddleware next) : base(next)
    {
    }

    public override async Task Invoke(IOwinContext context)
    {
        if (context.IsIpAllowed())
        {
            await Next.Invoke(context);
        } 
        else 
        {
            context.Response.StatusCode = 403;
        }
    }
}

And that’s it! You can now use the components on:

  • individual actions or controller (the filter)
  • on the whole API (the message handler)
  • on the whole OWIN pipeline (the middleware)
  • or wherever you see fit (the Request extension method)

Of course you can make it much better and more robust – add support for subnet masks and such, or add extra dynamic configuration instead of the static IP lookup from web.config – but hopefully this will be a good starting point.

All the source code for this article is at Github.


Ben Foster: How to use TraceSource with Azure Diagnostics

Recently I was trying to diagnose a production issue involving OWIN Authentication middleware and found that trace output was not being written to Azure logs.

The middleware used Microsoft's OWIN Logging framework which uses TraceSource internally.

The first step was to enable the Microsoft.Owin trace switch, which can be done in web.config:

<configuration>
  <system.diagnostics>
    <switches>
      <add name="Microsoft.Owin" value="Verbose" />
    </switches>
  </system.diagnostics>
</configuration>

This is enough to write OWIN trace output to Visual Studio's output window but to get it working in Azure we also have to set up the appropriate trace listeners:

<configuration>
  <system.diagnostics>
    <sharedListeners>
      <add name="AzureDriveTraceListener" type="Microsoft.WindowsAzure.WebSites.Diagnostics.AzureDriveTraceListener, Microsoft.WindowsAzure.WebSites.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
    </sharedListeners>
    <sources>
      <source name="Microsoft.Owin" switchName="Microsoft.Owin" switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="AzureDriveTraceListener"/>
        </listeners>
      </source>
    </sources>
    <switches>
      <add name="Microsoft.Owin" value="Verbose" />
    </switches>
  </system.diagnostics>
</configuration>

Here I'm using the AzureDriveTraceListener which is enough for file system application logging (and streaming logs). If you want to write to Azure Blob or Table storage you can use the following listeners:

<add name="AzureTableTraceListener" type="Microsoft.WindowsAzure.WebSites.Diagnostics.AzureTableTraceListener, Microsoft.WindowsAzure.WebSites.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
<add name="AzureBlobTraceListener" type="Microsoft.WindowsAzure.WebSites.Diagnostics.AzureBlobTraceListener, Microsoft.WindowsAzure.WebSites.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />

I recommend making the above changes using a web.config transformation, otherwise you'll be sending output to Azure during development.
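
A minimal sketch of such a transform, assuming a standard Web.Release.config XDT file:

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.diagnostics xdt:Transform="Insert">
    <!-- the shared listeners, sources and switches from above go here,
         so they are only applied to Release/Azure builds -->
  </system.diagnostics>
</configuration>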


Damien Bod: Angular 2 Localization with an ASP.NET Core MVC Service

This article shows how localization can be implemented in Angular 2 for static UI translations and also for localized data requested from an MVC service. The MVC service is implemented using ASP.NET Core. This post is the first of a three-part series. The following posts will implement the service to use a database and also implement an Angular 2 form to add dynamic data which can be used in the localized views.

Code: https://github.com/damienbod/Angular2LocalizationAspNetCore

2016.06.28: Updated to Angular2 rc3, angular2localization 0.8.5 and dotnet RTM
2016.06.17: Updated to Angular2 rc2 and angular2localization 0.8.4
2016.05.31: Using Webpack for Angular 2 app, angular2localization version 0.8.1, updated by Roberto Simonetti.
2016.05.16: Updated to ASP.NET Core RC2 dotnet
2016.05.13: Updated to angular2localization version 0.7.1
2016.05.07: Updated by Roberto Simonetti to Angular 2 RC1 , thanks

Creating the Angular 2 app and adding angular2localization

The project is set up in Visual Studio using ASP.NET Core MVC. The npm package.json file is configured to include the required frontend dependencies. angular2localization from Roberto Simonetti is used for the Angular 2 localization.

{
    "version": "1.0.0",
    "description": "",
    "main": "wwwroot/index.html",
    "author": "",
    "license": "ISC",
    "scripts": {
        "tsc": "tsc",
        "tsc:w": "tsc -w",
        "build": "SET NODE_ENV=development && webpack -d --color",
        "buildProduction": "SET NODE_ENV=production && webpack -d --color",
        "typings": "typings",
        "postinstall": "typings install"
    },
    "dependencies": {
        "@angular/common": "2.0.0-rc.3",
        "@angular/compiler": "2.0.0-rc.3",
        "@angular/core": "2.0.0-rc.3",
        "@angular/forms": "0.1.1",
        "@angular/http": "2.0.0-rc.3",
        "@angular/platform-browser": "2.0.0-rc.3",
        "@angular/platform-browser-dynamic": "2.0.0-rc.3",
        "@angular/upgrade": "2.0.0-rc.3",
        "@angular/router": "3.0.0-alpha.8",
        "core-js": "^2.4.0",
        "reflect-metadata": "^0.1.3",
        "rxjs": "5.0.0-beta.6",
        "zone.js": "^0.6.12",

        "bootstrap": "^3.3.6",
        "angular2localization": "0.8.5"
    },
    "devDependencies": {
        "jquery": "^2.2.0",
        "autoprefixer": "^6.3.2",
        "clean-webpack-plugin": "^0.1.9",
        "copy-webpack-plugin": "^2.1.3",
        "css-loader": "^0.23.0",
        "extract-text-webpack-plugin": "^1.0.1",
        "file-loader": "^0.8.4",
        "html-loader": "^0.4.0",
        "html-webpack-plugin": "^2.8.1",
        "json-loader": "^0.5.3",
        "node-sass": "^3.4.2",
        "null-loader": "0.1.1",
        "postcss-loader": "^0.9.1",
        "raw-loader": "0.5.1",
        "rimraf": "^2.5.1",
        "sass-loader": "^3.1.2",
        "style-loader": "^0.13.0",
        "ts-helpers": "^1.1.1",
        "ts-loader": "0.8.2",
        "typescript": "1.8.10",
        "typings": "1.0.4",
        "url-loader": "^0.5.6",
        "webpack": "1.13.0"
    }
}

The tsconfig.json is configured to use the commonjs module format so that webpack can be used.

{
    "compilerOptions": {
        "target": "es5",
        "module": "commonjs",
        "moduleResolution":  "node",
        "removeComments": true,
        "emitDecoratorMetadata": true,
        "experimentalDecorators": true,
        "noEmitHelpers": false,
        "sourceMap": true
    },
    "exclude": [
        "node_modules"
    ],
    "compileOnSave": false,
    "buildOnSave": false
}

The typings.json is configured as shown in the following code block. If the npm packages are updated, the typings definitions in the solution folder sometimes need to be deleted manually, because the existing files are not always removed.

{
    "globalDependencies": {
        "core-js": "registry:dt/core-js#0.0.0+20160602141332",
        "node": "registry:dt/node#6.0.0+20160621231320"
    }
}

webpack.config.js

var webpack = require("webpack");

module.exports = {
    entry: {
        "vendor": "./wwwroot/app/vendor",
        "app": "./wwwroot/app/boot"
    },
    output: {
        path: __dirname,
        filename: "./wwwroot/dist/[name].bundle.js"
    },
    resolve: {
        extensions: ['', '.ts', '.js']
    },
    devtool: 'source-map',
    module: {
        loaders: [
          {
              test: /\.ts/,
              loaders: ['ts-loader'],
              exclude: /node_modules/
          }
        ]
    },
    plugins: [
      new webpack.optimize.CommonsChunkPlugin(/* chunkName= */"vendor", /* filename= */"./wwwroot/dist/vendor.bundle.js")
    ]
}

The UI localization resource files MUST be saved in UTF-8, otherwise the translations will not be displayed correctly, and IE 11 will also throw exceptions. Here is an example of the locale-de.json file. The path definitions are defined in the AppComponent typescript file.

{
    "HELLO": "Hallo",
    "NAV_MENU_HOME": "Aktuell",
    "NAV_MENU_SHOP": "Online-Shop"
}

The index HTML file adds all the JavaScript dependencies directly rather than using the SystemJS loader. These can all be found in the libs folder of wwwroot. The files are deployed to the libs folder from node_modules using gulp.

<!DOCTYPE html>
<html>
<head>
    <base href="/">
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Angular 2 ASP.NET Core api</title>

    <meta http-equiv="content-type" content="text/html; charset=utf-8" />

    <meta name="viewport" content="width=device-width, initial-scale=1.0" />

    <link rel="stylesheet" href="css/bootstrap.css">
</head>
<body>

    <div class="container body-content">
        <my-app>Loading...</my-app>

        <script src="https://cdn.polyfill.io/v2/polyfill.min.js?features=Intl.~locale.en-US,Intl.~locale.de-CH,Intl.~locale.it-CH,Intl.~locale.fr-CH"></script>

        <!--loads libraries-->
        <script src="dist/vendor.bundle.js"></script>
        <!--loads the application-->
        <script src="dist/app.bundle.js"></script>
    </div>
    
</body>
</html>


The AppComponent loads and uses the Angular 2 localization npm package. The languages, country and currency are defined in this component. For this app, de-CH, fr-CH, it-CH and en-US are used, and CHF or EUR can be used as a currency. The ChangeCulture function is used to set the required values.

import {Component, OnInit} from '@angular/core';
import {Routes, Router, ROUTER_DIRECTIVES} from '@angular/router';

// Services.
import {Locale, LocaleService, LocalizationService} from 'angular2localization/angular2localization';
// Pipes.
import {TranslatePipe} from 'angular2localization/angular2localization';

// Components.
import { HomeComponent } from './home/home.component';
import { ShopComponent } from './shop/shop.component'; 
import { ProductService } from './services/ProductService';

@Component({
    selector: 'my-app',
    templateUrl: 'app/app.component.html',
    styleUrls: ['app/app.component.css'],
    directives: [ROUTER_DIRECTIVES],
    providers: [LocalizationService, LocaleService, ProductService],
    pipes: [TranslatePipe]
})

@Routes([
    { path: '/home', component: HomeComponent },
    { path: '/shop', component: ShopComponent }
])

export class AppComponent extends Locale {

    constructor(
        private router: Router,
        public locale: LocaleService,
        public localization: LocalizationService,
        private _productService: ProductService
    ) {
        super(null, localization);
        // Adds a new language (ISO 639 two-letter code).
        this.locale.addLanguage('de');
        this.locale.addLanguage('fr');
        this.locale.addLanguage('it');
        this.locale.addLanguage('en');

        this.locale.definePreferredLocale('en', 'US', 30);

        this.localization.translationProvider('./i18n/locale-'); // Required: initializes the translation provider with the given path prefix.
        this.localization.updateTranslation(); // Need to update the translation.

    }

    ngOnInit() {

        this.router.navigate(['/home']);

    }

    public ChangeCulture(language: string, country: string, currency: string) {
        this.locale.setCurrentLocale(language, country);
        this.locale.setCurrentCurrency(currency);
        this.localization.updateTranslation();
    }

    public ChangeCurrency(currency: string) {
        this.locale.setCurrentCurrency(currency);
    }
}

The HTML template uses bootstrap and defines the routing links and the culture selection links used in the application. The translate pipe is used to display the text in the correct language.

<div class="container" style="margin-top: 15px;">
   
    <nav class="navbar navbar-inverse">
        <div class="container-fluid">
            <div class="navbar-header">
                <a [routerLink]="['/home']" class="navbar-brand"><img src="images/damienbod.jpg" height="40" style="margin-top:-10px;" /></a>
            </div>
            <ul class="nav navbar-nav">
                <li><a [routerLink]="['/home']">{{ 'NAV_MENU_HOME' | translate:lang }}</a></li>
                <li><a [routerLink]="['/shop']">{{ 'NAV_MENU_SHOP' | translate:lang }}</a></li>
            </ul>

            <ul class="nav navbar-nav navbar-right">
                <li><a (click)="ChangeCulture('de','CH', 'CHF')">de</a></li>
                <li><a (click)="ChangeCulture('fr','CH', 'CHF')">fr</a></li>
                <li><a (click)="ChangeCulture('it','CH', 'CHF')">it</a></li>
                <li><a (click)="ChangeCulture('en','US', 'CHF')">en</a></li>
            </ul>

            <ul class="nav navbar-nav navbar-right">
                <li>
                    <div class="navbar" style="margin-bottom:0;">
                        <form class="navbar-form pull-left">
                            <select (change)="ChangeCurrency($event.target.value)" class="form-control">
                                <option *ngFor="let currency of ['CHF', 'EUR']">{{currency}}</option>
                            </select>
                        </form>
                    </div>
                </li>             
            </ul>
        </div>
    </nav>

    <router-outlet></router-outlet>

</div>

Implementing the ProductService

The ProductService can be used to access the localized data from the ASP.NET Core MVC service. This service uses the LocaleService to get the current language and the current country, and sends these in an HTTP GET request to the server API. The data is then returned as required. The localization can be set by adding ?culture=de-CH to the URL.

import { Injectable } from '@angular/core';
import { Http, Response, Headers } from '@angular/http';
import 'rxjs/add/operator/map';
import { Observable } from 'rxjs/Observable';
import { Configuration } from '../app.constants';
import { Product } from './Product';
import { LocaleService } from 'angular2localization/angular2localization';

@Injectable()
export class ProductService {
    private actionUrl: string;
    private headers: Headers;
    private isoCode: string;

    constructor(private _http: Http, private _configuration: Configuration, public _locale: LocaleService) {
        this.actionUrl = `${_configuration.Server}api/Shop/`;       
    }

    private setHeaders() {
        this.headers = new Headers();
        this.headers.append('Content-Type', 'application/json');
        this.headers.append('Accept', 'application/json');
    }

    // http://localhost:5000/api/Shop/AvailableProducts?culture=de-CH
    // http://localhost:5000/api/Shop/AvailableProducts?culture=it-CH
    // http://localhost:5000/api/Shop/AvailableProducts?culture=fr-CH
    // http://localhost:5000/api/Shop/AvailableProducts?culture=en-US
    public GetAvailableProducts = (): Observable<Product[]> => {
        console.log(this._locale.getCurrentLanguage());
        console.log(this._locale.getCurrentCountry());
        this.isoCode = `${this._locale.getCurrentLanguage()}-${this._locale.getCurrentCountry()}`; 

        this.setHeaders();
        return this._http.get(`${this.actionUrl}AvailableProducts?culture=${this.isoCode}`, {
            headers: this.headers
        }).map(res => res.json());
    }   
}


Using the localization to display data

The ShopComponent uses the localized data from the server. The component subscribes to the @Output countryCodeChanged and currencyCodeChanged event properties from the LocaleService, so that when the UI culture is changed, the data is fetched from the server again and displayed as required. The TranslatePipe is used in the HTML to display the static frontend localization transformations.

import { Component, OnInit } from '@angular/core';
import { CORE_DIRECTIVES } from '@angular/common';
import { Observable } from 'rxjs/Observable';
import { Http } from '@angular/http';
import { Product } from '../services/Product';
import { LocaleService } from 'angular2localization/angular2localization';
import { ProductService } from '../services/ProductService';
import {TranslatePipe} from 'angular2localization/angular2localization';

@Component({
    selector: 'shopcomponent',
    templateUrl: 'app/shop/shop.component.html',
    directives: [CORE_DIRECTIVES],
    pipes: [TranslatePipe]
})

export class ShopComponent implements OnInit {

    public message: string;
    public Products: Product[];
    public Currency: string;
    public Price: string;

    constructor(
        public _locale: LocaleService,
        private _productService: ProductService) {
        this.message = "shop.component";

        this._locale.countryCodeChanged.subscribe(item => this.onCountryChangedDataRecieved(item));
        this._locale.currencyCodeChanged.subscribe(currency => this.onChangedCurrencyRecieved(currency));
        
    }

    ngOnInit() {
        console.log("ngOnInit ShopComponent");
        this.GetProducts();

        this.Currency = this._locale.getCurrentCurrency();
        if (!(this.Currency === "CHF" || this.Currency === "EUR")) {
            this.Currency = "CHF";
        }
    }

    public GetProducts() {
        console.log('ShopComponent:GetProducts starting...');
        this._productService.GetAvailableProducts()
            .subscribe((data) => {
                this.Products = data;
            },
            error => console.log(error),
            () => {
                console.log('ProductService:GetProducts completed');
            }
            );
    } 

    private onCountryChangedDataRecieved(item) {
        this.GetProducts();
        console.log("onProductDataRecieved");
        console.log(item);
    }

    private onChangedCurrencyRecieved(currency) {
        this.Currency = currency;
        console.log("onChangedCurrencyRecieved");
        console.log(currency);
    }
}

The Shop component HTML template displays the localized data.

<div class="panel-group" >

    <div class="panel-group" *ngIf="Products">

        <div class="mcbutton col-md-4" style="margin-left: -15px; margin-bottom: 10px;" *ngFor="let product of Products">
            <div class="panel panel-default" >
                <div class="panel-heading" style="color: #9d9d9d;background-color: #222;">
                    {{product.name}}
                    <span style="float:right;" *ngIf="Currency === 'CHF'">{{product.priceCHF}} {{Currency}}</span>
                    <span style="float:right;" *ngIf="Currency === 'EUR'">{{product.priceEUR}} {{Currency}}</span>
                </div>
                <div class="panel-body" style="height: 200px;">
                    <!--<img src="images/mc1.jpg" style="width: 100%;margin-top: 20px;" />-->
                    {{product.description}}
                </div>
            </div>
        </div>
    </div>

</div>

ASP.NET Core MVC service

The ASP.NET Core MVC service uses the ShopController to provide the data for the Angular 2 application. This just returns a list of products using an HTTP GET request.

The IProductProvider interface is used to get the data. This is added to the controller using constructor injection and needs to be registered in the Startup class (a minimal sketch of this registration follows the controller code below).

using Angular2LocalizationAspNetCore.Providers;
using Microsoft.AspNetCore.Mvc;

namespace Angular2LocalizationAspNetCore.Controllers
{
    [Route("api/[controller]")]
    public class ShopController : Controller
    {
        private readonly IProductProvider _productProvider;

        public ShopController(IProductProvider productProvider)
        {
            _productProvider = productProvider;
        }

        // http://localhost:5000/api/shop/AvailableProducts
        [HttpGet("AvailableProducts")]
        public IActionResult GetAvailableProducts()
        {
            return Ok(_productProvider.GetAvailableProducts());
        }
    }
}
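
The Startup registration itself is not shown in this excerpt; a minimal sketch might look like the following (the ResourcesPath value and the service lifetime are assumptions):

public void ConfigureServices(IServiceCollection services)
{
    // register the resource-file based localization used by IStringLocalizer<T>
    services.AddLocalization(options => options.ResourcesPath = "Resources");

    // register the provider so it can be constructor-injected into ShopController
    services.AddScoped<IProductProvider, ProductProvider>();

    services.AddMvc();
}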

The ProductDto is used in the GetAvailableProducts to return the localized data.

namespace Angular2LocalizationAspNetCore.ViewModels
{
    public class ProductDto
    {
        public long Id { get; set; }

        public string Name { get; set; }

        public string Description { get; set; }

        public string ImagePath { get; set; }

        public double PriceEUR { get; set; }

        public double PriceCHF { get; set; }
    }
}

The ProductProvider, which implements the IProductProvider interface, returns a list of localized products using resource files and an in-memory list. This is just a dummy implementation to simulate data responses with localized data.

using System.Collections.Generic;
using Angular2LocalizationAspNetCore.Models;
using Angular2LocalizationAspNetCore.Resources;
using Angular2LocalizationAspNetCore.ViewModels;
using Microsoft.Extensions.Localization;

namespace Angular2LocalizationAspNetCore.Providers
{
    public class ProductProvider : IProductProvider
    {
        private IStringLocalizer<ShopResource> _stringLocalizer;

        public ProductProvider(IStringLocalizer<ShopResource> localizer)
        {
            _stringLocalizer = localizer;
        }

        public List<ProductDto> GetAvailableProducts()
        {
            var dataSimi = InitDummyData();
            List<ProductDto> data = new List<ProductDto>();
            foreach(var t in dataSimi)
            {
                data.Add(new ProductDto() {
                    Id = t.Id,
                    Description = _stringLocalizer[t.Description],
                    Name = _stringLocalizer[t.Name],
                    ImagePath = t.ImagePath,
                    PriceCHF = t.PriceCHF,
                    PriceEUR = t.PriceEUR
                });
            }

            return data;
        }

        private List<Product> InitDummyData()
        {
            List<Product> data = new List<Product>();
            data.Add(new Product() { Id = 1, Description = "Mini HTML for content", Name="HTML wiz", ImagePath="", PriceCHF = 2.40, PriceEUR= 2.20  });
            data.Add(new Product() { Id = 2, Description = "R editor for data analysis", Name = "R editor", ImagePath = "", PriceCHF = 45.00, PriceEUR = 40 });
            return data;
        }
    }
}

In the second part of this series, this ProductProvider will be re-implemented to use SQL localization and use only data from a database.

When the application is opened, the language, country and currency can be changed as required.

Application in de-CH with currency CHF

Application in fr-CH with currency EUR

Notes

Angular 2 loads slowly; you need to use webpack or some other bundling tool.

Links

https://github.com/robisim74/angular2localization

https://angular.io

https://docs.asp.net/en/latest/fundamentals/localization.html



Damien Bod: Creating an Angular 2 Component for Plotly

This article shows how the Plotly JavaScript library can be used inside an Angular 2 component. The Angular 2 component can then be used anywhere inside an application using only the component selector. The data used for the chart is provided by an ASP.NET Core MVC application using Elasticsearch.

Code: https://github.com/damienbod/Angular2ComponentPlotly

2016.06.26: Updated to Angular2 rc3 and new routing
2016.06.21: Updated to Angular2 rc2 and angular-cli beta 6
2016.05.06: Updated to ASP.NET Core RC2 and ElasticsearchCrud dotnet RC2
2016.05.06: Updated Angular 2 to rc1
Split solution into 2 projects, UI and service.
The service is an ASP.NET Core MVC RC1 service.
The UI is an Angular 2 RC1 project running on Node.js.

Setup

The Angular 2 project is set up using angular-cli. This project is then run from the command line using Node.js. The ASP.NET Core MVC service needs to be running before the application can request the data.

Angular 2 Plotly Component

The Plotly component is defined using the plotlychart selector. This Angular 2 selector can then be used in templates to add the component to existing ones. The component uses the template property to define the HTML template. The Plotly component has 4 input properties. The properties are used to pass the chart data into the component and also to define whether the raw chart data should be displayed. The raw data and the layout data are displayed in the HTML template using pipes.

The Plotly JavaScript library has no TypeScript definitions. Because of this, 'declare var' is used so that the Plotly JavaScript library can be used inside the TypeScript class.

import { Component, EventEmitter, Input, Output, OnInit, ElementRef} from '@angular/core';
import { CORE_DIRECTIVES } from '@angular/common';
import { Subscription } from 'rxjs/Subscription';
import { Observable } from 'rxjs/Observable';

declare var Plotly: any;

@Component({
  moduleId: module.id,
  selector: 'plotlychart',
   template: `
<div style="margin-bottom:100px;">
    <div id="myPlotlyDiv"
         name="myPlotlyDiv"
         style="width: 480px; height: 400px;">
        <!-- Plotly chart will be drawn inside this DIV -->
    </div>
</div>

<div *ngIf="displayRawData">
    raw data:
    <hr />
    <span>{{data | json}}</span>
    <hr />
    layout:
    <hr />
    <span>{{layout | json}}</span>
    <hr />
</div>
`,
  styleUrls: ['plotly.component.css'],
  directives: [CORE_DIRECTIVES]
})

export class PlotlyComponent implements OnInit {

    @Input() data: any;
    @Input() layout: any;
    @Input() options: any;
    @Input() displayRawData: boolean;

    ngOnInit() {
        console.log("ngOnInit PlotlyComponent");
        console.log(this.data);
        console.log(this.layout);

        Plotly.newPlot('myPlotlyDiv', this.data, this.layout, this.options);
    }
}

The Plotly library is used inside an Angular 2 component, so the script needs to be added in the head of the index.html file where the app is bootstrapped.

<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <base href="/">
  
  <title>ASP.NET Core 1.0 Angular 2</title>

    <!-- inject:css -->
    <link rel="stylesheet" href="css/bootstrap.css">
    <!-- endinject -->

  <link rel="icon" type="image/x-icon" href="favicon.ico">
  <meta name="viewport" content="width=device-width, initial-scale=1">

  <script src="app/plotly/plotly.min.js"></script>
  <script src="js/jquery.min.js"></script>
  <script src="js/bootstrap.js"></script>

</head>
<body>
  <angular2plotly-app>Loading...</angular2plotly-app>

  <script src="vendor/es6-shim/es6-shim.js"></script>
  <script src="vendor/reflect-metadata/Reflect.js"></script>
  <script src="vendor/systemjs/dist/system-polyfills.js"></script>
  <script src="vendor/systemjs/dist/system.src.js"></script>
  <script src="vendor/zone.js/dist/zone.js"></script>

  <script>
    System.import('system-config.js').then(function () {
      System.import('main');
    }).catch(console.error.bind(console));
  </script>
</body>
</html>

Using the Angular 2 Plotly Component

The Plotly component is used in the RegionComponent. This component gets the data from the server and passes it to the defined input parameters of the Plotly component. The HTML template uses the plotlychart selector. This takes four properties with the required JSON objects. The component is only rendered after the data has been retrieved from the server, using the Angular 2 ngIf directive.

<div *ngIf="PlotlyData">
    <plotlychart 
          [data]="PlotlyData"  
          [layout]="PlotlyLayout" 
          [options]="PlotlyOptions" 
          [displayRawData]="true">
    </plotlychart>
</div>

The RegionComponent adds imports for the PlotlyComponent, the required model classes and the Angular 2 service used to retrieve the chart data from the server. The PlotlyComponent is defined as a directive inside the @Component decorator. When the component is initialized, the service's GetRegionBarChartData function is used to GET the data and returns a GeographicalCountries observable. This data is then used to prepare the JSON objects for the Plotly chart. In this demo, the data is prepared for a vertical bar chart; see the Plotly JavaScript documentation for details.

import { Component, OnInit, OnDestroy } from '@angular/core';
import { Router, ActivatedRoute } from '@angular/router';
import { CORE_DIRECTIVES } from '@angular/common';

import { Observable }  from 'rxjs/Observable';

import { PlotlyComponent } from '../plotly/plotly.component';
import { SnakeDataService } from '../snake-data.service';
import { GeographicalRegion } from '../models/GeographicalRegion';
import { GeographicalCountries } from '../models/GeographicalCountries';
import { BarTrace } from '../models/BarTrace';

@Component({
  moduleId: module.id,
  selector: 'app-region',
  templateUrl: 'region.component.html',
  styleUrls: ['region.component.css'],
  directives: [CORE_DIRECTIVES, PlotlyComponent]
})

export class RegionComponent implements OnInit, OnDestroy   {

    private id: number;
    public message: string;
    private sub: any;
    public GeographicalCountries: GeographicalCountries;

    private name: string;
    public PlotlyLayout: any;
    public PlotlyData: any;
    public PlotlyOptions: any;

    constructor(
        private _snakeDataService: SnakeDataService,
        private _route: ActivatedRoute,
        private _router: Router
    ) {
        this.message = "region";
    }

    ngOnInit() {
        this.sub = this._route.params.subscribe(params => {
            this.name = params['name'];
            if (!this.GeographicalCountries) {
                this.getGetRegionBarChartData();
            }
        });
    }

    ngOnDestroy() {
        this.sub.unsubscribe();
    }

    private getGetRegionBarChartData() {
        console.log('RegionComponent:getData starting...');
        this._snakeDataService
            .GetRegionBarChartData(this.name)
            .subscribe(data => this.setReturnedData(data),
            error => console.log(error),
            () => console.log('Get GeographicalCountries completed for region'));
    }

    private setReturnedData(data: any) {
        this.GeographicalCountries = data;
        this.PlotlyLayout = {
            title: this.GeographicalCountries.RegionName + ": Number of snake bite deaths",
            height: 500,
            width: 1200
        };

        this.PlotlyData = [
            {
                x: this.GeographicalCountries.X,
                y: this.getYDatafromDatPoint(),
                name: "Number of snake bite deaths",
                type: 'bar',
                orientation: 'v'
            }
        ];

        console.log("recieved plotly data");
        console.log(this.PlotlyData);
    }

    private getYDatafromDatPoint() {
        return this.GeographicalCountries.NumberOfDeathsHighData.Y;
    }
}

The Angular 2 service is used to access the ASP.NET Core MVC service. It uses Http, Response, and Headers from the '@angular/http' import. The service is marked as an @Injectable() object. The headers are used to configure the HTTP request with the standard headers. An Observable of type T is returned, which can be consumed by the calling component. A promise could also be returned, if required.

The service is added to the Angular 2 application using component providers. It is then a singleton for the component where the providers are defined and for all child components of that component. In this demo application, the SnakeDataService is defined in the top-level AppComponent component; a minimal sketch of this registration follows the service code below.

import { Injectable } from '@angular/core';
import { Http, Response, Headers } from '@angular/http';
import 'rxjs/add/operator/map'
import { Observable } from 'rxjs/Observable';
import { Configuration } from './app.constants';
import { GeographicalRegion } from './models/GeographicalRegion';
import { GeographicalCountries } from './models/GeographicalCountries';

@Injectable()
export class SnakeDataService {

    private actionUrl: string;
    private headers: Headers;
  
    constructor(private _http: Http, private _configuration: Configuration) {
        this.actionUrl = `${_configuration.Server}api/SnakeData/`;   
    }

    private setHeaders() {
        this.headers = new Headers();
        this.headers.append('Content-Type', 'application/json');
        this.headers.append('Accept', 'application/json');
    }

    public GetGeographicalRegions = (): Observable<GeographicalRegion[]> => {
        this.setHeaders();
        return this._http.get(`${this.actionUrl}GeographicalRegions`, {
            headers: this.headers
        }).map(res => res.json());
    }

    public GetRegionBarChartData = (region: string): Observable<GeographicalCountries> => {
        this.setHeaders();
        return this._http.get(`${this.actionUrl}RegionBarChart/${region}`, {
            headers: this.headers
        }).map(res => res.json());
    }
}
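
As a minimal sketch (the selector matches the index.html shown earlier; the template URL and exact provider list are assumptions based on the description above), the top-level registration might look like:

import { Component } from '@angular/core';
import { SnakeDataService } from './snake-data.service';
import { Configuration } from './app.constants';

@Component({
    selector: 'angular2plotly-app',
    templateUrl: 'app.component.html',
    // providers here make SnakeDataService a singleton for this component and all children
    providers: [SnakeDataService, Configuration]
})
export class AppComponent {
}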

The model classes are used to define the service DTOs used in the HTTP requests. These model classes contain all the data required to produce a Plotly bar chart.

import { BarTrace } from './BarTrace';

export class GeographicalCountries {
    NumberOfCasesLowData: BarTrace;
    NumberOfCasesHighData: BarTrace;
    NumberOfDeathsLowData: BarTrace;
    NumberOfDeathsHighData: BarTrace;
    RegionName: string;
    X: string[];
}

export class BarTrace {
    Y: number[];
}

export class GeographicalRegion {
    Name: string;
    Countries: number;
    NumberOfCasesHigh: number;
    NumberOfDeathsHigh: number;
    DangerHigh: boolean;
}

ASP.NET Core MVC API using Elasticsearch

An ASP.NET Core MVC service is used as a data source for the Angular 2 application. Details on how this is setup can be found here:

Plotly charts using Angular, ASP.NET Core 1.0 and Elasticsearch

Notes:

One problem when developing with the Angular 2 router is that when something goes wrong, no logs or diagnostics exist for the router. This is a major disadvantage compared to Angular UI Router. For example, if a child component has a runtime problem, the ngOnInit method is not called for any component on the first request. This has nothing to do with the real problem, and you have no information on how to debug it. Big pain.

Links

https://angular.io/docs/ts/latest/guide/router.html

http://www.codeproject.com/Articles/1087605/Angular-typescript-configuration-and-debugging-for

https://auth0.com/blog/2016/01/25/angular-2-series-part-4-component-router-in-depth/

https://github.com/johnpapa/angular-styleguide

https://mgechev.github.io/angular2-style-guide/

https://toddmotto.com/component-events-event-emitter-output-angular-2

http://blog.thoughtram.io/angular/2016/03/21/template-driven-forms-in-angular-2.html

http://raibledesigns.com/rd/entry/getting_started_with_angular_2

https://toddmotto.com/transclusion-in-angular-2-with-ng-content

http://www.bennadel.com/blog/3062-creating-an-html-dropdown-menu-component-in-angular-2-beta-11.htm

http://asp.net-hacker.rocks/2016/04/04/aspnetcore-and-angular2-part1.html

https://plot.ly/javascript/

https://github.com/alonho/angular-plotly

https://github.com/angular/angular-cli

https://www.elastic.co/products/elasticsearch



Ben Foster: Handling unresolved tenants in SaasKit

The core component of SaasKit's multi-tenancy library is tenant-resolution middleware that attempts to identify a tenant based on information available in the current request, for example, the hostname or current user.

When a tenant cannot be found the middleware does nothing. A call to HttpContext.GetTenantContext<TTenant> or HttpContext.GetTenant<TTenant> will return null. This means that if any of your controllers/classes have a dependency on your tenant type, you'll get an exception:

Unable to resolve service for type 'AspNetMvcSample.AppTenant' while attempting to activate 'AspNetMvcSample.Controllers.HomeController'.

This is because the built in DI cannot resolve an instance of your tenant class. Internally it's just trying to pull the tenant out of HttpContext which of course returns null.

It's down to you to decide how you want to handle unresolved tenants, and you have a few options.

Provide a default instance

If a tenant cannot be resolved for the current request you can return a default instance. You can do this in your tenant resolver:

public Task<TenantContext<AppTenant>> ResolveAsync(HttpContext context)
{
    TenantContext<AppTenant> tenantContext = null;

    var tenant = tenants.FirstOrDefault(t => 
        t.Hostnames.Any(h => h.Equals(context.Request.Host.Value.ToLower())));

    if (tenant == null)
    {
        tenant = new AppTenant { Name = "Default Tenant", Hostnames = new string[0] };
    }

    tenantContext = new TenantContext<AppTenant>(tenant);

    return Task.FromResult(tenantContext);
}

With this in place, the middleware will always return a tenant, preventing your application from blowing up. Of course you'll need to ensure that your default tenant has all the necessary information your application needs to run (default connection strings etc.).

Redirecting to another site

Alternatively you may wish to redirect the user somewhere else if a tenant cannot be resolved, such as to a specific landing or on-boarding page on your marketing site.

SaasKit has built-in middleware for doing exactly this. Just specify the redirect URL and whether the redirect should be permanent (301):

app.UseMultitenancy<AppTenant>();
app.UseMiddleware<TenantUnresolvedRedirectMiddleware<AppTenant>>("http://saaskit.net", false);

Currently there isn't an extension method for registering the middleware so you'll have to use the UseMiddleware method and provide the arguments as above.

Now if I browse to my application and no matching tenant is found I'll be redirected to the SaasKit home page.

Redirecting to a page in the same site

If you want to redirect to a page in the same site the easiest approach is to redirect to a static HTML page. Create a standard HTML page and make sure you place it in your wwwroot folder. Then add the redirect middleware:

app.UseMiddleware<TenantUnresolvedRedirectMiddleware<AppTenant>>("/tenantnotfound.html", false);

Providing you're using the Static Files middleware and that it is registered before SaasKit, it will serve the specified HTML page and short-circuit the request.

If you want to redirect to a dynamic page (one served by ASP.NET) then you can't use the provided redirect middleware, as you'll end up in a redirect loop since the middleware redirect is invoked on every request.

A solution to this is to have your tenant resolver return a default tenant (see above), then add some middleware that checks whether the current tenant instance is the default tenant and, if so, performs a redirect:

app.Map(
    new PathString("/onboarding"),
    branch => branch.Run(async ctx =>
    {
        await ctx.Response.WriteAsync("Onboarding");
    })
);

app.UseMultitenancy<AppTenant>();

app.Use(async (ctx, next) =>
{
    if (ctx.GetTenant<AppTenant>().Name == "Default Tenant")
    {
        ctx.Response.Redirect("/onboarding");
    } 
    else
    {
        await next();
    }
});

app.UseMiddleware<LogTenantMiddleware>();

The above example is quite primitive but illustrates this behaviour. First I created an "Onboarding" page at /onboarding using a simple middleware delegate. If you're redirecting to an MVC controller you don't need to do this.

Then (after the tenant resolution middleware has been registered) I add another middleware delegate that checks the current tenant and if it's the default, redirects them to my on-boarding page.

Doing something else

If you want to do something else when a tenant cannot be resolved just create a middleware component and obtain the current tenant context. I chose not to build this into the framework because it's really this simple:

public class CustomTenantMiddleware
{
    RequestDelegate next;

    public CustomTenantMiddleware(RequestDelegate next)
    {
        this.next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        var tenantContext = context.GetTenantContext<AppTenant>();

        if (tenantContext == null)
        {
            // do whatever you want
        }

        // don't forget to invoke the rest of the pipeline
        await next(context);
    }
}

I've updated the samples on GitHub.


Ben Foster: How to perform partial resource updates with JSON Patch and ASP.NET Core

Something that is becoming more common in HTTP APIs is the ability to perform partial updates to resources rather than replacing the entire resource with a PUT request.

JSON Patch is a format for describing changes to a JSON document. It can be used to avoid sending a whole document when only a part has changed. When used in combination with the HTTP PATCH method it allows partial updates for HTTP APIs in a standards compliant way.

The specification defines the following operations:

  • Add - Adds a value to an object or inserts it into an array
  • Remove - Removes a value from an object or array
  • Replace - Replaces a value. Equivalent to a "remove" followed by an "add"
  • Copy - Copy a value from one location to another within the JSON document. Both from and path are JSON Pointers.
  • Move - Move a value from one location to the other. Both from and path are JSON Pointers.
  • Test - Tests that the specified value is set in the document. If the test fails then the patch as a whole should not apply.
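
For example, a single patch document combining a few of these operations might look like this (illustrative only, not tied to the sample class below):

[
    { "op": "replace", "path": "/firstname", "value": "Benjamin" },
    { "op": "copy", "from": "/firstname", "path": "/nickname" },
    { "op": "test", "path": "/age", "value": 30 }
]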

Patching resources in ASP.NET Core

A relatively undocumented feature in ASP.NET Core is the support for JSON Patch, which I found whilst browsing through the ASP.NET repos on GitHub.

Everything other than the "test" operation is supported. Given that C# is a statically-typed language, we get slightly different behaviour depending on the object being patched. For example, a "remove" operation will not actually remove the property from an instance of a C# class.

Getting started

To get started we'll add a reference to Microsoft.AspNet.JsonPatch in project.json:

"dependencies": {
  "Microsoft.AspNet.IISPlatformHandler": "1.0.0-rc1-final",
  "Microsoft.AspNet.Mvc": "6.0.0-rc1-final",
  "Microsoft.AspNet.Server.Kestrel": "1.0.0-rc1-final",
  "Microsoft.AspNet.StaticFiles": "1.0.0-rc1-final",
  "Microsoft.AspNet.JsonPatch": "1.0.0-rc1-final",
},

To test out the various Patch operations I'm going to use the following class:

public class Contact
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }

    public List<string> Links { get; set; }
}

This will allow us to test patch operations against properties and arrays.

I then created a simple HTTP API that accepts a patch in the form of a JsonPatchDocument (part of Microsoft.AspNet.JsonPatch) and applies it to an existing contact. We then return the original and patched versions so we can view what has changed:

[Route("api/[controller]")]
public class ContactsController : Controller
{
    private Contact contact = new Contact
    {
        FirstName = "Ben",
        LastName = "Foster",
        Age = 30,
        Links = new List<string> { "http://benfoster.io" }
    };

    [HttpPatch]
    public IActionResult Patch([FromBody]JsonPatchDocument<Contact> patch)
    {
        var patched = contact.Copy();
        patch.ApplyTo(patched, ModelState);

        if (!ModelState.IsValid)
        {
            return new BadRequestObjectResult(ModelState);
        }

        var model = new
        {
            original = contact,
            patched = patched
        };

        return Ok(model);
    }
}
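
Note that Copy() is not a framework method; a minimal sketch of such an extension, assuming a Json.NET round-trip clone, might be:

public static class ObjectExtensions
{
    // naive deep-clone via JSON serialization - fine for simple DTOs like Contact
    public static T Copy<T>(this T source)
    {
        var json = Newtonsoft.Json.JsonConvert.SerializeObject(source);
        return Newtonsoft.Json.JsonConvert.DeserializeObject<T>(json);
    }
}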

With our API ready we can now test various JSON Patch operations by sending PATCH requests to /api/contacts.

Setting property values

To set a property value we send a replace operation. Note that we always send an array of operations, even when only a single operation is being sent. Let's update the FirstName and Age properties:

PATCH /api/contacts
[
    {
      "op": "replace",
      "path": "/firstname",
      "value": "Benjamin"
    },
    {
      "op": "replace",
      "path": "/age",
      "value": "29"
    }
]

Response:

{
    "original": {
        "FirstName": "Ben",
        "LastName": "Foster",
        "Age": 30,
        "Links": [
            "http://benfoster.io"
        ]
    },
    "patched": {
        "FirstName": "Benjamin",
        "LastName": "Foster",
        "Age": 29,
        "Links": [
            "http://benfoster.io"
        ]
    }
}

You can see that the patched contact has had its FirstName and Age properties updated.

If you look back at the code for the controller you'll see we're passing the ModelState to the patch document. This will populate ModelState with any patch errors. For example, let's try and replace a property that does not exist:

PATCH /api/contacts
[
    {
      "op": "replace",
      "path": "/foo",
      "value": "Bar"
    }
]

Response:

{
    "Contact": [
        "The property at path '/foo' could not be removed."
    ]
}

Resetting property values

To reset a property value we use the remove operation. Again this is because we can't actually remove the property from the C# class definition, so instead the ASP.NET Core implementation will set the property back to its default value, i.e. default(T).

PATCH /api/contacts
[
    {
      "op": "remove",
      "path": "/lastname"
    },
    {
      "op": "remove",
      "path": "/age"
    },
    {
      "op": "remove",
      "path": "/links"
    }
]

Response:

{
    "original": {
        "FirstName": "Ben",
        "LastName": "Foster",
        "Age": 30,
        "Links": [
            "http://benfoster.io"
        ]
    },
    "patched": {
        "FirstName": "Ben",
        "LastName": null,
        "Age": 0,
        "Links": null
    }
}

Adding items to an array

Suppose I want to add additional links to my contact. To add items to an array we use the add operation and include the index of the item in the path. To add an item to the end of the array use the - operator:

PATCH /api/contacts
[
    {
      "op": "add",
      "path": "/links/-",
      "value": "http://twitter.com/benfosterdev"
    }
]

Response:

{
    "original": {
        "FirstName": "Ben",
        "LastName": "Foster",
        "Age": 30,
        "Links": [
            "http://benfoster.io"
        ]
    },
    "patched": {
        "FirstName": "Ben",
        "LastName": "Foster",
        "Age": 30,
        "Links": [
            "http://benfoster.io",
            "http://twitter.com/benfosterdev"
        ]
    }
}

If we want to add a link to the beginning of the array instead, we can specify an index:

PATCH /api/contacts
[
    {
      "op": "add",
      "path": "/links/0",
      "value": "http://twitter.com/benfosterdev"
    }
]

Response:

{
    "original": {
        "FirstName": "Ben",
        "LastName": "Foster",
        "Age": 30,
        "Links": [
            "http://benfoster.io"
        ]
    },
    "patched": {
        "FirstName": "Ben",
        "LastName": "Foster",
        "Age": 30,
        "Links": [
            "http://twitter.com/benfosterdev",
            "http://benfoster.io"
        ]
    }
}

Note that if we provide an index that does not exist we'll get an error:

PATCH /api/contacts
[
    {
      "op": "add",
      "path": "/links/5",
      "value": "http://twitter.com/benfosterdev"
    }
]

Response:

{
    "Contact": [
        "For operation 'add' on array property at path '/links/5', the index is larger than the array size."
    ]
}

Removing items from an array

To remove items from an array we use the remove operation and, similar to above, provide the index of the item we want to remove. Let's remove the first link from our contact:

PATCH /api/contacts
[
    {
      "op": "remove",
      "path": "/links/0"
    }
]

Response:

{
    "original": {
        "FirstName": "Ben",
        "LastName": "Foster",
        "Age": 30,
        "Links": [
            "http://benfoster.io"
        ]
    },
    "patched": {
        "FirstName": "Ben",
        "LastName": "Foster",
        "Age": 30,
        "Links": []
    }
}

Replacing items in an array

Suppose instead of just removing the first link from our contact we wanted to replace it. This time we use the replace operation and provide the path of the item to replace:

PATCH /api/contacts
[
    {
      "op": "replace",
      "path": "/links/0",
      "value": "http://github.com/benfoster"
    }
]

Response:

{
    "original": {
        "FirstName": "Ben",
        "LastName": "Foster",
        "Age": 30,
        "Links": [
            "http://benfoster.io"
        ]
    },
    "patched": {
        "FirstName": "Ben",
        "LastName": "Foster",
        "Age": 30,
        "Links": [
            "http://github.com/benfoster"
        ]
    }
}

Moving items in an array

Finally, let's test out the move operation to, you guessed it, move items in an array. This time we specify the from path of the item we wish to move and the path where it should be moved to.

PATCH /api/contacts
[
    {
      "op": "add",
      "path": "/links/-",
      "value": "http://twitter.com/benfosterdev"
    },
    {
      "op": "add",
      "path": "/links/-",
      "value": "http://github.com/benfoster"
    },
    {
      "op": "move",
      "from": "/links/2",
      "path": "/links/0"
    }
]

Response:

{
    "original": {
        "FirstName": "Ben",
        "LastName": "Foster",
        "Age": 30,
        "Links": [
            "http://benfoster.io"
        ]
    },
    "patched": {
        "FirstName": "Ben",
        "LastName": "Foster",
        "Age": 30,
        "Links": [
            "http://github.com/benfoster",
            "http://benfoster.io",
            "http://twitter.com/benfosterdev"
        ]
    }
}

So what happened here? Well, first we added two new links with two add operations, and then we moved the last link (the GitHub one) to the beginning of the array with a move operation.

Patching dynamic values

One nice feature of the JSON Patch library is that it also supports C# dynamic objects, making it possible to use add/remove operations to add or remove properties. It even supports nested objects:

[HttpPatch("dynamic")]
public IActionResult Patch([FromBody]JsonPatchDocument patch)
{
    dynamic obj = new ExpandoObject();
    patch.ApplyTo(obj);

    return Ok(obj);
}

Let's try and recreate our contact using JSON Patch operations:

PATCH /api/contacts/dynamic
[
    {
      "op": "add",
      "path": "/FirstName",
      "value": "Ben"
    },
    {
      "op": "add",
      "path": "/LastName",
      "value": "Ben"
    },
    {
      "op": "add",
      "path": "/Links",
      "value": [{
        "url": "http://benfoster.io",
        "title": "Blog"
      }]
    },
    {
      "op": "add",
      "path": "/Links/-",
      "value": [{
        "url": "http://twitter.com/benfosterdev",
        "title": "Twitter"
      }]
    }
]

Response:

{
    "FirstName": "Ben",
    "LastName": "Ben",
    "Links": [
        {
            "url": "http://benfoster.io",
            "title": "Blog"
        },
        [
            {
                "url": "http://twitter.com/benfosterdev",
                "title": "Twitter"
            }
        ]
    ]
}

Wrapping up

Use the ASP.NET Core JSON Patch library to support partial updates (patches) in your APIs using JSON Patch operations.

Nancy Support

Shortly after publishing this post, Nancy core contributor and evangelist Jonathan Channon added support for the ASP.NET Core JSON Patch library to Nancy. He moves fast, real fast.


Ben Foster: Customising model-binding conventions in ASP.NET Core

A pattern I use when building Web APIs is to create commands to represent an API operation and models to represent resources or results. We share these "common" objects with our .NET client so we can be sure we're using the same parameter names/types.

Here's an excerpt from Fabrik's API for creating a project:

public HttpResponseMessage Post(int siteId, AddProjectCommand command)
{
    var project = new CMS.Domain.Project(
        session.GetSiteId(siteId),
        command.Title,
        command.Slug,
        command.Summary,
        command.ContentType,
        command.Content,
        command.Template,
        command.Tags,
        command.Published,
        command.Private);

    session.Store(project); 

    var model = CreateProjectModel(project);
    var link = Url.Link(RouteNames.DefaultRoute, new { controller = "projects", siteId = siteId, id = project.Id.ToIntId() });

    return Created(model, new Uri(link));
}

We also use commands for GET operations that have multiple parameters such as search endpoints. So instead of:

public IActionResult GetProjects(string searchTerm = null, int page = 1, int pageSize = 10) 
{

}

We have a GetProjectsCommand:

public class GetProjectsCommand 
{
    public string SearchTerm { get; set; }
    [MinValue(1, ErrorMessage = "Page must be greater than 0.")]
    public int Page { get; set; } = 1;
    public int PageSize { get; set; } = 20;
}

This provides a single place to encapsulate our default values and validation rules, keeping our controllers nice and lean.

Model-binding in ASP.NET Core MVC

To bind complex types from query strings in ASP.NET Web API we had to change the parameter binding rules, because the default was to bind complex types from the HTTP request body.

When implementing the above pattern in ASP.NET Core I was pleasantly surprised to see that the following worked out of the box:

// GET: api/values
[HttpGet]
public IEnumerable<string> Get(GetValuesCommand command)
{

}

I thought that perhaps the framework detected that this was an HTTP GET request and therefore bound the parameter values from the query string instead.

Actually this is not the case - in ASP.NET Core, complex types are not bound from the request body by default. Instead you have to opt-in to body-based binding with the FromBodyAttribute:

// POST api/values
[HttpPost]
public void Post([FromBody]AddValueCommand command)
{
}

This seems an odd default given that (in my experience) binding complex types from the request body is far more common.

In any case, we can customise the default model-binding behaviour by providing a convention:

public class CommandParameterBindingConvention : IActionModelConvention
{
    public void Apply(ActionModel action)
    {
        if (action == null)
        {
            throw new ArgumentNullException(nameof(action));
        }

        foreach (var parameter in action.Parameters)
        {
            if (typeof(ICommand).IsAssignableFrom(parameter.ParameterInfo.ParameterType))
            {
                parameter.BindingInfo = parameter.BindingInfo ?? new BindingInfo();
                parameter.BindingInfo.BindingSource = BindingSource.Body;
            }
        }
    }
}

Which is registered like so:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc(options =>
    {
        options.Conventions.Add(new CommandParameterBindingConvention());
    });
}

This convention checks to see if the parameter type implements ICommand (a marker interface I created) and if so, instructs the framework to bind the values for this parameter from the request body.
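
The marker interface itself carries no members; presumably something like:

public interface ICommand
{
}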

All I have to do then is update my command with this interface:

public class AddValueCommand : ICommand
{
    public string Value { get; set; }
}

Then I can drop the unnecessary [FromBody] attribute:

// POST api/values
[HttpPost]
public void Post(AddValueCommand command)
{
}

Much better!

Changing the default behaviour

If you'd like to change the default behaviour to always bind complex types from the body, here's a gist for that courtesy of Tugberk.
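
If you just want the idea, a sketch of such a convention might look like the following; the exact complex-type checks here are assumptions, and the linked gist remains the canonical version:

public class ComplexTypeFromBodyConvention : IActionModelConvention
{
    public void Apply(ActionModel action)
    {
        foreach (var parameter in action.Parameters)
        {
            // skip parameters that already specify a binding source
            if (parameter.BindingInfo?.BindingSource != null) continue;

            // treat anything that isn't a primitive, string or decimal as a complex type
            var type = parameter.ParameterInfo.ParameterType;
            if (!type.GetTypeInfo().IsPrimitive && type != typeof(string) && type != typeof(decimal))
            {
                parameter.BindingInfo = parameter.BindingInfo ?? new BindingInfo();
                parameter.BindingInfo.BindingSource = BindingSource.Body;
            }
        }
    }
}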


Damien Bod: Angular2 secure file download without using an access token in URL or cookies

This article shows how an Angular 2 SPA client can download files using an access token without passing it to the resource server in the URL. The access token is only used in the HTTP Header.

If the access token is sent in the URL, it will be saved in server logs, routing logs and browser history, or copy/pasted by users and sent to other users in emails etc. If the user does not log out after using the application, the access token will remain valid until it times out. Because of this, it is better not to send an access token in the URL.

The article shows how this could be implemented without using cookies and without sending the access token in the URL. The application is implemented using OpenID Connect Implicit Flow with IdentityServer4 and ASP.NET Core.

Code: https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow

2016.06.26: Updated Angular 2 to rc3, new Angular 2 routing, IdentityServer4 beta 3, connect/endsession implemented
2016.06.21: Updated to Angular 2 rc2
2016.05.22: Updated to ASP.NET Core RC2 dotnet
2016.05.07: Updated to Angular 2 rc1

Angular 2 Client

The Angular 2 application uses the SecureFileService to download the files securely. The SecurityService is injected into this service using dependency injection, and the access token can be accessed through it. The DownloadFile function implements the download logic. The access token is added to the HTTP request headers using the Headers class from '@angular/http'. This is done in the setHeaders function.

The service calls GenerateOneTimeAccessToken with the file id. The HTTP request returns an access id which can be used once within 30 seconds. This access id is then used in the URL of a second HTTP request which downloads the required file. It does not matter if this URL is copied, as the access id cannot be reused.

import { Injectable } from '@angular/core';
import { Http, Response, Headers } from '@angular/http';
import 'rxjs/add/operator/map';
import { Observable } from 'rxjs/Observable';
import { Configuration } from '../app.constants';
import { SecurityService } from '../services/SecurityService';

@Injectable()
export class SecureFileService {

    private actionUrl: string;
    private fileExplorerUrl: string;
    private headers: Headers;

    constructor(private _http: Http, private _configuration: Configuration, private _securityService: SecurityService) {
        this.actionUrl = `${_configuration.FileServer}api/Download/`; 
        this.fileExplorerUrl = `${_configuration.FileServer }api/FileExplorer/`;    
    }

    public DownloadFile(id: string) {
        this.setHeaders();
        let oneTimeAccessToken = "";
        this._http.get(`${this.actionUrl}GenerateOneTimeAccessToken/${id}`, {
            headers: this.headers
        }).map(
            res => res.text()
            ).subscribe(
            data => {
                oneTimeAccessToken = data;
                
            },
            error => this._securityService.HandleError(error),
            () => {
                console.log(`open DownloadFile for file ${id}: ${this.actionUrl}${oneTimeAccessToken}`);
                window.open(`${this.actionUrl}${oneTimeAccessToken}`);
            });
    }

    public GetListOfFiles = (): Observable<string[]> => {
        this.setHeaders();
        return this._http.get(this.fileExplorerUrl, {
            headers: this.headers
        }).map(res => res.json());
    }

    private setHeaders() {
        this.headers = new Headers();
        this.headers.append('Content-Type', 'application/json');
        this.headers.append('Accept', 'application/json');

        var token = this._securityService.GetToken();

        if (token !== "") {
            this.headers.append('Authorization', 'Bearer ' + token);
        }
    }
}

The DownloadFileById function in the SecureFilesComponent component is used to call the service DownloadFile(id) function which downloads the file as explained above.

import { Component, OnInit } from '@angular/core';
import { CORE_DIRECTIVES } from '@angular/common';
import { SecureFileService } from './SecureFileService';
import { SecurityService } from '../services/SecurityService';
import { Observable }       from 'rxjs/Observable';

@Component({
    selector: 'securefiles',
    templateUrl: 'app/securefile/securefiles.component.html',
    directives: [CORE_DIRECTIVES],
    providers: [SecureFileService]
})

export class SecureFilesComponent implements OnInit {

    public message: string;
    public Files: string[];
   
    constructor(private _secureFileService: SecureFileService, public securityService: SecurityService) {
        this.message = "Secure Files download";
    }

    ngOnInit() {
      this.getData();
    }

    public DownloadFileById(id: any) {
        this._secureFileService.DownloadFile(id);
    }

    private getData() {
        this._secureFileService.GetListOfFiles()
            .subscribe(data => this.Files = data,
            error => this.securityService.HandleError(error),
            () => console.log('Get all completed'));
    }
}

The component HTML template creates a list of files which can be downloaded by the current authorized user and adds these to the browser as links.

<div class="col-md-12" *ngIf="securityService.IsAuthorized" >
    <div class="panel panel-default">
        <div class="panel-heading">
            <h3 class="panel-title">{{message}}</h3>
        </div>
        <div class="panel-body">
            <table class="table">
                <thead>
                    <tr>
                        <th>Name</th>
                    </tr>
                </thead>
                <tbody>
                    <tr style="height:20px;" *ngFor="let file of Files" >
                        <td><a (click)="DownloadFileById(file)">Download {{file}}</a></td>
                    </tr>
                </tbody>
            </table>

        </div>
    </div>
</div>

The SecurityService is used to log in and get the access token and the id token from IdentityServer4. This is explained in the previous post, Angular2 OpenID Connect Implicit Flow with IdentityServer4.

Implementing the File Server

The server API implements a GenerateOneTimeAccessToken method to start the download. This method authorizes using policies and checks if the file id exists. A HttpNotFound is returned if the file id does not exist. It then validates that the file exists on the file server. The method also checks if the user is an administrator, and uses this to validate that the user is authorized to access the file. The AddFileIdForUseOnceAccessId method is then used to generate a use once access id for this file.

[Authorize("securedFilesUser")]
[HttpGet("GenerateOneTimeAccessToken/{id}")]
public IActionResult GenerateOneTimeAccessToken(string id)
{
	if (!_securedFileProvider.FileIdExists(id))
	{
		return HttpNotFound($"File id does not exist: {id}");
	}

	var filePath = $"{_appEnvironment.ApplicationBasePath}/SecuredFileShare/{id}";
	if (!System.IO.File.Exists(filePath))
	{
		return HttpNotFound($"File does not exist: {id}");
	}

	var adminClaim = User.Claims.FirstOrDefault(x => x.Type == "role" && x.Value == "securedFiles.admin");
	if (_securedFileProvider.HasUserClaimToAccessFile(id, adminClaim != null))
	{
	// generate a one time access token for this file
		var oneTimeToken = _securedFileProvider.AddFileIdForUseOnceAccessId(filePath);
		return Ok(oneTimeToken);
	}

	// returning a HTTP Forbidden result.
	return new HttpStatusCodeResult(403);
}

The download file API is used with the use once access id as its parameter. This method uses the access id to retrieve the file path via the GetFileIdForUseOnceAccessId method. If the access id is valid, the file is returned using a FileContentResult.

[AllowAnonymous]
[HttpGet("{accessId}")]
public IActionResult Get(string accessId)
{
	var filePath = _securedFileProvider.GetFileIdForUseOnceAccessId(accessId);
	if(!string.IsNullOrEmpty(filePath))
	{
		var fileContents = System.IO.File.ReadAllBytes(filePath);
		return new FileContentResult(fileContents, "application/octet-stream");
	}

	// returning a HTTP Unauthorized result.
	return new HttpStatusCodeResult(401);
}

The UseOnceAccessIdService is responsible for generating the access id and validating it when it is used. The AddFileIdForUseOnceAccessId method creates a new UseOnceAccessId object, which generates a random string used as the download access id. The object saves the creation time stamp as a property, together with the file path, which will be available for download for 30 seconds. This object is then saved to an in-memory list. The UseOnceAccessIdService service is registered as a singleton in the startup class.

The GetFileIdForUseOnceAccessId method removes any objects which are older than 30 seconds. It then retrieves the UseOnceAccessId object, if it still exists, returns the file path from it and deletes the object. This prevents the file from being downloaded twice using the same access id.

using System;
using System.Collections.Generic;
using System.Linq;

namespace ResourceFileServer.Providers
{
    public class UseOnceAccessIdService
    {
        /// <summary>
        /// One time tokens live for a max of 30 seconds
        /// </summary>
        private double _timeToLive = 30.0;
        private static object lockObject = new object();

        private List<UseOnceAccessId> _useOnceAccessIds = new List<UseOnceAccessId>();

        public string GetFileIdForUseOnceAccessId(string useOnceAccessId)
        {
            var fileId = string.Empty;

            lock(lockObject) {

                // Max 30 seconds to start download after requesting one time token.
                _useOnceAccessIds.RemoveAll(t => t.Created < DateTime.UtcNow.AddSeconds(-_timeToLive));

                var item = _useOnceAccessIds.FirstOrDefault(t => t.AccessId == useOnceAccessId);
                if (item != null)
                {
                    fileId = item.FileId;
                    _useOnceAccessIds.Remove(item);
                }
            }

            return fileId;
        }

        public string AddFileIdForUseOnceAccessId(string filePath)
        {
            var useOnceAccessId = new UseOnceAccessId(filePath);
            lock (lockObject)
            {
                _useOnceAccessIds.Add(useOnceAccessId);
            }
            return useOnceAccessId.AccessId;
        }
    }
}
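
As noted above, the service must be registered as a singleton so that the in-memory list survives across requests. A minimal sketch of this registration in the startup class (the surrounding ConfigureServices method is the standard ASP.NET Core one):

public void ConfigureServices(IServiceCollection services)
{
    // Singleton: one instance for the whole application, so the
    // in-memory list of use once access ids is shared by all requests.
    services.AddSingleton<UseOnceAccessIdService>();

    services.AddMvc();
}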

The UseOnceAccessId object is used to save a request for a download and generates the random access id string in the constructor.

using System;
using System.Security.Cryptography;

namespace ResourceFileServer.Providers
{
    internal class UseOnceAccessId
    {
        public UseOnceAccessId(string fileId)
        {
            Created = DateTime.UtcNow;
            AccessId = CreateAccessId();
            FileId = fileId;
        }

        public DateTime Created { get; }

        public string AccessId { get; }

        public string FileId { get; }

        private string CreateAccessId()
        {
            // A cryptographically random value combined with a GUID, so
            // that the access id cannot be guessed.
            var randomBytes = new byte[16];
            using (var rng = RandomNumberGenerator.Create())
            {
                rng.GetBytes(randomBytes);
            }

            return BitConverter.ToString(randomBytes).Replace("-", string.Empty) + Guid.NewGuid().ToString();
        }
    }
}

Now the files can be downloaded from the resource file server without sending the access token in the URL and without using cookies.

secureNoAccessInUrl_01

Notes: Thanks to Alistair for pointing this out in the comments of the previous post. It would be nice if IdentityServer4 could support this with 'use once tokens'; then the standard authorization middleware could be used on the resource server.

Links

http://openid.net/specs/openid-connect-core-1_0.html

http://openid.net/specs/openid-connect-implicit-1_0.html

https://github.com/aspnet/Security

https://github.com/IdentityServer/IdentityServer4.AccessTokenValidation

http://www.filipekberg.se/2013/07/12/are-you-serving-files-insecurely-in-asp-net/

Announcing IdentityServer for ASP.NET 5 and .NET Core

https://github.com/IdentityServer/IdentityServer4

https://github.com/IdentityServer/IdentityServer4.Samples

The State of Security in ASP.NET 5 and MVC 6: OAuth 2.0, OpenID Connect and IdentityServer

http://connect2id.com/learn/openid-connect

https://github.com/FabianGosebrink/Angular2-ASPNETCore-SignalR-Demo

Getting Started with ASP NET Core 1 and Angular 2 in Visual Studio 2015

http://benjii.me/2016/01/angular2-routing-with-asp-net-core-1/

http://tattoocoder.azurewebsites.net/angular2-aspnet5-spa-template/

Cross-platform Single Page Applications with ASP.NET Core 1.0, Angular 2 & TypeScript



Damien Bod: Angular 2 child routing and components

This article shows how Angular 2 child routing can be set up together with Angular 2 components. An Angular 2 component can contain its own routing, which makes it easy to reuse or test the components in an application.

Code: Angular 2 app host with ASP.NET Core

2016.06.26: Updated Angular 2 to rc3, new Angular 2 routing
2016.06.21: Updated to Angular 2 rc2
2016.05.07: Updated to Angular 2 rc1

An Angular 2 app bootstraps a single main component and the routing is usually defined in this component. To use Angular 2 routing, '@angular/router' is imported. The child routing is defined in the children property of a route definition, which tells the Angular 2 application that the child component contains its own routing. The main application routing does not need to know anything about this.

The main routes are defined in the app.routes.ts file.

import { provideRouter, RouterConfig } from '@angular/router';
import { ForbiddenComponent } from './forbidden/forbidden.component';
import { HomeComponent } from './home/home.component';
import { UnauthorizedComponent } from './unauthorized/unauthorized.component';
import { SecurityService } from './services/SecurityService';
import { SecureFilesComponent } from './securefile/securefiles.component';
import { DataEventRecordsRoutes } from './dataeventrecords/dataeventrecords.routes';

export const routes: RouterConfig = [
    { path: '', component: HomeComponent },
    { path: 'Forbidden', component: ForbiddenComponent },
    { path: 'Unauthorized', component: UnauthorizedComponent },
    { path: 'securefile/securefiles',  component: SecureFilesComponent },
    ...DataEventRecordsRoutes
];

export const APP_ROUTER_PROVIDERS = [
    provideRouter(routes)
];

This is then added to the application boot.ts using the APP_ROUTER_PROVIDERS.

import { bootstrap } from '@angular/platform-browser-dynamic';
import { HTTP_PROVIDERS } from '@angular/http';
import { AppComponent } from './app.component';
import { Configuration } from './app.constants';
import { SecurityService } from './services/SecurityService';
import { APP_ROUTER_PROVIDERS } from './app.routes';

bootstrap(AppComponent, [
    APP_ROUTER_PROVIDERS,
    HTTP_PROVIDERS,
    Configuration,
    SecurityService
]);

The routes can then be used in the app without any further definitions.

import {Component} from '@angular/core';
import { ROUTER_DIRECTIVES} from '@angular/router';
import {ForbiddenComponent} from './forbidden/forbidden.component';
import {UnauthorizedComponent} from './unauthorized/unauthorized.component';
import {SecurityService} from './services/SecurityService';
import {SecureFilesComponent} from './securefile/securefiles.component';

import {DataEventRecordsComponent} from './dataeventrecords/dataeventrecords.component';
import { DataEventRecordsService } from './dataeventrecords/DataEventRecordsService';

@Component({
    selector: 'my-app',
    templateUrl: 'app/app.component.html',
    styleUrls: ['app/app.component.css'],
    directives: [ROUTER_DIRECTIVES],
    providers: [
        DataEventRecordsService
    ]
})

export class AppComponent {

    constructor(public securityService: SecurityService) {  
    }

    ngOnInit() {
        console.log("ngOnInit _securityService.AuthorizedCallback");

        if (window.location.hash) {
            this.securityService.AuthorizedCallback();
        }      
    }

    public Login() {
        console.log("Do login logic");
        this.securityService.Authorize(); 
    }

    public Logout() {
        console.log("Do logout logic");
        this.securityService.Logoff();
    }
}

The corresponding HTML template for the main component contains the router-outlet directive. This is where the child routing content will be displayed. “[routerLink]” bindings can be used to define routing links.

<div class="container" style="margin-top: 15px;">
    <!-- Static navbar -->
    <nav class="navbar navbar-default">
        <div class="container-fluid">
            <div class="navbar-header">
                <button aria-controls="navbar" aria-expanded="false" data-target="#navbar" data-toggle="collapse" class="navbar-toggle collapsed" type="button">
                    <span class="sr-only">Toggle navigation</span>
                    <span class="icon-bar"></span>
                    <span class="icon-bar"></span>
                    <span class="icon-bar"></span>
                </button>
                <a [routerLink]="['/dataeventrecords']" class="navbar-brand"><img src="images/damienbod.jpg" height="40" style="margin-top:-10px;" /></a>
            </div>
            <div class="navbar-collapse collapse" id="navbar">
                <ul class="nav navbar-nav">
                    <li><a [routerLink]="['/dataeventrecords']">DataEventRecords</a></li>
                    <li><a [routerLink]="['/dataeventrecords/create']">Create DataEventRecord</a></li>
                    <li><a [routerLink]="['/securefile/securefiles']">Secured Files Download</a></li>

                    <li><a class="navigationLinkButton" *ngIf="!securityService.IsAuthorized" (click)="Login()">Login</a></li>
                    <li><a class="navigationLinkButton" *ngIf="securityService.IsAuthorized" (click)="Logout()">Logout</a></li>
              
                </ul>
            </div><!--/.nav-collapse -->
        </div><!--/.container-fluid -->
    </nav>

    <router-outlet></router-outlet>

</div>

A child component in Angular 2 can also contain its own routes. These can be defined in a routes.ts file and then added to the parent routing definition using the spread (…) operator.

import { provideRouter, RouterConfig } from '@angular/router';
import { DataEventRecordsComponent } from '../dataeventrecords/dataeventrecords.component';
import { DataEventRecordsListComponent } from '../dataeventrecords/dataeventrecords-list.component';
import { DataEventRecordsCreateComponent } from '../dataeventrecords/dataeventrecords-create.component';
import { DataEventRecordsEditComponent } from '../dataeventrecords/dataeventrecords-edit.component';

export const DataEventRecordsRoutes: RouterConfig = [
    {
        path: 'dataeventrecords',
        component: DataEventRecordsComponent,
        children: [
            {
                path: '',
                component: DataEventRecordsListComponent,
            },
            {
                path: 'create',
                component: DataEventRecordsCreateComponent
            },
            {
                path: 'edit/:id',
                component: DataEventRecordsEditComponent
            }
        ]
    }
];

The HTML template for this root component just defines the router-outlet directive. This could also be done inline in the TypeScript file.

<router-outlet></router-outlet>

The list component is used as the default component inside the DataEventRecordsComponent. It gets a list of DataEventRecord items using the DataEventRecordsService and displays them in the UI using its HTML template.

import { Component, OnInit } from '@angular/core';
import { CORE_DIRECTIVES } from '@angular/common';
import { SecurityService } from '../services/SecurityService';
import { Observable }       from 'rxjs/Observable';
import { Router, ROUTER_DIRECTIVES } from '@angular/router';

import { DataEventRecordsService } from '../dataeventrecords/DataEventRecordsService';
import { DataEventRecord } from './models/DataEventRecord';

@Component({
    selector: 'dataeventrecords-list',
    templateUrl: 'app/dataeventrecords/dataeventrecords-list.component.html',
    directives: [CORE_DIRECTIVES, ROUTER_DIRECTIVES]
})

export class DataEventRecordsListComponent implements OnInit {

    public message: string;
    public DataEventRecords: DataEventRecord[];
   
    constructor(
        private _dataEventRecordsService: DataEventRecordsService,
        public securityService: SecurityService,
        private _router: Router) {
        this.message = "DataEventRecords";
    }

    ngOnInit() {
        this.getData();
    }

    public Delete(id: any) {
        console.log("Try to delete" + id);
        this._dataEventRecordsService.Delete(id)
            .subscribe((() => console.log("subscribed")),
            error => this.securityService.HandleError(error),
            () => this.getData());
    }

    private getData() {
        console.log('DataEventRecordsListComponent:getData starting...');
        this._dataEventRecordsService
            .GetAll()
            .subscribe(data => this.DataEventRecords = data,
            error => this.securityService.HandleError(error),
            () => console.log('Get all completed'));
    }

}

The template for the list component uses "[routerLink]" bindings so that each item can be opened and updated using the edit child component.

<div class="col-md-12" *ngIf="securityService.IsAuthorized" >
    <div class="panel panel-default">
        <div class="panel-heading">
            <h3 class="panel-title">{{message}}</h3>
        </div>
        <div class="panel-body">
            <table class="table">
                <thead>
                    <tr>
                        <th>Name</th>
                        <th>Timestamp</th>
                    </tr>
                </thead>
                <tbody>
                    <tr style="height:20px;" *ngFor="let dataEventRecord of DataEventRecords" >
                        <td>
                            <a *ngIf="securityService.HasAdminRole" href="" [routerLink]="['/dataeventrecords/edit/' + dataEventRecord.Id]" >{{dataEventRecord.Name}}</a>
                            <span *ngIf="!securityService.HasAdminRole">{{dataEventRecord.Name}}</span>
                        </td>
                        <td>{{dataEventRecord.Timestamp}}</td>
                        <td><button (click)="Delete(dataEventRecord.Id)">Delete</button></td>
                    </tr>
                </tbody>
            </table>

        </div>
    </div>
</div>

The update or edit component uses the ActivatedRoute so that the id of the item can be read from the URL. This id is then used to get the item from the ASP.NET Core MVC service using the DataEventRecordsService. This is implemented in the ngOnInit and ngOnDestroy functions.

import { Component, OnInit, OnDestroy } from '@angular/core';
import { Router, ActivatedRoute } from '@angular/router';
import { CORE_DIRECTIVES } from '@angular/common';
import { SecurityService } from '../services/SecurityService';

import { DataEventRecordsService } from '../dataeventrecords/DataEventRecordsService';
import { DataEventRecord } from './models/DataEventRecord';

@Component({
    selector: 'dataeventrecords-edit',
    templateUrl: 'app/dataeventrecords/dataeventrecords-edit.component.html',
    directives: [CORE_DIRECTIVES]
})

export class DataEventRecordsEditComponent implements OnInit, OnDestroy   {

    private id: number;
    public message: string;
    private sub: any;
    public DataEventRecord: DataEventRecord;

    constructor(
        private _dataEventRecordsService: DataEventRecordsService,
        public securityService: SecurityService,
        private _route: ActivatedRoute,
        private _router: Router
    ) {
        this.message = "DataEventRecords Edit";
    }
    
    ngOnInit() {     
        console.log("IsAuthorized:" + this.securityService.IsAuthorized);
        console.log("HasAdminRole:" + this.securityService.HasAdminRole);

        this.sub = this._route.params.subscribe(params => {
            this.id = +params['id']; // (+) converts string 'id' to a number
            if (!this.DataEventRecord) {
                this._dataEventRecordsService.GetById(this.id)
                    .subscribe(data => this.DataEventRecord = data,
                    error => this.securityService.HandleError(error),
                    () => console.log('DataEventRecordsEditComponent:Get by Id complete'));
            }
        });
    }

    ngOnDestroy() {
        this.sub.unsubscribe();
    }

    public Update() {
        // router navigate to DataEventRecordsList
        this._dataEventRecordsService.Update(this.id, this.DataEventRecord)
            .subscribe((() => console.log("subscribed")),
            error => this.securityService.HandleError(error),
            () => this._router.navigate(['/dataeventrecords']));
    }
}

The component loads the data from the service asynchronously, which means the item can be null or undefined. Because of this, it is important to use *ngIf to check that it exists before using it in the input form. The (click) event calls the Update function, which updates the item on the ASP.NET Core server.

<div class="col-md-12" *ngIf="securityService.IsAuthorized" >
    <div class="panel panel-default">
        <div class="panel-heading">
            <h3 class="panel-title">{{message}}</h3>
        </div>
        <div class="panel-body">
            <div  *ngIf="DataEventRecord">
                <div class="row" >
                    <div class="col-xs-2">Id</div>
                    <div class="col-xs-6">{{DataEventRecord.Id}}</div>
                </div>

                <hr />
                <div class="row">
                    <div class="col-xs-2">Name</div>
                    <div class="col-xs-6">
                        <input type="text" [(ngModel)]="DataEventRecord.Name" style="width: 100%" />
                    </div>
                </div>
                <hr />
                <div class="row">
                    <div class="col-xs-2">Description</div>
                    <div class="col-xs-6">
                        <input type="text" [(ngModel)]="DataEventRecord.Description" style="width: 100%" />
                    </div>
                </div>
                <hr />
                <div class="row">
                    <div class="col-xs-2">Timestamp</div>
                    <div class="col-xs-6">{{DataEventRecord.Timestamp}}</div>
                </div>
                <hr />
                <div class="row">
                    <div class="col-xs-2">
                        <button (click)="Update()">Update</button>
                    </div>
                </div>
            </div>
        </div>
    </div>
</div>

The new child routing in Angular 2 makes it possible to separate and group your components as required, and makes them easier to test. These could also be delivered as separate modules.

Links

https://angular.io/docs/ts/latest/guide/router.html

http://www.codeproject.com/Articles/1087605/Angular-typescript-configuration-and-debugging-for

https://auth0.com/blog/2016/01/25/angular-2-series-part-4-component-router-in-depth/

https://github.com/johnpapa/angular-styleguide

https://mgechev.github.io/angular2-style-guide/

https://github.com/mgechev/codelyzer

https://toddmotto.com/component-events-event-emitter-output-angular-2

http://blog.thoughtram.io/angular/2016/03/21/template-driven-forms-in-angular-2.html

http://raibledesigns.com/rd/entry/getting_started_with_angular_2

https://toddmotto.com/transclusion-in-angular-2-with-ng-content

http://www.bennadel.com/blog/3062-creating-an-html-dropdown-menu-component-in-angular-2-beta-11.htm

http://asp.net-hacker.rocks/2016/04/04/aspnetcore-and-angular2-part1.html



Ben Foster: Using Role Claims in ASP.NET Identity Core

One new feature of ASP.NET Core Identity is Role Claims. Since there's little documentation on how to use them, I thought I'd put together a quick demo.

A Role Claim is a statement about a Role. When a user is a member of a role, they automatically inherit the role's claims.

An example of where this feature could be used is for handling application permissions. Roles provide a mechanism to group related users. Permissions determine what members of those roles can do.

Here's the code I'm using to set up my roles, permissions and role membership (warning, it's demo quality):

public async Task<IActionResult> Setup()
{
    var user = await userManager.FindByIdAsync(User.GetUserId());

    var adminRole = await roleManager.FindByNameAsync("Admin");
    if (adminRole == null)
    {
        adminRole = new IdentityRole("Admin");
        await roleManager.CreateAsync(adminRole);

        await roleManager.AddClaimAsync(adminRole, new Claim(CustomClaimTypes.Permission, "projects.view"));
        await roleManager.AddClaimAsync(adminRole, new Claim(CustomClaimTypes.Permission, "projects.create"));
        await roleManager.AddClaimAsync(adminRole, new Claim(CustomClaimTypes.Permission, "projects.update"));
    }

    if (!await userManager.IsInRoleAsync(user, adminRole.Name))
    {
        await userManager.AddToRoleAsync(user, adminRole.Name);
    }

    var accountManagerRole = await roleManager.FindByNameAsync("Account Manager");

    if (accountManagerRole == null)
    {
        accountManagerRole = new IdentityRole("Account Manager");
        await roleManager.CreateAsync(accountManagerRole);

        await roleManager.AddClaimAsync(accountManagerRole, new Claim(CustomClaimTypes.Permission, "account.manage"));
    }

    if (!await userManager.IsInRoleAsync(user, accountManagerRole.Name))
    {
        await userManager.AddToRoleAsync(user, accountManagerRole.Name);
    }

    return Ok();
}

Here I'm defining two roles with the following permissions:

  • Admin - View, Create, Update Projects
  • Account Manager - Manage Accounts

I've got an API endpoint that spits out the user's claims:

public IActionResult Index()
{
    var claims = User.Claims.Select(claim => new { claim.Type, claim.Value }).ToArray();
    return Json(claims);
}

After running the role setup I can see that my user has the permission claims we set up for both roles:

[
   {
      "Type":"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier",
      "Value":"d96ec201-d984-4cf8-a226-dc58d2a92b52"
   },
   {
      "Type":"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
      "Value":"ben@example.org"
   },
   {
      "Type":"http://schemas.microsoft.com/ws/2008/06/identity/claims/role",
      "Value":"Admin"
   },
   {
      "Type":"http://example.org/claims/permission",
      "Value":"projects.view"
   },
   {
      "Type":"http://example.org/claims/permission",
      "Value":"projects.create"
   },
   {
      "Type":"http://example.org/claims/permission",
      "Value":"projects.update"
   },
   {
      "Type":"http://schemas.microsoft.com/ws/2008/06/identity/claims/role",
      "Value":"Account Manager"
   },
   {
      "Type":"http://example.org/claims/permission",
      "Value":"account.manage"
   }
]     

The code responsible for this is in the built-in UserClaimsPrincipalFactory class:

if (UserManager.SupportsUserRole)
{
   var roles = await UserManager.GetRolesAsync(user);
   foreach (var roleName in roles)
   {
       id.AddClaim(new Claim(Options.ClaimsIdentity.RoleClaimType, roleName));
       if (RoleManager.SupportsRoleClaims)
       {
           var role = await RoleManager.FindByNameAsync(roleName);
           if (role != null)
           {
               id.AddClaims(await RoleManager.GetClaimsAsync(role));
           }
       }
   }
}

Note that the above code doesn't check for duplicate claims, so if a user is a member of roles that share the same permissions, they will end up with multiple permission claims of the same value.

You could customise this behaviour by providing your own implementation of IUserClaimsPrincipalFactory, which I covered in a previous post.
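
As a sketch of one way to do that, the factory below derives from the built-in UserClaimsPrincipalFactory and removes duplicate permission claims after the base class has built the principal. The class name is illustrative and ApplicationUser is assumed to be the application's Identity user class:

public class DedupingClaimsPrincipalFactory : UserClaimsPrincipalFactory<ApplicationUser, IdentityRole>
{
    public DedupingClaimsPrincipalFactory(
        UserManager<ApplicationUser> userManager,
        RoleManager<IdentityRole> roleManager,
        IOptions<IdentityOptions> optionsAccessor) : base(userManager, roleManager, optionsAccessor)
    {
    }

    public override async Task<ClaimsPrincipal> CreateAsync(ApplicationUser user)
    {
        var principal = await base.CreateAsync(user);
        var identity = (ClaimsIdentity)principal.Identity;

        // Keep only the first permission claim for each distinct value.
        var duplicates = identity.FindAll(CustomClaimTypes.Permission)
            .GroupBy(claim => claim.Value)
            .SelectMany(group => group.Skip(1))
            .ToList();

        foreach (var claim in duplicates)
        {
            identity.RemoveClaim(claim);
        }

        return principal;
    }
}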

Checking for permissions

To then authorise user actions based on their permissions we can create a custom policy (one of the new authorisation features in ASP.NET Core):

services.AddAuthorization(options =>
{
    options.AddPolicy("View Projects", 
        policy => policy.RequireClaim(CustomClaimTypes.Permission, "projects.view"));
});

This is applied by decorating our controller and/or actions with the Authorize attribute.

[Authorize("View Projects")]
public IActionResult Index(int siteId)
{
    return View();
}

Tip: Use constants for policy and permission names.
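
For example, a minimal sketch of such constants (the Permissions and Policies class names are illustrative; the claim type matches the one used above):

public static class CustomClaimTypes
{
    public const string Permission = "http://example.org/claims/permission";
}

public static class Permissions
{
    public const string ProjectsView = "projects.view";
    public const string ProjectsCreate = "projects.create";
    public const string ProjectsUpdate = "projects.update";
}

public static class Policies
{
    public const string ViewProjects = "View Projects";
}

The attribute then becomes [Authorize(Policies.ViewProjects)], so the policy name used in AddPolicy and in the Authorize attribute cannot drift apart.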

In a future post I'll cover the new authorisation features in more depth.


Ben Foster: Multi-tenant Dependency Injection in ASP.NET Core

If you're not already familiar with Dependency Injection in ASP.NET Core there's a great write up on it here.

The TLDR is that dependency injection is now built into the framework so we can wave goodbye to the various different flavours of service locator we inevitably ended up having to use in existing ASP.NET frameworks. The new architecture makes it easier to build loosely coupled applications that are easy to extend and test.

Dependency Scopes

A dependency scope determines the lifetime of an instance created by a DI tool. The most common scopes are:

  • Singleton - Instance created once per application
  • Request - Instance created once per HTTP Request
  • Transient - Instance created each time one is requested

These three scopes are supported OOTB and in most cases are all you will need.
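
As a quick illustration (the service names are made up), these scopes map directly to the built-in registration methods:

public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<IClock, SystemClock>();         // one instance per application
    services.AddScoped<IUnitOfWork, UnitOfWork>();        // one instance per HTTP request
    services.AddTransient<IEmailBuilder, EmailBuilder>(); // new instance every time
}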

This is true even for multi-tenant applications. Suppose you're using Entity Framework and each tenant has their own database. Since the recommended pattern is to create a DbContext instance per request, this works fine in multi-tenant environments because we also resolve the tenant per request. See my previous post for an example of this.
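
A minimal sketch of the idea, assuming SaasKit for tenant resolution, an AppTenant class with a ConnectionString property and an Entity Framework Core AppDbContext (all names illustrative):

public void ConfigureServices(IServiceCollection services)
{
    services.AddMultitenancy<AppTenant, AppTenantResolver>();

    // Scoped = one DbContext per request; the tenant is also resolved
    // per request, so each request gets a context for its own database.
    services.AddScoped(provider =>
    {
        var httpContext = provider.GetRequiredService<IHttpContextAccessor>().HttpContext;
        var tenant = httpContext.GetTenantContext<AppTenant>().Tenant;

        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseSqlServer(tenant.ConnectionString)
            .Options;

        return new AppDbContext(options);
    });
}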

The missing scope - Tenant-Singleton

If a singleton is created once per application, you can probably guess that a tenant-singleton is created once per tenant.

So when might you need this scope? Think of any object that is expensive to create or needs to maintain state yet should be isolated for each tenant. Good examples would be NHibernate's Session Factory, RavenDB's Document Store or ASP.NET's Memory Cache.

First attempt - IServiceScopeFactory

I'm not going to go into a deep overview of the built-in DI - this article does a very good job of that.

Essentially, IServiceScopeFactory is the interface responsible for creating IServiceScope instances, which in turn manage the lifetime of an IServiceProvider - the interface we use to resolve dependencies, i.e. IServiceProvider.GetService(type).
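
For reference, both abstractions are tiny (as defined in Microsoft.Extensions.DependencyInjection.Abstractions):

public interface IServiceScopeFactory
{
    // Creates a new scope; dependencies registered as "scoped" are
    // resolved once per IServiceScope.
    IServiceScope CreateScope();
}

public interface IServiceScope : IDisposable
{
    // The provider used to resolve dependencies for the lifetime of
    // this scope; it is disposed together with the scope.
    IServiceProvider ServiceProvider { get; }
}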

In ASP.NET Core, an IServiceScope is created per request to handle request-scoped dependencies. I figured I just needed to change how this was created, specifically:

  1. Get the tenant service scope
  2. If the tenant service scope doesn't exist (first request to tenant), create and configure it
  3. Create a child scope from the tenant scope for the request

These three steps rely on three important features available in most popular DI tools:

  • The ability to create child containers
  • The ability to configure containers at runtime
  • Resolving dependencies from a parent container if not explicitly configured on the child (bubbling up)

Unfortunately the built-in IServiceProvider does not support any of these features. For more "advanced" scenarios the ASP.NET team advocates using another DI tool (my thoughts on that later).

StructureMap is my favourite DI tool and since it supports .NET Core, it was relatively straightforward to create an IServiceScopeFactory implementation that handled the above requirements. In the SM world this meant:

  • Obtaining the tenant container
  • If the tenant container doesn't exist create a child container from the root application container and configure it
  • Create a nested request container

The code for doing that:

public virtual IServiceScope CreateScope()
{
    var tenantContext = GetTenantContext(Container.GetInstance<IHttpContextAccessor>().HttpContext);
    var tenantProfileName = GetTenantProfileName(tenantContext);

    var tenantProfileContainer = Container.GetProfile(tenantProfileName);
    ConfigureTenantProfileContainer(tenantProfileContainer, tenantContext);

    return new StructureMapServiceScope(tenantProfileContainer.GetNestedContainer());
}

With my SM implementation plugged in I was ready to give my sample app a whirl - which much to my dismay, blew up!

Tenant who?

A bit of an oversight on my part was understanding how and when the service scope factory is invoked. It happens at the very start of the request pipeline, before SaasKit's tenant resolution middleware kicks in. This meant that the service scope factory was unable to access the current tenant. Hmm...

Whilst it is possible to move the tenant resolution middleware earlier in the pipeline, I didn't like the idea that SaasKit would depend on middleware order to work correctly, since we have little control over this in consuming apps.

Orchard to the rescue

Sebastien Ros is one of the developers on the Orchard Team and also happens to be a regular participant in the SaasKit gitter room.

He told me how they were handling this scenario in Orchard - using middleware to replace the service provider created per request (HttpContext.RequestServices).

Long story short, I was able to achieve something similar with StructureMap. Here's the invoke method from the relevant middleware:

public async Task Invoke(HttpContext context, Lazy<ITenantContainerBuilder<TTenant>> builder)
{
    Ensure.Argument.NotNull(context, nameof(context));

    var tenantContext = context.GetTenantContext<TTenant>();

    if (tenantContext != null)
    {
        var tenantContainer = await GetTenantContainerAsync(tenantContext, builder);

        using (var requestContainer = tenantContainer.GetNestedContainer())
        {
            // Replace the request IServiceProvider created by IServiceScopeFactory
            context.RequestServices = requestContainer.GetInstance<IServiceProvider>();
            await next.Invoke(context);
        }
    }
}

With a little syntactic sugar it's possible to configure tenant-scoped singletons:

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    services.AddMultitenancy<AppTenant, AppTenantResolver>();

    var container = new Container();
    container.Populate(services);

    container.Configure(c =>
    {
        // Application Services
    });

    container.ConfigureTenants<AppTenant>(c =>
    {
        // Tenant Scoped Services
        c.For<IMessageService>().Singleton().Use<MessageService>();
    });

    return container.GetInstance<IServiceProvider>();
}

We're using the StructureMap.Dnx package which integrates StructureMap with ASP.NET Core.

In the above code I'm registering MessageService as a singleton in the tenant container. The request container is a nested container created from the tenant container. When an instance of IMessageService is requested it will be resolved from the tenant container. You can read more on the behaviour of nested containers in the StructureMap Docs.
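
As a usage sketch (IMessageService and its GetMessage method are illustrative), a controller takes the dependency as usual and the per-tenant container does the rest:

public class HomeController : Controller
{
    private readonly IMessageService _messageService;

    // Resolved from the tenant container: requests for the same tenant
    // share one instance, while different tenants get different ones.
    public HomeController(IMessageService messageService)
    {
        _messageService = messageService;
    }

    public IActionResult Index()
    {
        return Content(_messageService.GetMessage());
    }
}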

The final step is to register the middleware:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseMultitenancy<AppTenant>();
    app.UseTenantContainers<AppTenant>(); // <-- this one
}

Overriding default dependencies

Configuring tenant dependencies in this way makes it possible to override default dependencies per tenant.

The ConfigureTenants method has an overload that gives you access to the current tenant, making it possible to do things like this:

container.ConfigureTenants<AppTenant>((tenant, config) =>
{
    if (tenant.DbType == DbType.RavenDB)
    {
        config.For<IRepository>().Use<RavenDbRepository>();
    }
    else
    {
        config.For<IRepository>().Use<SqlServerRepository>();
    }
});

You could also register your default dependencies in the root container and then override them for specific tenants:

container.Configure(c =>
{
    c.For<IBackupRepository>().Use<AzureBackupRepository>();
});

container.ConfigureTenants<AppTenant>((tenant, c) =>
{
    if (tenant.Name == "Contoso")
    {
        c.For<IBackupRepository>().Use<AmazonBackupRepository>();
    }
});

If your tenant dependency configuration is quite large and you don't want all of this code sitting in Startup.cs you can instead create a class that implements ITenantContainerBuilder<T>:

public interface ITenantContainerBuilder<TTenant>
{
    Task<IContainer> BuildAsync(TTenant tenant);
}

For example:

public class AppTenantContainerBuilder : ITenantContainerBuilder<AppTenant>
{
    private IContainer container;

    public AppTenantContainerBuilder(IContainer container)
    {
        this.container = container;
    }

    public Task<IContainer> BuildAsync(AppTenant tenant)
    {
        var tenantContainer = container.CreateChildContainer();
        tenantContainer.Configure(config =>
        {
            if (tenant.Name == "Tenant 1")
            {
                config.ForSingletonOf<IMessageService>().Use<OtherMessageService>();
            }
            else
            {
                config.ForSingletonOf<IMessageService>().Use<MessageService>();
            }
        });

        return Task.FromResult(tenantContainer);
    }
}

Ready to try it?

Add the SaasKit.Multitenancy.StructureMap package to your project.json or find it on NuGet.

Final thoughts on DI in ASP.NET Core

I'm a little disappointed I couldn't get this working with the built-in DI. Even the final solution feels like a bit of a hack.

The 3 most common DI tools in .NET are StructureMap, Autofac and Ninject. All three support child containers and runtime configuration. It seems like an oversight to not include these features in the built-in DI or at the very least make them opt-in. Having a response of “use another DI tool” only works if you provide the necessary extensibility points to make use of their “advanced” features.

On another note, I was thinking about the above implementation and how this could be adapted to support Autofac and Ninject:

internal class MultitenantContainerMiddleware<TContainer, TTenant>
{
    private readonly RequestDelegate next;

    public MultitenantContainerMiddleware(RequestDelegate next)
    {
        Ensure.Argument.NotNull(next, nameof(next));
        this.next = next;
    }

    public async Task Invoke(HttpContext context, Lazy<ITenantContainerBuilder<TContainer, TTenant>> builder)
    {
        Ensure.Argument.NotNull(context, nameof(context));

        var tenantContext = context.GetTenantContext<TTenant>();

        if (tenantContext != null)
        {
            var tenantContainer = await GetTenantContainerAsync(tenantContext, builder);

            using (var requestContainer = tenantContainer.GetNestedContainer())
            {
                // Replace the request IServiceProvider created by IServiceScopeFactory
                context.RequestServices = requestContainer.GetInstance<IServiceProvider>();
                await next.Invoke(context);
            }
        }
    }

    private async Task<TContainer> GetTenantContainerAsync(
        TenantContext<TTenant> tenantContext,
        Lazy<ITenantContainerBuilder<TContainer, TTenant>> builder)
    {
        var tenantContainer = tenantContext.GetTenantContainer<TContainer>();

        if (tenantContainer == null)
        {
            tenantContainer = await builder.Value.BuildAsync(tenantContext.Tenant);
            tenantContext.SetTenantContainer(tenantContainer);
        }

        return tenantContainer;
    }
}

We could use this middleware for most DI tools and make it part of the core SaasKit project. Each DI tool would just provide an implementation of ITenantContainerBuilder.

The only code that still needs to be abstracted is:

using (var requestContainer = tenantContainer.GetNestedContainer())
{
    // Replace the request IServiceProvider created by IServiceScopeFactory
    context.RequestServices = requestContainer.GetInstance<IServiceProvider>();
    await next.Invoke(context);
}

So we could extend ITenantContainerBuilder to include methods for creating a child container or IServiceProvider instance.
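
A sketch of what such an extended abstraction might look like (the two extra members are hypothetical):

public interface ITenantContainerBuilder<TContainer, TTenant>
{
    Task<TContainer> BuildAsync(TTenant tenant);

    // Hypothetical additions: create the per-request child container
    // and expose it as an IServiceProvider for RequestServices.
    TContainer CreateRequestContainer(TContainer tenantContainer);
    IServiceProvider GetServiceProvider(TContainer requestContainer);
}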

But then we've just created our own DI system and thrown away the built-in one.

1 step forward, 2 steps back.

Download

Get the sample here.


Ben Foster: Customising claims transformation in ASP.NET Core Identity

I've been testing out the new version of ASP.NET Identity and had the need to include additional claims in the ClaimsIdentity generated when a user is authenticated.

Transforming Claims Identity

ASP.NET Core supports Claims Transformation out of the box. Just create a class that implements IClaimsTransformer:

public class ClaimsTransformer : IClaimsTransformer
{
    public Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        ((ClaimsIdentity)principal.Identity).AddClaim(new Claim("ProjectReader", "true"));
        return Task.FromResult(principal);
    }
}

To register the claims transformer, add the following inside your Configure method in Startup.cs:

app.UseClaimsTransformation(new ClaimsTransformationOptions
{
    Transformer = new ClaimsTransformer()
});

One problem with the current implementation of the claims transformation middleware is that claims transformer instances have to be created during configuration. This means no DI, which makes it difficult to load claim information from a database.

Note: I'm told this will be fixed in RC2.

Claims Identity Creation

In my application I'd extended the default ApplicationUser class with additional properties for first and last name. I wanted these properties to be included in the generated ClaimsIdentity:

public class ApplicationUser : IdentityUser
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

ASP.NET Core Identity has a SignInManager<TUser> responsible for signing users into your application. Internally it uses an IUserClaimsPrincipalFactory<TUser> to generate a ClaimsPrincipal from your user.

The default implementation only includes username and user identifier claims. To add additional claims we can create our own implementation of IUserClaimsPrincipalFactory<TUser> or derive from the default UserClaimsPrincipalFactory:

public class AppClaimsPrincipalFactory : UserClaimsPrincipalFactory<ApplicationUser, IdentityRole>
{
    public AppClaimsPrincipalFactory(
        UserManager<ApplicationUser> userManager,
        RoleManager<IdentityRole> roleManager,
        IOptions<IdentityOptions> optionsAccessor) : base(userManager, roleManager, optionsAccessor)
    {
    }

    public async override Task<ClaimsPrincipal> CreateAsync(ApplicationUser user)
    {
        var principal = await base.CreateAsync(user);

        ((ClaimsIdentity)principal.Identity).AddClaims(new[] {
            new Claim(ClaimTypes.GivenName, user.FirstName),
            new Claim(ClaimTypes.Surname, user.LastName),
        });

        return principal;
    }
}

To register the custom factory we add the following to the ConfigureServices method in Startup.cs, after the services.AddIdentity() call:

services.AddScoped<IUserClaimsPrincipalFactory<ApplicationUser>, AppClaimsPrincipalFactory>();


Damien Bod: Secure file download using IdentityServer4, Angular2 and ASP.NET Core

This article shows how a secure file download can be implemented using Angular 2 with an OpenID Connect Implicit Flow using IdentityServer4. The resource server needs to process the access token in the query string, and the NuGet package IdentityServer4.AccessTokenValidation makes it very easy to support this. The default security implementation, the JwtBearer middleware, reads the token from the HTTP headers.

Code: https://github.com/damienbod/AspNet5IdentityServerAngularImplicitFlow/tree/secureDownloadWithAccessTokenInURL

Note: This branch and post will not be updated. Use the follow-up post and code for this requirement.

Other posts in this series:

The Secure File Resource Server

The required packages for the resource server are defined in the project.json file in the dependencies. The authorization packages and the IdentityServer4.AccessTokenValidation package need to be added.

"dependencies": {
	"Microsoft.AspNet.IISPlatformHandler": "1.0.0-rc1-final",
	"Microsoft.AspNet.Mvc": "6.0.0-rc1-final",
	"Microsoft.AspNet.Server.Kestrel": "1.0.0-rc1-final",
	"Microsoft.AspNet.StaticFiles": "1.0.0-rc1-final",
	"Microsoft.Extensions.Configuration.FileProviderExtensions": "1.0.0-rc1-final",
	"Microsoft.Extensions.Configuration.Json": "1.0.0-rc1-final",
	"Microsoft.Extensions.Logging": "1.0.0-rc1-final",
	"Microsoft.Extensions.Logging.Console": "1.0.0-rc1-final",
	"Microsoft.Extensions.Logging.Debug": "1.0.0-rc1-final",

	"Microsoft.AspNet.Authorization": "1.0.0-rc1-final",
	"Microsoft.AspNet.Authentication.JwtBearer": "1.0.0-rc1-final",
	"Microsoft.AspNet.Cors": "6.0.0-rc1-final",
	"Microsoft.AspNet.Diagnostics": "1.0.0-rc1-final",
	"IdentityServer4.AccessTokenValidation": "1.0.0-beta3"
},

The UseIdentityServerAuthentication extension from the NuGet package IdentityServer4.AccessTokenValidation can be used to read the access token from the query string. Normally the token is sent in the HTTP headers, but for file download this is not so easy if you're not using cookies. As the application uses the OpenID Connect Implicit Flow, access tokens are used. The options.TokenRetriever = TokenRetrieval.FromQueryString() setting configures the ASP.NET Core middleware to authenticate and authorize using the access token from the query string.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	loggerFactory.AddConsole(Configuration.GetSection("Logging"));
	loggerFactory.AddDebug();

	app.UseIISPlatformHandler();

	app.UseCors("corsGlobalPolicy");

	app.UseStaticFiles();

	JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
	app.UseIdentityServerAuthentication(options =>
	{
		options.Authority = "https://localhost:44345/";
		options.ScopeName = "securedFiles";
		options.ScopeSecret = "securedFilesSecret";

		options.AutomaticAuthenticate = true;
		// required if you want to return a 403 and not a 401 for forbidden responses
		options.AutomaticChallenge = true;
		options.TokenRetriever = TokenRetrieval.FromQueryString();
	});
	app.UseMvc();
}

An AuthorizeFilter is used to validate that the requesting access token has the scope "securedFiles". The "securedFilesUser" policy is used to validate that the requesting token has the role "securedFiles.user". The two policies are used in the MVC 6 controllers as attributes.

var securedFilesPolicy = new AuthorizationPolicyBuilder()
	.RequireAuthenticatedUser()
	.RequireClaim("scope", "securedFiles")
	.Build();

services.AddAuthorization(options =>
{
	options.AddPolicy("securedFilesUser", policyUser =>
	{
		policyUser.RequireClaim("role", "securedFiles.user");
	});
});

The FileExplorerController is used to return the possible secure files which can be downloaded. The authorization policies which were defined in the startup class are used here. If the requesting token has the claim “securedFiles.admin”, all files will be returned in the payload of the HTTP GET.

using System.Linq;
using Microsoft.AspNet.Mvc;
using Microsoft.AspNet.Authorization;
using Microsoft.Extensions.PlatformAbstractions;
using ResourceFileServer.Providers;

namespace ResourceFileServer.Controllers
{
    [Authorize]
    [Route("api/[controller]")]
    public class FileExplorerController : Controller
    {
        private readonly IApplicationEnvironment _appEnvironment;
        private readonly ISecuredFileProvider _securedFileProvider;

        public FileExplorerController(ISecuredFileProvider securedFileProvider, IApplicationEnvironment appEnvironment)
        {
            _securedFileProvider = securedFileProvider;
            _appEnvironment = appEnvironment;
        }

        [Authorize("securedFilesUser")]
        [HttpGet]
        public IActionResult Get()
        {
            var adminClaim = User.Claims.FirstOrDefault(x => x.Type == "role" && x.Value == "securedFiles.admin");
            var files = _securedFileProvider.GetFilesForUser(adminClaim != null);

            return Ok(files);
        }
    }
}

The DownloadController is used for file download requests. The requesting token must have a claim of type scope with the value "securedFiles" and also a claim of type role with the value "securedFiles.user". In the demo application, one file additionally requires a role claim with the value "securedFiles.admin".

The Get method checks if the file id exists on the server. If the id does not exist, a not found response is returned. This protects against file sniffing. Thanks to Imran Baloch and Filip Ekberg for pointing this out. Here's a great post about this:

http://www.filipekberg.se/2013/07/12/are-you-serving-files-insecurely-in-asp-net/

The GET method also checks if the file exists on the file system. If it does not exist, a 404 Not Found response is returned. It then checks if the requesting token is authorized to access the file. If so, the file is returned as an "application/octet-stream" response.

using System.Linq;
using Microsoft.AspNet.Mvc;
using Microsoft.AspNet.Authorization;
using Microsoft.Extensions.PlatformAbstractions;
using ResourceFileServer.Providers;

namespace ResourceFileServer.Controllers
{
    [Authorize]
    [Route("api/[controller]")]
    public class DownloadController : Controller
    {
        private readonly IApplicationEnvironment _appEnvironment;
        private readonly ISecuredFileProvider _securedFileProvider;

        public DownloadController(ISecuredFileProvider securedFileProvider, IApplicationEnvironment appEnvironment)
        {
            _securedFileProvider = securedFileProvider;
            _appEnvironment = appEnvironment;
        }

        [Authorize("securedFilesUser")]
        [HttpGet("{id}")]
        public IActionResult Get(string id)
        {
            if(!_securedFileProvider.FileIdExists(id))
            {
                return HttpNotFound($"File Id does not exist: {id}");
            }

            var filePath = $"{_appEnvironment.ApplicationBasePath}/SecuredFileShare/{id}";
            if(!System.IO.File.Exists(filePath))
            {
                 return HttpNotFound($"File does not exist: {id}");
            }

            var adminClaim = User.Claims.FirstOrDefault(x => x.Type == "role" && x.Value == "securedFiles.admin");
            if(_securedFileProvider.HasUserClaimToAccessFile(id, adminClaim != null))
            {
                var fileContents = System.IO.File.ReadAllBytes(filePath);
                return new FileContentResult(fileContents, "application/octet-stream");
            }

            return HttpUnauthorized();
        }
    }
}

Angular 2 client

The Angular 2 application uses the SecureFileService to access the server APIs. The access_token parameter is added to the query string for the file download resource server. This is different from the standard way of adding the access token to the HTTP headers. The GetDownloadfileUrl method is used to create the URL for the download link.

import { Injectable } from 'angular2/core';
import { Http, Response, Headers } from 'angular2/http';
import 'rxjs/add/operator/map'
import { Observable } from 'rxjs/Observable';
import { Configuration } from '../app.constants';
import { SecurityService } from '../services/SecurityService';

@Injectable()
export class SecureFileService {

    private actionUrl: string;
    private fileExplorerUrl: string;

    constructor(private _http: Http, private _configuration: Configuration, private _securityService: SecurityService) {
        this.actionUrl = _configuration.FileServer + 'api/Download/'; 
        this.fileExplorerUrl = _configuration.FileServer + 'api/FileExplorer/';    
    }

    public GetDownloadfileUrl(id: string): string {
        var token = this._securityService.GetToken();
        return this.actionUrl + id + "?access_token=" + token;
    }

    public GetListOfFiles = (): Observable<string[]> => {
        var token = this._securityService.GetToken();

        return this._http.get(this.fileExplorerUrl + "?access_token=" + token, {
        }).map(res => res.json());
    }

}

The SecureFilesComponent is used to open a new window and get the secure file from the server using the URL created in the SecureFileService GetDownloadfileUrl method.

import { Component, OnInit } from 'angular2/core';
import { CORE_DIRECTIVES } from 'angular2/common';
import { SecureFileService } from '../services/SecureFileService';
import { SecurityService } from '../services/SecurityService';
import { Observable }       from 'rxjs/Observable';
import { Router } from 'angular2/router';

@Component({
    selector: 'securefiles',
    templateUrl: 'app/securefiles/securefiles.component.html',
    directives: [CORE_DIRECTIVES],
    providers: [SecureFileService]
})

export class SecureFilesComponent implements OnInit {

    public message: string;
    public Files: string[];
   
    constructor(private _secureFileService: SecureFileService, public securityService: SecurityService, private _router: Router) {
        this.message = "Secure Files download";
    }

    ngOnInit() {
      this.getData();
    }

    public GetFileById(id: any) {
        window.open(this._secureFileService.GetDownloadfileUrl(id));
    }

    private getData() {
        this._secureFileService.GetListOfFiles()
            .subscribe(data => this.Files = data,
            error => this.securityService.HandleError(error),
            () => console.log('Get all completed'));
    }
}

After a successful login, the available files are displayed in an HTML table.
secureFile_download_01

The file can be downloaded using the access token. If a non-authorized user tries to download a file, a 403 will be returned; if an incorrect access token or no access token is used in the HTTP request, a 401 will be returned.

secureFile_download_02

Notes:

Using IdentityServer4.AccessTokenValidation, support for access tokens in the query string is very easy to implement in an ASP.NET Core application. One problem is when tokens need to be supported in both the request header and the query string within a single web application.
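
One possible workaround is to combine the two retrievers, checking the Authorization header first and falling back to the query string. A sketch, assuming the TokenRetrieval.FromAuthorizationHeader helper from the same package:

var fromHeader = TokenRetrieval.FromAuthorizationHeader();
var fromQueryString = TokenRetrieval.FromQueryString();

app.UseIdentityServerAuthentication(options =>
{
    // Authority, scope name and secret configured as shown above.
    options.Authority = "https://localhost:44345/";
    options.ScopeName = "securedFiles";
    options.ScopeSecret = "securedFilesSecret";

    // Try the header first, then fall back to the query string.
    options.TokenRetriever = request =>
        fromHeader(request) ?? fromQueryString(request);
});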

Links

http://openid.net/specs/openid-connect-core-1_0.html

http://openid.net/specs/openid-connect-implicit-1_0.html

https://github.com/aspnet/Security

https://github.com/IdentityServer/IdentityServer4.AccessTokenValidation

http://www.filipekberg.se/2013/07/12/are-you-serving-files-insecurely-in-asp-net/

Announcing IdentityServer for ASP.NET 5 and .NET Core

https://github.com/IdentityServer/IdentityServer4

https://github.com/IdentityServer/IdentityServer4.Samples

The State of Security in ASP.NET 5 and MVC 6: OAuth 2.0, OpenID Connect and IdentityServer

http://connect2id.com/learn/openid-connect

https://github.com/FabianGosebrink/Angular2-ASPNETCore-SignalR-Demo

Getting Started with ASP NET Core 1 and Angular 2 in Visual Studio 2015

http://benjii.me/2016/01/angular2-routing-with-asp-net-core-1/

http://tattoocoder.azurewebsites.net/angular2-aspnet5-spa-template/

Cross-platform Single Page Applications with ASP.NET Core 1.0, Angular 2 & TypeScript



Henrik F. Nielsen: Announcing ASP.NET WebHooks Release Candidate 1 (Link)

We are very excited to announce the availability of ASP.NET WebHooks Release Candidate 1. ASP.NET WebHooks provides support for receiving WebHooks from other parties as well as sending WebHooks so that you can notify other parties about changes in your service.

Currently ASP.NET WebHooks targets ASP.NET Web API 2 and ASP.NET MVC 5. It is available as open source on GitHub, and you can use it as preview packages from NuGet. For feedback, fixes, and suggestions, you can use GitHub, Stack Overflow using the tag asp.net-webhooks, or send me a tweet.

For the full announcement, please see the blog Announcing ASP.NET WebHooks Release Candidate 1.

Have fun!

Henrik


Pedro Félix: Using Fiddler for an Android and Windows VM development environment

In this post I describe the development environment that I use when creating Android apps that rely on ASP.NET based Web applications and Web APIs.

  • The development machine is a MBP running OS X with Android Studio.
  • Android virtual devices are run on Genymotion, which uses VirtualBox underneath.
  • Web applications and Web APIs are hosted on a Windows VM running on Parallels over the OS X host.

I use the Fiddler proxy to enable connectivity between Android and the ASP.NET apps, as well as to provide me full visibility on the HTTP messages. Fiddler also enables me to use HTTPS even on this development environment.

The main idea is to use Fiddler as the Android’s system HTTP proxy, in conjunction with a port forwarding rule that maps a port on the OS X host to the Windows VM. This is depicted in the following diagram.

android

The required configuration steps are:

  1. Start Fiddler on the Windows VM and allow remote computers to connect
    • Fiddler – Tools – Fiddler Options – Connections – check “Allow remote computers to connect”.
    • This will make Fiddler listen on 0.0.0.0:8888.
  2. Enable Fiddler to intercept HTTPS traffic
    • Fiddler – Tools – Fiddler Options – HTTPS – check “Decrypt HTTPS traffic”.
    • This will add a new root certificate to the “Trusted Root Certification Authorities” Windows certificate store.
  3. Define a port forwarding rule mapping TCP port 8888 on the OS X host to TCP port 8888 on the Windows guest (where Fiddler is listening).
    • Parallels – Preferences – Network: change settings – Port forward rules – add “TCP:8888 -> Windows:8888”.
  4. Check which “host-only network” is the Android VM using
    • VirtualBox Android VM – Settings – Network – Name (e.g. “vboxnet1”).
  5. Find the IP for the identified adapter
    • VirtualBox – Preferences – Network – Host-only Networks – “vboxnet1”.
    • In my case the IP is 192.168.57.1.
  6. On Android, configure the Wi-Fi connection HTTP proxy (based on “Configure Fiddler for Android / Google Nexus 7”).
    • Settings – Wi-Fi – long tap on the chosen network – modify network – enable advanced options – manual proxy
      • Set “Proxy hostname” to the IP identified in the previous step (e.g. 192.168.57.1).
      • Set “Proxy port” to 8888.
    • With this step, all the HTTP traffic will be directed to the Fiddler HTTP proxy running on the Windows VM.
  7. The last step is to install the Fiddler root certificate, so that the Fiddler generated certificates are accepted by the Android applications, such as the system browser (based on “Configure Fiddler for Android / Google Nexus 7”).
    • Open the browser and navigate to http://ipv4.fiddler:8888
    • Select the link “FiddlerRoot certificate” and on the Android dialog select “Credential use: VPN and apps”.

And that’s it: all HTTP traffic that uses the Android system’s proxy settings will be directed to Fiddler, with the following advantages:

  • Visibility of the requests and responses on the Fiddler UI, namely the ones using HTTPS.
  • Access to Web applications running on the Windows VM, using both IIS hosting or self-hosting.
  • Access to external hosts on the Internet.
  • Use of the Windows “hosts” file host name overrides.
    • For development purposes I typically use host names other than “localhost”, such as “app1.example.com” or “id.example.com”.
    • Since the name resolution will be done by the Fiddler proxy, these host names can be used directly on Android (a sample hosts entry is shown below).
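
As a small illustration (assuming the example host names above; adjust to your own), the corresponding entries in the Windows VM’s hosts file (C:\Windows\System32\drivers\etc\hosts) would be:

127.0.0.1 app1.example.com
127.0.0.1 id.example.com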

Here is a screenshot of Chrome running on Android and presenting an ASP.NET MVC application running on the Windows VM. Notice the green “https” icon.

Screen Shot 2016-03-05 at 19.31.53

And here is the Chrome screenshot of an IdentityServer3 login screen, also running on the Windows VM.

Screen Shot 2016-03-05 at 19.34.42

Hope this helps!



Ben Foster: Multi-tenant middleware pipelines in ASP.NET Core

ASP.NET Core applications are created using middleware components that are assembled together to form an HTTP pipeline. Each middleware component has the opportunity to modify the HTTP request and response before passing them on to the next component in the pipeline, as the following diagram illustrates:

ASP.NET Core Middleware Sequence Diagram

In a typical ASP.NET Core application you'd likely have a number of middleware components configured, for example:

  • Static Files Middleware
  • Authentication Middleware
  • MVC

If you're using SaasKit you'll also be using the multi-tenancy middleware that resolves tenants on each request:

app.UseMultitenancy<AppTenant>();

Authentication Middleware

A few weeks ago I wrote about how to isolate tenant data using Entity Framework. The example application in that article demonstrated how we could have separate membership databases for each tenant. It didn't, however, show some of the issues with using ASP.NET Authentication Middleware in multi-tenant environments.

An unfortunate design choice in the authentication middleware means that middleware options are created at the point the middleware is registered. For example, here's how we would register the Google OAuth middleware:

 builder.UseGoogleAuthentication(options =>
 {
     options.AuthenticationScheme = "Google";
     options.SignInScheme = "Cookies";

     options.ClientId = "xxx";
     options.ClientSecret = "xxx";
 });

Looking at the source for this extension method we can see that the options are created immediately before registering the middleware:

public static IApplicationBuilder UseGoogleAuthentication(this IApplicationBuilder app, GoogleOptions options)
{
    if (app == null)
    {
        throw new ArgumentNullException(nameof(app));
    }
    if (options == null)
    {
        throw new ArgumentNullException(nameof(options));
    }

    return app.UseMiddleware<GoogleMiddleware>(Options.Create(options));
}

A simplified illustration of the pipeline would look something like this:

ASP.NET Core Pipeline

The best way to illustrate why this is a problem is with an example. I've wired up both the cookie and Google OAuth middleware, so if you ever wanted to know how to do this from scratch without ASP.NET Identity, you're in luck:

app.UseCookieAuthentication(options =>
{
    options.AuthenticationScheme = "Cookies";
    options.LoginPath = new PathString("/account/login");
    options.AccessDeniedPath = new PathString("/account/forbidden");
    options.AutomaticAuthenticate = true;
    options.AutomaticChallenge = true;
});

app.UseGoogleAuthentication(options =>
{
    options.AuthenticationScheme = "Google";
    options.SignInScheme = "Cookies";

    options.ClientId = "xxx";
    options.ClientSecret = "xxx";
});

The AccountController supports logging in with Google:

public IActionResult Google()
{
    var props = new AuthenticationProperties
    {
        RedirectUri  = "/home/about"
    };

    return new ChallengeResult("Google", props);
}

Note: Returning a ChallengeResult is effectively the same as calling:

HttpContext.Authentication.ChallengeAsync("Google");

And of course we have a way to log out:

public async Task<IActionResult> LogOut()
{
    await HttpContext.Authentication.SignOutAsync("Cookies");

    return RedirectToAction("index", "home");
}

The about page has been protected by decorating the corresponding action method with [Authorize] and simply lists out the claims of the current authenticated user:

<table class="table">
  @foreach (var claim in User.Claims)
  {
      <tr>
        <td>@claim.Type</td>
        <td>@claim.Value</td>
      </tr>
  }
</table>

So nothing too exciting here.

Running the example sets up two tenants on http://localhost:60000 (Tenant 1) and http://localhost:60001 (Tenant 2). Running the app (dnx web) and browsing to Tenant 1's site I can successfully log in with Google:

Google Consent Screen

After giving my consent I'm redirected back to the app and can see my claims on the About page:

Google Claims

So that all works fine. Now let's browse to http://localhost:60001 (Tenant 2). Notice anything odd?

Logged in twice

In case you didn't spot it, we're already logged in! This is because we're using the same localhost domain name for both tenants and cookies don't honour ports. Whilst this may not be a common scenario in public web apps, you would have the same problem if you wanted to identify your tenants by path e.g. http://localhost/tenantid.

The solution is to name your auth cookie differently per tenant, but since the Cookie Auth options are created when the middleware is registered, all tenants get the same options.

To demonstrate the second issue we'll log out of Tenant 2 and attempt to log in with Google:

Google OAuth error page

This time we get an error concerning our callback URL, since we've configured the Google Auth middleware with Tenant 1's settings, which obviously specify a different callback URL.

Again this is caused by the fact that the auth middleware options are created once per application.

Let's fork!

Whilst this issue has been raised, it's not going to be fixed before ASP.NET Core v1 is released.

One possible solution is to register the middleware per tenant which we can do by forking the root request pipeline:

Forked ASP.NET Core pipelines

In the above diagram you can see we have forked pipelines for two tenants each with their own instances of the authentication middleware.

The latest SaasKit release makes this possible via the UsePerTenant extension on IApplicationBuilder enabling you to create independent request pipelines per tenant.

To fix the example application we can do the following:

app.UsePerTenant<AppTenant>((ctx, builder) =>
{
    builder.UseCookieAuthentication(options =>
    {
        options.AuthenticationScheme = "Cookies";
        options.LoginPath = new PathString("/account/login");
        options.AccessDeniedPath = new PathString("/account/forbidden");
        options.AutomaticAuthenticate = true;
        options.AutomaticChallenge = true;

        options.CookieName = $"{ctx.Tenant.Id}.AspNet.Cookies";
    });

    builder.UseGoogleAuthentication(options =>
    {
        options.AuthenticationScheme = "Google";
        options.SignInScheme = "Cookies";

        options.ClientId = Configuration[$"{ctx.Tenant.Id}:GoogleClientId"];
        options.ClientSecret = Configuration[$"{ctx.Tenant.Id}:GoogleClientSecret"];
    });
});

Notice that I have access to the current tenant when I configure the pipeline so I'm able to obtain the tenant specific Google auth settings. I'm also naming the auth cookie differently per tenant so I can use the same domain but different ports/paths.

If you'd like to run the example yourself you'll need to add tenantX:GoogleClientId and tenantX:GoogleClientSecret to your configuration (the expected shape is sketched below). I'm using user secrets rather than checking my precious keys into GitHub.
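
For reference, the configuration those keys map to would look roughly like this (the tenant ids and "xxx" values are placeholders, whether stored in appSettings.json or user secrets):

{
  "tenant1": {
    "GoogleClientId": "xxx",
    "GoogleClientSecret": "xxx"
  },
  "tenant2": {
    "GoogleClientId": "xxx",
    "GoogleClientSecret": "xxx"
  }
}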

Taking it further

Being able to create pipelines per tenant opens up many possibilities. Rather than just configuring the same middleware components per tenant you could create custom pipelines depending on a tenant's configuration:

app.UsePerTenant<AppTenant>((ctx, builder) =>
{
    if (ctx.Tenant.UseGoogleAuth) {
        builder.UseGoogleAuthentication(options =>
        {

        });
    }

    if (ctx.Tenant.UseFacebookAuth) {
        builder.UseFacebookAuthentication(options =>
        {

        });
    }
});

Wrapping Up

The UsePerTenant method in SaasKit enables you to create forked request pipelines for each tenant which among many other possibilities means you can use the ASP.NET Authentication Middleware in multi-tenant applications.

Special thanks to Joe Audette, Sébastien Ros and Kévin Chalet for their valuable feedback and help with this feature.


Pedro Félix: OAuth 2.0 and PKCE

Introduction

Both Google and IdentityServer have recently announced support for the PKCE (Proof Key for Code Exchange by OAuth Public Clients) specification defined by RFC 7636.

This is an excellent opportunity to revisit the OAuth 2.0 authorization code flow and illustrate how PKCE addresses some of the security issues that exist when this flow is implemented on native applications.

tl;dr

On the authorization code flow, the redirect from the authorization server back to the client is one of the most security-sensitive parts of the OAuth 2.0 protocol. The main reason is that this redirect contains the code representing the authorization delegation performed by the User. On public clients, such as native applications, this code is enough to obtain the access tokens allowing access to the User’s resources.

The PKCE specification addresses an attack vector where an attacker creates a native application that registers the same URL scheme used by the Client application, therefore gaining access to the authorization code. Succinctly, the PKCE specification requires the exchange of the code for the access token to use an ephemeral secret that is not available on the redirect, making knowledge of the code insufficient to use it. This extra information (or a transformation of it) is sent on the initial authorization request.

A slightly longer version

The OAuth 2.0 cast of characters

  • The User is typically a human entity capable of granting access to resources.
  • The Resource Server (RS) is the entity exposing an HTTP API to access these resources.
  • The Client is an application (e.g. a server-based Web application or a native application) wanting to access these resources, via an authorization delegation performed by the User. Clients can be:
    • confidential – client applications that can hold a secret. The typical example is a Web application, where a client secret is stored and used only on the server side.
    • public – client applications that cannot hold a secret, such as native applications running on the User’s mobile device.
  • The Authorization Server (AS) is the entity that authenticates the user, captures her authorization consent and issues access tokens that the Client application can use to access the resources exposed on the RS.

Authorization code flow for Web Applications

The following diagram illustrates the authorization code flow for Web applications (the Client application is a Web server).

Slide2

  1. The flow starts with the Client application server-side producing a redirect HTTP response (e.g. response with 302 status) with the authorization request URL in the Location header. This URL will contain the authorization request parameters such as the state, scope and redirect_uri.
  2. When receiving this response, the User’s browser automatically performs a GET HTTP request to the Authorization Server (AS) authorization endpoint, containing the OAuth 2.0 authorization request.
  3. The AS then starts an interaction sequence to authenticate the user (e.g. username and password, two-factor authentication, delegated authentication), and to obtain the user consent. This sequence is not defined by OAuth 2.0 and can take multiple steps.
  4. After having authenticated and obtained consent from the user, the AS returns an HTTP redirect response with the authorization response on the Location header. This URL points to the client application hostname and contains the authorization response parameters, such as the state and the (security-sensitive) code.
  5. When receiving this response, the user’s browser automatically performs a GET request to the Client redirect endpoint with the OAuth 2.0 authorization response. By using HTTPS on the request to the Client, the protocol minimises the chances of the code being leaked to an attacker.
  6. Having received that authorization code, the Client then uses it to obtain the access token from the AS token endpoint (an example request is shown after this list). Since the client is a confidential client, this request is authenticated with the client credentials (client ID and client secret), typically sent in the Authorization header using the basic scheme. The AS checks if this code is valid, namely if it was issued to the requesting authenticated client. If everything is verified, a 200 response with the access token is returned.
  7. Finally, the client can use the received access token to access the protected resources.
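
As a concrete illustration of step 6 (all values are placeholders), the token request looks roughly like this:

POST /token HTTP/1.1
Host: as.example.com
Authorization: Basic <base64(client_id:client_secret)>
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code&code=<authorization-code>&redirect_uri=https%3A%2F%2Fclient.example.com%2Fcb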

Authorization code flow for native Applications

For a native application, the flow is slightly different, namely on the first phase (the authorization request). Recall that in this case the Client application is running on the User’s device.

Slide3

  1. The flow begins with the Client application starting the system’s browser (or a web view, more on this in another post) at a URL with the authorization request. For instance, on the Android platform this is achieved by sending an intent.
  2. The browser comes into the foreground and performs a GET request to the AS authorization endpoint containing the authorization request.
  3. The same authentication and consent dance occurs between the AS and the User’s browser.
  4. After having authenticated and obtained consent from the user, the AS returns an HTTP redirect response with the authorization response on the Location header. This URL contains the authorization response parameters. However, there is something special in the redirect URL. Instead of using an http URL scheme, which would make the browser perform another HTTP request, the redirect URL contains a custom URI scheme.
  5. As a result, when the browser receives this response and processes the redirect, an inter-application message (e.g. an intent in Android) is sent to the application associated with this scheme, which should be the Client application. This brings the Client application to the foreground and provides it with the authorization response parameters, namely the authorization code.
  6. From now on, the flow is similar to the Web based one. Namely, the Client application uses the code to obtain the access token from the AS token endpoint. Since the client is a public client, this request is not authenticated, that is, no client secret is used.
  7. Finally, having received the access token, the client application running on the device can access the User’s resources.

In both scenarios, the authorization code communication path, from the AS to the Client via the User’s browser, is very security sensitive. This is especially relevant in the native scenario, since the Client is public and knowledge of the authorization code is enough to obtain the access token.

Hijacking the redirect

On the Web application scenario, the GET request with the authorization response has an HTTPS URL, which means that the browser will only send the code if the server correctly authenticates itself. However, on the native scenario, the intent will be sent to any installed application that registered the custom scheme. Unfortunately, there isn’t a central entity controlling and validating these scheme registrations, so an application can hijack the message from the browser to the client application, as shown in the following diagram.

Slide4

Having obtained the authorization code, the attacker’s application has all the information required to retrieve a token and access the User’s resources.

The PKCE protection

The PKCE specification mitigates this vulnerability by requiring an extra code_verifier parameter on the exchange of the authorization code for the access token.

Slide5

  • On step 1, the Client application generates a random secret, stores it and uses its hash value on the new code_challenge authorization request parameter.
  • On step 4, the AS somehow associates the returned code to the code_challenge.
  • On step 6, the Client includes a code_verifier parameter with the secret on the token request message. The AS computes the hash of the code_verifier value and compares it with the original code_challenge associated with the code. Only if they are equal is the code accepted and an access token returned.

This ensures that only the entity that started the flow (sent the code_challenge on the authorization request) can end the flow and obtain the access token. By using a cryptographic hash function on the code_challenge, the protocol is protected from attackers that have read access to the original authorization request. However, the protocol also allows the secret to be used directly as the code_challenge (the “plain” method).
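
To make the mechanics concrete, here is a minimal C# sketch (not from the original post) of generating a code_verifier and its S256 code_challenge as defined by RFC 7636:

using System;
using System.Security.Cryptography;
using System.Text;

public static class PkceHelper
{
    // Generates a high-entropy random code_verifier.
    public static string CreateCodeVerifier()
    {
        var bytes = new byte[32];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(bytes);
        }
        return Base64UrlEncode(bytes);
    }

    // code_challenge = BASE64URL(SHA256(ASCII(code_verifier))) for the "S256" method.
    public static string CreateCodeChallenge(string codeVerifier)
    {
        using (var sha256 = SHA256.Create())
        {
            var hash = sha256.ComputeHash(Encoding.ASCII.GetBytes(codeVerifier));
            return Base64UrlEncode(hash);
        }
    }

    // Base64url encoding without padding, as the specification requires.
    private static string Base64UrlEncode(byte[] input)
    {
        return Convert.ToBase64String(input)
            .TrimEnd('=')
            .Replace('+', '-')
            .Replace('/', '_');
    }
}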

Finally, the PKCE support by an AS can be advertised in the OAuth 2.0 or OpenID Connect discovery document, using the code_challenge_methods_supported field. The following is Google’s OpenID Connect discovery document, located at https://accounts.google.com/.well-known/openid-configuration.

{
 "issuer": "https://accounts.google.com",
 "authorization_endpoint": "https://accounts.google.com/o/oauth2/v2/auth",
 "token_endpoint": "https://www.googleapis.com/oauth2/v4/token",
 "userinfo_endpoint": "https://www.googleapis.com/oauth2/v3/userinfo",
 "revocation_endpoint": "https://accounts.google.com/o/oauth2/revoke",
 "jwks_uri": "https://www.googleapis.com/oauth2/v3/certs",
 "response_types_supported": [
  "code",
  "token",
  "id_token",
  "code token",
  "code id_token",
  "token id_token",
  "code token id_token",
  "none"
 ],
 "subject_types_supported": [
  "public"
 ],
 "id_token_signing_alg_values_supported": [
  "RS256"
 ],
 "scopes_supported": [
  "openid",
  "email",
  "profile"
 ],
 "token_endpoint_auth_methods_supported": [
  "client_secret_post",
  "client_secret_basic"
 ],
 "claims_supported": [
  "aud",
  "email",
  "email_verified",
  "exp",
  "family_name",
  "given_name",
  "iat",
  "iss",
  "locale",
  "name",
  "picture",
  "sub"
 ],
 "code_challenge_methods_supported": [
  "plain",
  "S256"
 ]
}


Ben Foster: ASP.NET Core Multi-tenancy: Data Isolation with Entity Framework

This is my fourth post in a series on building multi-tenant applications with ASP.NET Core.

A common requirement of multi-tenancy is to partition application services per tenant. This could be something presentational (like the theme-able engine I created in the previous article) or, as I'll cover in this post, how to isolate tenant data.

Multi-tenant Data Architecture

The three most common approaches to managing multi-tenant data are:

  1. Separate Database
  2. Separate Schema
  3. Shared Schema

These approaches vary in the level of isolation and complexity. Separate Database is the most isolated but will also make provisioning and managing your tenant data more complex. Licensing costs may also be a factor.

If you have a large number of clients with small datasets that have the same schema I'd personally recommend the Shared Schema approach, otherwise go for Separate Database. I've never found Separate Schema (same database) to be a good solution since you don't really achieve better isolation and it makes maintenance a nightmare with a large number of clients.

For more information on multi-tenant data architecture, see this MSDN article.

Entity Framework Core

EF Core (7.0) is the evolution of Microsoft's data-access tool, rebuilt from the ground up with a focus on performance and portability.

We'll be using EF Core in this post to isolate tenant data using the database-per-tenant approach.

Getting Started

I'll be building on the sample MVC application from the other posts. It uses ASP.NET Identity and is currently configured to use the same database for all tenants.

Tenant Connection Strings

Since we want to use a separate database for each tenant we will need a different connection string per tenant. We'll add a ConnectionString property to AppTenant:

public class AppTenant
{
    public string Name { get; set; }
    public string[] Hostnames { get; set; }
    public string Theme { get; set; }
    public string ConnectionString { get; set; }
}

Then we can update appsettings.json to include our tenant-specific connection strings:

"Multitenancy": {
  "Tenants": [
    {
      "Name": "Tenant 1",
      "Hostnames": [
        "localhost:6000",
        "localhost:6001",
        "localhost:51261"
      ],
      "Theme": "Cerulean",
      "ConnectionString": "Server=(localdb)\\mssqllocaldb;Database=saaskit-sample-tenant1;"
    },
    {
      "Name": "Tenant 2",
      "Hostnames": [
        "localhost:6002"
      ],
      "Theme": "Darkly",
      "ConnectionString": "Server=(localdb)\\mssqllocaldb;Database=saaskit-sample-tenant2;"
    }
  ]
}

With that done, we need to configure Entity Framework to use our tenant-specific connection strings.

Configuring Entity Framework

As our sample project was created using the boilerplate ASP.NET Core MVC template, it is already configured to use a default connection string from appsettings.json. The following code in Startup.cs registers the necessary EF dependencies with the built-in services container:

// Add framework services.
services.AddEntityFramework()
    .AddSqlServer()
    .AddDbContext<ApplicationDbContext>(options =>
        options.UseSqlServer(Configuration["Data:DefaultConnection:ConnectionString"]));

The first step is to remove the default connection string. This will be configured using a different approach:

services.AddEntityFramework()
    .AddSqlServer()
    .AddDbContext<SqlServerApplicationDbContext>();

Note that I've also renamed ApplicationDbContext to SqlServerApplicationDbContext. You'll see why later.

There are three different ways to configure a DbContext instance. OnConfiguring is executed last and overrides the options obtained from DI or the DbContext constructor. Given that DbContext instances are typically transient or created per-request, we can get the current tenant instance and configure the connection string:

public class SqlServerApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    private readonly AppTenant tenant;

    public SqlServerApplicationDbContext(AppTenant tenant)
    {
        this.tenant = tenant;
        Database.EnsureCreated();
    }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseSqlServer(tenant.ConnectionString);
        base.OnConfiguring(optionsBuilder);
    }
}

By taking a dependency on AppTenant the current tenant instance will be injected automatically. The Database.EnsureCreated() call will create the database if it does not already exist - perfect if you need to provision tenants on demand.

When we run the application we can see a new database is created for each tenant.

image

ASP.NET Identity continues to work as before only now our users are completely isolated per tenant.

Using SQLite

My favourite thing about ASP.NET Core is that it's cross platform. Being able to enjoy the richness of the .NET framework from my Mac using a lightweight editor like VS Code makes me very happy indeed.

For this reason I thought I'd update the sample to use SQLite instead of SQL Server. I find that a lightweight database such as SQLite works well for development since there is no additional software to install and you can blow away your databases just by deleting a file on disk.

First we'll add the SQLite package to our dependencies project.json:

"dependencies": {
  "EntityFramework.Commands": "7.0.0-rc1-final",
  "EntityFramework.MicrosoftSqlServer": "7.0.0-rc1-final",
  "EntityFramework.Sqlite": "7.0.0-rc1-final",
  ...

If you're not using Visual Studio you can run dnu restore to pull down the package.

We'll then create a different DbContext implementation that configures Entity Framework to use SQLite:

public class SqliteApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    private readonly IApplicationEnvironment env;
    private readonly AppTenant tenant;

    public SqliteApplicationDbContext(IApplicationEnvironment env, AppTenant tenant)
    {
        this.env = env;
        this.tenant = tenant;
        Database.EnsureCreated();
    }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        var tenantDbName = tenant.Name.Replace(" ", "-").ToLowerInvariant();
        var connectionString = $"FileName={Path.Combine(env.ApplicationBasePath, "App_Data", tenantDbName)}.db";
        optionsBuilder.UseSqlite(connectionString);

        base.OnConfiguring(optionsBuilder);
    }
}

Since SQLite is file-based we need to provide a physical path to the database. To do this we're taking a dependency on IApplicationEnvironment so that we can obtain the application path at runtime. I chose to name the database based on the tenant name. You could also use a convention-based approach for SQL Server rather than having to store the connection string for each tenant (a sketch follows).
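
Such a convention-based variant for SQL Server might look like this (a sketch only, reusing the naming convention above; the server and database name pattern are assumptions):

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    // Derive the database name from the tenant name instead of storing
    // a connection string per tenant.
    var tenantDbName = tenant.Name.Replace(" ", "-").ToLowerInvariant();
    var connectionString =
        $"Server=(localdb)\\mssqllocaldb;Database=saaskit-sample-{tenantDbName};";

    optionsBuilder.UseSqlServer(connectionString);
    base.OnConfiguring(optionsBuilder);
}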

Finally we need to tell EF to use our SqliteApplicationDbContext instead. In Startup.cs:

services.AddEntityFramework()
    .AddSqlite()
    .AddDbContext<SqliteApplicationDbContext>();

services.AddIdentity<ApplicationUser, IdentityRole>()
    .AddEntityFrameworkStores<SqliteApplicationDbContext>()
    .AddDefaultTokenProviders();

Firing up a terminal on my Mac I can run the sample application with dnx web:

Jasper:AspNetMvcSample ben$ dnx web
Hosting environment: Production
Now listening on: http://localhost:60000
Now listening on: http://localhost:60001
Now listening on: http://localhost:60002
Application started. Press Ctrl+C to shut down.

When registering a user account on each of my tenant sites the SQLite databases are created automatically:

image

Wrapping Up

In this post we looked at how to isolate tenant data via a database-per-tenant strategy using Entity Framework. We updated the EF DbContext to obtain the connection string dynamically from the current tenant instance. Finally we swapped out SQL Server for SQLite making it possible to run the sample completely cross-platform.

Questions?

Join the SaasKit chat room on Gitter.


More content like this?

If you don't have anything to contribute but are interested in where SaasKit is heading, please subscribe to the mailing list below. Emails won't be frequent, only when we have something to show you.


Henrik F. Nielsen: ASP.NET WebHooks and Slack Slash Commands (Link)

We just added a couple of new features in ASP.NET WebHooks that make it easier to build some nice integrations with Slack Slash Commands. Slash Commands make it possible to trigger any kind of processing from a Slack channel by generating an HTTP request containing details about the command. For example, these commands typed in a Slack channel can be configured to send an HTTP request to an ASP.NET WebHook endpoint where processing can happen:

/list add <user>
/list list
/list remove <user>

Responses from Slash Commands can be structured documents containing text, data, images, and more:

SlashCommandReply1

In the post ASP.NET WebHooks and Slack Slash Commands we describe how ASP.NET WebHooks helps you parse Slash Commands and generate structured responses so that you can build cool integrations with Slack.
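
As a rough sketch of what the receiving side can look like (based on the general WebHookHandler pattern; the Slack-specific parsing helpers are covered in the linked post, and the command handling here is illustrative):

using System.Collections.Specialized;
using System.Threading.Tasks;
using Microsoft.AspNet.WebHooks;

public class SlackCommandHandler : WebHookHandler
{
    public SlackCommandHandler()
    {
        // Only handle WebHooks delivered by the Slack receiver.
        Receiver = "slack";
    }

    public override Task ExecuteAsync(string receiver, WebHookHandlerContext context)
    {
        // Slack posts the slash command as form data.
        var data = context.GetDataOrDefault<NameValueCollection>();
        var command = data["command"]; // e.g. "/list"
        var text = data["text"];       // e.g. "add <user>"

        // ... process the command and build a response here ...
        return Task.FromResult(true);
    }
}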

Have fun!

Henrik


Dominick Baier: NDC London 2016 Wrap-up

NDC has been fantastic again! Good fun, good talks and good company!

Brock and I did the usual 2-day version of our Identity & Access Control workshop at the pre-con. This was (probably) the last time we ran the 2-day version on Katana. At NDC in Oslo it will be all new material based on ASP.NET Core 1.0 (fingers crossed ;))

The main conference had dozens of interesting sessions and as always – pretty strong security content. On Wednesday I did a talk on (mostly) the new identity & authentication features of ASP.NET Core 1.0 [1]. This was also the perfect occasion to world-premiere IdentityServer4 – the preview of the new version of IdentityServer for ASP.NET and .NET Core [2].

Right after my session, Barry focused on the new data protection and authorization APIs [3] and Brock did an introduction to IdentityServer (which is now finally on video [4]).

We also did a .NET Rocks [5] and Channel9 [6] interview – and our usual “user group meeting” at Brewdogs [7] ;)

All in all a really busy week – but well worth it!

[1] What’s new in Security in ASP.NET Core 1.0
[2] Announcing IdentityServer4
[3] A run around the new Data Protection and Authorization Stacks
[4] Introduction to IdentityServer
[5] .NET Rocks
[6] Channel9 Interview
[7] Brewdog Shepherd’s Bush


Filed under: .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, Uncategorized, WebAPI


Ben Foster: ASP.NET Core Multi-tenancy: Creating theme-able applications

This is my third post in a series on building multi-tenant applications with ASP.NET Core.

The first post covered the fundamentals of multi-tenancy, tenant resolution and introduced the SaasKit library. In the second post I explained how to control the lifetime of your tenant instances using a caching tenant resolver.

In this post things get more interesting as we look at how to support theming and overriding views for individual tenants.

Theming is quite common in multi-tenant or SaaS applications. It enables you to provide your users (tenants) similar functionality but give them control over the design of their site. In Fabrik our customers can choose from a number of themes, each of which can be further customised:

Fabrik Themes

Fabrik Customise Theme

Themes were a native feature of ASP.NET Web Forms (perhaps they still are) but were limited to stylesheets and skins (property configurations for ASP.NET controls). ASP.NET MVC was more flexible, and it became possible to create theme-specific views by building a custom view-engine. Doing it properly, however, was fairly involved, especially if you wanted your view locations to be cached correctly.

Theming in ASP.NET Core is much easier and there are a few ways it can be achieved. We'll start with a very simple approach, using theme-specific layout pages.

The Sample Application

I'm building upon the sample MVC application I used in the other posts.

First the tenant class (AppTenant) needs to be extended to support the concept of themes:

public class AppTenant
{
    public string Name { get; set; }
    public string[] Hostnames { get; set; }
    public string Theme { get; set; }
}

Setting a tenant's theme is done by updating appsettings.json:

"Multitenancy": {
  "Tenants": [
    {
      "Name": "Tenant 1",
      "Hostnames": [
        "localhost:6000",
        "localhost:6001"
      ],
      "Theme": "Cerulean"
    },
    {
      "Name": "Tenant 2",
      "Hostnames": [
        "localhost:6002"
      ],
      "Theme": "Darkly"
    }
  ]
}

For this example I downloaded a couple of Bootstrap themes from Bootswatch and placed them in /wwwroot/css/themes:

AspNetMvcSample\wwwroot\css\site.css
AspNetMvcSample\wwwroot\css\site.min.css
AspNetMvcSample\wwwroot\css\themes
AspNetMvcSample\wwwroot\css\themes\cerulean.css
AspNetMvcSample\wwwroot\css\themes\darkly.css

Gulp was also updated to only include stylesheets in the root of wwwroot\css, otherwise it would bundle all of the theme stylesheets together. In gulpfile.js:

paths.css = paths.webroot + "css/*.css";

Creating Theme Layout Pages

A simple approach to theming in ASP.NET Core MVC is to create a layout page for each theme that references specific stylesheets and scripts.

ViewStart (_ViewStart.cshtml) is a special file in MVC that can be used to define common view code that will be executed at the start of each view's rendering. A common use of ViewStart is to set the layout of your views in a single location. ASP.NET Core MVC builds on this concept further, introducing _ViewImports.cshtml, a new file that can be used to add using statements and tag helpers to your views. You can also use it to inject common application dependencies.

We'll take a copy of the default layout page and create versions for each theme:

_Layout.cshtml
_Layout_Cerulean.cshtml
_Layout_Darkly.cshtml

Each theme layout references the associated stylesheet downloaded from Bootswatch:

  <environment names="Development">
    <link rel="stylesheet" href="~/lib/bootstrap/dist/css/bootstrap.css" />
    <link rel="stylesheet" href="~/css/site.css" />
    <link rel="stylesheet" href="~/css/themes/darkly.css" />
  </environment>
  <environment names="Staging,Production">
    <link rel="stylesheet" href="https://ajax.aspnetcdn.com/ajax/bootstrap/3.3.5/css/bootstrap.min.css"
          asp-fallback-href="~/lib/bootstrap/dist/css/bootstrap.min.css"
          asp-fallback-test-class="sr-only" asp-fallback-test-property="position" asp-fallback-test-value="absolute" />

    <link rel="stylesheet" href="~/css/site.min.css" asp-append-version="true" />
    <link rel="stylesheet" href="~/css/themes/darkly.css" />
  </environment>

_ViewStart.cshtml is then updated to set the layout based on the theme configured for the current tenant. The AppTenant instance is injected into the view using the @inject helper:

@inject AppTenant Tenant;
@{
    Layout = $"_Layout_{Tenant.Theme}";
}

You can run the application by running the following from a command-prompt or terminal:

dnx web

This will start the Kestrel web server and listen on the ports mapped to the tenants in appsettings.json.

When you browse to http://localhost:6000 (Tenant 1 - Cerulean theme) you should see the following:

Tenant 1 theme

Browsing to http://localhost:6002 (Tenant 2 - Darkly theme) displays a different theme:

Tenant 2 theme

Creating Theme Views

The above method works great if the pages in your application are the same for all of your themes, i.e. there is no change in page markup. In Fabrik, our themes differ significantly so only changing stylesheets and scripts isn't enough.

The solution is to provide theme specific versions of your views. You can either provide a themed version for every view or just an override, falling back to "default" views where necessary.

To do this in previous versions of ASP.NET MVC we had to build our own view-engine and, as I mentioned earlier, this was quite complicated to do correctly.

The Razor view-engine in ASP.NET Core MVC makes this easier with View Location Expanders. This new feature enables you to control the paths in which the view-engine will search for views. It will also take care of building and maintaining a view location cache, even when using dynamic variables.

To create a View Location Expander you must create a class that implements IViewLocationExpander:

public interface IViewLocationExpander
{
    IEnumerable<string> ExpandViewLocations(ViewLocationExpanderContext context, IEnumerable<string> viewLocations);
    void PopulateValues(ViewLocationExpanderContext context);
}

The ExpandViewLocations method is invoked to determine the potential locations for a view. PopulateValues determines the values that will be used by the view location expander. This is where you can add information that can be used to dynamically build your view location paths. ExpandViewLocations will only be invoked if the values returned from PopulateValues have changed since the last time the view was located - essentially this means the caching is taken care of for you. Well done ASP.NET team!

Before we create our own View Location Expander we need to restructure our application. I prefer to keep my themes separate so I'm going with the following directory structure:

/themes
  /cerulean
    /shared
      _layout.cshtml
  /darkly
    /home
      about.cshtml
    /shared
      _layout.cshtml
/views
  /home
    /...
  /shared
    /...

Note that I'm following the same view directory naming convention used by the default view-engine.

In the above example we want our theme layout pages to override the default (/views/shared/_layout.cshtml) and when using the Darkly theme, the about.cshtml should override the default version.

Finally, I'm going to reset _ViewStart.cshtml back to its original code, which sets the default layout page to _Layout:

@{
    Layout = "_Layout";
}

Creating the View Location Expander

By default, the Razor view-engine will search the following paths for the views:

/Views/{1}/{0}.cshtml
/Views/Shared/{0}.cshtml

{1} - Controller Name
{0} - View/Action Name

So when invoking the Index action from HomeController it will look in the following locations:

/Views/Home/Index.cshtml
/Views/Shared/Index.cshtml

For the layout page, /Views/Shared/_Layout.cshtml will be used.

When you create your own view location expander, you'll be passed the view location paths returned by other expanders in the pipeline. We want to preserve the defaults but add our theme locations, ensuring that they take priority.

public class TenantViewLocationExpander : IViewLocationExpander
{
    private const string THEME_KEY = "theme";

    public void PopulateValues(ViewLocationExpanderContext context)
    {
        context.Values[THEME_KEY] = context.ActionContext.HttpContext.GetTenant<AppTenant>()?.Theme;
    }

    public IEnumerable<string> ExpandViewLocations(ViewLocationExpanderContext context, IEnumerable<string> viewLocations)
    {
        string theme = null;
        if (context.Values.TryGetValue(THEME_KEY, out theme))
        {
            viewLocations = new[] {
                $"/Themes/{theme}/{{1}}/{{0}}.cshtml",
                $"/Themes/{theme}/Shared/{{0}}.cshtml",
            }
            .Concat(viewLocations);
        }

        return viewLocations;
    }
}

In PopulateValues we use a SaasKit extension method to retrieve the current tenant instance from HttpContext and add the tenant theme to ViewLocationExpanderContext.Values.

In ExpandViewLocations we retrieve the current theme from the context and use it to append two new view location paths as per the convention described earlier. The returned locations include our theme locations and the defaults, in that order.

Wiring it up

Add the following to the ConfigureServices method in Startup.cs:

services.Configure<RazorViewEngineOptions>(options =>
{
    options.ViewLocationExpanders.Add(new TenantViewLocationExpander());
});

Now run the application. You should see the same results as the screenshots from before with the appropriate theme layout page (and therefore styles) being rendered for each tenant.

To demonstrate view overrides, we'll create a themed version of the About view in the Darkly theme:

@{
    ViewData["Title"] = "About";
}
<h2>@ViewData["Title"].</h2>
<h3>@ViewData["Message"]</h3>

<p>This is the Darkly theme.</p>

If I navigate to Tenant 1's About page (Cerulean theme), it will show the default:

Tenant 1 About

Tenant 2 however will display the themed version:

Tenant 2 About

Overriding Views for specific tenants

At some point it's inevitable that your tenants will want bespoke customisations. This is a common problem in multi-tenant applications as changing one feature or theme usually impacts all of the tenants using your service.

In Fabrik, bespoke customisations are usually stylistic so we typically handle bespoke sites (like this one) with custom themes that are locked to a specific tenant.

However, often a customer simply wants to change a few elements on the page or adjust its layout. We can apply the techniques above to support tenant-specific view overrides:

public class TenantViewLocationExpander : IViewLocationExpander
{
    private const string THEME_KEY = "theme", TENANT_KEY = "tenant";

    public void PopulateValues(ViewLocationExpanderContext context)
    {
        context.Values[THEME_KEY] 
            = context.ActionContext.HttpContext.GetTenant<AppTenant>()?.Theme;

        context.Values[TENANT_KEY] 
            = context.ActionContext.HttpContext.GetTenant<AppTenant>()?.Name.Replace(" ", "-");
    }

    public IEnumerable<string> ExpandViewLocations(ViewLocationExpanderContext context, IEnumerable<string> viewLocations)
    {
        string theme = null;
        if (context.Values.TryGetValue(THEME_KEY, out theme))
        {
            IEnumerable<string> themeLocations = new[]
            {
                $"/Themes/{theme}/{{1}}/{{0}}.cshtml",
                $"/Themes/{theme}/Shared/{{0}}.cshtml"
            };

            string tenant;
            if (context.Values.TryGetValue(TENANT_KEY, out tenant))
            {
                themeLocations = ExpandTenantLocations(tenant, themeLocations);
            }

            viewLocations = themeLocations.Concat(viewLocations);
        }

        return viewLocations;
    }

    private IEnumerable<string> ExpandTenantLocations(string tenant, IEnumerable<string> defaultLocations)
    {
        foreach (var location in defaultLocations)
        {
            yield return location.Replace("{0}", $"{{0}}_{tenant}");
            yield return location;
        }
    }
}

In the above code PopulateValues has been updated to also add the normalised tenant name (Tenant 1 > tenant-1).

In ExpandViewLocations we append the tenant-specific search locations. With the above conventions, if we navigate to /Home/Index for Tenant 1 (Cerulean theme), the following locations will be searched:

/Themes/Cerulean/Home/Index_tenant-1.cshtml
/Themes/Cerulean/Home/Index.cshtml
/Themes/Cerulean/Shared/Index_tenant-1.cshtml
/Themes/Cerulean/Shared/Index.cshtml
/Views/Home/Index.cshtml
/Views/Shared/Index.cshtml

To test it out, we'll create a very basic override of the Index view at /Themes/Cerulean/Home/Index_tenant-1.cshtml.

Now when navigating to http://localhost:6000 (Tenant 1) we get the custom tenant view:

Tenant 1 Custom View

Thoughts on Asset location

Ideally we'd keep our theme assets (stylesheets, scripts and views) in one place, e.g. /themes/darkly/assets. The default static file configuration only serves static files from wwwroot, which is why in this example I decided to keep the theme assets separate.

While it is possible to serve static files from additional locations in ASP.NET Core, I'm not sure I'd want to configure this for every theme.

A better approach therefore would be to use a task runner like gulp to minify/bundle your theme assets and copy the output to the wwwroot directory, along the lines of the sketch below.
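
A minimal gulp task along those lines might look like this (a sketch; the task name and glob are assumptions, and paths.webroot follows the gulpfile convention used earlier):

gulp.task("themes", function () {
    // Copy each theme's static assets into wwwroot so they can be served.
    return gulp.src("Themes/**/assets/**/*.{css,js}")
        .pipe(gulp.dest(paths.webroot + "themes"));
});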

Wrapping Up

This post covered how to build theme-able multi-tenant applications in ASP.NET Core MVC. We looked at a simple approach to theming by dynamically setting the View Layout page using ViewStart. Finally we introduced View Location Expanders in the new Razor view-engine and how these can be used to support more complex theming scenarios such as theme-specific views and overriding views for individual tenants.

Questions?

Join the SaasKit chat room on Gitter.


More content like this?

If you don't have anything to contribute but are interested in where SaasKit is heading, please subscribe to the mailing list below. Emails won't be frequent, only when we have something to show you.


Henrik F. Nielsen: Sending ASP.NET WebHooks from Azure WebJobs (Link)

Azure WebJobs is a great way to run any kind of script or executable as a background process in connection with an App Service Web App. You can upload an executable or script as a WebJob and run it either on a schedule or continuously. The WebJob can perform any function you can stick into a command line script or program, and using the Azure WebJobs SDK you can trigger actions to happen as a result of inputs from Azure Queues, Blobs, Azure Service Bus, and much more.
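
For instance, a queue-triggered WebJobs SDK function looks roughly like this (a sketch; the queue name and message handling are hypothetical):

using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // Runs whenever a new message appears on the 'webhook-events' queue.
    public static void ProcessQueueMessage(
        [QueueTrigger("webhook-events")] string message,
        TextWriter log)
    {
        log.WriteLine($"Processing: {message}");
    }
}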

WebHooks provide a simple mechanism for sending event notification across web applications and external services. For example, you can receive a WebHook when someone sends money to your PayPal account, or when a message is posted to Slack, or a picture is posted to Instagram.

The blog Sending ASP.NET WebHooks from Azure WebJobs describes how to send ASP.NET WebHooks from Azure WebJobs triggered by anything that can kick off a WebJob.

Have fun!

Henrik


Ben Foster: ASP.NET Core Multi-tenancy: Tenant lifetime

In my previous post I covered the basics of building multi-tenant applications in ASP.NET Core with SaasKit.

SaasKit uses a tenant resolver to find tenants based on information available in the current request (hostname, user, header etc.). By default, if you implement ITenantResolver directly, SaasKit will attempt to resolve the tenant on every request.

There are a few reasons why you may not want to do this:

  1. You're loading tenant information from an external data source (e.g. a database) which could be an expensive operation.
  2. You need to maintain state for your tenants across requests, for example storing singleton-per-tenant scoped objects (I'll cover this in a later article).

SaasKit has built-in support for caching tenant context instances. Rather than implementing ITenantResolver you can instead derive from MemoryCacheTenantResolver<TTenant> which uses an in-memory cache to persist the current tenant context across requests. Here's our updated resolver:

public class CachingAppTenantResolver : MemoryCacheTenantResolver<AppTenant>
{
    private readonly IEnumerable<AppTenant> tenants;

    public CachingAppTenantResolver(IMemoryCache cache, ILoggerFactory loggerFactory, IOptions<MultitenancyOptions> options)
        : base(cache, loggerFactory)
    {
        this.tenants = options.Value.Tenants;
    }

    protected override string GetContextIdentifier(HttpContext context)
    {
        return context.Request.Host.Value.ToLower();
    }

    protected override IEnumerable<string> GetTenantIdentifiers(TenantContext<AppTenant> context)
    {
        return context.Tenant.Hostnames;
    }

    protected override Task<TenantContext<AppTenant>> ResolveAsync(HttpContext context)
    {
        TenantContext<AppTenant> tenantContext = null;

        var tenant = tenants.FirstOrDefault(t => 
            t.Hostnames.Any(h => h.Equals(context.Request.Host.Value.ToLower())));

        if (tenant != null)
        {
            tenantContext = new TenantContext<AppTenant>(tenant);
        }

        return Task.FromResult(tenantContext);
    }
}

The base class performs the cache lookup for you. It requires that you override the following methods:

  1. ResolveAsync - Resolve a tenant context from the current request. This will only be executed on cache misses.
  2. GetContextIdentifier - Determines what information in the current request should be used to do a cache lookup e.g. the hostname.
  3. GetTenantIdentifiers - Determines the identifiers (keys) used to cache the tenant context. In our example tenants can have multiple domains, so we return each of the hostnames as identifiers.

Overriding Tenant Lifetime

By default, tenant contexts are cached for an hour but you can control this by overriding CreateCacheEntryOptions:

protected override MemoryCacheEntryOptions CreateCacheEntryOptions()
{
    return new MemoryCacheEntryOptions()
        .SetAbsoluteExpiration(new TimeSpan(0, 30, 0)); // Cache for 30 minutes
}

Here we set an absolute expiration of 30 minutes.
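
If you'd rather keep frequently used tenants cached for as long as they are being accessed, a sliding expiration is an alternative (a sketch using the standard MemoryCacheEntryOptions API):

protected override MemoryCacheEntryOptions CreateCacheEntryOptions()
{
    // Evict the tenant context only after 30 minutes of inactivity.
    return new MemoryCacheEntryOptions()
        .SetSlidingExpiration(TimeSpan.FromMinutes(30));
}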

Testing

If you want to test that the cache is working as expected, set your minimum logging level to debug, or if you're loading log settings from a JSON file, set the following:

"Logging": {
  "IncludeScopes": false,
  "LogLevel": {
    "Default": "Information",
    "SaasKit": "Debug"
  }
},

Now when you run the application (dnx web) you should see the following in the console:

First Request

info: Microsoft.AspNet.Hosting.Internal.HostingEngine[1]
      Request starting HTTP/1.1 GET http://localhost:6000/
dbug: SaasKit.Multitenancy.Internal.TenantResolutionMiddleware<AspNetMvcSample.AppTenant>[0]
      Resolving TenantContext using CachingAppTenantResolver.
dbug: SaasKit.Multitenancy.MemoryCacheTenantResolver<AspNetMvcSample.AppTenant>[0]
      TenantContext not present in cache with key "localhost:6000". Attempting to resolve.
dbug: SaasKit.Multitenancy.MemoryCacheTenantResolver<AspNetMvcSample.AppTenant>[0]
      TenantContext resolved. Caching with keys "localhost:6000, localhost:6001".
verb: SaasKit.Multitenancy.Internal.TenantResolutionMiddleware<AspNetMvcSample.AppTenant>[0]
      TenantContext Resolved. Adding to HttpContext.

Subsequent Requests

dbug: SaasKit.Multitenancy.Internal.TenantResolutionMiddleware<AspNetMvcSample.AppTenant>[0]
      Resolving TenantContext using CachingAppTenantResolver.
dbug: SaasKit.Multitenancy.MemoryCacheTenantResolver<AspNetMvcSample.AppTenant>[0]
      TenantContext retrieved from cache with key "localhost:6000".
verb: SaasKit.Multitenancy.Internal.TenantResolutionMiddleware<AspNetMvcSample.AppTenant>[0]
      TenantContext Resolved. Adding to HttpContext.

Wrapping up

If you need to control the lifetime of your tenant context instances or resolving a tenant is an expensive operation, derive your tenant resolver from MemoryCacheTenantResolver.

Source for this example.

Questions?

Join the SaasKit chat room on Gitter.


More content like this?

If you don't have anything to contribute but are interested in where SaasKit is heading, please subscribe to the mailing list below. Emails won't be frequent, only when we have something to show you.


Ben Foster: Building multi-tenant applications with ASP.NET Core (ASP.NET 5)

Without proper guidance, multi-tenancy can be difficult to implement. This was especially the case with previous versions of ASP.NET where its fragmented stack of frameworks led to several possible implementations. It became even more complex if you were using multiple ASP.NET frameworks within the same application (MVC, Web API, SignalR).

Fortunately things improved with OWIN and it was this that led to me starting SaasKit, a toolkit designed to make SaaS (Software as a Service) applications much easier to build. With OWIN middleware it was possible to introduce behaviour into the HTTP pipeline no matter what framework was being used (providing the underlying host was OWIN compatible).

Things have got better still with ASP.NET Core 1.0 (previously ASP.NET 5) and since it uses middleware in a similar way to OWIN, we were able to get SaasKit up and running pretty quickly (thanks to saan800 who paved the way).

Getting Started

Create a new ASP.NET 5 application and add a reference to SaasKit.Multitenancy (available on NuGet) in your project.json file:

 "dependencies": {
   ...
   "SaasKit.Multitenancy": "1.0.0-alpha"
 },

In this example I'm using MVC 6 but SaasKit will work with any ASP.NET Core application. Once done, we need to tell SaasKit how to identify our tenants.

Tenant Identification

The first aspect of multi-tenancy is tenant identification - identifying tenants based on information available in the current request. This could be the hostname, current user or perhaps a custom HTTP header.

First we need to create a class that represents our tenant. This can be any POCO and there are no constraints enforced by SaasKit:

public class AppTenant
{
    public string Name { get; set; }
    public string[] Hostnames { get; set; }
}

Next we need to tell SaasKit how to resolve a tenant from the current request. We do this by creating a tenant resolver. This should return a TenantContext<TTenant> instance if a matching tenant is found:

public class AppTenantResolver : ITenantResolver<AppTenant>
{
    IEnumerable<AppTenant> tenants = new List<AppTenant>(new[]
    {
        new AppTenant {
            Name = "Tenant 1",
            Hostnames = new[] { "localhost:6000", "localhost:6001" }
        },
        new AppTenant {
            Name = "Tenant 2",
            Hostnames = new[] { "localhost:6002" }
        }
    });

    public async Task<TenantContext<AppTenant>> ResolveAsync(HttpContext context)
    {
        TenantContext<AppTenant> tenantContext = null;

        var tenant = tenants.FirstOrDefault(t =>
            t.Hostnames.Any(h => h.Equals(context.Request.Host.Value.ToLower())));

        if (tenant != null)
        {
            tenantContext = new TenantContext<AppTenant>(tenant);
        }

        return tenantContext;
    }
}

Here we're resolving tenants based on the hostname but notice that you have access to the full HttpContext so you can match on anything you like - URL, user, headers etc. I've hardcoded my tenants for now. In your own applications you'll likely resolve tenants against a database or a configuration file.

Wiring it up

Once you've defined your tenant and tenant resolver you're ready to wire up SaasKit. I've tried to follow the same pattern as most ASP.NET Core components. First you need to register SaasKit's dependencies. Open up Startup.cs and add the following to the ConfigureServices method:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMultitenancy<AppTenant, AppTenantResolver>();
}

Then you need to register the SaasKit middleware components. Add the following to your Configure method:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    // after .UseStaticFiles()
    app.UseMultitenancy<AppTenant>();
    // before .UseMvc()
}

Getting the current tenant

Whenever you need the current tenant instance you can just take a dependency on your tenant type. For example in an MVC controller:

public class HomeController : Controller
{
    private AppTenant tenant;

    public HomeController(AppTenant tenant)
    {
        this.tenant = tenant;
    }

The same is true for TenantContext. SaasKit registers both of these dependencies for you so there's no need to obtain the current HttpContext every time you need access to the current tenant. This makes your tenant-scoped dependencies much easier to test.

So that we can see everything is working, we'll add the current tenant name to the site's title. To do this I'm going to use a new feature of MVC 6: the ability to inject services into views.

In _Layout.cshtml add the following to the top of the view:

@inject AppTenant Tenant;

This will inject AppTenant into the view and make it available on a property called Tenant.

Now we can use the tenant details in our view:

<a asp-controller="Home" asp-action="Index" class="navbar-brand">@Tenant.Name</a>

Running the example

Open project.json and update the web command to listen on the URLs configured for our tenants:

"commands": {
  "web": "Microsoft.AspNet.Server.Kestrel --server.urls=http://localhost:6000;http://localhost:6001;http://localhost:6002",
},

Now open up a console (cmd prompt) in the root of your project and run:

dnx web

If I navigate to http://localhost:6000 or http://localhost:6001 (remember we mapped both hostnames to this tenant), the site title reads "Tenant 1".

If I navigate to http://localhost:6002, it reads "Tenant 2".

Making tenants configurable

Of course hardcoding tenants is pretty lame so we'll update our sample to load the tenant details from appsettings.json using the new options pattern introduced in ASP.NET Core. To begin with add your tenant configuration to this file:

"Multitenancy": {
  "Tenants": [
    {
      "Name": "Tenant 1",
      "Hostnames": [
        "localhost:6000",
        "localhost:6001"
      ]
    },
    {
      "Name": "Tenant 2",
      "Hostnames": [
        "localhost:6002"
      ]
    }
  ]
}

Next we'll define a class to represent our tenant options:

public class MultitenancyOptions
{
    public Collection<AppTenant> Tenants { get; set; }
}

Now we need to configure the options framework to bind values from the configuration we created in Startup (where we load appsettings.json). Add the following to ConfigureServices:

services.Configure<MultitenancyOptions>(Configuration.GetSection("Multitenancy"));

Then update our tenant resolver to get tenant information from MultitenancyOptions:

public class AppTenantResolver : ITenantResolver<AppTenant>
{
    private readonly IEnumerable<AppTenant> tenants;

    public AppTenantResolver(IOptions<MultitenancyOptions> options)
    {
        this.tenants = options.Value.Tenants;
    }

    public Task<TenantContext<AppTenant>> ResolveAsync(HttpContext context)
    {
        TenantContext<AppTenant> tenantContext = null;

        var tenant = tenants.FirstOrDefault(t => 
            t.Hostnames.Any(h => h.Equals(context.Request.Host.Value.ToLower())));

        if (tenant != null)
        {
            tenantContext = new TenantContext<AppTenant>(tenant);
        }

        return Task.FromResult(tenantContext);
    }
}

Re-run the application and everything should work as before.

Wrapping up

The first step in building a multi-tenant application is deciding how you identify your tenants. SaasKit makes it easy to do this by resolving tenants on each request and making them injectable in your application.

Once you're able to get the current tenant you can start to partition your services accordingly - from connecting to tenant-specific databases and rendering tenant-specific views to overriding application components for each tenant. The sketch below shows one such pattern.
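
As a taste of that partitioning, here's a hypothetical connection factory that picks a database per tenant. It assumes AppTenant gains a ConnectionString property; neither the class nor the property is part of SaasKit:

public class TenantConnectionFactory
{
    private readonly AppTenant tenant;

    public TenantConnectionFactory(AppTenant tenant)
    {
        this.tenant = tenant;
    }

    public SqlConnection CreateConnection()
    {
        // Assumes AppTenant has been extended with a ConnectionString property
        return new SqlConnection(tenant.ConnectionString);
    }
}

Registering something like this as a scoped service means each request gets a factory bound to its own tenant.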

In my next few blog posts I'll cover more advanced multi-tenancy requirements so stay tuned.

Download

SaasKit is open source and available on GitHub. You can get the above example here.

Questions?

Join the SaasKit chat room on Gitter.


Ben Foster: How to log debug messages in ASP.NET Core (ASP.NET 5)

I've been getting to grips with ASP.NET 5 (now ASP.NET Core 1.0) over the past few weeks and recently wanted to log out some debug messages from some custom middleware:

log.LogDebug("Setting current tenant.");

In Startup.cs I added the following to the Configure method:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.MinimumLevel = LogLevel.Debug;

    ...
}

The default log level in ASP.NET 5 is Information. However, even after setting the MinimumLevel to Debug I still couldn't see my debug messages.

It turns out that the Console Logger also defaults to LogLevel.Information which means that the debug messages get dropped. The solution is to explicitly set the log level of the logging provider, in this case:

loggerFactory.AddConsole(LogLevel.Debug);

After doing this, my debug messages were displayed.

Note that you need to set both ILoggerFactory.MinimumLevel and the log level of the logging provider to Debug - it's confusing, I know. I guess this is a "safe by default" setting so that sensitive information doesn't find its way into your production logs.
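
Putting the two settings together (using the same pre-1.0 ILoggerFactory API as above):

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    // Both the factory minimum and the provider level must allow Debug
    loggerFactory.MinimumLevel = LogLevel.Debug;
    loggerFactory.AddConsole(LogLevel.Debug);

    ...
}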

If passing a configuration section to the logging provider (like the standard ASP.NET 5 MVC template does), set the value for the default category in appsettings.json:

  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    }
  },
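
In that case the provider picks its levels up from configuration rather than a hardcoded value. A sketch, assuming the Configuration property is populated from appsettings.json in the Startup constructor:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    // Reads the "Logging" section above, so "Default": "Debug" takes effect
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));

    ...
}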


Dominick Baier: Which OpenID Connect/OAuth 2.0 Flow is the right One?

That is probably the most common question we get – and the answer is of course: it depends!

Machine to Machine Communication
This one is easy – since there is no human directly involved, client credentials are used to request tokens.
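
As a sketch, using the IdentityModel client library inside an async method, a client credentials request boils down to a few lines. The endpoint address, client id, secret and scope are placeholders:

using IdentityModel.Client;

var tokenClient = new TokenClient(
    "https://server/connect/token",
    "machine.client",
    "secret");

var response = await tokenClient.RequestClientCredentialsAsync("api1");

if (!response.IsError)
{
    // Use response.AccessToken to call the API
}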

Browser-based Applications
This might be a JavaScript-based application or a “traditional” server-rendered web application. For those scenarios, you typically want to use the implicit flow (OpenID Connect / OAuth 2.0).

A side effect of the implicit flow is that all tokens (identity and access tokens) are delivered through the browser front channel. If you want to use the access token purely on the server side, this results in unnecessary exposure of the token to the client. In that case I would prefer the authorization code flow – or hybrid flow.

Native Applications
Strictly speaking, a native application has very similar security properties to a JavaScript application. Still, they are generally considered a bit easier to secure because you often have stronger platform support for protecting data and for isolation.

That’s the reason why the current consensus is that an authorization code based flow gives you “a bit more” security than implicit. The much more important reason, in my opinion, is that a couple of (upcoming) protocols are optimized for native clients and use code exchange and the token endpoint as a foundation – e.g. PKCE, Proof of Possession and AC/DC.

Remark 1: By native applications I mean applications that have access to platform-native APIs like data protection or maybe the system browser. Cordova applications, for example, are written in JavaScript, but I would not consider them “browser-based applications”.

Remark 2: For code based flows, you need to embed the client secret in the client application. Of course you can’t treat it as a secret anymore – no matter how well you protect it, a motivated attacker will be able to reverse engineer it. It is still a bit better than no secret at all, and specs like PKCE make it a bit better still.

Remark 3: I often hear the argument that the client application does not care who the user is, it just needs an access token – thus OAuth 2.0 rather than OpenID Connect. While this might, strictly speaking, be true, OIDC is the superior protocol as it includes a couple of extra security features, like nonces for replay protection and the c_hash and at_hash values that link the (verifiable) identity token to the (unverifiable) access token.

Remark 4: As an extension to remark 3 – always use OpenID Connect, not OAuth 2.0 on its own. There should be client libraries for every platform of interest by now. ASP.NET has middleware, we have a library for JavaScript, and other platforms should be fine as well.

Remark 5: Whenever you think about using authorization code flow, rather use hybrid flow. This gives you a verifiable token first, before you make additional roundtrips (another extension of remarks 3 and 4).
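
To make that concrete, here's a rough sketch of hybrid flow with the ASP.NET Core OpenID Connect middleware. Authority, client id and secret are placeholders, and exact option names can differ between versions:

app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions
{
    Authority = "https://server",
    ClientId = "mvc.hybrid",
    ClientSecret = "secret",

    // "code id_token" selects hybrid flow: a verifiable identity token arrives
    // via the front channel, the code is redeemed on the back channel
    ResponseType = "code id_token",

    SaveTokens = true
});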

HTH


Filed under: .NET Security, IdentityServer, OAuth, OpenID Connect, WebAPI

