Darrel Miller: When Opportunity meets Momentum

Over the past few years, I have occasionally dreamed about what would be my perfect job.  Of course it would have to involve HTTP APIs. But beyond that, my background in ERP software leaves me longing to solve business problems for users.  My experience at Runscope as a developer advocate reaffirmed my desire to spend time helping other developers.  Working with the OpenAPI Initiative over the past couple of years has allowed me to dip my toes into the standards work that has fascinated me for so long. It’s a curious mix.

I was one of those strange kids that used to tell people that I wanted to work for a large corporation.  Many people dream of startup life, being their own boss, whereas I am fascinated by the dynamics of enterprises, the community and the culture.  There is something special about the idea of thousands of people working together towards a common goal.  At least the theory appeals to me.

Ironically, I spent the first 20 years of my career self-employed. Joining Microsoft was a big change for me, but I was right! The last two and a half years at Microsoft have been an amazing experience.  It is everything I expected of being a cog in the corporate machine and I love it.  The mantra of “One Microsoft” is something that I can really get behind and I genuinely feel that real action is being taken at every level to make it happen.

My intent with this blog post was to get to the point quickly and then provide more details for those who care.  Apparently I failed.  So without any further context: I believe I found my perfect job. I am moving to a new team at Microsoft.  I will be joining Yina Arenas’ team working in the Microsoft 365 developer ecosystem.  More specifically, I will be working on the Microsoft Graph developer tooling.  In another twist to the story, I’ll be joining as a product manager, not as a developer.  Don’t laugh!  I’ve been honing my PowerPoint skills recently.

Momentum: Microsoft Graph

If you didn’t watch the keynotes at Microsoft Build this year, or last year, you may not have heard of Microsoft Graph.  It is an HTTP API that provides access to a user’s data that is stored in one of the many services in the Microsoft Cloud.  It provides a consistent developer experience whether you are accessing emails in an Outlook Inbox, messages in a Teams channel, an Excel worksheet or a file in OneDrive.

In the spirit of the formation of the Experiences and Devices division, this year saw the Windows Timeline API being added to the Microsoft Graph, along with Intune device management APIs.  The Identity division, which partners with E&D on the Microsoft Graph, announced a number of security and user management APIs.

It is clear, from the high-level vision statements, the services being added, all the way down to the tools and documentation being written, that the Microsoft Graph is a cross-company strategic effort to make it easier for developers to build products and processes that leverage users’ productivity data stored in the Microsoft Cloud.

From a technology perspective, Microsoft Graph uses OData conventions.  This is not quite the same as saying it is an OData service.  Back when OData first appeared in 2009, the tooling that created OData services made it very easy to take data stores and make them accessible for ad-hoc querying via HTTP.  Although this approach has a number of useful applications, it is also an anti-pattern for creating scalable, cost-efficient, evolvable HTTP APIs for widespread consumption. The OData specification, tools, and community have evolved significantly since its introduction.  The payload format is now a much lighter-weight JSON format that has some similarities to JSON-LD.  Some of the syntax elements have been made optional and many of the capabilities missing from the original specification have been added.

The most important aspect of how OData is used in Microsoft Graph relates to how teams are guided to implement OData capabilities.  Teams are directed to build the capabilities that make sense for the service, not to implement capabilities because “that’s what OData services can do”.  Where filtering is appropriate, it is implemented with OData syntax.  Where navigation properties make sense, the OData conventions are followed.  This common sense approach provides services that are tailored to the task but use common conventions that have expected behavior across all the services.

Opportunity: Client Libraries

As much as it pains me to say, HTTP isn’t the easiest application protocol to understand.  There are lots of ambiguous scenarios and a number of strange mechanisms that exist due to age and backward compatibility.  There are some things in the design that are just plain broken.  As much as I have tried to spread the gospel of HTTP for the last ten years… maybe, perhaps, not everyone needs to understand the details of how some of the more obscure interactions work.

Providing client libraries to access an API is taken for granted as an essential part of providing an HTTP API.  They eliminate the need to manually create HTTP requests and handle responses, and they enable developers to focus on functions and data in terms of the application domain.  However, a curious anomaly I have observed is that the more experienced the API developer, the less likely they are to use a provided client library and the more likely they are to use their own infrastructure to construct the HTTP requests.  I hope to dig into the details of why I think this is in a future blog post.

As an industry, I believe we have spent an order of magnitude more time building frameworks and tooling that enable developers to provide HTTP APIs than we have on tooling to consume them.  I believe there is a significant opportunity to provide a better experience for the API consumer.

Don’t take this as a criticism of the existing Microsoft Graph client libraries, because I actually think they have some major advantages over many others I have seen.  I just think we can go even further.

Momentum: HTTP APIs

I took a career bet on HTTP about 12 years ago, so feel free to consider this commentary self-serving and biased. 

HTTP has powered the web for almost 30 years.  Other protocols have emerged to fill gaps where HTTP wasn’t effective.  HTTP/2 addresses some of the major performance challenges that have emerged.  It also enables some scenarios that previously required switching to other application protocols.  It has achieved this without breaking backward compatibility, and it has seen significant adoption in a fairly short period of time.

The next frontier is the Internet of Things, and for the vast majority of devices, HTTP and HTTP/2 are going to be good enough.  From what I am seeing in the industry these days, the majority of systems integration is done with HTTP.  There will always be a need for specialized protocols for specialized scenarios, but HTTP is water to the life of the web.

For developers who are constantly bombarded by the next great technology to learn, I think that, even 12 years later, learning how HTTP brings value as an application protocol, and not just as a data transport, is still sound career advice.

Opportunity: Your data, at your fingertips

If you have a Microsoft 365 (or Office 365) subscription, or you have customers that have them, then Microsoft Graph opens up a world of possibilities.  Who wants to implement yet another identity store with permissions management capabilities?  A cloud-based file storage system with format translation capabilities? Task management, contact management, schedule management?  Chances are these are all ancillary concerns that are not part of the domain expertise that makes your company or your product valuable to your customers.  Some of the services in Microsoft Graph are essential parts of any piece of business software, and others would be really nice to have.  Microsoft Graph entities are extensible, so you can choose to attach your domain data to them, and many entities allow you to subscribe to change notifications so that you get callbacks when there are changes.

Leveraging the Microsoft Graph by integrating it into new or existing products is a low cost, high return, easy pitch to your customers who already use Microsoft 365.  And it opens a whole new set of prospective customers who are already Microsoft 365 users.

Momentum: Microsoft 365 Ecosystem

There is no doubt that Microsoft sees cloud-based services as a strategic part of its efforts for the foreseeable future.  Across the company we are seeing efforts to make integration and interoperability seamless.  Logic Apps has many connectors for interacting with Microsoft Graph based services.  Azure Functions has bindings to Microsoft Graph services.  Based on my 15 years of building ERP software, I see immense potential for integration with Dynamics services.

Opportunity: OpenAPI

Because Microsoft Graph uses OData conventions, it makes sense that it would use the OData metadata language, CSDL, to describe the API.  However, there has been demand from customers for a description of the API in OpenAPI format. I have recently been working on a library that provides a strongly typed C# model for OpenAPI descriptions, along with readers and writers.  One of the teams that I have been working with has been building a related library that creates an OpenAPI description from the CSDL description. We expect to expose these OpenAPI descriptions on the Graph in the coming months.

Momentum: Azure API Management

The downside of moving to a new team is that I have to leave my old team.  Working on Azure API Management has been an amazing experience.  It is a team of really smart folks who are passionate about building a great product for our customers.  We have seen tremendous growth in the product, we have added a ton of new features, and there are some really exciting things coming later this year.

Opportunity: The Future

If you haven’t tried out the Microsoft Graph, go play around in our explorer.  If you are already using the Microsoft Graph, I’d love to hear about it.  If you have things you wish the client libraries would do for you, make sure I know about it because I want to make that happen.


Andrew Lock: Writing logs to Elasticsearch with Fluentd using Serilog in ASP.NET Core


For apps running in Kubernetes, it's particularly important to be storing log messages in a central location. I'd argue that this is important for all apps, whether or not you're using Kubernetes or Docker, but the ephemeral nature of pods and containers makes the latter cases particularly important.

If you're not storing logs from your containers centrally, then if a container crashes and is restarted, the logs may be lost forever.

There are lots of ways you can achieve this. You could log to Elasticsearch or Seq directly from your apps, or to an external service like Elmah.io for example. One common approach is to use Fluentd to collect logs from the Console output of your container, and to pipe these to an Elasticsearch cluster.

By default, Console log output in ASP.NET Core is formatted in a human readable format. If you take the Fluentd/Elasticsearch approach, you'll need to make sure your console output is in a structured format that Elasticsearch can understand, i.e. JSON.

In this post, I describe how you can add Serilog to your ASP.NET Core app, and how to customise the output format of the Serilog Console sink so that you can pipe your console output to Elasticsearch using Fluentd.

Note that it's also possible to configure Serilog to write directly to Elasticsearch using the Elasticsearch sink. If you're not using Fluentd, or aren't containerising your apps, that's a great option.
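
For reference, a minimal sketch of that direct approach might look like the following, using the Serilog.Sinks.Elasticsearch package. The node URI and the options shown here are illustrative assumptions, not values from this post:

.UseSerilog((ctx, config) =>
{
    config
        .MinimumLevel.Information()
        .Enrich.FromLogContext()
        // Hypothetical cluster address - point this at your own Elasticsearch node
        .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
        {
            AutoRegisterTemplate = true
        });
})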

Writing logs to the console output

When you create a new ASP.NET Core application from a template, your program file will look something like this (in .NET Core 2.1 at least):

public class Program  
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>();
}

The static helper method WebHost.CreateDefaultBuilder(args) creates a WebHostBuilder and wires up a number of standard configuration options. By default, it configures the Console and Debug logger providers:

.ConfigureLogging((hostingContext, logging) =>
{
    logging.AddConfiguration(hostingContext.Configuration.GetSection("Logging"));
    logging.AddConsole();
    logging.AddDebug();
})

If you run your application from the command line using dotnet run, you'll see logs appear in the console for each request. The following shows the logs generated by two requests from a browser - one for the home page, and one for the favicon.ico.

[Screenshot: default Console logger output for the two requests]

Unfortunately, the Console logger doesn't provide much flexibility in how the logs are written. You can optionally include scopes, or disable the colours, but that's about it.

An alternative to the default Microsoft.Extensions.Logging infrastructure in ASP.NET Core is to use Serilog for your logging, and connect it as a standard ASP.NET Core logger.

Adding Serilog to an ASP.NET Core app

Serilog is a mature open source project that predates all the logging infrastructure in ASP.NET Core. In many ways, the ASP.NET Core logging infrastructure seems modelled after Serilog: Serilog has similar configuration options and pluggable "sinks" to control where logs are written.

The easiest way to get started with Serilog is with the Serilog.AspNetCore NuGet package. Add it to your application with:

dotnet add package Serilog.AspNetCore  

You'll also need to add one or more "sink" packages, to control where logs are written. In this case, I'm going to install the Console sink, but you could add others too, if you want to write to multiple destinations at once.

dotnet add package Serilog.Sinks.Console  

The Serilog.AspNetCore package provides an extension method, UseSerilog() on the WebHostBuilder instance. This replaces the default ILoggerFactory with an implementation for Serilog. You can pass in an existing Serilog.ILogger instance, or you can configure a logger inline. For example, the following code configures the minimum log level that will be written (info) and registers the console sink:

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>  
    WebHost.CreateDefaultBuilder(args)
        .UseSerilog((ctx, config) =>
        {
            config
                .MinimumLevel.Information()
                .Enrich.FromLogContext()
                .WriteTo.Console();
        })
        .UseStartup<Startup>();

Running the app again when you're using Serilog instead of the default loggers gives the following console output:

[Screenshot: Serilog console output for the same requests]

The output is similar to the default logger, but importantly it's very configurable. You can change the output template however you like. For example, you could show the name of the class that generated the log by including the SourceContext parameter.
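
As a rough sketch (assuming the same UseSerilog configuration shown above), adding an output template that includes SourceContext might look like this:

.UseSerilog((ctx, config) =>
{
    config
        .MinimumLevel.Information()
        .Enrich.FromLogContext()
        // Include the SourceContext (the class that created the log) in each line
        .WriteTo.Console(
            outputTemplate: "[{Timestamp:HH:mm:ss} {Level:u3}] {SourceContext}: {Message:lj}{NewLine}{Exception}");
})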

For more details and samples for the Serilog.AspNetCore package, see the GitHub repository. For console formatting options, see the Serilog.Sinks.Console repository.

As well as simple changes to the output template, the Console sink allows complete control over how the message is rendered. We'll use that capability to render the logs as JSON for Fluentd, instead of a human-friendly format.

Customising the output format of the Serilog Console Sink to write JSON

To change how the data is rendered, you can add a custom ITextFormatter. Serilog includes a JsonFormatter you can use, but it's suggested that you consider the Serilog.Formatting.Compact package instead:

“CompactJsonFormatter significantly reduces the byte count of small log events when compared with Serilog's default JsonFormatter, while remaining human-readable. It achieves this through shorter built-in property names, a leaner format, and by excluding redundant information.”

We're not going to use this package for our Fluentd/Elasticsearch use case, but I'll show how to plug it in here in any case. Add the package using dotnet add package Serilog.Formatting.Compact, create a new instance of the formatter, and pass it to the WriteTo.Console() method in your UseSerilog() call:

.UseSerilog((ctx, config) =>
{
    config
        .MinimumLevel.Information()
        .Enrich.FromLogContext()
        .WriteTo.Console(new CompactJsonFormatter());
})

Now if you run your application, you'll see the logs written to the console as JSON:

[Screenshot: compact JSON log output in the console]

This formatter may be useful to you, but in my case, I wanted the JSON to be written so that Elasticsearch could understand it. You can see that the compact JSON format (pretty-printed below) uses, as promised, compact names for the timestamp (@t), message template (@mt) and the rendered message (@r):

{
  "@t": "2018-05-17T10:23:47.0727764Z",
  "@mt": "{HostingRequestStartingLog:l}",
  "@r": [
    "Request starting HTTP\/1.1 GET http:\/\/localhost:5000\/  "
  ],
  "Protocol": "HTTP\/1.1",
  "Method": "GET",
  "ContentType": null,
  "ContentLength": null,
  "Scheme": "http",
  "Host": "localhost:5000",
  "PathBase": "",
  "Path": "\/",
  "QueryString": "",
  "HostingRequestStartingLog": "Request starting HTTP\/1.1 GET http:\/\/localhost:5000\/  ",
  "EventId": {
    "Id": 1
  },
  "SourceContext": "Microsoft.AspNetCore.Hosting.Internal.WebHost",
  "RequestId": "0HLDRS135F8A6:00000001",
  "RequestPath": "\/",
  "CorrelationId": null,
  "ConnectionId": "0HLDRS135F8A6"
}

For the simplest Fluentd/Elasticsearch integration, I wanted the JSON to be output using standard Elasticsearch names such as @timestamp for the timestamp. Luckily, all that's required is to replace the formatter.

Using an Elasticsearch compatible JSON formatter

The Serilog.Sinks.Elasticsearch package contains exactly the formatter we need, the ElasticsearchJsonFormatter. This renders data using standard Elasticsearch fields like @timestamp and fields.

Unfortunately, the only way to add the formatter to your project at the moment, short of copying and pasting the source code (check the license first!), is to install the whole Serilog.Sinks.Elasticsearch package, which has quite a few dependencies.

Ideally, I'd like to see the formatter as its own independent package, like Serilog.Formatting.Compact is. I've raised an issue and will update this post if there's movement.

If that's not a problem for you (it wasn't for me, as I already had a dependency on Elasticsearch.Net), then adding the Elasticsearch Sink to access the formatter is the easiest solution. Add the sink using dotnet add package Serilog.Sinks.Elasticsearch, and update your Serilog configuration to use the ElasticsearchJsonFormatter:

.UseSerilog((ctx, config) =>
{
    config
        .MinimumLevel.Information()
        .Enrich.FromLogContext()
        .WriteTo.Console(new ElasticsearchJsonFormatter());
})

Once you've connected this formatter, the console output will contain the common Elasticsearch fields like @timestamp, as shown in the following (pretty-printed) output:

{
  "@timestamp": "2018-05-17T22:31:43.9143984+12:00",
  "level": "Information",
  "messageTemplate": "{HostingRequestStartingLog:l}",
  "message": "Request starting HTTP\/1.1 GET http:\/\/localhost:5000\/  ",
  "fields": {
    "Protocol": "HTTP\/1.1",
    "Method": "GET",
    "ContentType": null,
    "ContentLength": null,
    "Scheme": "http",
    "Host": "localhost:5000",
    "PathBase": "",
    "Path": "\/",
    "QueryString": "",
    "HostingRequestStartingLog": "Request starting HTTP\/1.1 GET http:\/\/localhost:5000\/  ",
    "EventId": {
      "Id": 1
    },
    "SourceContext": "Microsoft.AspNetCore.Hosting.Internal.WebHost",
    "RequestId": "0HLDRS5H8TSM4:00000001",
    "RequestPath": "\/",
    "CorrelationId": null,
    "ConnectionId": "0HLDRS5H8TSM4"
  },
  "renderings": {
    "HostingRequestStartingLog": [
      {
        "Format": "l",
        "Rendering": "Request starting HTTP\/1.1 GET http:\/\/localhost:5000\/  "
      }
    ]
  }
}

Now logs are rendered in a format that can be piped straight from Fluentd into Elasticsearch, and all we have to do is write to the console.

Switching between output formatters based on hosting environment

A final tip. What if you want to have human readable console output when developing locally, and only use the JSON formatter in Staging or Production?

This is easy to achieve as the UseSerilog extension provides access to the IHostingEnvironment via the WebHostBuilderContext. For example, in the following snippet I configure the app to use the human-readable console in development, and to use the JSON formatter in other environments.

.UseSerilog((ctx, config) =>
{
    config
        .MinimumLevel.Information()
        .Enrich.FromLogContext();

    if (ctx.HostingEnvironment.IsDevelopment())
    {
        config.WriteTo.Console();
    }
    else
    {
        config.WriteTo.Console(new ElasticsearchJsonFormatter());
    }
})

Instead of environment, you could also switch based on configuration values available via the IConfiguration object at ctx.Configuration.
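
For example, a sketch of that approach might look like the following, where the "Logging:UseElasticsearchFormatter" key is an assumption made purely for illustration:

.UseSerilog((ctx, config) =>
{
    config
        .MinimumLevel.Information()
        .Enrich.FromLogContext();

    // Hypothetical configuration key - not part of the original sample
    if (ctx.Configuration.GetValue<bool>("Logging:UseElasticsearchFormatter"))
    {
        config.WriteTo.Console(new ElasticsearchJsonFormatter());
    }
    else
    {
        config.WriteTo.Console();
    }
})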

Summary

Storing logs in a central location is important, especially if you're building containerised apps. One possible solution to this is to output your logs to the console, have Fluentd monitor the console, and pipe the output to an Elasticsearch cluster. In this post I described how to add Serilog logging to your ASP.NET Core application and configure it to write logs to the console in the JSON format that Elasticsearch expects.


Dominick Baier: Making the IdentityModel Client Libraries HttpClientFactory friendly

IdentityModel has a number of protocol client libraries, e.g. for requesting, refreshing, revoking and introspecting OAuth 2 tokens as well as a client and cache for the OpenID Connect discovery endpoint.

While they work fine, the style around libraries that use HTTP has changed a bit recently, e.g.:

  • the lifetime of the HttpClient is currently managed internally (including IDisposable). In the light of modern APIs like HttpClientFactory, this is an anti-pattern.
  • the main extensibility point is HttpMessageHandler – again the HttpClientFactory promotes a more composable way via DelegatingHandler.

While I could just add more constructor overloads that take an HttpClient, I decided to explore another route (all credit for this idea goes to @randompunter).

I reworked all the clients to be simple extension methods for HttpClient. This allows you to new up your own client or get one from a factory. This gives you complete control over the lifetime and configuration of the client including handlers, default headers, base address, proxy settings etc. – e.g.:

public async Task<string> NoFactory()
{
    var client = new HttpClient();
 
    var response = await client.RequestClientCredentialsTokenAsync(new ClientCredentialsTokenRequest
    {
        Address = "https://demo.identityserver.io/connect/token",
        ClientId = "client",
        ClientSecret = "secret"
    });
 
    return response.AccessToken ?? response.Error;
}

If you want to throw in the client factory – you can register the client like this:

services.AddHttpClient();

..and use it like this:

public async Task<string> Simple()
{
    var client = HttpClientFactory.CreateClient();
 
    var response = await client.RequestClientCredentialsTokenAsync(new ClientCredentialsTokenRequest
    {
        Address = "https://demo.identityserver.io/connect/token",
        ClientId = "client",
        ClientSecret = "secret"
    });
 
    return response.AccessToken ?? response.Error;
}

HttpClientFactory also supports named clients, which allows configuring certain things upfront, e.g. the base address:

services.AddHttpClient("token_client", 
    client => client.BaseAddress = new Uri("https://demo.identityserver.io/connect/token"));

Which means you don’t need to supply the address per request:

public async Task<string> WithAddress()
{
    var client = HttpClientFactory.CreateClient("token_client");
 
    var response = await client.RequestClientCredentialsTokenAsync(new ClientCredentialsTokenRequest
    {
        ClientId = "client",
        ClientSecret = "secret"
    });
 
    return response.AccessToken ?? response.Error;
}

You can also go one step further by creating a typed client, which exactly models the type of OAuth 2 requests you need to make in your application. You can mix that with the ASP.NET Core configuration model as well:

public class TokenClient
{
    public TokenClient(HttpClient client, IOptions<TokenClientOptions> options)
    {
        Client = client;
        Options = options.Value;
    }
 
    public HttpClient Client { get; }
    public TokenClientOptions Options { get; }
 
    public async Task<string> GetToken()
    {
        var response = await Client.RequestClientCredentialsTokenAsync(new ClientCredentialsTokenRequest
        {
            Address = Options.Address,
            ClientId = Options.ClientId,
            ClientSecret = Options.ClientSecret
        });
 
        return response.AccessToken ?? response.Error;
    }
}
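
The TokenClientOptions class isn't shown in the post; based on the usage above, a minimal sketch would simply expose the three values:

public class TokenClientOptions
{
    // Property names inferred from the TokenClient above
    public string Address { get; set; }
    public string ClientId { get; set; }
    public string ClientSecret { get; set; }
}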

..and register it like this:

services.Configure<TokenClientOptions>(options =>
{
    options.Address = "https://demo.identityserver.io/connect/token";
    options.ClientId = "client";
    options.ClientSecret = "secret";
});
 
services.AddHttpClient<TokenClient>();

…and use it e.g. like this:

public async Task<string> Typed([FromServices] TokenClient tokenClient)
{
    return await tokenClient.GetToken();
}

And one of my favourite features is the nice integration of the Polly library (and handlers in general) to give you extra features like retry logic:

services.AddHttpClient<TokenClient>()
    .AddTransientHttpErrorPolicy(builder => builder.WaitAndRetryAsync(new[]
    {
        TimeSpan.FromSeconds(1),
        TimeSpan.FromSeconds(2),
        TimeSpan.FromSeconds(3)
    }));

This is work in progress right now, but it feels like this is a better abstraction level than the current client implementations. I am planning to release it soon – if you have any feedback, please leave a comment here or open an issue on GitHub. Thanks!


Damien Bowden: ASP.NET Core MVC Form Requests and the Browser Back button

This article shows how an ASP.NET Core MVC application can request data using an HTML form so that the browser back button will work. When using an HTTP POST to request data from a server, the back button does not work, because it tries to re-submit the form data. This can be solved by using an HTTP GET, or an AJAX POST.

Code: https://github.com/damienbod/MvcDynamicDropdownList

Request data using a HTTP Post

In an ASP.NET Core MVC application, a user can request data from a server using an HTML form. The form can be set to do a POST request, and when the response is returned, the data is displayed in a partial view.

@model MvcDynamicDropdownList.Models.DdlItems
@{
    ViewData["Title"] = "Index";
}

<h4>Back Fail Confirm Form Resubmission</h4>

<form asp-controller="Back" asp-action="SomeDataFromAPost" method="post">
    <button class="btn btn-primary" type="submit">Display</button>
</form>

@if (Model != null)
{
    @await Html.PartialAsync("SomeData", Model)
}
else
{
    <p>no display items</p>
}

The MVC Controller handles the view requests, prepares the data, and returns the views as required.

using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Rendering;
using MvcDynamicDropdownList.Models;
using System.Collections.Generic;

namespace MvcDynamicDropdownList.Controllers
{
    public class BackController : Controller
    {
        public IActionResult IndexPost()
        {
            return View();
        }

        public IActionResult IndexGet()
        {
            return View();
        }

        // SomeDataFromAPost
        [HttpPost]
        public IActionResult SomeDataFromAPost()
        {
            var model = new DdlItems
            {
                Items = new List<SelectListItem>
                {
                    new SelectListItem { Text = "H1", Value = "H1Value"},
                    new SelectListItem { Text = "This is cool", Value = "cool"}
                }
            };
            return View("IndexPost", model);
        }

        // SomeDataFromAGet
        [HttpGet]
        public IActionResult SomeDataFromAGet()
        {
            var model = new DdlItems
            {
                Items = new List<SelectListItem>
                {
                    new SelectListItem { Text = "H1", Value = "H1Value"},
                    new SelectListItem { Text = "This is cool", Value = "cool"}
                }
            };
            return View("IndexGet", model);
        }
    }
}

The partial view just displays the data.

@model MvcDynamicDropdownList.Models.DdlItems

<p>display items</p>
@if (Model.Items != null)
{
    <div style="width: 200px;">
        <ol>
            @foreach (var item in Model.Items)
            {
                <li>@item.Text</li>
            }
        </ol>
    </div>
}
else
{
    <p>no display items</p>
}

With this implementation, when the user clicks the back button, the web application breaks and the browser displays the following message: Confirm Form Resubmission

This is correct, because the browser thinks you are changing the data by doing a POST, or resending an UPDATE data request.

The following gif displays this:

Solving this problem using HTTP GET

The back button problem can be solved by implementing the data request in a different way. The first option is to send an HTTP GET request instead of an HTTP POST. Request parameters, if any, are sent in the URL and the back button will then work without problems. This could be implemented as follows:

@model MvcDynamicDropdownList.Models.DdlItems
@{
    ViewData["Title"] = "Index";
}

<h4>Back No Confirm Form Resubmission due to GET Form</h4>

<form asp-controller="Back" asp-action="SomeDataFromAGet" method="get">
    <button class="btn btn-primary" type="submit">Display</button>
</form>

@if (Model != null)
{
    @await Html.PartialAsync("SomeData", Model)
}
else
{
    <p>no display items</p>
}

Solving this problem using AJAX HTTP POST

If a POST request is required, it can be sent using AJAX instead. This is done in ASP.NET Core by using the data-ajax attributes and the jquery.unobtrusive-ajax JavaScript library. In this example, the DOM element with the content id displays the result.

@{
    ViewData["Title"] = "Home Page";
}

<h4>Back Ok, No Confirm Form Resubmission due to async</h4>

<div>
    <div>
        <form asp-controller="Home" asp-action="DynamicDropDownList" 
                 data-ajax="true" 
                 data-ajax-method="POST" 
                 data-ajax-mode="replace" 
                 data-ajax-update="#content">
            <input class="btn btn-primary" type="submit" value="getData" />
        </form>
        <div id="content"></div>

    </div>

</div>
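
The controller action isn't shown in this excerpt. A minimal sketch, reusing the DdlItems model from earlier and returning the partial view that gets injected into the #content element, might look like this (the action name comes from the form markup above, the rest is an assumption):

// Hypothetical implementation of the action targeted by the AJAX form above
[HttpPost]
public IActionResult DynamicDropDownList()
{
    var model = new DdlItems
    {
        Items = new List<SelectListItem>
        {
            new SelectListItem { Text = "H1", Value = "H1Value"},
            new SelectListItem { Text = "This is cool", Value = "cool"}
        }
    };

    // Returning a partial view sends back only the fragment, which the
    // unobtrusive AJAX script places into the #content element.
    return PartialView("SomeData", model);
}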

Links:

https://docs.microsoft.com/en-us/aspnet/core/getting-started?view=aspnetcore-2.1&tabs=windows

https://docs.microsoft.com/en-us/aspnet/core/mvc/razor-pages/?view=aspnetcore-2.1&tabs=visual-studio


Dominick Baier: Mixing UI and API Endpoints in ASP.NET Core 2.1 (aka Dynamic Scheme Selection)

Some people like to co-locate UI and API endpoints in the same application. I generally prefer to keep them separate, but I acknowledge that certain architecture styles make this a conscious decision.

Server-side UIs typically use cookies for authentication (or a combination of cookies and OpenID Connect) and APIs should use access tokens – and you want to make sure that you are not accepting cookies in the API by accident.

Since authentication of incoming calls in ASP.NET Core is abstracted by so-called authentication handlers, and you can register as many of them as you want – you can support both authentication scenarios. That’s by design.

One way you could implement that is by explicitly decorating every controller with an [Authorize] attribute and specifying the name of the authentication scheme you want to use. That’s a bit tedious, and also error prone. I prefer to have a global authorization policy that denies anonymous access and then opt out of it with [AllowAnonymous] where needed.
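
As a reminder, a minimal sketch of such a global policy in ASP.NET Core 2.1 might look like this (it is the standard pattern, not code from this post):

services.AddMvc(options =>
{
    // Require an authenticated user for every endpoint by default;
    // individual endpoints can opt out with [AllowAnonymous].
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .Build();

    options.Filters.Add(new AuthorizeFilter(policy));
});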

This did not work prior to ASP.NET Core 2.1 because global policies rely on the default scheme configuration – but since we have two different schemes, there is no default. The effect would be e.g. that you would get a redirect to a login page for an anonymous API call where you would have expected a 401.

In ASP.NET Core 2.1 there is a new feature where you can dynamically select the authentication scheme based on the incoming HTTP request. Let’s say e.g. all your API endpoints are below /api – you could define a rule that for requests to that path you use JWT tokens, and for all others, OpenID Connect with cookies. You do that by adding a forward selector to the authentication handler like this:

options.ForwardDefaultSelector = ctx =>
{
    if (ctx.Request.Path.StartsWithSegments("/api"))
    {
        return "jwt";
    }
    else
    {
        return "cookies";
    }
};

For a full sample – see here.


Andrew Lock: Suppressing the startup and shutdown messages in ASP.NET Core


In this post, I show how you can disable the startup message shown in the console when you run an ASP.NET Core application. This extra text can mess up your logs if you're using a collector that reads from the console, so it can be useful to disable in production. A similar approach can be used to disable the startup log messages when you're using the new IHostBuilder in ASP.NET Core 2.1.

This post will be less of a revelation after David Fowler dropped his list of new features in ASP.NET Core 2.1! If you haven't seen that tweet yet, I recommend you check out this summary post by Scott Hanselman.

ASP.NET Core startup messages

By default, when you start up an ASP.NET Core application, you'll get a message something like the following, indicating the current environment, the content root path, and the URLs Kestrel is listening on:

Using launch settings from C:\repos\andrewlock\blog-examples\suppress-console-messages\Properties\launchSettings.json...  
Hosting environment: Development  
Content root path: C:\repos\andrewlock\blog-examples\suppress-console-messages  
Now listening on: https://localhost:5001  
Now listening on: http://localhost:5000  
Application started. Press Ctrl+C to shut down.  

This message, written by the WebHostBuilder, gives you a handy overview of your app, but it's written directly to the console, not through the ASP.NET Core Logging infrastructure provided by Microsoft.Extensions.Logging and used by the rest of the application.

When you're running in Docker particularly, it's common to write structured logs to the standard output (Console), and have another process read these logs and send them to a central location, using fluentd for example.

Unfortunately, while the startup information written to the console can be handy, it's written in an unstructured format. If you're writing logs to the Console in a structured format for fluentd to read, then this extra text can pollute your nicely structured logs.

[Screenshot: unstructured startup messages interleaved with structured console log output]

The example shown above just uses the default ConsoleLoggerProvider rather than a more structured provider, but it highlights the difference between the messages written by the WebHostBuilder and those written by the logging infrastructure.

Luckily, you can choose to disable the startup messages (and the Application is shutting down... shutdown message).

Disabling the startup and shutdown messages in ASP.NET Core

Whether or not the startup messages are shown is controlled by a setting in your WebHostBuilder configuration. This is different to your app configuration, in that it describes the settings of the WebHost itself. This configuration controls things such as the environment name, the application name, and the ContentRoot path.

By default, these values can be set using ASPNETCORE_-prefixed environment variables. For example, setting the ASPNETCORE_ENVIRONMENT variable to Staging will set the IHostingEnvironment.EnvironmentName to Staging.

The WebHostBuilder loads a whole number of settings from environment variables if they're available. You can use these to control a wide range of WebHost configuration options.

Disabling the messages using an environment variable

You can override lots of the default host configuration values by setting ASPNETCORE_ environment variables. In this case, the variable to set is ASPNETCORE_SUPPRESSSTATUSMESSAGES. If you set this variable to true on your machine, whether globally, or using launchSettings.json, then both the startup and shutdown messages are suppressed:

[Screenshot: console output with the startup and shutdown messages suppressed]

Annoyingly, the Using launch settings... message still seems to be shown. However, it's only shown when you use dotnet run. It won't show if you publish your app and use dotnet app.dll.

Disabling the messages using UseSetting

Environment variables aren't the only way to control the WebHostOptions configuration. You can provide your own configuration entirely by passing in a pre-built IConfiguration object for example, as I showed in a previous post using command line arguments.
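
As a rough sketch, that approach might look like the following. The in-memory value is just one way to supply the setting, and the key matches WebHostDefaults.SuppressStatusMessagesKey:

public static IWebHostBuilder CreateWebHostBuilder(string[] args)
{
    // Build a separate configuration object for the host itself
    var hostConfig = new ConfigurationBuilder()
        .AddInMemoryCollection(new Dictionary<string, string>
        {
            ["suppressStatusMessages"] = "true"
        })
        .AddCommandLine(args)
        .Build();

    return WebHost.CreateDefaultBuilder(args)
        .UseConfiguration(hostConfig)
        .UseStartup<Startup>();
}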

However, if you only want to change the one setting, then creating a whole new ConfigurationBuilder may seem a bit like overkill. In that case, you could use the UseSetting method on WebHostBuilder.

Under the hood, if you call UseConfiguration() to provide a new IConfiguration object for your WebHostBuilder, you're actually making calls to UseSetting() for each key-value-pair in the provided configuration.

As shown below, you can use the UseSetting() method to set the SuppressStatusMessages value in the WebHost configuration. This will be picked up by the builder when you call Build() and the startup and shutdown messages will be suppressed.

public class Program  
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseSetting(WebHostDefaults.SuppressStatusMessagesKey, "True") // add this line
            .UseStartup<Startup>();
}

You may notice that I've used a strongly typed property on WebHostDefaults as the key. There are a whole range of other properties you can set directly in this way. You can see the WebHostDefaults class here, and the WebHostOptions class where the values are used here.

There's an even easier way to set this setting however, with the SuppressStatusMessages() extension method on IWebHostBuilder:

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>  
    WebHost.CreateDefaultBuilder(args)
        .SuppressStatusMessages(true) //disable the status messages
        .UseStartup<Startup>();

Under the hood, this extension method sets the WebHostDefaults.SuppressStatusMessagesKey setting for you, so it's probably the preferable approach to use!

I had missed this approach originally, I only learned about it from this helpful twitter thread from David Fowler.

Disabling messages for HostBuilder in ASP.NET Core 2.1

ASP.NET Core 2.1 introduces the concept of a generic Host and HostBuilder, analogous to the WebHost and WebHostBuilder typically used to build ASP.NET Core applications. Host is designed to be used to build non-HTTP apps. You could use it to build .NET Core services for example. Steve Gordon has an excellent introduction I suggest looking into if HostBuilder is new to you.

The following program is a very basic example of creating a simple service, registering an IHostedService to run in the background for the duration of the app's lifetime, and adding a logger to write to the console. The PrintTextToConsoleService class is taken from Steve's post.

public class Program  
{
    public static void Main(string[] args)
    {
        // CreateWebHostBuilder(args).Build().Run();
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) => 
        new HostBuilder()
            .ConfigureLogging((context, builder) => builder.AddConsole())
            .ConfigureServices(services => services.AddSingleton<IHostedService, PrintTextToConsoleService>());
}

When you run this app, you will get similar startup messages written to the console:

Application started. Press Ctrl+C to shut down.  
Hosting environment: Production  
Content root path: C:\repos\andrewlock\blog-examples\suppress-console-messages\bin\Debug\netcoreapp2.1\  
info: suppress_console_messages.PrintTextToConsoleService[0]  
      Starting
info: suppress_console_messages.PrintTextToConsoleService[0]  
      Background work with text: 14/05/2018 11:27:16 +00:00
info: suppress_console_messages.PrintTextToConsoleService[0]  
      Background work with text: 14/05/2018 11:27:21 +00:00

Even though the startup messages look very similar, you have to go about suppressing them in a very different way. Instead of setting environment variables, using a custom IConfiguration object, or the UseSetting() method, you must explicitly configure an instance of the ConsoleLifetimeOptions object.

You can configure the ConsoleLifetimeOptions in the ConfigureServices method using the IOptions pattern, in exactly the same way you'd configure your own strongly-typed options classes. That means you can load the values from configuration if you like, but you could also just configure it directly in code:

public static IHostBuilder CreateHostBuilder(string[] args) =>  
    new HostBuilder()
        .ConfigureLogging((context, builder) => builder.AddConsole())
        .ConfigureServices(services =>
        {
            services.Configure<ConsoleLifetimeOptions>(options =>  // configure the options
                options.SuppressStatusMessages = true);            // in code
            services.AddSingleton<IHostedService, PrintTextToConsoleService>();
        });
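
If you'd rather load the value from configuration instead of hard-coding it, a sketch might look like this (the "Console" section name is an assumption):

.ConfigureServices((context, services) =>
{
    // Bind ConsoleLifetimeOptions (including SuppressStatusMessages) from configuration
    services.Configure<ConsoleLifetimeOptions>(
        context.Configuration.GetSection("Console"));

    services.AddSingleton<IHostedService, PrintTextToConsoleService>();
});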

With the additional configuration above, when you run your service, you'll no longer get the unstructured text written to the console.

Summary

By default, ASP.NET Core writes environment and configuration information to the console on startup. By setting the suppressStatusMessages WebHost configuration value to true, you can prevent these messages from being output. For the HostBuilder available in ASP.NET Core 2.1, you need to configure the ConsoleLifetimeOptions object to set SuppressStatusMessages = true.


Andrew Lock: Exploring the .NET Core 2.1 Docker files (updated): dotnet:runtime vs aspnetcore-runtime vs sdk


This is an update to my previous post explaining the difference between the various Linux .NET docker files. Things have changed a lot in .NET Core 2.1, so that post is out of date!

When you build and deploy an application in Docker, you define how your image should be built using a Dockerfile. This file lists the steps required to create the image, for example: set an environment variable, copy a file, or run a script. Whenever a step is run, a new layer is created. Your final Docker image consists of all the changes introduced by these layers in your Dockerfile.

Typically, you don't start from an empty image where you need to install an operating system, but from a "base" image that contains an already configured OS. For .NET development, Microsoft provide a number of different images depending on what it is you're trying to achieve.

In this post, I look at the various Docker base images available for .NET Core development, how they differ, and when you should use each of them. I'm only going to look at the Linux amd64 images, but there are Windows container versions and even Linux arm32 images available too. At the time of writing (just after the .NET Core 2.1 release) the latest images available are 2.1.0 and 2.1.300 for the various runtime and SDK images respectively.

Note: You should normally be specific about exactly which version of a Docker image you build on in your Dockerfiles (e.g. don't use latest). For that reason, all the images I mention in this post use the current latest version numbers, 2.1.300 and 2.1.0.

I'll start by briefly discussing the difference between the .NET Core SDK and the .NET Core Runtime, as it's an important factor when deciding which base image you need. I'll then walk through each of the images in turn, using the Dockerfiles for each to explain what they contain, and hence what you should use them for.

tl;dr: This is a pretty long post, so for convenience, here are some links to the relevant sections and a one-liner use case:

Note that all of these images use the microsoft/dotnet repository - the previous microsoft/aspnetcore and microsoft/aspnetcore-build repositories have both been deprecated. There is no true 2.1 equivalent to the old microsoft/aspnetcore-build:2.0.3 image which included Node, Bower, and Gulp, or the microsoft/aspnetcore-build:1.0-2.0 image which included multiple .NET Core SDKs. Instead, it's recommended you use MultiStage builds to achieve this instead.

The .NET Core Runtime vs the .NET Core SDK

One of the most often lamented aspects of .NET Core development is version numbers. There are so many different moving parts, and none of the version numbers match up, so it can be difficult to figure out what you need.

For example, on my dev machine I am building .NET Core 2.1 apps, so I installed the .NET Core 2.1 SDK to allow me to do so. When I look at what I have installed using dotnet --info, I get (a more verbose version) of the following:

> dotnet --info
.NET Core SDK (reflecting any global.json):
 Version:   2.1.300
 Commit:    adab45bf0c

Runtime Environment:  
 OS Name:     Windows
 OS Version:  10.0.17134

Host (useful for support):  
  Version: 2.1.0
  Commit:  caa7b7e2ba

.NET Core SDKs installed:
  1.1.9 [C:\Program Files\dotnet\sdk]
  ...
  2.1.300 [C:\Program Files\dotnet\sdk]

.NET Core runtimes installed:
  Microsoft.AspNetCore.All 2.1.0-preview1-final [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
  Microsoft.NETCore.App 2.1.0 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]

To install additional .NET Core runtimes or SDKs:  
  https://aka.ms/dotnet-download

There's a lot of numbers there, but the important ones are 2.1.300 which is the version of the command line tools or SDK I'm currently using, and 2.1.0 which is the version of the .NET Core runtime.

In .NET Core 2.1, dotnet --info lists all the runtimes and SDKs you have installed. I haven't shown all 20 I apparently have installed… I really need to claim some space back!

Whether you need the .NET Core SDK or the .NET Core runtime depends on what you're trying to do:

  • The .NET Core SDK - This is what you need to build .NET Core applications.
  • The .NET Core Runtime - This is what you need to run .NET Core applications.

When you install the SDK, you get the runtime as well, so on your dev machines you can just install the SDK. However, when it comes to deployment you need to give it a little more thought. The SDK contains everything you need to build a .NET Core app, so it's much larger than the runtime alone (122MB vs 22MB for the MSI files). If you're just going to be running the app on a machine (or in a Docker container) then you don't need the full SDK, the runtime will suffice, and will keep the image as small as possible.

For the rest of this post, I'll walk through the main Docker images available for .NET Core and ASP.NET Core. I assume you have a working knowledge of Docker - if you're new to Docker I suggest checking out Steve Gordon's excellent series on Docker for .NET developers.

1. microsoft/dotnet:2.1.0-runtime-deps

  • Contains native dependencies
  • No .NET Core runtime or .NET Core SDK installed
  • Use for running Self-Contained Deployment apps

The first image we'll look at forms the basis for most of the other .NET Core images. It actually doesn't even have .NET Core installed. Instead, it consists of the base debian:stretch-slim image and has all the low-level native dependencies on which .NET Core depends.

The Docker images are currently all available in three flavours, depending on the OS image they're based on: debian:stretch-slim, ubuntu:bionic, and alpine:3.7. There are also ARM32 versions of the debian and ubuntu images. In this post I'm just going to look at the debian images, as they are the default.

The Dockerfile consists of a single RUN command that apt-get installs the required dependencies on top of the base image, and sets a few environment variables for convenience.

FROM debian:stretch-slim

RUN apt-get update \  
    && apt-get install -y --no-install-recommends \
        ca-certificates \
        \
# .NET Core dependencies
        libc6 \
        libgcc1 \
        libgssapi-krb5-2 \
        libicu57 \
        liblttng-ust0 \
        libssl1.0.2 \
        libstdc++6 \
        zlib1g \
    && rm -rf /var/lib/apt/lists/*

# Configure Kestrel web server to bind to port 80 when present
ENV ASPNETCORE_URLS=http://+:80 \  
    # Enable detection of running in a container
    DOTNET_RUNNING_IN_CONTAINER=true

What should you use it for?

The microsoft/dotnet:2.1.0-runtime-deps image is the basis for subsequent .NET Core runtime installations. Its main use is for when you are building self-contained deployments (SCDs). SCDs are apps that are packaged with the .NET Core runtime for the specific host, so you don't need to install the .NET Core runtime. You do still need the native dependencies though, so this is the image you need.

Note that you can't build SCDs with this image. For that, you'll need the SDK-based image described later in the post, microsoft/dotnet:2.1.300-sdk.

2. microsoft/dotnet:2.1.0-runtime

  • Contains .NET Core runtime
  • Use for running .NET Core console apps

The next image is one you'll use a lot if you're running .NET Core console apps in production. microsoft/dotnet:2.1.0-runtime builds on the runtime-deps image, and installs the .NET Core Runtime. It downloads the tar ball using curl, verifies the hash, unpacks it, sets up symlinks and removes the old installer.

You can view the Dockerfile for the image here:

FROM microsoft/dotnet:2.1-runtime-deps-stretch-slim

RUN apt-get update \  
    && apt-get install -y --no-install-recommends \
        curl \
    && rm -rf /var/lib/apt/lists/*

# Install .NET Core
ENV DOTNET_VERSION 2.1.0

RUN curl -SL --output dotnet.tar.gz https://dotnetcli.blob.core.windows.net/dotnet/Runtime/$DOTNET_VERSION/dotnet-runtime-$DOTNET_VERSION-linux-x64.tar.gz \  
    && dotnet_sha512='f93edfc068290347df57fd7b0221d0d9f9c1717257ed3b3a7b4cc6cc3d779d904194854e13eb924c30eaf7a8cc0bd38263c09178bc4d3e16281f552a45511234' \
    && echo "$dotnet_sha512 dotnet.tar.gz" | sha512sum -c - \
    && mkdir -p /usr/share/dotnet \
    && tar -zxf dotnet.tar.gz -C /usr/share/dotnet \
    && rm dotnet.tar.gz \
    && ln -s /usr/share/dotnet/dotnet /usr/bin/dotnet

What should you use it for?

The microsoft/dotnet:2.1.0-runtime image contains the .NET Core runtime, so you can use it to run any .NET Core 2.1 app such as a console app. You can't use this image to build your app, only to run it.

If you're running a self-contained app then you would be better served by the runtime-deps image. Similarly, if you're running an ASP.NET Core app, then you should use the microsoft/dotnet:2.1.0-aspnetcore-runtime image instead (up next), as it contains the shared runtime required for most ASP.NET Core apps.

3. microsoft/dotnet:2.1.0-aspnetcore-runtime

  • Contains .NET Core runtime and the ASP.NET Core shared framework
  • Use for running ASP.NET Core apps
  • Sets the default URL for apps to http://+:80

.NET Core 2.1 moves away from the runtime store feature introduced in .NET Core 2.0, and replaces it with a series of shared frameworks. This is a similar concept, but with some subtle benefits (to cloud providers in particular, e.g. Microsoft). I wrote a post about the shared framework and the associated Microsoft.AspNetCore.App metapackage here.

By installing the Microsoft.AspNetCore.App shared framework, all the packages that make up the metapackage are already available, so when your app is published, it can exclude those dlls from the output. This makes your published output smaller, and improves layer caching for Docker images.

The microsoft/dotnet:2.1.0-aspnetcore-runtime image is very similar to the microsoft/dotnet:2.1.0-runtime image, but instead of just installing the .NET Core runtime, it installs the .NET Core runtime and the ASP.NET Core shared framework, so you can run ASP.NET Core apps as well as .NET Core console apps.

You can view the Dockerfile for the image here:

FROM microsoft/dotnet:2.1-runtime-deps-stretch-slim

RUN apt-get update \  
    && apt-get install -y --no-install-recommends \
        curl \
    && rm -rf /var/lib/apt/lists/*

# Install ASP.NET Core
ENV ASPNETCORE_VERSION 2.1.0

RUN curl -SL --output aspnetcore.tar.gz https://dotnetcli.blob.core.windows.net/dotnet/aspnetcore/Runtime/$ASPNETCORE_VERSION/aspnetcore-runtime-$ASPNETCORE_VERSION-linux-x64.tar.gz \  
    && aspnetcore_sha512='0f37dc0fabf467c36866ceddd37c938f215c57b10c638d9ee572316a33ae66f7479a1717ab8a5dbba5a8d2661f09c09fcdefe1a3f8ea41aef5db489a921ca6f0' \
    && echo "$aspnetcore_sha512  aspnetcore.tar.gz" | sha512sum -c - \
    && mkdir -p /usr/share/dotnet \
    && tar -zxf aspnetcore.tar.gz -C /usr/share/dotnet \
    && rm aspnetcore.tar.gz \
    && ln -s /usr/share/dotnet/dotnet /usr/bin/dotnet

What should you use it for?

Fairly obviously, for running ASP.NET Core apps! This is the image to use if you've published an ASP.NET Core app and you need to run it in production. It has the smallest possible footprint but all the necessary framework components and optimisations. You can't use it for building your app though, as it doesn't have the SDK installed. For that, you need the following image.

If you want to go really small, check out the new Alpine-based images - 163MB vs 255MB for the base image!

4. microsoft/dotnet:2.1.300-sdk

  • Contains .NET Core SDK
  • Use for building .NET Core and ASP.NET Core apps

All of the images shown so far can be used for running apps, but in order to build your app, you need the .NET Core SDK image. Unlike all the runtime images which use debian:stretch-slim as the base, the microsoft/dotnet:2.1.300-sdk image uses the buildpack-deps:stretch-scm image. According to the Docker Hub description, the buildpack image:

…includes a large number of "development header" packages needed by various things like Ruby Gems, PyPI modules, etc.…a majority of arbitrary gem install / npm install / pip install should be successful without additional header/development packages…

The stretch-scm tag also ensures common tools like curl, git, and ca-certificates are installed.

The microsoft/dotnet:2.1.300-sdk image installs the native prerequisites (as you saw in the microsoft/dotnet:2.1.0-runtime-deps image), and then installs the .NET Core SDK. Finally, it sets some environment variables and warms up the NuGet package cache by running dotnet help in an empty folder, which makes subsequent dotnet operations faster.

You can view the Dockerfile for the image here:

FROM buildpack-deps:stretch-scm

# Install .NET CLI dependencies
RUN apt-get update \  
    && apt-get install -y --no-install-recommends \
        libc6 \
        libgcc1 \
        libgssapi-krb5-2 \
        libicu57 \
        liblttng-ust0 \
        libssl1.0.2 \
        libstdc++6 \
        zlib1g \
    && rm -rf /var/lib/apt/lists/*

# Install .NET Core SDK
ENV DOTNET_SDK_VERSION 2.1.300

RUN curl -SL --output dotnet.tar.gz https://dotnetcli.blob.core.windows.net/dotnet/Sdk/$DOTNET_SDK_VERSION/dotnet-sdk-$DOTNET_SDK_VERSION-linux-x64.tar.gz \  
    && dotnet_sha512='80a6bfb1db5862804e90f819c1adeebe3d624eae0d6147e5d6694333f0458afd7d34ce73623964752971495a310ff7fcc266030ce5aef82d5de7293d94d13770' \
    && echo "$dotnet_sha512 dotnet.tar.gz" | sha512sum -c - \
    && mkdir -p /usr/share/dotnet \
    && tar -zxf dotnet.tar.gz -C /usr/share/dotnet \
    && rm dotnet.tar.gz \
    && ln -s /usr/share/dotnet/dotnet /usr/bin/dotnet

# Configure Kestrel web server to bind to port 80 when present
ENV ASPNETCORE_URLS=http://+:80 \  
    # Enable detection of running in a container
    DOTNET_RUNNING_IN_CONTAINER=true \
    # Enable correct mode for dotnet watch (only mode supported in a container)
    DOTNET_USE_POLLING_FILE_WATCHER=true \
    # Skip extraction of XML docs - generally not useful within an image/container - helps performance
    NUGET_XMLDOC_MODE=skip

# Trigger first run experience by running arbitrary cmd to populate local package cache
RUN dotnet help  

What should you use it for?

This image has the .NET Core SDK installed, so you can use it for building your .NET Core and ASP.NET Core apps. Technically you can also use this image for running your apps in production as the SDK includes the runtime, but you shouldn't do that in practice. As discussed at the beginning of this post, optimising your Docker images in production is important for performance reasons, but the microsoft/dotnet:2.1.300-sdk image weighs in at a hefty 1.73GB, compared to the 255MB for the microsoft/dotnet:2.1.0-runtime image.

To get the best of both worlds, you should use this image (or one of the later images) to build your app, and one of the runtime images to run your app in production. You can see how to do this using Docker multi-stage builds in Scott Hanselman's post here, or in my blog series here.
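As a minimal sketch of that multi-stage approach (the solution layout and project name are hypothetical, and all layer-caching optimisations are omitted):

FROM microsoft/dotnet:2.1.300-sdk AS builder
WORKDIR /sln

# Copy everything and publish the app (no caching optimisations in this sketch)
COPY . .
RUN dotnet publish ./src/MyApp/MyApp.csproj -c Release -o /sln/dist

# Final stage: only the published output and the ASP.NET Core runtime
FROM microsoft/dotnet:2.1.0-aspnetcore-runtime
WORKDIR /app
COPY --from=builder /sln/dist .
ENTRYPOINT ["dotnet", "MyApp.dll"]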

Summary

In this post I walked through some of the common Docker images used in .NET Core 2.1 development. Each of the images has a set of specific use-cases, and it's important you use the right one for your requirements. These images have changed since I wrote the previous version of this post; if you're using an earlier version of .NET Core, check out that one instead.


Damien Bowden: Dynamic CSS in an ASP.NET Core MVC View Component

This post shows how a view with dynamic CSS styles could be implemented using an MVC view component in ASP.NET Core. The values are changed using an HTML form with ASP.NET Core tag helpers, and passed into the view component, which displays the view using CSS styling. The styles are set at runtime.

Code: https://github.com/damienbod/AspNetCoreMvcDynamicViews

Creating the View Component

The View Component is a nice way of implementing components in ASP.NET Core MVC. The view component is saved in the \Views\Shared\Components\DynamicDisplay folder, which follows one of the standard paths pre-defined by ASP.NET Core. This can be changed, but I always try to use the defaults where possible.

The DynamicDisplay class derives from the ViewComponent class and implements a single async method, InvokeAsync, which returns a Task of type IViewComponentResult.

using AspNetCoreMvcDynamicViews.Views.Shared.Components.DynamicDisplay;
using Microsoft.AspNetCore.Mvc;
using System.Threading.Tasks;

namespace AspNetCoreMvcDynamicViews.Views.Home.ViewComponents
{
    [ViewComponent(Name = "DynamicDisplay")]
    public class DynamicDisplay : ViewComponent
    {
        public async Task<IViewComponentResult> InvokeAsync(DynamicDisplayModel dynamicDisplayModel)
        {
            return View(await Task.FromResult(dynamicDisplayModel));
        }
    }
}

The view component uses a simple view model with some helper methods to make it easier to use the data in a cshtml view.

namespace AspNetCoreMvcDynamicViews.Views.Shared.Components.DynamicDisplay
{
    public class DynamicDisplayModel
    {
        public int NoOfHoles { get; set; } = 2;

        public int BoxHeight { get; set; } = 100;

        public int NoOfBoxes { get; set; } = 2;

        public int BoxWidth { get; set; } = 200;

        public string GetAsStringWithPx(int value)
        {
            return $"{value}px";
        }

        public string GetDisplayHeight()
        {
            return $"{BoxHeight + 50 }px";
        }

        public string GetDisplayWidth()
        {
            return $"{BoxWidth * NoOfBoxes}px";
        }
    }
}

The cshtml view uses both css classes and styles to do a dynamic display of the data.

@using AspNetCoreMvcDynamicViews.Views.Shared.Components.DynamicDisplay
@model DynamicDisplayModel

<div style="height:@Model.GetDisplayHeight(); width:@Model.GetDisplayWidth()">
    @for (var i = 0; i < Model.NoOfBoxes; i++)
    {
    <div class="box" style="width:@Model.GetAsStringWithPx(Model.BoxWidth);height:@Model.GetAsStringWithPx(Model.BoxHeight);">
        @if (Model.NoOfHoles == 4)
        {
            @await Html.PartialAsync("./FourHolesPartial.cshtml")
        }
        else if (Model.NoOfHoles == 2)
        {
            @await Html.PartialAsync("./TwoHolesPartial.cshtml")
        }
        else if (Model.NoOfHoles == 1)
        {
            <div class="row justify-content-center align-items-center" style="height:100%">
                <span class="dot" style=""></span>
            </div>
        }
    </div>
    }
</div>

Partial views are used inside the view component to display some of the different styles. The partial view is added using the @await Html.PartialAsync call. The box with the four holes is implemented in a partial view.

<div class="row" style="height:50%">
    <div class="col-6">
        <span class="dot" style="float:left;"></span>
    </div>
    <div class="col-6">
        <span class="dot" style="float:right;"></span>
    </div>
</div>

<div class="row align-items-end" style="height:50%">
    <div class="col-6">
        <span class="dot" style="float:left;"></span>
    </div>
    <div class="col-6">
        <span class="dot" style="float:right;"></span>
    </div>
</div>

And CSS classes are used to display the data.

.dot {
	height: 25px;
	width: 25px;
	background-color: #bbb;
	border-radius: 50%;
	display: inline-block;
}

.box {
	float: left;
	height: 100px;
	border: 1px solid gray;
	padding: 5px;
	margin: 5px;
	margin-left: 0;
	margin-right: -1px;
}

Using the View Component

The view component is then used in a cshtml view. This view implements the form which sends the data to the server. The view component is added using the Component.InvokeAsync method, which takes the name of the view component and the model as parameters.

@using AspNetCoreMvcDynamicViews.Views.Shared.Components.DynamicDisplay
@model MyDisplayModel
@{
    ViewData["Title"] = "Home Page";
}

<div style="padding:20px;"></div>

<form asp-controller="Home" asp-action="Index" method="post">
    <div class="col-md-12">

        @*<div class="form-group row">
            <label  class="col-sm-3 col-form-label font-weight-bold">Circles</label>
            <div class="col-sm-9">
                <select class="form-control" asp-for="mmm" asp-items="mmmItmes"></select>
            </div>
        </div>*@

        <div class="form-group row">
            <label class="col-sm-5 col-form-label font-weight-bold">No of Holes</label>
            <select class="col-sm-5 form-control" asp-for="DynamicDisplayData.NoOfHoles">
                <option value="0" selected>No Holes</option>
                <option value="1">1 Hole</option>
                <option value="2">2 Holes</option>
                <option value="4">4 Holes</option>
            </select>
        </div>

        <div class="form-group row">
            <label class="col-sm-5 col-form-label font-weight-bold">Height in mm</label>
            <input class="col-sm-5 form-control" asp-for="DynamicDisplayData.BoxHeight" type="number" min="65" max="400" />
        </div>

        <div class="form-group row">
            <label class="col-sm-5 col-form-label font-weight-bold">No. of Boxes</label>
            <input class="col-sm-5 form-control" asp-for="DynamicDisplayData.NoOfBoxes" type="number" min="1" max="7" />
        </div>

        <div class="form-group row">
            <label class="col-sm-5 col-form-label font-weight-bold">Box Width</label>
            <input class="col-sm-5 form-control" asp-for="DynamicDisplayData.BoxWidth" type="number" min="65" max="400" />
        </div>

        <div class="form-group row">
            <button class="btn btn-primary col-sm-10" type="submit">Update</button>
        </div>

    </div>

    @await Component.InvokeAsync("DynamicDisplay", Model.DynamicDisplayData)

</form>

The MVC Controller implements two methods, one for the GET, and one for the POST. The form uses the POST to send the data to the server, so this could be saved if required.

public class HomeController : Controller
{
	[HttpGet]
	public IActionResult Index()
	{
		var model = new MyDisplayModel
		{
			DynamicDisplayData = new DynamicDisplayModel()
		};
		return View(model);
	}

	[HttpPost]
	public IActionResult Index(MyDisplayModel myDisplayModel)
	{
		// save data to db...
		return View("Index", myDisplayModel);
	}
}

Running the demo

When the application is started, the form is displayed with the default values.

And when the update button is clicked, the values are visualized inside the view component.

Links:

https://docs.microsoft.com/en-us/aspnet/core/mvc/views/view-components?view=aspnetcore-2.1

https://docs.microsoft.com/en-us/aspnet/core/mvc/views/partial?view=aspnetcore-2.1

https://docs.microsoft.com/en-us/aspnet/core/mvc/views/razor?view=aspnetcore-2.1


Anuraj Parameswaran: Deploying an ASP.NET Core 2.1 Application with AWS Elastic Beanstalk

AWS Elastic Beanstalk is an orchestration service offered by Amazon Web Services for deploying infrastructure which orchestrates various AWS services, including EC2, S3, Simple Notification Service (SNS), CloudWatch, auto scaling, and Elastic Load Balancers. Currently AWS Elastic Beanstalk only supports .NET Core 2.0.


Andrew Lock: Pushing NuGet packages built in Docker by running the container


In a previous post I described how you could build NuGet packages in Docker. One of the advantages of building NuGet packages in Docker is that you don't need any dependencies installed on the build server itself; you can install all the required dependencies in the Docker container instead. One of the disadvantages of this approach is that getting at the NuGet packages after they've been built is more tricky - you have to run the image to get at the files.

Given that constraint, it's likely that if you're building your apps in Docker, you'll also want to push your NuGet packages to a feed (e.g. nuget.org or myget.org) from Docker.

In this post I show how to create a Dockerfile for building your NuGet packages which you can then run as a container to push them to a NuGet feed.

Previous posts in this series:

Building your NuGet packages in Docker

I've had a couple of questions since my post on building NuGet packages in Docker as to why you would want to do this. Given Docker is for packaging and distributing apps, isn't it the wrong place for building NuGet packages?

While Docker images are a great way for distributing an app, one of their biggest selling points is the ability to isolate the dependencies of the app it contains from the host operating system which runs the container. For example, I can install a specific version of Node in the Docker container, without having to install Node on the build server.

That separation doesn't just apply when you're running your application, but also when building your application. To take an example from the .NET world - if I want to play with some pre-release version of the .NET SDK, I can install it into a Docker image and use that to build my app. If I wasn't using Docker, I would have to install it directly on the build server, which would affect everything it built, not just my test app. If there was a bug in the preview SDK it could potentially compromise the build-process for production apps too.

I could also use a global.json file to control the version of the SDK used to build each application.

The same argument applies to building NuGet packages in Docker as well as apps. By doing so, you isolate the dependencies required to package your libraries from those installed directly on the server.

For example, consider this simple Dockerfile. It uses the .NET Core 2.1 release candidate SDK (as it uses the 2.1.300-rc1-sdk base image), but you don't need to have that installed on your machine to be able to build and produce the required NuGet packages.

FROM microsoft/dotnet:2.1.300-rc1-sdk AS builder

ARG Version  
WORKDIR /sln

COPY . .

RUN dotnet restore  
RUN dotnet build /p:Version=$Version -c Release --no-restore  
RUN dotnet pack /p:Version=$Version -c Release --no-restore --no-build -o /sln/artifacts  

This Dockerfile doesn't have any optimisations, but it will restore and build a .NET solution in the root directory. It will then create NuGet packages and output them to the /sln/artifacts directory. You can set the version of the package by providing the Version as a build argument, for example:

docker build --build-arg Version=0.1.0 -t andrewlock/test-app .  

If the solution builds successfully, you'll have a Docker image that contains the NuGet .nupkg files, but they're not much good sat there. Instead, you'll typically want to push them to a NuGet feed. There's a couple of ways you could do that, but in the following example I show how to configure your Dockerfile so that it pushes the files when you docker run the image.

Pushing NuGet packages when a container is run

Before I show the code, a quick reminder on terminology:

  • An image is essentially a static file that is built from a Dockerfile. You can think of it as a mini hard-drive, containing all the files necessary to run an application. But nothing is actually running; it's just a file.
  • A container is what you get if you run an image.

The following Dockerfile expands on the previous one, so that when you run the image, it pushes the .nupkgs built in the previous stage to the nuget.org feed.

FROM microsoft/dotnet:2.1.300-rc1-sdk AS builder

ARG Version  
WORKDIR /sln

COPY . .

RUN dotnet restore  
RUN dotnet build /p:Version=$Version -c Release --no-restore  
RUN dotnet pack /p:Version=$Version -c Release --no-restore --no-build -o /sln/artifacts 

ENTRYPOINT ["dotnet", "nuget", "push", "/sln/artifacts/*.nupkg"]  
CMD ["--source", "https://api.nuget.org/v3/index.json"]  

This Dockerfile makes use of both ENTRYPOINT and CMD commands. For an excellent description of the differences between them, and when to use one over the other, see this article. In summary, I've used ENTRYPOINT to define the executable command to run and its constant arguments, and CMD to specify the optional arguments. When you run the image built using this Dockerfile (andrewlock/test-app for example) it will combine ENTRYPOINT and CMD to give the final command to run.

For example, if you run:

docker run --rm --name push-packages andrewlock/test-app  

then the Docker container will execute the following command in the container:

dotnet nuget push /sln/artifacts/*.nupkg --source https://api.nuget.org/v3/index.json  

When pushing files to nuget.org, you will typically need to provide an API key using the --api-key argument, so running the container as it is will give a 401 Unauthorized response. To provide the extra arguments to the dotnet nuget push command, add them at the end of your docker run statement:

docker run --rm --name push-packages andrewlock/test-app --source https://api.nuget.org/v3/index.json --api-key MY-SECRET-KEY  

When you pass additional arguments to the docker run command, they replace any arguments embedded in the image with CMD, and are appended to the ENTRYPOINT, to give the final command:

dotnet nuget push /sln/artifacts/*.nupkg --source https://api.nuget.org/v3/index.json --api-key MY-SECRET-KEY  

Note that I had to duplicate the --source argument in order to add the additional --api-key argument. When you provide additional arguments to the docker run command, it completely overrides the CMD arguments, so if you need them, you must repeat them when you call docker run.

Why push NuGet packages on run instead of on build?

The example I've shown here, using docker run to push NuGet packages to a NuGet feed, is only one way you can achieve the same goal. Another valid approach would be to call dotnet nuget push inside the Dockerfile itself, as part of the build process. For example, you could use the following Dockerfile:

FROM microsoft/dotnet:2.1.300-rc1-sdk AS builder

ARG Version  
ARG NUGET_KEY  
ARG NUGET_URL=https://api.nuget.org/v3/index.json  
WORKDIR /sln

COPY . .

RUN dotnet restore  
RUN dotnet build /p:Version=$Version -c Release --no-restore  
RUN dotnet pack /p:Version=$Version -c Release --no-restore --no-build -o /sln/artifacts  
RUN dotnet nuget push /sln/artifacts/*.nupkg --source $NUGET_URL --api-key $NUGET_KEY  

In this example, building the image itself would push the artifacts to your NuGet feed:

docker build --build-arg Version=0.1.0 --build-arg NUGET_KEY=MY-SECRET-KEY .  

So why choose one approach over the other? It's a matter of preference really.

Oftentimes I have a solution that consists of both libraries to push to NuGet and applications to package and deploy as Dockerfiles. In those cases, my build scripts tend to look like the following:

  1. Restore, build and test the whole solution in a shared Dockerfile
  2. Publish each of the apps to their own images
  3. Pack the libraries in an image
  4. Test the app images
  5. Push the app Docker images to the Docker repository
  6. Push the NuGet packages to the NuGet feed by running the Docker image

Moving the dotnet nuget push out of docker build and into docker run feels conceptually closer to the two-step approach taken for the app images. We don't build and push Docker images all in one step; there's a build phase and a push phase. The setup with NuGet adopts a similar approach. If I wanted to run some checks on the NuGet packages produced (e.g. testing they have been built with required attributes for example) then I could easily do that before they're pushed to NuGet.
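As a rough illustration, the last two steps of that script might boil down to commands like these (the image names, tag and feed details are made up, and the packages image is assumed to use the ENTRYPOINT/CMD setup shown earlier):

# Step 5: push the app image to the Docker registry
docker push myorg/my-app:0.1.0

# Step 6: run the packages image to push the .nupkg files to the NuGet feed
docker run --rm myorg/my-packages:0.1.0 --source https://api.nuget.org/v3/index.json --api-key $NUGET_API_KEY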

Whichever approach you take, there's definitely benefits to building your NuGet packages in Docker.

Summary

In this post I showed how you can build NuGet packages in Docker, and then push them to your NuGet feed when you run the container. By using ENTRYPOINT and CMD you can provide default arguments to make it easier to run the container. You don't have to use this two-stage approach - you could push your NuGet packages as part of the docker build call. I prefer to separate the two processes to more closely mirror the process of building and publishing app Docker images.


Anuraj Parameswaran: Yammer external login setup with ASP.NET Core

This post shows you how to enable your users to sign in with their Yammer account. Similar to the other social networks, the authentication is an OAuth 2 flow, beginning with the user authenticating with their Yammer credentials. The user then authorizes your app to connect to their Yammer network. The end result is a token that your app will use to write events to Yammer and retrieve Yammer data.


Anuraj Parameswaran: Working with Entity Framework Core - Hybrid Approach

Recently I started working on a Dictionary Web API, which converts English to Malayalam (my native language). I was able to find a word-definitions database as CSV, and by running the Import Data wizard in SQL Server I created a SQL Server database with the definitions. The definitions table contains thousands of rows, so I don’t want to create it and insert the data by hand; instead I want to use the Database First approach for creating the entity. So here is the command which builds the DbContext and POCO classes using an existing database.
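A Database First scaffold with the EF Core CLI typically looks something like this (a hedged sketch; the connection string and output folder here are assumptions, not the post's actual command):

dotnet ef dbcontext scaffold "Server=.;Database=Dictionary;Trusted_Connection=True;" Microsoft.EntityFrameworkCore.SqlServer --output-dir Models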


Andrew Lock: Setting ASP.NET Core version numbers for a Docker ONBUILD builder image


In a previous post, I showed how you can create NuGet packages when you build your app in Docker using the .NET Core CLI. As part of that, I showed how to set the version number for the package using MSBuild commandline switches.

That works well when you're directly calling dotnet build and dotnet pack yourself, but what if you want to perform those tasks in a "builder" Dockerfile, like I showed previously. In those cases you need to use a slightly different approach, which I'll describe in this post.

I'll start with a quick recap on using an ONBUILD builder, and how to set the version number of an app, and then I'll show the solution for how to combine the two. In particular, I'll show how to create a builder and a "downstream" app's Dockerfile where

  • Calling docker build with --build-arg Version=0.1.0 on your app's Dockerfile, will set the version number for your app in the builder image
  • You can provide a default version number in your app's Dockerfile, which is used if you don't provide a --build-arg
  • If the downstream image does not set the version, the builder Dockerfile uses a default version number.

Previous posts in this series:

Using ONBUILD to create builder images

The ONBUILD command allows you to specify a command that should be run when a "downstream" image is built. This can be used to create "builder" images that specify all the steps to build an application or library, reducing the boilerplate in your application's Dockerfile.

For example, in a previous post I showed how you could use ONBUILD to create a generic ASP.NET Core builder Dockerfile, reproduced below:

# Build image
FROM microsoft/aspnetcore-build:2.0.7-2.1.105 AS builder  
WORKDIR /sln

ONBUILD COPY ./*.sln ./NuGet.config  ./

# Copy the main source project files
ONBUILD COPY src/*/*.csproj ./  
ONBUILD RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done

# Copy the test project files
ONBUILD COPY test/*/*.csproj ./  
ONBUILD RUN for file in $(ls *.csproj); do mkdir -p test/${file%.*}/ && mv $file test/${file%.*}/; done 

ONBUILD RUN dotnet restore

ONBUILD COPY ./test ./test  
ONBUILD COPY ./src ./src  
ONBUILD RUN dotnet build -c Release --no-restore

ONBUILD RUN find ./test -name '*.csproj' -print0 | xargs -L1 -0 dotnet test -c Release --no-build --no-restore  

By basing your app Dockerfile on this image (in the FROM statement), your application would be automatically restored, built and tested, without you having to include those steps yourself. Instead, your app image could be very simple, for example:

# Build image
FROM andrewlock/aspnetcore-build:2.0.7-2.1.105 as builder

# Publish
RUN dotnet publish "./AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj" -c Release -o "../dist" --no-restore

#App image
FROM microsoft/aspnetcore:2.0.7  
WORKDIR /app  
ENV ASPNETCORE_ENVIRONMENT Local  
ENTRYPOINT ["dotnet", "AspNetCoreInDocker.Web.dll"]  
COPY --from=builder /sln/dist .  

Setting the version number when building your application

You often want to set the version number of a library or application when you build it - you might want to record the app version in log files when it runs for example. Also, when building NuGet packages you need to be able to set the package version number. There are a variety of different version numbers available to you (as I discussed in a previous post), all of which can be set from the command line when building your application.

In my last post I described how to set version numbers using MSBuild switches. For example, to set the Version MSBuild property when building (which, when set, updates all the other version numbers of the assembly) you could use the following command

dotnet build /p:Version=0.1.2-beta -c Release --no-restore  

Setting the version in this way is the same whether you're running it from the command line, or in Docker. However, in your Dockerfile, you will typically want to pass the version to set as a build argument. For example, the following command:

docker build --build-arg Version="0.1.0" .  

could be used to set the Version property to 0.1.0 by using the ARG command, as shown in the following Dockerfile:

FROM microsoft/dotnet:2.0.3-sdk AS builder

ARG Version  
WORKDIR /sln

COPY . .

RUN dotnet restore  
RUN dotnet build /p:Version=$Version -c Release --no-restore  
RUN dotnet pack /p:Version=$Version -c Release --no-restore --no-build  

Using ARGs in a parent Docker image that uses ONBUILD

The two techniques described so far work well in isolation, but getting them to play nicely together requires a little bit more work. The initial problem is to do with the way Docker treats builder images that use ONBUILD.

To explore this, imagine you have the following, simple, builder image, tagged as andrewlock/testbuild:

FROM microsoft/aspnetcore-build:2.0.7-2.1.105 AS builder  
WORKDIR /sln

ONBUILD COPY ./test ./test  
ONBUILD COPY ./src ./src

ONBUILD RUN dotnet build -c Release  

Warning: This Dockerfile has no optimisations, don't use it for production!

As a first attempt, you might try just adding the ARG command to your downstream image, and passing the --build-arg in. The following is a very simple Dockerfile that uses the builder, and accepts an argument.

# Build image
FROM andrewlock/testbuild as builder

ARG Version

# Publish
RUN dotnet publish "./AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj" -c Release -o "../dist" --no-restore  

Calling docker build --build-arg Version="0.1.0" . will build the image, and set the $Version parameter in the downstream dockerfile to 0.1.0, but that won't be used in the builder Dockerfile at all, so it would only be useful if you're running dotnet pack in your downstream image for example.

Instead, you can use a couple of different characteristics about Dockerfiles to pass values up from your downstream app's Dockerfile to the builder Dockerfile.

  • Any ARG defined before the first FROM is "global", so it's not tied to a builder stage. Any stage that wants to use it, still needs to declare its own ARG command
  • You can provide default values to ARG commands using the format ARG value=default
  • You can combine ONBUILD with ARG

Let's combine all these features and create our new builder image.

A builder image that supports setting the version number

I've cut to the chase a bit here - needless to say I spent a while fumbling around, trying to get the Dockerfiles doing what I wanted. The solution shown in this post is based on the excellent description in this issue.

The annotated builder image is as follows. I've included comments in the file itself, rather than breaking it down afterwards. As before, this is a basic builder image, just to demonstrate the concept. For a Dockerfile with all the optimisations see my builder image on Dockerhub.

FROM microsoft/aspnetcore-build:2.0.7-2.1.105 AS builder  

# This defines the `ARG` inside the build-stage (it will be executed after `FROM`
# in the child image, so it's a new build-stage). Don't set a default value so that
# the value is set to what's currently set for `BUILD_VERSION`
ONBUILD ARG BUILD_VERSION

# If BUILD_VERSION is set/non-empty, use it, otherwise use a default value
ONBUILD ARG VERSION=${BUILD_VERSION:-1.0.0}

WORKDIR /sln

ONBUILD COPY ./test ./test  
ONBUILD COPY ./src ./src

ONBUILD RUN dotnet build -c Release /p:Version=$VERSION  

I've actually defined two arguments here, BUILD_VERSION and VERSION. We do this to ensure that we can set a default version in the builder image, while also allowing you to override it from the downstream image or by using --build-arg.

Those two additional ONBUILD ARG lines are all you need in your builder Dockerfile. You need to either update your downstream app's Dockerfile as shown below, or use --build-arg to set the BUILD_VERSION argument for the builder to use.

If you want to set the version number with --build-arg

If you just want to provide the version number as a --build-arg value, then you don't need to change your downstream image. You could use the following:

FROM andrewlock/testbuild as builder  
RUN dotnet publish "./AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj" -c Release -o "../dist" --no-restore  

And then set the version number when you build:

docker build --build-arg BUILD_VERSION="0.3.4-beta" .  

That would pass the BUILD_VERSION value up to the builder image, which would in turn pass it to the dotnet build command, setting the Version property to 0.3.4-beta.

If you don't provide the --build-arg argument, the builder image will use its default value (1.0.0) as the build number.

Note that this will overwrite any version number you've set in your csproj files, so this approach is only any good for you if you're relying on a CI process to set your version numbers

If you want to set a default version number in your downstream Dockerfile

If you want to have the version number of your app checked in to source, then you can set a version number in your downstream Dockerfile. Set the BUILD_VERSION argument before the first FROM command in your app's Dockerfile:

ARG BUILD_VERSION=0.2.3  
FROM andrewlock/testbuild as builder  
RUN dotnet publish "./AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj" -c Release -o "../dist" --no-restore  

Running docker build . on this file will ensure that the libraries built in the builder file have a version of 0.2.3.

If you wish to override this at build time, you can simply pass in the build argument as before:

docker build --build-arg BUILD_VERSION="0.3.4-beta" .  

And there you have it! ONBUILD playing nicely with ARG. If you decide to adopt this pattern in your builder images, just be aware that you will no longer be able to change the version number by setting it in your csproj files.

Summary

In this post I described how you can use ONBUILD and ARG to dynamically set version numbers for your .NET libraries when you're using a generalised builder image. For an alternative description (and the source of this solution), see this issue on GitHub and the provided examples.


Dominick Baier: The State of HttpClient and .NET Multi-Targeting

IdentityModel is a library that uses HttpClient internally – it should also run on all recent versions of the .NET Framework and .NET Core.

HttpClient is sometimes “built-in”, e.g. in the .NET Framework, and sometimes not, e.g. in .NET Core 1.x. So fundamentally there is a “GAC version” and a “NuGet version” of the same type.

We had lots of issues with this because it seemed that regardless of which combination of the flavours of HttpClient you used, it would lead to a problem one way or another (see the GitHub issues). Additional confusion came from the fact that the .NET tooling had certain bugs in the past that needed workarounds, which led to other problems when those bugs were fixed in later tooling.

Long story short – every time I had to change the csproj file, it broke someone. The latest issue was related to Powershell and .NET 4.7.x (see here).

I wanted, once and for all, an official statement on how to deal with HttpClient - so I reached out to Immo (@terrajobst) over various channels. Turns out I was not alone with this problem.


Despite him being on holidays during that time, he gave a really elaborate answer that contains both excellent background information and guidance.

I thought I should copy it here, so it becomes more search engine friendly and hopefully helps out other people that are in the same situation (original thread here).

“Alright, let me try to answer your question. It will probably have more detail than you need/asked for but it might be helpful to start with intention/goals and then the status quo. HttpClient started out as a NuGet package (out-of-band) and was added to the .NET Framework in 4.5 as well (in-box).

With .NET Core/.NET Standard we originally tried to model the .NET platform as a set of packages where being in-box vs. out-of-band no longer mattered. However, this was messier and more complicated than we anticipated.

As a result, we largely abandoned the idea of modeling the .NET platform as a NuGet graph with Core/Standard 2.0.

With .NET Core 2.0 and .NET Standard 2.0 you shouldn’t need to reference the System.Net.Http NuGet package at all. It might get pulled in from 1.x dependencies though.

Same goes for .NET Framework: if you target 4.5 and up, you should generally use the in-box version instead of the NuGet package. Again, you might end up pulling it in for .NET Standard 1.x and PCL dependencies, but code written directly against .NET Framework shouldn’t use it.

So why does the package still exist/why do we still update it? Simply because we want to make existing code work that took a dependency on it. However, as you discovered that isn’t smooth sailing on .NET Framework.

The intended model for the legacy package is: if you consume the package from .NET Framework 4.5+, .NET Core 2+, or .NET Standard 2+, the package only forwards to the platform-provided implementation as opposed to bringing its own version.

That’s not what actually happens in all cases though: the HTTP Client package will (partially) replace in-box components on .NET Framework which happen to work for some customers and fails for others. Thus, we cannot easily fix the issue now.

On top of that we have the usual binding issues with the .NET Framework so this only really works well if you add binding redirects. Yay!

So, as a library author my recommendation is to avoid taking a dependency on this package and prefer the in-box versions in .NET Framework 4.5, .NET Core 2.0 and .NET Standard 2.0.

Thanks Immo!
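As a hedged sketch of that guidance for a multi-targeted library (the target frameworks below are chosen only as an example), the csproj can rely on the in-box HttpClient on both platforms without referencing the System.Net.Http NuGet package at all:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFrameworks>net461;netstandard2.0</TargetFrameworks>
  </PropertyGroup>

  <!-- No PackageReference to System.Net.Http: both targets use the in-box HttpClient -->
  <ItemGroup Condition="'$(TargetFramework)' == 'net461'">
    <!-- Reference the framework assembly shipped with the .NET Framework -->
    <Reference Include="System.Net.Http" />
  </ItemGroup>

</Project>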


Andrew Lock: Creating NuGet packages in Docker using the .NET Core CLI


This is the next post in a series on building ASP.NET Core apps in Docker. In this post, I discuss how you can create NuGet packages when you build your app in Docker using the .NET Core CLI.

There's nothing particularly different about doing this in Docker compared to another system, but there are a couple of gotchas with versioning you can run into if you're not careful.

Previous posts in this series:

Creating NuGet packages with the .NET CLI

The .NET Core SDK and new "SDK style" .csproj format makes it easy to create NuGet packages from your projects, without having to use NuGet.exe, or mess around with .nuspec files. You can use the dotnet pack command to create a NuGet package by providing the path to a project file.

For example, imagine you have a library in your solution you want to package:


You can pack this project by running the following command from the solution directory - the .csproj file is found and a NuGet package is created. I've used the -c switch to ensure we're building in Release mode:

dotnet pack ./src/AspNetCoreInDocker -c Release  

By default, this command runs dotnet restore and dotnet build before producing the final NuGet package, in the bin folder of your project:


If you've been following along with my previous posts, you'll know that when you build apps in Docker, you should think carefully about the layers that are created in your image. In previous posts I described how to structure your projects so as to take advantage of this layer caching. In particular, you should ensure the dotnet restore happens early in the Docker layers, so that it is cached for subsequent builds.

You will typically run dotnet pack at the end of a build process, after you've confirmed all the tests for the solution pass. At that point, you will have already run dotnet restore and dotnet build so, running it again is unnecessary. Luckily, dotnet pack includes switches to do just this:

dotnet pack ./src/AspNetCoreInDocker -c Release --no-build --no-restore  

If your project has multiple projects that you want to package, you can pass in the path to a solution file, or just call dotnet pack in the solution directory:

dotnet pack -c Release --no-build --no-restore  

This will attempt to package all projects in your solution. If you don't want to package a particular project, you can add <IsPackable>false</IsPackable> to the project's .csproj file. For example:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <IsPackable>false</IsPackable>
  </PropertyGroup>

</Project>  

That's pretty much all there is to it. You can add this command to the end of your Dockerfile, and NuGet packages will be created for all your packable projects. There's one major point I've left out with regard to creating packages - setting the version number.

Setting the version number for your NuGet packages

Version numbers seem to be a continual bugbear of .NET; ASP.NET Core has gone through so many numbering iterations and mis-aligned versions that it can be hard for newcomers to figure out what's going on.

Sadly, the same is almost true when it comes to versioning of your .NET Project dlls. There are no less than seven different version properties you can apply to your project. Each of these has slightly different rules, and meaning, as I discussed in a previous post.

Luckily, you can typically get away with only worrying about one: Version.

As I discussed in my previous post, the MSBuild Version property is used as the default value for the various version numbers that are embedded in your assembly: AssemblyVersion, FileVersion, and InformationalVersion, as well as the NuGet PackageVersion when you pack your library. When you're building NuGet packages to share with other applications, you will probably want to ensure that these values are all updated.


There are two primary ways you can set the Version property for your project:

  • Set it in your .csproj file
  • Provide it at the command line when you dotnet build your app.

Which you choose is somewhat a matter of preference - if it's in your .csproj, then the build number is checked into source code and will be picked up automatically by the .NET CLI. However, be aware that if you're building in Docker (and have been following my optimisation series), then updating the .csproj will break your layer cache, so you'll get a slower build immediately after bumping the version number.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <Version>0.1.0</Version>
  </PropertyGroup>

</Project>  

One reason to provide the Version number on the command line is if your app version comes from a CI build. If you create a NuGet package in AppVeyor/Travis/Jenkins with every checkin, then you might want your version numbers to be provided by the CI system. In that case, the easiest approach is to set the version at runtime.

In principle, setting the Version just requires passing the correct argument to set the MSBuild property when you call dotnet:

RUN dotnet build /p:Version=0.1.0 -c Release --no-restore  
RUN dotnet pack /p:Version=0.1.0 -c Release --no-restore --no-build  

However, if you're using a CI system to build your NuGet packages, you need some way of updating the version number in the Dockerfile dynamically. There's several ways you could do this, but one way is to use a Docker build argument.

Build arguments are values passed in when you call docker build. For example, I could pass in a build argument called Version when building my Dockerfile using:

docker build --build-arg Version="0.1.0" .  

Note that as you're providing the version number on the command line when you call docker build you can pass in a dynamic value, for example an Environment Variable set by your CI system.

In order for your Dockerfile to use the provided build argument, you need to declare it using the ARG instruction:

ARG Version  

To put that into context, the following is a very basic Dockerfile that uses a version provided in --build-arg when building the app:

FROM microsoft/dotnet:2.0.3-sdk AS builder

ARG Version  
WORKDIR /sln

COPY . .

RUN dotnet restore  
RUN dotnet build /p:Version=$Version -c Release --no-restore  
RUN dotnet pack /p:Version=$Version -c Release --no-restore --no-build  

Warning: This Dockerfile is VERY basic - don't use it for anything other than as an example of using ARG!

After building this Dockerfile you'll have an image that contains the NuGet packages for your application. It's then just a case of using dotnet nuget push to publish your package to a NuGet server. I won't go into details on how to do that in this post, so check the documentation for details.

Summary

Building NuGet packages in Docker is much like building them anywhere else with dotnet pack. The main things you need to take into account are optimising your Dockerfile to take advantage of layer caching, and how to set the version number for the generated packages. In this post I described how to use the --build-arg argument to update the Version property at build time, to give the smallest possible effect on your build cache.


Anuraj Parameswaran: Code coverage in .NET Core with Coverlet

A few days back I wrote a post about code coverage in ASP.NET Core. In that post I was using Visual Studio 2017 Enterprise, which doesn’t support Linux or Mac and is costly. Later I found an alternative, Coverlet - a cross-platform code coverage library for .NET Core, with support for line, branch and method coverage. Coverlet integrates with the MSBuild system, so it doesn’t require any additional setup other than including the NuGet package in the unit test project. It integrates with the dotnet test infrastructure built into the .NET Core CLI and, when enabled, will automatically generate coverage results after tests are run.
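As a quick sketch of that setup (using Coverlet's documented MSBuild integration), adding the package to the test project and enabling coverage collection looks like this:

# Add the Coverlet MSBuild package to the unit test project
dotnet add package coverlet.msbuild

# Run the tests with coverage collection enabled
dotnet test /p:CollectCoverage=true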


Ali Kheyrollahi: CacheCow 2.0 is here - now supporting .NET Standard and ASP.NET Core MVC


CacheCow 2.0 Series:

    So, no CacheCore in the end!

    Yeah. I did announce last year that the new updated CacheCow will live under the name CacheCore. The more I worked on it, the more it became evident that only a tiny amount of CacheCow will ever be Core-related. And frankly trends come and go, while HTTP Caching is pretty much unchanged for the last 20 years.

    So the name CacheCow lives on, although in the end what matters for a library is whether it can solve any of your problems. I hope it will, and will carry on doing so. Now you can use CacheCow.Client with .NET 4.5.2+ and .NET Standard 2.0+. CacheCow.Server also supports both Web API and ASP.NET Core MVC - and possibly Nancy soon!

    CacheCow 2.0 has lots of documentation and the project now has 3 sample projects covering both client and server sides in the same project.

    CacheCow.Server has changed radically

    The design for the server-side of CacheCow 0.x and 1.x was based on the assumption that your API is a pure RESTful API and the data only changes through calling its endpoints, so the API layer gets to see all changes to its underlying resources. The more I explored over the years, the more this turned out to be a pretty big assumption, one that is realistic only in the REST La La Land - a big learning for me. And even if it holds, the relationships between resources made server-side cache directive management a mammoth task. For example, in the familiar scenario of customer-product-orders, if an order changes, the cache for the collection of orders is now invalidated - hence the API needs to understand which resource is a collection of which. What is more, a change in the customer could change the order data (depending on implementation of course, but just take it for the sake of argument). So it meant that the API now has to know a lot more: single vs collection resources, relationships between resources... it was a slippery slope to a very bad place.

    With that assumption removed, the responsibility now lies with the back-end stores which provide data for the resources - they will be queried by a couple of constructs added to CacheCow.Server. If you opt to implement that part for your API, then you have a super-efficient API. If not, there are some defaults there to do the work for you - although in a sub-optimal way. All of this will be explained in the CacheCow.Server posts, but the point is that CacheCow.Server is now a clean abstraction for HTTP Caching, as clean as I could make it. Judge for yourself.

    What is HTTP Caching?

    Caching is a very familiar notion in programming and pretty much every developer uses it on a regular basis. This familiarity has a downside to it, since HTTP Caching is more complex and in many ways different to the routine caching in code - hence it is very common to see misunderstandings even amongst senior developers. If you ask an average developer this question: "In HTTP Caching, where does the cached data get stored?" you are probably more likely to hear the wrong answer "server" than the correct answer "client". In fact, many developers are looking to improve their server-side code's performance by turning on caching on the server, while if the callers ignore the caching directives it will not result in any benefit.

    This reminds me of a blog post I wrote 6 years ago where I used HTTP Caching as an example of a mixed-concern (as opposed to a server-concern or client-concern) where "For HTTP caching to work, client and server need to work in tandem". This is a key difference from the usual caching scenarios seen every day. What makes HTTP Caching even more complex are the concurrency primitives, built in starting with HTTP 1.1 - we will look into those below.

    I know HTTP Caching is hardly new and has been explained many times before. But considering the number of times I have seen it completely misunderstood, I think it deserves your 5-10 minutes - even if just as a refresher.


    Resources vs. Representations

    REST advocates exposing services through a uniform API (where HTTP is one such implementation) allowing resources to be created, modified and queried using the API. A resource is addressed by its location identifier or URL (e.g. /api/car/123). When a client requests a resource, only a representation of the resource is sent back. This means that the client receives only one representation out of many possible representations. It also means that when the client caches a representation, that cache entry is only valid if the representation requested matches the one cached. And finally, a client might cache different representations of the same resource. But what does all of this mean?

    HTTP GET - the server serves a representation of the resource, and also sends cache directives.
    A resource could be represented differently in terms of format, encoding, language and other presentation concerns. HTTP provides semantics for the client to express its preferences in such concerns with headers such as Accept, Accept-Language and Accept-Encoding. There could be other headers that can result in alternative representations. The server is responsible for returning the definitive list of such headers in the Vary header.

    Cache Directives

    The server is responsible for returning cache directives along with the representation. The Cache-Control header is the de facto cache directive, defining whether the representation can be cached, for how long, whether by the end client or also by the HTTP intermediaries/proxies, etc. HTTP 1.0 had the simple Expires header, which only defined the absolute expiry time of the representation.

    You could also think of other cache-related headers as cache directives (although purely speaking they are not) such as ETag, Last-Modified and Vary.

    Resource Version Identifiers (Validators)

    HTTP 1.1 defines ETag as an opaque identifier which defines the version of the resource. ETag (or EntityTag) can be strong or weak. Normally a strong ETag identifies version of the representation while a weak ETag is only at the resource level.

    Last-Modified header was the main validator in HTTP 1.0 but since it is based on a date with up-to-a-second precision, it is not suitable for achieving high consistency since a resource could change multiple times in a second.

    CacheCow supports both validators (ETag and Last-Modified) and combines these two notions in the construct TimedETag.

    Validating (conditional) HTTP Calls

    A GET call can ask the server for the resource on the condition that it has been modified with respect to its validator. In this case, the client sends ETag(s) in the If-None-Match header or a Last-Modified date in the If-Modified-Since header. If the validation matches and no change was made, the server returns status 304, otherwise the resource is sent back.

    For a PUT (and DELETE) call, the client sends validators in If-Match or If-Unmodified-Since. The server performs the action if the validation matches, otherwise status 412 is sent back.
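    As a minimal sketch of such a validating GET using the raw HttpClient (the URL and ETag value below are made up), the 304 handling looks roughly like this:

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class ConditionalGetExample
    {
        static async Task Main()
        {
            var client = new HttpClient();

            var request = new HttpRequestMessage(HttpMethod.Get, "https://example.org/api/car/123");
            // Send the previously cached ETag; the server compares it with the current version
            request.Headers.IfNoneMatch.Add(new EntityTagHeaderValue("\"abc123\""));

            var response = await client.SendAsync(request);

            if (response.StatusCode == HttpStatusCode.NotModified)
            {
                // 304 - the cached representation is still valid, so reuse it
                Console.WriteLine("Not modified - use the cached representation");
            }
            else
            {
                // 200 - a fresh representation (and usually a new validator) was returned
                Console.WriteLine(await response.Content.ReadAsStringAsync());
            }
        }
    }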

    Consistency

    The client normally caches representations for longer than the expiry; after the expiry it resorts to validating calls, and if they succeed it can carry on using the cached representations.

    In fact the server can return representations with immediate expiry, forcing the client to validate every time before using the cached resource. This scenario can be called High-Consistency caching since it ensures the client always uses the most recent version.

    Is HTTP Caching suitable for my scenario?

    Consider using HTTP Caching if:
    • Both your client and server are cache-aware. The client either is a browser which is the ultimate HTTP machine well capable of handling cache directives or a client that understands caching such as HttpClient + CacheCow.Client.
    • You need high-consistency caching and cannot afford for clients to use outdated data
    • Saving on network bandwidth is important

    HTTP Caching is unsuitable for you if:
    • Your client does not understand/implement HTTP caching
    • The server is unable to provide cache directives


    In the next post, we will look into CacheCow.Client.


    Damien Bowden: Uploading and sending image messages with ASP.NET Core SignalR

    This article shows how images could be uploaded using a file upload with an HTML form in an ASP.NET Core MVC view, and then sent to application clients using SignalR. The images are uploaded as an ICollection of IFormFile objects, and sent to the SignalR clients using a base64 string. Angular is used to implement the SignalR clients.

    Code https://github.com/damienbod/AspNetCoreAngularSignalR

    Posts in this series

    SignalR Server

    The SignalR Hub is really simple. This implements a single method which takes an ImageMessage type object as a parameter.

    using System.Threading.Tasks;
    using AspNetCoreAngularSignalR.Model;
    using Microsoft.AspNetCore.SignalR;
    
    namespace AspNetCoreAngularSignalR.SignalRHubs
    {
        public class ImagesMessageHub : Hub
        {
            public Task ImageMessage(ImageMessage file)
            {
                return Clients.All.SendAsync("ImageMessage", file);
            }
        }
    }
    

    The ImageMessage class has two properties, one for the image byte array, and a second for the image information, which is required so that the client application can display the image.

    public class ImageMessage
    {
    	public byte[] ImageBinary { get; set; }
    	public string ImageHeaders { get; set; }
    }
    

    In this example, SignalR is added to the ASP.NET Core application in the Startup class, but this could also be done directly in the Kestrel server. The SignalR services are added with AddSignalR, and then each Hub is mapped explicitly to a URL in the Configure method.

    public void ConfigureServices(IServiceCollection services)
    {
    	...
    	
    	services.AddTransient<ValidateMimeMultipartContentFilter>();
    
    	services.AddSignalR()
    	  .AddMessagePackProtocol();
    
    	services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
    }
    
    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
    	...
    
    	app.UseSignalR(routes =>
    	{
    		routes.MapHub<ImagesMessageHub>("/zub");
    	});
    
    	app.UseMvc(routes =>
    	{
    		routes.MapRoute(
    			name: "default",
    			template: "{controller=FileClient}/{action=Index}/{id?}");
    	});
    }
    
    

    A File Upload ASP.NET Core MVC controller is implemented to support the file upload. The SignalR IHubContext interface for the ImagesMessageHub type is added via dependency injection. When files are uploaded, the IFormFile collection which contains the images is read into memory and sent as a byte array to the SignalR clients. Maybe this could be optimized.

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Threading.Tasks;
    using AspNetCoreAngularSignalR.Model;
    using AspNetCoreAngularSignalR.SignalRHubs;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.AspNetCore.SignalR;
    using Microsoft.Net.Http.Headers;
    
    namespace AspNetCoreAngularSignalR.Controllers
    {
        [Route("api/[controller]")]
        public class FileUploadController : Controller
        {
            private readonly IHubContext<ImagesMessageHub> _hubContext;
    
            public FileUploadController(IHubContext<ImagesMessageHub> hubContext)
            {
                _hubContext = hubContext;
            }
    
            [Route("files")]
            [HttpPost]
            [ServiceFilter(typeof(ValidateMimeMultipartContentFilter))]
            public async Task<IActionResult> UploadFiles(FileDescriptionShort fileDescriptionShort)
            {
                if (ModelState.IsValid)
                {
                    foreach (var file in fileDescriptionShort.File)
                    {
                        if (file.Length > 0)
                        {
                            using (var memoryStream = new MemoryStream())
                            {
                                await file.CopyToAsync(memoryStream);
    
                                var imageMessage = new ImageMessage
                                {
                                    ImageHeaders = "data:" + file.ContentType + ";base64,",
                                    ImageBinary = memoryStream.ToArray()
                                };
    
                                await _hubContext.Clients.All.SendAsync("ImageMessage", imageMessage);
                            }
                        }
                    }
                }
    
                return Redirect("/FileClient/Index");
            }
        }
    }
    
    
    

    SignalR Angular Client

    The Angular SignalR client uses the HubConnection to receive ImageMessage messages. Each message is pushed to the client array which is used to display the images. The @aspnet/signalr npm package is required to use the HubConnection.

    import { Component, OnInit } from '@angular/core';
    import { HubConnection } from '@aspnet/signalr';
    import * as signalR from '@aspnet/signalr';
    
    import { ImageMessage } from '../imagemessage';
    
    @Component({
        selector: 'app-images-component',
        templateUrl: './images.component.html'
    })
    
    export class ImagesComponent implements OnInit {
        private _hubConnection: HubConnection | undefined;
        public async: any;
        message = '';
        messages: string[] = [];
    
        images: ImageMessage[] = [];
    
        constructor() {
        }
    
        ngOnInit() {
            this._hubConnection = new signalR.HubConnectionBuilder()
                .withUrl('https://localhost:44324/zub')
                .configureLogging(signalR.LogLevel.Trace)
                .build();
    
            this._hubConnection.stop();
    
            this._hubConnection.start().catch(err => {
                console.error(err.toString())
            });
    
            this._hubConnection.on('ImageMessage', (data: any) => {
                console.log(data);
                this.images.push(data);
            });
        }
    }
    

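    The ImageMessage TypeScript type imported above could look like the following minimal sketch (an assumption, with property names inferred from the camel-cased JSON the hub sends and the template binding below):

    export interface ImageMessage {
        // base64-encoded image bytes (the C# byte[] is serialized as a base64 string)
        imageBinary: string;
        // e.g. 'data:image/png;base64,'
        imageHeaders: string;
    }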
    The Angular template displays the images using the header and the binary data properties.

    <div class="container-fluid">
    
        <h1>Images</h1>
    
       <a href="https://localhost:44324/FileClient/Index" target="_blank">Upload Images</a> 
    
        <div class="row" *ngIf="images.length > 0">
            <img *ngFor="let image of images;" 
            width="150" style="margin-right:5px" 
            [src]="image.imageHeaders + image.imageBinary">
        </div>
    </div>
    

    File Upload

    The images are uploaded using an ASP.NET Core MVC View which uses a multiple file input HTML control. This sends the files to the MVC Controller as a multipart/form-data request.

    <form enctype="multipart/form-data" method="post" action="https://localhost:44324/api/FileUpload/files" id="ajaxUploadForm" novalidate="novalidate">
    
        <fieldset>
            <legend style="padding-top: 10px; padding-bottom: 10px;">Upload Images</legend>
    
            <div class="col-xs-12" style="padding: 10px;">
                <div class="col-xs-4">
                    <label>Upload</label>
                </div>
                <div class="col-xs-7">
                    <input type="file" id="fileInput" name="file" multiple>
                </div>
            </div>
    
            <div class="col-xs-12" style="padding: 10px;">
                <div class="col-xs-4">
                    <input type="submit" value="Upload" id="ajaxUploadButton" class="btn">
                </div>
                <div class="col-xs-7">
    
                </div>
            </div>
    
        </fieldset>
    
    </form>
    

    When the application is run, any number of client instances can be opened. One of them can then be used to upload images, which are pushed to all the other SignalR clients.

    This solution works, but there are several areas that could be optimized for performance.

    Links

    https://github.com/aspnet/SignalR

    https://github.com/aspnet/SignalR#readme

    https://radu-matei.com/blog/signalr-core/

    https://www.npmjs.com/package/@aspnet/signalr-client

    https://msgpack.org/

    https://stackoverflow.com/questions/40214772/file-upload-in-angular?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa

    https://stackoverflow.com/questions/39272970/angular-2-encode-image-to-base64?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa


    Andrew Lock: Version vs VersionSuffix vs PackageVersion: What do they all mean?

    Version vs VersionSuffix vs PackageVersion: What do they all mean?

    In this post I look at the various different version numbers you can set for a .NET Core project, such as Version, VersionSuffix, and PackageVersion. For each one I'll describe the format it can take, provide some examples, and what it's for.

    This post is very heavily inspired by Nate McMaster's question (which he also answered) on Stack Overflow. I'm mostly just reproducing it here so I can more easily find it again later!

    Version numbers in .NET

    .NET loves version numbers - they're sprinkled around everywhere, so figuring out what version of a tool you have is sometimes easier said than done.

    Leaving aside the tooling versioning, .NET also contains a plethora of version numbers for you to add to your assemblies and NuGet packages. There are at least seven different version numbers you can set when you build your assemblies. In this post I'll describe what they're for, how you can set them, and how you can read/use them.

    The version numbers available to you break logically into two different groups. The first group (VersionPrefix, VersionSuffix, and Version) exists only as MSBuild properties. You can set them in your csproj file, or pass them as command line arguments when you build your app, but their values are only used to control other properties; as far as I can tell, they're not visible directly anywhere in the final build output.

    So what are they for then? They control the default values for the version numbers which are visible in the final build output.

    I'll explain each number in turn, then I'll explain how you can set the version numbers when you build your app.

    VersionPrefix

    • Format: major.minor.patch[.build]
    • Examples: 0.1.0, 1.2.3, 100.4.222, 1.0.0.3
    • Default: 1.0.0
    • Typically used to set the overall SemVer version number for your app/library

    You can use VersionPrefix to set the "base" version number for your library/app. It indirectly controls all of the other version numbers generated for your app (though you can override each of the other versions individually). Typically, you would use a SemVer 1.0 version number with three parts, but technically you can use between one and four numbers. If you don't explicitly set it, VersionPrefix defaults to 1.0.0.

    VersionSuffix

    • Format: Alphanumeric (+ hyphen) string: [0-9A-Za-z-]*
    • Examples: alpha, beta, rc-preview-2-final
    • Default: (blank)
    • Sets the pre-release label of the version number

    VersionSuffix is used to set the pre-release label of the version number, if there is one, such as alpha or beta. If you don't set VersionSuffix, then you won't have any pre-release labels. VersionSuffix is used to control the Version property, and will appear in PackageVersion and InformationalVersion.

    Version

    • Format: major.minor.patch[.build][-prerelease]
    • Examples: 0.1.0, 1.2.3.5, 99.0.3-rc-preview-2-final
    • Default: VersionPrefix-VersionSuffix (or just VersionPrefix if VersionSuffix is empty)
    • The most commonly set property in a project; used to generate the version numbers embedded in the assembly.

    The Version property is the value most commonly set when building .NET Core applications. It controls the default values of all the version numbers embedded in the build output, such as PackageVersion and AssemblyVersion, so it's often used as the single source of the app/library version.

    By default, Version is formed from the combination of VersionPrefix and VersionSuffix, or if VersionSuffix is blank, VersionPrefix only. For example,

    • If VersionPrefix = 0.1.0 and VersionSuffix = beta, then Version = 0.1.0-beta
    • If VersionPrefix = 1.2.3 and VersionSuffix is empty, then Version = 1.2.3

    Alternatively, you can explicitly overwrite the value of Version. If you do that, then the values of VersionPrefix and VersionSuffix are effectively unused.

    The format of Version, as you might expect, is a combination of the VersionPrefix and VersionSuffix formats. The first part is typically a three-part SemVer string, but it can contain up to four numbers. The second part, the pre-release label, is an alphanumeric-plus-hyphen string, as for VersionSuffix.

    AssemblyVersion

    • Format: major.minor.patch.build
    • Examples: 0.1.0.0, 1.2.3.4, 99.0.3.99
    • Default: Version without pre-release label
    • The main value embedded into the generated .dll. An important part of assembly identity.

    Every assembly you produce as part of your build process has a version number embedded in it, which forms an important part of the assembly's identity. It's stored in the assembly manifest and is used by the runtime to ensure correct versions are loaded etc.

    The AssemblyVersion is used (along with the name, public key token, and culture information) only if the assemblies are strong-name signed. If assemblies are not strong-name signed, only file names are used for loading. You can read more about assembly versioning in the docs.

    The value of AssemblyVersion defaults to the value of Version, but without the pre-release label, and expanded to 4 digits. For example:

    • If Version = 0.1.2, AssemblyVersion = 0.1.2.0
    • If Version = 4.3.2.1-beta, AssemblyVersion = 4.3.2.1
    • If Version = 0.2-alpha, AssemblyVersion = 0.2.0.0

    The AssemblyVersion is embedded in the output assembly as an attribute, System.Reflection.AssemblyVersionAttribute. You can read this value by inspecting the executing Assembly object:

    using System;  
    using System.Reflection;
    
    class Program  
    {
        static void Main(string[] args)
        {
            var assembly = Assembly.GetExecutingAssembly();
            var assemblyVersion = assembly.GetName().Version;
            Console.WriteLine($"AssemblyVersion {assemblyVersion}");
        }
    }
    

    FileVersion

    • Format: major.minor.patch.build
    • Examples: 0.1.0.0, 1.2.3.100
    • Default: AssemblyVersion
    • The file-system version number of the .dll file, which doesn't have to match the AssemblyVersion, but usually does.

    The file version is literally the version number exposed by the DLL to the file system. It's the number displayed in Windows Explorer, which often matches the AssemblyVersion, but it doesn't have to. The FileVersion number isn't part of the assembly identity as far as the .NET Framework or runtime are concerned.


    When strong naming was more heavily used, it was common to keep the same AssemblyVersion between different builds and increment FileVersion instead, to avoid apps having to update references to the library so often.

    The FileVersion is embedded in the System.Reflection.AssemblyFileVersionAttribute in the assembly. You can read this attribute from the assembly at runtime, or you can use the FileVersionInfo class by passing the full path of the assembly (Assembly.Location) to the FileVersionInfo.GetVersionInfo() method:

    using System;  
    using System.Diagnostics;  
    using System.Reflection;
    
    class Program  
    {
        static void Main(string[] args)
        {
            var assembly = Assembly.GetExecutingAssembly();
            var fileVersionInfo = FileVersionInfo.GetVersionInfo(assembly.Location);
            var fileVersion = fileVersionInfo.FileVersion;
            Console.WriteLine($"FileVersion {fileVersion}");
        }
    }
    

    InformationalVersion

    • Format: anything
    • Examples: 0.1.0.0, 1.2.3.100-beta, So many numbers!
    • Default: Version
    • Another information number embedded into the DLL, can contain any text.

    The InformationalVersion is a bit of an odd one out, in that it doesn't need to contain a "traditional" version number per se; it can contain any text you like, though by default it's set to Version. That makes it generally less useful for programmatic purposes, though the value is still displayed in the file properties view in Windows Explorer.


    The InformationalVersion is embedded into the assembly as a System.Reflection.AssemblyInformationalVersionAttribute, so you can read it at runtime using the following:

    using System;  
    using System.Reflection;
    
    class Program  
    {
        static void Main(string[] args)
        {
            var assembly = Assembly.GetExecutingAssembly();
            var informationVersion = assembly.GetCustomAttribute<AssemblyInformationalVersionAttribute>().InformationalVersion;
            Console.WriteLine($"InformationalVersion  {informationVersion}");
        }
    }
    

    PackageVersion

    • Format: major.minor.patch[.build][-prerelease]
    • Examples: 0.1.0, 1.2.3.5, 99.0.3-rc-preview-2-final
    • Default: Version
    • Used to generate the NuGet package version when building a package using dotnet pack

    PackageVersion is the only version number that isn't embedded in the output dll directly. Instead, it's used to control the version number of the NuGet package that's generated when you call dotnet pack.

    By default, PackageVersion takes the same value as Version, so it's typically a three value SemVer version number, with or without a pre-release label. As with all the other version numbers, it can be overridden at build time, so it can differ from all the other assembly version numbers.

    How to set the version number when you build your app/library

    That's a lot of numbers, and you can technically set every one to a different value! But if you're a bit overwhelmed, don't worry. It's likely that you'll only want to set one or two values: either VersionPrefix and VersionSuffix, or Version directly.

    You can set the value of any of these numbers in several ways. I'll walk through them below.

    Setting an MSBuild property in your csproj file

    With .NET Core, and the simplification of the .csproj project file format, adding properties to your project file is no longer an arduous task. You can set any of the version numbers I've described in this post by setting a property in your .csproj file.

    For example, the following .csproj file sets the Version number of a console app to 1.2.3-beta, and adds a custom InformationalVersion:

    <Project Sdk="Microsoft.NET.Sdk">
    
      <PropertyGroup>
        <OutputType>Exe</OutputType>
        <TargetFramework>netcoreapp2.0</TargetFramework>
        <Version>1.2.3-beta</Version>
        <InformationalVersion>This is a prerelease package</InformationalVersion>
      </PropertyGroup>
    
    </Project>  
    

    Overriding values when calling dotnet build

    As well as hard-coding the version numbers into your project file, you can also pass them as arguments when you build your app using dotnet build.

    If you just want to override the VersionSuffix, you can use the --version-suffix argument for dotnet build. For example:

    dotnet build --configuration Release --version-suffix preview2-final  
    

    If you want to override any other values, you'll need to use the MSBuild property format instead. For example, to set the Version number:

    dotnet build --configuration Release /p:Version=1.2.3-preview2-final  
    

    Similarly, if you're creating a NuGet package with dotnet pack, and you want to override the PackageVersion, you'll need to use MSBuild property overrides:

    dotnet pack --no-build /p:PackageVersion=9.9.9-beta  
    

    Using assembly attributes

    Before .NET Core, the standard way to set the AssemblyVersion, FileVersion, and InformationalVersion was through assembly attributes, for example:

    [assembly: AssemblyVersion("1.2.3.4")]
    [assembly: AssemblyFileVersion("6.6.6.6")]
    [assembly: AssemblyInformationalVersion("So many numbers!")]
    

    However, if you try to do that with a .NET Core project you'll be presented with errors!

    > Error CS0579: Duplicate 'System.Reflection.AssemblyFileVersionAttribute' attribute
    > Error CS0579: Duplicate 'System.Reflection.AssemblyInformationalVersionAttribute' attribute
    > Error CS0579: Duplicate 'System.Reflection.AssemblyVersionAttribute' attribute
    

    As the SDK sets these attributes automatically as part of the build, you'll get build time errors. Simply delete the assembly attributes, and use the MSBuild properties instead.

    Alternatively, as James Gregory points out on Twitter, you can still use the Assembly attributes in your code if you turn off the auto-generated assembly attributes. You can do this by setting the following property in your csproj file:

    <PropertyGroup>  
       <GenerateAssemblyInfo>false</GenerateAssemblyInfo>
    </PropertyGroup>  
    

    This could be useful if you already have tooling or a CI process to update the values in the files, but otherwise I'd encourage you to embrace the new approach to setting your project's version numbers.

    Summary

    In this post I described the difference between the various version numbers you can set for your apps and libraries in .NET Core. There's an overwhelming number of versions to choose from, but generally it's best to just set the Version and use it for all of the version numbers.


    Anuraj Parameswaran: Getting started with SignalR service in Azure

    This post is about how to work with SignalR service in Azure. In Build 2018, Microsoft introduced SignalR in Azure as a service.


    Anuraj Parameswaran: Static Code Analysis of .NET Core Projects with SonarCloud

    This post is about how to use the SonarCloud application for running static code analysis in .NET Core projects. Static analysis is a way of automatically analysing code without executing it. SonarCloud is the cloud offering of SonarQube. It is free for open source projects.


    Andrew Lock: Creating a generalised Docker image for building ASP.NET Core apps using ONBUILD

    Creating a generalised Docker image for building ASP.NET Core apps using ONBUILD

    This is a follow-up to my recent posts on building ASP.NET Core apps in Docker.

    In this post I'll show how to create a generalised Docker image that can be used to build multiple ASP.NET Core apps. If your app conforms to a standard format (e.g. projects in the src directory, test projects in a test directory) then you can use it as the base image of a Dockerfile to create very simple Docker images for building your own apps.

    As an example, if you use the Docker image described in this post (andrewlock/aspnetcore-build:2.0.7-2.1.105), you can build your ASP.NET Core application using a Dockerfile like the following:

    # Build image
    FROM andrewlock/aspnetcore-build:2.0.7-2.1.105 as builder
    
    # Publish
    RUN dotnet publish "./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj" -c Release -o "../../dist" --no-restore
    
    #App image
    FROM microsoft/aspnetcore:2.0.7  
    WORKDIR /app  
    ENV ASPNETCORE_ENVIRONMENT Local  
    ENTRYPOINT ["dotnet", "AspNetCoreInDocker.Web.dll"]  
    COPY --from=builder /sln/dist .  
    

    This multi-stage Dockerfile can build a complete app - the builder stage only needs two commands: a FROM statement and a single RUN statement to publish the app. The runtime image build itself is the same as it would be without the generalised build image. If you wish to use the builder image yourself, you can use the andrewlock/aspnetcore-build repository, available on Docker Hub.

    In this post I'll describe the motivation for creating the generalised image, how to use Docker's ONBUILD command, and how the generalised image itself works.

    The Docker build image to generalise

    When you build an ASP.NET Core application (whether "natively" or in Docker), you typically move through the following steps:

    • Restore the NuGet packages
    • Build the libraries, test projects, and app
    • Run the tests in the test projects
    • Publish the app

    In Docker, these steps are codified in a Dockerfile by the layers you add to your image. A basic, non-general, Dockerfile to build your app could look something like the following:

    Note, this doesn't include the optimisation described in my earlier post or the follow up:

    FROM microsoft/aspnetcore-build:2.0.7-2.1.105 AS builder  
    WORKDIR /sln  
    
    # Copy solution folders and NuGet config
    COPY ./*.sln ./NuGet.config  ./
    
    # Copy the main source project files
    COPY ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj  
    COPY ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj
    
    # Copy the test project files
    COPY test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj
    
    # Restore to cache the layers
    RUN dotnet restore
    
    # Copy all the source code and build
    COPY ./test ./test  
    COPY ./src ./src  
    RUN dotnet build -c Release --no-restore
    
    # Run dotnet test on the solution
    RUN dotnet test "./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj" -c Release --no-build --no-restore
    
    RUN dotnet publish "./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj" -c Release -o "../../dist" --no-restore
    
    #App image
    FROM microsoft/aspnetcore:2.0.7  
    WORKDIR /app  
    ENV ASPNETCORE_ENVIRONMENT Local  
    ENTRYPOINT ["dotnet", "AspNetCoreInDocker.Web.dll"]  
    COPY --from=builder /sln/dist .  
    

    This Dockerfile will build and test a specific ASP.NET Core app, but there are a lot of hard-coded paths in there. When you create a new app, you can copy and paste this Dockerfile, but you'll need to tweak all the commands to use the correct paths.

    By the time you get to your third copy-and-paste (and your n-th inevitable typo), you'll be wondering if there's a better, more general way to achieve the same result. That's where Docker's ONBUILD command comes in. We can use it to create a generalised "builder" image for building our apps, and remove a lot of the repetition in the process.

    The ONBUILD Docker command

    In the Dockerfile shown above, the COPY and RUN commands are all executed in the context of your app. For normal builds, that's fine - the files that you want to copy are in the current directory. You're defining the commands to be run when you call docker build ..

    But we're trying to build a generalised "builder" image that we can use as the base for building other ASP.NET Core apps. Instead of defining the commands we want to execute when building our "builder" file, the commands should be run when an image that uses our "builder" as a base is built.

    The Docker documentation describes it as a "trigger" - you're defining a command to be triggered when the downstream build runs. I think of ONBUILD as effectively automating copy-and-paste; the ONBUILD command is copy-and-pasted into the downstream build.

    For example, consider this simple builder Dockerfile which uses ONBUILD:

    FROM microsoft/aspnetcore-build:2.0.7-2.1.105 AS builder  
    WORKDIR /sln
    
    ONBUILD COPY ./test ./test  
    ONBUILD COPY ./src ./src
    
    ONBUILD RUN dotnet build -c Release  
    

    This simple Dockerfile doesn't have any optimisations, but it uses ONBUILD to register triggers for downstream builds. Imagine you build this image using docker build . --tag andrewlock/testbuild. That creates a builder image called andrewlock/testbuild.

    The ONBUILD commands don't actually run when you build the "builder" image, they only run when you build the downstream image.

    You can then use this image as a basic "builder" image for your ASP.NET Core apps. For example, you could use the following Dockerfile to build your ASP.NET Core app:

    FROM andrewlock/testbuild
    
    ENTRYPOINT ["dotnet", "./src/MyApp/MyApp.dll"]  
    

    Note, for simplicity this example doesn't publish the app, or use multi-stage builds to optimise the runtime container size. Be sure to use those optimisations in production.

    That's a very small Dockerfile for building and running a whole app! The use of ONBUILD means that our downstream Dockerfile is equivalent to:

    FROM microsoft/aspnetcore-build:2.0.7-2.1.105 AS builder  
    WORKDIR /sln
    
    COPY ./test ./test  
    COPY ./src ./src
    
    RUN dotnet build -c Release
    
    ENTRYPOINT ["dotnet", "./src/MyApp/MyApp.dll"]  
    

    When you build this Dockerfile, the ONBUILD commands will be triggered in the current directory, and the app will be built. You only had to include the "builder" base image, and you got all that for free.

    That's the goal I want to achieve with a generalised builder image. You should be able to include the base image, and it'll handle all your app building for you. In the next section, I'll show the solution I came up with, and walk through the layers it contains.

    The generalised Docker builder image

    The image I've come up with is very close to the example shown at the start of this post. It uses the dotnet restore optimisation I described in my previous post, along with a workaround to allow running all the test projects in a solution:

    # Build image
    FROM microsoft/aspnetcore-build:2.0.7-2.1.105 AS builder  
    WORKDIR /sln
    
    ONBUILD COPY ./*.sln ./NuGet.config  ./
    
    # Copy the main source project files
    ONBUILD COPY src/*/*.csproj ./  
    ONBUILD RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done
    
    # Copy the test project files
    ONBUILD COPY test/*/*.csproj ./  
    ONBUILD RUN for file in $(ls *.csproj); do mkdir -p test/${file%.*}/ && mv $file test/${file%.*}/; done 
    
    ONBUILD RUN dotnet restore
    
    ONBUILD COPY ./test ./test  
    ONBUILD COPY ./src ./src  
    ONBUILD RUN dotnet build -c Release --no-restore
    
    ONBUILD RUN find ./test -name '*.csproj' -print0 | xargs -L1 -0 dotnet test -c Release --no-build --no-restore  
    

    If you've read my previous posts, then much of this should look familiar (with extra ONBUILD prefixes), but I'll walk through each layer below.

    FROM microsoft/aspnetcore-build:2.0.7-2.1.105 AS builder  
    WORKDIR /sln  
    

    This defines the base image and working directory for our builder, and hence for the downstream apps. I've used the microsoft/aspnetcore-build image, as we're going to build ASP.NET Core apps.

    Note, the microsoft/aspnetcore-build image is being retired in .NET Core 2.1 - you will need to switch to the microsoft/dotnet image instead.

    The next line shows our first use of ONBUILD:

    ONBUILD COPY ./*.sln ./NuGet.config ./*.props ./*.targets  ./  
    

    This will copy the .sln file, NuGet.config, and any .props or .targets files in the root folder of the downstream build.

    # Copy the main source project files
    ONBUILD COPY src/*/*.csproj ./  
    ONBUILD RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done
    
    # Copy the test project files
    ONBUILD COPY test/*/*.csproj ./  
    ONBUILD RUN for file in $(ls *.csproj); do mkdir -p test/${file%.*}/ && mv $file test/${file%.*}/; done  
    

    The Dockerfile uses the optimisation described in my previous post to copy the .csproj files from the src and test directories. As we're creating a generalised builder, we have to use an approach like this in which we don't explicitly specify the filenames.

    ONBUILD RUN dotnet restore
    
    ONBUILD COPY ./test ./test  
    ONBUILD COPY ./src ./src  
    ONBUILD RUN dotnet build -c Release --no-restore  
    

    The next section is the meat of the Dockerfile - we restore the NuGet packages, copy the source code across, and then build the app (using the release configuration).

    ONBUILD RUN find ./test -name '*.csproj' -print0 | xargs -L1 -0 dotnet test -c Release --no-build --no-restore  
    

    Which brings us to the final statement in the Dockerfile, in which we run all the test projects in the test directory. Unfortunately, due to limitations with dotnet test, this line is a bit of a hack.

    Ideally, we'd be able to call dotnet test on the solution file, and it would test all the projects that are test projects. However, this won't give you the result you want - it will try to test non-test projects, which will give you errors. There are several open issues discussing this problem, along with some workarounds, but most of them require changes to the app itself, or the addition of extra files. I decided to use a simple scripting approach based on this comment instead.

    Using find with xargs is a common approach in Linux to execute a command against a number of different files.

    The find command lists all the .csproj files in the test sub-directory, i.e. our test project files. The -print0 argument means that each filename is suffixed with a null character.

    The xargs command takes each filename provided by the find command and runs dotnet test -c Release --no-build --no-restore against it. The additional -0 argument indicates that we're using a null character delimiter, and the -L1 argument indicates we should only use a single filename with each dotnet test command.

    This approach isn't especially elegant, but it does the job, and it means we can avoid having to explicitly specify the paths to the test project.

    That's as much as we can do in the builder image - the publishing step is very specific to each app, so it's not feasible to include that in the builder. Instead, you have to specify that step in your own downstream Dockerfile, as shown in the next section.

    Using the generalised build image

    You can use the generalised Docker image to create much simpler Dockerfiles for your downstream apps. You can use andrewlock/aspnetcore-build as your base image, then all you need to do is publish your app and copy it to the runtime image. The following shows an example of what this might look like, for a simple ASP.NET Core app.

    # Build image
    FROM andrewlock/aspnetcore-build:2.0.7-2.1.105 as builder
    
    # Publish
    RUN dotnet publish "./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj" -c Release -o "../../dist" --no-restore
    
    #App image
    FROM microsoft/aspnetcore:2.0.7  
    WORKDIR /app  
    ENV ASPNETCORE_ENVIRONMENT Local  
    ENTRYPOINT ["dotnet", "AspNetCoreInDocker.Web.dll"]  
    COPY --from=builder /sln/dist .  
    

    This obviously only works if your apps use the same conventions that the builder image assumes, namely:

    • Your app and library projects are in a src subdirectory
    • Your test projects are in a test subdirectory
    • All project files have the same name as their containing folders
    • There is only a single solution file

    If these conventions don't match your requirements, then my builder image won't work for you. But now you know how to create your own builder images using the ONBUILD command.

    Summary

    In this post I showed how you could use the Docker ONBUILD command to create custom app-builder Docker images. I showed an example image that uses a number of optimisations to create a generalised ASP.NET Core builder image which will restore, build, and test your ASP.NET Core app, as long as it conforms to a number of standard conventions.


    Anuraj Parameswaran: Working with Microsoft Library Manager for ASP.NET Core

    This post is about working with the Microsoft Library Manager for ASP.NET Core. Recently the ASP.NET Team posted a blog about Bower deprecation, in which they mentioned a new tool called Library Manager. Library Manager ("LibMan" for short) is Visual Studio's experimental client-side library acquisition tool. It provides a lightweight, simple mechanism that helps users find and fetch library files from an external source (such as CDNJS) and place them in your project. LibMan is not a package management system. If you're using npm / yarn (or something else), you can continue to use it. LibMan was not developed as a replacement for these tools.


    Damien Bowden: OAuth using OIDC Authentication with PKCE for a .NET Core Console Native Application

    This article shows how to use a .NET Core console application securely with an API using the RFC 7636 specification. The app logs into IdentityServer4 using the OIDC authorization code flow with PKCE (Proof Key for Code Exchange). The app can then use the access token to consume data from a secure API. This would be useful for PowerShell script clients, or .NET Core console apps. The IdentityModel.OidcClient samples provide a whole range of native client examples, and this code was built using the .NET Core native code example.

    Code: https://github.com/damienbod/AspNetCoreWindowsAuth

    History

    2018-05-15: Updated the title because it was confusing; "OAuth Authentication" was replaced with "OAuth using OIDC Authentication".

    Native App PKCE Authorization Code Flow

    The RFC 7636 specification provides a safe way in which native applications can get access tokens to use with secure applications. Native applications have similar problems to web applications: single sign-on is sometimes required, the native app should not handle passwords, the server requires a way of validating the identity, the client app requires a way of validating the token, and so on.

    RFC 7636 provides one of the best ways of doing this, and because it is an RFC standard, tested libraries which implement it can be used. There is no need to re-invent the wheel.

    A diagram of the protocol flow can be found in RFC 7636, Section 1.1 (Protocol Flow).

    The Proof Key for Code Exchange by OAuth Public Clients was designed so that the authorization code cannot be intercepted in the Authorization Code Flow and used to get an access token. This helps, for example, when the code is leaked to shared logs on a mobile device and a malicious application tries to use it to get an access token.

    The extra protection is added to this flow by using a code_verifier, code_challenge and a code_challenge_method. The code_challenge and the code_challenge_method are sent to the server with the authorization request. The code_challenge is the derived version of the code_verifier. When requesting the access token, the code_verifier is sent to the server, and this is then validated on the OIDC server using the values sent in the original authorization request.
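
    As a concrete illustration of that derivation (not code from this post; IdentityModel.OidcClient handles this internally), the S256 code_challenge is the SHA256 hash of the ASCII code_verifier, base64url encoded. A minimal sketch, with my own class and method names, could look like this:

    using System;
    using System.Security.Cryptography;
    using System.Text;
    
    public static class PkceSketch
    {
        // RFC 7636 S256: code_challenge = BASE64URL( SHA256( ASCII(code_verifier) ) )
        public static string ToCodeChallenge(string codeVerifier)
        {
            using (var sha256 = SHA256.Create())
            {
                var hash = sha256.ComputeHash(Encoding.ASCII.GetBytes(codeVerifier));
    
                // Base64url encoding: strip '=' padding, replace '+' and '/' with '-' and '_'
                return Convert.ToBase64String(hash)
                    .TrimEnd('=')
                    .Replace('+', '-')
                    .Replace('/', '_');
            }
        }
    }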

    STS Server Configuration

    On IdentityServer4, the Proof Key for Code Exchange by OAuth can be configured as follows:

    new Client
    {
    	ClientId = "native.code",
    	ClientName = "Native Client (Code with PKCE)",
    
    	RedirectUris = { "http://127.0.0.1:45656" },
    	PostLogoutRedirectUris = { "http://127.0.0.1:45656" },
    
    	RequireClientSecret = false,
    
    	AllowedGrantTypes = GrantTypes.Code,
    	RequirePkce = true,
    	AllowedScopes = { "openid", "profile", "email", "native_api" },
    
    	AllowOfflineAccess = true,
    	RefreshTokenUsage = TokenUsage.ReUse
     }
    

    RequirePkce is set to true, and no client secret is used, unlike the Authorization Code flow for web applications, as a secret makes no sense on public native clients. Depending on the native device, the RedirectUris can be configured as required.

    Native client using .NET Core

    Implementing the client for a .NET Core application is really easy thanks to the IdentityModel.OidcClient NuGet package and the examples provided on GitHub. The samples repo provides reference examples for lots of different native client types, which is really impressive.

    This example was built using the following project: .NET Core Native Code

    IdentityModel.OidcClient takes care of the PKCE handling and the flow.

    The login can be implemented as follows:

    private static async Task Login()
    {
    	var browser = new SystemBrowser(45656);
    	string redirectUri = "http://127.0.0.1:45656";
    
    	var options = new OidcClientOptions
    	{
    		Authority = _authority,
    		ClientId = "native.code",
    		RedirectUri = redirectUri,
    		Scope = "openid profile native_api",
    		FilterClaims = false,
    		Browser = browser,
    		Flow = OidcClientOptions.AuthenticationFlow.AuthorizationCode,
    		ResponseMode = OidcClientOptions.AuthorizeResponseMode.Redirect,
    		LoadProfile = true
    	};
    
    	_oidcClient = new OidcClient(options); 
    	 var result = await _oidcClient.LoginAsync(new LoginRequest());
    	 ShowResult(result);
    }
    

    The SystemBrowser class uses the implementation from the IdentityModel.OidcClient samples. The results can be displayed as follows:

    private static void ShowResult(LoginResult result)
    {
    	if (result.IsError)
    	{
    		Console.WriteLine("\n\nError:\n{0}", result.Error);
    		return;
    	}
    
    	Console.WriteLine("\n\nClaims:");
    	foreach (var claim in result.User.Claims)
    	{
    		Console.WriteLine("{0}: {1}", claim.Type, claim.Value);
    	}
    
    	Console.WriteLine($"\nidentity token: {result.IdentityToken}");
    	Console.WriteLine($"access token:   {result.AccessToken}");
    	Console.WriteLine($"refresh token:  {result?.RefreshToken ?? "none"}");
    }
    

    The API can then be called using the access token.

    private static async Task CallApi(string currentAccessToken)
    {
    	_apiClient.SetBearerToken(currentAccessToken);
    	var response = await _apiClient.GetAsync("");
    
    	if (response.IsSuccessStatusCode)
    	{
    		var json = JArray.Parse(await response.Content.ReadAsStringAsync());
    		Console.WriteLine(json);
    	}
    	else
    	{
    		Console.WriteLine($"Error: {response.ReasonPhrase}");
    	}
    }
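
    The _apiClient used in CallApi is not shown in this excerpt; presumably it is just an HttpClient pointed at the protected API. A minimal sketch, assuming a hypothetical API address (SetBearerToken is the IdentityModel extension method used above), could be:

    using System;
    using System.Net.Http;
    
    internal static class ApiClientFactory
    {
        // Creates the HttpClient that the sample stores in its _apiClient field.
        // The base address is a placeholder and must match wherever the secure API is hosted.
        public static HttpClient Create()
        {
            return new HttpClient
            {
                BaseAddress = new Uri("https://localhost:44390/api/values/")
            };
        }
    }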
    

    IdentityModel.OidcClient can be used to implement almost any native client which needs to, or should, implement Proof Key for Code Exchange for authorization. There is no need to do the password handling in your native application.

    Links:

    http://openid.net/2015/05/26/enhancing-oauth-security-for-mobile-applications-with-pkse/

    https://tools.ietf.org/html/rfc7636

    https://github.com/IdentityModel/IdentityModel.OidcClient.Samples

    https://connect2id.com/blog/connect2id-server-3.9

    https://www.davidbritch.com/2017/08/using-pkce-with-identityserver-from_9.html

    https://developer.okta.com/authentication-guide/implementing-authentication/auth-code-pkce

    OAuth 2.0 and PKCE

    https://community.apigee.com/questions/47397/why-do-we-need-pkce-specification-rfc-7636-in-oaut.html

    https://oauth.net/articles/authentication/

    https://www.scottbrady91.com/OAuth/OAuth-is-Not-Authentication


    Andrew Lock: Optimising ASP.NET Core apps in Docker - avoiding manually copying csproj files (Part 2)

    Optimising ASP.NET Core apps in Docker - avoiding manually copying csproj files (Part 2)

    This is a follow-up to my recent posts on building ASP.NET Core apps in Docker.

    In this post I expand on a comment Aidan made on my last post:

    Something that we do instead of the pre-build tarball step is the following, which relies on the pattern of naming the csproj the same as the directory it lives in. This appears to match the structure of your project, so it should work for you too.

    I'll walk through the code he provides to show how it works, and how to use it to build a standard ASP.NET Core application with Docker. The technique in this post can be used instead of the tar-based approach from my previous post, as long as your solution conforms to some standard conventions.

    I'll start by providing some background to why it's important to optimise the order of your Dockerfile, the options I've already covered, and the solution provided by Aidan in his comment.

    Background - optimising your Dockerfile for dotnet restore

    When building ASP.NET Core apps using Docker, it's important to consider the way Docker caches layers to build your app. I discussed this process in a previous post on building ASP.NET Core apps using Cake in Docker, so if that's new to you, I suggest checking it out.

    A common way to take advantage of the build cache when building your ASP.NET Core app is to copy across only the .csproj, .sln and NuGet.config files for your app before doing dotnet restore, instead of copying the entire source code. The NuGet package restore can be one of the slowest parts of the build, and it only depends on these files. By copying them first, Docker can cache the result of the restore, so it doesn't need to run again if all you do is change a .cs file, for example.

    Due to the nature of Docker, there are many ways to achieve this, and I've discussed two of them previously, as summarised below.

    Option 1 - Manually copying the files across

    The easiest, and most obvious, way to copy all the .csproj files from the Docker context into the image is to do it manually using the Docker COPY command. For example:

    # Build image
    FROM microsoft/aspnetcore-build:2.0.6-2.1.101 AS builder  
    WORKDIR /sln
    
    COPY ./aspnetcore-in-docker.sln ./NuGet.config  ./  
    COPY ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj  ./src/AspNetCoreInDocker.Lib/AspNetCoreInDocker.Lib.csproj  
    COPY ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj  ./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj  
    COPY ./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj  ./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj
    
    RUN dotnet restore  
    

    Unfortunately, this has one major downside: You have to manually reference every .csproj (and .sln) file in the Dockerfile.

    Ideally, you'd be able to do something like the following, but the wildcard expansion doesn't work like you might expect:

    # Copy all csproj files (WARNING, this doesn't work!)
    COPY ./**/*.csproj ./  
    

    That led to my alternative solution: creating a tar-ball of the .csproj files and expanding them inside the image.

    Option 2 - Creating a tar-ball of the project files

    In order to create a general solution, I settled on an approach that required scripting steps outside of the Dockerfile. For details, see my previous post, but in summary:

    1. Create a tarball of the project files using

    find . -name "*.csproj" -print0 \  
        | tar -cvf projectfiles.tar --null -T -
    

    2. Expand the tarball in the Dockerfile

    FROM microsoft/aspnetcore-build:2.0.6-2.1.101 AS builder  
    WORKDIR /sln
    
    COPY ./aspnetcore-in-docker.sln ./NuGet.config  ./  
    COPY projectfiles.tar .  
    RUN tar -xvf projectfiles.tar
    
    RUN dotnet restore  
    

    3. Delete the tarball once build is complete

    rm projectfiles.tar  
    

    This process works, but it's messy. It involves running bash scripts both before and after docker build, which means you can't do things like build automatically using DockerHub. This brings us to the hybrid alternative, proposed by Aidan.

    The new-improved solution

    The alternative solution actually uses the wildcard technique I previously dismissed, but with some assumptions about your project structure, a two-stage approach, and a bit of clever bash-work to work around the wildcard limitations.

    I'll start by presenting the complete solution, and I'll walk through and explain the steps later.

    FROM microsoft/aspnetcore-build:2.0.6-2.1.101 AS builder  
    WORKDIR /sln
    
    COPY ./*.sln ./NuGet.config  ./
    
    # Copy the main source project files
    COPY src/*/*.csproj ./  
    RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done
    
    # Copy the test project files
    COPY test/*/*.csproj ./  
    RUN for file in $(ls *.csproj); do mkdir -p test/${file%.*}/ && mv $file test/${file%.*}/; done
    
    RUN dotnet restore
    
    # Remainder of build process
    

    This solution is much cleaner than my previous tar-based effort, as it doesn't require any external scripting, just standard docker COPY and RUN commands. It gets around the wildcard issue by copying across csproj files in the src directory first, moving them to their correct location, and then copying across the test project files.

    This requires a project layout where your project files have the same name as their containing folders. For the Dockerfile in this post, it also requires your projects to all be located in either the src or test sub-directory.


    Step-by-step breakdown of the new solution

    Just to be thorough, I'll walk through each stage of the Dockerfile below.

    1. Set the base image

    The first steps of the Dockerfile are the same for all solutions: it sets the base image, and copies across the .sln and NuGet.config file.

    FROM microsoft/aspnetcore-build:2.0.6-2.1.101 AS builder  
    WORKDIR /sln
    
    COPY ./*.sln ./NuGet.config  ./  
    

    After this stage, your image will contain just two files: the .sln file and NuGet.config.


    2. Copy src .csproj files to root

    In the next step, we copy all the .csproj files from the src folder, and dump them in the root directory.

    COPY src/*/*.csproj ./  
    

    The wildcard expands to match any .csproj files that are one directory down, in the src folder. After it runs, all of the src project files sit in a flat list in the root of the image, alongside the .sln and NuGet.config files.


    3. Restore src folder hierarchy

    The next stage is where the magic happens. We take the flat list of csproj files, and move them back to their correct location, nested inside sub-folders of src.

    RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done  
    

    I'll break this command down, so we can see what it's doing:

    1. for file in $(ls *.csproj); do ...; done - List all the .csproj files in the root directory. Loop over them, and assign the file variable to the filename. In our case, the loop will run twice, once with AspNetCoreInDocker.Lib.csproj and once with AspNetCoreInDocker.Web.csproj.

    2. ${file%.*} - use bash's string manipulation library to remove the extension from the filename, giving AspNetCoreInDocker.Lib and AspNetCoreInDocker.Web.

    3. mkdir -p src/${file%.*}/ - Create the sub-folders based on the file names. The -p parameter ensures the src parent folder is created if it doesn't already exist.

    4. mv $file src/${file%.*} - Move the csproj file into the newly created sub-folder.

    After this stage executes, the .csproj files are back in their correct sub-folders under src.


    4. Copy test .csproj files to root

    Now that the src folder is successfully copied, we can work on the test folder. The first step is to copy its project files into the root directory again:

    COPY test/*/*.csproj ./  
    

    This dumps the test .csproj files into the root of the image, alongside the restored src hierarchy.


    5. Restore test folder hierarchy

    The final step is to restore the test folder as we did in step 3. We can use pretty much the same code as in step 3, but with src replaced by test:

    RUN for file in $(ls *.csproj); do mkdir -p test/${file%.*}/ && mv $file test/${file%.*}/; done  
    

    After this stage we have our complete skeleton project, consisting of just our sln, NuGet.config, and .csproj files, all in their correct place.


    That leaves us free to build and restore the project while taking advantage of Docker's layer-caching optimisations, without having to litter our Dockerfile with specific project names, or use outside scripting to create a tar-ball.

    Summary

    For performance purposes, it's important to take advantage of Docker's caching mechanisms when building your ASP.NET Core applications. Some of the biggest gains can be had by caching the restore phase of the build process.

    In this post I showed an improved way to achieve this without having to resort to external scripting using tar, or having to list every .csproj file in your Dockerfile. This solution was based on a comment by Aidan on my previous post, so a big thanks to him!


    Damien Bowden: ASP.NET Core Authorization for Windows, Local accounts

    This article shows how authorization could be implemented for an ASP.NET Core MVC application. The authorization logic is extracted into a separate project, which is required by some software certification processes. This could also be deployed as a separate service.

    Code: https://github.com/damienbod/AspNetCoreWindowsAuth

    Blogs in this series:

    Application Authorization Service

    The authorization service uses the claims returned for the identity of the MVC application. The claims are returned from the ASP.NET Core MVC client app which authenticates using the OpenID Connect Hybrid flow. The values are then used to create or define the authorization logic.

    The authorization service supports a single API method, IsAdmin. This method checks if the username is a defined admin, and that the person/client used a Windows account to log in.
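
    The IAppAuthorizationService interface itself is not included in this excerpt, but from the implementation that follows it presumably looks something like this minimal sketch (an assumption, inferred from the code below rather than taken from the post):

    using System;
    
    namespace AppAuthorizationService
    {
        // Assumed shape of the interface, inferred from the AppAuthorizationService implementation below.
        public interface IAppAuthorizationService
        {
            bool IsAdmin(string username, string providerClaimValue);
        }
    }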

    using System;
    
    namespace AppAuthorizationService
    {
        public class AppAuthorizationService : IAppAuthorizationService
        {
            public bool IsAdmin(string username, string providerClaimValue)
            {
                return RulesAdmin.IsAdmin(username, providerClaimValue);
            }
        }
    }
    
    

    The rules define the authorization process. This is just a simple static configuration class, but any database, configuration file, or authorization API could be used to check or define the rules.

    In this example, the administrators are defined in the class, and the Windows value is checked for the claim parameter.

    using System;
    using System.Collections.Generic;
    using System.Text;
    
    namespace AppAuthorizationService
    {
        public static class RulesAdmin
        {
    
            private static List<string> adminUsers = new List<string>();
    
            private static List<string> adminProviders = new List<string>();
    
            public static bool IsAdmin(string username, string providerClaimValue)
            {
                if(adminUsers.Count == 0)
                {
                    AddAllowedUsers();
                    AddAllowedProviders();
                }
    
                if (adminUsers.Contains(username) && adminProviders.Contains(providerClaimValue))
                {
                    return true;
                }
    
                return false;
            }
    
            private static void AddAllowedUsers()
            {
                adminUsers.Add("SWISSANGULAR\\Damien");
            }
    
            private static void AddAllowedProviders()
            {
                adminProviders.Add("Windows");
            }
        }
    }
    
    

    ASP.NET Core Policies

    The application authorization service also defines the ASP.NET Core policies which can be used by the client application. An IAuthorizationRequirement is implemented.

    using Microsoft.AspNetCore.Authorization;
     
    namespace AppAuthorizationService
    {
        public class IsAdminRequirement : IAuthorizationRequirement{}
    }
    

    The IAuthorizationRequirement implementation is then used in the AuthorizationHandler implementation IsAdminHandler. This handler validates the claims using the IAppAuthorizationService service.

    using Microsoft.AspNetCore.Authorization;
    using System;
    using System.Linq;
    using System.Threading.Tasks;
    
    namespace AppAuthorizationService
    {
        public class IsAdminHandler : AuthorizationHandler<IsAdminRequirement>
        {
            private IAppAuthorizationService _appAuthorizationService;
    
            public IsAdminHandler(IAppAuthorizationService appAuthorizationService)
            {
                _appAuthorizationService = appAuthorizationService;
            }
    
            protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, IsAdminRequirement requirement)
            {
                if (context == null)
                    throw new ArgumentNullException(nameof(context));
                if (requirement == null)
                    throw new ArgumentNullException(nameof(requirement));
    
                var claimIdentityprovider = context.User.Claims.FirstOrDefault(t => t.Type == "http://schemas.microsoft.com/identity/claims/identityprovider");
    
                if (claimIdentityprovider != null && _appAuthorizationService.IsAdmin(context.User.Identity.Name, claimIdentityprovider.Value))
                {
                    context.Succeed(requirement);
                }
    
                return Task.CompletedTask;
            }
        }
    }
    

    As an example, a second policy is also defined, which checks that the http://schemas.microsoft.com/identity/claims/identityprovider claim has a Windows value.

    using Microsoft.AspNetCore.Authorization;
    
    namespace AppAuthorizationService
    {
        public static class MyPolicies
        {
            private static AuthorizationPolicy requireWindowsProviderPolicy;
    
            public static AuthorizationPolicy GetRequireWindowsProviderPolicy()
            {
                if (requireWindowsProviderPolicy != null) return requireWindowsProviderPolicy;
    
                requireWindowsProviderPolicy = new AuthorizationPolicyBuilder()
                      .RequireClaim("http://schemas.microsoft.com/identity/claims/identityprovider", "Windows")
                      .Build();
    
                return requireWindowsProviderPolicy;
            }
        }
    }
    
    

    Using the Authorization Service and Policies

    The authorization can then be used by adding the services to the Startup of the client application.

    services.AddSingleton<IAppAuthorizationService, AppAuthorizationService.AppAuthorizationService>();
    services.AddSingleton<IAuthorizationHandler, IsAdminHandler>();
    
    services.AddAuthorization(options =>
    {
    	options.AddPolicy("RequireWindowsProviderPolicy", MyPolicies.GetRequireWindowsProviderPolicy());
    	options.AddPolicy("IsAdminRequirementPolicy", policyIsAdminRequirement =>
    	{
    		policyIsAdminRequirement.Requirements.Add(new IsAdminRequirement());
    	});
    });
    

    The policies can then be used in a controller, for example to validate that the IsAdminRequirementPolicy is fulfilled.

    using Microsoft.AspNetCore.Authorization;
    using Microsoft.AspNetCore.Mvc;
    
    namespace MvcHybridClient.Controllers
    {
        [Authorize(Policy = "IsAdminRequirementPolicy")]
        public class AdminController : Controller
        {
            public IActionResult Index()
            {
                return View();
            }
        }
    }
    

    Or the IAppAuthorizationService can be used directly if you wish to mix authorization within a controller.

    private IAppAuthorizationService _appAuthorizationService;
    
    public HomeController(IAppAuthorizationService appAuthorizationService)
    {
    	_appAuthorizationService = appAuthorizationService;
    }
    
    public IActionResult Index()
    {
    	// Windows or local => claim http://schemas.microsoft.com/identity/claims/identityprovider
    	var claimIdentityprovider = 
    	  User.Claims.FirstOrDefault(t => 
    	    t.Type == "http://schemas.microsoft.com/identity/claims/identityprovider");
    
    	if (claimIdentityprovider != null && 
    	  _appAuthorizationService.IsAdmin(
    	     User.Identity.Name, 
    		 claimIdentityprovider.Value)
    	)
    	{
    		// yes, this is an admin
    		Console.WriteLine("This is an admin, we can do some specific admin logic!");
    	}
    
    	return View();
    }
    

    If an admin user logs in with a Windows account, the admin view can be accessed, while a local guest user only sees the home view.


    Notes:

    This is a good way of separating the authorization logic from the business application in your software. Some software certification processes require that the application's authorization and authentication are audited before each release, or for each new deployment if anything has changed.
    By separating the logic, you can deploy and update the business application without doing a security audit. The authorization logic could also be deployed as a separate service if required.

    Links:

    https://docs.microsoft.com/en-us/aspnet/core/security/authorization/views?view=aspnetcore-2.1&tabs=aspnetcore2x

    https://docs.microsoft.com/en-us/aspnet/core/security/authentication/?view=aspnetcore-2.1

    https://mva.microsoft.com/en-US/training-courses/introduction-to-identityserver-for-aspnet-core-17945

    https://stackoverflow.com/questions/34951713/aspnet5-windows-authentication-get-group-name-from-claims/34955119

    https://github.com/IdentityServer/IdentityServer4.Templates

    https://docs.microsoft.com/en-us/iis/configuration/system.webserver/security/authentication/windowsauthentication/


    Anuraj Parameswaran: How to reuse HTML snippets inside a Razor view in ASP.NET Core

    This post is a small tip about reusing HTML snippets inside a Razor view in ASP.NET Core. In earlier versions of ASP.NET MVC this could be achieved with the help of a helper - a reusable component that includes code and markup to perform a task that might be tedious or complex. There is no equivalent implementation available in ASP.NET Core MVC. In this post I explain how we can achieve similar functionality in ASP.NET Core. Unlike the ASP.NET MVC helper, this implementation can't be reused across multiple pages. It is very helpful if you want to do some complex logic in a view.


    Andrew Lock: Using an IActionFilter to read action method parameter values in ASP.NET Core MVC

    Using an IActionFilter to read action method parameter values in ASP.NET Core MVC

    In this post I show how you can use an IActionFilter in ASP.NET Core MVC to read the method parameters for an action method before it executes. I'll show two different approaches to solve the problem, depending on your requirements.

    In the first approach, you know that the parameter you're interested in (a string parameter called returnUrl for this post) is always passed as a top level argument to the action, e.g.

    public class AccountController  
    {
        public IActionResult Login(string returnUrl)
        {
            return View();
        }
    }
    

    In the second approach, you know that the returnUrl parameter will be in the request, but you don't know that it will be passed as a top-level parameter to a method. For example:

    public class AccountController  
    {
        public IActionResult Login(string returnUrl)
        {
            return View();
        }
    
        public IActionResult Login(LoginInputModel model)
        {
            var returnUrl = model.ReturnUrl;
            return View();
        }
    }
    

    The action filters I describe in this post can be used for lots of different scenarios. To give a concrete example, I'll describe the original use case that made me investigate the options. If you're just interested in the implementation, feel free to jump ahead.

    Background: why would you want to do this?

    I was recently working on an IdentityServer 4 application, in which we wanted to display a slightly different view depending on which tenant a user was logging in to. OpenID Connect allows you to pass additional information as part of an authentication request as acr_values in the querystring. One of the common acr_values is tenant - it's so common that IdentityServer provides specific methods for pulling the tenant from the request URL.

    When an unauthenticated user attempts to use a client application that relies on IdentityServer for authentication, the client app calls the Authorize endpoint, which is part of the IdentityServer middleware. As the user is not yet authenticated, they are redirected to the login page for the application, with the returnUrl parameter pointing back to the middleware authorize endpoint.


    After the user has logged in, they'll be redirected to the IdentityServer Authorize endpoint, which will return an access/id token back to the original client.

    In my scenario, I needed to determine the tenant that the original client provided in the request to the Authorize endpoint. That information is available in the returnUrl parameter passed to the login page. You can use the IdentityServer Interaction Service (IIdentityServerInteractionService) to decode the returnUrl parameter and extract the tenant with code similar to the following:

    public class AccountController  
    {
        private readonly IIdentityServerInteractionService _service;
        public AccountController(IIdentityServerInteractionService  service)
        {
            _service = service;
        }
    
        public async Task<IActionResult> Login(string returnUrl)
        {
            var context = await _service.GetAuthorizationContextAsync(returnUrl);
            ViewData["Tenant"] = context?.Tenant;
            return View();
        }
    }
    

    You could then use the ViewData in a Razor view to customise the display. For example, in the following _Layout.cshtml, the Tenant name is added to the page as a class on the <body> tag.

    @{
        var tenant = ViewData["Tenant"] as string;
        var tenantClass = "tenant-" + (string.IsNullOrEmpty(tenant) ? "unknown" : tenant);
    }
    <!DOCTYPE html>  
    <html>  
      <head></head>
      <body class="@tenantClass">
        @RenderBody()
      </body>
    </html>  
    

    This works fine, but unfortunately it means you need to duplicate the code to extract the tenant in every action method that has a returnUrl - for example the GET and POST versions of the login method, all the 2FA action methods, the external login methods, etc.

    var context = await _service.GetAuthorizationContextAsync(returnUrl);  
    ViewData["Tenant"] = context?.Tenant;  
    

    Whenever you have a lot of duplication in your action methods, it's worth thinking whether you can extract that work into a filter (or alternatively, push it down into a command handler using a mediator).

    Now that we have the background, let's look at creating an IActionFilter to handle this for us.

    Creating an IActionFilter that reads action method parameters

    One of the good things about using an IActionFilter (as opposed to some other MVC Filter) is that it executes after model binding, but before the action method has been executed. That gives you a ton of context to work with.

    The IActionFilter below reads an action method's parameters, looks for one called returnUrl and sets it as an item in ViewData. There's a bunch of assumptions in this code, so I'll walk through it below.

    public class SetViewDataFilter : IActionFilter  
    {
        public void OnActionExecuting(ActionExecutingContext context)
        {
            if (context.ActionArguments.TryGetValue("returnUrl", out object value))
            {
                // NOTE: this assumes all your controllers derive from Controller.
                // If they don't, you'll need to set the value in OnActionExecuted instead
                // or use an IAsyncActionFilter
                if (context.Controller is Controller controller)
                {
                    controller.ViewData["ReturnUrl"] = value.ToString();
                }
            }
        }
    
        public void OnActionExecuted(ActionExecutedContext context) { }
    }
    

    The ActionExecutingContext object contains details about the action method that's about to be executed, model binding details, the ModelState - just about anything you could want! In this filter, I'm calling ActionArguments and looking for a parameter named returnUrl. This is a case-insensitive lookup, so any method parameters called returnUrl, returnURL, or RETURNURL would all be a match. If the action method has a match, we extract the value (as an object) into the value variable.

    Note that we are getting the value after it's been model bound to the action method's parameter. We didn't need to inspect the querystring, form data, or route values; however the MVC middleware managed it, we get the value.

    We've extracted the value of the returnUrl parameter, but now we need to store it somewhere. ASP.NET Core doesn't have any base-class requirements for your MVC controllers, so unfortunately you can't easily get a reference to the ViewData collection. Having said that, if all your controllers derive from the Controller base class, then you could cast to the type and access ViewData as I have in this simple example. This may work for you, it depends on the conventions you follow, but if not, I show an alternative later.

    You can register your action filter as a global filter when you call AddMvc in Startup.ConfigureServices. Be sure to also register the filter as a service with the DI container:

    public void ConfigureServices(IServiceCollection services)  
    {
        services.AddTransient<SetViewDataFilter>();
        services.AddMvc(options =>
        {
            options.Filters.AddService<SetViewDataFilter>();
        });
    }
    

    In this example, I chose to not make the filter an attribute. If you want to use SetViewDataFilter to decorate specific action methods, you should derive from ActionFilterAttribute instead.
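
    For example, an attribute-based version might look something like the following. This is a hedged sketch under the same assumptions as above (controllers derive from Controller); the attribute name is made up for illustration.

    using Microsoft.AspNetCore.Mvc;
    using Microsoft.AspNetCore.Mvc.Filters;
    
    // Apply with [SetViewData] on a specific controller or action method
    public class SetViewDataAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext context)
        {
            if (context.ActionArguments.TryGetValue("returnUrl", out var value)
                && context.Controller is Controller controller)
            {
                controller.ViewData["ReturnUrl"] = value?.ToString();
            }
        }
    }
    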

    In this example, SetViewDataFilter implements the synchronous version of IActionFilter, so unfortunately it's not possible to use IdentityServer's interaction service to obtain the Tenant from the returnUrl (as it requires an async call). We can get round that by implementing IAsyncActionFilter instead.

    Converting to an asynchronous filter with IAsyncActionFilter

    If you need to make async calls in your action filters, you'll need to implement the asynchronous interface, IAsyncActionFilter. Conceptually, this combines the two action filter methods (OnActionExecuting() and OnActionExecuted()) into a single OnActionExecutionAsync().

    When your filter executes, you're provided the ActionExecutingContext as before, but also an ActionExecutionDelegate delegate, which represents the rest of the MVC filter pipeline. This lets you control exactly when the rest of the pipeline executes, as well as allowing you to make async calls.

    Lets rewrite the action filter, and extend it to actually lookup the tenant with IdentityServer:

    public class SetViewDataFilter : IAsyncActionFilter  
    {
        readonly IIdentityServerInteractionService _service;
        public SetViewDataFilter(IIdentityServerInteractionService service)
        {
            _service = service;
        }
    
        public async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next)
        {
            var tenant = await GetTenant(context);
    
            // Execute the rest of the MVC filter pipeline
            var resultContext = await next();
    
            if (resultContext.Result is ViewResult view)
            {
                view.ViewData["Tenant"] = tenant;
            }
        }
    
        async Task<string> GetTenant(ActionExecutingContext context)
        {
            if (context.ActionArguments.TryGetValue("returnURl", out object value)
                && value is string returnUrl)
            {
                var authContext = await _service.GetAuthorizationContextAsync(returnUrl);
                return authContext?.Tenant;
            }
    
            // no string parameter called returnUrl
            return null;
        }
    }
    

    I've moved the code to extract the returnUrl parameter from the action context into its own method, in which we also use the IIdentityServerInteractionService to check the returnUrl is valid, and to fetch the provided tenant (if any).

    I've also used a slightly different construct to pass the value in the ViewData. Instead of putting requirements on the base class of the controller, I'm checking that the result of the action method was a ViewResult, and setting the ViewData that way. This seems like a better option - if we're not returning a ViewResult then ViewData is a bit pointless anyway!

    This action filter is very close to what I used to meet my requirements, but it makes one glaring assumption: that action methods always have a string parameter called returnUrl. Unfortunately, that may not be the case, for example:

    public class AccountController  
    {
        public IActionResult Login(LoginInputModel model)
        {
            var returnUrl = model.ReturnUrl;
            return View();
        }
    }
    

    Even though the LoginInputModel has a ReturnUrl parameter that would happily bind to a returnUrl parameter in the querystring, our action filter will fail to retrieve it. That's because we're looking specifically at the action arguments for a parameter called returnUrl, but we only have model. We're going to need a different approach to satisfy both action methods.

    Using the ModelState to build an action filter

    It took me a little while to think of a solution to this issue. I toyed with the idea of introducing an interface IReturnUrl, and ensuring all the binding models implemented it, but that felt very messy to me, and didn't feel like it should be necessary. Alternatively, I could have looked for a parameter called model and used reflection to check for a ReturnUrl property. That didn't feel right either.

    I knew the model binder would treat string returnUrl and LoginInputModel.ReturnUrl the same way: they would both be bound correctly if I passed a querystring parameter of ?returnUrl=/the/value. I just needed a way of hooking into the model binding directly, instead of working with the final method parameters.

    The answer was to use context.ModelState. ModelState contains a list of all the values that MVC attempted to bind from the request. You typically use it at the top of an MVC action to check that model binding and validation were successful using ModelState.IsValid, but it's also perfect for my use case.

    Based on the async version of our attribute you saw previously, I can update the GetTenant method to retrieve values from the ModelState instead of the action arguments:

    async Task<string> GetTenantFromAuthContext(ActionExecutingContext context)  
    {
        if (context.ModelState.TryGetValue("returnUrl", out var modelState)
            && modelState.RawValue is string returnUrl
            && !string.IsNullOrEmpty(returnUrl))
        {
            var authContext = await _interaction.GetAuthorizationContextAsync(returnUrl);
            return authContext?.Tenant;
        }
    
        // returnUrl wasn't in the request
        return null;
    }
    

    And that's it! With this quick change, I can retrieve the tenant both for action methods that have a string returnUrl parameter, and those that have a model with a ReturnUrl property.

    Summary

    In this post I showed how you can create an action filter to read the values of an action method before it executes. I then showed how to create an asynchronous version of an action filter using IAsyncActionFilter, and how to access the ViewData after an action method has executed. Finally, I showed how you can use the ModelState collection to access all model-bound values, instead of only the top-level parameters passed to the action method.


    Damien Bowden: Supporting both Local and Windows Authentication in ASP.NET Core MVC using IdentityServer4

    This article shows how to set up an ASP.NET Core MVC application to support both users who log in with a local, solution-specific account and users who log in with Windows authentication. The identity created from the Windows authentication could then be allowed to do different tasks, for example administration, while a user from the local authentication could be used for guest accounts, etc. To do this, IdentityServer4 is used to handle the authentication. The ASP.NET Core MVC application uses the OpenID Connect Hybrid Flow.

    Code: https://github.com/damienbod/AspNetCoreWindowsAuth

    Posts in this series:

    Setting up the STS using IdentityServer4

    The STS is setup using the IdentityServer4 dotnet templates. Once installed, the is4aspid template was used to create the application from the command line.

    Windows authentication is activated in the launchSettings.json. To set up Windows authentication for the deployment, refer to the Microsoft Docs.

    {
      "iisSettings": {
        "windowsAuthentication": true,
        "anonymousAuthentication": true,
        "iisExpress": {
          "applicationUrl": "https://localhost:44364/",
          "sslPort": 44364
        }
      },
    

    The OpenID Connect Hybrid Flow was then configured for the client application.

    new Client
    {
    	ClientId = "hybridclient",
    	ClientName = "MVC Client",
    
    	AllowedGrantTypes = GrantTypes.HybridAndClientCredentials,
    	ClientSecrets = { new Secret("hybrid_flow_secret".Sha256()) },
    
    	RedirectUris = { "https://localhost:44381/signin-oidc" },
    	FrontChannelLogoutUri = "https://localhost:44381/signout-oidc",
    	PostLogoutRedirectUris = { "https://localhost:44381/signout-callback-oidc" },
    
    	AllowOfflineAccess = true,
    	AllowedScopes = { "openid", "profile", "offline_access",  "scope_used_for_hybrid_flow" }
    }
    

    ASP.NET Core MVC Hybrid Client

    The ASP.NET Core MVC application is configured to authenticate using the STS server, and to save the tokens in a cookie. The AddOpenIdConnect method configures the OIDC Hybrid client, which must match the settings in the IdentityServer4 application.

    The TokenValidationParameters MUST be used, to set the NameClaimType property, otherwise the User.Identity.Name property will be null. This value is returned in the ‘name’ claim, which is not the default.

    services.AddAuthentication(options =>
    {
    	options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    	options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    })
    .AddCookie()
    .AddOpenIdConnect(options =>
    {
    	options.SignInScheme = "Cookies";
    	options.Authority = stsServer;
    	options.RequireHttpsMetadata = true;
    	options.ClientId = "hybridclient";
    	options.ClientSecret = "hybrid_flow_secret";
    	options.ResponseType = "code id_token";
    	options.GetClaimsFromUserInfoEndpoint = true;
    	options.Scope.Add("scope_used_for_hybrid_flow");
    	options.Scope.Add("profile");
    	options.Scope.Add("offline_access");
    	options.SaveTokens = true;
    	// Set the correct name claim type
    	options.TokenValidationParameters = new TokenValidationParameters
    	{
    		NameClaimType = "name"
    	};
    });
    

    Then all controllers can be secured using the Authorize attribute. The anti-forgery cookie should also be used, because the application uses cookies to store the tokens.

    [Authorize]
    public class HomeController : Controller
    {
    
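
    As a minimal sketch of the anti-forgery point above (an assumption about how one might wire it up, not code from the post), validation can be enforced globally for all unsafe HTTP methods when registering MVC:

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc(options =>
        {
            // Validate the anti-forgery token on every POST/PUT/PATCH/DELETE request
            options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute());
        });
    }
    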

    Displaying the login type in the ASP.NET Core Client

    The application then displays the authentication type in the home view. To do this, a requireWindowsProviderPolicy policy is defined, which requires that the identityprovider claim has the value Windows. The policy is added using the AddAuthorization method options.

    var requireWindowsProviderPolicy = new AuthorizationPolicyBuilder()
     .RequireClaim("http://schemas.microsoft.com/identity/claims/identityprovider", "Windows")
     .Build();
    
    services.AddAuthorization(options =>
    {
    	options.AddPolicy(
    	  "RequireWindowsProviderPolicy", 
    	  requireWindowsProviderPolicy
    	);
    });
    

    The policy can then be used in the cshtml view.

    @using Microsoft.AspNetCore.Authorization
    @inject IAuthorizationService AuthorizationService
    @{
        ViewData["Title"] = "Home Page";
    }
    
    <br />
    
    @if ((await AuthorizationService.AuthorizeAsync(User, "RequireWindowsProviderPolicy")).Succeeded)
    {
        <p>Hi Admin, you logged in with an internal Windows account</p>
    }
    else
    {
        <p>Hi local user</p>
    
    }
    

    Both applications can then be started. The client application is redirected to the STS server and the user can login with either the Windows authentication, or a local account.

    The text in the client application is displayed depending on the Identity returned.

    Identity created for the Windows Authentication:

    Local Identity:

    Next Steps

    The application now works for Windows authentication or local account authentication. The authorization now needs to be set up, so that the different login types have different claims. The identities returned from the Windows authentication will have different claims from the identities returned from the local logon, which will be used for guest accounts.

    Links:

    https://docs.microsoft.com/en-us/aspnet/core/security/authorization/views?view=aspnetcore-2.1&tabs=aspnetcore2x

    https://docs.microsoft.com/en-us/aspnet/core/security/authentication/?view=aspnetcore-2.1

    https://mva.microsoft.com/en-US/training-courses/introduction-to-identityserver-for-aspnet-core-17945

    https://stackoverflow.com/questions/34951713/aspnet5-windows-authentication-get-group-name-from-claims/34955119

    https://github.com/IdentityServer/IdentityServer4.Templates

    https://docs.microsoft.com/en-us/iis/configuration/system.webserver/security/authentication/windowsauthentication/


    Andrew Lock: Implementing custom token providers for passwordless authentication in ASP.NET Core Identity


    This post was inspired by Scott Brady's recent post on implementing "passwordless authentication" using ASP.NET Core Identity. In this post I show how to implement his "optimisation" suggestions to reduce the lifetime of "magic link" tokens.

    I start by providing some background on the use case, but I strongly suggest reading Scott's post first if you haven't already, as mine builds heavily on his. I'll show:

    I'll start with the scenario: passwordless authentication.

    Passwordless authentication using ASP.NET Core Identity

    Scott's post describes how to recreate a login workflow similar to that of Slack's mobile app, or Medium: instead of providing a password, you enter your email and they send you a magic link, and clicking the link automatically logs you into the app.

    In his post, Scott shows how you can recreate the "magic link" login workflow using ASP.NET Core Identity. In this post, I want to address the very final section in his post, titled Optimisations: Existing Token Lifetime.

    Scott points out that the implementation he provided uses the default token provider, the DataProtectorTokenProvider, to generate tokens. This provider generates large, long-lived tokens, something like the following:

    CfDJ8GbuL4IlniBKrsiKWFEX/Ne7v/fPz9VKnIryTPWIpNVsWE5hgu6NSnpKZiHTGZsScBYCBDKx/  
    oswum28dUis3rVwQsuJd4qvQweyvg6vxTImtXSSBWC45sP1cQthzXodrIza8MVrgnJSVzFYOJvw/V  
    ZBKQl80hsUpgZG0kqpfGeeYSoCQIVhm4LdDeVA7vJ+Fn7rci3hZsdfeZydUExnX88xIOJ0KYW6UW+  
    mZiaAG+Vd4lR+Dwhfm/mv4cZZEJSoEw==  
    

    By default, these tokens last for 24 hours. For a passwordless authentication workflow, that's quite a lot longer than we'd like. Medium uses a 15 minute expiry for example.

    Scott describes several options you could use to solve this:

    • Change the default lifetime for all tokens that use the default token provider
    • Use a different token provider, for example one of the TOTP-based providers
    • Create a custom data-protection base token provider with a different token lifetime

    All three of these approaches work, so I'll discuss each of them in turn.

    Changing the default token lifetime

    When you generate a token in ASP.NET Core Identity, by default you will use the DataProtectorTokenProvider. We'll take a closer look at this class shortly, but for now it's sufficient to know it's used by workflows such as password reset (when you click the "forgot your password?" link) and for email confirmation.

    The DataProtectorTokenProvider depends on a DataProtectionTokenProviderOptions object which has a TokenLifespan property:

    public class DataProtectionTokenProviderOptions  
    {
        public string Name { get; set; } = "DataProtectorTokenProvider";
        public TimeSpan TokenLifespan { get; set; } = TimeSpan.FromDays(1);
    }
    

    This property defines how long tokens generated by the provider are valid for. You can change this value using the standard ASP.NET Core Options framework inside your Startup.ConfigureServices method:

    public class Startup  
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.Configure<DataProtectionTokenProviderOptions>(
                x => x.TokenLifespan = TimeSpan.FromMinutes(15));
    
            // other services configuration
        }
        public void Configure() { /* pipeline config */ }
    }
    

    In this example, I've configured the token lifespan to be 15 minutes using a lambda, but you could also configure it by binding to IConfiguration etc.

    The downside to this approach is that you've now reduced the token lifetime for all workflows. 15 minutes might be fine for password reset and passwordless login, but it's potentially too short for email confirmation, so you might run into issues with lots of rejected tokens if you choose to go this route.

    Using a different provider

    As well as the default DataProtectorTokenProvider, ASP.NET Core Identity uses a variety of TOTP-based providers for generating short multi-factor authentication codes. For example, it includes providers for sending codes via email or via SMS. These providers both use the base TotpSecurityStampBasedTokenProvider to generate their tokens. TOTP codes are typically very short-lived, so seem like they would be a good fit for the passwordless login scenario.

    Given we're emailing the user a short-lived token for signing in, the EmailTokenProvider might seem like a good choice for our passwordless login. But the EmailTokenProvider is designed for providing 2FA tokens, and you probably shouldn't reuse providers for multiple purposes. Instead, you can create your own custom TOTP provider based on the built-in types, and use that to generate tokens.

    Creating a custom TOTP token provider for passwordless login

    Creating your own token provider sounds like a scary (and silly) thing to do, but thankfully all of the hard work is already available in the ASP.NET Core Identity libraries. All you need to do is derive from the abstract TotpSecurityStampBasedTokenProvider<> base class, and override a couple of simple methods:

    public class PasswordlessLoginTotpTokenProvider<TUser> : TotpSecurityStampBasedTokenProvider<TUser>  
        where TUser : class
    {
        public override Task<bool> CanGenerateTwoFactorTokenAsync(UserManager<TUser> manager, TUser user)
        {
            return Task.FromResult(false);
        }
    
        public override async Task<string> GetUserModifierAsync(string purpose, UserManager<TUser> manager, TUser user)
        {
            var email = await manager.GetEmailAsync(user);
            return "PasswordlessLogin:" + purpose + ":" + email;
        }
    }
    

    I've set CanGenerateTwoFactorTokenAsync() to always return false, so that the ASP.NET Core Identity system doesn't try to use the PasswordlessLoginTotpTokenProvider to generate 2FA codes. Unlike the SMS or Authenticator providers, we only want to use this provider for generating tokens as part of our passwordless login workflow.

    The GetUserModifierAsync() method should return a string consisting of

    ... a constant, provider and user unique modifier used for entropy in generated tokens from user information.

    I've used the user's email as the modifier in this case, but you could also use their ID for example.
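
    For instance, a variation using the user's ID rather than the email might look like this (a hedged sketch, not code from the post):

    public override async Task<string> GetUserModifierAsync(string purpose, UserManager<TUser> manager, TUser user)
    {
        // Use the stable user ID instead of the email address as the entropy modifier
        var userId = await manager.GetUserIdAsync(user);
        return "PasswordlessLogin:" + purpose + ":" + userId;
    }
    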

    You still need to register the provider with ASP.NET Core Identity. In traditional ASP.NET Core fashion, we can create an extension method to do this (mirroring the approach taken in the framework libraries):

    public static class CustomIdentityBuilderExtensions  
    {
        public static IdentityBuilder AddPasswordlessLoginTotpTokenProvider(this IdentityBuilder builder)
        {
            var userType = builder.UserType;
            var totpProvider = typeof(PasswordlessLoginTotpTokenProvider<>).MakeGenericType(userType);
            return builder.AddTokenProvider("PasswordlessLoginTotpProvider", totpProvider);
        }
    }
    

    and then we can add our provider as part of the Identity setup in Startup:

    public class Startup  
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddIdentity<IdentityUser, IdentityRole>()
                .AddEntityFrameworkStores<IdentityDbContext>() 
                .AddDefaultTokenProviders()
                .AddPasswordlessLoginTotpTokenProvider(); // Add the custom token provider
        }
    }
    

    To use the token provider in your workflow, you need to provide the key "PasswordlessLoginTotpProvider" (that we used when registering the provider) to the UserManager.GenerateUserTokenAsync() call.

    var token = await userManager.GenerateUserTokenAsync(  
                    user, "PasswordlessLoginTotpProvider", "passwordless-auth");
    

    If you compare that line to Scott's post, you'll see that we're passing "PasswordlessLoginTotpProvider" as the provider name instead of "Default".

    Similarly, you'll need to pass the new provider key in the call to VerifyUserTokenAsync:

    var isValid = await userManager.VerifyUserTokenAsync(  
                      user, "PasswordlessLoginTotpProvider", "passwordless-auth", token);
    

    If you're following along with Scott's post, you will now be using tokens with a much shorter lifetime than the 1 day default!

    Creating a data-protection based token provider with a different token lifetime

    TOTP tokens are good for tokens with very short lifetimes (nominally 30 seconds), but if you want your link to be valid for 15 minutes, then you'll need to use a different provider. The default DataProtectorTokenProvider uses the ASP.NET Core Data Protection system to generate tokens, so they can be much more long lived.

    If you want to use the DataProtectorTokenProvider for your own tokens, and you don't want to change the default token lifetime for all other uses (email confirmation etc), you'll need to create a custom token provider again, this time based on DataProtectorTokenProvider.

    Given that all you're trying to do here is change the passwordless login token lifetime, your implementation can be very simple. First, create a custom Options object, that derives from DataProtectionTokenProviderOptions, and overrides the default values:

    public class PasswordlessLoginTokenProviderOptions : DataProtectionTokenProviderOptions  
    {
        public PasswordlessLoginTokenProviderOptions()
        {
            // update the defaults
            Name = "PasswordlessLoginTokenProvider";
            TokenLifespan = TimeSpan.FromMinutes(15);
        }
    }
    

    Next, create a custom token provider, that derives from DataProtectorTokenProvider, and takes your new Options object as a parameter:

    public class PasswordlessLoginTokenProvider<TUser> : DataProtectorTokenProvider<TUser>  
    where TUser: class  
    {
        public PasswordlessLoginTokenProvider(
            IDataProtectionProvider dataProtectionProvider,
            IOptions<PasswordlessLoginTokenProviderOptions> options) 
            : base(dataProtectionProvider, options)
        {
        }
    }
    

    As you can see, this class is very simple! Its token generating code is completely encapsulated in the base DataProtectorTokenProvider<>; all you're doing is ensuring the PasswordlessLoginTokenProviderOptions token lifetime is used instead of the default.

    You can again create an extension method to make it easier to register the provider with ASP.NET Core Identity:

    public static class CustomIdentityBuilderExtensions  
    {
        public static IdentityBuilder AddPasswordlessLoginTokenProvider(this IdentityBuilder builder)
        {
            var userType = builder.UserType;
            var provider = typeof(PasswordlessLoginTokenProvider<>).MakeGenericType(userType);
            return builder.AddTokenProvider("PasswordlessLoginProvider", provider);
        }
    }
    

    and add it to the IdentityBuilder instance:

    public class Startup  
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddIdentity<IdentityUser, IdentityRole>()
                .AddEntityFrameworkStores<IdentityDbContext>() 
                .AddDefaultTokenProviders()
                .AddPasswordlessLoginTokenProvider(); // Add the token provider
        }
    }
    

    Again, be sure you update the GenerateUserTokenAsync and VerifyUserTokenAsync calls in your authentication workflow to use the correct provider name ("PasswordlessLoginProvider" in this case). This will give you almost exactly the same tokens as in Scott's original example, but with the TokenLifespan reduced to 15 minutes.
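
    Concretely, based on the earlier snippets, those two calls would now look something like this (same purpose string as before, just the new provider name):

    var token = await userManager.GenerateUserTokenAsync(
                    user, "PasswordlessLoginProvider", "passwordless-auth");
    
    var isValid = await userManager.VerifyUserTokenAsync(
                      user, "PasswordlessLoginProvider", "passwordless-auth", token);
    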

    Summary

    You can implement passwordless authentication in ASP.NET Core Identity using the approach described in Scott Brady's post, but this will result in tokens and magic-links that are valid for a long time period: 1 day by default. In this post I showed three different ways you can reduce the token lifetime: you can change the default lifetime for all tokens; use very short-lived tokens by creating a TOTP provider; or use the ASP.NET Core Data Protection system to create medium-length lifetime tokens.


    Anuraj Parameswaran: Getting started with Blazor

    This post is about how to get started with Blazor. Blazor is an experimental .NET web framework using C#/Razor and HTML that runs in the browser with WebAssembly. Blazor enables full stack web development with the stability, consistency, and productivity of .NET. While this release is alpha quality and should not be used in production, the code for this release was written from the ground up with an eye towards building a production quality web UI framework.


    Anuraj Parameswaran: Dockerize an ASP.NET MVC 5 Angular application with Docker for Windows

    A few days back I wrote a post about working with Angular 4 in ASP.NET MVC. I received multiple queries on deployment aspects - how to set up the development environment, how to deploy it in IIS, or in Azure, etc. In this post I am explaining how to deploy an ASP.NET MVC - Angular application to a Docker environment.


    Andrew Lock: Creating a .NET Core global CLI tool for squashing images with the TinyPNG API


    In this post I describe a .NET Core CLI global tool I created that can be used to compress images using the TinyPNG developer API. I'll give some background on .NET Core CLI tools, describe the changes to tooling in .NET Core 2.1, and show some of the code required to build your own global tools. You can find the code for the tool in this post at https://github.com/andrewlock/dotnet-tinify.

    The code for my global tool was heavily based on the dotnet-serve tool by Nate McMaster. If you're interested in global tools, I strongly suggest reading his post on them, as it provides background, instructions, and an explanation of what's happening under the hood. He's also created a CLI template you can install to get started.

    .NET CLI tools prior to .NET Core 2.1

    The .NET CLI (which can be used for .NET Core and ASP.NET Core development) includes the concept of "tools" that you can install into your project. This includes things like the EF Core migration tool, the user-secrets tool, and the dotnet watch tool.

    Prior to .NET Core 2.1, you need to specifically install these tools in every project where you want to use them. Unfortunately, there's no tooling for doing this either in the CLI or in Visual Studio. Instead, you have to manually edit your .csproj file and add a DotNetCliToolReference:

    <ItemGroup>  
        <DotNetCliToolReference Include="Microsoft.DotNet.Watcher.Tools" Version="2.0.0" />
    </ItemGroup>  
    

    The tools themselves are distributed as NuGet packages, so when you run a dotnet restore on the project, it will restore the tool at the same time.

    Adding tool references like this to every project has both upsides and downsides. On the one hand, adding them to the project file means that everyone who clones your repository from source control will automatically have the correct tools installed. Unfortunately, having to manually add this line to every project means that I rarely bother installing non-essential-but-useful tools like dotnet watch anymore.

    .NET Core 2.1 global tools

    In .NET Core 2.1, a feature was introduced that allows you to globally install a .NET Core CLI tool. Rather than having to install the tool manually in every project, you install it once globally on your machine, and then you can run the tool from any project.

    You can think of this as synonymous with npm -g global packages

    The intention is to expose all the first-party CLI tools (such as dotnet-user-secrets and dotnet-watch) as global tools, so you don't have to remember to explicitly install them into your projects. Obviously this has the downside that all your team have to have the same tools (and potentially the same version of the tools) installed already.

    You can install a global tool using the .NET Core 2.1 SDK. For example, to install Nate's dotnet serve tool, you just need to run:

    dotnet install tool --global dotnet-serve  
    

    You can then run dotnet serve from any folder.

    In the next section I'll describe how I built my own global tool dotnet-tinify that uses the TinyPNG api to compress images in a folder.

    Compressing images using the TinyPNG API

    Images make up a huge proportion of the size of a website - a quick test on the Amazon home page shows that 94% of the page's size is due to images. That means it's important to make sure your images aren't using more data than they need to, as it will slow down your page load times.

    Page load times are important when you're running an ecommerce site, but they're important everywhere else too. I'm much more likely to abandon a blog if it takes 10 seconds to load the page, than if it pops in instantly.

    Before I publish images on my blog, I always make sure they're as small as they can be. That means resizing them as necessary, using the correct format (.png for charts etc, .jpeg for photos), but also squashing them further.

    Different programs will save images with different quality, different algorithms, and different metadata. You can often get smaller images without a loss in quality by just stripping the metadata and using a different compression algorithm. When I was using a Mac, I typically used ImageOptim; now I typically use the TinyPNG website.


    To improve my workflow, rather than manually uploading and downloading images, I decided a global tool would be perfect. I could install it once, and run dotnet tinify . to squash all the images in the current folder.

    Creating a .NET Core global tool

    Creating a .NET CLI global tool is easy - it's essentially just a console app with a few additions to the .csproj file. Create a .NET Core Console app, for example using dotnet new console, and update your .csproj to add the IsPackable and PackAsTool elements:

    <Project Sdk="Microsoft.NET.Sdk">
    
      <PropertyGroup>
        <OutputType>Exe</OutputType>
        <IsPackable>true</IsPackable>
        <PackAsTool>true</PackAsTool>
        <TargetFramework>netcoreapp2.1</TargetFramework>
      </PropertyGroup>
    
    </Project>
    

    It's as easy as that!

    You can add NuGet packages to your project, reference other projects, anything you like; it's just a .NET Core console app! In the final section of this post I'll talk briefly about the dotnet-tinify tool I created.

    dotnet-tinify: a global tool for squashing images

    To be honest, creating the dotnet-tinify tool really didn't take long. Most of the hard work had already been done for me; I just plugged the bits together.

    TinyPNG provides a developer API you can use to access their service. It has an impressive array of client libraries to choose from (e.g. HTTP, Ruby, PHP, Node.js, Python, Java and .NET), and is even free to use for the first 500 compressions per month. To get started, head to https://tinypng.com/developers and sign up (no credit card required) to get an API key.


    Given there's already an official client library (and it's .NET Standard 1.3 too!) I decided to just use that in dotnet-tinify. Compressing an image is essentially a 4-step process:

    1. Set the API key on the static Tinify object:

    Tinify.Key = apiKey;  
    

    2. Validate the API key

    await Tinify.Validate();  
    

    3. Load a file

    var source = Tinify.FromFile(file);  
    

    4. Compress the file and save it to disk

    await source.ToFile(file);  
    

    There's loads more you can do with the API: resizing images, loading and saving to buffers, saving directly to S3. For details, take a look at the documentation.
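
    Putting the four steps above together, a minimal end-to-end sketch with the official client might look something like this (the file name and environment variable are placeholders):

    using System;
    using System.Threading.Tasks;
    using TinifyAPI;
    
    class Squash
    {
        static async Task Main()
        {
            // 1. Set the API key, 2. validate it
            Tinify.Key = Environment.GetEnvironmentVariable("TINYPNG_APIKEY");
            await Tinify.Validate();
    
            // 3. Load the file, 4. compress it and overwrite the original on disk
            var source = Tinify.FromFile("image.png");
            await source.ToFile("image.png");
        }
    }
    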

    With the functionality aspect of the tool sorted, I needed a way to pass the API key and path to the files to compress to the tool. I chose to use Nate McMaster's CommandLineUtils fork, McMaster.Extensions.CommandLineUtils, which is one of many similar libraries you can use to handle command-line parsing and help message generation.

    You can choose to use either the builder API or an attribute API with the CommandLineUtils package, so you can choose whichever makes you happy. With a small amount of setup I was able to get easy command line parsing into strongly typed objects, along with friendly help messages on how to use the tool with the --help argument:

    > dotnet tinify --help
    Usage: dotnet tinify [arguments] [options]
    
    Arguments:  
      path  Path to the file or directory to squash
    
    Options:  
      -?|-h|--help            Show help information
      -a|--api-key <API_KEY>  Your TinyPNG API key
    
    You must provide your TinyPNG API key to use this tool  
    (see https://tinypng.com/developers for details). This
    can be provided either as an argument, or by setting the  
    TINYPNG_APIKEY environment variable. Only png, jpeg, and  
    jpg, extensions are supported  
    
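
    As a rough illustration of the attribute API described above (a hypothetical sketch, not the actual dotnet-tinify source; the property names are made up):

    using McMaster.Extensions.CommandLineUtils;
    
    public class Program
    {
        public static int Main(string[] args)
            => CommandLineApplication.Execute<Program>(args);
    
        [Argument(0, Description = "Path to the file or directory to squash")]
        public string Path { get; }
    
        [Option("-a|--api-key", Description = "Your TinyPNG API key")]
        public string ApiKey { get; }
    
        private void OnExecute()
        {
            // Validate the key (argument or TINYPNG_APIKEY) and call the Tinify API here
        }
    }
    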

    And that's it, the tool is finished. It's very basic at the moment (no tests 😱!), but currently that's all I need. I've pushed an early package to NuGet and the code is on GitHub so feel free to comment / send issues / send PRs.

    You can install the tool using

    dotnet install tool --global dotnet-tinify  
    

    You need to set your TinyPNG API key in the TINYPNG_APIKEY environment variable for your machine (e.g. by executing setx TINYPNG_APIKEY abc123 in a command prompt), or you can pass the key as an argument to the dotnet tinify command (see below).

    Typical usage might be

    • dotnet tinify image.png - compress image.png in the current directory
    • dotnet tinify . - compress all the png and jpeg images in the current directory
    • dotnet tinify "C:\content" - compress all the png and jpeg images in the "C:\content" path
    • dotnet tinify image.png -a abc123 - compress image.png, providing your API key as an argument

    So give it a try, and have a go at writing your own global tool, it's probably easier than you think!

    Summary

    In this post I described the upcoming .NET Core global tools, and how they differ from the existing .NET Core CLI tools. I then described how I created a .NET Core global tool to compress my images using the TinyPNG developer API. Creating a global tool is as easy as setting a couple of properties in your .csproj file, so I strongly suggest you give it a try. You can find the dotnet-tinify tool I created on NuGet or on GitHub. Thanks to Nate McMaster for (heavily) inspiring this post!


    Damien Bowden: Comparing the HTTPS Security Headers of Swiss banks

    This post compares the HTTP security headers used by different banks in Switzerland. securityheaders.io is used to test each of the websites. The website of each bank as well as the e-banking login was tested. securityheaders.io sees the headers just like any browser would.

    The tested security headers help protect against some of the possible attacks, especially during the protected session. I would have expected all the banks to reach at least a grade of A, but was surprised to find that, even on the login pages, many websites are missing some of the basic ways of protecting the application.
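
    For context (this is not part of the original comparison), most of the headers tested here can be added to an ASP.NET Core application with a few lines of middleware; a minimal, hypothetical sketch:

    app.Use(async (context, next) =>
    {
        // Examples of the headers discussed in this post; a real CSP needs careful tuning
        context.Response.Headers["X-Frame-Options"] = "DENY";
        context.Response.Headers["Referrer-Policy"] = "no-referrer";
        context.Response.Headers["Content-Security-Policy"] = "default-src 'self'";
        await next();
    });
    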

    Credit Suisse provides the best protection for the e-banking login, and Raiffeisen has the best usage of the security headers on its website. It is strange that the Raiffeisen webpage is better protected than the Raiffeisen e-banking login.

    Scott Helme explains each of the different headers here, and why you should use them:

    TEST RESULTS

    Best A+, Worst F

    e-banking

    1. Grade A Credit Suisse
    1. Grade A Basler Kantonalbank
    3. Grade B Post Finance
    3. Grade B Julius Bär
    3. Grade B WIR Bank
    3. Grade B DC Bank
    3. Grade B Berner Kantonalbank
    3. Grade B St. Galler Kantonalbank
    3. Grade B Thurgauer Kantonalbank
    3. Grade B J. Safra Sarasin
    11. Grade C Raiffeisen
    12. Grade D Zürcher Kantonalbank
    13. Grade D UBS
    14. Grade D Valiant

    web

    1. Grade A Raiffeisen
    2. Grade A Credit Suisse
    2. Grade A WIR Bank
    2. Grade A J. Safra Sarasin
    5. Grade A St. Galler Kantonalbank
    6. Grade B Post Finance
    6. Grade B Valiant
    8. Grade C Julius Bär
    9. Grade C Migros Bank
    10. Grade D UBS
    11. Grade D Zürcher Kantonalbank
    12. Grade D Berner Kantonalbank
    13. Grade F DC Bank
    14. Grade F Thurgauer Kantonalbank
    15. Grade F Basler Kantonalbank

    TEST RESULTS DETAILS

    UBS

    https://www.ubs.com

    This is one of the worst protected of all the bank e-banking logins tested. It is missing most of the security headers. The website is also missing most of the security headers.

    https://ebanking-ch.ubs.com

    The headers returned from the e-banking login are even worse than the D rating suggests, as it is also missing the X-Frame-Options protection.

    cache-control →no-store, no-cache, must-revalidate, private
    connection →Keep-Alive
    content-encoding →gzip
    content-type →text/html;charset=UTF-8
    date →Tue, 27 Mar 2018 11:46:15 GMT
    expires →Thu, 1 Jan 1970 00:00:00 GMT
    keep-alive →timeout=5, max=10
    p3p →CP="OTI DSP CURa OUR LEG COM NAV INT"
    server →Apache
    strict-transport-security →max-age=31536000
    transfer-encoding →chunked
    

    No CSP is present here…

    Credit Suisse

    The Credit Suisse website and login are protected with most of the headers and have a good CSP. The no-referrer header is missing from the e-banking login and could be added.

    https://www.credit-suisse.com/ch/en.html

    CSP

    default-src 'self' 'unsafe-inline' 'unsafe-eval' data: *.credit-suisse.com 
    *.credit-suisse.cspta.ch *.doubleclick.net *.decibelinsight.net 
    *.mookie1.com *.demdex.net *.adnxs.com *.facebook.net *.google.com 
    *.google-analytics.com *.googletagmanager.com *.google.ch *.googleapis.com 
    *.youtube.com *.ytimg.com *.gstatic.com *.googlevideo.com *.twitter.com 
    *.twimg.com *.qq.com *.omtrdc.net *.everesttech.net *.facebook.com 
    *.adobedtm.com *.ads-twitter.com t.co *.licdn.com *.linkedin.com 
    *.credit-suisse.wesit.rowini.net *.zemanta.com *.inbenta.com 
    *.adobetag.com sc-static.net
    

    The CORS header is present, but it allows all origins, which is a bit lax; then again, CORS is not really a security feature. I still think it should be more strict.

    https://direct.credit-suisse.com/dn/c/cls/auth?language=en

    CSP

    default-src dnmb: 'self' *.credit-suisse.com *.directnet.com *.nab.ch; 
    script-src dnmb: 'self' 'unsafe-inline' 'unsafe-eval' *.credit-suisse.com 
    *.directnet.com *.nab.ch ; style-src 'self' 'unsafe-inline' *.credit-suisse.com *.directnet.com *.nab.ch; img-src 'self' http://img.youtube.com data: 
    *.credit-suisse.com *.directnet.com *.nab.ch; connect-src 'self' wss: ; 
    font-src 'self' data:
    

    Raiffeisen

    The Raiffeisen website is the best protected of all the tested banks. The e-banking could be improved.

    https://www.raiffeisen.ch/rch/de.html

    CSP

    This is pretty good, but it allows unsafe-eval, probably due to the JavaScript library used to implement the UI. This could be improved.

    Content-Security-Policy: default-src 'self' ; script-src 'self' 'unsafe-inline' 
    'unsafe-eval' assets.adobedtm.com maps.googleapis.com login.raiffeisen.ch ;
     style-src 'self' 'unsafe-inline' fonts.googleapis.com ; img-src 'self' 
    statistics.raiffeisen.ch dmp.adform.net maps.googleapis.com maps.gstatic.com 
    csi.gstatic.com khms0.googleapis.com khms1.googleapis.com www.homegate.ch 
    dpm.demdex.net raiffeisen.demdex.net ; font-src 'self' fonts.googleapis.com 
    fonts.gstatic.com ; connect-src 'self' api.raiffeisen.ch statistics.raiffeisen.ch 
    www.homegate.ch prod1.solid.rolotec.ch dpm.demdex.net login.raiffeisen.ch ;
     media-src 'self' ruz.ch ; child-src * ; frame-src * ;
    

    https://ebanking.raiffeisen.ch/

    Zürcher Kantonalbank

    https://www.zkb.ch/

    The website is pretty bad. It has a mis-configuration in the X-Frame-Options. The e-banking login is missing most of the headers.

    https://onba.zkb.ch/page/logon/logon.page

    Post Finance

    Post Finance is missing the CSP header and the no-referrer header in both the website and the login. This could be improved.

    https://www.postfinance.ch/de/privat.html

    https://www.postfinance.ch/ap/ba/fp/html/e-finance/home?login

    Julius Bär

    Julius Bär is missing the CSP header and the no-referrer header for the e-banking login, and the X-Frame-Options is also missing from the website.

    https://www.juliusbaer.com/global/en/home/

    https://ebanking.juliusbaer.com/bjbLogin/login?lang=en

    Migros Bank

    The website is missing a lot of headers as well.

    https://www.migrosbank.ch/de/privatpersonen.html

    Migros Bank provided no login link from the browser.

    WIR Bank

    The WIR Bank has one of the best websites, and is only missing the no-referrer header. Its e-banking solution is missing both a CSP header as well as a referrer policy. Here the website is more secure than the e-banking, which is strange.

    https://www.wir.ch/

    CSP

    frame-ancestors 'self' https://www.jobs.ch;
    

    https://wwwsec.wir.ch/authen/login?lang=de

    DC Bank

    The DC Bank is missing all the security headers on the website. This could really be improved! The e-banking is better, but missing the CSP and the referrer policies.

    https://www.dcbank.ch/

    https://banking.dcbank.ch/login/login.jsf?bank=74&lang=de&path=layout/dcb

    Basler Kantonalbank

    This is an interesting test. Basler Kantonalbank has no security headers on the website, and even an incorrect X-Frame-Options. The e-banking is good, but missing the no-referrer policy. So it has both the best and the worst of the banks tested.

    https://www.bkb.ch/en

    https://login.bkb.ch/auth/login

    CSP

    default-src https://*.bkb.ch https://*.mybkb.ch; 
    img-src data: https://*.bkb.ch https://*.mybkb.ch; 
    script-src 'unsafe-inline' 'unsafe-eval' 
    https://*.bkb.ch https://*.mybkb.ch; style-src 
    https://*.bkb.ch https://*.mybkb.ch 'unsafe-inline';
    

    Berner Kantonalbank

    https://www.bekb.ch/

    The Berner Kantonalbank has implemented 2 security headers on the website, but is missing the HSTS header. The e-banking is missing 2 of the security headers, the no-referrer policy and the CSP.

    CSP

    frame-ancestors 'self'
    

    https://banking.bekb.ch/login/login.jsf?bank=5&lang=de&path=layout/bekb

    Valiant

    Valiant has one of the better websites, but the worst e-banking login concerning the security headers; only the X-Frame-Options header is supported.

    https://www.valiant.ch/privatkunden

    https://wwwsec.valiant.ch/authen/login

    St. Galler Kantonalbank

    The website is an A grade, but is missing 2 headers, the X-Frame-Options and the no-referrer header. The e-banking is less protected compared to the website, with a grade B; it is missing the CSP and the referrer policy.

    https://www.sgkb.ch/

    CSP

    default-src 'self' 'unsafe-inline' 'unsafe-eval' recruitingapp-1154.umantis.com 
    *.googleapis.com *.gstatic.com prod1.solid.rolotec.ch beta.idisign.ch 
    test.idisign.ch dis.swisscom.ch www.newhome.ch www.wuestpartner.com; 
    img-src * data: android-webview-video-poster:; font-src * data:
    

    https://www.onba.ch/login/login

    Thurgauer Kantonalbank

    The Thurgauer website is missing all the security headers, not even HSTS is supported, and the e-banking is missing the CSP and the no-referrer headers.

    https://www.tkb.ch/

    https://banking.tkb.ch/login/login

    J. Safra Sarasin

    The J. Safra Sarasin website uses most security headers; it is only missing the no-referrer header. The e-banking website is missing the CSP and the referrer headers.

    https://www.jsafrasarasin.ch

    CSP

    frame-ancestors 'self'
    

    https://ebanking-ch.jsafrasarasin.com/ebankingLogin/login

    It would be nice if this part of the security could be improved for all of these websites.


    Andrew Lock: How to create a Helm chart repository using Amazon S3


    Helm is a package manager for Kubernetes. You can bundle Kubernetes resources together as charts that define all the necessary resources and dependencies of an application. You can then use the Helm CLI to install all the pods, services, and ingresses for an application in one simple command.

    Just like Docker or NuGet, there's a common public repository for Helm charts that the helm CLI uses by default. And just like Docker and NuGet, you can host your own Helm repository for your charts.

    In this post, I'll show how you can use an AWS S3 bucket to host a Helm chart repository, how to push custom charts to it, and how to install charts from the chart repository. I won't be going into Helm or Kubernetes in depth; I suggest you check the Helm quick start guide if they're new to you.

    If you're not using AWS, and you'd like to store your charts on Azure, Michal Cwienczek has a post on how to create a Helm chart repository using Blob Storage instead.

    Installing the prerequisites

    Before you start working with Helm properly, you need to do some setup. The Helm S3 plugin you'll be using later requires that you have the AWS CLI installed and configured on your machine. You'll also need an S3 bucket to use as your repository.

    Installing the AWS CLI

    I'm using an Ubuntu 16.04 virtual machine for this post, so all the instructions assume you have the same setup.

    The suggested approach to install the AWS CLI is to use pip, the Python package index. This obviously requires Python, which you can confirm is installed using:

    $ python -V
    Python 2.7.12  
    

    According to the pip website:

    pip is already installed if you are using Python 2 >=2.7.9 or Python 3 >=3.4

    However, running which pip returned nothing for me, so I installed it anyway using

    $ sudo apt-get install python-pip
    

    Finally, we can install the AWS CLI using:

    $ pip install awscli
    

    The last thing to do is to configure your environment to access your AWS account. Add the ~/.aws/config and ~/.aws/credentials files to your home directory with the appropriate access keys, as described in the docs.

    Creating the repository S3 bucket

    You're going to need an S3 bucket to store your charts. You can create the bucket any way you like, either using the AWS CLI, or using the AWS Management Console. I used the Management Console to create a bucket called my-helm-charts.


    Whenever you create a new bucket, it's a good idea to think about who is able to access it, and what they're able to do. You can control this using IAM policies or S3 policies, whatever works for you. Just make sure you've looked into it!

    The policy below, for example, grants read and write access to the IAM user andrew.

    Once your repository is working correctly, you might want to update this so that only your CI/CD pipeline can push charts to your repository, but that any of your users can list and fetch charts. It may also be wise to remove the delete action completely.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowListObjects",
          "Effect": "Allow",
          "Principal": {
            "AWS": ["arn:aws:iam::111122223333:user/andrew"]
          },
          "Action": [
            "s3:ListBucket"
          ],
          "Resource": "arn:aws:s3:::my-helm-charts"
        },
        {
          "Sid": "AllowObjectsFetchAndCreate",
          "Effect": "Allow",
          "Principal": {
            "AWS": ["arn:aws:iam::111122223333:user/andrew"]
          },
          "Action": [
            "s3:DeleteObject",
            "s3:GetObject",
            "s3:PutObject"
          ],
          "Resource": "arn:aws:s3:::my-helm-charts/*"
        }
      ]
    }
    

    Installing the Helm S3 plugin

    You're almost set now. If you haven't already, install Helm using the instructions in the quick start guide.

    The final prerequisite is the Helm S3 plugin. This acts as an intermediary between Helm and your S3 bucket. It's not the only way to create a custom repository, but it simplifies a lot of things.

    You can install the plugin from the GitHub repo by running:

    $ helm plugin install https://github.com/hypnoglow/helm-s3.git
    Downloading and installing helm-s3 v0.5.2 ...  
    Installed plugin: s3  
    

    This downloads the latest version of the plugin from GitHub, and registers it with Helm.
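
    You can confirm the plugin was registered by running:

    $ helm plugin list

    which should list the s3 plugin alongside any others you have installed.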

    Creating your Helm chart repository

    You're finally ready to start playing with charts properly!

    The first thing to do is to turn the my-helm-charts bucket into a valid chart repository. This requires adding an index.yaml to it. The Helm S3 plugin has a helper method to do that for you, which generates a valid index.yaml and uploads it to your S3 bucket:

    $ helm s3 init s3://my-helm-charts/charts
    Initialized empty repository at s3://my-helm-charts/charts  
    

    If you fetch the contents of the bucket now, you'll find an index.yaml file under the /charts key.


    Note, the /charts prefix is entirely optional. If you omit the prefix, the Helm chart repository will be in the root of the bucket. I just included it for demonstration purposes here.

    The contents of the index.yaml file are very basic at the moment:

    apiVersion: v1  
    entries: {}  
    generated: 2018-02-10T15:27:15.948188154-08:00  
    

    To work with the chart repository by name instead of needing the whole URL, you can add an alias. For example, to create a my-charts alias:

    $ helm repo add my-charts s3://my-helm-charts/charts
    "my-charts" has been added to your repositories
    

    If you run helm repo list now, you'll see your repo listed (along with the standard stable and local repos):

    $ helm repo list
    NAME            URL  
    stable          https://kubernetes-charts.storage.googleapis.com  
    local           http://127.0.0.1:8879/charts  
    my-charts       s3://my-helm-charts/charts  
    

    You now have a functioning chart repository, but it doesn't have any charts yet! In the next section I'll show how to push charts to, and install charts from, your S3 repository.

    Uploading a chart to the repository

    Before you can push a chart to the repository, you need to create one. If you already have one, you could use that, or you could copy one of the standard charts from the stable repository. For the sake of completeness, I'll create a basic chart, and use that for the rest of the post.

    Creating a simple test Helm chart

    I used the example from the Helm docs for this test, which creates one of the simplest templates, a ConfigMap, and adds it at the path test-chart/templates/configmap.yaml:

    $ helm create test-chart
    Creating test-chart  
    # Remove the initial cruft
    $ rm -rf test-chart/templates/*.*
    # Create a ConfigMap template at test-chart/templates/configmap.yaml
    $ cat >test-chart/templates/configmap.yaml <<EOL
    apiVersion: v1  
    kind: ConfigMap  
    metadata:  
      name: test-chart-configmap
    data:  
      myvalue: "Hello World"
    EOL  
    

    You can install this chart into your kubernetes cluster using:

    $ helm install ./test-chart
    NAME:   zeroed-armadillo  
    LAST DEPLOYED: Fri Feb  9 17:10:38 2018  
    NAMESPACE: default  
    STATUS: DEPLOYED
    
    RESOURCES:  
    ==> v1/ConfigMap
    NAME               DATA  AGE  
    test-chart-configmap  1     0s  
    

    and remove it again completely using the release name presented when you installed it (zeroed-armadillo) :

    # --purge removes the release from the "store" completely
    $ helm delete --purge zeroed-armadillo
    release "zeroed-armadillo" deleted  
    

    Now that you have a chart to work with, it's time to push it to your repository.

    Uploading the test chart to the chart repository

    To push the test chart to your repository you must first package it. This takes all the files in your ./test-chart directory and bundles them into a single .tgz file:

    $ helm package ./test-chart
    Successfully packaged chart and saved it to: ~/test-chart-0.1.0.tgz  
    

    Once the file is packaged, you can push it to your repository using the S3 plugin, specifying the packaged file name and the my-charts alias you created earlier:

    $ helm s3 push ./test-chart-0.1.0.tgz my-charts
    

    Note that without the plugin you would normally have to "manually" sync your local and remote repos, merging the remote repository with your locally added charts. The S3 plugin handles all that for you.

    If you check your S3 bucket after pushing the chart, you'll see that the tgz file has been uploaded.


    That's it, you've pushed a chart to an S3 repository!

    Searching and installing from the repository

    If you search for the test chart using helm search, you can see it listed:

    $ helm search test-chart
    NAME                    CHART VERSION   APP VERSION     DESCRIPTION  
    my-charts/test-chart    0.1.0           1.0             A Helm chart for Kubernetes  
    

    You can fetch and/or unpack the chart locally using helm fetch my-charts/test-chart or you can jump straight to installing it using:

    $ helm install my-charts/test-chart
    NAME:   rafting-crab  
    LAST DEPLOYED: Sat Feb 10 15:53:34 2018  
    NAMESPACE: default  
    STATUS: DEPLOYED
    
    RESOURCES:  
    ==> v1/ConfigMap
    NAME               DATA  AGE  
    mychart-configmap  1     0s  
    

    To remove the test chart from the repository, you provide the chart name and version you wish to delete:

    $ helm s3 delete test-chart --version 0.1.0 my-charts
    

    That's basically all there is to it! You now have a central repository on S3 for storing your charts. You can fetch, search, and install charts from your repository, just as you would any other.

    A warning - make sure you version your charts correctly

    Helm charts should be versioned using Semantic versioning, so if you make a change to a chart, you should be sure to bump the version before pushing it to your repository. You should treat the chart name + version as immutable.

    Unfortunately, there's currently nothing in the tooling to enforce this and prevent you from overwriting an existing chart with one that has the same name and version number. There's an open issue to address this in the S3 plugin, but in the meantime, just be careful, and potentially enable versioning of files in S3 to catch any issues.

    As of version 0.6.0, the plugin will block overwriting a chart if it already exists.
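
    If you do want that S3 versioning safety net, you can switch it on from the AWS CLI; a minimal sketch, assuming the bucket name used earlier:

    $ aws s3api put-bucket-versioning \
        --bucket my-helm-charts \
        --versioning-configuration Status=Enabled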

    In a similar vein, you may want to disable the ability to delete charts from a repository. I feel like it falls under the same umbrella as immutability of charts in general - you don't want to break downstream charts that have taken a dependency on your chart.

    Summary

    In this post I showed how to create a Helm chart repository in S3 using the Helm S3 plugin. I showed how to prepare an S3 bucket as a Helm repository, and how to push a chart to it. Finally, I showed how to search and install charts from the S3 repository.


    Andrew Lock: Exploring the Microsoft.AspNetCore.App shared framework in ASP.NET Core 2.1 (preview 1)


    In ASP.NET Core 2.1 (currently in preview 1) Microsoft have changed the way the ASP.NET Core framework is deployed for .NET Core apps, by moving to a system of shared frameworks instead of using the runtime store.

    In this post, I look at some of the history and motivation for this change, the changes that you'll see when you install the ASP.NET Core 2.1 SDK or runtime on your machine, and what it all means for you as an ASP.NET Core developer.

    If you're not interested in the history side, feel free to skip ahead to the impact on you as an ASP.NET Core developer.

    The Microsoft.AspNetCore.All metapackage and the runtime store

    In this section, I'll recap some of the problems that the Microsoft.AspNetCore.All metapackage was introduced to solve, as well as some of the issues it introduces. This is entirely based on my own understanding of the situation (primarily gleaned from these GitHub issues), so do let me know in the comments if I've got anything wrong or misrepresented the situation!

    In the beginning, there were packages. So many packages.

    With ASP.NET Core 1.0, Microsoft set out to create a highly modular, layered framework. Instead of the monolithic .NET Framework that you had to install in its entirety in a central location, you could reference individual packages that provide small, discrete pieces of functionality. Want to configure your app using JSON files? Add the Microsoft.Extensions.Configuration.Json package. Need environment variables? That's a different package (Microsoft.Extensions.Configuration.EnvironmentVariables).

    This approach has many benefits, for example:

    • You get a clear "layering" of dependencies
    • You can update packages independently of others
    • You only have to include the packages that you actually need, reducing the published size of your app.

    Unfortunately, these benefits diminished as the framework evolved.

    Initially, all the framework packages started at version 1.0.0, and it was simply a case of adding or removing packages as necessary for the required functionality. But bug fixes arrived shortly after release, and individual packages evolved at different rates. Suddenly .csproj files were awash with different version numbers: 1.0.1, 1.0.3, 1.0.2. It was no longer easy to tell at a glance whether you were on the latest version of a package, and version management became a significant chore. The same was true when ASP.NET Core 1.1 was released - a brief consolidation was followed by diverging package versions.


    On top of that, the combinatorial problem of testing every version of a package with every other version meant that there was only one "correct" combination of versions that Microsoft would support. For example, using the 1.1.0 version of the StaticFiles middleware with the 1.0.0 MVC middleware was easy to do, and would likely work without issue, but was not a configuration Microsoft could support.

    It's worth noting that the Microsoft.AspNetCore metapackage partially solved this issue, but it only included a limited number of packages, so you would often still be left with a degree of external consolidation required.

    Add to that the discoverability problem of finding the specific package that contains a given API, slow NuGet restore times due to the sheer number of packages, and a large published output size (as all packages are copied to the bin folder) and it was clear a different approach was required.

    Unifying package versions with a metapackage

    In ASP.NET Core 2.0, Microsoft introduced the Microsoft.AspNetCore.All metapackage and the .NET Core runtime store. These two pieces were designed to work around many of the problems that we've touched on, without sacrificing the ability to have distinct package dependency layers and a well factored framework.

    I discussed this metapackage and the runtime store in a previous post, but I'll recap here for convenience.

    The Microsoft.AspNetCore.All metapackage solves the issue of discoverability and inconsistent version numbers by including a reference to every package that is part of ASP.NET Core 2.0, as well as third-party packages referenced by ASP.NET Core. This includes integral packages like Newtonsoft.Json, but also packages like StackExchange.Redis that are used by somewhat-peripheral packages like Microsoft.Extensions.Caching.Redis.

    On the face of it, you might expect shipping a larger metapackage to cause everything to get even slower - there would be more packages to restore, and a huge number of packages in your app's published output.

    However, .NET Core 2.0 includes a new feature called the runtime store. This essentially lets you pre-install packages on a machine, in a central location, so you don't have to include them in the publish output of your individual apps. When you install the .NET Core 2.0 runtime, all the packages required by the Microsoft.AspNetCore.All metapackage are installed globally (at C:\Program Files\dotnet\store on Windows).


    When you publish your app, the Microsoft.AspNetCore.All metapackage trims out all the dependencies that it knows will be in the runtime store, significantly reducing the number of dlls in your published app's folder.

    The runtime store has some additional benefits. It can use "ngen-ed" libraries that are already optimised for the target machine, improving start-up time. You can also use the store to "light up" features at runtime, such as Application Insights, and you can create your own manifests too.

    Unfortunately, there are a few downsides to the store...

    The ever-growing runtime stores

    By design, if your app is built using the Microsoft.AspNetCore.All metapackage, and hence uses the runtime store output-trimming, you can only run your app on a machine that has the correct version of the runtime store installed (via the .NET Core runtime installer).

    For example, if you use the Microsoft.AspNetCore.All metapackage for version 2.0.1, you must have the runtime store for 2.0.1 installed; versions 2.0.0 and 2.0.2 are no good. That means if you need to fix a critical bug in production, you would need to install the next version of the runtime store, and you would need to update, recompile, and republish all of your apps to use it. This generally leads to runtime stores growing, as you can't easily delete old versions.

    This is a particular issue if you're running a platform like Azure, so Microsoft are acutely aware of it. If you deploy your apps using Docker, for example, it doesn't seem like as big of a problem.

    The solution Microsoft have settled on is somewhat conceptually similar to the runtime store, but it actually goes deeper than that.

    Introducing Shared Frameworks in ASP.NET Core 2.1

    In ASP.NET Core 2.1 (currently at preview 1), ASP.NET Core is now a Shared Framework, very similar to the existing Microsoft.NETCore.App shared framework that effectively "is" .NET Core. When you install the .NET Core runtime you can also install the ASP.NET Core runtime.


    After you install the preview, you'll find you have three folders in C:\Program Files\dotnet\shared (on Windows).


    These are the three Shared frameworks for ASP.NET Core 2.1:

    • Microsoft.NETCore.App - the .NET Core framework that previously was the only framework installed
    • Microsoft.AspNetCore.App - all the dlls from packages that make up the "core" of ASP.NET Core, with as many of the packages that have third-party dependencies as possible removed
    • Microsoft.AspNetCore.All - all the packages that were previously referenced by the Microsoft.AspNetCore.All metapackage, including all their dependencies.

    Each of these frameworks "inherits" from the last, so there's no duplication of libraries between them, but the folder layout is much simpler - just a flat list of libraries.
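
    If you want to poke around yourself, you can list the installed shared frameworks from the command line. A sketch, assuming the default install location on Linux (on Windows it's the C:\Program Files\dotnet\shared folder mentioned above); the exact output depends on which runtimes you have installed:

    $ ls /usr/share/dotnet/shared
    Microsoft.AspNetCore.All  Microsoft.AspNetCore.App  Microsoft.NETCore.App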


    So why should I care?

    That's all nice and interesting, but how does it affect how we develop ASP.NET Core applications? Well, for the most part, things are much the same, but there are a few points to take note of.

    Reference Microsoft.AspNetCore.App in your apps

    As described in this issue, Microsoft have introduced another metapackage called Microsoft.AspNetCore.App with ASP.NET Core 2.1. This contains all of the libraries that make up the core of ASP.NET Core that are shipped by the .NET and ASP.NET team themselves. Microsoft recommend using this package instead of the All metapackage, as that way they can provide direct support, instead of potentially having to rely on third-party libraries (like StackExchange.Redis or SQLite).

    In terms of behaviour, you'll still effectively get the same publish output dependency-trimming that you do currently (though the mechanism is slightly different), so there's no need to worry about that. If you need some of the extra packages that aren't part of the new Microsoft.AspNetCore.App metapackage, then you can just reference them individually.
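
    For example, to bring in the Redis cache package mentioned earlier, you could reference it directly from the project; a sketch using the .NET CLI (substitute whichever package you actually need):

    $ dotnet add package Microsoft.Extensions.Caching.Redis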

    Note that you are still free to reference the Microsoft.AspNetCore.All metapackage; it's just not recommended, as it locks you into specific versions of third-party dependencies. As you saw previously, the All shared framework inherits from the App shared framework, so it should be easy enough to switch between them.

    Framework version mismatches

    By moving away from the runtime store, and instead moving to a shared-framework approach, it's easier for the .NET Core runtime to handle mismatches between the requested runtime and the installed runtimes.

    With ASP.NET Core prior to 2.1, the runtime would automatically roll-forward patch versions if a newer version of the runtime was installed on the machine, but it would never roll forward minor versions. For example, if versions 2.0.2 and 2.0.3 were installed, then an app targeting 2.0.2 would use 2.0.3 automatically. However if only version 2.1.0 was installed and the app targeted version 2.0.0, the app would fail to start.

    With ASP.NET Core 2.1, the runtime can roll-forward by using a newer minor version of the framework than requested. So in the previous example, an app targeting 2.0.0 would be able to run on a machine that only has 2.1.0 or 2.2.1 installed for example.

    An exact minor match is always chosen preferentially; the minor version only rolls-forward when your app would otherwise be unable to run.

    Exact dependency ranges

    The final major change introduced in Microsoft.AspNetCore.App is the use of exact-version requirements for referenced NuGet packages. Typically, most NuGet packages specify their dependencies using "at least" ranges, where any version at or above the minimum will satisfy the requirement.

    For example, the Microsoft.AspNetCore.All (version 2.0.6) package specifies its dependencies using these "at least" (>=) ranges.


    Due to the way these dependencies are specified, it would be possible to silently "lift" a dependency to a higher version than that specified. For example, if you added a package which depended on a newer version, say 2.1.0, of Microsoft.AspNetCore.Authentication to an app using version 2.0.0 of the All package, then NuGet would select 2.1.0, as it satisfies all the requirements. That could result in you trying to use untested combinations of the ASP.NET Core framework libraries.

    Consequently, the Microsoft.AspNetCore.App package specifies exact versions for its dependencies (an = constraint, instead of >=).


    Now if you attempt to pull in a higher version of a framework library transitively, you'll get an error from NuGet when it tries to restore, warning you about the issue. So if you attempt to use version 2.2.0 of Microsoft.AspNetCore.Antiforgery with version 2.1.0 of the App metapackage for example, you'll get an error.

    It's still possible to pull in a higher version of a framework package if you need to, by referencing it directly and overriding the error, but at that point you're making a conscious decision to head into uncharted waters!

    Summary

    ASP.NET Core 2.1 brings a surprising number of changes under the hood for a minor release, and fundamentally re-architects the way ASP.NET Core apps are delivered. However, as a developer you don't have much to worry about. Other than switching to the Microsoft.AspNetCore.App metapackage and making some minor adjustments, the upgrade from 2.0 to 2.1 should be very smooth. If you're interested in digging further into the under-the-hood changes, I recommend checking out the GitHub issues referenced earlier.


    Damien Bowden: Using Message Pack with ASP.NET Core SignalR

    This post shows how SignalR could be used to send messages between different C# console clients using Message Pack as the protocol. An ASP.NET Core web application is used to host the SignalR Hub.

    Code: https://github.com/damienbod/AspNetCoreAngularSignalR

    Posts in this series

    History

    2018-05-31 Updated Microsoft.AspNetCore.SignalR 2.1

    2018-05-08 Updated Microsoft.AspNetCore.SignalR 2.1 rc1

    Setting up the Message Pack SignalR server

    Add the Microsoft.AspNetCore.SignalR and the Microsoft.AspNetCore.SignalR.Protocols.MessagePack NuGet packages to the ASP.NET Core server application where the SignalR Hub will be hosted. The Visual Studio NuGet Package Manager can be used for this.

    Or just add it directly to the .csproj project file.

    <PackageReference 
      Include="Microsoft.AspNetCore.SignalR" 
      Version="1.0.0-rc1-final" />
    <PackageReference 
      Include="Microsoft.AspNetCore.SignalR.Protocols.MessagePack" 
      Version="1.0.0-rc1-final" />
    

    Set up a SignalR Hub as required. This is done by implementing the Hub class.

    using Dtos;
    using Microsoft.AspNetCore.SignalR;
    using System.Threading.Tasks;
    
    namespace AspNetCoreAngularSignalR.SignalRHubs
    {
        // Send messages using Message Pack binary formatter
        public class LoopyMessageHub : Hub
        {
            public Task Send(MessageDto data)
            {
                return Clients.All.SendAsync("Send", data);
            }
        }
    }
    
    

    A DTO class is created to send the Message Pack messages. Notice that the class is a plain C# class with no Message Pack-specific attributes or properties.

    using System;
    
    namespace Dtos
    {
        public class MessageDto
        {
            public Guid Id { get; set; }
    
            public string Name { get; set; }
    
            public int Amount { get; set; }
        }
    }
    
    

    Then add the Message Pack protocol to the SignalR service.

    services.AddSignalR()
    .AddMessagePackProtocol();
    

    And configure the SignalR Hub in the Startup class Configure method of the ASP.NET Core server application.

    app.UseSignalR(routes =>
    {
    	routes.MapHub<LoopyMessageHub>("/loopymessage");
    });
    

    Setting up the Message Pack SignalR client

    Add the Microsoft.AspNetCore.SignalR.Client and the Microsoft.AspNetCore.SignalR.Protocols.MessagePack NuGet packages to the SignalR client console application.

    The packages are added to the project file.

    <PackageReference 
      Include="Microsoft.AspNetCore.SignalR.Client" 
      Version="1.0.0" />
    <PackageReference 
      Include="Microsoft.AspNetCore.SignalR.Protocols.MessagePack" 
      Version="1.0.0" />
    

    Create a Hub client connection using the Message Pack protocol. The URL must match the URL configuration on the server.

    public static async Task SetupSignalRHubAsync()
    {
    	_hubConnection = new HubConnectionBuilder()
    		 .WithUrl("https://localhost:44324/loopymessage")
    		 .AddMessagePackProtocol()
    		 .ConfigureLogging(factory =>
    		 {
    			 factory.AddConsole();
    			 factory.AddFilter("Console", level => level >= LogLevel.Trace);
    		 }).Build();
    
    	 await _hubConnection.StartAsync();
    }
    

    The Hub can then be used to send or receive SignalR messages, using Message Pack as the binary serializer.

    using Dtos;
    using System;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.SignalR.Client;
    using Microsoft.Extensions.Logging;
    using Microsoft.AspNetCore.SignalR.Protocol;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.DependencyInjection.Extensions;
    
    namespace ConsoleSignalRMessagePack
    {
        class Program
        {
            private static HubConnection _hubConnection;
    
            public static void Main(string[] args) => MainAsync().GetAwaiter().GetResult();
    
            static async Task MainAsync()
            {
                await SetupSignalRHubAsync();
                _hubConnection.On<MessageDto>("Send", (message) =>
                {
                    Console.WriteLine($"Received Message: {message.Name}");
                });
                Console.WriteLine("Connected to Hub");
                Console.WriteLine("Press ESC to stop");
                do
                {
                    while (!Console.KeyAvailable)
                    {
                        var message = Console.ReadLine();
                        await _hubConnection.SendAsync("Send", new MessageDto() { Id = Guid.NewGuid(), Name = message, Amount = 7 });
                        Console.WriteLine("SendAsync to Hub");
                    }
                }
                while (Console.ReadKey(true).Key != ConsoleKey.Escape);
    
                await _hubConnection.DisposeAsync();
            }
    
            public static async Task SetupSignalRHubAsync()
            {
                _hubConnection = new HubConnectionBuilder()
                     .WithUrl("https://localhost:44324/loopymessage")
                     .AddMessagePackProtocol()
                     .ConfigureLogging(factory =>
                     {
                         factory.AddConsole();
                         factory.AddFilter("Console", level => level >= LogLevel.Trace);
                     }).Build();
    
                 await _hubConnection.StartAsync();
            }
        }
    }
    
    

    Testing

    Start the server application, and two console applications. Then you can send and receive SignalR messages, which use Message Pack as the protocol.


    Links:

    https://msgpack.org/

    https://github.com/aspnet/SignalR

    https://github.com/aspnet/SignalR#readme

    https://radu-matei.com/blog/signalr-core/


    Anuraj Parameswaran: Exploring Global Tools in .NET Core

    This post is about Global Tools in .NET Core. Global Tools is a new feature in .NET Core that helps you write .NET Core console apps that can be packaged and delivered as NuGet packages. It is similar to npm global tools.
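
    For a quick taste, installing and running a published global tool looks something like this; a sketch using the dotnetsay sample tool (any console app packaged as a global tool works the same way):

    $ dotnet tool install -g dotnetsay
    # the tool is now on your PATH
    $ dotnetsay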


    Damien Bowden: First experiments with makecode and micro:bit

    At the MVP Global Summit, I heard about MakeCode for the first time. The project makes it really easy for people to get a first introduction to code and computer science. I got the chance to play around with the Micro:bit, which has a whole range of sensors and can easily be programmed from MakeCode.

    I decided to experiment and tried it out with two 12 year olds and a 10 year old.

    MakeCode

    The https://makecode.com/ website provides a whole range of links, getting started lessons, and great ideas on how people can use this. We experimented firstly with the MakeCode Micro:bit. This software can be run from any browser, and https://makecode.microbit.org/ can be used to experiment with, or program, the Micro:bit.

    Micro:bit

    The Micro:bit is a 32-bit ARM computer with all types of sensors, inputs and outputs. Here’s a link with the features: http://microbit.org/guide/features/

    The Micro:bit can be purchased almost anywhere in the world. Links for your country can be found here: https://www.microbit.org/resellers/

    The Micro:bit can be connected to your computer using a USB cable.

    Testing

    Once set up, I gave the kids a simple introduction, explained the different blocks, and very quickly they started to experiment themselves. The first results looked like this, which they called a magic show. I kind of like the name.

    I also explained the code, and they could understand how to change the code, and how it mapped back to the block code. The mapping between the block code and the text code is a fantastic feature of MakeCode. Their first question was why bother with the text code at all, which showed they understood the relationship, so I could then explain the advantages.

    input.onButtonPressed(Button.A, () => {
        basic.showString("HELLO!")
        basic.pause(2000)
    })
    input.onGesture(Gesture.FreeFall, () => {
        basic.showIcon(IconNames.No)
        basic.pause(4000)
    })
    input.onButtonPressed(Button.AB, () => {
        basic.showString("DAS WARS")
    })
    input.onButtonPressed(Button.B, () => {
        basic.showIcon(IconNames.Angry)
    })
    input.onGesture(Gesture.Shake, () => {
        basic.showLeds(`
            # # # # #
            # # # # #
            # # # # #
            # # # # #
            # # # # #
            `)
    })
    basic.forever(() => {
        led.plot(2, 2)
        basic.pause(30)
        led.unplot(2, 2)
        basic.pause(300)
    })
    

    Then it was downloaded to the Micro:bit. The MakeCode Micro:bit software provides a download button. If you use this from the browser, it creates a hex file, which can be copied to the hardware via drag and drop. If you use the MakeCode Micro:bit Windows Store application, it will download it directly for you.

    Once downloaded, the magic show could begin.

    The following was produced by the 10 year old, who needed a bit more help. He discovered the sound.

    Notes

    This is a super project, and I would highly recommend it to schools, or as a present for kids. There are so many ways to try out new things, or code with different hardware, or even Minecraft. The kids have started to introduce it to other kids already. It would be great if they could do this in school. If you have questions or queries, the MakeCode team are really helpful and can be reached on Twitter at @MsMakeCode, or you can create a GitHub issue. The docs are really excellent if you require help with programming, and provide some really cool examples and ideas.

    Links:

    http://makecode.com/

    https://makecode.microbit.org/

    https://www.microbit.org/resellers/

    http://microbit.org/guide/features/


    Damien Bowden: Securing the CDN links in the ASP.NET Core 2.1 templates

    This article uses the ASP.NET Core 2.1 MVC template and shows how to secure the CDN links using the integrity parameter.

    A new ASP.NET Core MVC application was created using the 2.1 template in Visual Studio.

    This template uses HTTPS by default and has added some of the required HTTPS headers, like HSTS, which is required for any application. The template has added the integrity parameter to the JavaScript CDN links, but on the CSS CDN links, it is missing.

    <script src="https://ajax.aspnetcdn.com/ajax/jquery/jquery-2.2.0.min.js"
     asp-fallback-src="~/lib/jquery/dist/jquery.min.js"
     asp-fallback-test="window.jQuery"  
     crossorigin="anonymous"
     integrity="sha384-K+ctZQ+LL8q6tP7I94W+qzQsfRV2a+AfHIi9k8z8l9ggpc8X+Ytst4yBo/hH+8Fk">
    </script>
    

    If the value of the integrity parameter is changed, or the CDN script is changed (for example, if a bitcoin miner was added to it), the MVC application will not load the script.

    To test this, you can change the value of the integrity parameter on the script; in the production environment, the script will not load and the application will fall back to the locally deployed script. By changing the value of the integrity parameter, you simulate a changed script on the CDN, and the browser reports the failed integrity check in its console.

    Adding the integrity parameter to the CSS link

    The template creates a bootstrap link in the _Layout.cshtml as follows:

    <link rel="stylesheet" href="https://ajax.aspnetcdn.com/ajax/bootstrap/3.3.7/css/bootstrap.min.css"
                  asp-fallback-href="~/lib/bootstrap/dist/css/bootstrap.min.css"
                  asp-fallback-test-class="sr-only" asp-fallback-test-property="position" asp-fallback-test-value="absolute" />
    

    This is missing the integrity parameter. To fix this, the integrity parameter can be added to the link.

    <link rel="stylesheet" 
              integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" 
              crossorigin="anonymous"
              href="https://ajax.aspnetcdn.com/ajax/bootstrap/3.3.7/css/bootstrap.min.css"
              asp-fallback-href="~/lib/bootstrap/dist/css/bootstrap.min.css"
              asp-fallback-test-class="sr-only"
              asp-fallback-test-property="position" 
              asp-fallback-test-value="absolute" />
    

    The value of the integrity parameter was created using the SRI Hash Generator. When creating this, you have to be sure that the link is safe. By using this CDN, your application trusts the CDN links.
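
    You can also generate the hash yourself from the command line; a sketch using curl and openssl against the same bootstrap file:

    $ curl -s https://ajax.aspnetcdn.com/ajax/bootstrap/3.3.7/css/bootstrap.min.css \
        | openssl dgst -sha384 -binary \
        | openssl base64 -A

    Prefix the output with sha384- to get the value for the integrity attribute.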

    Now if the CSS file is changed on the CDN server, the application will not load it.

    The CSP header of the application can also be improved. The application should only load from the required CDNs and nowhere else. This can be enforced by adding the following CSP configuration:

    content-security-policy: 
    script-src 'self' https://ajax.aspnetcdn.com;
    style-src 'self' https://ajax.aspnetcdn.com;
    img-src 'self';
    font-src 'self' https://ajax.aspnetcdn.com;
    form-action 'self';
    frame-ancestors 'self';
    block-all-mixed-content
    

    Or you can use NWebSec and add it to the Startup.cs:

    app.UseCsp(opts => opts
    	.BlockAllMixedContent()
    	.FontSources(s => s.Self()
    		.CustomSources("https://ajax.aspnetcdn.com"))
    	.FormActions(s => s.Self())
    	.FrameAncestors(s => s.Self())
    	.ImageSources(s => s.Self())
    	.StyleSources(s => s.Self()
    		.CustomSources("https://ajax.aspnetcdn.com"))
    	.ScriptSources(s => s.Self()
    		.UnsafeInline()
    		.CustomSources("https://ajax.aspnetcdn.com"))
    );
    

    Links:

    https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity

    https://www.srihash.org/

    https://www.troyhunt.com/protecting-your-embedded-content-with-subresource-integrity-sri/

    https://scotthelme.co.uk/tag/cdn/

    https://rehansaeed.com/tag/subresource-integrity-sri/

    https://rehansaeed.com/subresource-integrity-taghelper-using-asp-net-core/


    Andrew Lock: Fixing Nginx "upstream sent too big header" error when running an ingress controller in Kubernetes


    In this post I describe a problem I had running IdentityServer 4 behind an Nginx reverse proxy. In my case, I was running Nginx as an ingress controller for a Kubernetes cluster, but the issue is actually not specific to Kubernetes, or IdentityServer - it's an Nginx configuration issue.

    The error: "upstream sent too big header while reading response header from upstream"

    Initially, the Nginx ingress controller appeared to be configured correctly. I could view the IdentityServer home page, and could click login, but when I was redirected to the authorize endpoint (as part of the standard IdentityServer flow), I would get a 502 Bad Gateway error and a blank page.

    Looking through the logs, IdentityServer showed no errors - as far as it was concerned there were no problems with the authorize request. However, looking through the Nginx logs revealed this gem (formatted slightly for legibility):

    2018/02/05 04:55:21 [error] 193#193:  
        *25 upstream sent too big header while reading response header from upstream, 
    client:  
        192.168.1.121, 
    server:  
        example.com, 
    request:  
      "GET /idsrv/connect/authorize/callback?state=14379610753351226&amp;nonce=9227284121831921&amp;client_id=test.client&amp;redirect_uri=https%3A%2F%2Fexample.com%2Fclient%2F%23%2Fcallback%3F&amp;response_type=id_token%20token&amp;scope=profile%20openid%20email&amp;acr_values=tenant%3Atenant1 HTTP/1.1",
    upstream:  
      "http://10.32.0.9:80/idsrv/connect/authorize/callback?state=14379610753351226&amp;nonce=9227284121831921&amp;client_id=test.client&amp;redirect_uri=https%3A%2F%2Fexample.com%2F.client%2F%23%
    

    Apparently, this is a common problem with Nginx, and is essentially exactly what the error says. Nginx sometimes chokes on responses with large headers, because its buffer size is smaller than some other web servers. When it gets a response with large headers, as was the case for my IdentityServer OpenID Connect callback, it falls over and sends a 502 response.

    The solution is to simply increase Nginx's buffer size. If you're running Nginx on bare metal you could do this by increasing the buffer size in the config file, something like:

    proxy_buffers         8 16k;  # Buffer pool = 8 buffers of 16k  
    proxy_buffer_size     16k;    # 16k of buffers from pool used for headers  
    

    However, in this case, I was working with Nginx as an ingress controller to a Kubernetes cluster. The question was, how do you configure Nginx when it's running in a container?

    How to configure the Nginx ingress controller

    Luckily, the Nginx ingress controller is designed for exactly this situation. It uses a ConfigMap of values that are mapped to internal Nginx configuration values. By changing the ConfigMap, you can configure the underlying Nginx Pod.

    The Nginx ingress controller only supports changing a subset of options via the ConfigMap approach, but luckily proxy-buffer-size is one such option! There are two things you need to do to customise the ingress:

    1. Deploy the ConfigMap containing your customisations
    2. Point the Nginx ingress controller Deployment to your ConfigMap

    I'm just going to show the template changes in this post, assuming you have a cluster created using kubeadm and kubectl.

    Creating the ConfigMap

    The ConfigMap is one of the simplest resources in Kubernetes; it's essentially just a collection of key-value pairs. The following manifest creates a ConfigMap called nginx-configuration and sets the proxy-buffer-size to "16k", to solve the 502 errors I was seeing previously.

    kind: ConfigMap  
    apiVersion: v1  
    metadata:  
      name: nginx-configuration
      namespace: kube-system
      labels:
        k8s-app: nginx-ingress-controller
    data:  
      proxy-buffer-size: "16k"
    

    If you save this to a file nginx-configuration.yaml, then you can apply it to your cluster using:

    kubectl apply -f nginx-configuration.yaml  
    

    However, you can't just apply the ConfigMap and have the ingress controller pick it up automatically - you have to update your Nginx Deployment so it knows which ConfigMap to use.
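
    Before wiring up the Deployment, you can sanity-check that the ConfigMap landed in the expected namespace; the names here match the manifest above:

    $ kubectl get configmap nginx-configuration --namespace kube-system -o yaml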

    Configuring the Nginx ingress controller to use your ConfigMap

    In order for the ingress controller to use your ConfigMap, you must pass the ConfigMap name (nginx-configuration) as an argument in your deployment. For example:

    args:  
      - /nginx-ingress-controller
      - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
      - --configmap=$(POD_NAMESPACE)/nginx-configuration
    

    Without this argument, the ingress controller will ignore your ConfigMap. The complete deployment manifest will look something like the following (adapted from the Nginx ingress controller repo). Note that the --configmap argument uses $(POD_NAMESPACE), so the ConfigMap needs to live in the same namespace as the ingress controller Deployment - adjust the namespaces to suit your cluster:

    apiVersion: extensions/v1beta1  
    kind: Deployment  
    metadata:  
      name: nginx-ingress-controller
      namespace: ingress-nginx 
    spec:  
      replicas: 1
      template:
        metadata:
          labels:
            app: ingress-nginx
          annotations:
            prometheus.io/port: '10254'
            prometheus.io/scrape: 'true' 
        spec:
          initContainers:
          - command:
            - sh
            - -c
            - sysctl -w net.core.somaxconn=32768; sysctl -w net.ipv4.ip_local_port_range="1024 65535"
            image: alpine:3.6
            imagePullPolicy: IfNotPresent
            name: sysctl
            securityContext:
              privileged: true
          containers:
            - name: nginx-ingress-controller
              image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.2
              args:
                - /nginx-ingress-controller
                - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
                - --configmap=$(POD_NAMESPACE)/nginx-configuration
              env:
                - name: POD_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
              ports:
              - name: http
                containerPort: 80
              - name: https
                containerPort: 443
              livenessProbe:
                failureThreshold: 3
                httpGet:
                  path: /healthz
                  port: 10254
                  scheme: HTTP
                initialDelaySeconds: 10
                periodSeconds: 10
                successThreshold: 1
                timeoutSeconds: 1
              readinessProbe:
                failureThreshold: 3
                httpGet:
                  path: /healthz
                  port: 10254
                  scheme: HTTP
                periodSeconds: 10
                successThreshold: 1
                timeoutSeconds: 1
    

    Summary

    While deploying a Kubernetes cluster locally, the Nginx ingress controller was returning 502 errors for some requests. This was due to the response headers being too large for Nginx to handle. Increasing the proxy_buffer_size configuration parameter solved the problem. To achieve this with the ingress controller, you must provide a ConfigMap and point your ingress controller to it by passing an additional arg in your Deployment.


    Ben Foster: Injecting UrlHelper in ASP.NET Core MVC

    One of our APIs has a dynamic routing system that invokes a different handler based on attributes of the incoming HTTP request.

    Each of these handlers is responsible for building the API response which includes generating hypermedia links that describe the state and capabilities of the resource, for example:

    {
      "total_count": 3,
      "limit": 10,
      "from": "2018-01-25T06:36:08Z",
      "to": "2018-03-10T07:13:24Z",
      "data": [
        {
          "event_id": "evt_b7ykb47ryaouznsbmbn7ul4uai",
          "event_type": "payment.declined",
          "created_on": "2018-03-10T07:13:24Z",
          "_links": {
            "self": {
              "href": "https://example.com/events/evt_b7ykb47ryaouznsbmbn7ul4uai"
            },
            "webhooks-retry": {
              "href": "https://example.com/events/evt_b7ykb47ryaouznsbmbn7ul4uai/webhooks/retry"
            }
          }
        },
      ...
    }
    

    To avoid hardcoding paths into these handlers we wanted to take advantage of UrlHelper to build the links. Unlike many components in ASP.NET Core, this is not something that is injectable by default.

    To register it with the built-in container, add the following to your Startup class:

    services.AddSingleton<IActionContextAccessor, ActionContextAccessor>();
    services.AddScoped<IUrlHelper>(x => {
        var actionContext = x.GetRequiredService<IActionContextAccessor>().ActionContext;
        var factory = x.GetRequiredService<IUrlHelperFactory>();
        return factory.GetUrlHelper(actionContext);
    });
    

    Both IActionContextAccessor and IUrlHelperFactory live in the Microsoft.AspNetCore.Mvc.Core package. If you're using the Microsoft.AspNetCore.All metapackage you should have this referenced already.

    Once done, you'll be able to use IUrlHelper in any of your components, assuming you're in the context of an HTTP request:

    if (authResponse.ThreeDsSessionId.HasValue)
    {
        return new PaymentAcceptedResponse
        {
            Id = id,
            Reference = paymentRequest.Reference,
            Status = authResponse.Status
        }
        .WithLink("self", _urlHelper.PaymentLink(id))
        .WithLink("redirect",
            _urlHelper.Link("AcsRedirect", new { id = authResponse.ThreeDsSessionId }));
    }
    


    Anuraj Parameswaran: Bulk Removing Azure Active Directory Users using PowerShell

    This post is about deleting an Azure Active Directory. Sometimes you can't remove your Azure Active Directory because of the users and/or applications created or synced on it, and you can't always remove those users from the Azure Portal - so this post shows how to remove them in bulk using PowerShell.


    Anuraj Parameswaran: WebHooks in ASP.NET Core

    This post is about consuming WebHooks in ASP.NET Core. A WebHook is an HTTP callback: an HTTP POST that occurs when something happens; a simple event-notification via HTTP POST. From ASP.NET Core 2.1 preview onwards, ASP.NET Core supports WebHooks. As usual, to use WebHooks, you need to install a package for WebHook support. In this post I am consuming a webhook from GitHub, so you need to install Microsoft.AspNetCore.WebHooks.Receivers.GitHub. You can do it like this.
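
    A sketch of the install step using the .NET CLI (the post itself may use the NuGet Package Manager instead):

    $ dotnet add package Microsoft.AspNetCore.WebHooks.Receivers.GitHub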


    Dominick Baier: NDC London 2018 Artefacts

    “IdentityServer v2 on ASP.NET Core v2: An update” video

    “Authorization is hard! (aka the PolicyServer announcement)” video

    DotNetRocks interview audio

     


    Anuraj Parameswaran: Deploying Your Angular Application To Azure

    This post is about deploying your Angular application to Azure App Service. Unlike earlier versions of Angular JS, Angular CLI is the preferred way to develop and deploy Angular applications. In this post I will show you how to build a CI/CD pipeline with GitHub and Kudu, which will deploy your Angular application to an Azure Web App. I am using an ASP.NET Core Web API application, which will be the backend, and an Angular application as the frontend.


    Anuraj Parameswaran: Anti forgery validation with ASP.NET MVC and Angular

    This post is about how to implement anti-forgery validation with ASP.NET MVC and Angular. The anti-forgery token can be used to help protect your application against cross-site request forgery. To use this feature, call the AntiForgeryToken method from a form and add the ValidateAntiForgeryTokenAttribute attribute to the action method that you want to protect.


    Anuraj Parameswaran: Using Yarn with Angular CLI

    This post is about using Yarn in Angular CLI instead of NPM. Yarn is an alternative package manager for NPM packages, with a focus on reliability and speed. It was released in October 2016 and has already gained a lot of traction and enjoys great popularity in the JavaScript community.


    Anuraj Parameswaran: Measuring code coverage of .NET Core applications with Visual Studio 2017

    This post is about Measuring code coverage of .NET Core applications with Visual Studio. Test coverage is a measure used to describe the degree to which the source code of a program is executed when a particular test suite runs. A program with high test coverage, measured as a percentage, has had more of its source code executed during testing which suggests it has a lower chance of containing undetected software bugs compared to a program with low test coverage.


    Anuraj Parameswaran: Building Progressive Web apps with ASP.NET Core

    This post is about building Progressive Web Apps, or PWAs, with ASP.NET Core. Progressive Web Apps (PWA) are web applications that are regular web pages or websites, but can appear to the user like traditional applications or native mobile applications. The application type attempts to combine features offered by most modern browsers with the benefits of a mobile experience.

