Henrik F. Nielsen: Autofac and Azure Mobile Services .NET Backend (Link)

Posted the blog Autofac and Azure Mobile Services .NET Backend on the Azure Mobile Services Team Blog explaining how the .NET backend uses Autofac and how you can use it for all things dependency injection.

Have fun!

Henrik


Ali Kheyrollahi: CacheCow 0.5 Released

[Level T1]


OK, finally CacheCow 0.5 (actually 0.5.1, as 0.5 was released by mistake and quickly pulled) is out. So here is the list of changes, some of which are breaking. The breaking changes, however, are very easy to fix, and if you do not need the new features you can happily stay on the stable 0.4 versions. If you have moved to 0.5 and you see issues, please let me know ASAP. For more information, see my other post on the Alpha release.

So here are the features:

Azure Caching for CacheCow Server and Client

Thanks to Ugo Lattanzi, we now have CacheCow storage in Azure Caching, for both client and server. Considering that calls to Azure Cache take around 1ms (based on some ad hoc tests I have done; do not quote this as a proper benchmark), this makes a really good option if you have deployed your application in Azure. You just need to specify the cacheName, otherwise the "default" cache will be used.

Complete re-design of cache invalidation

I have now put some sense into cache invalidation. The point is that a strong ETag is generated for a particular representation of the resource, while cache invalidation happens on the resource including all of its representations. For example, if you ask for the application/xml representation of the resource, an ETag is generated for this particular representation; the application/json representation will get its own ETag. However, on PUT both need to be invalidated. On the other hand, in case of a PUT or DELETE on a resource (let's say /api/customer/123), the collection resource (/api/customer) needs to be invalidated too, since the collection will be different.

But how would we find out whether a resource is a collection or a single item? I have implemented some logic that infers whether the item is a collection or not, based on common and conventional route design in ASP.NET Web API. If your routing is very custom this will not work.

Another aspect is hierarchical routes. When a resource is invalidated, its parent resource will be invalidated as well. For example, in case of PUT /api/customer/123/order/456, the customer resource will change too - if orders are returned as part of the customer. So in this case, not only the order collection resource but also the customer needs to be invalidated. This is currently done in CacheCow.Server and will work for conventional routes.

Using MemoryCache for InMemory stores (both server and client)

Previously I had been using dictionaries for the InMemory stores. The problem with a Dictionary is that it just grows until the system runs out of memory, while MemoryCache limits its memory use when the system comes under memory pressure.
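As a sketch of the difference (the cache name, memory limit and expiration policy below are illustrative, not CacheCow's actual settings):

```csharp
using System;
using System.Collections.Specialized;
using System.Runtime.Caching;

// Unlike a Dictionary, MemoryCache can be capped explicitly and evicts
// entries when the system comes under memory pressure.
var settings = new NameValueCollection
{
    { "cacheMemoryLimitMegabytes", "100" }
};
var cache = new MemoryCache("InMemoryStore", settings);

cache.Set("/api/customer/123", "cached-response", new CacheItemPolicy
{
    SlidingExpiration = TimeSpan.FromMinutes(30)
});

var hit = (string)cache.Get("/api/customer/123"); // "cached-response"
```

A Dictionary-based store would keep every entry forever unless you evicted them yourself; MemoryCache gives you that behavior for free.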

Dependency of CachingHandler to HttpConfiguration

As I announced before, you now need to pass the configuration to the CachingHandler on the server. This should be an easy change, but a breaking one.
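A minimal sketch of the new wiring, assuming the 0.5 API where the handler takes the HttpConfiguration (check the exact overloads in your CacheCow version):

```csharp
using System.Web.Http;
using CacheCow.Server;

var config = new HttpConfiguration();

// The HttpConfiguration is now a required dependency of the handler.
var cachingHandler = new CachingHandler(config);
config.MessageHandlers.Add(cachingHandler);
```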


Future roadmap and 0.6

I think we are now at a point where CacheCow requires a re-write for the features I want to introduce. One such feature is content-based ETag generation and cache invalidation. Content-based ETag generation exists and can be used today, but without content-based invalidation it is not of much use, especially since it currently generates a weak ETag. Considering the changes required to achieve this, almost a complete re-write is needed.

Please keep the feedback coming. Thanks for using and supporting CacheCow!



Darrel Miller: It’s time for a change, and more of the same.

 

Starting next week, I will be joining the Runscope team.  Runscope provides tools that help developers debug, test and monitor Web APIs.  This is a company that lives and breathes HTTP.  If you know me, I’m sure you understand why that appeals to me.

Although I have been building distributed business applications for more than 20 years, it wasn’t until 2006 that I really began to appreciate the power of HTTP and discover the REST constraints.  That appreciation and discovery continue.  Not a day goes by that I don’t learn something new about how people use and abuse HTTP.  My regular visits to StackOverflow, conversations on Twitter and a healthy dose of mailing list subscriptions guarantee there is no shortage of new material.

It appears that I have become addicted to trying to help people solve their problems and, fortuitously for all, Runscope has offered me the opportunity to do that full time.  I plan to be the carbon-based Runscope service, providing guidance, in all shapes and sizes, to the developer community on how to get the most out of HTTP.

I recently visited the Runscope offices in San Francisco and got to meet all of the team.  There is a fabulous atmosphere in the office and I am somewhat sad that I will be working remotely from my home office in Montreal.

I’ve known John Sheehan digitally for what seems like a lifetime in internet years, but this was the first time we had met in person.   I’ve always admired John’s enthusiasm and ability to get stuff done.  There have been many occasions when I have heard John in my head saying “Less talk, more code”.  It’s easy to get lost in architectural debates and forget about actually delivering tangible value.

My visit to Runscope convinced me it was a company I wanted to be a part of.  John and Frank have brought together a team of experienced developers who genuinely care about the product they are building. 

Once you have used the Runscope tooling, you will see it is a high quality service that addresses a very real need for today’s API developers.  What you don’t see is the infrastructure that exists behind that tooling.  This is one case where you really do want to see how the sausage is made, because the same care has been put into deployment, monitoring, analysis, reliability and development as you see on the public facing service.   

This is a service that has been built to evolve and I am very excited to be a part of it.  If it sounds like something you might also want to be a part of, we’re hiring.


Taiseer Joudeh: Building OData Service using ASP.Net Web API Tutorial – Part 3

This is the third part of Building OData Service using Asp.Net Web API. The topics we’ll cover are:

CRUD Operations on OData endpoint using Asp.Net Web API

In this post we’ll add another OData controller which will support all the CRUD operations. As we discussed before, OData follows the conventions of HTTP and REST; so if we want to create a resource we’ll issue an HTTP POST, if we want to delete a resource we’ll issue an HTTP DELETE, and so on. One more thing we want to add support for is partial updates (HTTP PATCH), which are very efficient when we update only certain properties on the entity; the request payload for PATCH comes as key/value pairs and contains only the changed properties.

Step 1: Add OData Controller (Support for CRUD operations)

Let’s add a new Web API Controller which will handle all HTTP requests issued against the OData URI “/odata/Tutors”. To do this right-click on Controllers folder->Select Add->Name the controller “TutorsController” and choose “Empty API Controller” Template.

As we did before, we have to derive our “TutorsController” from the class “EntitySetController“; then we’ll add support for the Get() and GetEntityByKey(int key) methods and implement creating a new Tutor, so that when the client issues an HTTP POST request to the URI “/odata/Tutors” the “CreateEntity(Tutor entity)” method will be responsible for handling the request. The code will look as below:

public class TutorsController : EntitySetController<Tutor, int>
    {
        LearningContext ctx = new LearningContext();

        [Queryable()]
        public override IQueryable<Tutor> Get()
        {
            return ctx.Tutors.AsQueryable();
        }

        protected override Tutor GetEntityByKey(int key)
        {
            return ctx.Tutors.Find(key);
        }

        protected override int GetKey(Tutor entity)
        {
            return entity.Id;
        }

        protected override Tutor CreateEntity(Tutor entity)
        {
            Tutor insertedTutor = entity;
            insertedTutor.UserName = string.Format("{0}.{1}",entity.FirstName, entity.LastName);
            insertedTutor.Password = Helpers.RandomString(8);
            ctx.Tutors.Add(insertedTutor);
            ctx.SaveChanges();
            return entity;
        }
  }

By looking at the code above you’ll notice that we overrode the CreateEntity() method, which accepts a strongly typed Tutor entity. The nice thing here is that once you override the CreateEntity() method, the base class “EntitySetController” will be responsible for returning the right HTTP response message (201 Created) and adding the correct Location header for the created resource. To test this out let’s issue an HTTP POST request to the URI “/odata/Tutors”; the POST request will look as below:

[Image: HTTP POST request to /odata/Tutors]

Step 2: Testing OData Prefer Header

By looking at the request above you will notice that a 201 Created status code is returned and the response body contains the newly created Tutor. But what if the client does not want the server to send back a duplicate copy of the newly created Tutor? The nice thing here is that OData allows us to send a Prefer header with the request to indicate whether we need the created entity returned to the client. The default value for the header is “return-content”, so if we issue another POST request and pass the Prefer header with the value “return-no-content”, the server will create the resource and return a 204 No Content response.
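For example, a request that opts out of the response body might look like this (the host and payload values are illustrative):

```http
POST /odata/Tutors HTTP/1.1
Host: localhost
Content-Type: application/json
Prefer: return-no-content

{ "FirstName": "Sample", "LastName": "Tutor" }
```

The server would then create the Tutor and respond with 204 No Content instead of echoing the created entity.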

Step 3: Add Support for Resource Deletion

Adding support for delete is simple; we need to override the method Delete(int key) so that any HTTP DELETE request sent to the URI “/odata/Tutors(key)” will be handled by this method. Let’s implement this in the code below:

public override void Delete(int key)
	{
		var tutor = ctx.Tutors.Find(key);
		if (tutor == null) 
		{
			throw new HttpResponseException(HttpStatusCode.NotFound);
		}

		ctx.Tutors.Remove(tutor);
		ctx.SaveChanges();
	}

To test this out we need to issue an HTTP DELETE request to the URI “/odata/Tutors(12)”, and if the tutor exists the HTTP response will be 204 No Content. What’s worth mentioning here is that if we try to delete a tutor which doesn’t exist, you will notice from the code above that I’m throwing an HttpResponseException with status code 404 along with an empty response body; there is nothing wrong with this, but if you want to build an OData compliant service then errors should be handled in a different way: we need to return an HttpResponseException with an object of type “Microsoft.Data.OData.ODataError” in the response body.

Step 4: Return Compliant OData Errors

To fix the issue above I’ll create a helper class which returns an ODataError; we’ll use this class in different methods and in other controllers, so add a new class named “Helpers” and paste the code below:

public static class Helpers
    {
        public static HttpResponseException ResourceNotFoundError(HttpRequestMessage request)
        {
            HttpResponseException httpException;
            HttpResponseMessage response;
            ODataError error;

            error = new ODataError
            {
                Message = "Resource Not Found - 404",
                ErrorCode = "NotFound"
            };

            response = request.CreateResponse(HttpStatusCode.NotFound, error);

            httpException = new HttpResponseException(response);

            return httpException;
        }
    }

By looking at the code above you will notice that there is nothing special here; we are only returning an object of type ODataError in the response body. You can set the Message and ErrorCode properties to something meaningful for your OData service clients. Now instead of throwing the exception directly in the controller we’ll call this helper method as in the code below:

if (tutor == null) 
	{
		throw Helpers.ResourceNotFoundError(Request);
	}

Step 5: Add Support for Partial Updates (PATCH)

In cases where we want to update one or two properties of a resource which has thirty properties, it is much more efficient to send only the changed properties over the wire; it is also more efficient at the database level if we generate an UPDATE statement containing the updated fields only. To achieve this we need to override the method PatchEntity() as in the code below:

protected override Tutor PatchEntity(int key, Delta<Tutor> patch)
        {
            var tutor = ctx.Tutors.Find(key);
            if (tutor == null)
            {
                throw Helpers.ResourceNotFoundError(Request);
            }   
            patch.Patch(tutor);
            ctx.SaveChanges();
            return tutor;
        }

As I mentioned before, the changed properties will be sent in key/value form in the request body, thanks to the Delta<T> OData class which makes it easy to perform partial updates on any entity. To test this out we’ll issue an HTTP PATCH request to the URI “/odata/Tutors(10)” where we modify the LastName property only for this Tutor; the request will be as below:
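Such a request might look like the following (the host and the new value are illustrative):

```http
PATCH /odata/Tutors(10) HTTP/1.1
Host: localhost
Content-Type: application/json

{ "LastName": "NewLastName" }
```

Only LastName appears in the payload, so only that property is patched onto the tracked entity.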

[Image: HTTP PATCH request to /odata/Tutors(10)]

Before we send the request, let’s open SQL Server Profiler to monitor the database query which will be generated to update this Tutor. You will notice from the image below how efficient the PATCH update is when we want to modify certain properties on an entity; the generated query updates only the changed property.

[Image: SQL Server Profiler showing the generated UPDATE statement]

In the next post we’ll see how we can consume this OData service from a Single Page Application built using AngularJS and Breeze.js.

Please drop your questions and comments in the comments section below.

Source code is available on GitHub.

The post Building OData Service using ASP.Net Web API Tutorial – Part 3 appeared first on Bit of Technology.


Taiseer Joudeh: Building OData Service using ASP.Net Web API Tutorial – Part 2

This is the second part of Building OData Service using Asp.Net Web API. The topics we’ll cover are:

Create read-only OData endpoint using Asp.Net Web API

In this tutorial I’ll be using the same solution I used in the previous tutorial, but I will start a new Web API 2 project and then we’ll implement the OData service using Web API. The source code for the new OData Service project can be found on my GitHub repository.

Practical example covering how to build OData Service

To keep things simple we’ll depend on my previous tutorial’s eLearning API concepts to demonstrate how to build an OData Service. All we want to use from the eLearning API is its data access layer and database model; in other words, we’ll use only the Learning.Data project. I recommend you check how we built the Learning.Data project here, as the navigation properties between database entities are important for understanding the relations that will be generated between OData entities.

Once you have your database access layer project ready then you can follow along with me to create new Learning.ODataService project.

I will assume that you have Web Tools 2013.1 for VS 2012 installed on your machine or that you have VS 2013, so you can use the ASP.NET Web API 2 template directly, which will add the needed assemblies for ASP.NET Web API 2.

Step 1: Create new empty ASP.Net Web API 2 project

We’ll start by creating a new empty ASP.NET Web API 2 project as in the image below; you can name your project “Learning.ODataService”, and do not forget to choose .NET Framework 4.5. Once the project is created we need to install Entity Framework version 6 using the NuGet package manager or the NuGet package console; the package we’ll install is named “EntityFramework“. When the package is installed, we need to add a reference to the class library “Learning.Data” which will act as our database access layer.

[Image: creating the Learning.ODataService project]

Step 2: Add support for OData to Web API

By default ASP.Net Web API doesn’t come with support for OData; to add this we need to install the NuGet package named “Microsoft ASP.NET Web API 2.1 OData” into our Web API project. To do this open the package manager console and type “Install-Package Microsoft.AspNet.WebApi.OData”.

Step 3: Configure OData Routes

The ASP.Net Web API 2 template we used has by default a class named “WebApiConfig” inside the “App_Start” folder. When you open this class you will notice that inside the “Register” method it has a “config.MapHttpAttributeRoutes()” line and a “DefaultApi” route which is used to configure traditional Web API routes. The nice thing here is that we can define OData endpoints along with traditional Web API endpoints in the same application.

In this tutorial we want to define only OData endpoints, so feel free to delete the default route named “DefaultApi” as well as the “config.MapHttpAttributeRoutes()” line.

Now you need to replace the code in “WebApiConfig” class with the code below:

public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            config.Routes.MapODataRoute("elearningOData", "OData", GenerateEdmModel());
        }

        private static IEdmModel GenerateEdmModel()
        {
            var builder = new ODataConventionModelBuilder();
            builder.EntitySet<Course>("Courses");
            builder.EntitySet<Enrollment>("Enrollments");
            builder.EntitySet<Subject>("Subjects");
	    builder.EntitySet<Tutor>("Tutors");

            return builder.GetEdmModel();
        }
    }

The code above is responsible for defining the routes for our OData service and generating an Entity Data Model (EDM) for our OData service.

The EDM is responsible for defining the type system, relationships, and actions that can be expressed in the OData formats. There are two approaches to defining an EDM: the first, which depends on conventions, uses the class ODataConventionModelBuilder, and the second uses the class ODataModelBuilder.

We’ll use the ODataConventionModelBuilder as it will depend on the navigation properties defined between your entities to generate the association sets and relationship links. This method requires less code to write. If you want more flexibility and control over the association sets then you have to use the ODataModelBuilder approach.

So we will add four different entities to the model builder; note that the string parameter “Courses” defines the entity set name and should match the controller name, so our controller must be named “CoursesController”.

MapODataRoute is an extension method which becomes available after we install the OData package; it is responsible for defining the routes for our OData service. The first parameter of this method is a friendly name that is not visible to service clients, and the second parameter is the URI prefix for the OData endpoint, so in our case the URI for the Courses resource will be: http://hostname/odata/Courses. As we mentioned before you can have multiple OData endpoints in the same application; all you need to do is call MapODataRoute with a different URI prefix.
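For example, a second endpoint could be registered in the same Register method under a different prefix (the "admin" prefix and the GenerateAdminEdmModel method are hypothetical names used for illustration):

```csharp
// Hypothetical second OData endpoint; its resources would be exposed
// under http://hostname/admin/... instead of http://hostname/odata/...
config.Routes.MapODataRoute("adminOData", "admin", GenerateAdminEdmModel());
```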

Step 4: Add first OData Controller (Read only controller)

Now we want to add a Web API Controller which will handle HTTP requests issued against the OData URI “/odata/Courses”. To do this right-click on Controllers folder->Select Add->Name the controller “CoursesController” and choose “Empty API Controller” Template.

The first thing to do here is to derive our “CoursesController” from “System.Web.Http.OData.EntitySetController“; this base class takes two generic type parameters, the first being the entity type mapped to this controller, and the second the data type of the primary key for this entity. The code for this OData controller will look as below:

public class CoursesController : EntitySetController<Course, int>
    {
        LearningContext ctx = new LearningContext();

        [Queryable(PageSize=10)]
        public override IQueryable<Course> Get()
        {
            return ctx.Courses.AsQueryable();
        }

        protected override Course GetEntityByKey(int key)
        {
            return ctx.Courses.Find(key);
        }
    }

The EntitySetController class has a number of abstract and virtual methods for updating or querying an entity, so you will notice that there are many methods you can override such as: Get(), GetEntityByKey(), CreateEntity(), PatchEntity(), UpdateEntity(), etc.

As I mentioned before, this controller will be a read only controller, which means that I’ll implement only read support for URI “/odata/Courses”, so by looking at the code above we’ve implemented the following:

  • Overriding the Get() method and attributing it with the [Queryable] attribute, which allows clients to issue HTTP GET requests to this endpoint where they can encode filter, order by, and pagination parameters in the URI. The Queryable attribute is an action filter which is responsible for parsing and validating the query sent in the URI. This attribute is useful for protecting your service endpoint from clients who might issue a query which takes a long time to execute or asks for large sets of data; for example, I’ve set the PageSize of the response to return only 10 records at a time. You can read more about the set of available parameters here.
  • Overriding the GetEntityByKey(int key) method, which allows clients to issue an HTTP GET request to a single Course in the form “/odata/Courses(5)”; note that the key data type is integer as it represents the primary key in the Courses entity.

Step 5: Testing the Courses Controller

Now we need to test the controller. We’ll use Fiddler or Postman to compose the HTTP GET requests; the Accept header for all requests will be application/json so we’ll get a JSON Light response. You can check how the results are formatted if you pass application/json;odata=verbose or application/atom+xml instead. The scenarios we want to cover are as below:

  • use $filter: We need to filter all courses where the duration of the course is greater than 4 hours
    • Get Request: http://hostname/OData/Courses?$filter=Duration%20gt%204
  • use $orderby, $top: We need to order by course name and take the top 5 records
    • Get Request: http://hostname/OData/Courses?$orderby=Name&$top=5
  • use $select: We need to get the Name and Duration fields only for all courses
    • Get Request: http://hostname/OData/Courses?$select=Name,Duration
  • $expand: We need to get related Course Tutor and Subject for each Course and order the Courses by Name descending
    • Get Request: http://hostname/OData/Courses?$expand=CourseTutor,CourseSubject&$orderby=Name desc

Notes about the $expand query:

  • The $expand option allows clients to ask for related entities inline based on the navigation properties defined between the entities. As you notice, $expand accepts comma separated values so you can ask for different entities at the same time; for more information about the $expand query you can visit this link.
  • By looking at the JSON response below for the $expand query we requested, you will notice that the UserName and Password fields for each Tutor are returned in the response, which doesn’t make sense.

{
  "CourseTutor": {
    "Id": 5,
    "Email": "Iyad.Radi@gmail.com",
    "UserName": "IyadRadi",
    "Password": "MXELYDAC",
    "FirstName": "Iyad",
    "LastName": "Radi",
    "Gender": "Male"
  },
  "CourseSubject": {
    "Id": 5,
    "Name": "Commerce"
  },
  "Id": 15,
  "Name": "Commerce, Business Studies and Economics Teaching Methods 1",
  "Duration": 3,
  "Description": "The course will talk in depth about: Commerce, Business Studies and Economics Teaching Methods 1"
}

The nice thing here is that we can fix this issue by ignoring those two properties in the EDM model only, without changing any property on the data model. To do this open the class “WebApiConfig” and replace the code in the method GenerateEdmModel() with the code below; notice how we specify the ignored properties on the Tutors entity set:

private static IEdmModel GenerateEdmModel()
        {
            var builder = new ODataConventionModelBuilder();
            builder.EntitySet<Course>("Courses");
            builder.EntitySet<Enrollment>("Enrollments");
            builder.EntitySet<Subject>("Subjects");

            var tutorsEntitySet = builder.EntitySet<Tutor>("Tutors");

            tutorsEntitySet.EntityType.Ignore(s => s.UserName);
            tutorsEntitySet.EntityType.Ignore(s => s.Password);

            return builder.GetEdmModel();
        }

In the next post we’ll see how we can implement a controller which will support full CRUD operations on the Tutors resource.

Please drop your questions and comments in the comments section below.

Source code is available on GitHub.

The post Building OData Service using ASP.Net Web API Tutorial – Part 2 appeared first on Bit of Technology.


Dominick Baier: New Pluralsight Course: “Web API v2 Security”

It is finally online! Hope you like it.

http://pluralsight.com/training/Courses/TableOfContents/webapi-v2-security


Filed under: ASP.NET, AuthorizationServer, Katana, OAuth, OWIN, WebAPI


Taiseer Joudeh: Building OData Service using ASP.Net Web API Tutorial – Part 1

In my previous tutorial we covered different aspects of how to build a RESTful service using Asp.NET Web API; in this multi-part tutorial series we’ll be building an OData service following the same REST architecture we talked about previously.

Before jumping into code samples, let’s talk a little bit about the OData definition and specifications.

OData Introduction

OData stands for Open Data Protocol. It is a standard for providing access to data over the internet; OData is championed by Microsoft as an open specification and adopted by many different platforms. Using OData allows you to build a uniform way to expose full featured data APIs (querying, updating). The nice thing about OData is that it builds on top of mature, standard web technologies and protocols such as HTTP, the Atom Publishing Protocol and JSON, and follows the REST architecture in order to provide data access from different applications, services and data stores.

It is well known that any service that follows the REST principles will adhere to the aspects below:

  • Resources are identified by a unique URI only.
  • Actions on the resource should be done using HTTP verbs (GET, POST, PUT, and DELETE).
  • Enable content negotiation to allow clients specifying the format of the data type returned: XML format (AtomPub), or JSON format (Verbose/Light).


Querying Existing OData Service

Before we start building our OData service, let’s examine querying an existing OData service published on the internet which uses the Northwind database; the base URI for this service is http://services.odata.org/Northwind/Northwind.svc. You can use any REST client such as Fiddler or Postman to compose those HTTP requests; as we are only querying the service now (not sending updates) you can use your favorite browser as well. Note that you can also use LINQPad to generate and test complex OData queries.

The table below illustrates some of the OData query options which can be used to query the service:

  • $filter: filters the results based on a Boolean condition, e.g. http://services.odata.org/Northwind/Northwind.svc/Products?$filter=ProductName eq 'Tofu' returns the product named 'Tofu'.
  • $orderby: sorts the results, e.g. http://services.odata.org/Northwind/Northwind.svc/Products?$orderby=ProductName sorts the products by product name.
  • $skip: skips the first n results, used for server side paging, e.g. http://services.odata.org/Northwind/Northwind.svc/Products?$skip=10.
  • $top: returns only the first n results, used for server side paging, e.g. http://services.odata.org/Northwind/Northwind.svc/Products?$top=10.
  • $select: selects which properties are returned in the response, e.g. http://services.odata.org/Northwind/Northwind.svc/Products?$filter=ProductName eq 'Tofu'&$select=ProductName,UnitPrice returns ProductName and UnitPrice only.
  • $expand: expands the related entities inline, e.g. http://services.odata.org/Northwind/Northwind.svc/Products?$expand=Supplier expands the Supplier entity for each product.
  • $inlinecount: informs the server to return the total count of matching records in the response, e.g. http://services.odata.org/Northwind/Northwind.svc/Products?$inlinecount=allpages.

As we see in the list above, we can combine different query options together to build complex search criteria. For example, if we want to implement server side paging we can issue the following HTTP GET request: http://services.odata.org/Northwind/Northwind.svc/Products?$top=10&$skip=0&$orderby=ProductName&$inlinecount=allpages, where $skip represents the number of records to skip (usually PageSize x PageIndex) and $inlinecount asks for the total number of products. To view the complete list of available query options you can visit this link.
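As a sketch of that paging arithmetic (the variable names are illustrative):

```csharp
// Illustrative server-side paging parameters: page index 2 with a page
// size of 10 skips the first 20 records (PageSize x PageIndex).
int pageSize = 10;
int pageIndex = 2;
int skip = pageSize * pageIndex; // 20

string query = string.Format(
    "?$top={0}&$skip={1}&$orderby=ProductName&$inlinecount=allpages",
    pageSize, skip);
// query is "?$top=10&$skip=20&$orderby=ProductName&$inlinecount=allpages"
```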

As we stated before, the OData service allows content negotiation, so the client can choose the response format by setting the Accept header of the request. Each response format has its own pros and cons; the list below illustrates the differences:

 
  • XML (Atom Publishing): supports OData versions 1, 2 and 3; returns data and metadata (with hyperlinks); payload size for the whole Products entity set is 28.67 KB; not easy to consume in mobile clients; Accept header: application/atom+xml.
  • JSON Verbose: supports OData versions 1, 2 and 3; returns data and metadata; payload size is 14.34 KB (smaller by 50%); easy to consume in mobile clients; Accept header: application/json;odata=verbose.
  • JSON Light: supports OData version 3 only; returns no metadata, just the data; payload size is 4.25 KB (smaller by 75%); easy to consume in mobile clients; Accept header: application/json.

I’ve decided to split this tutorial into four parts, as below:

Source Code for this series is available on GitHub, you can download it locally or you can fork it. If you have any question or if there is nothing unclear please drop me a comment.

The post Building OData Service using ASP.Net Web API Tutorial – Part 1 appeared first on Bit of Technology.


Dominick Baier: List of Libraries and Projects for OpenID Connect and JWT

..can be found here

http://openid.net/developers/libraries/


Filed under: OAuth, OpenID Connect, WebAPI


Dominick Baier: Announcing Thinktecture IdentityServer v3 – Preview 1

For the last few months we’ve been heads down re-writing IdentityServer from scratch (see here for background) – and we are now at a point where we think we have enough up and running to show it to you!

What we’ve done so far

  • Started with File –> New
  • Implemented OpenID Connect basic and implicit client profile (including support for form post response mode)
  • Implemented support for OpenID Connect discovery documents and session logout
  • Implemented OAuth2 code, client credentials, password and assertion grants
  • Created a general purpose login page and consent screen for local and external accounts
    • created out of the box support for MembershipReboot and ASP.NET Identity
    • integrated existing Katana authentication middleware for social providers
      • and made that all pluggable
  • Defined an authorization enforcement policy around clients, flows, redirect URIs and scopes
  • Designed everything to run on minimal data access interfaces so you can seamlessly scale from in-memory objects to simple config files up to relational or document databases for configuration and state management
  • Designed everything to be API-first
  • Defined several extensibility points that allow customization of request validation, token creation, claims acquisition and transformation and more
    • and yes, we don’t use MEF anymore …
  • Split up IdSrv into composable components like core token engine and authentication, configuration APIs, configuration UIs and user management
    • These components use OWIN/Katana and Web API as abstractions which means we have quite a bit of flexibility when it comes to logical hosting – embeddable in an existing application or standalone
    • When it comes to physical hosting, we have no dependency on IIS and System.Web which means you can use a command line, OWIN host, an NT Service, of course IIS or any other OWIN/Katana compatible server

Minimal startup code:

public void Configuration(IAppBuilder app)
{
    app.Map("/core", coreApp =>
        {
            var factory = TestOptionsFactory.Create(
                issuerUri:         "https://idsrv3.com",
                siteName:          "Thinktecture IdentityServer v3",
                certificateName:   "CN=idsrv3test",
                publicHostAddress: "http://localhost:3333");

            var opts = new IdentityServerCoreOptions
            {
                Factory = factory,
            };

            coreApp.UseIdentityServerCore(opts);
        });
}

What’s missing?

  • quite a bit, e.g.
  • a persistence layer for configuration and state – everything is in-memory right now which is good enough for testing
  • Refresh tokens
  • Admin UI and APIs
  • OpenID Connect session management and cleanup
  • Support for WS-Federation and OpenID Connect based identity providers for federation
  • A lot more testing
  • Your feedback!

What’s next?

  • We’ve defined several milestones over the next months for implementing the next rounds of features. We currently plan to be done with v1 around end of summer.
  • Participate in OpenID Connect compatibility and interop testing (see here).

Where to get it?

The GitHub repo is here, the issue tracker here and the wiki here. We also recorded videos on Overview, Samples Walkthrough and Extensibility. Check them out…

Oh – and I should mention – while designing IdentityServer v3 we realized that we also really need a good solution for managing users, identity, claims etc. – and that this should ideally be a separate project – so I’d also like to announce Thinktecture IdentityManager – head over to Brock’s blog to find out more!

Looking forward to your feedback!


Filed under: ASP.NET, AuthorizationServer, IdentityModel, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Filip Woj: Opt in and opt out from ASP.NET Web API Help Page

The autogenerated ASP.NET Web API help page is an extremely useful tool for documenting your Web API. It can not only present information about the routes, but also show sample requests and responses in all of supported media type formats, … Continue reading

The post Opt in and opt out from ASP.NET Web API Help Page appeared first on StrathWeb.


Henrik F. Nielsen: Push Notifications Using Notification Hub and .NET Backend (Link)

Posted the blog Push Notifications Using Notification Hub and .NET Backend on the Azure Mobile Services Team Blog explaining how to leverage Azure Notification Hubs and Azure Mobile Services to push notifications to any mobile platform.

Henrik


Dominick Baier: Integrating AuthorizationServer with Auth0

AuthorizationServer is a lightweight OAuth2 implementation that is designed to integrate with arbitrary identity management systems. I wrote about integration with Thinktecture IdentityServer, ADFS and even plain Windows integrated authentication before.

Another really compelling and feature-rich identity management system is Auth0. Auth0 supports local account databases, federation with almost anything you can imagine, claims transformation, UI customization, and great documentation and SDKs. The fact that it is also available as an on-premises appliance (in addition to their cloud offering) is especially interesting for my European customers and me.

Here’s what I had to do to integrate AuthorizationServer with Auth0.

1 Create a new application in Auth0
Auth0 has support for many pre-packaged application types like Salesforce, Office 365 or SharePoint. Since AS is a WIF-based application, I chose WS-Fed (WIF) App.

1 Create App

Next, you can choose which identity providers or account types you want to allow for your new application (Auth0 calls that connections). I decided to start with local accounts only, and to add other connections later once I have the basic setup up and running.

2 Select Connections

One thing I especially like about Auth0 is their personalized documentation and walkthroughs. All of the samples and config snippets they show already have your URLs, keys etc. in it, so you can simply copy and paste the “sample” configuration to your local project. You start by entering some basic information about your application:

3 Initial Config

..and are being presented with a fully working WIF configuration:

4 Config Snippet

Another option would be to point your Identity & Access Tool or the ASP.NET project template to your personalized WS-Federation metadata endpoint. Very nice!

Next, I created a user account in Auth0 that should act as an AuthorizationServer administrator:

5 Admin User

2 Setup AuthorizationServer
I then grabbed a fresh copy of AuthorizationServer from GitHub and did the standard installation steps (see here).
Since Auth0 already gave me a ready-to-use federation configuration, I only had to copy it over to the identityModel.config and identityModel.services.config files (in the config folder) respectively. Then I ran the initial configuration “wizard” and entered the user ID of the admin account I created earlier.

6 AS Initial Config

Now when I try to enter the admin UI, I am presented with the Auth0 login screen and I can start creating AS applications, clients, etc. (see also this walkthrough).

7 Auth0 Login

3 Using AuthorizationServer with Auth0 Accounts
To do some testing, I quickly created a few more local accounts (alice and bob of course) and used the standard AS sample to inspect the resulting access tokens. Here’s the output for the code flow sample client:

{
    "iss": "AS",
    "aud": "users",
    "nbf": 1394728033,
    "exp": 1394731633,
    "client_id": "codeclient",
    "scope": [
        "read",
        "search"
    ],
    "sub": "auth0|52e50b42f66ae38e8f00057e"
}

Auth0 uses a NameIdentifier claim and the idp|userid format to uniquely identify a user account. AS understands that by default and strips away all other claims. If you want to pass through all claims from Auth0, you can set the filterIncomingClaims appSetting in web.config to false, which results in all profile claims, e.g.:

{
  "iss": "AS",
  "aud": "users",
  "nbf": 1394728205,
  "exp": 1394731805,
  "client_id": "codeclient",
  "scope": [
    "read",
    "search"
  ],
  "sub": "auth0|52e50b42f66ae38e8f00057e",
  "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier": "auth0|52e50b42f66ae38e8f00057e",
  "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress": "bob@leastprivilege.com",
  "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name": "bob@leastprivilege.com",
  "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn": "bob@leastprivilege.com",
  "http://schemas.auth0.com/identities/default/provider": "auth0",
  "http://schemas.auth0.com/identities/default/connection": "Username-Password-Authentication",
  "http://schemas.auth0.com/identities/default/isSocial": "false",
  "http://schemas.auth0.com/clientID": "wU1MA7nUlpoMryMUqrd39CeXTMio1O6x",
  "http://schemas.auth0.com/created_at": "Sun Jan 26 2014 13:18:58 GMT+0000 (UTC)",
  "http://schemas.auth0.com/email_verified": "false",
  "http://schemas.auth0.com/nickname": "bob",
  "http://schemas.auth0.com/picture": "https://secure.gravatar.com/…silhouette80.png",
  "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod": "http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/password",
  "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant": "2014-03-14T00:29:57.101Z"
}
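For reference, that switch is a plain appSetting in web.config (a sketch – only the key name comes from the text above):

```xml
<appSettings>
  <!-- pass through all incoming Auth0 claims instead of filtering down to NameIdentifier -->
  <add key="filterIncomingClaims" value="false" />
</appSettings>
```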

So you can get a whole lot of information about the Auth0 user from their authentication token. You can modify the claims via the Auth0 profile editor in the web interface, or adjust the claims transformation logic (in Auth0 or in AS) to pick only the claims that are relevant to your APIs.

8 Profile editor
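The AS side of that transformation could be sketched like this (hypothetical code, not from the post – it assumes a WIF ClaimsAuthenticationManager registered in identityModel.config and a hand-picked list of claim types to keep):

```csharp
using System.Linq;
using System.Security.Claims;

// Hypothetical transformation step: keep only the claim types our APIs care about.
public class RelevantClaimsOnly : ClaimsAuthenticationManager
{
    static readonly string[] keep =
    {
        ClaimTypes.NameIdentifier,
        ClaimTypes.Email
    };

    public override ClaimsPrincipal Authenticate(string resourceName, ClaimsPrincipal incomingPrincipal)
    {
        // filter the incoming Auth0 claim set down to the relevant subset
        var filtered = incomingPrincipal.Claims.Where(c => keep.Contains(c.Type));
        return new ClaimsPrincipal(new ClaimsIdentity(filtered, "Auth0"));
    }
}
```

The class would then be referenced in the claimsAuthenticationManager element of identityModel.config so it runs on every incoming token.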

4 Adding external Accounts
Auth0 also allows adding external identity providers, e.g. social logins like Facebook or Google as well as enterprise systems like ADFS, WAAD, Ping Identity or LDAP and SAML2p based systems. You can simply “activate” those connections per application.

9 External connections

Once activated, you will see the new identity providers on the login dialog.

5 Resource Owner Password Flow and programmatic authentication with Auth0
For supporting the OAuth2 resource owner password flow, AS needs programmatic access to the Auth0 authentication endpoint. That’s easily possible too, and via the excellent documentation system, you can inspect the relevant endpoint as well as a sample payload.

10 API Docs

The above endpoint will return an OpenID Connect style JWT identity token. With that information, you can use the standard AS extensibility point for resource owner flow to programmatically authenticate users against the Auth0 user store:

public class Auth0ResourceOwnerCredentialValidation : IResourceOwnerCredentialValidation
{
    string endpoint = "https://leastprivilege.auth0.com/oauth/ro";
    string issuer = "https://leastprivilege.auth0.com/";
    string client_id = "wU…6x";
    string client_secret = "j6a…Z9";

    public ClaimsPrincipal Validate(string userName, string password)
    {
        var client = new Thinktecture.IdentityModel.Client.OAuth2Client(
            new Uri(endpoint),
            client_id,
            client_secret,
            OAuth2Client.ClientAuthenticationStyle.PostValues);

        var response = client.RequestResourceOwnerPasswordAsync(
            userName,
            password,
            "openid profile",
            new Dictionary<string, string>
            {
                { "connection", "Username-Password-Authentication" }
            }).Result;

        if (!response.IsError)
        {
            return FederatedAuthentication.FederationConfiguration
                                          .IdentityConfiguration
                                          .ClaimsAuthenticationManager
                                          .Authenticate("", ValidateIdentityToken(response.IdentityToken));
        }

        throw new InvalidOperationException(response.Error);
    }

    private ClaimsPrincipal ValidateIdentityToken(string identityToken)
    {
        var handler = new JwtSecurityTokenHandler();

        var parameters = new TokenValidationParameters
        {
            AllowedAudience = client_id,
            ValidIssuer = issuer,
            SigningToken = new BinarySecretSecurityToken(Base64Url.Decode(client_secret))
        };

        return handler.ValidateToken(identityToken, parameters);
    }
}

4 Where to go from here
There are some advanced features I haven’t tried but wanted to mention. First of all, you have full control over the login page look and feel by updating the HTML/script/CSS of your tenant. You can also write dynamic claims transformation rules using JavaScript, which looks pretty powerful. And last but not least, when you run Auth0 on-premises, you can also connect it to Active Directory as well as custom databases like SQL Server.
So all in all this is a pretty complete package when you are looking for an out-of-the-box identity and federation solution – and together with AuthorizationServer you get an OAuth2 application authorization model backed by all the various authentication options that Auth0 provides. Nice!


Filed under: ASP.NET, AuthorizationServer, OAuth, WebAPI


Dominick Baier: The Web API v2 OAuth2 Authorization Server Middleware–Is it worth it?

Adding the concept of an authorization server to your web APIs is the recommended architecture for managing authentication and authorization. But writing such a service from scratch is not an easy task.

To simplify that, Microsoft included an OAuth2 based authorization server “toolkit” as part of the Katana project, which is also used in the standard Web API templates that ship with Visual Studio 2013. I get a lot of questions about how this middleware works, whether I like it, what the limitations are and if I would use it at all. To make this discussion easier in the future – here’s my take.

What I like
Microsoft’s intentions were really noble – they tried to make it very easy for the “average” (read: not security expert) developer to do the right thing – namely push them to an architecture where authentication and authorization decisions can be abstracted away into a separate “component” while using a standard protocol (OAuth2). This would result in token-based authentication in general – and would also get rid of the cookie and CSRF problems for SPAs in particular. The pit of success so to speak.

The middleware produces encrypted and signed tokens (protected with the usual machine key mechanism) which can be automatically consumed by the corresponding token middleware, creates separate endpoints, forces you to at least think about client validation and requires SSL by default.

The two most common flows I see for Web API clients are resource owner flow and implicit flow. Resource owner flow is now very easy to implement with the new AS middleware. It really just takes a few lines of code and you are done. I like that a lot (if resource owner flow is the right architecture for you – which is a different discussion). I wrote about the implementation here.

The same applies to client credentials flow (not so common), and custom grant types (even less common).
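For context, wiring up the Katana middleware for resource owner flow is roughly the following sketch (it uses the public OAuthAuthorizationServerOptions API; MyOAuthProvider is a hypothetical provider class that overrides GrantResourceOwnerCredentials to validate the user):

```csharp
using System;
using Microsoft.Owin;
using Microsoft.Owin.Security.OAuth;
using Owin;

public partial class Startup
{
    public void ConfigureAuth(IAppBuilder app)
    {
        // stand up the token endpoint that issues access tokens for resource owner flow
        app.UseOAuthAuthorizationServer(new OAuthAuthorizationServerOptions
        {
            TokenEndpointPath = new PathString("/token"),
            AccessTokenExpireTimeSpan = TimeSpan.FromHours(1),
            Provider = new MyOAuthProvider() // hypothetical: validates client and user credentials
        });

        // consume the tokens the server middleware produces on incoming API calls
        app.UseOAuthBearerAuthentication(new OAuthBearerAuthenticationOptions());
    }
}
```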

What I don’t like
If you want to do anything beyond what I described so far, the complexity increases disproportionately – e.g. adding a “simple” extension to resource owner flow – namely the concept of refresh tokens – suddenly forces you to understand OAuth2 (see here) and the security implications around it. You also have to implement persistence yourself – which on its own is challenging if you follow security best practices and the OAuth2 threat model. No pit of success.

Another example is implicit flow, which is often the more appropriate flow for native or JS-based applications. Unfortunately the middleware implementation is hampered by the fact that there is no Katana or Web API based view engine yet, which means you need to use an additional framework when you want to render proper login or consent screens. Scattering the logic of the flow between views, controllers and middleware providers doesn’t make it more intuitive. And when you’ve made it that far, it seems some parts of the protocol are just missing – like returning proper errors from the UI.

In other words – the middleware forces you to understand the underlying protocol, forces you to understand the middleware itself and all its various callbacks and extensibility points, makes you implement the hard (and security related) things yourself, and in the end you have to live with the restrictions that the thin abstraction imposes on you.

Next is documentation – the built-in templates don’t do a very good job of isolating the features and trade-offs of the middleware. They combine all the new concepts of Katana, OWIN, OAuth2 and ASP.NET Identity into a hard-to-follow sample “application”. It took me three blog posts (see here) to describe how the “Individual Accounts” sample works – and I am sure I missed some of the subtleties.

Just recently Microsoft released this very long article that describes all the aspects of the OAuth2 middleware – read it here. It’s good that this document finally exists, but I am not a huge fan of this disclaimer – after you followed the 83 easy steps:

Note: This outline should not be intended to be used for creating a secure production app. This tutorial is intended to provide only an outline on how to implement an OAuth 2.0 Authorization Server using OWIN OAuth middleware.

What would I like to see instead?
Well – it is always easy to bitch and moan about other people’s work – but I was actually asked for feedback during the development of the middleware – and this is just a repeat of what I said back then.

OAuth2 is not the most straightforward protocol to implement – but it is also not too hard. When you start working on it, you realize that the “protocol aspects” of it – like query string formats or response message layouts are actually the easiest part. The hard part is state management, secure data storage, input validation etc. And my main point of criticism is, that the middleware does not help you in any way with that. It simply implements a framework that exposes a developer to a limited subset of OAuth2 without really understanding the use cases.

Much more useful would have been super easy to use, focused and specific implementations of the two most common flows for Web API apps: resource owner and implicit – including ready to use support for login and consent page, scope handling and refresh tokens. That would have also included a persistence layer and state management. Our OSS project AuthorizationServer e.g. takes care of all that.

So overall I’d conclude by saying – even though the intentions were good – the whole authorization server middleware project was a little overambitious, and I can’t really recommend using it (except for the simplest cases).


Filed under: AuthorizationServer, IdentityServer, Katana, OAuth, OWIN, WebAPI


Filip Woj: ASP.NET Web API exception logging with Raygun.io

Jon Galloway recently wrote a monster 4-part series covering the new features of MVC 5.1 and Web API 2.1 releases. One thing he mentioned was the new IExceptionLogger for Web API, and he called out the community to provide some … Continue reading

The post ASP.NET Web API exception logging with Raygun.io appeared first on StrathWeb.


Henrik F. Nielsen: Azure Mobile Services and .NET

It’s great to be back blogging! We’ll be posting a bunch of blogs about Azure Mobile Services and in particular the new .NET Backend written using ASP.NET Web API. The blogs will be posted to the Azure Mobile Services Team Blog so if you are interested then follow us there.

First blog is about remote debugging the .NET backend from Visual Studio but there will be many more coming…

As always, you can of course also find me on twitter (@frystyk) as well as look for everything #AzureMobile.

Have fun!

Henrik


stevanus w. : #webapi101: HTTP Caching With SqlCacheDependency (Category-Products Scenario)

I don't actually have category/product tables in my database, but I will simulate them with the department/employee tables that I have used as examples throughout the development.

Department:
Id   Name
----------
 1   Finance
 2   IT
 3   Logistic

Employee:
Id   FirstName   LastName   DepartmentId
----------------------------------------
 1   Terry       Mollica    1
 2   Jason       Jack       2
 3   Mary        Ann        3

I alter the foreign key so that deleting a department automatically deletes all the employees belonging to it (certainly not good in practice, but it's only for testing purposes).

ALTER TABLE [dbo].[Employee] DROP CONSTRAINT [FK__Employee__Depart__4CA06362];
GO

ALTER TABLE [dbo].[Employee]  WITH CHECK ADD FOREIGN KEY([DepartmentId])
REFERENCES [dbo].[Department] ([Id])
ON DELETE CASCADE;
GO

#Testing.


Figure 1. api/employees/ (200 OK).



Figure 2. api/employees/3 (200 OK).



Figure 3. api/employees/3 {If-None-Match: "245638da-eb5b-4548-9fe2-3498be606c39"} (304 Not Modified).



Figure 4. api/employees/ {If-None-Match: "9fe3866a-2fc0-4538-a206-795a63391bd0"} (304 Not Modified).

Now, I delete the Logistic department (DepartmentId = 3).
DELETE FROM dbo.Department
WHERE Id = 3;



Figure 5.  api/employees/3 {If-None-Match: "245638da-eb5b-4548-9fe2-3498be606c39"} (404 Not Found).



Figure 6. api/employees/ {If-None-Match: "9fe3866a-2fc0-4538-a206-795a63391bd0"} (200 OK).


stevanus w. : #webapi101 (Web API 2): Solution 08/03/2014

Please feel free to contribute, use and/or modify any part of the code for any personal or commercial use.

Please make sure that you do your own testing before using any part of the code.

Notes:
1. Ported to ASP.NET Web API 2.
2. Includes HTTP caching with SqlCacheDependency through SqlCacheableAttribute.

 
 
Figure 1. webapi101 solution.
 



Filip Woj: Per request tracing in ASP.NET Web API

Web API allows you to plug in extensive logging mechanism through the ITraceWriter service. This will log all important events in the pipeline – such as selection of the controller, action, parameter binding and so on – all of which … Continue reading

The post Per request tracing in ASP.NET Web API appeared first on StrathWeb.


stevanus w. : #webapi101: HTTP Caching Made Easy With SqlCacheDependency

Previously, HTTP caching was implemented with an action filter attribute, with the idea of letting any controller that would like to participate have a cacheable attribute declared. However, it works only with GET all. Any POST, PUT or DELETE operation to the controller will invalidate the cache, and the next GET all will return the 200 OK response again. It's not very appealing since we don't deal with individual records. The problem is that we need to be able to automatically invalidate all the cached objects that are dependent on the object being changed. For example, when a product category record is deleted, all the cached records referencing the category, such as its products, should be invalidated (category-products scenario).


.NET, in conjunction with SQL Server, has had quite a sophisticated caching framework built in through SqlCacheDependency, which can address the above scenario via query notifications; starting from SQL Server 2005, these support row-based caching instead of only table-based caching. Therefore it may be a good way to utilize it rather than reinventing the wheel. However, please note that query notifications have restrictions and requirements, so in some scenarios they may not be the solution.

To start with, the biggest problem I encountered is not actually the caching implementation itself but integrating EF with SqlCacheDependency. It cannot be denied that EF dominates as the preferred data access layer in the world of .NET, although I still prefer the classic SqlCommand and fully controlling all the SQL queries. In this article, I don't cover the classic SqlCommand implementation.

#1. Repository's supports for the ObjectSet.

The generated SQL query has to be programmatically translatable into a SqlCommand, and the only way we can achieve that is to work with the top-level object in EF itself, which is ObjectContext. Through ObjectContext, we can get an ObjectSet&lt;T&gt; that represents an entity type. It further allows us to get the strongly-typed representation of the generated query through ObjectQuery&lt;T&gt;, from which we can extract all the required query parameters through its Parameters property.

public class Repository<T> : IRepository<T> where T : class, IIdentifiable
{
   private readonly Context _context;
   private readonly ObjectContext _oContext;

   public Repository()
   {
      _context = new Context();
      _oContext = (_context as IObjectContextAdapter).ObjectContext;
   }

   public Repository(string connectionString)
   {
      _context = new Context(connectionString);
      _oContext = (_context as IObjectContextAdapter).ObjectContext;
   }

   ...

   // ObjectContext
   public IQueryable<T> ObjectSet
   {
      get
      {
         return _oContext.CreateObjectSet<T>();
      }
   }

   public IQueryable<T> Execute(SqlCommand command)
   {
      return _oContext.Translate<T>(command.ExecuteReader()).AsQueryable<T>();
   }

   public SqlCommand ToSqlCommand(ObjectQuery<T> query)
   {
      SqlCommand command = new SqlCommand();
      command.Connection = new SqlConnection(_context.Database.Connection.ConnectionString);
      command.CommandText = query.ToTraceString();
      foreach (var param in query.Parameters)
      {
         command.Parameters.Add(new SqlParameter(param.Name, param.Value));
      }

      return command;
   }

   ...
}

#2. Supporting SqlCacheDependency on OnActionExecuted.

The controller's action method has to pass the SqlCacheDependency object to the cacheable's OnActionExecuted method in order to insert the dependency into the cache object, in which I simply use the built-in ASP.NET System.Web.Caching.Cache, which is thread-safe.

public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
{
   if (actionExecutedContext.Response != null &&
       actionExecutedContext.Request.Method == HttpMethod.Get &&
       actionExecutedContext.Response.StatusCode == HttpStatusCode.OK &&
       actionExecutedContext.Response.Content != null &&
       actionExecutedContext.Request.Properties.Keys.Contains("webapi101.cacheable") &&
       actionExecutedContext.Request.Properties["webapi101.cacheable"] is SqlCacheDependency)
   {
      SqlCacheDependency depend = actionExecutedContext.Request.Properties["webapi101.cacheable"] as SqlCacheDependency;

      var key = actionExecutedContext.Request.RequestUri.PathAndQuery.ToLower();

      ...

      string cacheKey = string.Format("{0}:{1}", key, vary.ToString());
      if (HttpRuntime.Cache[cacheKey] == null)
      {
         HttpRuntime.Cache.Insert(cacheKey,
                                  new object(),
                                  depend,
                                  Cache.NoAbsoluteExpiration,
                                  Cache.NoSlidingExpiration,
                                  CacheItemPriority.Default,
                                  new CacheItemRemovedCallback(OnCacheItemRemoved));
      }
   }

   base.OnActionExecuted(actionExecutedContext);
}

static void OnCacheItemRemoved(string key,
                               object value,
                               CacheItemRemovedReason reason)
{
   ConcurrentDictionary<string, CacheValidator> validator;
   _cache.TryRemove(key.Split(':')[0], out validator);
}

#3. Controller's supports for SqlCacheDependency utilizing #1.

The code is pretty much self-explanatory.

[SqlCacheable(ObjectType=typeof(Employee))]
public HttpResponseMessage Get()
{
   IEnumerable<Employee> employees;

   var query = repository.ObjectSet;

   using (var command = repository.ToSqlCommand(query as ObjectQuery<Employee>))
   {
      SqlCacheDependency depend = new SqlCacheDependency(command);
      command.Connection.Open();

      employees = repository.Execute(command);

      Request.Properties["webapi101.cacheable"] = depend;
   }

   return Request.CreateResponse(HttpStatusCode.OK, employees);
}

[SqlCacheable(ObjectType = typeof(Employee))]
public HttpResponseMessage Get(int id)
{
   Employee employee;

   var query = from e in repository.ObjectSet
               where e.Id == id
               select e;

   using (var command = repository.ToSqlCommand(query as ObjectQuery<Employee>))
   {
      SqlCacheDependency depend = new SqlCacheDependency(command);
      command.Connection.Open();

      employee = repository.Execute(command).SingleOrDefault();

      Request.Properties["webapi101.cacheable"] = depend;
   }

   if (employee != null)
   {
      return Request.CreateResponse(HttpStatusCode.OK, employee);
   }

   return Request.CreateResponse(HttpStatusCode.NotFound);
}

#4. Firing up SqlDependency in Global.asax.cs.

protected void Application_Start()
{
   ...
  
   SqlDependency.Start(ConfigurationManager.ConnectionStrings["employee"].ConnectionString);
}


protected void Application_End()
{
   SqlDependency.Stop(ConfigurationManager.ConnectionStrings["employee"].ConnectionString);
}


#5. Testing.


Figure 1. api/employees/ (200 OK).



Figure 2. api/employees/ {If-None-Match: "8366d1c7-4921-433f-9746-6338c3fbc845"} (304 Not Modified).



Figure 3. api/employees/1 {Accept: application/xml} (200 OK).



Figure 4. api/employees/1 {Accept: application/json} (200 OK).



Figure 5. api/employees/1 {Accept: application/json, If-None-Match: "f30a7ac0-2d9f-403b-8c53-5f8dd81d5dfa"} (304 Not Modified).



Figure 6. api/employees/3 {Accept: whatever} (200 OK).



Figure 7. Update api/employees/3 (204 No Content).




Figure 8. api/employees/3 {Accept: whatever, If-Modified-Since: Sat, 08 Mar 2014 05:30:38 GMT} (200 OK), correct since api/employees/3 is invalidated.



Figure 9. api/employees/ {If-None-Match: "8366d1c7-4921-433f-9746-6338c3fbc845"} (200 OK), correct since api/employees/3 is invalidated => api/employees/ is invalidated.



Figure 10.  api/employees/1 {If-None-Match: "f30a7ac0-2d9f-403b-8c53-5f8dd81d5dfa"} (304 Not Modified), correct since only api/employees/3 is invalidated.


Taiseer Joudeh: What is New in ASP.Net Web API 2 – Part 2

In the previous post we’ve covered ASP.Net Web API 2 attribute routing; in this post we’ll cover the remaining new features, starting with the new response return type IHttpActionResult and then moving on to the support for CORS.

Source code is available on GitHub.

ASP.Net Web API 2 IHttpActionResult:

As mentioned before, ASP.Net Web API 2 has introduced a new simplified interface named IHttpActionResult. This interface acts like a factory for HttpResponseMessage, and it comes with custom built-in responses such as (Ok, BadRequest, NotFound, Unauthorized, Exception, Conflict, Redirect).

Let’s modify the GetCourse(int id) method to return IHttpActionResult instead of HttpResponseMessage as the code below:

[Route("{id:int}")]
public IHttpActionResult GetCourse(int id)
{
	Learning.Data.LearningContext ctx = null;
	Learning.Data.ILearningRepository repo = null;
	try
	{
		ctx = new Learning.Data.LearningContext();
		repo = new Learning.Data.LearningRepository(ctx);

		var course = repo.GetCourse(id, false);
		if (course != null)
		{
			return Ok<Learning.Data.Entities.Course>(course);
		}
		else
		{
			return NotFound();
		}
	}
	catch (Exception ex)
	{
		return InternalServerError(ex);
	}
	finally
	{
		ctx.Dispose();
	}
}

Notice how we’re returning Ok (200 HTTP status code) with custom negotiated content; the body of the response contains the JSON representation of the returned course. Likewise, we return NotFound (404 HTTP status code) when the course is not found.

But what if we want to extend the NotFound() response and customize it to return a plain-text message in the response body? This is straightforward using the IHttpActionResult interface, as shown below.

Construct our own Action Result:

We want to build our own NotFound(“Your custom not found message”) action result, so we need to add a class which implements IHttpActionResult. Let’s add a new file named NotFoundPlainTextActionResult with the code below:

public class NotFoundPlainTextActionResult : IHttpActionResult
{
    public string Message { get; private set; }
    public HttpRequestMessage Request { get; private set; }

    public NotFoundPlainTextActionResult(HttpRequestMessage request, string message)
    {
        this.Request = request;
        this.Message = message;
    }

    public Task<HttpResponseMessage> ExecuteAsync(CancellationToken cancellationToken)
    {
        return Task.FromResult(ExecuteResult());
    }

    public HttpResponseMessage ExecuteResult()
    {
        HttpResponseMessage response = new HttpResponseMessage(HttpStatusCode.NotFound);

        response.Content = new StringContent(Message);
        response.RequestMessage = Request;
        return response;
    }
}

public static class ApiControllerExtension
{
    public static NotFoundPlainTextActionResult NotFound(this ApiController controller, string message)
    {
        return new NotFoundPlainTextActionResult(controller.Request, message);
    }
}

Looking at the code above, you will notice that the class "NotFoundPlainTextActionResult" implements the interface "IHttpActionResult". We added our own implementation of the method "ExecuteAsync", which returns a Task of type HttpResponseMessage. This HttpResponseMessage returns HTTP status code 404 along with the custom message we provide in the response body.

In order to reuse this in different controllers, we add a new class named ApiControllerExtension which contains a method that returns this customized Not Found response type.

Now back to our "CoursesController"; we need to change the standard NotFound() implementation to the new one, as in the code below:

if (course != null)
{
	return Ok<Learning.Data.Entities.Course>(course);
}
else
{
	return eLearning.WebAPI2.CustomIHttpActionResult.ApiControllerExtension.NotFound(this, "Course not found");
}

You can read more about extending IHttpActionResult by visiting Filip W.'s post.

ASP.Net Web API 2 CORS Support:

Enabling Cross Origin Resource Sharing in ASP.Net Web API 2 is simple; once it is enabled, AJAX requests sent from a webpage on a different domain to our API will be accepted.

By default the CORS assembly is not part of the ASP.NET Web API 2 assemblies, so we need to install it from NuGet. Open your NuGet package console and type the following:

Install-Package Microsoft.AspNet.WebApi.Cors -Version 5.0.0

Once the package is installed, open the "WebApiConfig" class and add the following line of code inside the Register method:

config.EnableCors();

By doing this we haven't enabled CORS yet; there are different levels at which CORS can be enabled in ASP.Net Web API:

1. On controller level

You can add an attribute named EnableCors on the entire controller, so by default every action on this controller will support CORS. To do this, open the file "CoursesController" and add the highlighted line of code below:

[RoutePrefix("api/courses")]
[EnableCors("*", "*", "GET,POST")]
public class CoursesController : ApiController
{
	//Rest of implementation is here
}

The EnableCors attribute accepts 3 parameters. In the first one you specify the origin domain requests may come from; if you want to allow only the domain www.example.com to send requests to your API, you specify it explicitly here, but in most cases you allow *, which means requests are accepted from any origin. In the second parameter you can require a certain header to be included in each request, forcing consumers to send this header along with each request; in our case we will use * as well. The third parameter specifies which HTTP verbs this controller accepts. You can put * here too, but in our case we want to allow only the GET and POST verbs to be called from requests coming from a different origin.
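For example, to lock the controller down to a single trusted origin instead of using *, the attribute could look like the sketch below (www.example.com is just a placeholder domain, not part of the sample project):

```csharp
// Hypothetical sketch: accept only requests originating from www.example.com,
// allow any request header, and permit only the GET and POST verbs.
[RoutePrefix("api/courses")]
[EnableCors(origins: "http://www.example.com", headers: "*", methods: "GET,POST")]
public class CoursesController : ApiController
{
	//Rest of implementation is here
}
```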

In case you want to exclude a certain action in the controller from CORS support, you can add the DisableCors attribute to it. Let's assume we want to disable CORS support on the method GetCourse; the code will be as below:

[Route("{id:int}")]
[DisableCors()]
public IHttpActionResult GetCourse(int id)
{
	//Rest of implementation is here
}

2. On action level

Enabling CORS on certain actions is fairly simple: all you need to do is add the EnableCors attribute to the action. EnableCors accepts the same 3 parameters we discussed earlier. Enabling it at the action level looks like the code below:

[Route(Name = "CoursesRoute")]
[EnableCors("*", "*", "GET")]
public HttpResponseMessage Get(int page = 0, int pageSize = 10)
{
	//Rest of implementation is here
}

3. On the entire API

In some situations where you have many controllers and you want to allow CORS on your entire API, it is not convenient to visit each controller and add the EnableCors attribute to it. In this situation we can allow it on the entire API inside the Register method of the WebApiConfig class, so open your WebApiConfig class and add the code below:

//Support for CORS
EnableCorsAttribute CorsAttribute = new EnableCorsAttribute("*", "*", "GET,POST");
config.EnableCors(CorsAttribute);

By adding this we enabled CORS support for every controller we have in our API; you can still disable CORS support on selected actions by adding the DisableCors attribute at the action level.

Source code is available on GitHub.

That’s all for now!

Thanks for reading, and please feel free to drop any comment or question if anything is unclear. Happy Coding!

The post What is New in ASP.Net Web API 2 – Part 2 appeared first on Bit of Technology.


Taiseer Joudeh: What is New in ASP.Net Web API 2 – Part 1

ASP.Net Web API 2 was released along with ASP.Net MVC 5 about five months ago. Web API 2 can be used with .NET Framework 4.5 only. The Web API 2 template is available by default in VS 2013, and you can install Web Tools 2013.1 for VS 2012 to get this template as well by visiting this link.

In my previous tutorial on Building ASP.Net Web API RESTful service, I covered different features of the first release of ASP.Net Web API. In this two-part series (part 2 can be found here) I will focus on the main features of version 2. I'll be using the same project I used in the previous tutorial, but I will start a new Web API 2 project and then we'll implement and discuss the new features. The source code for the new Web API 2 project can be found on my GitHub repository.

Note: I will not upgrade the existing eLearning Web API to version 2, but if you are interested to know how to upgrade version 1 to version 2 then you can follow the steps in ASP.Net official post here.

What is new in ASP.Net Web API 2?

ASP.Net Web API 2 comes with a couple of nice features and enhancements; the four most important features in my opinion are:

  1. ASP.Net Web API 2 Attribute Routing:

    • Web API 2 now supports attribute routing along with the convention-based routing where we used to define a route per controller inside the "WebApiConfig.cs" class. Attribute routing is very useful in situations where we have a single controller responsible for multiple actions; with version 2, routes can now be defined directly on the controller or at the action level.
  2. IHttpActionResult:

    • In the first version of ASP.Net Web API we always returned HTTP response messages for all HTTP verbs using the extension method Request.CreateResponse, and this was really easy. Version 2 simplifies this further by introducing a new interface named IHttpActionResult; this interface acts like a factory for HttpResponseMessage with support for custom responses such as Ok, BadRequest, NotFound, Unauthorized, etc.
  3. Cross Origin Resource Sharing Support (CORS):

    • Usually webpages are not allowed to issue AJAX requests to HTTP services (APIs) on other domains due to the Same-Origin Policy; the workaround for this was to enable JSONP responses. ASP.Net Web API 2 makes enabling CORS support easy: once you enable CORS, JavaScript on a webpage is allowed to send AJAX requests to an HTTP service (API) on a different domain. This can be configured on the entire API, on a certain controller, or even on selected actions.
  4. Enhancement on oData Support.

    • Note: I'll not cover this in the current post as I'm planning to do a dedicated series of posts on oData support for ASP.Net Web API.
    • Multi part series tutorial for Building OData Service using ASP.Net Web API is published now.

Let's cover the new features in a practical example

As stated before, and to keep things simple, we'll depend on my previous tutorial's eLearning API concepts to demonstrate those features. All we want to use from the eLearning API is its data access layer and database model, as well as the same repository pattern; in other words, we'll use only the project eLearning.Data. If you are interested to know how we built the eLearning.Data project, you can follow along on how we created the database model here and how we applied the repository pattern here.

Once you have your database access layer project ready, you can follow along with me to create the new eLearning.WebAPI2 project and demonstrate those new features.

I will assume that you have Web Tools 2013.1 for VS 2012 installed on your machine, so you can use the ASP.Net Web API 2 template directly, which will add the needed assemblies for ASP.Net Web API 2.

Step 1: Create new empty ASP.Net Web API 2 project

We'll start by creating a new empty ASP.Net Web API 2 project as in the image below. You can name your project "eLearning.WebAPI2"; do not forget to choose the .NET Framework 4.5 version. Once the project is added, we need to install Entity Framework version 6 using the NuGet package manager or the NuGet package console; the package we'll install is named "EntityFramework". When the package is installed, we need to add a reference to the class library "eLearning.Data", which will act as our database repository.

Web API 2 Project Template

Step 2: Configuring routes

The ASP.Net Web API 2 template we used has by default a class named "WebApiConfig" inside the "App_Start" folder. When you open this class you will notice a new line:

config.MapHttpAttributeRoutes();

This didn't exist in Web API version one; the main responsibility of this line of code is to enable the attribute routing feature in Web API version 2, where we can define routes on the controller directly. In this tutorial we want to route all requests by attributes only, so feel free to delete the default route named "DefaultApi".

The other important change introduced in Web API 2 is how the "WebApiConfig" file is registered. Go ahead and open the "Global.asax" class and you will notice a new line used for configuring the routes:

GlobalConfiguration.Configure(WebApiConfig.Register);

This line is responsible for registering the routes when the global configuration class is initiated and ready to be called.

Step 3: Adding first controller (Courses Controller)

Now we want to add a Web API controller which will handle HTTP requests issued against the URI "/api/courses". To do this, right-click on the Controllers folder -> select Add -> name the controller "CoursesController" and choose the "Empty API Controller" template. Note: it is not required to name the controller "CoursesController" as it was in Web API version 1; controller selection is now based on the attributes we define, not on routes configured in the "WebApiConfig" class.

We'll add support for getting a single course by id and getting all courses. There are different ways to define routing attributes: you can define a route prefix at the controller level, and you can define routes at the action level. We'll also cover how to configure routing to an exceptional route within the same controller. Below are the scenarios we'll cover:

1. Define attributes on action level

Let's implement the code below, which adds support for handling an HTTP GET request sent to the URI "api/courses/23":

public class CoursesController : ApiController
    {
        [Route("api/courses/{id:int}")]
        public HttpResponseMessage GetCourse(int id)
        {
            Learning.Data.LearningContext ctx = null;
            Learning.Data.ILearningRepository repo = null;

            try
            {
                ctx = new Learning.Data.LearningContext();
                repo = new Learning.Data.LearningRepository(ctx);

                var course = repo.GetCourse(id, false);
                if (course != null)
                {
                    return Request.CreateResponse(HttpStatusCode.OK, course);
                }
                else
                {
                    return Request.CreateResponse(HttpStatusCode.NotFound);
                }

            }
            catch (Exception ex)
            {
                return Request.CreateErrorResponse(HttpStatusCode.BadRequest, ex);
            }
            finally
            {
                if (ctx != null)
                {
                    ctx.Dispose();
                }
            }
        }
}

What we have done here is simple: if you look at the highlighted line of code above, you will notice how we defined the route at the action level using the attribute

System.Web.Http.Route

where we specified the request URI and stated that the id parameter should be an integer. If you try to issue a GET request to the URI "/api/courses/abc" you will get a 404 response.

In Web API version 1 we used to define names for routes inside the "WebApiConfig" class. Those names were useful as we used them to link each resource to a URI or for results navigation purposes; in our case we still need to return "PrevPageLink" and "NextPageLink" in the response body when the user issues an HTTP GET request to the URI "/api/courses". This is still available in Web API version 2, but defining route names now takes place at the attribute level. Let's implement the code below:

public class CoursesController : ApiController
    {
	[Route("api/courses/", Name = "CoursesRoute")]
        public HttpResponseMessage Get(int page = 0, int pageSize = 10)
        {
            IQueryable<Course> query;

            Learning.Data.LearningContext ctx = new Learning.Data.LearningContext();

            Learning.Data.ILearningRepository repo = new Learning.Data.LearningRepository(ctx);

            query = repo.GetAllCourses().OrderBy(c => c.CourseSubject.Id);
            var totalCount = query.Count();
            var totalPages = (int)Math.Ceiling((double)totalCount / pageSize);

            var urlHelper = new UrlHelper(Request);
            var prevLink = page > 0 ? urlHelper.Link("CoursesRoute", new { page = page - 1, pageSize = pageSize }) : "";
            var nextLink = page < totalPages - 1 ? urlHelper.Link("CoursesRoute", new { page = page + 1, pageSize = pageSize }) : "";

            var results = query
                          .Skip(pageSize * page)
                          .Take(pageSize)
                          .ToList();

            var result = new
            {
                TotalCount = totalCount,
                TotalPages = totalPages,
                PrevPageLink = prevLink,
                NextPageLink = nextLink,
                Results = results
            };

            return Request.CreateResponse(HttpStatusCode.OK, result);

        }
   }

In the code above, notice how we defined the route name at the attribute level. You cannot define route names at the controller level, so if other actions need route names you have to define them on each attribute.

2. Define attributes on controller level

Now, instead of repeating the prefix "/api/courses" on each action, we can add this prefix at the controller level and define the specific route information on each action. The change is fairly simple, as the code below shows:

[RoutePrefix("api/courses")]
public class CoursesController : ApiController
	{
		[Route(Name = "CoursesRoute")]
		public HttpResponseMessage Get(int page = 0, int pageSize = 10)
		{
			/*Rest of implementation goes here*/
		}

		[Route("{id:int}")]
		public HttpResponseMessage GetCourse(int id)
		{
			/*Rest of implementation goes here*/
		}
	}

3. Routing to exceptional route

In some cases you need to route to a different URI while you are in the same controller. This was not possible in Web API version 1, but it's very easy to implement in version 2. Assume that we want to get all course names for a certain subject; our URI has to be of the form "api/subject/{subjectid}/courses". Let's implement this by adding the code below:

[Route("~/api/subject/{subjectid:int}/courses")]
public HttpResponseMessage GetCoursesBySubject(int subjectid)
{
	Learning.Data.LearningContext ctx = null;
	Learning.Data.ILearningRepository repo = null;
	IQueryable<Course> query;

	try
	{
		ctx = new Learning.Data.LearningContext();
		repo = new Learning.Data.LearningRepository(ctx);

		query = repo.GetCoursesBySubject(subjectid);
		var coursesList = query.Select(c => c.Name).ToList();
		if (coursesList != null)
		{
			return Request.CreateResponse(HttpStatusCode.OK, coursesList);
		}
		else
		{
			return Request.CreateResponse(HttpStatusCode.NotFound);
		}

	}
	catch (Exception ex)
	{
		return Request.CreateErrorResponse(HttpStatusCode.BadRequest, ex);
	}
	finally
	{
		if (ctx != null)
		{
			ctx.Dispose();
		}
	}
}

Notice how we defined the routing attribute and prefixed it with "~": Web API will route any HTTP GET request coming to the URI "api/subject/8/courses" to this action. This is a really nice feature when you want to introduce an exceptional route in the same controller.

In the next post I'll cover the IHttpActionResult return type as well as the support for CORS in ASP.Net Web API 2.

Source code is available on GitHub.

The post What is New in ASP.Net Web API 2 – Part 1 appeared first on Bit of Technology.


Dominick Baier: OAuth2 and OpenID Connect Scope Validation for OWIN/Katana

In OAuth2 or OpenID Connect you don’t necessarily always use the audience to partition your token space – the scope concept is also commonly used (see also Vittorio’s post from yesterday). A while ago I created a Web API authorize attribute to do the validation based on scopes (see here).

Since scope-based token validation can become so fundamental to your APIs – I moved the logic to an OWIN/Katana component so all frameworks can benefit from it, e.g.:

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // read OR write
        app.RequireScopes("read", "write");

        app.UseWebApi(WebApiConfig.Register());
    }
}

or…

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // read AND write
        app.RequireScopes("read");
        app.RequireScopes("write");

        app.UseWebApi(WebApiConfig.Register());
    }
}

Source code here, sample here, nuget here.


Filed under: IdentityModel, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Ugo Lattanzi: How to use CORS with ASP.NET Web API 2.0

With the latest version of ASP.NET Web API, Microsoft introduced support for cross domain requests, usually called CORS (Cross-Origin Resource Sharing).

By default it's not possible to make HTTP requests using JavaScript to a domain different from the one the page was served from. For example, this means that it's not possible to call the URL http://mysite.com/api/myrestendpoint from the domain http://yoursite.com.

This limitation was introduced for security reasons: without this protection, malicious JavaScript code could get info from another site without the user noticing.

However, even if the reason for this limitation is clear, sometimes we need to call something that is not hosted on our site anyway. The first solution is to use JSONP. This approach is easy to use and is supported by all browsers; the problem is that the only HTTP verb supported is GET, which has a limitation on the length of the string that can be passed as a query parameter.

So if you need to send a lot of information we can't use this approach; the solution could be to "proxy" the request locally and forward the data server side, or to use CORS.

Basically, CORS communication allows you to overcome the problem by defining some rules that make the request more "secure". Of course, the first thing we need is a browser that supports CORS; fortunately all the latest browsers support it. Anyway, we have to consider that, looking at the real world, there are several clients still using Internet Explorer 8 which, among other things, doesn't support CORS.

The following table (http://caniuse.com/cors) shows which browsers offer CORS support.

CORS SUPPORT TABLE

There are several workarounds that allow you to use CORS with IE8/9, but there are some limitations with the verbs (more info here).

Now that it's clear what CORS is, it's time to configure it using one of the following browsers:

  • Internet Explorer 10/11
  • Chrome (all versions)
  • Firefox 3.5+
  • Safari 4.x

Now we need two different projects, one for the client application and another for the server, each hosted on a different domain (in my example I used Azure Web Sites, so I have http://imperdemo.azurewebsite.net for the server and http://imperclient.azurewebsite.net for the client).

 

Server Application

Once the project has been created, it's important to enable CORS for our "trusted" domains; in my sample, imperclient.azurewebsite.net.

If you used the default Visual Studio 2013 template, your Global.asax.cs should look like this:
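The screenshot of Global.asax.cs didn't survive the conversion to text; assuming the default Visual Studio 2013 Web API template, it would look roughly like this (the class name comes from the template and may differ in your project):

```csharp
using System.Web.Http;

// Sketch of the default template's Global.asax.cs code-behind.
public class WebApiApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Registers the Web API routes and configuration at startup.
        GlobalConfiguration.Configure(WebApiConfig.Register);
    }
}
```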

Next, it's time to edit the file with the API configuration, "WebApiConfig.cs" inside "App_Start".

N.B.: Before editing the file, it's important to install the right NuGet package; the default template included with Visual Studio doesn't include the CORS package, so you have to install it manually.

PM> Install-Package Microsoft.AspNet.WebApi.Cors

Once all the "ingredients" are ready, it's time to enable CORS:
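The configuration screenshot is missing here; a minimal WebApiConfig that turns on attribute-based CORS might look like the sketch below (the route setup is an assumption based on the default template):

```csharp
using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Enables evaluation of [EnableCors]/[DisableCors] attributes across the API.
        config.EnableCors();

        config.MapHttpAttributeRoutes();

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
    }
}
```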

Our API Controller looks like this:
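The controller screenshot is not available either; a hypothetical controller decorated for the client domain used in this sample could look like this (controller name and returned values are assumptions):

```csharp
using System.Collections.Generic;
using System.Web.Http;
using System.Web.Http.Cors;

// Hypothetical sketch: only GET requests originating from the client site are allowed.
[EnableCors(origins: "http://imperclient.azurewebsite.net", headers: "*", methods: "GET")]
public class MyRestEndpointController : ApiController
{
    public IEnumerable<string> Get()
    {
        return new[] { "value1", "value2" };
    }
}
```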

The most important part of this code is the EnableCors method and the attribute of the same name (including the verbs, domain and headers).

In case you don't want to completely "open" the controller to CORS requests, you can apply the attribute to a single action, or leave the attribute on the controller and apply the DisableCors attribute to the actions you want to "close".

Client Application

At this time, the server is ready, it's time to work client side.

The code we'll see for the client is plain JavaScript, so you can use a simple .html page without any server-side code.

The HTML Code:

Javascript Code:
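The markup and script were embedded as images in the original post; a minimal sketch of the client script might look like this (the endpoint URL comes from the sample's Azure site, while the element ids "testButton" and "result" are assumptions; this is a browser fragment, not standalone code):

```javascript
// Hypothetical client script: issues a cross-origin GET when the button is clicked.
document.getElementById("testButton").addEventListener("click", function () {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "http://imperdemo.azurewebsite.net/api/myrestendpoint");
    xhr.onload = function () {
        // Runs only if the server replied with a matching Access-Control-Allow-Origin.
        document.getElementById("result").textContent = xhr.responseText;
    };
    xhr.onerror = function () {
        document.getElementById("result").textContent = "CORS request failed";
    };
    xhr.send();
});
```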

If you did everything right, you can deploy our apps (server and client) to test them.

(screenshot: the client test page)

When you click on the "Test It now" button the result should look like this:

(screenshot: the successful AJAX response)

Otherwise, if something goes wrong, check the points above.

 

How does it work

CORS is a simple "check" based on headers exchanged between the caller and the server.

The browser (client) adds the current domain to the header of the request using the key Origin.

The server checks that this value matches one of the allowed domains specified in the attribute, answering with another header named Access-Control-Allow-Origin.

If both keys have the same value, you get the data; otherwise you'll get an error.
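The same check can be observed from the command line with curl (the URLs are the sample's Azure sites; adjust them to your own deployment):

```shell
# Send a cross-origin request, pretending to be a page on the client site.
curl -i \
  -H "Origin: http://imperclient.azurewebsite.net" \
  http://imperdemo.azurewebsite.net/api/myrestendpoint

# If the origin is allowed, the response headers should include:
#   Access-Control-Allow-Origin: http://imperclient.azurewebsite.net
```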

The screenshot below shows the headers:

(screenshot: matching Origin and Access-Control-Allow-Origin headers)

Here is, instead, the classic error when the headers don't match:

(screenshot: the browser's CORS error message)

 

Conclusions

For someone like me who loves to separate the application using an API layer, CORS is absolutely cool. The only downside is that it's not supported by all browsers. Let's just hope that all the IE8/9 installations will be replaced soon :-)

The demo is available here


Dominick Baier: OpenID Connect and the IdentityServer Roadmap

Since OpenID Connect has been officially released now, I thought I’ll tell you a little bit more about our plans around our identity open source projects.

IdentityServer
IdSrv is a very popular identity provider with excellent support for WS-Federation and WS-Trust. It has had decent support for OAuth2 and OpenID Connect (basic client profile) for quite some time, but these protocols were more of an afterthought than part of the initial design. IdSrv uses the pretty dated ASP.NET membership by default, but can easily be extended to use different data stores.

AuthorizationServer
We built AS because we wanted to implement OAuth2 "the right way", and we thought it would be easier to start from scratch than to retrofit it into IdSrv. We deliberately didn't add any authentication / user management pieces to AS because we wanted it to be freely combinable with existing infrastructure. For people without existing infrastructure this was often a problem, because setting up the IdP and AS at the same time was a steep learning curve.

While both projects solved very specific problems, there was room for improvement:

  • Combine identity management and a full featured OAuth2 implementation in a single component – still be able to use them independently if needed
  • Make it more lightweight so it can be hosted more flexibly
  • Incorporate OpenID Connect for true cross-platform client / relying party support (which was always a problem with WS-Federation)
  • Make authentication and acquisition of access tokens possible in a single round trip – which is a scenario that becomes more and more common.

So we decided to combine all those improvements and feature ideas into a new project, currently codenamed "IdentityServer v3":

  • Single codebase and server component to deploy
  • OpenID Connect and OAuth2 as the core protocols – but extensible to add support for e.g. WS-Federation
  • Built as an OWIN/Katana component with no dependencies on a specific host (e.g. IIS)
  • Completely customizable – think “STS toolkit”
  • Ability to incorporate arbitrary identity management systems – support for Brock’s MembershipReboot and ASP.NET Identity by default
  • Separation of core STS and user / configuration management for more flexibility
  • ..and more

We are not done yet, but will have more details soon. In the meantime, here's a preview screenshot of a consent screen for a native client requesting identity claims (user id and email) as well as access tokens for an HTTP API backend.

(screenshot: consent screen preview)


Filed under: AuthorizationServer, IdentityModel, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Darrel Miller: Caching is hard, draw me a picture

 

I've been working on a Pluralsight course that talks about how to use the Microsoft HttpClient library. One of the areas I cover is how to take advantage of HTTP caching. In the process I have been doing quite a bit of reading of the HTTPbis spec document on caching. It isn't the easiest of specifications to read, as there are many interdependencies between the directives and many different scenarios that are supported.

To help me get a grip, I decided I needed to draw some diagrams to get a clearer picture of the rules. The rules break down into two distinct steps: first, is a cache allowed to store a response returned from an origin server? And second, can the response be served from the cache for a particular request? The following two diagrams cover the basic set of scenarios.

 

(diagram 1: is a cache allowed to store the response?)

 

(diagram 2: can the response be served from cache for a given request?)

There are quite a few scenarios that these diagrams don't cover, and a number of directives are not mentioned. Also, some of the steps could use some explanation. Hopefully, when I get this course completed I'll come back to this post and fill in some more details. In the meanwhile, I decided that some might find it useful as is.


Filip Woj: Running the OWIN pipeline in the new .NET Azure Mobile Services

Yesterday, a preview of the .NET Azure Mobile Services has been released. Despite the fact that I’d rather see a scripted C# support – I am still very excited about this new .NET support, as ZUMO is one of my … Continue reading

The post Running the OWIN pipeline in the new .NET Azure Mobile Services appeared first on StrathWeb.


Filip Woj: Getting started with OData v4 in ASP.NET Web API

Since yesterday, the ASP.NET Web stack nightly feed contains the packages supporting OData v4. The package is called Microsoft.AspNet.OData and has a working version 5.2.0 – so I’m guessing this is intended to ship with Web API 2. It relies … Continue reading

The post Getting started with OData v4 in ASP.NET Web API appeared first on StrathWeb.


Dominick Baier: Workshop: Identity & Access Control for modern Web Applications and APIs

Brock and I are currently working on a brand new two day workshop about all things security when building modern web applications and APIs.

You can either attend the full two day version at NDC Oslo (June) – or a stripped down one day version at SDD London (May). Both still have early bird offerings.

Hope to see you!

With ASP.NET MVC, Web API and SignalR tied together using the new OWIN and Katana framework, Microsoft provides a compelling server-side stack to write modern web applications and services. In this new world we typically want to connect client platforms like iOS, Windows or Android as well as JavaScript-based applications using frameworks like AngularJS.

This two day workshop is your chance to dive into all things security related to these new technologies. Learn how to securely connect native and browser-based applications to your back-ends and integrate them with enterprise identity management systems as well as social identity providers and services.

Tags: WS-Federation, SAML, OAuth2, OpenID Connect, OWIN, JSON Web Tokens, Single Sign-on and off, Federation, Delegation, Home Realm Discovery, CORS

Day 1: Web Applications

  • Authentication & Authorization on .NET 4.5 and beyond
  • Introduction to OWIN and the Katana Project
  • Katana Security Framework
    • Cookie-based Authentication
    • Enterprise Authentication with WS-Federation
    • Social Logins (e.g. Google, Facebook, Twitter, etc.)
    • OpenID Connect
  • Web Application Patterns
    • Single Sign On / Single Sign Off
    • Federation Gateway
    • Account & Identity Linking
    • Delegation
    • Home Realm Discovery

Day 2: Web APIs

  • ASP.NET Web API Security
    • Architecture
    • Authentication & Authorization
    • CORS
    • Katana Integration
  • Web API Patterns
    • Token-based Authentication
    • Delegated Authorization
  • OAuth2
    • Flows
    • Scopes
    • OAuth2 Middleware
    • Federation
  • OpenID Connect (revisited)
  • Bringing it all together

 


Filed under: AuthorizationServer, Conferences & Training, IdentityModel, IdentityServer, Katana, OAuth, OpenID Connect, OWIN, WebAPI


Dominick Baier: Thinktecture.IdentityModel.Owin.*

To be more in-line with the OWIN / middleware mindset (and because Damian said so) – I broke up our OWIN security assembly into smaller components:

http://www.nuget.org/packages?q=Thinktecture.IdentityModel.Owin.*

Currently there are four packages:

  • Basic Authentication
  • X.509 client certificate authentication
  • Claims transformation
  • Requiring SSL and client certs

…plus a base package with various helpers…

A sample showing all four main packages together can be found here.

More to come…


Filed under: IdentityModel, Katana, OWIN, WebAPI


Dominick Baier: AuthorizationServer v1.2

I just uploaded version 1.2 of AuthorizationServer.

The big change is that AS is now using MVC and Web API v5.1.1 – additionally there are some bug fixes and a new configuration switch – set the following appSetting to false if you want to pass through all incoming IdP claims (default is true):

<add key="authz:FilterIncomingClaims" value="true"/>

 


Filed under: AuthorizationServer, OAuth, WebAPI


Taiseer Joudeh: Building ASP.Net Web API RESTful Service – Part 11

This is the eleventh part of Building ASP.Net Web API RESTful Service Series. The topics we’ll cover are:

Update (2014-March-5) Two new posts which cover ASP.Net Web API 2 new features:

Caching resources using CacheCow and ETag

In this post we'll discuss how we can implement resource caching using an open source framework for HTTP caching on the client and server called CacheCow, created by Ali Kheyrollahi. We'll cover the server side caching in this post.

API Source code is available on GitHub.

Using resource caching will enhance API performance, reduce the overhead on the server, and minimize the response size.

The form of caching we want to achieve here is called Conditional Requests. The idea is pretty simple: the client asks the server if it has an updated copy of the resource by sending some information about the cached resource it holds, using a request header called ETag; the server then decides whether there is an updated resource that should be returned to the client, or whether the client already has the most recent copy. If the client has the most recent copy, the server returns HTTP status 304 (Not Modified) without the resource content (empty body). With Conditional Requests the client still sends HTTP requests to the server, but the added value is that the server only returns a full 200 (OK) response, including the resource content in the body, when the client has a stale copy of the resource.

So what is ETag (Entity Tag)?

An ETag is a unique key (string) generated at the server for a particular resource; you can think of it as a checksum for the resource that changes whenever the resource is updated.

ETags come in two flavors: weak and strong. The value of a weak ETag is prefixed with W/, i.e. ETag: "W/53fsfsd322", while a strong ETag has no prefix, i.e. ETag: "534928jhfr4". Usually a weak ETag indicates that the cached resource is only useful for a short time (in-memory caching), while a strong ETag means the caching is backed by persistent storage and the content of the two copies (client and server) is identical byte for byte.
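To illustrate (a standalone sketch, not part of the tutorial's code), .NET represents both flavors with EntityTagHeaderValue, and a strong tag can for example be derived by hashing the representation's bytes:

```csharp
using System;
using System.Net.Http.Headers;
using System.Security.Cryptography;
using System.Text;

public class ETagDemo
{
    // Hypothetical helper: derive a strong ETag by hashing the exact bytes
    // of a representation, so identical content always yields the same tag.
    public static EntityTagHeaderValue StrongETagFor(string representation)
    {
        using (var md5 = MD5.Create())
        {
            byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(representation));
            string tag = BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
            return new EntityTagHeaderValue("\"" + tag + "\"", isWeak: false);
        }
    }

    public static void Main()
    {
        var strong = StrongETagFor("{\"id\":4,\"name\":\"History\"}");
        var weak = new EntityTagHeaderValue("\"53fsfsd322\"", isWeak: true);

        Console.WriteLine(strong); // quoted hash, no prefix
        Console.WriteLine(weak);   // W/"53fsfsd322"
    }
}
```

Note that CacheCow generates and manages the tags for you; the hash above is only one possible way a server could compute a strong validator.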

How do ETags work?

Looking at the illustration below, the client initiates an HTTP GET request asking for the course with Id: 4. Assuming this resource has not been requested before, the server responds by returning the resource in the response body along with a generated ETag in the response header.

Now the client wants to request the same resource again (course Id: 4) with another HTTP GET. Assuming the client is interested in using caching, the GET request includes a header called If-None-Match carrying the ETag value for this resource. Once the server receives this request, it reads the ETag value and compares it with the ETag value saved on the server; if they are identical, the server sends HTTP status 304 (Not Modified) without the resource content (empty body), and the client knows that it has the most recent copy of the resource.

For HTTP GET and DELETE we can use the If-None-Match header, but when updating a resource with ETags we have to send the If-Match header with the HTTP PUT/PATCH request. The server examines the ETag, and if the values differ it responds with HTTP status 412 (Precondition Failed); the client then knows there is a fresher version of the resource on the server, and the update won't take place until the client holds the same resource version as the server.
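As a sketch of the flows above (assuming the series' API is running locally; the base address and route are placeholders), the conditional GET and conditional PUT can be driven with HttpClient:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ConditionalRequestDemo
{
    static async Task Main()
    {
        var client = new HttpClient { BaseAddress = new Uri("http://localhost:8080/") };

        // 1. First GET: server returns 200 OK plus an ETag header.
        var first = await client.GetAsync("api/courses/4");
        EntityTagHeaderValue etag = first.Headers.ETag;

        // 2. Conditional GET: send the ETag back in If-None-Match.
        var conditional = new HttpRequestMessage(HttpMethod.Get, "api/courses/4");
        conditional.Headers.IfNoneMatch.Add(etag);
        var second = await client.SendAsync(conditional);
        // 304 Not Modified (empty body) if the resource is unchanged.
        Console.WriteLine(second.StatusCode);

        // 3. Conditional PUT: send the ETag in If-Match; a stale tag
        // yields 412 Precondition Failed and the update is rejected.
        var update = new HttpRequestMessage(HttpMethod.Put, "api/courses/4")
        {
            Content = new StringContent("{\"name\":\"History\"}",
                System.Text.Encoding.UTF8, "application/json")
        };
        update.Headers.IfMatch.Add(etag);
        var third = await client.SendAsync(update);
        Console.WriteLine(third.StatusCode == HttpStatusCode.PreconditionFailed
            ? "stale copy - re-fetch the resource"
            : "update applied");
    }
}
```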

WebApiCachingETag

Configuring Web API to use CacheCow

After we had a brief explanation of how conditional requests and ETags work let’s implement this feature in our Web API.

Now we need to install the CacheCow server package from NuGet, so open the NuGet package manager console and type "Install-Package CacheCow.Server -Version 0.4.12". This installs two DLLs: the server assembly and the common assembly.

Configuring CacheCow is pretty easy: all we need to do is create a caching handler and inject it into the Web API pipeline. This handler is responsible for inspecting each request and response passing through our API, looking for ETag and match headers, and doing the heavy lifting for us.

To implement this, open the file “WebApiConfig.cs” and add the lines of code below at the end of the file:

//Configure HTTP Caching using Entity Tags (ETags)
var cacheCowCacheHandler = new CacheCow.Server.CachingHandler();
config.MessageHandlers.Add(cacheCowCacheHandler);

Up to this point we have configured our API to use in-memory caching, which is the default configuration for CacheCow. This is fine if you have a single server or for demo purposes, but with load balancers and web farms the cache state should be persisted in a single place for the whole farm, and all web servers should be aware of it. In that case we need to configure CacheCow to use a persistence medium (SQL Server, MongoDB, Memcached). But before moving to a persistent store, let's test the in-memory caching.

Cache Cow in-memory caching:

To test this, open Fiddler, choose the Composer tab, and issue a GET request to the URI http://localhost:{your_port}/api/courses/4. The request will look like the image below:

CacheCowGet-WeakeTag

In this GET request we need to note the following:

  • The HTTP response status is 200 OK, which means the server returned the resource content in the response body.
  • Two new headers are added to the response: eTag and Last-Modified. We are interested in the eTag value here, and we'll get rid of the Last-Modified header later on since we'll send conditional requests using the eTag only.
  • The eTag value is weak (prefixed with W/) because we are using in-memory caching for now; in other words, if we restart IIS or shut down its worker process, all the eTag values the clients have saved become useless.

Now, after receiving the eTag header value, the client is responsible for sending it along with any subsequent request for the same resource. The client uses the “If-None-Match” header when sending the request, and the server responds with HTTP status 304 and an empty response body if the requested resource has not changed; otherwise it returns 200 OK with a new eTag value and the resource content in the response body. To examine this, let's open Fiddler and issue a GET request to the same resource (course with Id: 4) as in the image below:

CacheCow-WeakeTag-Response

In this GET request we need to note the following:

  • The HTTP response status is 304 Not Modified, which means the server returned an empty body and the copy of the resource at the client side is the latest.
  • The same eTag header value is returned in the response.

Now let’s move to configure CacheCow to use persistence caching medium (SQL Server) instead of in-memory caching.

Cache Cow persistence caching using SQL Server:

Configuring CacheCow to use SQL Server is easy: we need to install the NuGet package that works with the medium we want to use, in our case SQL Server. Open the NuGet package manager console and type "Install-Package CacheCow.Server.EntityTagStore.SqlServer -Version 0.4.11"; this will install the required DLLs.

Now open the file “WebApiConfig.cs” again and paste the code snippet below:

//Configure HTTP Caching using Entity Tags (ETags)
var connString = System.Configuration.ConfigurationManager.ConnectionStrings["eLearningConnection"].ConnectionString;
var eTagStore = new CacheCow.Server.EntityTagStore.SqlServer.SqlServerEntityTagStore(connString);
var cacheCowCacheHandler = new CacheCow.Server.CachingHandler(eTagStore);
cacheCowCacheHandler.AddLastModifiedHeader = false;
config.MessageHandlers.Add(cacheCowCacheHandler);

What we implemented above is straightforward: CacheCow needs to store its caching information in SQL Server, so we need to tell it which database to use; in our case we'll put this caching information in the API's own “eLearning” database. We then pass the SQL Server eTagStore instance to the caching handler. We've also turned off adding the Last-Modified header to the response because we are interested in eTag values only.

If you try to issue any request against our API right away you will receive a 500 (Internal Server Error), because, as mentioned before, CacheCow needs to store information about the cached resources on the server; this means it needs a table and some stored procedures to manipulate that table, so we need to run a SQL script before we go on. To do this, navigate to the path where your NuGet packages are installed, usually “{projectpath}\packages\CacheCow.Server.EntityTagStore.SqlServer.0.4.11\scripts”, open the file named “script.sql”, and execute all of its content against your local eLearning database.

After running the script, navigate to SQL Server explorer and open the eLearning database; you will notice that the script created a table named “CacheState” and five stored procedures used to manipulate this table.

To test this out we need to issue the same GET request as the image below:

CacheCow-StrongETag-Request

As you can notice from the image above, the ETag value is no longer weak; it is strong because we persisted it in SQL Server. If we open the “CacheState” table, you will notice that CacheCow has inserted a new record including the ETag value, route pattern, last modified date, and a binary hash key, as in the image below:

CacheCow-CacheStateTable

Now if the client sends any subsequent requests to the same resource, the same ETag value will be returned as long as no other clients updated the resource by issuing HTTP PUT/PATCH on the same resource.

So if the client includes this ETag value within the If-None-Match header then the server will respond by 304 HTTP status.

Now let’s simulate the update scenario: the client issues an HTTP PUT on the resource (course with Id 4), including the ETag value in the If-Match header along with the request, as in the image below:

CacheCow-PUT-StrongETag

In this PUT request we need to note the following:

  • The HTTP response status is 200 OK, which means the client had the most recent copy of the resource and the update has taken place successfully.
  • A new ETag has been returned in the response because the resource has changed on the server, so the client is responsible for saving this new ETag for any subsequent requests for the same resource.
  • The new ETag value and last modified date have been updated in the “CacheState” table for this resource.

Now if we directly try to issue another PUT request using the old ETag (8fad5904b1a749e1b99bc3f5602e042b), we will receive HTTP status 412 (Precondition Failed), which means that updating the resource has failed because the client doesn't have the latest version of the resource. The client now needs to issue a GET request to fetch the latest version of the resource and the newly generated ETag (ad508f75c5144729a1563a4363a7a158). Let's test this as in the image below:

CacheCow-If-Match-PreconditionFailed

That’s all for now!

I hope you liked this tutorial. I tried my best to explain all the important features of Web API that allow you to get started and build your RESTful API. You can always get back to the complete running source code on GitHub.

Please feel free to drop any comment or suggestion to enhance this tutorial. If you have a quick question, ping me on Twitter.

The post Building ASP.Net Web API RESTful Service – Part 11 appeared first on Bit of Technology.


stevanus w. : #webapi101: Solution 07/02/2014

Please feel free to contribute, use and/or modify any part of the codes for any personal or commercial use.

Please make sure that you do your own testing before using any part of the codes.


Notes:
1. A single DLL WebApi101.Infrastructure targeting .NET 4.5.
2. Request response tracing + exceptions + model state errors.
3. Basic and digest authentication (qop="auth").
4. HTTP caching.
5. Major and minor bug fixes and code improvements from the previous version.
6. WebApi101.Test project is not being used at the moment.



Figure 1. webapi101 solution.


Filip Woj: Dynamic action return with Web API 2.1

One of the small things (aka hidden gems) that was released with Web API 2.1, was the support of dynamic return type. This went largely unnoticed, since it’s buried deep in the Web API source code but it has some … Continue reading

The post Dynamic action return with Web API 2.1 appeared first on StrathWeb.


stevanus w. : #webapi101: HTTP Caching On Action

Here is an example of an HTTP caching implementation via an action filter. My idea is to let us select which controllers support caching, and this can be done through an action filter attribute. Ideally, each controller should expose the fundamental CRUD operations, which in the world of REST-based HTTP services translate into the POST, GET, PUT and DELETE methods. Therefore GET is the only operation indicating that its related resource should be cached, while the corresponding POST, PUT and DELETE will, on the other hand, invalidate the cache. Note that I intentionally make the filter attribute applicable only at the controller level, because search operations such as Employee Get(int id) should ideally not become candidates for caching, while IEnumerable<Employee> Get(), which returns all records, is what we are (or at least I am at the moment) interested in. Furthermore, please refer to caching in HTTP to understand more about caching components such as the ETag, Last-Modified and Vary header attributes that are used in the implementation.


Basically, there are two main parts in the implementation, cache validation and invalidation, and we will inspect it first from the invalidation side. This is done through the OnActionExecuted method.

public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
{
   if (actionExecutedContext.Response != null)
  {
      var key = GetCacheKey(actionExecutedContext.ActionContext.ControllerContext);

      if (actionExecutedContext.Request.Method == HttpMethod.Get &&
          actionExecutedContext.Response.StatusCode == HttpStatusCode.OK &&
          actionExecutedContext.Response.Content != null &&
          key.Equals(actionExecutedContext.Request.RequestUri.AbsolutePath.Trim('/'), StringComparison.OrdinalIgnoreCase))
     {
        var validator = new CacheValidator()
        {
           Etag = Guid.NewGuid().ToString(),
           LastModified = DateTimeOffset.Now
        };

       var vary = new CacheValidator.Vary()
       {
          Accept = actionExecutedContext.Response.Content.Headers.ContentType.MediaType,
          AcceptCharset = actionExecutedContext.Response.Content.Headers.ContentType.CharSet
       };

       ConcurrentDictionary<string, CacheValidator> validators = new ConcurrentDictionary<string,CacheValidator>();
       validators.TryAdd(vary.ToString(), validator);

       validators = _cache.AddOrUpdate(key, validators, (k, v) =>
       {
          validator = v.AddOrUpdate(vary.ToString(), v.First().Value, (k1, v1) =>
         {
             return v1;
         });

         return v;
       });

       actionExecutedContext.Response.Headers.ETag = new EntityTagHeaderValue(string.Format(@"""{0}""", validator.Etag.Split(':')[0]), false);
       actionExecutedContext.Response.Headers.CacheControl = new CacheControlHeaderValue()
      {
          Private = true,
          MustRevalidate = true,
          MaxAge = TimeSpan.FromSeconds(0)
       };
       actionExecutedContext.Response.Content.Headers.LastModified = validator.LastModified;
       actionExecutedContext.Response.Headers.Vary.Add("Accept");
       actionExecutedContext.Response.Headers.Vary.Add("Accept-Charset");
    }
    else if ((actionExecutedContext.Request.Method == HttpMethod.Post ||
                actionExecutedContext.Request.Method == HttpMethod.Put ||
                actionExecutedContext.Request.Method == HttpMethod.Delete) &&
                (actionExecutedContext.Response.StatusCode == HttpStatusCode.OK ||
                 actionExecutedContext.Response.StatusCode == HttpStatusCode.NoContent ||
                 actionExecutedContext.Response.StatusCode == HttpStatusCode.Created))
    {
       ConcurrentDictionary<string, CacheValidator> validator;
       _cache.TryRemove(key, out validator);
    }
 }

 base.OnActionExecuted(actionExecutedContext);
}

The logic of the cache invalidation (and population) is as below.
1. Get the resource key, which is the route segment of the controller, ignoring any parameter.
2. If the request method is GET, its URL matches the key, and its corresponding response status is OK (200), then try to add or update the resource in the cache and generate the required caching headers in the response.
3. Any POST, PUT or DELETE operation with an OK (200), NoContent (204) or Created (201) status removes the resource from the cache.
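The `GetCacheKey` helper referenced above is not shown in the post; here is a hypothetical sketch (my assumption about its behavior, inferred from how the key is compared against `RequestUri.AbsolutePath`) that derives the key from the route prefix and controller name:

```csharp
using System;

static class CacheKeys
{
    // Hypothetical sketch: derive the resource key from the route prefix and
    // controller name, ignoring any id segment, so "api/employees" and
    // "api/employees/1" share one cache entry for invalidation purposes.
    public static string GetCacheKey(string routePrefix, string controllerName)
    {
        return (routePrefix.Trim('/') + "/" + controllerName).ToLowerInvariant();
    }
}

class Demo
{
    static void Main()
    {
        string key = CacheKeys.GetCacheKey("api", "Employees");
        Console.WriteLine(key); // api/employees

        // A GET to /api/employees matches the key and gets cached;
        // /api/employees/1 does not, which is why only the collection
        // action is a caching candidate in this design.
        Console.WriteLine("api/employees/1".Trim('/').Equals(key,
            StringComparison.OrdinalIgnoreCase)); // False
    }
}
```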

The cache validation part will be done through OnActionExecuting method.

public override void OnActionExecuting(HttpActionContext actionContext)
{
      if (actionContext.Request.Method == HttpMethod.Get)
     {
        var key = GetCacheKey(actionContext.ControllerContext);
        var etags = actionContext.Request.Headers.IfNoneMatch;
        ConcurrentDictionary<string, CacheValidator> validators;

        IContentNegotiator negotiator = actionContext.Request.GetConfiguration().Services.GetService(typeof(IContentNegotiator)) as IContentNegotiator;
        ContentNegotiationResult result = negotiator.Negotiate(ObjectType, actionContext.Request, actionContext.Request.GetConfiguration().Formatters);

        string varyKey = string.Format("{0}:{1}", result.MediaType.MediaType, result.MediaType.CharSet);
        CacheValidator validator;

        if (_cache.TryGetValue(key, out validators) &&
         key.Equals(actionContext.Request.RequestUri.AbsolutePath.Trim('/'), StringComparison.OrdinalIgnoreCase) &&
         validators.TryGetValue(varyKey, out validator) &&
         (etags.Any((e) => { return (!string.IsNullOrEmpty(e.Tag) &&
                                     e.Tag.Trim('\"') == validator.Etag); }) ||
         actionContext.Request.Headers.IfModifiedSince.HasValue &&
         actionContext.Request.Headers.IfModifiedSince.Value.UtcDateTime.ToString() == validator.LastModified.UtcDateTime.ToString()))
       {
            actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.NotModified);
            actionContext.Response.Headers.CacheControl = new CacheControlHeaderValue()
           {
              Private = true,
              MustRevalidate = true,
              MaxAge = TimeSpan.FromSeconds(0)
           };
      }
   }

    base.OnActionExecuting(actionContext);
}

The logic of the cache validation is as below.
1. Get the resource key, which is the route segment of the controller, ignoring any parameter.
2. Since we apply Accept and Accept-Charset as parts of the cache key, we need to find the result of content negotiation (conneg) for the current request; thanks to Filip W. for his great article on it.
3. Combined with the resource key, we can now validate the cache through If-None-Match against the ETag, or If-Modified-Since against the Last-Modified attribute.

Lastly, since the conneg process (2) requires the CLR type to be known in advance, the easiest solution I could think of is exposing a Type property that is set when declaring the attribute on the controller.

[Cacheable(ObjectType=typeof(Employee))]
public class EmployeesController : ApiController
{
   ...
}




Figure 1. 1st request (200 OK).



Figure 2. 2nd request with If-None-Match: "4a88d17c-5aaf-43ea-853b-f9aefd2e00cb"  and Accept: application/json (304 Not Modified).



Figure 3. 3rd request with If-None-Match: "4a88d17c-5aaf-43ea-853b-f9aefd2e00cb" and Accept: application/xml (200 OK).




Figure 4. 4th request with If-None-Match: "4a88d17c-5aaf-43ea-853b-f9aefd2e00cb" and Accept: application/xml (304 Not Modified).



Figure 5. 5th request with If-Modified-Since: Thu, 06 Feb 2014 08:14:19 GMT and Accept: application/json (304 Not Modified).



Figure 6. 6th request to employees/1 results in Cache-Control: no-cache.



Figure 7. 7th request with If-None-Match: "4a88d17c-5aaf-43ea-853b-f9aefd2e00cb"and Accept: whatever (304 Not Modified).



Figure 8. 8th request with POST results in cache invalidation for api/employees (201 Created).



Figure 9. 9th request with If-None-Match: "4a88d17c-5aaf-43ea-853b-f9aefd2e00cb" (200 OK).



Figure 10. Load testing of 1000 concurrent requests with caching gave an average of 376 served requests per second.




Figure 11. Load testing of 1000 concurrent requests without caching gave an average of 315 served requests per second.

Well, apart from data transfer, we don't really see a significant performance difference here, since the number of records is only 4. However, when we deal with hundreds or thousands of records, the HTTP cache will certainly outperform the uncached version. Therefore, be wise when choosing resources to cache, since caching has its own costs in terms of complexity that might affect performance.


Pete Smith: Tracking changes to complex viewmodels with Knockout.JS Part 2 – Primitive Arrays

In the first part of this series, I talked about the challenges of tracking changes to complex viewmodels in knockout, using isDirty() (see here and here) and getChanges() methods.

In this second part, I’ll go through how we extended this initial approach so we could track changes to array elements as well as regular observables. If you haven’t already, I suggest you have a read of part one as many of the examples build on code from the first post.

Starting Simple

For the purposes of this post we are only considering ‘Primitive’ arrays… these are arrays of values such as strings and numbers, as opposed to complex objects with properties of their own. Previously we created an extender that allows us to apply change tracking to a given observable, and we’re using the same approach here.

We won’t be re-using the existing extender, but we will use some of the same code for iterating over our model and applying it to our observables. In that vein, here’s a skeleton for our change tracked array extender… it has a similar structure to our previous one:

You should notice a few differences however:

  • Two observable arrays are being exposed in addition to the isDirty() flag – added and removed
  • The getChanges() method returns a complex object also containing adds and removes

As this functionality was developed with HTTP PATCH in mind, we’re assuming that we will need to track both the added items and the removed items, so that we can only send the changes back to the server. If you aren’t using PATCH, it can be sufficient just to know that a change has occurred and then save your data by replacing the entire array.

Last points to make – we’re treating any ‘changes’ to existing elements as an add and then a delete… these are just primitive values after all. Also the ordering of the elements is not going to be tracked (although this is possible and will be covered in the next post).

Array subscriptions

Prior to Knockout 3.0, we had to provide alternative methods to the usual push() and pop() so that we could keep track of array elements… subscribing to the observableArray itself would only notify you if the entire array was replaced. As of Knockout 3.0 though, we now have a way to subscribe to array element changes themselves!

We’re using the latest version for this example, but check the links at the bottom of the third post in the series if you are interested in the old version.

Let’s begin to flesh out the skeleton a little more:

Now we’ve added an arrayChange subscription, we’ll be notified whenever anyone pops, pushes or even splices our array. In the event of the latter, we’ll receive multiple changes so we have to cater for that eventuality.

We’ve deferred the actual tracking of the changes to private methods, addItem() and removeItem(). The reason for this becomes clear when you consider what you’d expect to happen after performing the following operations:

In order to achieve this behavior, we first need to check that the item in question has not already been added to one of the lists like so:

Applying this to the view model

A change tracked primitive array is unlikely to be very useful on its own, so we need to make sure that we can track changes to an observable array regardless of where it appears in our view model. Let's revisit the code from our previous sample that traversed the view model and extended all the observables it encountered:

In order to properly apply change tracking to our model, we need to detect whether a given observable is in fact an observableArray, and if so then apply the new extender instead of the old one. This is not actually as easy as it sounds… based on the status of this pull request, Knockout seems to provide no mechanism for doing this (please correct me if you know otherwise!).

Luckily, this thread had the answer… we can simply extend the observableArray “prototype” by adding the following line somewhere in global scope:

ko.observableArray.fn.isObservableArray = true; 

Assuming that’s in place, our change becomes very simple:

We don’t need to change any of the rest of the wire-up code from the first sample, as we are already working through our view model recursively and letting applyChangeTrackingToObservable do its thing.

That’s all the code we needed, now we can take it for a spin!

Summary

We’ve seen how we can make use of the new arraySubscriptions feature in Knockout 3.0 to get notified about changes to array elements. We made sure that we didn’t get strange results when items were added and then removed again or vice-versa, and then integrated the whole thing into a change tracked viewmodel.

In the third and final post in this series, we’ll go the whole hog and enable change tracking for complex and nested objects within arrays.

You can view the full code for this post here: https://gist.github.com/Roysvork/8743663, or play around with it in jsFiddle!

Pete



stevanus w. : #webapi101: Including ModelState Errors in Exception Tracing

After authorization filters are invoked, the model binding process happens, followed by any defined action filters. Therefore an action filter is the first place to inspect the ModelState status and act on it; for tracing we can iterate over the errors and populate ExceptionTraceRecords.

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = false)]
public class InvalidModelStateTraceFilter : ActionFilterAttribute
{
   public override void OnActionExecuting(HttpActionContext actionContext)
   {
      if (actionContext.Request.Properties.Keys.Contains("webapi101.requestresponsetracerecord") &&
          !actionContext.ModelState.IsValid)
      {
         var record = (RequestResponseTraceRecord)actionContext.Request.Properties["webapi101.requestresponsetracerecord"];

         foreach (var k in actionContext.ModelState.Keys)
        {
           foreach (var e in actionContext.ModelState[k].Errors)
          {
              record.Exceptions.Add(new ExceptionTraceRecord()
             {
                RequestId = record.RowId,
                ExceptionType = k,
                ExceptionMessage = e.ErrorMessage
             });
          }
        }

        actionContext.Response = actionContext.Request.CreateErrorResponse(HttpStatusCode.BadRequest, actionContext.ModelState);
      }
   }
}

Registering the filter globally in WebApiConfig.cs:
config.Filters.Add(new InvalidModelStateTraceFilter());

Now assume I have the following model marked with some validation attributes:
public class Employee
{
   public int EmployeeID { get; set; }

   [Required]
   [StringLength(25)]
   public string FirstName { get; set; }
        
   [Required]
   [StringLength(25)]
   public string LastName { get; set; }
}

Posting a new employee with a FirstName longer than 25 characters and a NULL LastName yields the following exceptions in the tracing record.


Figure 1. Invalid employee.

It's all good; however, the error message whose key is "employee" is not really needed here, and it is thrown by default by the formatter itself when it can't find required member(s) during the deserialization process. Fortunately, we can suppress this behavior by implementing our own IRequiredMemberSelector that ignores such validation.

public class NotRequiredMemberSelector : IRequiredMemberSelector
{
   public bool IsRequiredMember(MemberInfo member)
   {
      return false;
   }
}

And replace the default behavior of all formatters in WebApiConfig.cs.
foreach (var f in config.Formatters)
{
   f.RequiredMemberSelector = new NotRequiredMemberSelector();
}

And finally reposting the new employee.


Figure 2. Invalid employee without formatter's error.



Figure 3. ModelState errors are logged.

This will complete the required infrastructure for tracing/logging request and response including ModelState errors, business and unhandled exception. All the codes and changes from the previous release will be made available in the next release.


stevanus w. : #webapi101: Digest Authentication Handler (Deleting Expired Nonces)

A simple implementation of a timer-based scheduler to delete expired nonces from the nonce table periodically based on a specified interval.

The stored procedure:

CREATE PROCEDURE dbo.usp_DeleteExpiredNonce
(@Now AS DATETIME,
 @TimeoutInterval AS INT)
AS
BEGIN
SET NOCOUNT ON;

DELETE FROM dbo.Nonce
WHERE DATEDIFF(second, Expiration, @Now) >= @TimeoutInterval;
END

The codes:

public static void ScheduleExpiredNonceDeletion()
{
   Timer scheduler = new Timer(SecurityConfiguration.DeleteNonceInterval);
   scheduler.Elapsed += (source, e) =>
   {
      try
      {
         SqlCommand command = SqlClient.GetCommand(ConfigurationManager.ConnectionStrings["webapi101"].ConnectionString);
         command.CommandType = CommandType.StoredProcedure;
         command.CommandText = "dbo.usp_DeleteExpiredNonce";
         command.Parameters.Add("@Now", SqlDbType.DateTime).Value = DateTime.Now;
         command.Parameters.Add("@TimeoutInterval", SqlDbType.Int).Value = SecurityConfiguration.LoginTimeout;
         command.ExecuteNonQuery(true);
      }
      catch { }
   };
   scheduler.Enabled = true;
}

And lastly, firing up the scheduler from WebApiConfig.cs:

NonceRepository.ScheduleExpiredNonceDeletion();

This will complete the infrastructure required for supporting digest authentication, and all the codes and changes since the previous release will be made available in the next release.


stevanus w. : #webapi101: Digest Authentication Handler (Local-Nonce Repository)

Storing generated nonces persistently during the initial request will weaken our system in the case of a DoS attack. Therefore, here is an update of the previous solution utilizing local nonce storage that uses memory instead of a database table (preventing unused nonce rows) during the initial authentication handshake. The storage implementation is a circular buffer based on a fixed-size array with a pointer, working in conjunction with a dictionary for better lookup performance. At the moment the buffer size is set to 1000, but you may adjust it according to your application's needs.

How does it work?
1. On the initial client request, the server generates a random nonce and, instead of storing it in the database, stores it in the local buffer.
2. After receiving the authentication parameters back, which include the previously generated nonce, the server checks whether there is currently a matching nonce in the database. If none is found, this must be the first authentication related to the nonce and it should be found locally. The process continues by persisting the nonce and its counter to the database for subsequent uses. Note that the local storage is a circular buffer, so the nonce will eventually get overwritten.

In the case of a DoS attack we may still not be able to serve certain legitimate requests, since their nonces will get swept away too quickly; however, we have saved our database from being bullied with, say, thousands of requests per second, which would otherwise imply thousands of unused nonce rows inserted per second.

If we want to be more sophisticated, we may even host a Windows service as the local storage; it may internally use more than one buffer, and we may embed a flag in the generated nonce to denote which buffer it belongs to. This approach may also work in a web farm scenario if the service is hosted on a dedicated server.
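As an illustration of the idea (not the post's actual implementation), a minimal fixed-size circular buffer backed by an array and a dictionary might look like this:

```csharp
using System;
using System.Collections.Generic;

// Sketch of a fixed-size circular nonce buffer: the array gives bounded
// memory with overwrite-oldest semantics, the dictionary gives O(1) lookup.
// A real implementation would also need locking for concurrent requests.
class NonceBuffer
{
    private readonly string[] _slots;
    private readonly Dictionary<string, int> _index;
    private int _next;

    public NonceBuffer(int capacity)
    {
        _slots = new string[capacity];
        _index = new Dictionary<string, int>(capacity);
    }

    public void Add(string nonce)
    {
        // Evict whatever currently occupies the slot being overwritten.
        string old = _slots[_next];
        if (old != null) _index.Remove(old);

        _slots[_next] = nonce;
        _index[nonce] = _next;
        _next = (_next + 1) % _slots.Length;
    }

    public bool Contains(string nonce) => _index.ContainsKey(nonce);
    public int Count => _index.Count;
}

class Demo
{
    static void Main()
    {
        var buffer = new NonceBuffer(1000);
        for (int i = 0; i < 1500; i++) buffer.Add("nonce-" + i);

        Console.WriteLine(buffer.Count);                  // stays at 1000
        Console.WriteLine(buffer.Contains("nonce-1499")); // True  (recent)
        Console.WriteLine(buffer.Contains("nonce-0"));    // False (overwritten)
    }
}
```

This mirrors the behavior shown in Figure 2, where Dictionary.Count stays at 1000 after more than 1000 requests.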



Figure 1. Local nonce implementation.



Figure 2. Dictionary.Count stays at 1000 after 1000+ requests.


Pete Smith: Increasing loop performance by iterating two intersecting lists simultaneously

Disclaimer

This brief post covers a micro-optimisation that we employed recently in an ASP.NET Web API app. If you’re looking to solve major performance problems or get a quick win on small tasks, this isn’t going to be very useful to you. However, if you’ve nailed all the big stuff and are processing a large batch (think millions) of records together, then these small inefficiencies really begin to add up. If this applies to you, then you may find the following solution useful.

It’s possible that many people have thought of this problem and provided a solution before… in fact I’m very sure they have, as I’ve googled it and so many Stack Overflow posts came up that I’m not going to bother linking to any of them. However, no-one seemed to have made anything that was simple, re-usable and easy to integrate… so this is my take on it.

Finally, I’ve attempted to do some napkin maths. It’s probably wrong in some way so please correct me.

Compound Iteration

How often have you written code like this? 

This simplified sample shows how you might validate an HTTP PATCH request against some metadata. It seems innocuous, right?
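The embedded sample isn't reproduced here, so as a rough, hedged reconstruction of the nested-loop shape being described (all names are illustrative, not the post's actual code):

```csharp
using System.Collections.Generic;

// Illustrative stand-in for the post's field metadata type.
class FieldDefinition { public string Name; /* validation metadata... */ }

static class NaiveValidation
{
    // For each field definition, scan every key in the request body looking
    // for a match: O(fields × keys) comparisons in the worst case.
    public static void Validate(IList<FieldDefinition> fields,
                                IDictionary<string, object> body)
    {
        foreach (var field in fields)
        {
            foreach (var key in body.Keys)
            {
                if (key == field.Name)
                {
                    // validate body[key] against the field's metadata here
                    break;
                }
            }
        }
    }
}
```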

But say you have 1000 fields to validate, and maybe half of them are present in the body of your request. In the worst case we’ll have 500 iterations of the outer fields loop where we’ll then have to iterate through 500 dictionary keys just to find out that the field doesn’t exist in the data set.

Even in the optimal case for the remaining fields that do exist, you’ll have to iterate through 250 keys on average before finding a match, so for an ‘average’ case we could be looking at:

(500 * 500) + (500 * 250) = 375,000

As an ‘average’ case, it could potentially be a lot less than this, or potentially a lot more. Either way, imagine trying to bulk validate 100,000 records and… yikes!

Sort your data, and enter the Efficient Iterator

Provided your numbers are big enough, it’s much more efficient to sort your data first and then step through both collections simultaneously. If your field info comes from, say, a SQL table with a clustered index, where an ORDER BY is essentially free, it’s even more likely that this will result in a significant speedup.

Essentially, such an algorithm takes the first item from each of the two lists and compares them. If item A comes before item B in the sort order, you advance one item in list A – or vice versa – until the two are found to match (or you run out of items). At each step you can take action, whether the value is a match or an orphan on either side.

Now the worst-case iteration count is merely the sum of the elements in the two lists. So in our average case, just 1,500. That’s a 250x reduction… over two orders of magnitude!
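A minimal sketch of such a simultaneous walk over two sorted lists might look like this. This is illustrative only, not the Gist's actual code; names and signatures are mine, and both lists must already be sorted by the same key:

```csharp
using System;
using System.Collections.Generic;

public static class SortedWalk
{
    // Walks two sorted sequences together, firing a callback for matches
    // and for orphans on either side. O(n + m) comparisons in total.
    public static void WalkTogether<T>(
        IEnumerable<T> left, IEnumerable<T> right, IComparer<T> comparer,
        Action<T, T> onMatch, Action<T> onLeftOnly, Action<T> onRightOnly)
    {
        using (var l = left.GetEnumerator())
        using (var r = right.GetEnumerator())
        {
            bool hasL = l.MoveNext(), hasR = r.MoveNext();
            while (hasL && hasR)
            {
                int cmp = comparer.Compare(l.Current, r.Current);
                if (cmp == 0)
                {
                    onMatch(l.Current, r.Current);
                    hasL = l.MoveNext();
                    hasR = r.MoveNext();
                }
                else if (cmp < 0)  // orphan on the left; advance list A
                {
                    onLeftOnly(l.Current);
                    hasL = l.MoveNext();
                }
                else               // orphan on the right; advance list B
                {
                    onRightOnly(r.Current);
                    hasR = r.MoveNext();
                }
            }
            // Drain whichever list has items remaining.
            while (hasL) { onLeftOnly(l.Current); hasL = l.MoveNext(); }
            while (hasR) { onRightOnly(r.Current); hasR = r.MoveNext(); }
        }
    }
}
```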

Show me the code

Without further ado, here’s a Gist that you can use to do this right now…

Take a look at these MSpec tests for information on how to use it. You’ll also need to use nullable types if you want to work with non-reference types, but that should be straightforward. Thanks to Tommy Carlier for his amendments to the sample to allow any type of IEnumerable and to support value types!

Questions are welcome in the comments… but please refrain from unhelpfully critiquing the ‘design’ of the simplified problem sample ; ) Enjoy iterating efficiently!

Don’t forget that you’ll have to sort both lists before passing them to the efficient iterator!

Pete



Filip Woj: Return types, action parameters and data annotations now available in Web API 2.1 Help Page

On Friday Microsoft released version 2.1 of Web API (along with MVC 5.1 and Web Pages 3.1). The release announcement was made yesterday and can be read here – but pretty much all of the new features have already … Continue reading

The post Return types, action parameters and data annotations now available in Web API 2.1 Help Page appeared first on StrathWeb.


Ali Kheyrollahi: Am I going to keep supporting my open source projects?

I have recently been asked by a few whether I will be maintaining my open source projects, mainly CacheCow and PerfIt.

I think since my previous blog post, some have been thinking that I am going to completely drop C#, Microsoft technologies and whatever I have been building so far. There are also some concerns over the future of the Open Source projects and whether they will be pro-actively maintained.

If you read my post again, you will find that I twice stressed that I will keep doing what I am doing. In fact I have lately been doing a lot of C# and Azure – like never before. I am really liking Windows Azure and I believe it is a great platform for building cloud applications.

So in short, the answer is a BIG YES. I just pushed some code to CacheCow and released CacheCow.Client 0.5.0-alpha, which has a few improvements. I have plans to finally release 0.5, but it has been delayed due to my engagement with another project I am working on – as I said, C# and Azure.

So please keep PRs, Issues, Bugs and wish lists coming in.

With regard to my decision and recent blog post, this is a strategy change, hoping to position myself within two years to be able to do non-Microsoft work equally well if I have to. I hope it is clear now!



stevanus w. : #webapi101: Proper Use of Authentication Handler for Partially-Protected APIs

In some cases we may want to secure only particular APIs (marked with the Authorize attribute) using BasicAuthenticationHandler or DigestAuthenticationHandler, leaving the rest of the APIs public. However, since these are implemented as message handlers, we can improve performance (particularly with DigestAuthenticationHandler) by not letting them kick in for public requests. This can be achieved by defining a dedicated route for the protected APIs and registering the authentication handler only for that route (known as per-route message handlers), as opposed to registering it globally through HttpConfiguration.MessageHandlers.

HttpServer/HttpSelfHostServer
...
DelegatingHandler(s)
...
HttpRoutingDispatcher = = => AdminApiRoute
HttpControllerDispatcher < = = = BasicAuthenticationHandler
...
Controller(s)

As we can see from the above illustration, when the request reaches HttpRoutingDispatcher, it is responsible for picking the right route and inspecting whether a handler is attached to it; if not, it delegates to HttpControllerDispatcher. Since the AdminApiRoute has BasicAuthenticationHandler attached, when that route is picked the authentication handler kicks in next, before delegating to HttpControllerDispatcher (via its InnerHandler property).

An example of an admin route that has to be protected:

config.Routes.MapHttpRoute(
   name: "AdminApi",
   routeTemplate: "api/admin/{controller}/{id}",
   defaults: new { id = RouteParameter.Optional },
   constraints: new { controller = "users" },
   handler: new BasicAuthenticationHandler()
                 {
                     InnerHandler = new HttpControllerDispatcher(config)
                 }
);

An example of a users controller that is accessible only to administrators (we would need to further inspect the role of the authenticated user, which is not covered here):

[Authorize]
public class UsersController : ApiController

{
     public IEnumerable<User> Get()
     {
         var users = from u in Membership.GetAllUsers().Cast<MembershipUser>()
                           select new User() { Username = u.UserName, Email = u.Email };
    
         return users;
     }
}


Now, only requests to api/admin/users will have BasicAuthenticationHandler kick in.

Furthermore, you now have the option of rejecting unauthorized requests directly inside the authentication handler, short-circuiting the pipeline.


stevanus w. : #webapi101: "The context cannot be used while the model is being created."

Certainly this will not be new to everyone, but it is what I learned today.

It can happen when a single instance of DbContext is used in a concurrent environment such as a web application. The reason is that the very first request referencing the class instantiates the single instance, which in turn initializes the model and the context. When a concurrent request accesses the same instance while the context is still being initialized, the exception is thrown. When each request has its own instance of the context, however, everything executes fine as expected, since during model creation the model is locked down and later cached (http://msdn.microsoft.com/en-us/library/system.data.entity.dbcontext.onmodelcreating(v=vs.113).aspx).

In my previous solution, this scenario occurred when I instantiated only a single instance of the repository through dependency injection inside the message handler registration: config.MessageHandlers.Add(new DigestAuthenticationHandler(NinjectConfig.Factory.Get<IRepository<Nonce>>()));.

This resulted in many unexpected behaviours during concurrency testing: the table got locked, requests got serialized, and so on. And the test itself, shamefully, used only 10 concurrent requests, which did not even reach 1 request per second.


Figure 1. Benchmark testing of a single instance of DbContext.


The fix for the issue is to resolve the repository dependency locally, per request: var repository = request.GetDependencyScope().GetService(typeof(IRepository<RequestResponseTraceRecord>)) as IRepository<RequestResponseTraceRecord>;.
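In context, the per-request resolution might sit inside the handler roughly like this. This is a hedged sketch: the handler name is mine, and IRepository<T>/RequestResponseTraceRecord stand in for the solution's own types.

```csharp
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http;

// Illustrative handler showing per-request dependency resolution.
public class TracedHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Resolved from the per-request dependency scope, so concurrent
        // requests never share a single DbContext-backed repository.
        var repository = request.GetDependencyScope()
            .GetService(typeof(IRepository<RequestResponseTraceRecord>))
            as IRepository<RequestResponseTraceRecord>;

        // ... use repository for tracing/persistence here ...

        return await base.SendAsync(request, cancellationToken);
    }
}
```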

With that change, 1000 concurrent requests did not blow up the application, averaging 84 requests per second.


Figure 2. Benchmark testing of multiple instances of DbContext.


And thanks to TAP, implementing DbContext.SaveChanges() asynchronously significantly increased the average to 195 requests per second for 1000 concurrent requests.


Figure 3. Benchmark testing of multiple instances of DbContext with async.


Pete Smith: Tracking changes to complex viewmodels with Knockout.JS

As part of a project I’ve been working on for a client, we’ve decided to implement HTTP PATCH in our API for making changes. The main client consuming the API is a web application driven by Knockout.JS, so this meant we had to find a way to figure out what had changed on our view model, and then send just those values over the wire.

There is nothing new or exciting about this requirement in itself. The question has been posed before, and it was the subject of a blog post way back in 2011 by Ryan Niemeyer. What was quite exciting, however, was that our solution ended up doing much more than just detecting changes to viewmodels. We needed to keep tabs on individual property changes, changes to arrays (adds/deletes/modifications), changes to child objects and even changes to child objects nested within arrays. The result was a complete change tracking implementation for Knockout that can process not just one object but a complete object graph.

In this two part post I’ll attempt to share the code, the research and the story of how we arrived at the final implementation.

Identifying that a change has occurred

The first step was to get basic change tracking working given a view model with observable properties containing values – no complex objects.

Initial googling turned up the following approaches as a starting point:

http://www.knockmeout.net/2011/05/creating-smart-dirty-flag-in-knockoutjs.html
http://www.johnpapa.net/spapost10/
http://www.dotnetcurry.com/showarticle.aspx?ID=876

These methods all involved some variation on adding an isDirty computed observable to your view model. Ryan’s example stores the initial state of the object when it is defined which can then be used as a point of comparison to figure out if a change has occurred.

Suprotim’s approach is based on Ryan’s method, but instead of storing a JSON snapshot of the initial object (which could potentially be very large for complex view models), it merely subscribes to all the observable properties of the view model and sets the isDirty flag accordingly.

Both of these are very lightweight and efficient ways of detecting that a change has occurred, but as detailed in this thread they can’t pinpoint exactly which observable caused the change. Something more was needed.

Tracking changes to simple values

After a bit more digging, a clever solution to the problem of tracking changes to individual properties emerged, as described by Stack Overflow one-hit wonder Brett Green in the answer to this question, and also in slightly more detail on his blog.

This made use of knockout extenders to add properties to the observables themselves; an overall isDirty() method for the view model as a whole could then be provided by a computed observable. This post almost entirely formed the basis for the first version. After a bit of restructuring, pretty soon we had an implementation that allows us to track changes to a flat view model:

An example of utilising this change tracking is as follows:

Detecting changes to complex objects

The next task was to ensure we could work with properties containing complex objects and nested observables. The issue here is that the isDirty property of an observable is only set when its contents are replaced. Modifying a child property of an object within an observable will not trigger the change tracking.

This thread on google groups seemed to be going in the right direction and even had links to two libraries already built:

  • Knockout-Rest seemed promising, but although this was able to detect changes in complex properties and even roll them back, it still could not pinpoint the individual properties that triggered the change.
  • EntitySpaces.js seemed to contain all the required elements, but it relied on generated classes, and the change tracking features were too tightly coupled to its main use as a data access framework. At the time of writing it had not been updated for two years.

In the end we came up with a solution ourselves. In order to detect that a change had occurred further down the graph, we modified the existing isDirty extension member so that in the event that the value of our observable property was a complex object, it should also take into account the isDirty value of any properties of that child object:

Now when extending an observable to apply change tracking, if we find that the initial value is a complex object we also iterate over any properties of our child object and recursively apply change tracking to those observables as well. We also set up subscriptions to the resulting isDirty flags of the child properties to ensure we set the hasDirtyProperties flag on the target.

Tracking individual changes within complex objects

After the previous modifications, our change tracking now behaves like this:

Obviously there’s something missing here… we know that the Skills object has been modified, and we also technically know which property of the object was modified, but that information isn’t being respected by getChangesFromModel.

Previously it was sufficient to pull out changes by simply returning the value of each observable. That’s no longer the case so we have to add a getChanges method to our observables at the same level as isDirty, and then use this instead of the raw value when building our change log:

Now our getChangesFromModel will operate recursively and produce the results we’d expect. I’d like to draw your attention to this section of the above code in particular:

There’s a reason we’ve been using separate observables to track hasValueChanged and hasDirtyProperties: in the event that we have replaced the contents of the observable wholesale, we must pull out all the values.

Here’s the change tracking complete with complex objects in action:

Summary

In this post we’ve seen how we can use a knockout extender and an isDirty observable to detect changes to individual properties within a view model. We’ve also seen some of the potential pitfalls you may encounter when dealing with nested complex objects and how we can overcome these to provide a robust change tracking system.

In the second part of this post, we’ll look at the real killer feature… tracking changes to complex objects within arrays.

You can view the full code for the finished example here: https://gist.github.com/Roysvork/8744757 or play around with the jsFiddle!

Pete

Edit: As part of the research for this post, I did come across https://github.com/ZiadJ/knockoutjs-reactor which takes a very similar approach and even handles arrays. It’s a shame I had not seen this when writing the code as it would have been quite useful.



stevanus w. : #webapi101: Solution 12/01/2014 (Deprecated)

Please feel free to contribute to, use and/or modify any part of the code for personal or commercial use.

Please make sure you do your own testing before using any part of the code.

I will keep updating the solution with bug fixes, enhancements and new features.


Figure 1. webapi101 solution.

Source code


stevanus w. : #webapi101: Digest Authentication Handler (qop="auth")

An example of HTTP digest authentication (a challenge-response protocol) using the Membership provider and a message handler.


The authentication steps:
(1) Client requests:
     GET /api/employees HTTP/1.1

(2) Server responds:
     HTTP/1.1 401 Unauthorized
     WWW-Authenticate: Digest realm="webapi101", nonce="68507f78286ade8f01af3d6973257c90", qop="auth"

     nonce is a secure random value generated by the server, typically 16 bytes.
     qop is the quality of protection; the value auth denotes authentication only.

(3) Client responds:
     GET /api/employees HTTP/1.1
Authorization: Digest username="admin", realm="webapi101", nonce="68507f78286ade8f01af3d6973257c90", uri="/api/employees", response="e504b09b57adc3813e8682b8920a2060", qop=auth, nc=00000001, cnonce="ec2038379d691c31"

     nonce is the server's nonce from step (2).
     nc is the client's incrementing request counter, used to protect against replay attacks.
     cnonce is the client's secure random value which, in conjunction with nc, is used to protect against chosen-plaintext attacks.
     response is computed as follows:
     HA1 = MD5(username:realm:password)
     HA2 = MD5(method:URI)
     response = MD5(HA1:nonce:nc:cnonce:qop:HA2)
     
(4) Server responds:
     HTTP/1.1 200 OK

     If the response the server computes from the Authorization parameters in step (3) equals the client's response (MD5(HA1:nonce:nc:cnonce:qop:HA2)), then the client is authenticated; otherwise step (2) is repeated with a new nonce.

For more information, please refer to HTTP Authentication: Basic and Digest Access Authentication.
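For illustration, the response computation from step (3) can be written out as follows. This is a sketch based on RFC 2617's qop="auth" scheme, not the solution's actual code; the class and method names are mine.

```csharp
using System.Security.Cryptography;
using System.Text;

public static class DigestMath
{
    // Lowercase hex MD5 of a string, as digest auth requires.
    static string Md5Hex(string input)
    {
        using (var md5 = MD5.Create())
        {
            var hash = md5.ComputeHash(Encoding.UTF8.GetBytes(input));
            var sb = new StringBuilder();
            foreach (var b in hash) sb.Append(b.ToString("x2"));
            return sb.ToString();
        }
    }

    public static string ComputeResponse(
        string username, string realm, string password,
        string method, string uri,
        string nonce, string nc, string cnonce, string qop)
    {
        var ha1 = Md5Hex(username + ":" + realm + ":" + password);  // HA1
        var ha2 = Md5Hex(method + ":" + uri);                       // HA2
        // response = MD5(HA1:nonce:nc:cnonce:qop:HA2)
        return Md5Hex(string.Join(":",
            new[] { ha1, nonce, nc, cnonce, qop, ha2 }));
    }
}
```

The server performs the same computation from its stored credentials and compares the result to the client's response parameter.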

It is also important to note that since digest authentication does not require the user's password to be transmitted, we need to configure the Membership provider with enablePasswordRetrieval set to true and passwordFormat set to Encrypted.

The following steps will describe how to enable the digest authentication for your Web API:
(1) Add the following elements in your Web.config file.
     <configSections>
        ...
       <sectionGroup name="webapi101">
         <section name="security"
                       type="WebApi101.Security.Configuration.SecuritySection" />
       </sectionGroup>
       ...
    </configSections>
    ... 
    <webapi101>
       <security>
         <authentication realm="webapi101" loginTimeout="10" />
       </security>
    </webapi101>
     ...

(2) Register the DigestAuthenticationHandler in your WebApiConfig.cs.
   
 config.MessageHandlers.Add(new DigestAuthenticationHandler(NinjectConfig.Factory.Get<IRepository<Nonce>>()));

config.MessageHandlers.Add(new DigestAuthenticationHandler());

(3) Run the provided dbo.Nonce.sql file to set up the database and the Nonce table.


Figure 1. Digest authentication via browser.


Dominick Baier: Combining Thinktecture AuthorizationServer with Windows Integrated Authentication

One of the key features of AS is that you can combine it with arbitrary authentication methods. This basically allows you to layer OAuth2 and our application and authorization model over any identity management system.

Recently the question came up of which steps would be necessary to combine AS with plain Windows integrated authentication. Here's what I did:

  • Downloaded the latest source code version from github
  • Set up the vdir in IIS (IIS Express works as well) and enabled Windows authentication
  • Set the ASP.NET authentication mode to Windows
  • Removed the FAM as well as the system.identityModel configuration section

With that, we have AS configured for Windows authentication and have removed the WS-Federation plumbing. The last step is to set up the claims transformation logic to a) transform the Windows account name into the sub claim and b) set the AS administrator role if the Windows user is an AS administrator. This is done in global.asax:

protected void Application_PostAuthenticateRequest()
{
    if (HttpContext.Current.User.Identity.IsAuthenticated &&
        HttpContext.Current.User is WindowsPrincipal)
    {
        var svc = DependencyResolver
            .Current
            .GetService<IAuthorizationServerAdministratorsService>();
        var transformer = new NameIdToSubjectClaimsTransformer(svc);

        var newPrincipal = transformer.Authenticate(
            string.Empty, HttpContext.Current.User as ClaimsPrincipal);

        HttpContext.Current.User = newPrincipal;
        Thread.CurrentPrincipal = newPrincipal;

        var session = new SessionSecurityToken(newPrincipal);
        FederatedAuthentication
            .SessionAuthenticationModule
            .WriteSessionTokenToCookie(session);
    }
}

This code will transform the WindowsPrincipal to an AS principal on the first authenticated request. From that point the SAM will use the transformed principal via the session token / cookie.

If you'd like to use the resource owner credential flow, you'd also need to implement a Windows-specific version of the IResourceOwnerCredentialValidation interface (and wire it up in autofac.config). I'll leave that as an exercise – but the general approach would be to use Win32 LogonUser to verify the Windows username and password (instead of the WS-Trust default implementation).

HTH


Filed under: ASP.NET, AuthorizationServer, OAuth, WebAPI


stevanus w. : #webapi101: Request Response Tracing With Exceptions

An update to Tracing Request and Response With Exception to trace more exceptions. Since a message handler can also throw an exception, it is necessary to include that as an exception record as well, while still persisting any exception thrown from the controller.

// RequestResponseTraceRecord.cs
public class RequestResponseTraceRecord : IIdentifiable
{
   ...
   public ICollection<ExceptionTraceRecord> Exceptions { get; set; }
   ...
}


// RequestResponseTraceHandler.cs
...
execute: async () =>
{
   // Don't let tracing add additional latency
   try
   {
      response = await base.SendAsync(request, cancellationToken);
   }
   catch (Exception e)
   {
      response = new HttpResponseMessage(HttpStatusCode.InternalServerError)
      {
         Content = new StringContent(e.Message)
      };

      // Unhandled exception
      record.Exceptions.Add(new ExceptionTraceRecord()
                            { RequestId = record.RowId,
                              ExceptionType = e.GetType().Name,
                              ExceptionMessage = e.Message });
   }
},
...




Figure 1. Multiple exceptions traced for a single request.


Darrel Miller: Web Standards Search Engine

 

I always struggle to use the IETF and W3C web sites to find the web standards I am looking for. Google and Bing work OK if you can hit the right keywords, but if the terms you are searching for have other, more popular usages, you have to wade through irrelevant results.

Google has a service that lets you create a custom search engine in which you curate which sites are included in the results. It also has a nice feature that lets you categorize sites.

I have created one of these search engines and added what I believe are the important resources for building web applications. Give it a try here: http://bit.ly/standards-search

I have tried to categorize sites into “Standards”, “Drafts”, “Mailing Lists” and “IRC”.  It is quite amazing how many of the web’s design decisions are made in mailing lists and IRC channels.  They can be a hugely valuable resource for finding out why something is the way it is.

The list below shows the sites that I currently have included.  If there are other sites that you believe should be included then send me a link via twitter. (@darrel_miller)

Standards

*.spec.whatwg.org/*
http://www.ietf.org/rfc*
http://www.iana.org/assignments/*
http://microformats.org/wiki/*
http://www.w3.org/TR/*   

Drafts

http://tools.ietf.org/html/*   
www.ietf.org/id/*   
*.spec.whatwg.org/*

Mailing Lists

http://www.ietf.org/mail-archive/*   
http://lists.w3.org/Archives/Public/*   
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/
https://groups.google.com/d/msg/collectionjson
https://groups.google.com/d/msg/hypermedia-web
https://groups.google.com/d/msg/hal-discuss
http://groups.yahoo.com/neo/groups/rest-discuss/conversations/messages

IRC Logs

http://www.w3.org/*-irc
http://krijnhoetmer.nl/irc-logs/
http://rest.hackyhack.net/


Ben Foster: Proxying HttpClient requests through Fiddler

To proxy external requests set the Proxy property of HttpClientHandler:

    private static HttpClient client 
        = new HttpClient(new HttpClientHandler()
        {
            Proxy = new WebProxy("localhost", 8888)
        });

This will not work for localhost requests, since these bypass proxies by default. Instead, you can suffix the request URL with .fiddler. This is why it is a good idea to store your API endpoints in app/web.config:

<add key="ApiEndpoint" value="http://localhost.fiddler:49978" />

