My day job has primarily consisted of migrating an old-school ASP.NET WebForms website to a new ASP.NET MVC application.  This has involved a massive layout/structure redesign, database overhaul, and a lot of custom coding.

I'm mostly happy with how I structured the MVC-based content management system.  It's very similar to WordPress ... only written in C# and using a MS SQL database for the back end.  It also borrows heavily from designs found in DotNetNuke, BlogEngine.NET, and just about every other CMS I've ever used.

But the trickiest - and coolest - part came when my boss asked that it be fully backwards compatible.

Our flagship software application is heavily integrated with the web.  Users update their subscriptions through a web service.  Documents are downloaded from a web service.  System updates are delivered through a web service.

Unfortunately, all of this was set up on a legacy server that was physically in our office.  Since we're moving to a distributed content hosting system, we needed an easy way not just to retrieve the data (that's handled already) but also to send updates to these web services.

It had to be secure.

It had to be fast.

It had to not be FTP.

Last Time 'Round

In my last job, I built a web service network on top of a traditional challenge-response authentication system.  Every client application accessed each web service on behalf of a specific user (with a username and password).  The client would ping the server and ask for a security token.  Then it would hash its credentials with the token and submit that hash along with the data.

Effective ... but bulky.

This older system required that a discrete user account be set up for every user.  It also required multiple HTTP transactions between systems - a HEAD request to get a token followed by a POST/GET/PUT/DELETE request to interact with data.  It worked, but was cumbersome.  The multiple transactions also opened us up to man-in-the-middle attacks, so everything had to be SSL-encrypted.
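For context, that handshake looked roughly like the sketch below.  The endpoints, header names, and hashing details here are illustrative guesses, not the actual legacy API:

    using System;
    using System.Linq;
    using System.Net.Http;
    using System.Security.Cryptography;
    using System.Text;

    class LegacyClientSketch
    {
        static void Main()
        {
            var http = new HttpClient();

            // Transaction 1: a HEAD request asks the server for a security token.
            var head = new HttpRequestMessage(HttpMethod.Head, "https://example.com/api/documents");
            string token = http.SendAsync(head).Result
                .Headers.GetValues("X-Auth-Token").First();

            // Hash the user's credentials together with the token.
            string hash;
            using (var sha = SHA256.Create())
            {
                byte[] digest = sha.ComputeHash(
                    Encoding.UTF8.GetBytes("username:password:" + token));
                hash = BitConverter.ToString(digest).Replace("-", "");
            }

            // Transaction 2: the real GET/POST/PUT/DELETE, carrying the credential hash.
            var get = new HttpRequestMessage(HttpMethod.Get, "https://example.com/api/documents");
            get.Headers.Add("X-Auth-Hash", hash);
            var response = http.SendAsync(get).Result;
            Console.WriteLine(response.StatusCode);
        }
    }

Two round trips, every time, for every client, for every user.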

A New Paradigm

This time, I elected to go with a simpler system that, oddly enough, is more secure.

The client is issued an application name and secret key at deployment time - the server keeps track of these in a secured database.  The secret key is never exchanged over the wire.

When a client sends a request, it also submits three pieces of information in the request headers:

  • Its name
  • A randomly-generated string
  • A one-time password
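
On the wire, that might look something like the following - the header names are my own invention for illustration, not a standard:

    GET /documents/1234 HTTP/1.1
    Host: api.example.com
    X-App-Name: DesktopReader
    X-App-Nonce: 3f29c1a48bd94e7c
    X-App-OTP: 5A1B9C0D... (the hash described below)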

The one-time password is a hash of three things:

  • The application's secret key
  • The same randomly-generated string passed in the header
  • The current system UNIX timestamp, integer-divided by 15 (so the value changes only once every 15 seconds)

This ensures the password is unique to the application, unique to the request, and only valid within a narrow time window.
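
Concretely, the client-side computation could look something like this C# sketch.  The choice of SHA-256 and the exact concatenation order are illustrative - what matters is that client and server compute the digest the same way:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class OneTimePassword
    {
        public static string Compute(string secretKey, string nonce, DateTime utcNow)
        {
            // UNIX timestamp, integer-divided by 15: the result changes only
            // once every 15 seconds, which is what bounds the password's lifetime.
            var epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
            long window = (long)(utcNow - epoch).TotalSeconds / 15;

            // Hash the secret key, the nonce, and the time window together.
            using (var sha = SHA256.Create())
            {
                byte[] digest = sha.ComputeHash(
                    Encoding.UTF8.GetBytes(secretKey + nonce + window));
                return BitConverter.ToString(digest).Replace("-", "");
            }
        }
    }

The client computes this once per request and sends the result in a header alongside its name and the random string.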

Why It Works

The server independently re-creates the application's password by looking up the application's secret key and hashing it together with the random string passed in the request and the current system UNIX timestamp (again integer-divided by 15).

The server will only accept requests for the current 15-second window and the immediately previous 15-second window.

In addition, the server records each random string it receives and automatically rejects any request that reuses one, so a captured request can't simply be replayed.
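
Putting those rules together, server-side validation reduces to a few lines.  This is a minimal sketch assuming the OneTimePassword helper above; the secret-key lookup and the in-memory nonce store are placeholders for real database-backed code:

    using System;
    using System.Collections.Concurrent;

    static class RequestValidator
    {
        // Random strings we've already seen, used to reject replayed requests.
        static readonly ConcurrentDictionary<string, DateTime> SeenNonces =
            new ConcurrentDictionary<string, DateTime>();

        public static bool Validate(string appName, string nonce, string otp)
        {
            string secretKey = LookUpSecretKey(appName);
            if (secretKey == null)
                return false;

            // Replay protection: a nonce may only ever be used once.
            if (!SeenNonces.TryAdd(nonce, DateTime.UtcNow))
                return false;

            // Accept the current 15-second window or the one immediately before it.
            // (A real implementation should use a constant-time string comparison.)
            DateTime now = DateTime.UtcNow;
            return otp == OneTimePassword.Compute(secretKey, nonce, now)
                || otp == OneTimePassword.Compute(secretKey, nonce, now.AddSeconds(-15));
        }

        static string LookUpSecretKey(string appName)
        {
            // Placeholder - real code would query the application table.
            return appName == "DesktopReader" ? "example-secret-key" : null;
        }
    }

In practice the nonce store would also need to evict entries once they age out of the accepted windows, or it would grow without bound.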

This leads to an authentication system that is:

  • Unique to client applications but not necessarily to individual user accounts
  • Incredibly fast - only one request is ever sent/received
  • Not based on FTP

What would you do to make the request more secure?