Internal project – design the API

I’ve spent some time designing the API for my internal project.

I’ve documented the design of the API here.

Key Points


Following best practices, I’ve avoided including the login and password with each request. Instead there is a session service: call this service to create a new session and receive a session token in response. I’ve also included a method to end the session early. The intention is that the session token will expire after a period of inactivity; making a call to any of the methods on the API resets the timeout.
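As a sketch of the shape that session service might take (the method and route names here are my own illustration, not necessarily the ones in the actual design):

```csharp
// Hypothetical shape of the session service; names are illustrative only.
public interface ISessionService
{
    // POST /session – exchange credentials for a session token.
    // The login and password are sent once, here, and never again.
    string CreateSession(string login, string password);

    // DELETE /session – end the session early, before the timeout expires.
    void EndSession(string token);

    // Called by every other API method: validates the token and
    // resets the inactivity timeout as a side effect.
    bool ValidateAndTouch(string token);
}
```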


As far as possible I’ve mapped the API operations to the appropriate HTTP verbs.


One difficulty I ran into was how to provide a nice mechanism for marking a post as liked. It didn’t make sense to mark a status item as liked by making a PUT request with the entire status item. The best option was to add a status/like URL, under the status URL.
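Treating the like as a sub-resource keeps the request tiny. A sketch of what the WebApi side could look like (the controller name and route are my own illustration):

```csharp
// Illustrative only – the post doesn't include the real controller code.
public class StatusLikeController : ApiController
{
    // POST /status/{id}/like – mark the status item as liked without
    // having to PUT the entire status item back to the server.
    public HttpResponseMessage Post(int id)
    {
        // ...record the like against this status id...
        return Request.CreateResponse(HttpStatusCode.NoContent);
    }
}
```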


There are a stack of really good resources out there for this:

  1. Best Practices for Designing a Pragmatic RESTful API:
  2. The Good, the Bad, and the Ugly of REST APIs:
  3. OWASP REST Security Cheat Sheet:

Current Status

  1. Document the UI – I’ve found this tends to make the implementation clearer
  2. Implement in Bootstrap, MVC4 with a SQL Server backend
  3. Design JSON API to access the app
  4. Implement the JSON API using a WebApi backend
  5. Replace the MVC app with a client-side JavaScript framework, Angular
  6. Swap the SQL Server backend for a NoSQL database
  7. Replace the WebApi backend with an F# implementation
  8. Replace the WebApi backend with node.js

Internet of things heading for a trainwreck

Many years ago I read The Inmates are Running the Asylum. One of the early chapters pointed out that when you cross a computer with anything you get … a computer. The intention of the chapter was to highlight terrible user interfaces for computers. Ten years later, I wonder what Alan Cooper would think of “The Internet of Things”.

The Internet of Things is basically thing + computer + connection to the internet. Now, with low-power chips, it’s never been easier. A Raspberry Pi is more powerful than the first computer I built. Slap on a customised Linux distro, marry it to your ‘thing’ and you are away.

Is it a ‘thing’ or a computer?

The internet of things will add internet connectivity and computing power to things we know now. Things like lights, fridges, watches, power points … well, pretty much anything. We are used to interacting with these things in the same way that we interact with appliances. You plug them in, switch them on and they just work.
Appliances often have quite long life cycles. For example the fridge we own now used to belong to my wife’s grandfather and must be over 20 years old.
This is very different to how we treat computers.
It’s worth reviewing how computers have been used.

Computers – a brief review

Computers were once like appliances. You could buy something like a TRS-80, plug it in and use it. It didn’t have any persistent storage. Programs were stored on cassettes or later on floppy disks.


Early viruses would infect programs on a disk or the disk itself. The vector for infection was generally sneakernet: someone’s computer would be infected when it came into contact with someone else’s infected disk. Infection became far easier once computers started getting hard drives, as the virus could then infect any floppy disk that was inserted into the computer.
However, the speed at which viruses could spread was still limited by the way they travelled.

Networked – appliances meet Metcalfe’s Law

Metcalfe’s Law says “the value of a telecommunications network is proportional to the square of the number of connected users of the system”. The short version is that computers get much more valuable when they are connected together. This is one of the great benefits that the internet of things promises.
What it does mean is that when all the computers are connected together, a virus (or any other sort of malware) can spread far faster. The most spectacular example of this was SQL Slammer, where it is estimated that almost all of the vulnerable systems were infected within 10 minutes of its release.
This has exposed the reality that all computer systems have bugs. Networked computer systems are exposed to all the malware and bad actors on that network. And the internet is a very, very large network.

Obsolete – appliances meet Moore’s Law 

Moore’s law (better stated as Moore’s curves) is generally understood to say that computing performance doubles every 18 months. This is a phenomenally rapid rate of improvement. Imagine if kettles could boil water twice as fast every 18 months.
One impact of this is that computers have a relatively short lifespan when compared to other items. Most computers would be replaced within 5 years (by which time their replacement would be 8 times as fast).

Lifecycle of a computer

While computers started out as close to appliances, they now have a very different life cycle in two key ways:
  1. They get updates to fix vulnerabilities to protect them from malware
  2. They live for less than 5 years

Back to the internet of things

My fear is that in the end these are all computers, yet they are not being treated like computers. The problem is that computers are not like appliances.

Does your ‘thing’ get updates?

Will the manufacturer commit to providing software updates for the life of the ‘thing’, or just for the warranty period? The company producing the ‘thing’ has a primary interest in selling it rather than maintaining the software driving it; the physical product is usually their primary area of expertise, and the software is an afterthought. It’s very likely that the ‘thing’ you buy will never receive updates.

It will be left running the same software it shipped with. Through the internet it will be exposed to all of the hackers, crackers, tinkerers, malware writers and cyber criminals out there. They will find holes that need patching, and nobody will be there to patch them.

I’m not the only one worried about this sort of thing.


The internet of things is going to provide a stack of new devices that can get hacked.
It took 20 years to create the security lifecycle that we have today. How long will it take for the internet of things to catch up?

Internal project, MVC site done

I’m now done putting together the MVC site for the internal project.

A few screenshots of what it looks like now:

Home, not logged in

New Status



The very visible bar across the bottom of the screen is Glimpse, which is an awesome tool for giving you a window into what is happening on your server.
I learned a few interesting things while working through this.


It’s interesting to consider that I’ve ended up with so many data objects representing similar data. This is mostly due to providing different abstraction layers.
Consider the data to represent a status post. I’ve got the following:
1. The EF data model object, which represents exactly what the table contains
2. The data model that is returned from the repository; this is different from the EF data model to enable adding a different backend (DataInterfaces.Models.Status)
3. The ViewModel displayed by the site (Site.Models.Status), which is optimised for display.
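A minimal sketch of those three layers for a status post (the property names here are my own guesses, not the project’s actual classes):

```csharp
// 1. EF data model – mirrors the status table exactly.
public class StatusEntity
{
    public int StatusId { get; set; }
    public string Message { get; set; }
    public DateTime Date { get; set; }
    public int UserId { get; set; }
}

// 2. Repository model (DataInterfaces.Models.Status) – what the
// repository returns, independent of the storage backend.
public class Status
{
    public int Id { get; set; }
    public string Message { get; set; }
    public DateTime Date { get; set; }
    public string UserName { get; set; }
}

// 3. ViewModel (Site.Models.Status) – shaped for display.
public class StatusViewModel
{
    public string Message { get; set; }
    public string PostedAgo { get; set; }   // e.g. "2 hours ago"
    public int LikeCount { get; set; }
}
```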

On reflection I could have added more view models. For example, the history view has the following rather unpleasant piece of code:
 @Html.Partial("~/Views/PartialViews/Pagination.cshtml", new Site.Models.Pagination { PageCount = Model.PageCount, Page = Model.Page, BaseUrl = "/Status?" });
This could have been far neater if I’d simply added an instance of the Pagination object to the StatusList ViewModel. However I wanted to re-use the models that the site used for the WebApi interface, and this felt like a reasonable compromise.
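The neater alternative would look something like this (a sketch; the real StatusList model may be shaped differently):

```csharp
// Give the list model its own Pagination instance...
public class StatusList
{
    public IList<Status> Statuses { get; set; }
    public Pagination Pagination { get; set; }
}

// ...so the view call shrinks to:
// @Html.Partial("~/Views/PartialViews/Pagination.cshtml", Model.Pagination)
```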

Unit testing controllers

I tried to keep as much of the code out of the controllers as possible (following best practice), but I did want to unit test the code I had there.
This ended up being a bit more work than expected. I had a dependency on session state, and it took a little while to work out the best way to make this testable. The obvious solution was to wrap the session object in another object. However that felt rather like reinventing the wheel. Surely there was a better way.
After some judicious googling, I found that the objects I was looking for were HttpSessionStateBase and HttpSessionStateWrapper. When using Ninject as an IoC container, the binding for this was:
 kernel.Bind<HttpSessionStateBase>().ToConstructor(x => new HttpSessionStateWrapper(HttpContext.Current.Session));
I also needed to retrieve the user’s IP address to track view counts and like counts. This followed the same basic pattern; the objects were HttpRequestBase and HttpRequestWrapper.
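Put together, the pattern looks roughly like this (a sketch; the controller is my own illustration, and I’ve used ToMethod here, which is an equivalent way of expressing the same binding):

```csharp
// Composition root (production): bind the abstract base classes to
// wrappers around the real ASP.NET objects.
kernel.Bind<HttpSessionStateBase>()
      .ToMethod(x => new HttpSessionStateWrapper(HttpContext.Current.Session));
kernel.Bind<HttpRequestBase>()
      .ToMethod(x => new HttpRequestWrapper(HttpContext.Current.Request));

// The controller only ever sees the abstract types, so a unit test
// can hand it a fake or mock (e.g. Mock<HttpSessionStateBase>)
// without any running web server.
public class StatusController : Controller
{
    private readonly HttpSessionStateBase _session;
    private readonly HttpRequestBase _request;

    public StatusController(HttpSessionStateBase session, HttpRequestBase request)
    {
        _session = session;
        _request = request;
    }
}
```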

It was nice to find something that was well thought out.


Ninject is awesome as always, although I always seem to forget the binding syntax, resorting to this awesome cheatsheet.

I added log4net, just because it wouldn’t be a real site without some logging.

CSS is mostly vanilla Bootstrap. I probably could have done more with this, but I could have kept working on it forever. Sooner or later you have to draw a line.

Current status

It’s been really interesting working on this. I’m finding that trying to implement something real forces you to make trade-offs and to understand the technology better. I’ve worked with all these technologies a fair bit, but it is still possible to find something that’s a bit new, for example testing session state in the controller.
I’m shifting the order of what I’m going to work on a little.
1. Document the UI – I’ve found this tends to make the implementation clearer
2. Implement in Bootstrap, MVC4 with a SQL Server backend
3. Design JSON API to access the app
4. Implement the JSON API using a WebApi backend
5. Replace the MVC app with a client-side JavaScript framework, Angular
6. Swap the SQL Server backend for a NoSQL database
7. Replace the WebApi backend with an F# implementation
8. Replace the WebApi backend with node.js

Internal project – step 2 in progress

After a bit of a hiatus, I’ve returned to working on my internal project.

I’ve partially completed my initial planned step 2. I ended up spending a lot of time on setup activities, i.e. yak shaving. I’ve been trying to restrain myself, but it is hard.
Some of the yak-shaving activities:
1. Worrying about the structure of the project and which directory things belong in.
2. Setup for a dev machine, in this case PowerShell scripts to create a database and some initial base data. I managed to avoid putting together deployment tools, but I have ended up with scripts to create the database.
3. Implementing authentication by hand rather than just using something off the shelf. This was truly a bad idea, but by the time I’d realised it, I was already too far down the rabbit hole.
4. Using a more secure hashing algorithm. While it was interesting to use BCrypt, it wasn’t really necessary.
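For reference, the BCrypt usage itself is only a couple of lines. This assumes the BCrypt.Net package; the post doesn’t say which implementation was used:

```csharp
// Hash at registration time – the salt and work factor are
// embedded in the resulting hash string.
string hash = BCrypt.Net.BCrypt.HashPassword(password);

// Verify at login time against the stored hash.
bool ok = BCrypt.Net.BCrypt.Verify(password, hash);
```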
However I did find some fairly interesting things during this process.

Future plans impact structure now

I was planning to swap out the backend for a NoSQL database, and the aim was for this change to be seamless. However, the structures that you might use in a relational database are different to those in a NoSQL database.

The SQL tables look like this:

1. status (status_id, message, date, user_id…)
2. status_view (status_id, date…)
3. status_like (status_id, date…)
For a NoSQL or document-oriented database, it would make far more sense to have a single document represent each post, including all the data associated with the post.
This has a direct impact on how the code needs to be structured. Rather than having a repository for each table, there needs to be a single repository interface covering all of the interactions with a status, i.e. something like this:
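(The original code for this appears to be missing from the post; an interface along these lines would fit the description, with the member names being my own guesses:)

```csharp
// A single repository covering every interaction with a status post,
// so a document store can treat the post and its views/likes as one unit.
public interface IStatusRepository
{
    Status Add(Status status);
    Status Get(int statusId);
    IList<Status> GetPage(int page, int pageSize);

    // Views and likes hang off the status rather than
    // having repositories of their own.
    void RecordView(int statusId, DateTime date, string ipAddress);
    void RecordLike(int statusId, DateTime date, string ipAddress);
}
```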


I took the time to split the code up between a number of projects in order to isolate them by functional area. I’ve currently got:
1. DataInterfaces – models and interfaces for accessing any data
2. DataSql – the SQL implementation of the interfaces in DataInterfaces
3. SiteLogic – the business logic for the application; all code here should be completely unit testable
4. SiteLogic.NUnit – unit tests for the library
5. Site – the MVC site
6. Site.NUnit – the unit tests for the MVC site


As far as possible I’ve been writing unit tests for all of the code. Obviously this isn’t always possible; for example, I gave up on unit testing the controller for authentication.

Currently Reading – 2013-01-08

Updated reading list:
64 Things was rather interesting. It’s a fairly non-technical summary of things we can expect in the future with technology and the internet. It’s a bit over a year old now, so things have shifted a little (Bitcoin and the recent NSA revelations spring to mind). It is a pretty comprehensive summary and very readable. A good book to give someone who isn’t hugely technical.

Internal project – Architecture

I’m rather fond of military history and one of the quotes that I like is:
“Plans are nothing. Planning is everything.” – Dwight D. Eisenhower

I realised I missed a step in my original plan for the project: planning. The reason I like this quote is that it is clear the plan might not be followed; however, the process of planning forces you to think about what you want to do.

So I’ve done some initial planning around this. Since I’m reading a book on documenting software architecture, I’ve dug into that for the most appropriate view to use.

Layered Architecture

Some notes on this:

1. I want to clearly separate the UI layer from the business layer in order to facilitate adding an API.
2. The data access layer might be swapped out for a completely different data backend in the future, so that needs to be separated out. I’m currently thinking that I’ll implement it with Entity Framework for the first version. I’ll avoid generic repositories in order to create a clean interface that doesn’t require implementing a LINQ interface over a NoSQL database that might not cleanly support it.
3. I want the authentication to be clearly separate from the business layer in order to keep it as dependency-free as possible. This tends to be a complex area, so it is well worth ensuring it doesn’t have any dependencies.
4. Cross-cutting concerns like logging aren’t covered in this diagram.

Currently reading – 2013-12-29

I find I tend to have a lot of books I’m planning to read but haven’t read. In order to keep myself accountable, I’ve decided to start posting the books I’m currently reading.