Why would someone hack your ‘thing’

This is a follow-up to an earlier blog post (Internet of things heading for a trainwreck), where I commented on the security of the internet of things. An interesting follow-up article on this is: ‘Things’ on the Internet-of-things have 25 vulnerabilities apiece. It’s highly unlikely that any of those 25 vulnerabilities will be patched.

But I digress, the question is, why would someone hack your ‘thing’?

Reason 1: Because it is there

Someone might not be targeting your thing, or even your class of thing, when they take control. Your thing might share vulnerabilities with something that is being targeted. For example, the underlying operating system of the device or its hosted applications might share vulnerabilities that are common in web servers running on the internet. Many devices provide a web interface to manage them, and to do this they are likely to use commonly available web servers, e.g. nginx or Apache.
However simply being available on the internet makes something a target. Someone scanning the internet for something interesting to break into would not necessarily know that the device responding on port 80 is a thermostat rather than a web server.

Reason 2: For the capabilities

In many ways a thing is mostly a less powerful computer with extra capabilities. It’s unlikely that someone would hack your thing for the computing power (although people have produced Bitcoin mining malware for Android). However your thing is every bit as capable as a computer in every other way. It could be used to gain a foothold in your network. It could be used as a spam relay. It could be used in DDoS attacks. It could be used to host malware. The list goes on…
Your thing is generally different though, as it is a computer + something. That something could provide a rich set of capabilities that computers don’t currently have. For example, a Nest has sensors to identify whether someone is home or not. Imagine if someone were able to hack into your Nest to check whether you are home before breaking into your house. Cameras on ‘things’ could well be used in the same way that remote-access trojans (RATs) are used for voyeurism and blackmail. Consider that you might not have a laptop with a webcam in every room, but you might well put a ‘thing’ with a camera in each room, including the bathroom.
As the internet of things starts to take off, people are going to work out new ways to exploit the new sensors that it brings to the table.

Reason 3: For the data

Not all ‘things’ have sensors that would be useful in real time, but many of them collect some very interesting data. Something like a Fitbit can track your activity. We want that information to run our lives, but it is just as interesting to a third party. Things can provide historical data on heart rate, location or any other information that might be collected.


Security is hard and typically is an afterthought. This should get rather interesting when your lightbulbs and door locks get hacked.

Mounting Network Gear

For a while I’ve been keeping my modem and the VoIP box sitting on top of the printer. This is a bit of a problem, as you tend to knock everything off when you try to scan something.
When I replaced the modem with a new modem and a switch, I decided enough was enough: I needed a better solution. I thought the solution I came up with might be of general interest, so I took a few photos along the way.
Apologies in advance for the terrible quality of the photos.
Before – oh the mess!
What you need:
  1. 12mm thick MDF board
  2. Screws to match the depth of the board (I chose 6G x 15mm)
  3. Jigsaw to cut it out
  4. Drill + bits to put a nice neat hole in the top to hang it
  5. Sandpaper to clean up
  6. Tape measure
You need this

1. Mark the board

Measure the items you want to mount. Add a centimetre for the space between each one, and 5cm for space at the top. Mark the board up. Don’t forget: measure twice and cut once.

2. Cut out your board

Using your handy jigsaw, cut out the board. Coffee table and jellybean dropsheet are optional accessories.


Clean this up with the sandpaper; jigsaws tend to leave rough edges.


3. Drill a hole

This needs to be large enough to handle your hook. I’d err on the side of larger holes to handle larger hooks.

It’s a hole! In a piece of wood!

4. Start mounting things

First set of screws

Problem 1: The little Linksys box didn’t neatly handle the screws, so I had to scrape the plastic away a little.

Thank goodness it will all be out of sight in time

First one mounted, blurred for effect
All of them mounted on the board
With the cables wired in

You might notice that I didn’t do a great job of spacing these apart. The modem on the far left overhangs the end of the board slightly, and there isn’t a huge gap between that and the switch (in the middle). I should follow my own advice on measure twice, cut once.


I’ve found this useful for getting everything out of the way. I hope this helps someone else.

Embedding images in a page – MVC style

I recently had a need to write some images stored in a database to a page. This was something rare enough that I thought it was worth documenting as it meant putting together a few different pieces.

Examples here are for MVC4, but could easily be adapted for something else.

In the View:
 <img src="data:image/jpeg;base64,@Model.Photo" alt="Restaurant Image" /> 

In the controller:
 byte[] photoData;
 // populate photoData here
 model.Photo = Convert.ToBase64String(photoData, 0, photoData.Length); 

This works by encoding the image as a Base64 string inside a data URI, which is supported by most modern browsers. You might need to change the image type declared at the start of the data URI to handle different image types.

Pair this with the following two pages:

  1. Uploading a File (Or Files) With ASP.NET MVC – Phil Haack
  2. Convert HttpPostedFileBase to byte[] – Stack Overflow
and you’ve got an end to end solution for uploading and displaying files.

Why would you / wouldn’t you do this?

Doing this is a bad idea in many cases, as it means that the image can’t be cached. Overall this would tend to make your page loads slower.
However it might be worth doing if:
  1. The image changes often enough that there is no value in caching it
  2. You want a self-contained page, limiting the number of HTTP connections

Internal Project – implement the API

I’ve completed the work of implementing the API.

This was more complex than expected. I’ve spent a bit of time working with WebApi and I’d also done the work to design the API, but I still ran into a few unexpected issues, some of which required me to change the design of the API.

Finding the IP address

For some reason I’d decided to log the IP address of the client machine when people mark a status as liked and when a status is marked as viewed. This proved to be a bit harder than expected. After poking around in the Request object for a while, I resorted to Google, which led to this Stack Overflow question.
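As a rough sketch of the sort of approach that question discusses (this is an assumption on my part, not the project's actual code), when a WebApi application is hosted on IIS the client address is available through System.Web:

```csharp
// Sketch only: assumes the WebApi app is hosted on IIS, where
// System.Web.HttpContext.Current is populated for each request.
public static string GetClientIpAddress()
{
    // HttpContext.Current is null when self-hosted, so guard for that case
    var context = System.Web.HttpContext.Current;
    return context != null ? context.Request.UserHostAddress : null;
}
```

Self-hosted WebApi needs a different mechanism, which is why the Stack Overflow answers are worth reading in full.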

Routing Fun

When I designed the API, I’d planned for GET requests to cover retrieving a single status, searching for statuses and retrieving the status history, all hitting the same URL. I’d slightly naively assumed that this would be handled something like function overloading. I’d also confused this with the simple case where you return all items and retrieve a single item by id. In that case, this is handled by routing.
Clearly this doesn’t work: how would the server resolve the HTTP verb to a single function?
As a result I needed to separate the methods using routing. This meant that search became a GET to /status/search.
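A sketch of what that separation might look like with WebApi convention routing (the route and controller names here are illustrative assumptions, not the project's actual code):

```csharp
// An explicit route for search, registered before the default route so that
// GET /api/status/search hits a dedicated Search action...
config.Routes.MapHttpRoute(
    name: "StatusSearch",
    routeTemplate: "api/status/search",
    defaults: new { controller = "Status", action = "Search" });

// ...while GET /api/status/{id} still resolves to the normal Get(id) action.
config.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional });
```

Registration order matters here: the more specific route has to come first, or "search" would be treated as an {id}.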

Retrieving values passed in

I ran into a few problems trying to retrieve values passed to the server. Should they go in the body? URL? Headers?

In retrospect, probably the best place for the session id would have been a header. Ideally the authentication for this could be handled as less of a copy-and-paste implementation. It would be a bit of overkill in the current situation though.
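Reading a session id from a header inside an ApiController action could look something like this (the header name X-Session-Token is a made-up example, not part of the actual API):

```csharp
// Sketch: pull a hypothetical X-Session-Token header off the incoming request.
// Request here is the HttpRequestMessage available inside an ApiController.
IEnumerable<string> values;
string sessionToken = null;
if (Request.Headers.TryGetValues("X-Session-Token", out values))
{
    sessionToken = values.FirstOrDefault();
}
```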


Current Status

  1. Document the UI – I’ve found this tends to make the implementation clearer
  2. Implement in bootstrap, MVC4 with SQL Server backend
  3. Design JSON API to access app
  4. Implement JSON api using WebApi backend
  5. Replace MVC app with javascript client side framework, angular
  6. Swap the SQL Server backend for a No SQL database
  7. Replace the WebApi backend with an F# implementation
  8. Replace the WebApi backend with node.js

Internal project – design the API

I’ve spent some time designing the API for my internal project.

I’ve documented the design of the API here.

Key Points


Following best practices, I’ve avoided including the login and password with each request. Instead there is a session service. Call this service to create a new session and get a session token in response. In addition I’ve included a method to end the session early. The intention is that the session token will expire after a period of inactivity; making a call to any of the methods on the API resets the timeout.
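From a client's point of view, the flow described above might look like this sketch (the /api/session URL, payload shape and base address are all assumptions for illustration, not the documented API):

```csharp
// Hypothetical client flow for the session design: credentials go over the
// wire once, a token comes back, and the token is used thereafter.
public async Task UseApiAsync()
{
    using (var client = new HttpClient())
    {
        // create a session and receive a session token
        var response = await client.PostAsJsonAsync(
            "http://example.com/api/session",
            new { login = "user", password = "secret" });
        var token = await response.Content.ReadAsAsync<string>();

        // ... call other API methods with the token; each call resets the timeout ...

        // end the session early rather than waiting for it to expire
        await client.DeleteAsync("http://example.com/api/session/" + token);
    }
}
```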


As far as possible I’ve mapped this to valid HTTP verbs.


One difficulty I ran into was how to provide a nice mechanism to mark a post as liked. It didn’t make sense to mark a status item as liked by making a PUT request with the entire status item. The best option was to add a status/like URL, under the status URL.
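With WebApi convention routing, that sub-resource could be wired up something like this (route and controller names are illustrative assumptions):

```csharp
// Sketch: a dedicated route for the like sub-resource, so a client can hit
// /api/status/{id}/like without having to send the whole status item.
config.Routes.MapHttpRoute(
    name: "StatusLike",
    routeTemplate: "api/status/{id}/like",
    defaults: new { controller = "Status", action = "Like" });
```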


There are a stack of really good resources out there for this:

  1. Best Practices for Designing a Pragmatic RESTful API: http://www.vinaysahni.com/best-practices-for-a-pragmatic-restful-api
  2. The Good, the Bad, and the Ugly of REST APIs: http://broadcast.oreilly.com/2011/06/the-good-the-bad-the-ugly-of-rest-apis.html
  3. OWASP REST Security Cheat Sheet: https://www.owasp.org/index.php/REST_Security_Cheat_Sheet

Current Status

  1. Document the UI – I’ve found this tends to make the implementation clearer
  2. Implement in bootstrap, MVC4 with SQL Server backend
  3. Design JSON API to access app
  4. Implement JSON api using WebApi backend
  5. Replace MVC app with javascript client side framework, angular
  6. Swap the SQL Server backend for a No SQL database
  7. Replace the WebApi backend with an F# implementation
  8. Replace the WebApi backend with node.js

Internet of things heading for a trainwreck

Many years ago I read The inmates are running the asylum. One of the early chapters pointed out that when you cross a computer with anything you get … a computer. The intention of the chapter was to highlight terrible user interfaces for computers. 10 years later, I wonder what Alan Cooper would think of “The Internet of Things”.

The Internet of Things is basically thing + computer + connection to the internet. Now with low-power chips, it’s never been easier. A Raspberry Pi is more powerful than the first computer I built. Slap on a customised Linux distro, marry it to your ‘thing’ and you are away.

Is it a ‘thing’ or a computer?

The internet of things will add internet and computing power to things we know now. Things like lights, fridges, watches, power points … well, pretty much anything. We are used to interacting with these things in the same way that we interact with appliances. You plug them in, switch them on and they just work.
Appliances often have quite long life cycles. For example the fridge we own now used to belong to my wife’s grandfather and must be over 20 years old.
This is very different to how we treat computers.
It’s worth reviewing how computers have been used.

Computers – a brief review

Computers were once like appliances. You could buy something like a TRS-80, plug it in and use it. It didn’t have any persistent storage. Programs were stored on cassettes or later on floppy disks.


Early viruses would infect programs on a disk or the disk itself. The vector for infection was generally sneakernet: someone would be infected when their computer came into contact with someone else’s infected disk. Infection became far easier once computers started getting hard drives, as the virus could infect any floppy disk that was inserted into the computer.
However the speed at which viruses could spread was limited by this method of transmission.

Networked – appliances meet Metcalfe’s Law

Metcalfe’s Law says “the value of a telecommunications network is proportional to the square of the number of connected users of the system”. The short version is that computers get much more valuable when they are connected together. This is one of the great benefits that the internet of things promises.
What it does mean is that when all the computers are connected together, a virus (or any other sort of malware) can spread far faster. The most spectacular example of this was SQL Slammer, where it is estimated that almost all of the vulnerable systems were infected within 10 minutes of its release.
This has exposed the reality that all computer systems have bugs. Networked computer systems are exposed to all the malware and bad actors on that network. And the internet is a very, very large network.

Obsolete – appliances meet Moore’s Law 

Moore’s law (better stated as Moore’s curves) is generally understood to say that computing performance doubles every 18 months. This is a phenomenally rapid rate of improvement. Imagine if kettles could boil water twice as fast every 18 months.
One impact of this is that computers have a relatively short lifespan when compared to other items. Most computers would be replaced within 5 years (by which time their replacement would be roughly ten times as fast, given a doubling every 18 months).

Lifecycle of a computer

While computers started out as close to appliances, they now have a very different life cycle in two key ways:
  1. They get updates to fix vulnerabilities and protect them from malware
  2. They live for less than 5 years

Back to the internet of things

My fear is that in the end these are all computers yet they are not being treated like computers. The problem here is computers are not like appliances.

Does your ‘thing’ get updates?

Will the manufacturer commit to providing software updates for the life of the ‘thing’, or just the warranty period? The company producing the thing has a primary interest in selling it rather than the software driving it. Often the thing itself is their primary area of expertise, and the software is an afterthought. It’s very likely that the ‘thing’ you buy will never receive updates.

It will be left running the same software it shipped with. Through the internet it will be exposed to all of the hackers, crackers, tinkerers, malware writers and cyber criminals out there. They will find holes that need patching and nobody will be there to patch them.

I’m not the only one worried about this sort of thing.


The internet of things is going to provide a stack of new devices that can get hacked.
It took 20 years to create the security lifecycle that we have today. How long will it take for the internet of things to catch up?

Internal project, MVC site done

I’m now done putting together the MVC site for the internal project.

A few screenshots of what it looks like now:

Home, not logged in

New Status



The very visible bar across the bottom of the screen is Glimpse, which is an awesome tool to give you a window into what is happening on your server.
I learned a few interesting things while working through this.


It’s interesting to consider that I’ve ended up with so many data objects to represent similar data. This is mostly due to providing different abstraction layers.
Consider the data that represents a status post. I’ve got the following:
  1. The EF data model object, which represents exactly what the table contains
  2. The data model that is returned from the repository; this is different from the EF data model to enable adding a different backend (DataInterfaces.Models.Status)
  3. The ViewModel displayed by the site (Site.Models.Status), which is optimised for display

On reflection I could have added more view models. For example, the history view has the following rather unpleasant piece of code:
 @Html.Partial("~/Views/PartialViews/Pagination.cshtml", new Site.Models.Pagination { PageCount = Model.PageCount, Page = Model.Page, BaseUrl = "/Status?" });  
This could be far neater if I’d simply added an instance of the Pagination object to the StatusList ViewModel. However I wanted to re-use the models that the site used for the WebApi interface, and this felt like a reasonable compromise.

Unit testing controllers

I tried to keep as much of the code out of the controllers as possible (following best practices), but I did want to unit test the code I had there.
This ended up being a bit more work than expected. I had a dependency on session state, and it took a little while to work out the best way to make this testable. The obvious solution was to wrap the session object in another object. However that felt rather like reinventing the wheel. Surely there was a better way.
After some judicious googling, I found that the objects I was looking for were HttpSessionStateBase and HttpSessionStateWrapper. When using Ninject as an IoC container, the binding for this was:
 kernel.Bind<HttpSessionStateBase>().ToConstructor(x => new HttpSessionStateWrapper(HttpContext.Current.Session));  
I also needed to retrieve the user’s IP address to track view counts and like counts. This followed the same basic pattern; the objects were HttpRequestBase and HttpRequestWrapper.
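By analogy with the session binding, the request binding might look like this sketch (an assumption on my part, following the same pattern rather than quoting the original code):

```csharp
// Hypothetical Ninject binding, mirroring the HttpSessionStateWrapper one:
// wrap the current request so controllers can depend on HttpRequestBase.
kernel.Bind<HttpRequestBase>()
      .ToConstructor(x => new HttpRequestWrapper(HttpContext.Current.Request));
```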

It was nice to find something that was well thought out.


Ninject is awesome as always, although I always seem to forget the binding syntax, resorting to this awesome cheatsheet.

I added log4net, just because it wouldn’t be a real site without some logging.

CSS is mostly vanilla Bootstrap. I probably could have done more with this, but I could have kept working on it forever. Sooner or later you have to draw a line.

Current status

It’s been really interesting working on this. I’m finding that trying to implement something real forces you to make trade-offs and to understand the technology better. I’ve worked with all these technologies a fair bit, but it is still possible to find something that’s a bit new, for example testing session state in the controller.
I’m shifting the order of what I’m going to work on a little.
  1. Document the UI – I’ve found this tends to make the implementation clearer
  2. Implement in bootstrap, MVC4 with SQL Server backend
  3. Design JSON API to access app
  4. Implement JSON api using WebApi backend
  5. Replace MVC app with javascript client side framework, angular
  6. Swap the SQL Server backend for a No SQL database
  7. Replace the WebApi backend with an F# implementation
  8. Replace the WebApi backend with node.js