Internal project – Architecture

I’m rather fond of military history, and one of the quotes I like is:
“Plans are nothing. Planning is everything.” – Dwight D. Eisenhower

I realised I missed a step in my original plan for the project: planning. The reason I like this quote is that it acknowledges that the plan might not be followed; however, the process of planning forces you to think about what you want to do.

So I’ve done some initial planning around this. Since I’m reading a book on documenting software architecture, I’ve dug into that for the most appropriate view to use.

Layered Architecture

Some notes on this:

  1. I want to clearly separate the UI layer from the business layer in order to facilitate adding an API
  2. The data access layer might be swapped out with a completely different data backend in the future, so that needs to be separated out. I’m currently thinking that I’ll implement this with Entity Framework for the first version. I’ll avoid generic repositories to create a clean interface that doesn’t require implementing a LINQ interface over a NoSQL database that might not cleanly support it.
  3. I want Authentication to be clearly separate from the business layer in order to keep it as dependency free as possible. Authentication tends to be a rather complex area, so it’s well worth ensuring it doesn’t have any dependencies.
  4. Cross-cutting concerns like logging aren’t covered in this diagram
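
To make the data access point concrete, here’s a rough sketch of the kind of narrow, LINQ-free interface I’m thinking of, along with a trivial in-memory implementation. All the names here are hypothetical:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: a narrow data access interface with explicit methods,
// rather than a generic IQueryable-based repository. A future NoSQL backend
// only has to implement exactly these operations.
public class Status
{
    public int Id { get; set; }
    public string UserName { get; set; }
    public string Text { get; set; }
}

public interface IStatusStore
{
    Status GetCurrent(string userName);   // most recent status for a user
    IList<Status> Search(string term);    // simple text search
    void Add(Status status);
}

// Trivial in-memory implementation, handy for tests.
public class InMemoryStatusStore : IStatusStore
{
    private readonly List<Status> _statuses = new List<Status>();

    public Status GetCurrent(string userName)
    {
        return _statuses.LastOrDefault(s => s.UserName == userName);
    }

    public IList<Status> Search(string term)
    {
        return _statuses.Where(s => s.Text.Contains(term)).ToList();
    }

    public void Add(Status status)
    {
        _statuses.Add(status);
    }
}
```

The Entity Framework version and any NoSQL version would each just implement IStatusStore, keeping LINQ details out of the contract.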

Currently reading – 2013-12-29

I find I tend to have a lot of books I’m planning to read but haven’t read. In order to keep myself accountable, I’ve decided to start posting the books I’m currently reading.

Internal Project – Step 1 – Document the UI

I’ve started putting together the UI for my home project.

You can see this in all its glory at https://github.com/dcamdupe/status.

Here is a sneak peek:

Home page version 1
Home page version 2
These high-quality wireframes help illustrate a number of things:
  1. I don’t have access to appropriate tools at home. Rather than buying them, I’m using the highly technical approach of pen + paper + iPhone camera to capture my wireframes
  2. My handwriting is frighteningly messy
  3. In version 2 I’ve avoided the traditional wooden table for a far more attractive red tablecloth with stars

For version 2 of the wireframes, I’ve tightened things up a little and used more consistent UI elements. This might not be apparent, given my limited ability with a pen. I also noticed I’d missed a page and a few other major elements on the pages.
I’m skipping a few normal user lifecycle elements (forgotten password etc) for simplicity’s sake.
The only thing that might not make immediate sense is the popularity box. This will be some sort of synthesis of the number of views and likes to provide an overall popularity. This could be represented by a number, an image or just a colour.
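
The exact formula is still up in the air, but as a rough sketch it might look like this (the weights and thresholds here are entirely made up):

```csharp
// Hypothetical popularity calculation: blend views and likes into one score,
// then bucket the score for display. The weights and thresholds are invented.
public static class Popularity
{
    public static int Score(int views, int likes)
    {
        // Likes are rarer (and a stronger signal) than views, so weight them more.
        return views + 10 * likes;
    }

    // Map the score to a display bucket; this could equally map to an image or colour.
    public static string Bucket(int views, int likes)
    {
        var score = Score(views, likes);
        if (score >= 100) return "hot";
        if (score >= 20) return "warm";
        return "cold";
    }
}
```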

2014-01-05 Updated to link to earlier article.

Internal Project

I’ve decided to start a home project to test a few different technologies. The aim is to pick a relatively simple project and implement it in a few different ways.

The simple project is a website where someone can post a status.

The rough outline is a website where:

  • Authenticated Users can:
    • post a status
  • Anonymous & Authenticated users can:
    • View current Status
    • View past statuses and search through them
    • ‘Like’ a Status

I guess you could call this Twitter without most of the useful features.

I’m aiming to try a number of different technologies, in decreasing level of comfort.

I’ll also be swapping pieces in and out with new implementations in different technologies.

The current plan

  1. Document the UI – I’ve found this tends to make the implementation clearer
  2. Implement in bootstrap, MVC4 with SQL Server backend
  3. Swap the SQL Server backend for a No SQL database
  4. Design JSON API to access app
  5. Implement JSON api using WebApi backend
  6. Replace MVC app with javascript client side framework (currently planning to try angular)
  7. Replace the WebApi backend with an F# implementation
  8. Replace the WebApi backend with node.js

I want to apply best practices throughout, which might mean learning new best practices as I start using new technologies.

I’ll post all parts of the implementation and any documentation on Github.

Working with External APIs

I’ve done a fair bit of work with external APIs. It’s generally a fairly frustrating process, so I’ve collected a list of best practices. Some of these are shamelessly stolen from colleagues.

Some of these are a bit ‘heavy’ and not appropriate for all circumstances. This isn’t a checklist to follow blindly; it’s just a list of some good ideas I’ve picked up.

1. Logging

Log all calls going out and all incoming responses, along with dates & times. Any logging should occur as close to the boundary as possible, to strip away as many layers of abstraction as possible. For example, if making a web service call, you would want to log the raw request & response, not a serialised version of the data you send.
Logging will help answer questions like:
  1. What are the response times, current and historical?
  2. Is the API still available & responding?
  3. Has something changed at the remote end?
  4. Have we correctly translated internal data to external calls?
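
As a rough sketch of what logging at the boundary can look like in .NET, a DelegatingHandler wired into HttpClient sees the raw outgoing request and the raw incoming response. The log format here is purely illustrative:

```csharp
using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Logs the raw request and raw response at the boundary, with timestamps
// and the elapsed time. Console output is just for illustration; a real
// implementation would use whatever logging framework is in place.
class BoundaryLoggingHandler : DelegatingHandler
{
    public BoundaryLoggingHandler(HttpMessageHandler inner) : base(inner) { }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var timer = Stopwatch.StartNew();
        Console.WriteLine("{0:o} OUT {1} {2}", DateTime.UtcNow, request.Method, request.RequestUri);

        var response = await base.SendAsync(request, cancellationToken);

        // Read the raw body here, before any deserialisation happens upstream.
        var body = response.Content == null
            ? ""
            : await response.Content.ReadAsStringAsync();
        Console.WriteLine("{0:o} IN {1} ({2} ms): {3}",
            DateTime.UtcNow, (int)response.StatusCode, timer.ElapsedMilliseconds, body);

        return response;
    }
}
```

Wire it in with `new HttpClient(new BoundaryLoggingHandler(new HttpClientHandler()))` and every call through that client is captured.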

2. Mock the API

When trying to develop or test something that depends on an external API, you’ve got a huge dependency on that API. Development and testing can stall if that external API is unavailable. This is also an issue with intermittent faults. A lot of time can be wasted on trying to identify that the API is unavailable or not working.
If the API has a test environment to work against, typically this is less stable than the production environment. Updates are shipped more frequently, and issues are addressed more slowly than in production. Response times are also typically slower, as the environment is rarely provisioned like production.
Mocking the API protects you from all of these. Typically this means capturing known good responses; when the mock API is enabled, return these known good responses. This can be extended further to trigger specific error conditions when certain data is passed in, e.g. a specific customer name will trigger a duplicate name error.
You will need a simple way to switch the mock API on/off, e.g. a simple config file change.
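
A rough sketch of what that switch might look like follows; all the names are made up, and in a real app the flag would come from a config file rather than a static field:

```csharp
using System;
using System.Collections.Generic;

public interface IStatusApi
{
    string GetCurrentStatus(string userName);
}

// Replays captured known-good responses instead of calling the real service.
public class MockStatusApi : IStatusApi
{
    private readonly Dictionary<string, string> _canned = new Dictionary<string, string>
    {
        { "alice", "Out for lunch" }
    };

    public string GetCurrentStatus(string userName)
    {
        // A magic input triggers a specific error condition on demand.
        if (userName == "duplicate")
            throw new InvalidOperationException("Duplicate name");

        string status;
        return _canned.TryGetValue(userName, out status) ? status : "unknown";
    }
}

public class RealStatusApi : IStatusApi
{
    public string GetCurrentStatus(string userName)
    {
        throw new NotImplementedException("Would call the external API here");
    }
}

public static class StatusApiFactory
{
    // In a real app this flag would be read from a config file.
    public static bool UseMock = true;

    public static IStatusApi Create()
    {
        return UseMock ? (IStatusApi)new MockStatusApi() : new RealStatusApi();
    }
}
```

Everything in the application talks to IStatusApi, so flipping the flag is the only change needed to go between mock and real.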

3. Write Functional tests of the API

This might seem a bit overboard, but doing it can save a huge amount of time. Typically we end up doing this sort of testing manually anyway, so it is worth doing it more systematically.
Writing functional tests helps explore the limits of the API. It can help expose bugs in the API and clarify things that might be unclear from any documentation. It can help to benchmark performance. It can also be a great source of data for mocking the API.
It can also be a great source of protection from changes in that API. If something fails, you can simply run the functional tests to verify whether anything has changed.

Going Functional

As more functional features get added to C#, I’m writing code rather differently.

Splitting Data & functions

Instead of having classes that combine data and functions, I’m increasingly splitting the two. The functions take data as parameters rather than modifying members on the class. This is a move towards more pure functions.
So instead of this:
    class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }

        public void Add()
        {}
    }
I’m more likely to write this:
    class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    class CustomerService
    {
        public Customer Add(Customer customer)
        {
            // persist the customer
            return customer;
        }
    }

Passing functions to functions

I’m also increasingly using functions as more first class objects to be passed around.
So instead of this (contrived example):
    class CustomerService
    {
        public Customer Add(Customer customer)
        {
            // add customer
            CustomerModificationNotifier.Notify("customer added");
            return customer;
        }
    }

    static class CustomerModificationNotifier
    {
        public static void Notify(string message)
        {}
    }

I might write this:
    class CustomerService
    {
        // Called as e.g. Add(customer, CustomerModificationNotifier.Notify)
        public Customer Add(Customer customer, Action<string> notifier)
        {
            // add customer
            notifier("customer added");
            return customer;
        }
    }

    static class CustomerModificationNotifier
    {
        public static void Notify(string message)
        {}
    }

Everything Old is New Again

This feels all very new and shiny. Except that it really doesn’t feel that new.
Say for example this:
    struct customer
    {
        int id;
        char name[50];
    };

    void add_customer(customer*);

    void add_customer(customer* cust)
    {
        /* add customer */
    }
Or this:
    struct customer
    {
        int id;
        char name[50];
    };

    void add_customer(customer*, void (*notifier)(int));

    void add_customer(customer* cust, void (*notifier)(int))
    {
        notifier(1);
    }

    void notify(int);

    void notify(int number)
    {
        /* notify of the modification */
    }
It’s funny how everything that is new really isn’t that new after all.

Sometimes it really feels like all we’ve achieved in the last 40 years is better string management and bounds checking of arrays.

SSDT Limitations

In an earlier post I mentioned some of the limitations of SSDT. It’s worth covering them in a little more detail as they are significant.

The most significant issues relate to deployment. Deployment for SSDT involves setting up a publish profile with a destination database to upgrade. This gives you the option of either publishing directly to the database or generating a script to run for the deployment.

Maximalist approach (with no control)

SSDT wants to control everything. It isn’t enough to just control tables, procs, functions and triggers; SSDT must also control Logins, App Roles, Certificates and Groups.
Unfortunately security settings tend to vary by environment: production will not look like UAT. For SSDT, everything looks the same.
This would all be fine and dandy, except that you cannot control whether SSDT will manage these things. You cannot tell SSDT to ignore logins when running upgrades.

This means storing the following production secrets in version control:

  1. Logins & passwords
  2. App role passwords
  3. Master keys for encryption

Other oddments

  • SSDT does not handle multiple filegroups. Because clearly nobody would actually use that in production

Single file generated for upgrades

While SSDT provides the option to upgrade the database in place, this is a somewhat risky option for a production database. It is generally preferable to at least have the option to inspect what will be deployed.

Unfortunately, the SSDT upgrade generates a single file. In practice this means that if the database upgrade fails for any reason, recovery is rather complex: you’ll need to work out where the upgrade script failed and then what action to take from there.

Conclusion

The product feels rather like a beta release; bringing even a simple project into production will expose some pretty serious limitations.
However, the largest issue is more fundamental: you are expected to trust the magic box to generate a valid upgrade script, and it is very easy to generate scripts that will fail to upgrade the database successfully. Earlier versions of Entity Framework had a similar approach, which they’ve now moved away from. The entire approach is fatally flawed.