Technical debt

What is technical debt?

Technical debt (also known as design debt or code debt) is a metaphor for the eventual consequences of expedient shortcuts in system design, software architecture or software development within a code base.

Recently I’ve been hearing the phrase used much more often. Sometimes a developer uses it to set the scene before a rushed fix is put into place, or to explain away poor code in a system. I accept this as part of software development, but hearing the phrase from non-technical people causes me concern.

Is this acceptable? Hearing that it’s OK to incur technical debt to deliver a project early? I think not. Debt used to be a word that commanded respect; folks didn’t want debt. But attitudes have changed, and a pay-it-back-later mentality is something people are far more comfortable with in the modern world.

In my experience the intention is always to return and pay the debt back, but this rarely happens. The resulting loss of the ability to deliver change to your customer base isn’t acceptable, and just like any other debt, the problem compounds until it’s unmanageable.

Working on a code base full of technical debt is uninspiring and saps the life out of everyone around it, and once it’s acceptable to take on some debt, more will almost always follow.

Don’t be afraid to refactor and redesign. Sometimes it will punch you in the jewels, but learn, embrace and improve. Working in an environment that holds these values close breeds a sense of ownership and pride, and ultimately promotes good practice for all.

Without debt holding change back, delivery will flourish and accomplishment will bring pride and courage.

AppHarbor – Azure done right?

What is it?
AppHarbor is a new kid on the block providing platform-as-a-service backed by the resilience of Amazon’s cloud platform. For some time they have boasted that it’s “Azure done right”, a very large claim to make.

How does it work?
AppHarbor integrates directly with source control: upon check-in it will pull the latest code base, build it, run the unit tests and deploy.

On top of this, a web interface is available that allows you to scale and roll back applications as required.

A free version is available to get running, so it’s a good place to test and try new ideas.

AppHarbor provides integrations directly into various source control providers, including:

  1. CodePlex
  2. Bitbucket
  3. GitHub

For other source control providers an API is available to allow custom integrations.

Something that’s particularly nice is the add-on marketplace; from here there are many additional services that can be bolted onto your account, such as:

  1. Memcached
  2. RavenDB
  3. SQL Server

Each of these features also has a free option so you can try before you buy, something I find very useful.

The marketplace is also open, so new providers should have no problem integrating with the platform.

The main benefit of the system has to be the one-touch deployment: automating the process from build through to deployment gives a high degree of confidence that what’s deployed matches source control.

No matter how many nodes are required to run your application, deployment is dealt with for you. Another nice feature of the whole process is that rollbacks become a breeze: released a version with a show-stopping bug? Rolling back to the previous version is a single click in the web interface.


AppHarbor is still a newcomer, though, and doesn’t have the backing that other platforms enjoy from the likes of Microsoft and Amazon.

As with anything in the cloud, an investment is required and AppHarbor is no different. A $100 outlay just to get IP-based SSL could be a show-stopper for individuals looking to bootstrap an idea. The top-level package at the time of writing is $199, which gets you four workers and IP-based SSL. The same service could be provided elsewhere for less, but I believe the benefits of the platform can outweigh this cost.


After a short amount of time the benefits of the platform quickly become apparent. Using naming conventions when naming solutions gives you control over which application is deployed. With a single solution using a SQL Azure backend, I had a project that could be deployed to either Azure or AppHarbor with almost all of the deployment hassle removed. With the current lowest service plan being free, it’s definitely worth a look.

TransactionScope and Serializable

When using transaction scopes and SQL connections I recently tripped over an undesirable side effect: by default, when a connection enlists in a new transaction scope its isolation level is raised to Serializable. I can see the logic in this, as it provides the safest starting point and lets the developer lower the level after careful consideration.

A quick explanation of the Serializable isolation level: transactions appear to run in isolation, one after another. This is just an illusion, however; other transactions may be running alongside, but only so long as the database can maintain the façade of isolated, sequential execution.

I first noticed the issue while monitoring the connection pool: many connections had an isolation level above what I expected. It turns out that once the isolation level had been raised by enlisting in a transaction scope, it was not being set back to Read Committed when the connection returned to the pool.

To work around this you need to execute “SET TRANSACTION ISOLATION LEVEL READ COMMITTED” on the connection before releasing it, or turn off connection pooling (which I wouldn’t recommend without a very good reason, as establishing a database connection is an expensive operation).
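Alternatively, you can opt out of the Serializable default up front by passing the desired isolation level when the scope is created. A minimal sketch (no database needed to see the ambient level; any SqlConnection opened inside the scope would enlist at this level):

```csharp
using System;
using System.Transactions;

class IsolationDemo
{
    static void Main()
    {
        // Be explicit instead of accepting the Serializable default.
        var options = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted
        };

        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        {
            // The ambient transaction now carries the requested level.
            Console.WriteLine(Transaction.Current.IsolationLevel); // prints: ReadCommitted
            scope.Complete();
        }
    }
}
```

Note this only controls the level a connection enlists at; a pooled connection that was previously raised to Serializable still needs the explicit reset before it’s released.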

Saving SSD space on Mac OS X Lion

Recently I bought a new top-of-the-line MacBook Pro and I’m very happy with it. But with prices being what they are I opted for the 128 GB SSD. I’ve moved all non-essential files elsewhere and I’m about ready to install Windows 7 for my normal development environment.

While analysing used space on the disk I noticed a 4.2 GB file, sleepimage, sitting in /private/var/vm.

It turns out that this file is always slightly larger than the amount of RAM in your machine, and isn’t really required on machines with an SSD installed. In fact it can shorten the life of the SSD.

As with anything like this I can’t be held responsible for any damage blah blah blah :-).

From Terminal:

  • $ pmset -g | grep hibernatemode (hopefully the mode is set to 3; if not, don’t continue)
  • $ sudo pmset -a hibernatemode 0
  • $ cd /var/vm
  • $ sudo rm sleepimage

You should now have reclaimed some valuable space. If you ever want hibernation back, sudo pmset -a hibernatemode 3 restores the default.

Fancybox with jQuery and partial views

Recently I’ve been working on a site redesign, and I thought a nice little feature would be to have a fancybox (read: lightbox) providing some basic site statistics, current user count and so on.

So the aim is to have a fancybox appear with its content loaded via AJAX from a partial view on the server. The content will appear when a button is clicked on the page.

I’ll be using jQuery, the fancybox plugin and an ASP.NET MVC partial view.

Partial View

public PartialViewResult p_popup()
{
    return PartialView();
}


$(document).ready(function () {
    $("#menuButton").click(function () {
        // Close the overall modal and open a fancybox with the partial view
        $.ajax({
            url: '/fb/p_popup',
            type: 'POST',
            data: '',
            success: function (data) {
                $.fancybox(data);
            }
        });
        return false;
    });
});

And that’s it. I prefer this method to using templates on the client side. In a follow-up post I’ll deal with posting back from forms with validation.

JavaScript IntelliSense and VS2010

Sometimes you just take things for granted. I was helping a friend with some code and he was amazed to see that I was getting IntelliSense in Visual Studio, so I thought I would share this little tip.

In Visual Studio, open up a JavaScript file, then drag another script file from Solution Explorer into the code editor. This inserts a /// &lt;reference path="…" /&gt; comment at the top of the file, which is what enables IntelliSense for the referenced script.

That’s it, job done!

WIF and load balancing with MVC 3.

Recently I’ve been getting to grips with WIF and the starter STS, which I must say is an excellent starting point. A requirement for a project I’ve been working on was to enable the site to run in a load-balanced environment without any affinity to a particular node.

From the outset this seemed quite straightforward. After customising the STS to use our own credential store and aligning the machine keys, things looked to be rocking, at least from an STS point of view.

After adding the STS reference and deploying the web application everything looked OK initially, but in Firebug I could see plenty of requests returning “500 Internal Server Error”.

After much investigation it became clear that one node couldn’t read tokens issued by another, because by default the session token is protected with DPAPI and DPAPI keys are machine-specific.

The following assumes that you have a serviceCertificate configured inside the microsoft.identityModel section of your config. It also assumes that your application pool has access to the certificate (including its private key) in the local store.
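For reference, the config element in question looks something like the following; the find value is illustrative and should point at your own certificate:

```xml
<microsoft.identityModel>
  <service>
    <serviceCertificate>
      <certificateReference x509FindType="FindByThumbprint"
                            findValue="…thumbprint of your certificate…"
                            storeLocation="LocalMachine"
                            storeName="My" />
    </serviceCertificate>
  </service>
</microsoft.identityModel>
```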

Changes to the global.asax file.

New event handler

void onServiceConfigurationCreated(object sender, ServiceConfigurationCreatedEventArgs e)
{
    List<CookieTransform> sessionTransforms = new List<CookieTransform>(new CookieTransform[]
    {
        new DeflateCookieTransform(),
        new RsaEncryptionCookieTransform(e.ServiceConfiguration.ServiceCertificate),
        new RsaSignatureCookieTransform(e.ServiceConfiguration.ServiceCertificate)
    });

    // Swap the default (DPAPI-based) session token handler for one using the shared certificate.
    SessionSecurityTokenHandler sessionHandler = new SessionSecurityTokenHandler(sessionTransforms.AsReadOnly());
    e.ServiceConfiguration.SecurityTokenHandlers.AddOrReplace(sessionHandler);
}

Changes to application start method.

protected void Application_Start()
{
    FederatedAuthentication.ServiceConfigurationCreated += onServiceConfigurationCreated;
}

The preceding changes encrypt and sign the session token with the shared service certificate rather than machine-specific DPAPI keys, so tokens are treated the same on all nodes in the cluster.