Thursday, 13 November 2008

Brittleware

This is really just an observation, and a highly subjective one at that. To any die-hard ASP.NET traditionalist out there: This isn't meant as an attack, so don't take it as one.

I've been working with the ASP.NET MVC framework together with my buddy Steve for a while now, and we've been trying out a lot of different things while getting to grips with this new (and much better) way of doing things. We've been trying to establish our own patterns for developing MVC applications, and in doing so we have experimented and written a lot of code - and very few tests.

During this phase of experimentation I truly came to love the MVC framework for its flexibility and elegance, but for some reason I had this nagging feeling that all was not well. It was a familiar feeling, too, and one that I associate specifically with web development. But I couldn't put my finger on it.

Then just a couple of days ago I realised what it was. We'd decided to start building in earnest and to use the MVC framework in anger. Nothing would be written unless it was tested. And after a couple of hours of intense controller-testing I suddenly realised that the uneasy feeling was gone, and in its place was a feeling of security and contentment.

What I'd felt before was the same feeling I always had when writing ASP.NET applications. Things just felt brittle and ready to break. I could go to any length to ensure that the application did what it should, worked as intended, and would fail gracefully - but somehow that feeling of brittleness never went away completely. I didn't have any tests, no way of verifying that I was still on the right path, the straight and narrow. And writing MVC applications without tests gave me that same feeling because really, I was no better off.

With the introduction of tests that feeling of brittleness disappeared and instead the application felt robust. Solid. Stable. Reliable. Add any number of synonyms. If you believe in TDD you know what I'm talking about.

I don't know if there's a moral to this story, but the realisation I had has confirmed to me yet again how correct it is to develop everything with tests first. There simply isn't another way.

Wednesday, 29 October 2008

Ninject providers and Moq mock injection.

A few months back I stumbled upon Ninject while reading a post on Jonas Follesø's blog. Since then I've been busy architecting a very modular e-commerce solution for a client, and only recently have I started "tying the application together" using Ninject. Ninject makes for very simple dependency injection, indeed. The Ninject approach is so straightforward, in fact, that Ninject isn't just a good framework for Inversion of Control (IoC) - it's an excellent tool for teaching the principles of IoC as well.

I'm not going to go into any detail on how to use Ninject here, though - there are lots of guides and tutorials in the Dojo.

What I will talk about here, however, is a nice little solution for injecting mocked objects using a Ninject binding provider. My solution uses a Service Locator similar to the one Jonas sets out in his article on Silverlight and Ninject, but the focus here is on the binding providers and how to use them in a unit testing scenario. Nonetheless, a quick look at how the Service Locator is used will not go astray.


public class ServiceLocator
{
    public IKernel Kernel { get; private set; }

    public static ServiceLocator Instance { get; private set; }

    public T Get<T>() { return Instance.Kernel.Get<T>(); }

    public ServiceLocator(params IModule[] modules)
    {
        Kernel = new StandardKernel(modules);
        Instance = this;
    }
}


As you can see, the ServiceLocator class is a singleton. You load it up once for your application, and classes can happily use it to get access to instances of types defined by an interface, such as


ICategoryService svc = ServiceLocator.Instance.Get<ICategoryService>();
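
And "loading it up once" is just a matter of constructing the ServiceLocator when the application starts. In an ASP.NET application that might look something like this (a sketch; MyBindingModule is the binding module we'll get to shortly):

//in Global.asax.cs - create the ServiceLocator (and its kernel) once for the application's lifetime
protected void Application_Start()
{
    new ServiceLocator(new MyBindingModule());
}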


I'm building MVC web applications where the controller classes resolve their dependencies in this way, and in order to test these controllers I have to provide the ServiceLocator with bindings to mocks. But I'm probably getting ahead of myself just a little.

Right... I realise I actually have to provide at least a little information on how Ninject works so that you, dear reader, have some context. In order to facilitate dependency injection via Ninject, you create a kernel and load it with one or more instances of IModule which contain your bindings. Creation of the kernel might look something like


StandardKernel kernel = new StandardKernel(new MyBindingModule());


where MyBindingModule is a class that contains the bindings I require. Once again, for the sake of brevity I won't explain the intricate details of how all this works, but for continuity's sake this is how MyBindingModule might look:


public class MyBindingModule : StandardModule
{
    public override void Load()
    {
        Bind<ICategoryService>().To<CategoryService>();
    }
}


By passing MyBindingModule to the kernel when you create it, the kernel will know that whenever you request an instance of ICategoryService it should create a new instance of the concrete class CategoryService and return that back to you.

Now, that's really great and really powerful stuff. You can set up loads of different bindings, even conditional bindings, and the kernel will happily resolve these for you and return you the right instance type whenever you need it. The only issue with this setup is that when it comes to unit testing, where you need to replace some of your standard bindings with mocks, this approach doesn't work.

Why not? Because the kernel, by default, will resolve a binding and return a new instance of the resolved type. If you're going to use mocks, this is no good because you need to have a handle on the actual mock instance so that you can set expectations and verify behaviours etc etc.

Thankfully Ninject's got a nice little feature which allows you to bind a type to a provider. The provider is a class (which you create) that's responsible for creating the object the kernel should return. Typically you'd use a provider when creation of an object is a complex task (e.g. it involves reading configuration files etc). We're not going to create a very complicated object, but we'll use a provider class to ensure that the Ninject kernel resolves its bindings to specific mock instances that we've already created - rather than to new instances that we have no access to.

Before we go any further, let's just have a look at a simple test that injects a mock into another class. In this test we're going to create a mock for ICategoryService, set an expectation on that mock, then inject the mock into a CategoryController class (by means of its constructor), ask CategoryController to do something for us, and lastly verify that the CategoryController used the mock in the way we expected. Moq's my mocking framework of choice, but you could use any other framework if you like (Rhino Mocks, NMock, EasyMock, TypeMock etc).


[Test]
public void TestAddingCategoryReturnsTrueFromService()
{
    Category category = new Category() { Id = 100 };

    Mock<ICategoryService> serviceMock = new Mock<ICategoryService>();
    serviceMock.Expect(x => x.Add(category)).Returns(true);

    CategoryController controller = new CategoryController(serviceMock.Object);
    controller.AddNewCategory(category);

    serviceMock.VerifyAll();
}


If you are at all familiar with mocking, the above test (created with NUnit) should present no mystery (other than being a bit pointless, perhaps).

We're going to change the scenario a bit, because we've no control over the creation of CategoryController instances, so we can't inject dependencies via the constructor. Instead we'll let the CategoryController's default constructor make use of the ServiceLocator to resolve its dependencies:


public class CategoryController
{
    private ICategoryService CategorySvc { get; set; }

    public CategoryController()
    {
        CategorySvc = ServiceLocator.Instance.Get<ICategoryService>();
    }
}


Now we can no longer inject the mock directly into the CategoryController. Instead we have to rely on Ninject to resolve the dependency for us. So now we'll load up the ServiceLocator with an IModule that has a binding to a provider that we can prime with our mock. Hang on tight while we have a look at our provider:


public class NInjectMockProvider<T> : SimpleProvider<T> where T : class
{
    public static Mock<T> Mock { get; set; }

    protected override T CreateInstance(IContext context)
    {
        if (Mock == null) { Mock = new Mock<T>(); }
        return Mock.Object;
    }
}


It's straightforward, really. Our NInjectMockProvider<T> extends SimpleProvider<T>. The CreateInstance(IContext context) method creates a new Mock<T> and assigns it to the static Mock property (unless one has already been assigned), and then returns the mocked object. Armed with this very simple provider and a new binding, our test takes on a new shape altogether. I've included the new test and binding below.


[Test]
public void TestAddingCategoryReturnsTrueFromService()
{
    //set up the ServiceLocator
    ServiceLocator locator = new ServiceLocator(new MyBindingModule());

    Category category = new Category() { Id = 100 };
    Mock<ICategoryService> serviceMock = new Mock<ICategoryService>();
    serviceMock.Expect(x => x.Add(category)).Returns(true);

    NInjectMockProvider<ICategoryService>.Mock = serviceMock;

    //this is where the magic happens. The mock's been assigned to the provider, so
    //the controller will be automagically injected with the mock instance we
    //created above.
    CategoryController controller = new CategoryController();

    controller.AddNewCategory(category);

    serviceMock.VerifyAll();
}

public class MyBindingModule : StandardModule
{
    public override void Load()
    {
        Bind<ICategoryService>().ToProvider(new NInjectMockProvider<ICategoryService>());
    }
}


This example is specific to a scenario where we use the Service Locator pattern, but you can use the same approach and the same mock provider class in other situations where you'd like Ninject to resolve bindings to existing mock instances.

I've found that the use of NInjectMockProvider<T> makes it easy to inject mocks, and it makes the tests easy to read and understand as well. Hopefully this can be of use to some of you out there.

Extension Method Issues with Ninject

I thought I'd put this up here quickly as, hopefully, it may be of use to others who've come across and struggled with the extension method issue with Ninject. I've got another post on Ninject and Moq mock injection coming up soon.

If you've used Ninject in your .NET 3.5 projects you may have had occasional problems compiling your solution. If the compiler throws the "Missing compiler required member 'System.Runtime.CompilerServices.ExtensionAttribute..ctor'" error, you've encountered an issue with the Ninject.Core.dll v1.0.0.82 which is currently available from ninject.org.

The issue is discussed here, but the solution to the problem isn't very clear. If you read very carefully, though, you'll find that you're encouraged to download the source and re-build Ninject yourself after adding a NET_35 pre-processor directive.

As it turns out, Nate Kohari - father of Ninject - has fixed the issue and published the Ninject SVN trunk on the web. So all you need to do is point your SVN client (I use Tortoise) to http://ninject.googlecode.com/svn/trunk/, pull out the code, and build yourself a release version of the DLLs.

Problem solved!

Tuesday, 14 October 2008

Problems with MVC and Continuous Integration

It seems I don't have much time to write these days, but I've finally got another worthwhile piece to share. After sorting out our issues with MVC Preview 5 on VS2008, we promptly ran into trouble when adding our MVC projects to the Continuous Integration process. The MVC project would not build because of missing DLL references. Steve had installed MVC Preview 5 on the build-server, so this was more than a little odd.

However, after digging around for a while Steve found that this wasn't really an issue with missing DLLs on the build server at all. Rather, it was an issue with a couple of hint-paths in the MVC project file. The project file had stuff in it along these lines (reconstructed here from memory, so the exact path may differ):
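
<Reference Include="Microsoft.Web.Mvc">
  <!-- reconstructed example - the exact relative path and assembly may differ -->
  <HintPath>..\..\..\..\Program Files\Microsoft ASP.NET MVC Preview 5\Assemblies\Microsoft.Web.Mvc.dll</HintPath>
</Reference>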




Do you see what's going on here? The hint-path is relative, and points to \Program Files\ on the C:\ drive! This is an issue for us, because the actual build happens on the E:\ drive of our build server. So... what to do?

We use Subversion as our source repository, and frequently apply svn:externals to our project repos to pull in common third-party DLLs (such as Moq, NUnit, NInject etc). So why not use the same approach for the culprit MVC DLLs? That's exactly what we did and it solved the problem nicely.

We've got a separate repository called BinaryDependencies which holds all our third-party DLLs. Whenever one or more of these DLLs are required we add an svn:externals reference to the BinaryDependencies repository to pull in the relevant DLLs to the local project, and then it's simply a matter of replacing the existing references to Microsoft.Web.Mvc.dll (and any other offending reference that's broken) with a local reference. Simple! See the one-liner below.
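
Setting up the external is done from the root of the project's working copy (the repository URL below is made up - substitute your own):

svn propset svn:externals "Dependencies http://svn.example.com/BinaryDependencies/trunk" .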

Of course, this isn't a lasting solution because you'll have to update the DLLs in the BinaryDependencies repository every time a new version of MVC comes out. BUT, it does get you up and running for now - and besides, MVC's soon going to be in Beta and first release... and I assume these particular problems will have been fixed by then!

Sunday, 28 September 2008

Significance

From time to time I read or hear people talk about how unimportant we must be. In a universe so vast, on a planet so tiny, creatures like us must surely be insignificant indeed. And this, somehow, makes life pointless.

I find it to be quite the contrary. In a universe so vast, so void of life, tiny creatures like us, living on such a tiny planet so far from anything else must certainly be incredibly significant. Our existence would not make any sense otherwise.

The point? When things seem pointless, fundamentally they're anything but!

Wednesday, 24 September 2008

Fix for ASP.NET MVC Preview 5 bug with Visual Studio 2008

We've had some trouble using the latest ASP.NET MVC Preview 5. For reasons unknown any attempt to open a view (e.g. Default.aspx, not the code-behind) in Visual Studio 2008 would cause a fatal .NET runtime error and the Visual Studio process would simply be killed. Dead. Gone.



Annoying.

I spent a good amount of time trying to figure out why this could be, and though there were a few leads on the web none of the proposed solutions out there fixed the problem. So, I asked the senior developer on my team, Steve Friend, to have a look. Sic' em, Steve!

Steve noticed that Scott Guthrie's Preview 5 Example Application wouldn't cause any problems at all. Nor would any MVC project added to the same solution as Guthrie's application. However, all other MVC projects created from scratch with Visual Studio would suffer from the symptom described at the top of this post.

Steve examined the differences between ScottGu's application and a 'fresh' MVC app, and discovered that removing two of the DLLs from the fresh MVC app's /bin folder solved the problem. He documented this on our internal wiki, and I've copied-and-pasted his fix here. Thanks Steve!


-------------
Creating a new project
In Visual Studio create a new MVC project (File > New > Project, then select Visual C# > Web then select ASP.Net MVC Web Application). This will create a fairly large default project containing a number of folders and files relating to the MVC pattern structure (i.e. models, views and controllers).

Once you have done this you will need to build the project and create a new website in IIS (see IISSetup for details of how to do this), but before you do that, run through the instructions below to avoid the bug that seems to ship with default projects.

The bug
If you double click on any view page (i.e. an .aspx page) in order to edit it in Visual Studio the application crashes.

The fix
In the solution explorer select "Show all files" to reveal the bin folder. In there you will see the following DLLs (and possibly more, if you have already built your project/solution):

* Microsoft.Web.Mvc.dll
* System.Web.Abstractions.dll
* System.Web.Mvc.dll
* System.Web.Routing.dll

Delete "System.Web.Abstractions.dll" and "System.Web.Routing.dll".

You know you're not a Digital Native when...

... you get up in the morning, tune into Norwegian online radio (NRK P3, to be specific), and find yourself genuinely surprised - when stepping out of the shower ten minutes later - that they're playing Norwegian rock band Dum Dum Boys. "But, but... we're in the UK?"

Perhaps another few hours of chasing Zs is called for.

Monday, 25 August 2008

Building your DBs with VS 2008 Database Edition

This is going to be a quick one, but I feel the database projects in VS 2008 Database Edition are worth a mention.

I've just set up the first database project for my client using VS 2008. The Database Edition of VS 2008 is an improvement on the database project extensions for VS 2005, and makes database development, maintenance, and deployment really simple.

The database project puts the entire DB schema into a structured format (either by schema or object type). You can easily add, modify, or delete tables, sprocs, constraints, functions, etc - and running comparisons of a DB project and a database instance (or between two DBs or two DB projects) is a breeze.

The DB project can be built, which means you've got full DB integrity checks at the click of a button. Even better, you can add the entire solution to your source control repository of choice, and include it in your continuous integration cycle. I set this up with SVN and CCNET, and now every single change to the DB is verified upon build.

How did we manage before?

Thursday, 24 July 2008

DB access problems with VS 2008 Database Edition

I've just installed VS 2008 Database Edition to help with managing the new databases for my current project. The DB project templates in VS 2008 DE provide a really clean way of managing all the DB objects and makes it easy to have it all under source control as well.

However I ran into trouble setting up the projects and kept getting the following error: An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified).

In typical Microsoft style there's no help attached to that error. Sorry, Mac, you're on your own.

A bit of digging around revealed two things: you must specify a target database for your project (if you don't, it'll try to connect to the default instance on your machine, and that won't work), and you have to enable the SQL Server Express service on your machine, as this DB is used by the project for storing comparison data etc.

Most women will claim that men are reluctant to stop for directions and to read the instructions for the gadget they just bought. Maybe so. I didn't bother reading the tutorials on setting up the project. But blimey, it could certainly be made more intuitive!

Anyway, this post is more of a Note To Self - but if it helps you, too, then great!

Saturday, 19 July 2008

Automatic properties; a great idea?

A little while ago I had a conversation with another guy about coding standards and the subject of member variable prefixing came up. I'm a real fan of underscore prefixes like private string _firstName; and suggested that this be included in the coding standard so as to avoid problems stemming from sloppy coding and member names varying by case only.

I've come across many a class where the coder intended to access the property FirstName but - possibly because of IntelliSense or other code-completion shenanigans - ended up accessing the member variable firstName instead. If member variables are consistently prefixed then this isn't a problem. I like the underscore because it stands out, it's just one character, and it kind of anchors the variable to the class or scope.

Anyway, the chap I was speaking to said "sure, we can use prefixes but it really isn't so necessary because we'll just use automatic properties instead."

Hmmmm..... Simpson, hey?

Today in C# you've the ability to declare typed properties which hide the underlying variable. In other words, there's no longer the need to declare a private member variable and expose it explicitly through a public property. Now all you have to do is create the property itself, like this:

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

It is a pretty cool feature. But it's really limited as well. If you want to do anything but expose a variable on your class' interface, you can't use automatic properties. You can define the setter as private, but that's about all the flexibility you have.

Once you start to tamper with either the getter or setter your property is no longer considered automatic, so you're back to good old normal properties. It seems to me there's not a whole lot to be gained from automatic properties because, in my experience, more often than not you want to do something other than simply access a member variable (at least in the setter) when the property is called.
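
To illustrate where the line sits, here's a quick sketch - anything beyond a private setter forces you back to the long-hand form:

public class Person
{
    //automatic property - a private setter is about as fancy as it gets
    public string FirstName { get; private set; }

    //the moment you want validation (or anything else) in the setter,
    //you're back to a plain old property with a backing field
    private string _lastName;
    public string LastName
    {
        get { return _lastName; }
        set
        {
            if (string.IsNullOrEmpty(value))
                throw new ArgumentException("LastName cannot be empty");
            _lastName = value;
        }
    }
}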

Oh, and another thing: Automatic properties aren't debuggable. You can set a breakpoint on an automatic property but it will never be hit. Boo.

I reckon that in the future we'll be able to have another kind of automatic property where you actually can access the get/set variable, validate, or do whatever you like.

For now the automatic property is nothing but a bit of syntactic sugar. I'm on a diet (ha-ha!) so I'm likely to stay clear.

Friday, 18 July 2008

Exploitative software

At the Rotterdam Central metro station last weekend they were having some trouble with their ticketing machines. Have a close look at the OS names.

Seems they've one for your everyday tickets (the purely fabricated prices) and one for special occasions... like festival weekends for example. Fire up the Exploitation OS, sonny!

Thursday, 17 July 2008

Go Wordle yourself


I just had to try this. Very cool idea. Check out Wordle and get your own graphical representation of the stuff that comes out of your head and onto the page. Judging by this picture my blog certainly does not look very technical. Why "Address" is so prominent I've no idea. But I like the idea, and it looks kind of cool, no?

I just can't shut up about it


Agile. I use that word so many times a day now I'm starting to sound like a salesperson. Not good. But I'm deliberately keeping my vocabulary small (keeping it simple) so that at least what people hear is consistent.

I have been interviewing business analysts lately and it's important to me - surprise, surprise - that they have experience with both analysis and project management in an agile environment. As it turns out, the guys that I've liked have a similar idea to mine about agile methodology: One should be agile about that, too!

What I mean by this is that if you're truly agile you don't lock yourself into one single methodology for running your project. It's better to mix and match, pick and choose, chop and change. Construct your own methodology from a set of others and make it fit your situation, your constraints, your experience.

Why is this important? Because the less you impose process, the more nimble you are.

I've had some interesting discussions around this recently. I'm leading a new project, a new and exciting build. It's the ever elusive greenfield. A chance to do something new, something different, to try different shoes on and throw away those that don't fit. This is why I accepted the challenge, because of all these opportunities. And I have been adamant that we should be nimble, flexible, agile all along, and that we should use this opportunity for all that it's worth to try new things.

But my ideas were almost stopped dead because the company that I'm working for currently has a massive IT initiative underway to put in place required processes to manage their rapidly growing development team (which is up by 300% in a matter of months) while at the same time refactoring and rebuilding an aging and highly coupled trading platform.

The team undertaking this work is already so big and now growing so fast that they need process. I argued that the team that I'll be leading is so small that we really don't need all that process. I argued long and hard and had to turn to many different people before I could finally convince the IT leadership that it really is OK to go light on process and tools and support systems when you're in the early phases of something like this.

There is no need for a full-blown Team Foundation Server setup with all its bells and whistles. Subversion does the trick, and it does it beautifully. There's no need for meetings about every aspect of what we are about to embark on. There's no need to define processes for this, that, and everything in-between. In my case, we're kicking off a proof-of-concept project that later will - hopefully! - end up becoming a large scale development with full IT and business backing. Right now, we just need to get on with our POC with minimal hassle and minimal interference. Later on, when the business says "Yes this is great! Here, have a wad of cash", that's when we should start looking at any extra processes and tools and systems we might need to put in place.

I've spoken before about things being a problem only when they are a problem. Anticipating a need for process months prematurely just creates a problem from a notion of something that doesn't exist. Remember that. YAGNI applies not just in best-practice coding guides, it applies to everything that can be labeled as agile.

So, in final words and a note-to-self: Shut up and do it!

Tuesday, 8 July 2008

More and more about agility

I'm on a bit of a roll with agile thinking at the moment. As you can surmise from a previous post, the thinking in my current environment has traditionally been anything but agile.

It's not uncommon. While agile is a buzzword and everybody's talking about it, there are still a lot of people to whom the concept is vague, unfamiliar, strange, alien - just not something they understand. It's been the case here, and it's now slowly changing.

I've talked about M before. M's smart. He's sharp. He gets things. You show him an article on a concept he's never heard about before and he'll grasp it within an amazingly short amount of time. This happened when I showed him the Getting Real book by 37Signals. He got it.

Though, he doesn't always get it entirely. And I am not one to blame him in this case. Moving on from your typical waterfall, specify-in-entirety-then-develop-till-it's-done approach to an approach that says "let's just do what's right for right now and then see what's next" would demand a lot from most of us. So in M's case going agile so far means going from one iteration to three iterations.

That's a far cry from being agile because if you're truly developing in an agile manner you have no idea how many iterations you'll need. More likely, you'll have an infinite number of iterations.

At first M's initial take on agile frustrated me, but then I thought about it and now it doesn't bother me at all. Why? Because of something that I read in 37Signals' book. It says
It's a Problem When It's a Problem
In other words, if you think something is going to be a problem, don't worry about it until it actually becomes a problem. M's three-iterations-is-agile isn't a problem because he doesn't know what the first iteration is yet. Even more to the point, when we've all agreed on what the first iteration entails, he'll see (as the iteration nears its end) that iteration number two looks a whole lot different than what he thought before we started.

As a friend put it the other night, it's kind of like going through cycles of throwing away requirements. What you thought was a requirement a month ago suddenly no longer applies or has been replaced by three other requirements - all because your small iterations bring you a workable representation of software that is real and tangible. Real and tangible things give you experiences. Experiences can give you ideas about how to make that particular experience even better.

And that's as simple as it is. Small-step enhancements. A few of those, based on experience, give you a whole lot more than one big effort based on assumption.

I should give a couple of examples here on what this means, in real terms.

At my previous employer we had a kind of semi-agile approach to developing software. For the greater part it was good, but sometimes we'd get these over-specified documents that told us not only what the customer wanted but also exactly how it should be done (read: implemented).

Pardon me, but if you're hiring me to build software for you, you're leaving the "how" more or less up to me. Anyway...

So I got this requirements document for a customer management tool. And it had this section in it about the "Address Book" and how it should look and work. Every address should be displayed in a certain way, and if an address was a customer's designated billing or shipping address they should appear at the top of the listing (with the billing address appearing first) and be clearly marked as either "Billing" or "Shipping". And the Address Book should paginate all addresses. And to make an address a billing address there should be a button to click next to the address listing. And the same applies to making an address a shipping address. These were, more or less, the requirements.

We (the team of developers) had a chat about it. And we dropped the pagination, because we figured not many customers would have enough addresses to really need pagination. And we dropped the "click-button-to-make-special" idea.

Instead we just listed the addresses on a page and put two columns next to them with checkboxes in them, one to indicate that an address is a shipping address and one to indicate that the address is a billing address.

No pagination, no buttons, no automatic re-ordering of the way addresses are listed. Instead we did it nice, clean, and simple.

Guess what? It worked just fine. And when the time comes that enough customers complain that it's not working just fine for one reason or another... well, then I bet they'll change it.

Another example: A different part of that same application required that we could display and validate data according to a client's specific demands. We had one running instance of the application so we did this by extending standard ASP.NET controls as user controls and building a simple templating system around it. It took one of our developers just under a week to do this. It worked. It was nice and simple. But there was one potential drawback...

Because we'd developed this templating solution using user controls we couldn't re-use it in other applications. We thought we might need to re-use this functionality elsewhere, so we talked about re-building it all using server controls and custom configuration blocks etc etc etc. But in the end we didn't. Because we didn't need it then.

As far as I know they haven't needed it since, either. But the day may come when they do need it, and then they'll build it.

These are some key examples of what it actually means to be agile. It's not about being lazy, it's about being smart.


... and now that this is out of my system it's time to hit the sack :-)

Monday, 7 July 2008

A few more words on agility.

I wrote in a post not long ago about the importance of staying agile in your development process, and I thought I'd elaborate a little further. I'm now involved in a project which is getting close to entering the development phase, and the last few weeks have highlighted to me where we go wrong and why we go wrong so often when we've got a good idea for software.

The potential scope of the project I'm working on now is huge. The idea itself is nothing new (I can't go into it here for obvious reasons) but the setting and the "take" on the idea is new, and probably better than what the competition has come up with. The guy that came up with the idea (my boss, let's call him M) soon realized that there is real money to be made in an emerging market, and he's received funding from the company board to set up a new department and get the project off the ground. Everybody is really excited about what's about to happen and everyone is "behind" the idea.

All of that is really good and nobody's complaining. However, all this excitement led to some problems right from the start. M's idea grew rapidly. Scope creep, anyone? More like scope avalanche/flood/torrent (choose your own appropriate natural disaster analogy). And everyone was still just as excited. Except me, probably. Part of my excitement started turning to fear. I saw that if this thing was allowed to take its natural course, we'd crash and burn. Because, quicker than you can say "Waterfall methodology", M (with my help) had mapped out a system so large and so complex that it simply wouldn't be possible to deliver it on time. It probably couldn't be developed in twice the allocated time. And when I pointed this fact out, things got really scary for a while.

M was incredibly keen on "doing something" (I admire this man's energy and passion for his work, I really do). The business leaders thought the idea was brilliant and wanted it to happen, and happen now (heard that one before?). And the fact that there was too much work and not enough time could easily be overcome. The company makes good money these days, so we could have more allocated to our budget.

More money equals more developers equals more lines of code in a day equals faster delivery. Right? I don't think so. In fact, I know it doesn't work that way. And that's what I had to make M and the others see. Before _anything_ had been produced, the company was - potentially - ready to fork out huge amounts of money to back up a project that had done nothing to prove itself except look good on paper. So I put the brakes on. I spoke with M and expressed my concerns. I told him I thought it is incredibly risky to assume that we know everything about what our customers want and develop something so huge purely on a hunch. I told him we should keep the original deadline and pull back the scope so that we could hit the deadline safely. I told him we should limit the available resource and do as much as possible within the constraints of the original budget and resources available. I asked him to read 37Signals' "Getting Real".

And guess what? M skimmed through "Getting Real" and he got it. He understood the point of what I was saying. And so he slowed down.

But the business didn't. They encouraged us to "think big". So we did. In a four day exercise we thought real big and put together numbers and estimates for what it would cost to deliver M's idea, scope creep and all, by the original deadline. And the cost came in at about ten times the budget for the first year. That kind of drove it home: It's a risky and really costly approach.

It's a risky approach because there are too many assumptions, and the main assumption is that we know what the customer wants. In a new project such as this one, where everything hinges on the customer, you should assume as little as possible and instead listen to what your customers want. Launch with as little effort as possible and be ready to change when your customers demand it. That way you'll have a project that can grow and adapt and be successful because it deserves to be - not because it had amazing funding.

So now we're starting small. Thankfully. And guess what? The first deliverable isn't even going to be an in-house development. We're looking at some third party solutions to launch part of the product and build the rest around that third party solution. That way we can build the product in bits and pieces and work it up as required while the out-of-the-box solution we purchase takes care of our first deliverable.

So the project is turning back to a more agile approach, and thank God for that. What these weeks have highlighted to me is that there is a very real and very palpable risk of getting carried away when you're working on something new and exciting. It's easy to forget to stop and evaluate what you're doing. And when enough people get excited about the same thing, crowd control becomes difficult. We must remind ourselves that the _idea_ is only the starting point, and in the end it is really the execution that matters.

What happened in the first few weeks of this project is that there was too much focus on the idea and not enough focus on the reality surrounding its execution. Hopefully we've adjusted that balance now, but I'm sure there will be interesting times ahead because "speaking agile" and "doing agile" are two quite different things. And I'm sure I'll be writing more about it here.

Saturday, 5 July 2008

Quick update on the Silverlight plugin issue

In a reply to a comment from Jonas Follesø I wrote
Also, having installed the Silverlight 2 plugin, I find that several of the Silverlight 1 sites from the Silverlight.net showcase either do not work (I'm prompted to install Silverlight!) or work intermittently. This doesn't give me a lot of confidence in the technology right now.
Jonas followed up with a link about this particular issue. Turns out Firefox 3 isn't yet a supported browser!

Thursday, 3 July 2008

Silverlight plugin trouble

How can Silverlight expect to take over the world when there are such serious compatibility issues between the plugin's version 1 and 2? I upgraded my plugin in IE, but not in Firefox. Now Firefox crashes - just simply dies - whenever I try to go to a site with embedded Silverlight 2 content. Whether this is a Firefox 3 issue or a Silverlight issue I cannot state categorically, but it's a big issue and seriously detracts from the appeal of building apps with Silverlight 2. I'm also a bit worried about the size of the new plugin. 4.7MB!! That's 3.3MB up on version 1. Not a good trend.

Friday, 20 June 2008

Stay agile. Just stay agile.

There's a real temptation among business people, and also among some of us developers, to think big and go for delivery of the ultimate application in one go. This is dangerous territory.

If you have an idea for a brilliant application or web site, and you start to realise that the complexity of your idea is actually far beyond what you first imagined then it isn't the time to sit down and gather requirements for everything, hire loads of developers and get coding. Rather, this is the time to write a page or two about your vision, then select a few features and build those. Then select some more, build those. Release your application gradually, in stages. On the web this is perfectly possible.

I'm part of a project now where the temptation to go big and go big fast is very real. I've asked a few people to read the Getting Real book by 37Signals to help them get their head around what it means to stay agile and nimble during a software development project.

It's so easy to get stuck on features. Particularly if you're a very visual person it's easy to think too much too soon about how the UI works, how stuff is presented, how you drag-and-drop etc etc etc. Until you have some kind of working model, some kind of prototype you cannot quantify the value of your idea properly. That's why you should focus first on what you want to do, then actually do it, and when you've done it you can go back and have a look at how you've done it - and maybe change it.
  1. Choose a feature (the what)
  2. Build the feature, keeping it as simple as possible
  3. Revisit. Now that it's a working feature, you can go back to Step 1 and add something to it (e.g. an AJAX-enabled drop-down list etc).

If you plan and do stuff in stages like that you'll get things done and done well. If you try to plan for everything up front and then try to build everything in one "sitting" you are going to fail. You don't have to take my word for it. Just try.

Anyway, the point of this ramble is: Keep it simple. If you're paving a driveway you do it one cobblestone at a time.

Wednesday, 16 April 2008

Silverlight 2: We're not quite there yet.

I've just spent two days in a lab in Reading on the Microsoft Campus. The lab topic was Silverlight, the up and coming UI technology that's set to seriously rival Flex and Flash.

The majority of the time was spent in breakout meetings discussing the pros and cons of the technology, and a lot of compare and contrast was done with Silverlight's main competitor, Adobe’s Flex.

Quite a few of the developers present had used Flex before or are currently in the process of evaluating Flex against Silverlight for future projects. Everyone seemed to agree that Silverlight has the potential to be a much better framework for Rich Internet Application development than Flex, and there were two main reasons for this.

Firstly, Flex was born out of a design application and therefore was not intended for real "line of business" application development. Secondly, Flex APIs are inconsistent (“quirky” was a word used frequently), thus requiring a lot more effort to train people to use the framework properly.

Silverlight is a much more intuitive technology for .NET developers. I could see this straight away as the programming model, the event wiring and handling etc., is just like in any other .NET application. Those at the seminar that had worked with both Silverlight and Flex said that proficiency in Silverlight (for a .NET developer) is gained in less than half the time of Flex. Also, the integration with other .NET technologies (services etc) makes it an appealing alternative. It fits right in there with the rest of the .NET family.

But, there are some big shortcomings. The Silverlight 2 control library isn’t complete, not by any stretch of the imagination. I was surprised to find that certain controls such as the DropdownList, TreeView, and DataGrid do not exist. And MS will not promise the inclusion of anything that's currently missing in any of the upcoming beta and CTP releases. Unfortunately MS feels burned by their openness around the Vista release and now hold their cards close to their chest. They'll listen to the community's feedback, but they will not promise delivery of anything just in case they cannot meet their own deadlines.

Despite certain controls not being available, Silverlight remains a really powerful technology. Indeed, it is so easy to use that you can knock up "anything" in very little time. I built a semi-reusable DropdownList control in less than half an hour (using a combination of a textbox, a button, and a listbox control). The beauty of Silverlight and XAML (its markup language), though, is that I could build this control entirely from scratch by drawing it using the Expression Blend application and vector graphics. That’s just too cool.

So Silverlight is powerful, and it makes a lot of previously laborious tasks simple. But nobody has yet used it on a truly big scale, and nobody seems to know or have any ideas on how it should be used for enterprise scale applications. Over the two days we saw a lot of seemingly large applications demonstrated, but I was left with the distinct feeling that the applications were heavy on the UI and thin on business logic. In other words, I wasn't convinced that they demonstrated how Silverlight is a good fit for the large type of applications that we need to build.

Also, we (several of us asked the same questions) were not able to get guidance on how large Silverlight applications should be structured from a development point of view. How do we build something like our current tools applications and share look and feel, navigation etc? How does the project structure look? How does deployment work? Nobody had any answers for this, because Silverlight is still in its infancy.

So, for those companies out there that wish to make Silverlight central to their software solutions there is a lot of pioneering work left to be done. It means blood, sweat, and tears. The technology isn't ready yet, so it's up to the community to make up for its shortcomings. But I think Silverlight will find its feet pretty soon.

Wednesday, 2 April 2008

DataBinding woes

I tore my hair out over this one, for hours. And the solution to the problem was so simple I felt stupid not to realise what was going on.

I have a custom server control called CustomerSummary. It has a single public Customer property that can be set declaratively on any page where it is used. I use this control in three different web applications, and implementing it in the first two was no problem. But when it came to the third application, the control never rendered. I just could not figure it out.

While debugging the control I realised that the Customer property was never set, even though it was correctly declared on the page with Customer='<%# CurrentCustomer %>' (CurrentCustomer is a property on the containing page's code-behind). What the?

After much poking around and debugging, a colleague looked at it and said: “Do you call DataBind() on your page?” No, I don’t. And that’s why. The <%# %> expression is a databinding expression, so since the page didn’t call DataBind(), the CustomerSummary control was never bound to the customer data, and therefore never displayed.

So the first solution was to call DataBind() in the Page_Load method (this is what the first two applications do; that’s why it worked there). However, the same colleague that pointed out that DataBind() was missing pointed me to this article which explains custom code expression builders and how you can use them to avoid having to call DataBind() and bloating the ViewState. Sweet! It’s recommended reading.
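
For reference, the quick fix is just this, in the page's code-behind:

protected void Page_Load(object sender, EventArgs e)
{
    //evaluate all <%# %> databinding expressions on the page,
    //including the Customer property on the CustomerSummary control
    DataBind();
}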

Tuesday, 19 February 2008

Dynamic app.config settings for testing

If you need to write tests that verify behaviour based on different app.config settings, here’s a neat way of doing so.

In your [SetUp] method, load the app.config into two separate instances of the Configuration class, like so:

[SetUp]
public void SetUp()
{
    //load config so it can be edited for tests
    _originalConfig = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);

    _currentConfig = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
    //any other init code here
}


In your [TearDown] method, ensure that you restore the app.config to the original (presumably you want it to be the same every time your tests start).

[TearDown]
public void TearDown()
{
    _currentConfig = _originalConfig;
    _currentConfig.Save();
}


When you need to modify any part of the config you can do so (provided there’s a “set” property for the element or attribute) at runtime like so:


[Test]
public void MyTest()
{
    //... set up stuff here ...

    //modify config for this test. Set the lockout period to 120 minutes
    //_client is an instance of a custom configuration element
    int lockoutPeriod = 120;
    _client.Profiles.PasswordLockoutPeriod = lockoutPeriod;
    _currentConfig.Save();
    ConfigurationManager.RefreshSection("clientSettings");

    //... carry out your tests here ...
}


Pay particular attention to the call to ConfigurationManager.RefreshSection("clientSettings");. The change I made to the config is within this section and so I want to refresh it before continuing in order to ensure that the new version is used.

Also note that any objects that reference the config need to be re-loaded after you’ve modified (and refreshed) the config in order to get the changes. For example, if you have loaded an instance of a class in your SetUp that uses config you'll have to re-initialise that class after making your config changes in a test.

Sunday, 17 February 2008

Dependency Injection is an easily confusing topic

It took me quite a lot of time to get my head around the Dependency Injection pattern. Getting a simple answer to the question "what is Dependency Injection" is not easy. And when you think you "get it", you'll come across an article or a statement that confuses the issue.

So I was happy when I found this citation on Andrew Binstock's blog:

"Any nontrivial application is made up of two or more classes that collaborate with each other to perform some business logic. Traditionally, each object is responsible for obtaining its own references to the objects it collaborates with (its dependencies). When applying DI, the objects are given their dependencies at creation time by some external entity that coordinates each object in the system. In other words, dependencies are injected into objects."

I don't think you can put it simpler than that. I'll be writing more about Dependency Injection as part of my series on TDD so I'll return and try and show with some examples how to use Dependency Injection and give some good reasons why you may want to use it.
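
In the meantime, the quote above boils down to something like this in C# (the types here are invented purely for illustration):

public class InvoiceService
{
    private readonly IInvoiceRepository _repository;

    //the dependency is handed to the object at creation time -
    //the service no longer obtains its own reference to it
    public InvoiceService(IInvoiceRepository repository)
    {
        _repository = repository;
    }
}

//...and the "external entity" (e.g. your start-up code) does the wiring:
IInvoiceRepository repository = new SqlInvoiceRepository();
InvoiceService service = new InvoiceService(repository);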

The confusion around this topic stems, I think, from the fact that Dependency Injection is almost always discussed in the context of a Dependency Injection framework such as PicoContainer or Spring. These frameworks are no doubt very useful, but I don't think bringing them into the mix when explaining the pattern adds value to the discussion.

The definitive article on Dependency Injection was written by Martin Fowler and I recommend reading it despite its use of various frameworks in its examples.

Test Driven Development, part 2. Why TDD?

In my first post on Test Driven Development I introduced TDD as a development methodology and showed, without going into a deep level of detail, how TDD encapsulates high-level software requirements and breaks these into low-level unit tests which are defined and written before any actual production software is written.

Why use the TDD Methodology?

In this post I want to explain why TDD is a good way of writing software and what benefits the methodology yields to software projects where requirements are changing continuously. If you are not familiar with TDD it may be a good idea to go back and read the first part of this series.

It is a statement of obvious fact that testing is necessary for any type of software development effort. Testing can be carried out in a variety of ways, so why is TDD better than the other approaches? The simple answer is that since TDD makes you think of your tests first, all your software development is focused on passing these tests. And as each test is a representation of a software requirement, the entire software development process drives towards meeting the pre-defined, clear requirements specified up front. Instead of a traditional approach where testing happens after development in a kind of "let's test to see if we got it right" mentality, TDD changes the testing mentality into "now that we have our tests, we had better get this right."

A further benefit of the up-front testing approach of TDD is that every unit of code that is written already has a test. This means that as the software grows, so does the number of tests, and there is no need to go through an exercise to create tests once the development phase is over. The tests are defined and written when necessary as new parts of the software are developed or as requirements are added or changed.

Additionally the tests that are written during a TDD project can be run any time to verify the integrity of the produced code. This is an absolutely invaluable aspect of TDD. The fact that you have a click-and-run verification process at your fingertips means that changes to software (requirements changes or refactoring) can happen quickly and safely. If you change a single line of code, re-write a method or an entire class, a single run of your tests will verify whether your code still functions like it should. The bigger the software project, and the more developers changing and adding code on the project, the more valuable this aspect of TDD becomes. Just imagine an application with 25 libraries and ten developers working for three months to complete the project. Every single day each developer will commit (on average) changes at least once. Over the course of the three-month period that means there are roughly 600 opportunities for introducing bugs into the central code base. With TDD and automated testing any bug introduced into the code base will be flagged at the first test run and can be fixed when it is introduced rather than weeks hence when nobody can figure out where it came from.

More subtle benefits of TDD include those that emerge from the step-by-step process of test and code creation. This will be covered in much greater detail in a later post on how TDD is practiced, however it can be said here that the relatively strict guidelines of the TDD methodology encourage brevity and clear, logical encapsulation, and discourage bloated methods. The TDD methodology forces you to write testable code, and following the guidelines will yield better, more readable, and more economic code. A common problem with non-TDD approaches is that code ends up being very hard to test (in an automated manner) because of the way the code is written. With TDD this problem is not prevalent.

There are some developers that subscribe to unit testing and automated test runs, but who do not believe in the TDD methodology. The most common reason I am met with goes something like "How can you test your tests? If you cannot test your tests you cannot be sure that your tests are correct. If a test is incorrect the resulting software will have incorrect behaviour. As such, TDD is an unsafe approach." I sometimes am told that TDD requires a very mature team of developers in order to function, and there is also a lot of doubt as to whether the up-front effort required to write tests pays off in the long run. "It's so time consuming," they say.

These are all valid concerns, but they typically stem from a lack of understanding of the underlying TDD methodology. It is true that you can end up writing tests that are incorrect. I have certainly done that more than once. However I know that I have written incorrect tests because eventually these incorrect tests have been caught out by other tests that I have created later. Writing an incorrect test amounts to roughly the same level of severity as misunderstanding or omitting an untested software requirement. The fallout is determined by how badly wrong the test was written. The risk of these things happening is not minimised by creating layer upon layer of tests for your tests, but rather through clear communication and channels for requirements elicitation and documentation, both prior to development and during the development phase.

I do not believe that TDD requires a "very mature team of developers". Rather I believe it requires a team of developers with clear vision and great aptitude. A developer's number of years of experience "in the field" does not determine whether or not he is fit for TDD. It is true that the TDD methodology requires practice and is not learned quickly, but this does not mean that it cannot be taught to or practiced by juniors. On the contrary, TDD teaches very good software development skills and can be an excellent tool in the education of inexperienced developers.

But of course, TDD in the hands of sloppy developers will not yield better software than that team would using any other methodology. TDD is not a silver bullet. Nothing is. If you believe it is you'll end up shooting yourself in the foot.

My answer to the argument about TDD being time-consuming usually centers around examples from my own experience. Recently I had to add a single member field on a class, load the value for that field from a database, and ensure that the field was transferred properly from the data access layer to a WCF service layer. It took me between four and five hours to write all the tests and code required. I'll add that the data access layer involves custom data mapping with iBATIS and is a little bit more involved than what you would normally assume, but four-to-five hours for this task probably sounds like a lot of effort for what I accomplished. But the pay-off for all that work came shortly afterwards when a change elsewhere in the system caused my newly written tests to fail. And they failed for reasons I had not even thought of - so chances are that without the tests in place this bug would have gone unnoticed for much longer. It's difficult to quantify how much effort would have gone into discovering, hunting down and fixing the bug, but I am pretty sure that it would have taken me at least an hour - maybe two. So if two and a half hours of the total five were spent writing my tests, I've already had a significant return on my investment. And the best thing of all is that this investment will keep yielding a return, because the tests are still there and still running.

Finally, some people are simply more comfortable writing unit tests after development because this approach is more familiar and in line with the code-first, test-second approach that we all know so well. To this I submit that "test-after is better than no test at all", but also offer two points of reflection on why TDD is still the better alternative:

When you write unit tests after writing code you run into a dilemma as soon as a test you've just written fails:

  1. If your test fails because you wrote the wrong test for your unit, you will have to change your test to suit your code. This is inherently dangerous because at this point you have no assurance that your unit actually does what it should. A test that has been retrofitted in this manner can end up providing false proof that a unit behaves correctly when it does not.
  2. If your test fails but is the correct test for your unit, you have to change your unit so that it passes the test. This is exactly what's at the centre of the TDD methodology: your tests come first and provide the benchmark for your units. In this scenario you end up applying TDD principles anyway, so it is better to adhere closely to them and apply them up front.

In my next post I will write about the how of TDD, going into practical code examples and discussing the step-by-step methodology that yields the results I have described here.

Monday, 11 February 2008

WCF Data Contract translation

In WCF you often pass data between the client and service in serializable classes marked up as Data Contracts. These classes are supposed to be data containers only and should not contain business logic. Whenever an object of this type is received on the service side, it is therefore translated into a business object of some sort before processing takes place.

A straightforward way of doing this translation is to set up a little translator class with a method that takes a parameter of type X and returns an object of type Y, and to manually code the translation between the two in the method body. The drawback is that you have to explicitly code the translation both from X to Y and from Y to X. I thought I had a better solution.
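
To make the drawback concrete, here is a minimal sketch of that manual approach (the two Profile classes and their FirstName/LastName properties are hypothetical stand-ins):

public static class ManualTranslator
{
    // one method per direction - every new property has to be mapped twice
    public static DataAccess.Profile Translate(DataContracts.Profile input)
    {
        DataAccess.Profile output = new DataAccess.Profile();
        output.FirstName = input.FirstName;
        output.LastName = input.LastName;
        return output;
    }

    public static DataContracts.Profile Translate(DataAccess.Profile input)
    {
        DataContracts.Profile output = new DataContracts.Profile();
        output.FirstName = input.FirstName;
        output.LastName = input.LastName;
        return output;
    }
}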

Because of an earlier, lengthy refactoring exercise, a couple of Data Contracts (that were also business objects) had been split into two classes each - one a Data Contract for the service, and one a Data Transfer Object for the data access and mapping layer. The two classes shared a number of properties, so their common interface was abstracted out. And this led to really easy translation, because the translator could simply treat two types implementing the same interface as the same type.

public static IProfile Translate(IProfile input)

This method allows the caller to control the types going in and coming out, through type casting.

//the object "input" is of type DataContracts.Profile
DataAccess.Profile profile = (DataAccess.Profile) Translator.Translate(input);
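
One plausible shape for that Translate method - a sketch only, assuming hypothetical DataContracts.Profile and DataAccess.Profile classes that both implement IProfile, with FirstName/LastName as the shared members - is to inspect the input's concrete type and produce its counterpart:

public static IProfile Translate(IProfile input)
{
    // create the counterpart type: a Data Contract becomes a DTO, and vice versa
    IProfile output = (input is DataContracts.Profile)
        ? (IProfile) new DataAccess.Profile()
        : new DataContracts.Profile();

    // copy the members shared through the common interface
    output.FirstName = input.FirstName;
    output.LastName = input.LastName;

    return output;
}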


This works just fine for simple object-to-object translation (e.g. DataAccess.Address to DataContracts.Address), but with nested objects I ran into trouble. For example, IProfile declares a public property of type List<IAddress>. With the above approach there's no way of telling the Translate(IProfile) method what concrete type the contained IAddress items should be translated into.

So it became necessary to be a bit more type specific. The Translate() methods were rewritten with generics like so:


public static T Translate<T>(IProfile input) where T : IProfile, new() { /* ... */ }
public static T Translate<T>(IAddress input) where T : IAddress, new() { /* ... */ }
//... and so on


And now, by adding an extra property public Type AddressType { get; set; } on IProfile, we could make use of MethodInfo.MakeGenericMethod(Type) to call the correct generic Translate method on Translator for the right type of Address for the IProfile object, like so:


// ... this is all in class Translator

public static T Translate<T>(IAddress input) where T : IAddress, new()
{
    //... translation logic for addresses goes here ...
}

public static T Translate<T>(IProfile input) where T : IProfile, new()
{
    T output = new T();

    // ... do the member-specific translation here ...

    if (input.AddressList.Count > 0)
    {
        foreach (IAddress inputAddress in input.AddressList)
        {
            // find the Translate overload that accepts this concrete address type,
            // then close the generic method over the profile's declared AddressType
            MethodInfo method = typeof(Translator).GetMethod("Translate", new Type[] { inputAddress.GetType() });
            MethodInfo genericMethod = method.MakeGenericMethod(output.AddressType);

            output.AddressList.Add((IAddress) genericMethod.Invoke(null, new object[] { inputAddress }));
        }
    }

    return output;
}
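
For completeness, here is a usage sketch (assuming, as the code above implies, that the hypothetical DataAccess.Profile initialises its AddressType property to typeof(DataAccess.Address) so the translator knows which address type to produce):

// "input" is the DataContracts.Profile received by the service
DataAccess.Profile output = Translator.Translate<DataAccess.Profile>(input);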

Sunday, 3 February 2008

Hamburg

I have just spent a weekend in Hamburg with my wife, and it was a bit of an eye-opener. I've never visited Germany before (stopovers in airports don't count; if they did I'd consider myself really well travelled indeed) and the surprises were many. For starters, the stereotypical German was nowhere to be found. Not a grumpy, angry, or rude person in sight. In fact, the Hamburgers (ha ha!) are as pleasant as they come. Londoners, you've got something to learn here. I also expected people to be very strict and correct - perhaps a little uptight. That wasn't the case, either. Perhaps I've just got a nasty set of preconceptions (and perhaps all of this is evident to all of you) but now all of those have been thrown aside and I really cannot wait to come back. Germany-by-car is now high on the list of priorities.

The thing that touched me the most about Hamburg, however, was the walk through the area around the St. Nikolai Memorial - a former residential area that was bombed to smithereens by the RAF and USAF during a true campaign of terror (sorry for using such a watered-down and inaccurate term, but just about any historical text deems Operation Gomorrah [yes - that's what they called it] an act of deliberate terror) at the end of July 1943. Over the course of just a few days some 35,000 civilians were killed. (The Wikipedia article cites 50,000 deaths, and this Air Force Magazine article cites 40,000, yet the St. Nikolai Memorial quotes the number as roughly 35,000.) German civilians were the target of this campaign, which aimed to demoralise the enemy. I am not about to get political here, but I think we're quick to forget the sufferings of the German people during the war, and this was a good reminder for me, at least, that even the history of WW2 cannot be drawn in black and white.

The remarkable thing is that Europe, somehow, has managed to pick up the pieces and pull itself back into relative unity. We, the generations born after the war, have so much to be thankful for. When we get caught up in our everyday dramas, disasters or despair, perhaps we ought to stop and spare a moment of thought for what was given and was lost in the war fought to preserve our European liberty. Don't think that your lifestyle is a given, and certainly don't take it for granted. Instead, travel to Germany and experience the country and the people and find yourself among some of Europe's friendliest.

Test Driven Development, part 1.

Testing is central to all software development. We simply cannot live without it and, though its role in the development life cycle is often underestimated, thorough, repeatable testing is crucial to the delivery of quality software. Testing can be done in any number of ways, yet is all too often treated as an afterthought at the end of a development project. I want to take some time now and over several posts to speak about Test Driven Development (TDD), a methodology that puts testing at the front, in the driver's seat, of software development.

TDD is certainly nothing new, and there are numerous excellent texts available on the topic. What I will be writing about here has probably been covered in those texts; however, I wish to cover the central topics and highlight things that I have found particularly useful, interesting, or plain difficult. TDD is a huge topic: there are lots of development frameworks available to support the developer (each typically targeting a specific language or group of languages), several streams of thought (classic TDD vs mocking, for example), and multiple ways of achieving the same things. By getting at it piece by piece I hope to form a coherent picture of what TDD is all about and how you can use different frameworks and apply different methods and approaches in a manner that suits you, your business and your project, all in an effort to improve the quality of the software you write.

I think a good approach to this will be to start with fairly coarse-grained topics such as what, why, where, when, and how - though perhaps not in that order. Several subtopics will pop up along the way and I'll address these as I go.

Now, if you are reading this and you are interested, please feel free to post your comments and questions. Here we go...

TDD, What is it?
Test Driven Development is, as the name implies, a methodology for software development in which tests are what drive the actual creation of software. In TDD tests define the software requirements at a level of very fine granularity. TDD is unit testing in action, where a unit is a single public method. (In 99% of the cases you will only ever test public methods, though there are exceptions.)

When we say that a test defines a requirement it means that a test is created to assert that when a certain action is performed (a public method is called) in software, specific results ensue (the state of an object changes in a specific manner). A very trivial example of this could be a Dispenser class that represents a vending machine. The requirement could state that when a can of soft drink is dispensed, the number of cans (of that type of soft drink) is reduced by 1. The test that defines this requirement would create a Dispenser instance with a certain known number of cans (of Fanta, for example) and then call Dispense(typeof(Fanta)) on that instance. The _actual_ test here is that after the Dispense method has been called, the quantity of Fanta cans left must be the initial quantity less 1.
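
As a concrete sketch, that test might look like this in NUnit (the Dispenser class and its Load, Dispense and StockLevel members, and the Fanta type, are all hypothetical):

[Test]
public void Dispense_ReducesStockLevelByOne()
{
    // a hypothetical dispenser seeded with five cans of Fanta
    Dispenser dispenser = new Dispenser();
    dispenser.Load(typeof(Fanta), 5);

    dispenser.Dispense(typeof(Fanta));

    Assert.AreEqual(4, dispenser.StockLevel(typeof(Fanta)));
}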

You've probably already asked yourself "what kind of requirement is this?". It's certainly not a very typical requirement you'd ever get from a customer. Instead the customer is likely to define the requirement something like this: When a can of soft drink is to be dispensed, ensure that the customer has inserted sufficient coins, then dispense the correct can of drink according to the button pressed. Ensure that the stock levels are updated and that the customer is issued the right amount of change, if applicable.

In the middle of _that_ requirement you'll notice that there is a reference to stock levels. This corresponds nicely to the test we defined in the previous paragraph. As you can see, a customer's requirement (a functional requirement) typically breaks down into several low-level requirements. This is a good thing, because in order to test efficiently you need to test one condition at a time.

But I get ahead of myself. Let's go back to talking about requirements and how tests represent them. If you stop and think for a while about the client's requirement for dispensing a can of soft drink you'll see that several software components are needed to build the software to control the client's machine. We need software to deal with cash (counting coins and dispensing change), software that controls the machine's input buttons, software that keeps track of stock, software to control the dispensing mechanisms and so on. When you start thinking about how these components will work, and how they will work together, your process of design will quickly move down to a much lower level of requirements and you'll see the smaller elements, the software units that need to exist in order for the machine to work as the customer specifies. All these smaller units have their own very specific requirements that are pieces of, and together form the whole of, the customer's high-level requirement.

Describing unit tests and requirements in this way may make it sound like it is easy to deduce the higher level requirements of a piece of software from its low level unit tests. That is usually not the case. At the low level each individual unit (method) is tested to ensure that it behaves as prescribed. This is very useful as it clarifies the intent behind the written code, however as each test is separate from the others there is no apparent way to string a set of random unit tests together to form a more human-readable requirement at a higher level. _Integration tests_ go some way towards bridging this gap but these will be discussed later in a different post as they are likely only to introduce confusion here.

So far we have established that TDD is about testing each unit of code to ensure that it works - that it behaves in the manner you, the developer, intended. But TDD is a methodology that does much more than create unit tests. It is certainly possible to unit test code without the TDD approach.

What sets TDD apart from other unit testing methodologies is its focus on testing before and during the development process. You may find it strange that testing takes place before development, yet this is a crucial point. TDD stipulates that you do not write any code unless you have a test for that code. For example, if you need to write a method that adds two integers you must first write a test for that method that verifies its behaviour. Your test would, for example, pass the integer 2 and the integer 5 to the method and expect the returned integer to be 7. You write this test before you write your Add(int, int) method - so of course the test does not even compile at first. But once you write your method you can instantly run your test to verify that it works the way you intended.
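
In code, the sequence might look like this (an NUnit-style sketch; the Calculator class is just a hypothetical home for the method):

// written first - this will not even compile until Calculator.Add exists
[Test]
public void Add_ReturnsSumOfTwoIntegers()
{
    Assert.AreEqual(7, Calculator.Add(2, 5));
}

// then the simplest code that makes the test pass
public static class Calculator
{
    public static int Add(int a, int b)
    {
        return a + b;
    }
}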

Though this is an overly simplified example, it still highlights that the tests you write up front are the driving factor for the actual code you develop. Writing software in this manner forces you to think very carefully about what you are doing and what you are trying to achieve. If you do this diligently (I will post about the how of TDD later) you will find that you not only produce code that is very accurate (it does exactly what you intend) - you also produce code that is highly testable (duh!). Though I will also post about the why of TDD later, it should be evident at this point that code that is easily testable has high value.

That kind of wraps it up for the what of TDD. Next time I'll write about the why.

Thursday, 31 January 2008

A WTF in a parcel

A friend was cruising around for some info on how different courier companies allow people to track parcels, and came across this true gem of a statement on ParcelForce's website.

Links to this website

You may not create a link to any page of this website without Royal Mail's prior written consent. If you do create a link to a page of this website you do so at your own risk and the exclusions and limitations set out above will apply to your use of this website by linking to it.


WTF? Seriously - what does that mean, and what purpose can a statement like that possibly serve?

See it for yourself, just scroll down to the "Links to this website" and "Links from this website" sections. The genius that came up with this text ought to be congratulated.

Wednesday, 30 January 2008

Rather scathing about UML

Ever used UML to model software you're building or will build? Ever tried to?

Personally my only brush with UML was at university where it was a set topic of study. I've not touched it since and I'll be careful about drawing too many conclusions from my limited exposure. I'll just say this: Visual modeling of software is difficult and any method or process is bound to have limitations and flaws. But that doesn't render this post about UML any less humorous.

Are you happy where you work?

Today I happened upon some interesting reading. There's a software company in New York called Fog Creek Software that has a unique way of looking at employees, hiring processes, office layout and company structure.

I cannot say I agree with everything I read on Fog Creek's site, but many things certainly rang true. I've worked in a couple of places where I felt that the pool of IT talent was going to waste, either because upper management saw the programmers and other IT staff as "the people who know how to deal with the ones and zeros" or because there were really clear lines drawn (in the sand!) as to what's allowed and what's not in terms of technology, methodology, tools and software. These things are just crippling to creativity, and really limiting when it comes to building new skills within an IT team.

I really recommend reading what the owners of Fog Creek have to say. Read the About page, the bit about the Development Abstraction Layer, and the spiel on how to treat developers. The description of the office alone makes me want to move to New York.

It was actually the article about the Fog Creek office that made me so enthusiastic about what these guys have to say. Fog Creek really appears to invest heavily in their developers. Seriously. Some of the figures seem just crazy at first glance. Just take the $800 chair that everybody sits in. Or the $700 per-developer-per-month cost of the office. I'm sure most CFOs would cringe just at the thought. But the idea, Fog Creek's idea, is that this investment in people means highly motivated workers, better focus, higher productivity, higher quality - and the end result is of course better products that sell better than the competition's.

This just makes me think of jobs I've had in the past where software development is at the centre of what the business does, yet programmers and IT staff are sidelined in every way possible. Take for example the digital agency I worked for in Sydney that had an awesome building in a top location, with a huge bar with perfect views of that spectacular harbour. Yet when the company expanded, who were the ones that were pushed - literally - to the very back of the office to sit directly underneath the central air conditioning vents, where by 11am your fingers would be numb with cold? Or take a more recent job in London where the offices were also very flash and modern, yet the IT department was placed closest to the office entrance so that there was constant traffic of people coming and going. Hardly an ideal location for the hard thinkers of the company, the people who kept the "shop open" 24/7.

The difference between those companies and Fog Creek is pretty clear. Fog Creek understands its own business and who makes it happen - and they invest in that, convinced that this investment in people is what will increase profits.

I'm a developer so it's no mystery that I like reading about a company that puts programmers in the spotlight. But it's the same for everybody - we want recognition that the job we do - our individual contribution - is important to the company we work for. So I encourage all of the CEOs, MDs, CTOs and CFOs out there to take a look at what Fog Creek does and try to apply some of the tips in their own companies. Of course, the CEOs, MDs, CTOs, and CFOs don't read my blog - so forward them the link to this post :-)

Sunday, 27 January 2008

Throw your own SqlException, part 2

Last year I wrote a very short post on a technique for creating and throwing a SqlException instance. The post links to a solution on another blog. Since I have been doing a bit of work using reflection lately I thought I should follow up on the original post and explain exactly what the problem is, how it's solved, and also include the full source code (in case the original post on the other blog is lost).

So, here we go. My original problem was that I needed to be able to throw an instance of SqlException with a specific error code. SqlException cannot be instantiated directly, however, because its constructor is marked as private. The only way to get around this is to use the very useful tools in the System.Reflection namespace and get at the private constructor via exploration of the metadata emitted by the System.Data assembly.

Before I go on, if you are not familiar with what reflection is I recommend that you first read about it here. Also, when working with reflection to discover and create types Lutz Roeder's Reflector is invaluable. Download it!

Now we're equipped for the task at hand. Using Reflector we'll first inspect the SqlException class to discover what's required to create an instance, and then we'll use the System.Reflection namespace to write a set of methods that allow us to create a SqlException instance with the required state. It really isn't difficult, but writing this kind of code (using System.Reflection) does not feel natural at first pass, so I'll go through it here step by step.

1. Inspect SqlException with Reflector.
Open Reflector and load the System.Data assembly. Expand the System.Data.dll node, and then the System.Data.SqlClient node. Scroll down until you find the SqlException node and expand this also. When you expand this node you'll find that the SqlException class has two constructors (both private), three methods (one public, one private, one internal), and several public properties.

One of the private constructors takes a string (an error message) and a SqlErrorCollection instance as parameters. This is the constructor we'll be working with. Using System.Reflection we'll create an instance of SqlException by passing in a string of our choosing, plus an instance of SqlErrorCollection that we'll also create. But before we get carried away, let's have a look at the SqlErrorCollection class.

2. Inspect SqlErrorCollection with Reflector.
SqlErrorCollection is listed just above SqlException in the node list. Expand it and you'll find one internal constructor that takes no parameters (how useful for us!), one internal Add(SqlError) method (probably a bit more useful), three public methods, two public properties, and two inherited properties.

So, in order to create an instance of SqlErrorCollection for our SqlException constructor it seems we'll have to utilise System.Reflection a bit more. Now, an empty SqlErrorCollection is probably not going to be that useful to us either, so we'll have to create an instance of SqlError and add it to the collection using the Add(SqlError) method.

3. Inspect SqlError with Reflector.
SqlError is listed just above the SqlErrorCollection in the node list. When you expand it you'll see that it is quite simple, with an internal, seven-parameter constructor, and a single public method (overriding Object.ToString()).

Armed with this knowledge, the list of steps to complete our task is as follows:
  1. Create a SqlError with an appropriate error message and error code.

  2. Create a SqlErrorCollection and add the SqlError instance from step 1.

  3. Create a SqlException instance by passing in an error message and the SqlErrorCollection from step 2.

Almost there, but not quite. How exactly do we go about doing this? Before we carry on, it's time to have a look at the System.Reflection namespace. Put simply, the System.Reflection namespace allows you to discover the internal structure of a type, invoke methods, properties or constructors, and even create types at runtime. All of this is accomplished through the use of special-purpose classes, some of which we will use here. (I'm not going to discuss System.Reflection more in-depth here as it really deserves a long post of its own.)

Specifically we will be making use of the ConstructorInfo class (for calling constructors) and the MethodInfo class (for calling methods). Instances of these classes are returned by calling GetConstructor() or GetMethod(), respectively, on a type. For example, calling typeof(SqlErrorCollection).GetConstructor(...params...) returns an instance of ConstructorInfo that we can use to invoke the constructor. In fact, the parameters we pass to GetConstructor define a search over the type (in this case SqlErrorCollection), and the ConstructorInfo returned corresponds to the constructor found in that search. (If no matching constructor is found, null is returned.)

Let's have a closer look. GetConstructor() has three overloads, but I will only look at the second overload here as it is exactly what we need. The method takes four parameters: BindingFlags, Binder, Type[], and ParameterModifier[]. The first parameter is a bitmask constructed by combining individual BindingFlags to indicate what kind of constructor we're after (e.g. a non-public instance constructor). The second parameter, Binder, allows us to pass in a special Binder instance that can be used to select a specific overload and so on. The third parameter is an array of Type, which specifies the number, types, and order of the parameters in the constructor declaration. The fourth and last parameter is an array of ParameterModifier - each ParameterModifier in this array can be used to specify which parameters are to be passed by reference and so on.

We only need to worry about two of these parameters, namely the BindingFlags and the Type array. The other two we'll pass in as nulls and let the framework use its built-in default behaviour.

GetMethod() is similar to GetConstructor() but has a string parameter that can be used to pass in the name of the method to find. If the method we're looking for has no overloads we only need to specify the BindingFlags for the method and we're done! Otherwise we can also pass in a Type array to specify which overload, exactly, we're after.

Now we're both armed with knowledge and equipped with some nifty tools to get the job done. Let's get started on our first task, creating a SqlError instance.

Creating the SqlError instance.
The constructor on SqlError takes seven parameters of different types. To search for this constructor using the GetConstructor() method we'll have to declare a Type array that defines these types in the correct order. Also, when it comes to actually invoking the constructor we'll need to pass in some actual parameter values, and we'll declare these in an array of Object. Then we'll get a reference to the SqlError constructor by calling GetConstructor() on its type, and finally create an instance of SqlError by invoking the constructor with our parameters. Put together in a method it looks like this (there are some default parameter values used here; you can replace these with something more dynamic if you like):

private static SqlError GetError(int errorCode, string message)
{
    // parameter order matches SqlError's internal constructor:
    // (number, state, class, server, message, procedure, line number)
    object[] parameters = new object[] { errorCode, (byte)0, (byte)10, "server", message, "procedure", 0 };
    Type[] types = new Type[] { typeof(int), typeof(byte), typeof(byte), typeof(string), typeof(string), typeof(string), typeof(int) };
    ConstructorInfo constructor = typeof(SqlError).GetConstructor(BindingFlags.NonPublic | BindingFlags.Instance, null, types, null);
    SqlError error = (SqlError)constructor.Invoke(parameters);

    return error;
}

Creating the SqlErrorCollection instance.
This is a simpler task than creating a SqlError instance because the SqlErrorCollection constructor doesn't take any parameters. All we have to do is get a reference to the constructor and then invoke it. Later we'll invoke the Add() method to add the SqlError instance returned by the method we wrote in the previous step. A method to create a SqlErrorCollection instance looks like this:

private static SqlErrorCollection GetErrorCollection()
{
    // the internal parameterless constructor is all we need here
    ConstructorInfo constructor = typeof(SqlErrorCollection).GetConstructor(BindingFlags.NonPublic | BindingFlags.Instance, null, new Type[] { }, null);
    SqlErrorCollection collection = (SqlErrorCollection)constructor.Invoke(new object[] { });
    return collection;
}

Creating the SqlException instance.
This step does much the same as the previous two, the difference being that we also invoke a method, and we use the instances of SqlError and SqlErrorCollection created by the two methods we have already written. In short, we create an instance of SqlErrorCollection, an instance of SqlError, call GetMethod() on the SqlErrorCollection type to get a reference to the internal Add() method, then add the SqlError instance to the collection. Finally we wrap it all up by getting a reference to the SqlException constructor and invoking it with an error message and the SqlErrorCollection. As a method it looks like this:

public static SqlException CreateSqlException(string errorMessage, int errorNumber)
{
    SqlErrorCollection collection = GetErrorCollection();
    SqlError error = GetError(errorNumber, errorMessage);

    // Add() is internal, so it too has to be invoked via reflection
    MethodInfo addMethod = collection.GetType().GetMethod("Add", BindingFlags.NonPublic | BindingFlags.Instance);
    addMethod.Invoke(collection, new object[] { error });

    Type[] types = new Type[] { typeof(string), typeof(SqlErrorCollection) };
    object[] parameters = new object[] { errorMessage, collection };

    ConstructorInfo constructor = typeof(SqlException).GetConstructor(BindingFlags.NonPublic | BindingFlags.Instance, null, types, null);
    SqlException exception = (SqlException)constructor.Invoke(parameters);
    return exception;
}

And that's it. Putting it neatly together in a class, we've got ourselves a little SqlExceptionStub that is very handy for testing certain expected database exceptions. The complete code is below.

And that's that!


using System;
using System.Data.SqlClient;
using System.Reflection;

namespace SnowValley.Tools.Tests.Utility
{
    public class SqlExceptionStub
    {
        public static SqlException CreateSqlException(string errorMessage, int errorNumber)
        {
            SqlErrorCollection collection = GetErrorCollection();
            SqlError error = GetError(errorNumber, errorMessage);

            MethodInfo addMethod = collection.GetType().GetMethod("Add", BindingFlags.NonPublic | BindingFlags.Instance);
            addMethod.Invoke(collection, new object[] { error });

            Type[] types = new Type[] { typeof(string), typeof(SqlErrorCollection) };
            object[] parameters = new object[] { errorMessage, collection };

            ConstructorInfo constructor = typeof(SqlException).GetConstructor(BindingFlags.NonPublic | BindingFlags.Instance, null, types, null);
            SqlException exception = (SqlException)constructor.Invoke(parameters);
            return exception;
        }

        private static SqlError GetError(int errorCode, string message)
        {
            object[] parameters = new object[] { errorCode, (byte)0, (byte)10, "server", message, "procedure", 0 };
            Type[] types = new Type[] { typeof(int), typeof(byte), typeof(byte), typeof(string), typeof(string), typeof(string), typeof(int) };
            ConstructorInfo constructor = typeof(SqlError).GetConstructor(BindingFlags.NonPublic | BindingFlags.Instance, null, types, null);
            SqlError error = (SqlError)constructor.Invoke(parameters);

            return error;
        }

        private static SqlErrorCollection GetErrorCollection()
        {
            ConstructorInfo constructor = typeof(SqlErrorCollection).GetConstructor(BindingFlags.NonPublic | BindingFlags.Instance, null, new Type[] { }, null);
            SqlErrorCollection collection = (SqlErrorCollection)constructor.Invoke(new object[] { });
            return collection;
        }
    }
}
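
To round things off, here's a usage sketch (NUnit-style; the error number 1205 - a deadlock - is just an example):

[Test]
public void CreateSqlException_SetsErrorNumber()
{
    SqlException exception = SqlExceptionStub.CreateSqlException("Deadlock victim", 1205);

    // SqlException.Number reflects the first SqlError in its collection
    Assert.AreEqual(1205, exception.Number);
}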