Sunday, 3 February 2008

Test Driven Development, part 1.

Testing is central to software development. We simply cannot live without it, and though its role in the development life cycle is often underestimated, thorough, repeatable testing is crucial to the delivery of quality software. Testing can be done in any number of ways, yet it is all too often treated as an afterthought at the end of a project. I want to take some time now, and over several posts, to talk about Test Driven Development (TDD), a methodology that puts testing at the front, in the driver's seat, of software development.

TDD is certainly nothing new, and there are numerous excellent texts available on the topic. What I will write about here has probably been covered in those texts; however, I want to cover the central ideas and highlight things that I have found particularly useful, interesting, or plain difficult. TDD is a huge topic: there are many development frameworks available to support the developer (each typically targeting a specific language or group of languages), several schools of thought (classic TDD vs mocking, for example), and multiple ways of achieving the same thing. By tackling it piece by piece I hope to form a coherent picture of what TDD is all about, and of how you can use different frameworks and apply different methods and approaches in a manner that suits you, your business, and your project, all in an effort to improve the quality of the software you write.

I think a good approach will be to start with fairly coarse-grained topics such as what, why, where, when, and how - though perhaps not in that order. Several subtopics will pop up along the way and I'll address them as I go.

Now, if you are reading this and you are interested, please feel free to post your comments and questions. Here we go...

TDD: what is it?
Test Driven Development is, as the name implies, a methodology for software development in which tests drive the actual creation of software. In TDD, tests define the software requirements at a very fine level of granularity. TDD is unit testing in action, where a unit is a single public method. (In 99% of cases you will only ever test public methods, though there are exceptions.)

When we say that a test defines a requirement, we mean that a test is created to assert that when a certain action is performed in software (a public method is called), specific results ensue (the state of an object changes in a specific manner). A very trivial example could be a Dispenser class that represents a vending machine. The requirement could state that when a can of soft drink is dispensed, the number of cans (of that type of soft drink) is reduced by 1. The test that defines this requirement would create a Dispenser instance with a certain known number of cans (of Fanta, for example) and then call Dispense(typeof(Fanta)) on that instance. The _actual_ test here is that after the Dispense method has been called, the quantity of Fanta cans left must be the initial quantity less 1.
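To make this concrete, here is what that test (and just enough of the class to satisfy it) could look like - a minimal sketch in Python rather than the C#-style code above, with all names (Dispenser, dispense, quantity_of) chosen purely for illustration:

```python
class Fanta:
    """Marker class representing one type of soft drink (hypothetical)."""

class Dispenser:
    """Minimal vending-machine model: tracks can counts per drink type."""
    def __init__(self, stock):
        self._stock = dict(stock)  # drink type -> number of cans

    def quantity_of(self, drink):
        return self._stock.get(drink, 0)

    def dispense(self, drink):
        if self._stock.get(drink, 0) <= 0:
            raise ValueError("out of stock")
        self._stock[drink] -= 1

def test_dispense_reduces_stock_by_one():
    # Arrange: a dispenser with a known quantity of Fanta
    dispenser = Dispenser({Fanta: 5})
    # Act: dispense one can
    dispenser.dispense(Fanta)
    # Assert: the quantity left is the initial quantity less 1
    assert dispenser.quantity_of(Fanta) == 4

test_dispense_reduces_stock_by_one()
```

Note that the test asserts exactly one thing: the effect of a single Dispense call on the stock count.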

You've probably already asked yourself, "what kind of requirement is this?" It's certainly not the kind of requirement you'd ever get from a customer. Instead, the customer is likely to define the requirement something like this: when a can of soft drink is to be dispensed, ensure that the customer has inserted sufficient coins, then dispense the correct can of drink according to the button pressed. Ensure that the stock levels are updated and that the customer is issued the right amount of change, if applicable.

In the middle of _that_ requirement you'll notice a reference to stock levels. This corresponds nicely to the test we defined in the previous paragraph. As you can see, a customer's requirement (a functional requirement) typically breaks down into several low-level requirements. This is a good thing, because in order to test efficiently you need to test one condition at a time.

But I am getting ahead of myself. Let's go back to requirements and how tests represent them. If you stop and think for a while about the client's requirement for dispensing a can of soft drink, you'll see that several software components are needed to build the software that controls the client's machine. We need software to deal with cash (counting coins and dispensing change), software that controls the machine's input buttons, software that keeps track of stock, software to control the dispensing mechanisms, and so on. When you start thinking about how these components will work, and how they will work together, your design process will quickly move down to a much lower level of requirements and you'll see the smaller elements - the software units - that need to exist in order for the machine to work as the customer specifies. All these smaller units have their own very specific requirements, which are pieces of, and together form the whole of, the customer's high-level requirement.
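As a rough illustration (hypothetical names and interfaces sketched in Python, not taken from any real design), two such small units and the higher-level component that composes them might look like:

```python
class CoinHandler:
    """Counts inserted coins and computes change (hypothetical unit)."""
    def __init__(self):
        self.inserted = 0  # cents inserted so far

    def insert(self, cents):
        self.inserted += cents

    def change_due(self, price):
        return self.inserted - price

class StockTracker:
    """Keeps track of how many cans of each drink remain (hypothetical unit)."""
    def __init__(self, stock):
        self._stock = dict(stock)  # drink -> number of cans

    def quantity_of(self, drink):
        return self._stock.get(drink, 0)

    def remove_one(self, drink):
        self._stock[drink] -= 1

class VendingMachine:
    """Composes the units to fulfil the customer's high-level requirement."""
    def __init__(self, coins, stock, prices):
        self.coins = coins
        self.stock = stock
        self.prices = prices  # drink -> price in cents

    def dispense(self, drink):
        price = self.prices[drink]
        if self.coins.inserted < price:
            raise ValueError("insufficient coins")
        if self.stock.quantity_of(drink) == 0:
            raise ValueError("out of stock")
        self.stock.remove_one(drink)
        return self.coins.change_due(price)  # change owed to the customer
```

Each of CoinHandler and StockTracker has its own narrow requirement and can be unit tested in isolation; VendingMachine only coordinates them.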

Describing unit tests and requirements in this way may make it sound easy to deduce the higher-level requirements of a piece of software from its low-level unit tests. That is usually not the case. At the low level, each individual unit (method) is tested to ensure that it behaves as prescribed. This is very useful as it clarifies the intent behind the written code; however, as each test is separate from the others, there is no apparent way to string a set of unit tests together to form a more human-readable requirement at a higher level. _Integration tests_ go some way towards bridging this gap, but they will be discussed in a later post as they are likely only to introduce confusion here.

So far we have established that TDD is about testing each unit of code to ensure that it works: that it behaves in the manner you, the developer, intended. But TDD is a methodology that does much more than create unit tests; it is certainly possible to unit test code without the TDD approach.

What sets TDD apart from other unit testing methodologies is its focus on testing before and during development. You may find it strange that testing takes place before development, yet this is a crucial point: TDD stipulates that you do not write any code unless you have a test for that code. For example, if you need to write a method that adds two integers, you must first write a test that verifies its behavior. Your test might pass the integers 2 and 5 to the method and expect the returned integer to be 7. You write this test before you write your Add(int, int) method - at which point the test does not even compile. Now you write your method and can instantly run your test to verify that it works the way you intended.
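In test-first order the sequence looks like this (a Python sketch rather than C# - in Python the "does not compile" moment shows up as a NameError when the test first runs; the names are illustrative):

```python
# Step 1: write the test first. At this point add() does not exist yet,
# so running the test fails - Python's equivalent of "does not compile".
def test_add():
    assert add(2, 5) == 7

# Step 2: write just enough code to make the test pass.
def add(a, b):
    return a + b

# Step 3: run the test immediately to verify the intended behavior.
test_add()
```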

Though this is an overly simplified example, it still highlights that the tests you write up front drive the actual code you develop. Writing software in this manner forces you to think very carefully about what you are doing and what you are trying to achieve. If you do this diligently (I will post about the how of TDD later) you will find that you not only produce code that is very accurate (it does exactly what you intend) - you also produce code that is highly testable (duh!). Though I will also post about the why of TDD later, it should be evident already that code that is easily testable has high value.

That kind of wraps it up for the what of TDD. Next time I'll write about the why.
