A Dozen Ways to Get the Testing Bug in the New Year

January 22, 2004

Contents
1. Let Your Computer Do the Boring Stuff
2. Stop Debugger Testing
3. Assert Your Expectations
4. Think of It as Design
5. Build Safety Nets
6. Learn by Checked Example
7. Corner Bugs
8. Expand Your Toolbox
9. Make It Part of Your Build Process
10. Buddy Up
11. Travel With a Guide
12. Practice, Practice, Practice
Summary
Resources

Test-driven development received a lot of attention in 2003, and the interest will grow in 2004. For good reason: everyone agrees testing is important, but now many respected programmers are claiming that by writing tests first, they see better designs emerge. These same programmers quickly point out that test-driven development makes them feel more productive and less stressed. At the end of a shorter programming day they've built a suite of passing tests and code with better designs. Sound too good to be true? Well, there's nothing to lose in giving it a whirl. In fact, there's much to be gained.

This article gives you 12 practical ways to start writing tests, and keep
writing tests, regardless of your development process. The first two
techniques play off of things you're probably already doing, so you don't have to move too far out of your comfort zone. The next two challenge you to wade deeper into the pool to realize the benefits of test-driven development. The remaining techniques round out the regimen to keep you testing effectively all year. You'll be well on your way to fulfilling your new year's resolutions. Caution: contents have been known to be infectious!

1. Let Your Computer Do the Boring Stuff

The easiest way to start writing tests is to identify situations where you visually inspect results, then replace that human checking process with automated checking. Color me lazy, but I want to put this thing called a computer to work for me. It's much more reliable than I am at keeping my expectations in check. Some results are difficult to check without a human in the loop; don't start there. Instead, go after low-hanging fruit to get a few small victories under your belt. I've found the pervasive main() test driver to be easy pickings for automating. I'm not referring to the entry point that bootstraps your application, but rather the main() method that acts like a test driver by printing results to the console.

For example, imagine that you're writing a Java class that performs the functions of a simple spreadsheet. A spreadsheet cell -- indexed by a column and row combination such as "A1" -- can store a number or a formula that may include numbers and cell references. Here's an example main() test driver for the Spreadsheet class:

public static void main(String args[]) {
       
    Spreadsheet sheet = new Spreadsheet();
       
    System.out.println("Cell reference:");
    sheet.put("A1", "5");
    sheet.put("A2", "=A1");
    System.out.println("A2 = " + sheet.get("A2"));
       
    System.out.println("\nCell change propagates:");
    sheet.put("A1", "10");
    System.out.println("A2 = " + sheet.get("A2"));
       
    System.out.println("\nFormula calculation:");
    sheet.put("A1", "5");
    sheet.put("A2", "2");
    sheet.put("B1", "=A1*(A1-A2)+A2/3");
    System.out.println("B1 = " + sheet.get("B1"));
}

You may recognize this testing style or may have even written similar test drivers yourself, if only to give you some confidence that the code produced the results you expected. The main() method was my testing harness of choice for many years, and I still get the urge to use it from time to time because it's easy. But just as I'm about to take the bait, I remember how a main() test driver sucks the life out of me. See, every time I change code that affects the Spreadsheet class, I want to run the test to see if my change broke anything. I'm confident in my changes if I run the test afterward and see the following console output:

Cell reference:
A2 = 5

Cell change propagates:
A2 = 10

Formula calculation:
B1 = 15

This testing approach has at least one problem: it requires that I visually inspect the output every time I run the test. Worse yet, as the number of results output by the test driver increases, my workload also increases. I'll quickly grow weary of doing work best suited for a computer and stop running the test altogether. Inspecting the output also implies that between test runs I have to remember how the expected output should look. Is the correct result of the formula calculation 10 or 15? Hmm, I can't remember. And if I can't remember, there's little hope of sharing the test with other folks.

JUnit is a computer's taskmaster when it comes to checking expectations. If you've never used JUnit, the JUnit FAQ will get you up and running in less time than it takes to type a main() method signature. Using JUnit, a main() test driver requiring human checking can be easily replaced by automated tests that check their own results. Here's the equivalent Spreadsheet test, expressed in a JUnit test:

import junit.framework.TestCase;

public class SpreadsheetTest extends TestCase {
   
    public void testCellReference() {
        Spreadsheet sheet = new Spreadsheet();
        sheet.put("A1", "5");
        sheet.put("A2", "=A1");
        assertEquals("5", sheet.get("A2"));
    }

    public void testCellChangePropagates() {
        Spreadsheet sheet = new Spreadsheet();
        sheet.put("A1", "5");
        sheet.put("A2", "=A1");
        sheet.put("A1", "10");
        assertEquals("10", sheet.get("A2"));
    }

    public void testFormulaCalculation() {
        Spreadsheet sheet = new Spreadsheet();
        sheet.put("A1", "5");
        sheet.put("A2", "2");
        sheet.put("B1", "=A1*(A1-A2)+A2/3");
        assertEquals("15", sheet.get("B1"));
    }
}

Notice that the result checking is now codified in the use of assertEquals() methods that automatically check whether the expected value (the first parameter) matches the actual value (the second parameter). There's no need for you to remember what the correct results should be.

JUnit is distributed with two test runners -- textual and graphical -- that both produce simple and unambiguous output. Using the textual runner, an "OK" on the console signifies that all of your expectations were met:

> java junit.textui.TestRunner SpreadsheetTest

...
Time: 0.04

OK (3 tests)

Using the graphical runner (junit.swingui.TestRunner), you're looking for a comforting green bar. Most Java IDEs have an integrated graphical runner just waiting to stroke your ego, such as this runner in Eclipse:

[Figure 1. Green is good: the green bar in Eclipse's integrated JUnit runner]

If your expectations aren't met, JUnit is quick to let you know. Depending on the test runner used, if a test fails, you'll either see an eye-popping failure message on the console or a flaming red bar, along with details of the failed test.

Automation isn't necessarily testing, you say? I couldn't agree more. So now that you have an automated test harness that takes the pain out of manual testing, feel free to write more comprehensive tests. The book Pragmatic Unit Testing will help you strengthen your testing skills and write better tests. Once written, the tests cost you nothing to keep running. Indeed, automated tests increase in value over time: they serve as ongoing regression tests, which frees up more of your time to write good new tests.

2. Stop Debugger Testing

There was a time when I followed the conventional wisdom of running my code through a debugger to check that the code worked as I expected. That approach worked well, if only I never had to change the code again. But then when the code needed to be changed, I resisted the change because I dreaded firing up the debugger and walking through the code. In other words, using the debugger as a regression testing tool had the same drawbacks as using the main() method as a test driver. It just didn't scale. Consequently, I tended not to touch working code for fear of unknowingly breaking something. The result was code rot.

If you're like me, you use a debugger to validate mental assertions as follows:

  1. Set a breakpoint right before a section of code you have questions about.
  2. Set watches on interesting variables within that code section.
  3. Run the program up to the breakpoint.
  4. Single-step through each line, examining the variables along the way.
  5. Optionally, manipulate variables on the fly to force certain code paths to be exercised.

The entire time I'm doing this, in my head I have expectations about the values of variables and the results of method calls. The debugger merely gives me answers that I then match against my expectations. That is, when I use a debugger, I'm really settling for human checking that's prone to error and boredom.

Look for opportunities to replace the human checking you do via debugger with an automated test that checks its own results. It's not always easy to initialize the environment required by a section of code you'd like to test. After all, it's running the program up to the breakpoint that builds that context around the code, though usually at a significant start-up cost. But if the code is likely to undergo change that might break it, then it's worth finding a way to write an automated test. In doing so, you might just discover that the code being tested can be decoupled from some of its dependencies.
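For example, suppose you keep stepping through a discountedPrice() method just to watch its return value. (PriceCalculator and its behavior are invented here for illustration; substitute whatever code you find yourself repeatedly debugging.) The same mental assertion can be codified once as a JUnit test and re-run for free:

import junit.framework.TestCase;

public class PriceCalculatorTest extends TestCase {

    public void testVolumeDiscountApplied() {
        PriceCalculator calculator = new PriceCalculator();

        // The expectation you'd otherwise verify by eyeballing a watch
        // window: 10 units at $20.00 with a 10% volume discount is $180.00.
        assertEquals(180.00, calculator.discountedPrice(10, 20.00), 0.001);
    }
}

Instead of a debugging session that evaporates when you close the IDE, you now have a regression test that validates the same expectation every time it runs.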

3. Assert Your Expectations

A lot has been written about test-driven development, summed up in this simple recipe:

  1. Write new code only after an automated test has failed.
  2. Refactor to keep the code clean.

The notion of writing a failing test before writing the code that makes it pass may seem awkward, but think about how you write code. Before you write a method, you generally have some idea what the method should do. If you don't, one might argue you're not yet ready to write the method. Writing the test first is typically easier and quicker than writing the corresponding code. The test then keeps you from wandering down rabbit holes and lets you know when you're done coding. So, before writing that next method, stop to consider what expectations you already have in mind. Assume you've already written the method, then simply write an automated test that asserts that the method works correctly.

Say, for example, you're staring at a blank editor screen, ready to begin writing a ShoppingCart class. The first thing you want it to do is manage a collection of items. If the same item is added twice, each time with a specified quantity, the shopping cart should contain the sum of both quantities. Now turn that mental assertion -- the success criteria for the code you wish you had -- into an automated test. The following is an example JUnit test:

import junit.framework.TestCase;

public class ShoppingCartTest extends TestCase {

    public void testAddItems() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItems("Snowboard", 1);
        cart.addItems("Lift Ticket", 2);
        cart.addItems("Snowboard", 1);
        assertEquals(4, cart.itemCount());
    }
}

Serious question: how much more effort would it take to write that test after you'd already expended brain cycles deciding what the shopping cart should do? Think of test-driven development as a way of structuring and refining that thought process.

Now that you have a test, write just enough code to make it pass. No more, no less. Just let the test guide you rather than speculating about what you might need in the future or worrying about the next test. When the test passes, refactor the code as necessary to keep it clean and as simple as possible. Then re-run the test to make sure refactoring didn't break anything. Repeat by asserting your expectations for what the code should do next. Before long you'll fall into your own test-code-refactor rhythm. Stick with it; it will serve you well.
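To make "just enough code" concrete, here's one possible minimal ShoppingCart -- a sketch, not the only answer -- that passes testAddItems() by mapping item names to quantities:

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class ShoppingCart {

    // Maps an item name to its Integer quantity.
    private Map quantities = new HashMap();

    public void addItems(String itemName, int quantity) {
        int current = 0;
        if (quantities.containsKey(itemName)) {
            current = ((Integer) quantities.get(itemName)).intValue();
        }
        quantities.put(itemName, new Integer(current + quantity));
    }

    public int itemCount() {
        // Sums the quantities of all items: 1 + 2 + 1 = 4 in the test above.
        int count = 0;
        for (Iterator i = quantities.values().iterator(); i.hasNext();) {
            count += ((Integer) i.next()).intValue();
        }
        return count;
    }
}

Resist the urge to add pricing, persistence, or anything else the test doesn't demand; the next failing test will tell you when those are needed.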

4. Think of It as Design

Writing tests first is a design activity because it forces you to think through how the code should work from the outside before diving into an implementation. In good time and with practice, you'll notice test-driven development is more of a design technique than a testing technique. But if you go looking for stunning design insights with your first tests, you'll be disappointed. For now, just listen to what the tests are trying to tell you about your design by paying careful attention to difficulties writing the tests. Tests are just another client of your code, and writing the tests first gives you a client's perspective. If the code is difficult to test, it follows that it will be difficult for a client to use.

Here's an example design scenario: you're writing a shopping cart application. Client code should be able to add named items to a shopping cart and retrieve detailed information for the items currently in the cart. Without worrying about the infrastructure necessary to support the shopping cart, start by writing a JUnit test similar to the following:

public void testGetItem() {

    ShoppingCart cart = new ShoppingCart();
    cart.addItems("ISBN123", 1);
       
    Iterator items = cart.items();
    Product item = (Product)items.next();

    assertEquals("Confessions of an OO Hired Gun", item.getDescription());
    assertEquals(9.95, item.getUnitCost(), 0.0);
    assertEquals(1, item.getQuantity());
}

This test documents how you'd want the ideal ShoppingCart class to look and behave from the outside. The test won't pass; nay, it won't even compile. But how much code do you need to write to make the test pass? Remember, somehow you need to swizzle a named item ("ISBN123") into its corresponding Product instance. Sounds like a good job for a database, eh? Ugh! Setting up a database and writing JDBC code at this point will only delay the feedback loop. A passing test sooner rather than later would do wonders for your confidence. Do you really need a database to make the test pass? No, you just need a data structure that associates keys with Product instances. You could certainly take a small step for now just to get the test to pass by hard-coding the iterator to return the expected product. In a subsequent step, you could encapsulate the mapping layer behind a simple interface:

public interface Catalog {
    public void addProduct(String key, Product p);
    public Product getProduct(String key);
}

Now you can avoid setting up a database by writing an in-memory implementation of the Catalog interface that uses something like a HashMap. The decision to put off writing a persistent catalog implementation isn't triggered by laziness. Rather, by choosing a natural and simple implementation first, the Catalog interface is naturally clean. Indeed, the test helps you separate interface design from implementation design so that implementation details don't creep into the interface. The Catalog interface can now be used to decouple the shopping cart from any particular catalog implementation. Simply construct a ShoppingCart with any implementation of the Catalog interface. Here's the same test refactored to do just that:

public void testGetItem() {

    Catalog catalog = new InMemoryCatalog();
    catalog.addProduct("ISBN123", new Product("Confessions of an OO Hired Gun", 9.95));

    ShoppingCart cart = new ShoppingCart(catalog);
    cart.addItems("ISBN123", 1);
       
    Iterator items = cart.items();
    Product item = (Product)items.next();

    assertEquals("Confessions of an OO Hired Gun", item.getDescription());
    assertEquals(9.95, item.getUnitCost(), 0.0);
    assertEquals(1, item.getQuantity());
}

Getting this test to pass is markedly easier now that a database isn't in the picture. Yes, you'll probably need a real database at some point. And you'll want to test that the shopping cart behaves the same with a real database plugged in. Until then, the in-memory catalog helped you focus on designing the shopping cart before speculating on infrastructure. Writing the test first revealed an insight for a design with low coupling and high cohesion: the ShoppingCart class is decoupled from any particular catalog implementation, and the Catalog interface encapsulates details of how named items are mapped to products.
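For reference, here's one plausible sketch of the InMemoryCatalog used above. It assumes, as the test implies, that Product has a constructor taking a description and a unit cost:

import java.util.HashMap;
import java.util.Map;

public class InMemoryCatalog implements Catalog {

    // Maps a product key such as "ISBN123" to its Product.
    private Map products = new HashMap();

    public void addProduct(String key, Product p) {
        products.put(key, p);
    }

    public Product getProduct(String key) {
        return (Product) products.get(key);
    }
}

A HashMap is all the "database" this design needs for now.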

You don't have to be an OO hired gun to craft good designs. You just have to listen to what the test says you need, and then write the simplest code that will make it pass. Remember to refactor between tests to keep the code clean.

5. Build Safety Nets

You wouldn't sign up to compete in a triathlon as your first goal toward exercising more in the new year; you'll experience the same pain and frustration if you attempt to test legacy code as your first testing exercise. Nothing kills a resolution quicker than going overboard. That being said, unless you're on a new project, legacy code -- code already written, but without tests -- is a fact of life. And without tests, legacy code is a liability. You can't change the code for fear of breaking something, and you usually can't write tests without having to change the code. Rock meets hard place.

When faced with changing legacy code, reduce the risk of breakage by building safety nets. I don't mean you should halt forward progress to write comprehensive tests for the entire legacy code base. That is the road to discouragement and lost opportunity. Instead, be pragmatic by writing focused tests that create a safety net around the code you intend to change. Then change the code and run the tests to ensure that nothing unexpected happened. If you can't write focused tests without first refactoring, use any other safety nets at your disposal to gain confidence, including existing functional tests or a buddy looking over your shoulder.
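One practical form of safety net is sometimes called a characterization test: before changing legacy code, write a test that pins down what the code observably does today, right or wrong. The following is a hypothetical sketch; LegacyFormatter stands in for whatever class you're about to change:

import junit.framework.TestCase;

public class LegacyFormatterTest extends TestCase {

    public void testCurrentInvoiceNumberFormat() {
        LegacyFormatter formatter = new LegacyFormatter();

        // The expected value was captured by running the existing code
        // once, not by reading a spec. If this test fails after your
        // change, you've altered observable behavior.
        assertEquals("INV-00042", formatter.formatInvoiceNumber(42));
    }
}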

Refactoring helps prevent code rot. Safety nets make refactoring safe. If you're writing new code test-first, you're building safety nets along the way. If you're attempting to refactor legacy code, it's dangerous without safety nets. Building them isn't always easy, but it's usually well worth it.

6. Learn by Checked Example

Learning how to use third-party APIs can be frustrating. If you're lucky, the API might include a comprehensive JavaDoc. If you're really lucky, the API might even behave as the JavaDoc claims. Regardless of the documentation (or lack thereof), I learn best by doing. To truly understand how an API works, I need to write code that pokes and prods the API to get feedback about my assumptions. But exploring an API by first attempting to use it in my production code doesn't give me a warm fuzzy feeling. I hear dear Mom reminding me to put my play clothes on first. I'd rather learn in a forgiving environment where I can explore an API with impunity. Checked examples provide a safe context for learning.

A checked example is a test, though perhaps not in the traditional sense of the word. Think of it as a learning test that validates your assumptions about how an API behaves, but doesn't necessarily attempt to uncover errors in the API. For example, say you're writing an application that will use Lucene -- a search engine technology with a Java API. How do you begin writing code that uses Lucene to search for indexed documents? Start by writing a learning test similar to the following that checks its own results and teaches you what you want to learn:

<imports omitted for brevity>

public class LuceneLearningTest extends TestCase {
   
    public void testIndexedSearch() throws Exception {
  
        //
        // Prepare a writer to store documents in an in-memory index.
        //
        Directory indexDirectory = new RAMDirectory();
        IndexWriter writer =
            new IndexWriter(indexDirectory, new StandardAnalyzer(), true);

        //
        // Create a document to be searched and add it to the index.
        //
        Document document = new Document();
        document.add(Field.Text("contents", "Learning tests build confidence!"));
        writer.addDocument(document);
        writer.close();

        //
        // Search for all indexed documents that contain a search term.
        //
        IndexSearcher searcher = new IndexSearcher(indexDirectory);
        Query query = new TermQuery(new Term("contents", "confidence"));
       
        Hits hits = searcher.search(query);
        assertEquals(1, hits.length());
    }
}

The LuceneLearningTest is a standard JUnit test that invokes the Lucene API to index an example document in an in-memory directory (RAMDirectory), then asserts that a search for the word "confidence" in the document's contents yields a hit. With this test under your belt, you can continue to grow the learning test suite one test at a time. For each new thing you need to learn, write a test method and refactor any common test code into the setUp() method. The following refactored version of the LuceneLearningTest includes checked examples for two additional query types:

<imports omitted for brevity>

public class LuceneLearningTest extends TestCase {

    private IndexSearcher searcher;
   
    public void setUp() throws Exception {
        Directory indexDirectory = new RAMDirectory();
        IndexWriter writer =
            new IndexWriter(indexDirectory, new StandardAnalyzer(), true);

        Document document = new Document();
        document.add(Field.Text("contents", "Learning tests build confidence!"));
        writer.addDocument(document);
        writer.close();
       
        searcher = new IndexSearcher(indexDirectory);
    }

    public void testSingleTermQuery() throws Exception {
        Query query = new TermQuery(new Term("contents", "confidence"));
       
        Hits hits = searcher.search(query);
        assertEquals(1, hits.length());
    }
   
    public void testBooleanQuery() throws Exception {
        Query query =
            QueryParser.parse("tests AND confidence", "contents", new StandardAnalyzer());
       
        Hits hits = searcher.search(query);
        assertEquals(1, hits.length());
    }

    public void testWildcardQuery() throws Exception {
        Query query =
            QueryParser.parse("test*", "contents", new StandardAnalyzer());
       
        Hits hits = searcher.search(query);
        assertEquals(1, hits.length());
    }
}

Notice that the indexing step has been refactored into the setUp() method, which is called prior to every test method. In this case, the setUp() method serves four purposes:

  • Removes code duplication from the test methods.
  • Ensures that the test methods don't affect or rely on each other.
  • Helps readers of the test understand the purpose of this particular set of tests: indexing and searching.
  • Serves as a reminder that an application using Lucene will generally index documents less frequently than it searches them.

Writing learning tests in isolation helps you focus on one thing at a time. You first focus on writing a learning test that confirms your understanding of an API. Then you write a test for the production code that relies on the underlying API. When that test passes, you've successfully integrated the API into your application. In other words, build confidence layer by layer. If the behavior of the API ever changes, your learning tests will pinpoint the change with greater accuracy than your integration test.

What happens when a new version of the API is available? Well, your learning tests also serve as an automated regression test suite that you can use to detect changes. Before upgrading to a new version of an API, run your learning tests to ensure that your assumptions about the API are still valid.

One more thing, while you have your play clothes on: you can also use this technique to learn new programming languages. For example, I learned Ruby (and you should too!) by writing a learning test every time I discovered something new in the language. The following is an example learning test that documents and validates two features of Ruby arrays:

require 'test/unit'

class RubyArrayTest < Test::Unit::TestCase

  def testPushPopShift
    a = Array.new
    a.push("A")
    a.push("B")
    a.push("C")
    assert_equal(["A", "B", "C"], a)
    assert_equal("A", a.shift)
    assert_equal("C", a.pop)
    assert_equal("B", a.pop)
    assert_equal(nil, a.pop)
  end
 
  def testCollect
    a = ["H", "A", "L"]
    collected = a.collect { |element| element.succ }
    assert_equal(["I", "B", "M"], collected)
  end

end

Any time I need to remember how to use an API or Ruby language feature, I refer back to my suite of learning tests. They document working examples that I can continually run and modify until I'm confident enough to move forward. And if I can't find what I'm looking for, I expand my knowledge base by writing a new test.

7. Corner Bugs

Even when we've written solid tests, once in a while someone using our code (a paying customer, a persnickety cubemate) discovers a bug. Don't let that stop you from continuing to write tests that catch the majority of bugs. Instead, use it as an opportunity to improve your testing skills. In the meantime, a bug has been reported and it needs to be fixed. Thankfully, you're able to quickly identify the suspect lines of code because you happen to have vast knowledge of the code base. So you fire up your favorite editor with fingers poised on the keyboard, ready to make the necessary repairs. But before you do that, don't let a golden opportunity to forever corner that bug pass you by.

How will you know when your code changes have squashed the bug? After all, if you're moments away from making a change, then you must have expectations about how the code will work after you've made the change. Writing code is a means to an end. Now is the time to turn your expectations into an automated test that will signify the end. The bug has been fixed when the test passes. Moreover, once the test passes, you have an automated way to keep the bug cornered for life.
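Suppose, for illustration, the reported bug is that adding an item with a quantity of zero inflates the cart's item count (a hypothetical defect in the ShoppingCart class from earlier). Corner it with a test before touching the code:

import junit.framework.TestCase;

public class ShoppingCartBugTest extends TestCase {

    // Written before the fix, this test fails, reproducing the reported
    // defect. When it passes, the bug is fixed -- and cornered for life.
    public void testAddingZeroQuantityLeavesCountUnchanged() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItems("Snowboard", 1);
        cart.addItems("Wax", 0);  // the reported failure case
        assertEquals(1, cart.itemCount());
    }
}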

8. Expand Your Toolbox

Often, we'd like to test something, but we just don't have the right tool for the job. We're short on time as it is, and spending precious time crafting a test harness is yet another reason not to test. Thanks to the work of others, there's no excuse for skimping on testing for lack of sufficient tools. The open source world is currently teeming with handy testing tools. It pays to be creatively lazy by looking around before reinventing yet another test harness.

Say, for example, you're writing a servlet that provides a shopping cart service. The intent of the servlet is to add the item and quantity specified in the request parameters to the shopping cart. You'd like to test that the servlet works, but the method you want to test requires an HttpServletRequest instance. You can't create one of those very easily. And if you have to crank up a J2EE server to put the servlet in a known state every time you want to run the test, you won't run the test very often. It's time to expand your toolbox to include the Mock Objects framework. The following JUnit test uses the Mock Objects framework to test the servlet outside of a J2EE server:

import junit.framework.TestCase;
import com.mockobjects.servlet.*;

public class ShoppingServletTest extends TestCase {

    public void testAddRequestedItem() throws Exception {

        ShoppingServlet servlet = new ShoppingServlet();
        MockHttpServletRequest request = new MockHttpServletRequest();
        request.setupAddParameter("item", "Snowboard");
        request.setupAddParameter("quantity", "1");

        ShoppingCart cart = new ShoppingCart();
        servlet.addRequestedItem(request, cart);

        assertEquals(1, cart.itemCount());
    }
}

Notice that the test presets the request parameters on a MockHttpServletRequest instance. That instance is then passed in to the servlet's addRequestedItem() method. When you run the test, your servlet is fooled into thinking that it's running in a servlet container. Later on, your integration tests will cast a wider net by validating that the servlet works in its native environment. But when you're writing the servlet code, using mock objects makes running the tests quick and painless.
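For context, here's one plausible way the method under test might look -- a sketch, assuming request parameters named "item" and "quantity" as the test sets up:

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;

public class ShoppingServlet extends HttpServlet {

    // Reads the "item" and "quantity" request parameters and adds
    // them to the given cart. Taking the cart as a parameter keeps
    // the method decoupled from session management and easy to test.
    public void addRequestedItem(HttpServletRequest request, ShoppingCart cart) {
        String item = request.getParameter("item");
        int quantity = Integer.parseInt(request.getParameter("quantity"));
        cart.addItems(item, quantity);
    }
}

Because the method works against the HttpServletRequest interface, it happily accepts the mock implementation in the test and the container's real request in production.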

So, before attempting to write a test harness from scratch or giving up on testing altogether, survey the tools others have crafted in their times of need. JUnit is a framework, not an application. By all means, if the standard JUnit assertion methods aren't enough, then write custom assertion methods. It's also relatively easy to write applications that build upon JUnit. JUnit.org maintains a list of existing JUnit applications and extensions. Don't stop with JUnit and its Java ilk. Many xUnit testing framework implementations for other languages and technologies are already there for the taking (visit XProgramming.com). If you can't seem to find what you're looking for, let Google be your guide. And if you do end up building a test harness, please share it so that others can expand their toolbox.
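For example, if your tests repeatedly compare monetary amounts, you might factor that comparison into a custom assertion. Here's a minimal sketch (MoneyTestCase and the half-cent tolerance are illustrative choices, not part of JUnit):

import junit.framework.TestCase;

public abstract class MoneyTestCase extends TestCase {

    // Compares monetary amounts to the nearest cent, so individual
    // tests can say what they mean instead of repeating a tolerance.
    public static void assertMoneyEquals(double expected, double actual) {
        assertEquals(expected, actual, 0.005);
    }
}

A test that extends MoneyTestCase can then write assertMoneyEquals(9.95, item.getUnitCost()) and let the assertion's name carry the intent.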

9. Make It Part of Your Build Process

A test is a valuable radiator of information. It documents -- in an executable format -- how code works. You don't have to trust that the documentation is correct; just run the test for yourself. If it fails, the output tells you straight up that the code doesn't work as the test promises. So once you've written a passing test, treat it with the respect it deserves by checking it in to your version control system. Then capitalize on the investment by running the test as part of your team's build process.

While you're grooving in the test-code rhythm, it's convenient to use the JUnit test runner integrated into your favorite IDE. But you also need to externalize the build process so that anybody on your team, regardless of their IDE loyalties, can build and test the code on their machine. In the Java world, Ant is the king of the hill when it comes to making your build and test process portable. The following snippet of an Ant build.xml file uses the built-in <junit> and <batchtest> tasks to run all JUnit tests conforming to the *Test naming convention:

<path id="build.classpath">
  <pathelement location="${classes.dir}" />
  <pathelement location="${lib.dir}/junit.jar" />
</path>

<target name="test" depends="compile" description="Runs all the *Test tests">

  <junit haltonfailure="true" printsummary="true">
    <batchtest>
      <fileset dir="${classes.dir}" includes="**/*Test.class" />
    </batchtest>
    <formatter type="brief" usefile="false" />
    <classpath refid="build.classpath" />
  </junit>

</target>

First notice the use of the <path> element to explicitly declare a classpath for the build rather than relying on the CLASSPATH environment variable being set correctly. This makes the classpath portable across machines. Second, notice that the test target is dependent on the compile target. So, to compile and test the code in one fell swoop, anybody on your team can check out the project from the version control system and type:

ant test

Finally, notice that the <junit> task is configured with haltonfailure="true". This means that the build process will fail if any test fails. After all, the build contains tainted goods if all the tests don't pass.

Why stop there? Now that you have an Ant target that compiles and tests the project, schedule the test target to be automatically run by a computer at a periodic interval. For example, using CruiseControl or Anthill (both free) you can put an idle machine to good use running any Ant target as often as you'd like. Using a separate build-and-test machine implies that everything needed to build and test your project is under version control. You are using version control, aren't you? You'll be surprised how often a separate machine flushes out build problems. And if the build fails, those schedulers will even send you an email so that you can take appropriate action to get back on solid ground.

So, no matter how many tests you have, realize their value to your team early and often by making testing part of your process. Add each passing test you write to your version control system and run all the tests continuously to radiate confidence.

10. Buddy Up

When learning anything new, I've found it helpful to buddy up with another newbie. Besides being a lot more fun than trudging up the learning curve alone, working together lets you and your buddy cover more ground. You can also keep each other accountable to the goals you share and challenge each other to become better. As you practice the techniques described in this article, openly discuss your triumphs and struggles with your buddy. Critique each other's tests and share design insights gained from code driven by tests. And when you feel pressure to slip back into old coding habits, a good buddy will bring you back from the brink.

So how do you find a buddy? It's been my experience that many folks secretly want to try test-driven development, but they don't want to be the only person on the team doing it. So start by expressing your desire to learn and practice test-driven development. By making this proclamation, you'll invite social support that can be a powerful motivator to help you follow through. Moreover, once you step into the spotlight you'll likely draw others out of the shadows.

11. Travel With a Guide

Sometimes buddying up just isn't enough. If you and your buddy are learning at the same time, you may both stumble into the same pitfalls. Traveling with an experienced guide will help you avoid getting bogged down. Don't feel that seeking outside help is a way of copping out. You'll be more productive if you don't have to blaze your own trails.

Consider arranging for training in unit testing or test-driven development to quickly put these techniques into practice. For this kind of training to be truly effective, it needs to be customized for you. For example, students I've taught have found short and focused sessions -- tailored and applied to the software they're building and the technologies they're using -- to be most beneficial. So look for training that covers the basic trails, but then lets you choose advanced paths of interest.

As you continue to practice test-driven development, you'll undoubtedly hit a few snags. Don't spend too much time fighting through them. A few minutes of one-on-one discussion with a mentor who's been there and done that will keep you on pace.

12. Practice, Practice, Practice

Writing tests first is a programming technique that takes practice, and lots of it. Accept the fact that you won't see miraculous results overnight. Experts say it takes a minimum of 21 days to build a positive habit and six months for it to become part of your personality. So when you feel yourself backsliding, don't despair. Just keep pressing on and pay careful attention to mental assertions you're making that could be codified in tests. Your brain will love you for it!

As with anything new, the more you practice, the better you get. Start simple by promising yourself to write just one good automated test a day; anything beyond that is bonus points. Tomorrow morning, you'll have at least one passing test. In a week, you'll have at least five. Run all your tests every time you change code, even if you don't think your change could possibly break anything. This will get you in the habit of running the tests often and build your confidence in the tests. Before long, you'll have a suite full of tests and you won't be able to confidently touch the code without running the suite afterward. Green bars are your reward for progress.

Summary

Getting started writing tests doesn't have to be difficult or time-consuming. Just wade in gradually by spotting practical opportunities to let your computer automatically check what you're already checking manually. Before writing new code, assert your expectations about what it should do. Along the way, listen to the tests for design insights. Make testing an integral part of your development process by writing tests first and making it easy for anyone on your team to run all the tests at any time. Finally, don't go it alone.

I hope these techniques help you get the testing bug this year. It's a resolution that's sure to improve your design and testing skills. Don't forget to relax by reminding yourself that every day you're just getting started. You'll quickly find that indeed you do have time to test, and then some.

Resources

  • JUnit Download Page

  • JUnit FAQ
    Answers to the world's most frequently asked JUnit questions.

  • Pragmatic Unit Testing
    Andy Hunt and Dave Thomas, The Pragmatic Programmers, LLC, 2003.
    A must-have, easy-to-read book that will help you quickly start writing good JUnit tests.

  • "Lucene Intro"
    by Erik Hatcher
    The first of a two-part series of great Lucene articles. I buddied up with Erik to write the Lucene learning tests.

Mike Clark is an independent consultant with Clarkware Consulting based in Denver, CO. He is co-author of Bitter EJB and editor of the JUnit FAQ.