What’s Your Specification?

19 October 2016

I’m a big proponent of Test Driven Development (TDD.) If you’ve read a few of my blogs or seen a couple of my conference talks, that’s likely not news to you. The benefits I get from TDD go beyond accurate code. In fact, when I talk to people about TDD, I’ve deliberately shifted the focus away from coding accuracy and towards other, less tangible benefits.

To start, I’ve rarely talked to someone who thinks they write bad or inaccurate code. Developers recognize that they have written bad or inaccurate code in the past (even if the past is yesterday), but we tend to think that today will be different. So trying to convince someone who doesn’t think they write bad code to use a technique that helps them write good code is usually fruitless. I liken it to drunk driving campaigns. I know of nobody who thinks drunk driving is good. The impression I get, though, is that the general population doesn’t know when it is drunk driving. That’s why there’s been the rise of “buzzed driving is drunk driving.”

Because I avoid accuracy as the #1 way to convince people to do TDD, I tend to focus on things like “better flow” and “knowing where you left off when you were interrupted”: all problems that developers can identify with. I’ve yet to meet a developer who worked 8 hours a day without interruption, for example.

What is TDD?

Note: if you already know the answer to this question, feel free to skip to Tests as Specifications

TDD is a way of writing software where you write a failing test and then write the code to make that test pass. It’s a bit more detailed than that: you don’t just write any test, you write the simplest test at that point. Then you write the simplest code to make that test (and all the others in your suite) pass. After each passing test, you look for opportunities to refactor. If there are any, you perform them. If not, you write your next failing test.

The order here is key. You write a failing test before you write any other code.
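As a quick sketch of that loop, here is what the first few steps might look like. The `cart_total` function and its tests are hypothetical, invented purely for illustration; they are not from this post.

```python
# A minimal red-green-refactor sketch; cart_total is a hypothetical
# function invented for illustration.

# Red: the simplest failing test at this point.
def test_empty_cart_total_is_zero():
    assert cart_total([]) == 0

# Green: the simplest code that makes every test in the suite pass.
# (With only the test above, `return 0` would do; the next failing
# test, below, is what forces the real implementation.)
def cart_total(prices):
    return sum(prices)

# Red again: the next simplest failing test drives out more behavior.
def test_total_sums_item_prices():
    assert cart_total([5, 10]) == 15
```

Each pass through the loop is small: one failing test, just enough code to go green, then a look for refactoring before the next test.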

Tests as Specifications

One of the benefits of TDD is that you end up with a suite of tests that specifies what the code should be doing. If you are fortunate enough to join a project with a large test suite, the first thing you should do is run the suite and read the tests to get an idea of what the application should be doing.

If you write your tests first, then you are creating a living specification. For example, you might have a test that checks that when a user enters their password wrong three times in a row, they are locked out. As you read that test, you’ll see that the system blocks users after 3 failed attempts. It’s specifying what should happen. So you know that if the user is blocked after 2 attempts, that’s a failure of the system. Or if the user is allowed to enter their password wrong 4 times and then log in on the 5th attempt, that too is a system failure.

In this sense, tests can be easily translated from requirements or user stories. This is helpful because we often don’t have a requirements document, or at least not the kind we used to have, with every nook & cranny of the application detailed before we write a line of code (and it’s good that we don’t have those anymore.) More and more, software comes out of conversations and collaboration. If we’re fortunate, that gets added to the conditions of acceptance for a particular story. At any rate, a year down the road, if we want to know why a feature does something specific, we don’t have a single document to go back to. But we can have a suite of tests. If those tests were written in a TDD fashion, we’ll likely see a progression. For our example above, we might see the following tests within a few lines of each other:

  • A user should be able to log in with an email and password
  • If a user enters their password wrong, they should be notified
  • If a user enters their password wrong 3 times, they should be locked out of the system
  • If a user enters their password wrong within 30 minutes of being locked, the lock-out time should start over
  • If a user enters their correct password more than 30 minutes after being locked out, they should be logged in
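As a sketch, that progression could be expressed as a small test suite like the one below. The `Account` class, its API, and the injected clock are all my inventions for illustration; they are not code from any real system.

```python
# Hypothetical sketch of the test progression above; the Account
# class and the injected clock are invented for illustration.

MAX_ATTEMPTS = 3
LOCKOUT_MINUTES = 30

class Account:
    def __init__(self, email, password, clock):
        self.email = email
        self._password = password
        self._clock = clock        # callable returning elapsed minutes
        self._failed = 0
        self._locked_at = None

    def log_in(self, password):
        """Return True on success; False doubles as the 'notify' signal."""
        now = self._clock()
        if self._locked_at is not None:
            if now - self._locked_at < LOCKOUT_MINUTES:
                if password != self._password:
                    self._locked_at = now  # wrong attempt restarts the lock-out
                return False
            self._locked_at = None         # window expired: clear the lock
            self._failed = 0
        if password != self._password:
            self._failed += 1
            if self._failed >= MAX_ATTEMPTS:
                self._locked_at = now
            return False
        self._failed = 0
        return True

# The specification, roughly one test per bullet above:

def test_correct_password_logs_in():
    account = Account("user@example.com", "secret", lambda: 0)
    assert account.log_in("secret")

def test_wrong_password_is_rejected():
    account = Account("user@example.com", "secret", lambda: 0)
    assert not account.log_in("wrong")

def test_three_wrong_attempts_lock_the_account():
    account = Account("user@example.com", "secret", lambda: 0)
    for _ in range(3):
        account.log_in("wrong")
    assert not account.log_in("secret")    # locked, even with correct password

def test_wrong_attempt_within_window_restarts_lockout():
    minutes = [0]
    account = Account("user@example.com", "secret", lambda: minutes[0])
    for _ in range(3):
        account.log_in("wrong")
    minutes[0] = 20
    account.log_in("wrong")                # restarts the 30-minute window
    minutes[0] = 45                        # only 25 minutes since the restart
    assert not account.log_in("secret")

def test_correct_password_after_window_logs_in():
    minutes = [0]
    account = Account("user@example.com", "secret", lambda: minutes[0])
    for _ in range(3):
        account.log_in("wrong")
    minutes[0] = 31                        # more than 30 minutes later
    assert account.log_in("secret")
```

Injecting the clock keeps the 30-minute rules testable without real waiting; that kind of design choice is exactly what tests-as-specifications tend to drive out.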

These tests start to tell us more about the system than we might see if we just went to the login page and tried to log in.

By writing our tests in this fashion, we’re providing a specification for the code we will write. We know our code meets the specification when the tests pass.

Code as Specifications

The other option besides tests as specification is your code being your specification. This happens when you write your tests after your code. For the majority of developers who write unit tests, this is likely the flow. A feature will be written, and the developer will test it out via the UI or some other interface. They’ll make tweaks, reload the application, and test again. At some point they’ll determine they’ve reached feature-complete status for the story. At that point, you’ll likely hear “I’m done with the feature, I just have to write some tests.” The next step for the developer is to look at the feature and determine what things need to be tested. Since they’ve already spent a considerable amount of time testing the feature, they’re likely to write fewer tests than if they’d done TDD. (Side note: did you catch that? They spent time during their development testing their feature. Even people who don’t write unit tests still spend time testing their application.)

For their login page, they might only have a couple of tests:

  • If a locked out user enters a correct password more than 30 minutes after being locked out, they should be logged in
  • A user should be locked out after 3 failed attempts

At this point, you could look at the tests and get the gist of what the application is doing, but it likely won’t be an entire picture of the system. This is because whereas TDD allows your tests to act as specifications for your code, writing tests afterwards means your code becomes the specification for your tests.

I think this is an important point and one that causes a lot of frustration with unit testing. So I’m going to say it again:

If you write your tests after writing your feature, your feature becomes the specification for your tests.

Why does that matter? What’s the goal of all your development? It’s to produce features that are valuable for users, correct? Even if you’re a QA engineer, the goal of development is not to produce tests. The goal of development is to produce features that your users will benefit from.

If that is true, then why spend time writing tests whose specification is your feature? Your users aren’t going to use the tests. Your users aren’t even going to see the tests. Plus, it’s not like you don’t already know it works; you wrote the code and tested it in the application. This is where the sentiment that unit tests are a waste of time comes from: writing them becomes an academic exercise. You’ve written the feature, you’ve gone into the application and tested it, and you’ve tweaked your implementation until it worked in the application. At this point, you’re only writing tests because your team wants test coverage. If you didn’t have to write them, you’d be done by now.

TL;DR

There are two main ways to write unit tests. Either you can write a failing test, and write code to make it pass (test first.) Or, you can write your feature first, and then write tests for that feature (test last.) While both attempt to accomplish the same goal, they are approached with two different mindsets.

In the end you have to decide if you want your tests describing how your feature behaves, or your feature describing how your tests behave. The answer to that will tell you if you should be doing TDD or not.
