TDD & Scientific Journals

27. April 2014

I’m in the midst of writing one of my presentations for KCDC and I was thinking about TDD. I was also doing some reading about a case of a scientist falsifying data. The falsified data was published in a journal and was heralded as one of the “top 3 discoveries in stem cell research.” The problem was, nobody else was able to reproduce the results. Recently a governing body concluded that the lead scientist did falsify data to make her findings look better.

As I read this, I thought about TDD and testing, and I came up with this analogy. Doing TDD is like independent research to verify a published article or finding. A paper gets published with “This is how I conducted the experiment, these are the results.” To truly test that, someone would have to repeat the experiment. That’s kind of like testing first. You say “Here’s what I want to happen when X.” And so you write a test, it fails, you write the code, and it passes.
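The red-green cycle above can be sketched in a few lines. This is a hypothetical example of my own (the `apply_discount` function and its rule about negative prices are invented for illustration, not anything from the post), just to show the ordering: the expectation exists before the code does.

```python
# Step 1: write the test first. At this point it fails ("red"),
# because apply_discount doesn't exist yet -- the test states
# what we WANT to happen, not what the code happens to do.
def test_apply_discount():
    # A 25% discount on 100.0 should yield 75.0.
    assert apply_discount(100.0, 0.25) == 75.0
    # Negative prices should be rejected -- a decision made up front.
    try:
        apply_discount(-10.0, 0.25)
        assert False, "expected ValueError for a negative price"
    except ValueError:
        pass

# Step 2: write just enough code to make the failing test pass ("green").
def apply_discount(price, rate):
    if price < 0:
        raise ValueError("price must be non-negative")
    return price * (1 - rate)

test_apply_discount()  # now passes
```

The key point is that the `ValueError` behavior was specified before any implementation existed, the same way an experiment's expected outcome is stated before it is run.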

Writing tests after the code has been written is slightly different. Instead of doing independent research to verify the results of the journal, testing afterwards is like verifying that the journal article accurately reflects what the scientist did. It’s still important to verify that, because we know that sometimes things get miscommunicated in writing. But it’s not the most important test for a scientific paper. Testing after is like this as well. You look at the code you’re testing, and test what it says it’s going to do. Even if that test passes, it still might not mean a whole lot, because you don’t know if it’s *supposed* to throw an exception when the inputs aren’t correct; you only know that it *does* throw an exception.
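Here's a hypothetical sketch of that trap (again using an invented `apply_discount` function, not code from the post). The implementation silently clamps a negative price to zero; a test written afterwards, by reading the code, can only document that behavior, not tell us whether it was intended.

```python
# Existing code, written first. Is clamping to zero intended behavior,
# or should a negative price have raised an exception? The code alone
# can't tell us.
def apply_discount(price, rate):
    if price < 0:
        price = 0  # silent clamp -- intent unknown
    return price * (1 - rate)

# Test-after: we read the implementation and assert what it DOES.
# This passes, but it records behavior, not requirements.
def test_apply_discount():
    assert apply_discount(-10.0, 0.25) == 0.0

test_apply_discount()  # passes, yet proves nothing about intent
```

If the requirement had been written down first, as in TDD, this test would either have demanded the exception or the clamp explicitly, and a mismatch would have failed.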

While I think this is one distinction between testing before and testing after, I don’t think it’s the only one, and it might not even be the most important one. TDD is about more than testing; it’s also about better design and higher-quality code.