What a mechanical pencil taught me about software management

17 November 2011

As I sat on the floor of my grandma’s apartment, my mom started talking to me about my penmanship. I was in 3rd grade and had gotten reasonably good marks on my most recent report card. The one exception was penmanship; it was my lone C. Looking back on it now, I wonder how much of a role genetics played. My paternal grandma (whose house we were at), my dad, and my sister all have less-than-average handwriting. My kids also don’t have the prettiest letters.

As we talked, my mom said, “I want to see this score improve.” I pulled out the mechanical pencil I’d just gotten at the local Ben Franklin and said, “Now that I have this, I’m sure I’ll get a better grade.” My mom didn’t really protest, not wanting an argument, and basically said, “We’ll see.”

Of course, you can imagine what happened. I took my mechanical pencil to class and tried to write nicely for the first day or so. But before Friday, I was back to my normal sloppy handwriting.

That mechanical pencil represents so much of what we do as a culture today. People buy new shoes, tops, and pants to go to the gym, reasoning that the only thing keeping them on the couch instead of the treadmill is the lack of good workout clothes. But the shoes only see the treadmill a handful of times before they’re back in the closet for another six months.

I’ve seen (and been a part of) this same phenomenon in software development as well. Managers want reports on how many items were closed, how many new bugs were opened, what the project’s test coverage is, what the team’s velocity is, and so on. The easiest way to produce those reports is through some sort of software package. As a result, a small team is usually formed to investigate the various ALM tools.

In the end, the tool that is picked often contains all sorts of cool bells and whistles. For example, at one job I was part of a team that picked an ALM suite that would let you email the server and have the message turned into a ticket. This same suite provided a detailed testability matrix and integrated well with a QA testing tool. It showed which steps of a test were completed, and also which testing tasks were affected by the current ticket. It was actually pretty neat.

The problem was, we didn’t do testing like that. Our tester was good, but she didn’t really work that way. She knew the product so well that it was almost intuitive to her. And she documented a lot of what she did, just not how each ticket affected the tests. In the end, we paid a fair amount of money for this product and its associated testing product, only for them to sit there unused, simply because that’s not how we did things.

As I look back on the situation now, I often wonder if we wouldn’t have been better off with just an index card or Post-it note system. It would have allowed us to focus on what we were there for: creating our software. Instead, we got caught up in thinking that the tools we bought would change our behaviors. But history has taught us time and time again that that’s simply not the case.