We Have a Speed Problem

June 11, 2016

“How much longer do you have on your feature?” the tech lead asked.

After pausing for a minute to think, Steve answered, “probably about another day or so.”

“Hmmm.  I thought you were going to be done today. Did something unexpected come up, or did we just misestimate it?”

“I think we were close on the amount of time to actually write the code, but I had a problem with NPM that ate up a bunch of time. Once I got that resolved, I think I lost about half a day.”

Steve pulled up the team’s schedule and, after looking it over, asked his PM, “We’re not demoing until next week, right? Isn’t this the first week of the sprint?”

“Yeah, you’re right, demo is a week from Thursday.  I was hoping you could wrap that up today. I need someone to start work on optimizing the calculation queries, so we can get those done for the demo.”

“Oh. I didn’t know those were part of the sprint…I hope I can start on them tomorrow.”

“They weren’t originally, but we’re ahead of schedule, and the customer was hoping we could squeeze this in.”


The above conversation is fictionalized, but it’s probably familiar to a lot of developers. While they work, developers feel pressure from customers, teammates, managers, and project managers to get more and more work done. And when they do get work done, the reward is often more work: new features and functionality added to the deliverable.

How Did We Get Here?

I didn’t always feel this pressure. Maybe it wasn’t there, or perhaps I was too oblivious to see it. Back when I first started developing software professionally, we followed a waterfall approach. We’d release twice a year, usually April & October. As soon as our April release was out the door, we’d start coding the October release. But there was more work than that. In order to release in October, our requirements had to be complete no later than April. So our BAs were gathering and writing requirements a release ahead of where the developers were coding. At the start of each release, the engineers would write specification documents, design documents, and testing documents. Then we’d write our feature. We’d follow that with a code review, and finish the feature by having our internal customers test it. The largest feature I had was estimated at around 600 hours. Most were probably in the 200-250 hour range.

As you can see, development was anything but fast. From the time the customer specified a feature with our BAs until it was in our release, it could be close to a year. In fact, the 600 hour feature I was talking about ended up taking about 18 months from initial discussion until release (and it blossomed into more than one feature).

Agile Answers These Problems

I wasn’t the only one experiencing this. In fact, in the early 2000s, waterfall was pretty common. There was a group of people pushing for something they called the Agile Manifesto, a relatively small group that ultimately ended up changing the software world.

I came into contact with these ideas almost 10 years later. I didn’t know what they were called at the time, but as I worked in the flow, they started to make sense. Instead of taking a year or more to go from concept to production, we could get there in a matter of weeks. This gave our customers an opportunity to change where the project was headed. It solved a big problem at the time: the problem of not releasing software. Too often, teams would work for months (or years), turn the software over to the customer, and hear that it wasn’t right. Then there’d be debate over what was right and when it could release. Often the software would end up on the shelf, all that time and money wasted.

Metrics

At some point, after a lot of people were doing agile development, someone thought, “If we can track how much we get done each sprint, then we can start to project when we’ll be completely done.” That set off various metrics collected during Scrum sprints. Probably one of the first of these was velocity: how many “story points” the team completed in one sprint. So, if they did 50 story points this sprint, you could guess that they’d do 50 points next sprint, and you could tell your customer what features to expect in the next demo, and the demo after that, and ultimately when you’d finally be “done.”
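The arithmetic behind that projection is simple enough to sketch. The numbers below are made up purely for illustration (there’s no standard backlog size or velocity):

```python
# Hypothetical example: projecting "done" from velocity.
# All numbers here are invented for illustration.
import math

completed_per_sprint = [48, 52, 50]   # story points finished in recent sprints
remaining_points = 400                # story points left in the backlog

# Velocity is just the average points completed per sprint.
velocity = sum(completed_per_sprint) / len(completed_per_sprint)

# Projected sprints remaining, rounding up to whole sprints.
sprints_left = math.ceil(remaining_points / velocity)

print(f"velocity = {velocity:.0f} points/sprint, "
      f"projected {sprints_left} sprints to done")
```

The projection assumes the team keeps doing exactly what it did before, which is precisely the assumption the rest of this post questions.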

Acceleration

After we had velocity for a while, something happened: we started seeing if we could accelerate that velocity. On any new team or project, there is likely to be some baked-in acceleration anyway. After all, if your team is new to the domain, they make progress faster as they understand it better. Or, if it’s the first time these developers have worked together, they’ll move faster as they become familiar with each other.

So it’s natural to see the velocity increase in the first few sprints of a new project. At some point, though, you expect to see it level off. And when it levels off, some people get anxious, because they see a steady velocity as an indicator that the team is slowing down.  That’s because they were actually measuring acceleration.
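The difference between a steady velocity and a slowdown is easy to see if you compute the sprint-over-sprint change (the “acceleration”) directly. A minimal sketch with made-up sprint numbers:

```python
# Hypothetical sprint velocities: a team ramps up, then levels off.
velocities = [30, 40, 48, 50, 50, 50]

# Acceleration is the sprint-over-sprint change in velocity.
acceleration = [b - a for a, b in zip(velocities, velocities[1:])]

print(acceleration)  # [10, 8, 2, 0, 0]
```

A flat tail of zeros means constant velocity, not a slowing team; reading those zeros as “slowing down” is the mistake described above.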

Constant Velocity Pressure

Once a team reaches constant velocity, the pressure starts to come in. Sometimes it’s subtle, other times, it’s blatant.  The pressure can take various forms. It can be, like our conversation above, the PM asking if someone can take on more work than they originally committed to. Other times it involves adding people to a team in the hopes that if 5 people can do 50 points a sprint, then 6 people could do 60 points.

Internalization

Sometimes what happens next is internalization by developers. By this, I mean that individual developers will start focusing on how much more they can deliver, even without someone applying direct pressure. You can tell when this happens based on the conversations around the office. You’ll often hear developers say things like

  • “I don’t have time to do _____”
  • “I have to get this feature in for the demo”
  • “That should only take a few minutes”

Once a developer has internalized this idea, it starts to impact how they estimate. That idea of speed gets anchored in their head, which subconsciously pulls down their estimates. That means that for the next feature, or the next project, they don’t have enough time to do the things they need to do. And so the cycle continues.

Is it Sensible?

This doesn’t just happen in software. When I drive out to my in-laws’, it’s roughly 275 miles from my house to theirs. At various points I’ll catch myself thinking, “I’m doing 75 mph. We’ve got 180 miles to go. If I were to go 90 mph, we could be there in 2 hours, saving almost a half-hour.”

But I don’t get up to 90 mph, for a few reasons. First, one time I did bump up my speed by about 10 mph only to get pulled over for speeding a few minutes later.  That was a costly ticket, and so in the back of my mind I think about that cost.

But it’s also not safe. Speed is a common factor in car accidents, which makes sense. The faster you go, the less time you have to react to something. Additionally, when you do react, speed magnifies the reaction. So if you swerve into the other lane, it will be a much more violent swerve at a faster speed.

Also, depending on which car I was driving, I haven’t always been sure it could handle 90 mph. In fact, 75 might have been the limit of how fast I could go before the car started shaking and rattling.

I think the same parallels can apply to software as well. People who have been developing for more than a year or two have likely been burned by going too fast. We might not recognize it as being burned, we might just chalk it up to the “pains of software.”  For example, having a painful release but believing that releases are always painful. Or having the customer come back with bugs after their testing and defending it by saying “that’s just what happens in software.”  Of course, there are times where being burned is obvious.  For example, a memory leak occurs that wasn’t caught because the feature was developed at the end of the sprint with no real time for testing.

Software also has indicators that you’re going too fast. They’re similar to our car’s shaking and rattling, but often we ignore them. When was the last time you (or someone on your team) ignored one of those indicators in order to get the feature “done” in time? I can’t tell you the number of developers I’ve talked to at work, at conferences, and online who say, in one form or another, that they don’t have time to make sure they’re writing quality code, because they have to get the feature done.

The Point

If you made it this far, you’ve probably already figured out the point, but let me write it out anyway. The first point is this: if you have to measure anything, make sure you are NOT measuring acceleration.

Secondly, we need to understand what it means to go “as fast as possible.” It’s not purely about the number. After all, NHRA dragsters can achieve speeds over 300 mph, but they do that in a straight line.  In software, we can go faster if we sacrifice things like design, collaboration and quality. But that doesn’t help anyone. What’s the point of getting software out fast if it is low quality?


3 thoughts on “We Have a Speed Problem”

  • Charlie Koster on June 11, 2016

    Continuing with the car analogy, I think you’re right it’s not about measuring acceleration but I’d also argue that watching the speedometer isn’t the point either. It’s more about the destination and whether we’re traveling to the right destination.

    Tying it back to software, it’s not too often we find ourselves knowing exactly what to build from the start (we don’t know the destination when the car leaves the driveway, maybe only the general direction). We have a general understanding of the problem but using Agile to continuously deliver and get feedback from the [surrogate] users helps us know what will be of value to the user and adjust the roadmap accordingly.

    What good is building a quality product at a good pace if it’s built under possibly false assumptions about what the user needs? What good is safely driving 75mph due North when we should be going Northeast? We may find the destination eventually but only after much more waste if the speedometer gets more focus than the road signs along the way.

    • admin on June 11, 2016

You’re right, staring at the speedometer isn’t the point. I wasn’t clear on that. My thought was “If you have to watch a metric, watch speed over acceleration,” but I’d much rather not be focusing on either.

      “it’s not too often we find ourselves knowing exactly what to build from the start (we don’t know the destination when the car leaves the driveway, maybe only the general direction)”

I’m not sure I agree with that contention. Do we (the software industry) know exactly how an application will operate and/or exactly how it will look? No, but we know that we’re building a quoting tool for our product line, or a lead generation tool for salespeople, or an underwriting workflow for life insurance policies. We know that it’s a web app, or a thick client, or native. To me, it’s more like “let’s go to Chicago for the weekend. When we get there, we’ll find a place to stay somewhere in the city.” As we work on the product, we break it down into tasks (stories). While working on those stories, we need to know when each story is complete. We often fail at this, but I think it’s a critical point. Do you know 100% that you’ll build the feature the customer wants? No. But you CAN know that you built the feature that satisfies the conditions of acceptance. Going back to our travel analogy, if we found a hotel and told them we needed a room with 2 beds, they could be 100% sure they gave us a room with 2 beds. We might still leave a bad Yelp review, because we wanted a room with a view instead of staring at the building across the street. But the conditions of acceptance were met.

      “What good is building a quality product at a good pace if it’s built under possibly false assumptions about what the user needs?”

I think the answer is “It’s way better than the alternative.” Let’s say that 50% of the features we deliver get re-written for one reason or another. If you KNEW which 50% that was, then I’d say “sacrifice everything and get there fast.” But we don’t know which 50% will be rejected and which 50% will be accepted. And so we need to make sure 100% of the software we build is a “quality product,” because we don’t know which pieces are going into production.

I know one time I was about 12 months into a project and the project file was still named something like “Quote_Tool_UI_Prototype.” We were nearing release, and I was still using this “spike” solution from day 1, where I had been playing around with things. And it wasn’t just that the project was still named after a prototype; there was prototype-style code as well. That is, it wasn’t following an MVC pattern (or any pattern other than “let’s place this code here”). It happened because I wanted to demo a proof of concept to some internal users, and I didn’t want to spend time doing it “the right way” because I knew they’d have changes. I ended up releasing “prototype” software that was a pain to maintain and had other quality issues, all because I wanted to get something out in front of the customer. That happens way more often than we realize.

      It seems like we’re so afraid of doing work that might get thrown away, that we’re willing to release an inferior product. We justify it to ourselves by saying that we had to get it out fast.

  • Charlie Koster on June 11, 2016

I suppose this is one reason why I don’t normally make points with analogies, because I think we’re saying the same thing. If we know we’re building a thick-client quoting tool, then we know the “right direction” but not the destination, at least in my opinion.

    On the point of conditions of acceptance, I agree that we can be near certain whether we’ve met those conditions, and that we can’t be certain that the customer needs/wants have been met. But I’ll assert that the difference between the two is waste. Wasted time and wasted money.

    The implication there is instead of focusing on satisfying a set of upfront requirements, a subset of which only satisfy what the user needs, why not focus on satisfying the users’ needs instead?

    This is what I meant when I said we (software development teams) should focus on the road signs along the way. They are periodic indicators on how to get to the destination. In the real world that means constantly delivering and getting feedback (directly or measured) from users to better understand the problem and better understand how to address their needs.
