“How much longer do you have on your feature?” the tech lead asked.
After pausing for a minute to think, Steve answered, “probably about another day or so.”
“Hmmm. I thought you were going to be done today. Did something unexpected come up, or did we just misestimate it?”
“I think we were close on the amount of time to actually write the code, but I had a problem with NPM and that ate up a bunch of time. Once I got that resolved, I think I lost about half a day.”
Steve pulled up the team’s schedule, and after looking, asked his PM, “We’re not demoing until next week, right? Isn’t this the first week of the sprint?”
“Yeah, you’re right, demo is a week from Thursday. I was hoping you could wrap that up today. I need someone to start work on optimizing the calculation queries, so we can get those done for the demo.”
“Oh. I didn’t know those were part of the sprint…I guess I can hope to start on them tomorrow.”
“They weren’t originally, but we’re ahead of schedule, and the customer was hopeful that we could squeeze this in.”
The conversation above is fictionalized, but it’s probably familiar to a lot of developers. While they work, they feel pressure from customers, teammates, managers and project managers to get more and more work done. And when they do get work done, the reward is often more work: new features and functionality added to the deliverable.
How Did We Get Here?
I didn’t always feel this pressure. Maybe it wasn’t there, or perhaps I was too oblivious to see it. Back when I first started developing software professionally, we were following a waterfall approach. We’d release twice a year, usually April & October. As soon as our April release was out the door, we’d start coding the October release. But there was more work than that. In order to release in October, our requirements had to be complete no later than April. So this meant our BAs were gathering and writing requirements a release ahead of where the developers were coding. At the start of each release, the engineers would write specification documents, design documents and testing documents. Then we’d write our feature. We’d follow that with a code review, and finish the feature by having our internal customers test it. The largest feature I had was estimated at around 600 hours. Most of them were probably in the 200-250 hour range.
As you can see, development was anything but fast. From the time the customer specified a feature with our BAs until it was in our release, it could be close to a year. In fact, the 600 hour feature I was talking about ended up taking about 18 months from initial discussion until release (and it blossomed into more than one feature).
Agile Answers These Problems
I wasn’t the only one experiencing this. In fact, in the early 2000s, waterfall was pretty common. Around that time, a group of people put forward something they called the Agile Manifesto. They were a relatively small group, but they ultimately ended up changing the software world.
I came into contact with these ideas almost 10 years later. I didn’t know what it was called at the time, but as I worked in that flow, it started to make sense. Instead of taking a year or more to go from concept to production, we could get there in a matter of weeks. That gave our customers an opportunity to change where the project was headed. It also solved a big problem of the time: not releasing software at all. Too often, teams would work for months (or years), turn the software over to the customer, and hear that it wasn’t right. Then there’d be debate over what was right and when it could release. Often the project would end up on the shelf, all that time and money wasted.
At some point, after a lot of people were doing agile development, someone thought, “If we can track how much we get done each sprint, then we can start to project when we’ll be completely done.” That set off various metrics collected during Scrum sprints. Probably the first of these was velocity: how many “story points” the team completed in one sprint. If they did 50 story points this sprint, you could guess that they’d do 50 points next sprint, and you could tell your customer which features they could expect to see in the next demo, and the demo after that, and ultimately when you’d finally be “done.”
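The projection logic described above is simple enough to sketch in a few lines. This is an illustrative sketch with made-up numbers (the 50-point velocity comes from the text; the backlog size is hypothetical), showing the naive “divide what’s left by velocity” math that teams use to promise delivery dates:

```python
import math

# Hypothetical numbers: a naive "when will we be done?" projection
# from a stable velocity.
velocity = 50   # story points completed per sprint
backlog = 180   # story points remaining in the release

# Round up: a partial sprint of work still occupies a whole sprint.
sprints_remaining = math.ceil(backlog / velocity)
print(sprints_remaining)  # 4
```

The appeal is obvious: multiply `sprints_remaining` by the sprint length and you have a date to give the customer. The problem, as the rest of the article argues, is what happens when that number becomes a target.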
After we had velocity for a while, something happened: we started seeing if we could accelerate that velocity. On any new team or project, there is likely some baked-in acceleration anyway. After all, your team is new to the domain, so as they understand it better, they make progress faster. Or, if it’s the first time these developers have worked together, they’ll move faster as they become familiar with each other.
So it’s natural to see velocity increase in the first few sprints of a new project. At some point, though, you expect it to level off. And when it levels off, some people get anxious, because they read a steady velocity as a sign that the team is slowing down. That’s because they were actually measuring acceleration, not velocity.
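Put concretely, with made-up numbers: acceleration is just the sprint-over-sprint change in velocity. A team that ramps up and then levels off has a perfectly healthy flat velocity, but its “acceleration” drops to zero, which is what makes the chart-watchers nervous:

```python
# Hypothetical velocities for the first six sprints of a new project:
# a ramp-up while the team learns the domain, then a level plateau.
velocities = [30, 40, 48, 50, 50, 50]  # story points per sprint

# "Acceleration" is the sprint-over-sprint change in velocity.
accelerations = [b - a for a, b in zip(velocities, velocities[1:])]
print(accelerations)  # [10, 8, 2, 0, 0]
```

A zero in that second list doesn’t mean the team is slowing down; it means the team stopped speeding up, which is exactly what a sustainable pace looks like.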
Constant Velocity Pressure
Once a team reaches a constant velocity, the pressure starts to come in. Sometimes it’s subtle; other times it’s blatant. It can take various forms. It can be, like our conversation above, the PM asking if someone can take on more work than they originally committed to. Other times it involves adding people to a team in the hope that if 5 people can do 50 points a sprint, then 6 people could do 60 points.
Sometimes what happens next is internalization: individual developers start focusing on how much more they can deliver, even without anyone applying direct pressure. You can tell when this happens from the conversations around the office. You’ll often hear developers say things like:
- “I don’t have time to do _____”
- “I have to get this feature in for the demo”
- “That should only take a few minutes”
Once a developer has internalized this idea, it starts to affect how they estimate. They have an idea of speed anchored in their head, which subconsciously pulls their estimates down. That means that on the next feature, or the next project, they don’t have enough time to do the things they need to do. And so the cycle continues.
Is it Sensible?
This doesn’t just happen in software. When I drive out to my in-laws’, it’s roughly 275 miles from my house to theirs. At various points I’ll catch myself thinking, “I’m doing 75 mph. We’ve got 180 miles to go. If I were to go 90 mph, we could be there in 2 hours, saving almost a half-hour.”
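That back-of-the-envelope math checks out, using the trip numbers from the anecdote above:

```python
# Trip numbers from the anecdote: 180 miles remaining.
miles_left = 180

hours_at_75 = miles_left / 75  # 2.4 hours (2h 24m)
hours_at_90 = miles_left / 90  # 2.0 hours

minutes_saved = (hours_at_75 - hours_at_90) * 60
print(minutes_saved)  # 24.0 -- "almost a half-hour"
```

A 20% increase in speed buys 24 minutes on a multi-hour trip, which is the setup for the real question: what does that extra speed cost?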
But I don’t get up to 90 mph, for a few reasons. First, one time I did bump my speed up by about 10 mph, only to get pulled over for speeding a few minutes later. That was a costly ticket, and that cost sits in the back of my mind.
But it’s also not safe. Speed is a common factor in car accidents, which makes sense. The faster you go, the less time you have to react to something. Additionally, when you do react, speed magnifies the reaction. So if you swerve into the other lane, it will be a much more violent swerve at a faster speed.
Also, depending on which car I’ve been driving, I haven’t always been sure that it could handle 90 mph. In fact, 75 might have been the limit of how fast I could go before the car started shaking and rattling.
I think the same parallels apply to software. People who have been developing for more than a year or two have likely been burned by going too fast. We might not recognize it as being burned; we might just chalk it up to the “pains of software.” For example, having a painful release but believing that releases are always painful. Or having the customer come back with bugs after their testing and defending it by saying “that’s just what happens in software.” Of course, there are times when being burned is obvious. For example, a memory leak slips through because the feature was developed at the end of the sprint with no real time for testing.
Software also has indicators that you’re going too fast. They’re similar to the car shaking, but often we ignore them. When was the last time you (or someone on your team) ignored one of these indicators in order to get the feature “done” in time? I can’t tell you the number of developers I’ve talked to, at work, at conferences, and online, who say in one form or another that they don’t have time to make sure they’re writing quality code, because they have to get the feature done.
If you’ve made it this far, you’ve probably already figured out the point, but let me write it out anyway. The first point is this: if you have to measure anything, make sure you are NOT measuring acceleration.
Secondly, we need to understand what it means to go “as fast as possible.” It’s not purely about the number. After all, NHRA dragsters can reach speeds over 300 mph, but they do that in a straight line. In software, we can go faster if we sacrifice things like design, collaboration and quality. But that doesn’t help anyone. What’s the point of getting software out fast if it’s low quality?