This morning I had the opportunity to chat with software engineers and data scientists at the AI Dev World Conference on a topic I just happen to be v...
If you think about software engineering as a pipeline (as we do), it clarifies, at the highest level, what we care about: how much we get done, how fast we get it done, and how good the results are. The measure for understanding how much moves through any pipeline is throughput. In a traditional manufacturing pipeline, throughput is simply how many widgets get produced in a certain timeframe. Measuring throughput for a virtual pipeline—in this case, how many ideas get turned into working software—is slightly more complicated. But it’s worth it.
There are two variables in software engineering throughput. To understand the quantity of work getting done, we look at the number of issues* a team completes, per person, per month. It’s important to divide the amount of work a team gets done by the number of people on the team. This normalizes the figure, allowing for a team-by-team comparison regardless of the size of each team.
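The normalization above is just a per-capita division. A minimal sketch (the team names and numbers here are hypothetical, purely for illustration):

```python
# Hypothetical monthly totals for two teams of different sizes.
team_a = {"issues_completed": 120, "team_size": 12}
team_b = {"issues_completed": 45, "team_size": 5}

def issues_per_person(team):
    # Dividing by headcount normalizes the figure so teams of
    # different sizes can be compared directly.
    return team["issues_completed"] / team["team_size"]

print(issues_per_person(team_a))  # 10.0
print(issues_per_person(team_b))  # 9.0
```

Despite completing nearly three times as many issues, the larger team’s per-person figure is comparable to the smaller team’s.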
The second variable is the complexity of the work completed. Unlike manufacturing, our work units, our “widgets,” are seldom of equal size. The work required to complete a single issue might range from tweaking a line of code to rethinking a layer in the architecture.
The usual way to try to account for work size or complexity is through something like story points. But there’s a problem: story points make team-by-team comparison impossible. Because they are designed to obscure the time aspect, story points are—by design—team-specific. (Something that even Ron Jeffries, a founder of XP and the creator of points, now regrets.)
Instead of story points or any other kind of estimation magic, we use the actual average number of days it took to finish the given issues—a signal we call Cycle Time, which we derive automatically from Jira or the equivalent work system. Cycle Time here serves as a proxy for size or complexity.
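The derivation is straightforward once you have start and completion timestamps for each issue. A minimal sketch, assuming each issue record carries a date work began and a date it was done (the records here are invented for illustration, not real Jira export format):

```python
from datetime import date

# Hypothetical issue records with the dates work started and finished,
# as might be pulled from Jira or a similar tracker.
issues = [
    {"started": date(2021, 3, 1), "done": date(2021, 3, 6)},   # 5 days
    {"started": date(2021, 3, 2), "done": date(2021, 3, 5)},   # 3 days
    {"started": date(2021, 3, 4), "done": date(2021, 3, 11)},  # 7 days
]

def average_cycle_time(issues):
    # Cycle time per issue: calendar days from start to completion.
    days = [(i["done"] - i["started"]).days for i in issues]
    return sum(days) / len(days)

print(average_cycle_time(issues))  # 5.0
```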
So, to calculate throughput, we look at both how much work was done and the time it took to complete it. The formula looks like this:

Throughput = (issues completed per person per month) × (average cycle time in days)
Imagine a team that completes an average of 10 issues per person per month, with an average cycle time of five days. Another team completes five issues per person per month, with an average cycle time of 10 days. Both teams have a throughput of 50—but the team with the longer cycle times is likely working on more complex projects. (Or, possibly, they’re just less efficient. Which brings us to the next section…)
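The worked example above can be sketched directly from the formula (function and variable names here are our own, chosen for clarity):

```python
def throughput(issues_per_person_per_month, avg_cycle_time_days):
    # Weight the volume of work by the time each unit took,
    # using cycle time as a proxy for size/complexity.
    return issues_per_person_per_month * avg_cycle_time_days

print(throughput(10, 5))   # 50 — many smaller issues
print(throughput(5, 10))   # 50 — fewer, longer-running issues
```

Identical throughput, different profiles: the second team is either tackling more complex work or completing the same work less efficiently.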
There are some key benefits to knowing your throughput.
There’s another important benefit to throughput: it’s a measurement the business can understand. Non-engineers don’t know what a “story point” is. (Even some of us in engineering struggle...) But they do understand throughput. It gives engineers and the business a common language to describe something everyone cares about—which is no small thing, in a world that increasingly depends on engineering to run.
*Issue is a little Jira-specific; in this context it can be read simply as any unit of work captured in your work/ticket system.
CTO and Co-Founder