Let’s start with the basic idea. Sprints are a good way to plan work because they’re bounded by a short duration. What can we get done in three months? Boy, hard to say...a lot can happen in three months. What can we get done in two weeks? Ah—that’s much clearer. And based on what we deliver or fail to deliver, we’re a little bit smarter, a little more aware of the sticking points and areas for attention, going into the next sprint.

But no matter how short the sprint interval, it’s not a license to chuck planning out the window. The effectiveness of sprints depends on a few do’s and don’ts:

  • Don’t bite off more—or less—than you can chew. In an ideal world, you know your team’s average throughput or velocity, and you use that to set a reasonable workload for the sprint: not too much, not too little.
  • Do plan one to two sprints ahead. Effective planning should let you fill about half of the work for the next sprint, and a quarter of the work for the sprint after.
  • Don’t let too much unplanned work into the sprint. Some unplanned work is inevitable; we know this. But too much, especially given the tight window of a sprint, plays havoc with velocity, quality, and team morale.
  • Do limit scope volatility. Inevitably the final sprint plan won’t look like the first—that’s why it’s called a first plan. But too much expansion or contraction in planned issues has the same effect as too much unplanned work.
  • Don’t kid yourself about how much of the plan was delivered. Tracking this honestly is how you get better at sprint packing.
  • Do run retrospectives. This goes back to a chief aim of having a short, time-bounded period of work: to stop, see what got done and why, and then apply the lessons going forward.

So far, so textbook. What’s interesting, however, is the number of avowed sprint teams we see for whom sprint plans are little more than a bundle of cans to be kicked down the road as needed. In this approach, because there’s little planning, there’s no real tracking of how much can realistically get done within the sprint window. Whatever gets done gets done, and the rest just rolls to the next sprint.

There’s nothing criminal in this behavior. The idea of a backlog from which one picks off tickets to complete, then moves to the next item, is entirely appropriate for certain kinds of work. It even has a name: Kanban.

The muddle starts when teams declare that they’re running sprints, declare that they want the continuous improvement that’s supposed to come from comparing actuals to plan and adjusting the next plan, and then just sort of…let it all go. If you’re curious why some watchers think “Agile” is just a synonym for “Forget planning and do whatever the hell we feel like,” look no further than this tendency.

We tackle this through a signal we call Sprint Health. To derive Sprint Health, we examine a sprint’s work at three points: the initial plan, the final plan, and what’s actually delivered. From these, we calculate:

  1. Plan Survival Rate: the percent of issues from the initial plan that remain in the final plan.
  2. Plan Growth Rate: the percent change in the total number of issues from the initial plan to the final plan.
  3. Delivered vs Initial Plan: the percent of issues in the initial plan that are completed in the sprint.
  4. Delivered vs Final Plan: the percent of issues in the final plan that are completed in the sprint.
  5. Unplanned Delivery Rate: the percent of issues completed in the sprint that were not in the initial plan.
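
To make the definitions concrete, here’s a minimal sketch of the five calculations in Python, treating each snapshot as a set of issue IDs. The function and issue names are illustrative, not Pinpoint’s implementation:

```python
def sprint_health_metrics(initial_plan, final_plan, delivered):
    """Compute the five Sprint Health inputs from three sets of issue IDs.

    Assumes a non-empty initial plan, final plan, and delivered set.
    """
    initial_plan, final_plan, delivered = (
        set(initial_plan), set(final_plan), set(delivered),
    )
    return {
        # 1. Percent of initially planned issues that survive to the final plan.
        "plan_survival_rate": 100 * len(initial_plan & final_plan) / len(initial_plan),
        # 2. Percent change in total issue count, initial plan to final plan.
        "plan_growth_rate": 100 * (len(final_plan) - len(initial_plan)) / len(initial_plan),
        # 3. Percent of the initial plan completed in the sprint.
        "delivered_vs_initial": 100 * len(initial_plan & delivered) / len(initial_plan),
        # 4. Percent of the final plan completed in the sprint.
        "delivered_vs_final": 100 * len(final_plan & delivered) / len(final_plan),
        # 5. Percent of completed issues that weren't in the initial plan.
        "unplanned_delivery_rate": 100 * len(delivered - initial_plan) / len(delivered),
    }

# A sprint that dropped one planned issue, added two, and finished three:
metrics = sprint_health_metrics(
    initial_plan={"A-1", "A-2", "A-3", "A-4"},
    final_plan={"A-1", "A-2", "A-4", "A-5", "A-6"},
    delivered={"A-1", "A-4", "A-6"},
)
# -> survival 75%, growth +25%, delivered vs initial 50%,
#    delivered vs final 60%, unplanned delivery ~33%
```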

We summarize the results and assign a Sprint Health score in Pinpoint’s Sprint Pipeline view. The Recently Completed tab serves as a kind of instant, automatic sprint retrospective. You can choose a single sprint, or evaluate the health across all the most recently completed sprints:

Sprint Pipeline view - Sprint retrospective

In this example, the Sprint Health score is dragged down because of the increase in issues from initial plan (top bar) to final plan (middle bar, where the lighter blue represents issues added). Sprint Health is further hurt because far fewer issues were delivered (bottom bar) than were planned.
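
Pinpoint’s exact scoring formula isn’t spelled out here, but the spirit is that each deviation from a clean sprint costs points. A purely hypothetical composite, with invented weights, might look like this:

```python
def sprint_health_score(m):
    """Hypothetical composite: start at 100 and subtract for each deviation.

    The weights here are invented for illustration; they are not
    Pinpoint's actual formula.
    """
    score = 100.0
    score -= 0.5 * (100 - m["plan_survival_rate"])   # planned issues that churned out
    score -= 0.5 * abs(m["plan_growth_rate"])        # scope expansion or contraction
    score -= 0.5 * (100 - m["delivered_vs_final"])   # under-delivery against the final plan
    score -= 0.25 * m["unplanned_delivery_rate"]     # delivered work that bypassed planning
    return max(0.0, score)
```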

Because we have this historical understanding of a team’s sprint performance, we can use that actuals data to guide teams as they evaluate active sprints. For example, we can compare the current sprint plan against the team’s historical actual sprint throughput and cycle times—their Plan Capacity. (You can read more about how we use historical cycle time and throughput to determine capacity and forecasts here.) This comparison helps surface whether the current plan is over- or under-packed.

Current Sprint Performance

The example here shows that the team’s current plan amounts to only 38 percent of what their historical actuals say they’re capable of delivering. They’re sandbagging! Or more likely (ahem) they’ve seen that this sprint plan has room to grow, and are taking a second look at scope options...
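
As a back-of-the-envelope version of that comparison (the numbers and names below are illustrative, chosen to match the 38 percent example, and Pinpoint’s real Plan Capacity also weighs historical cycle times, not just issue counts):

```python
from statistics import median

def plan_vs_capacity(planned_issue_count, historical_throughputs):
    """Size the current plan against the team's demonstrated throughput.

    Illustrative sketch only: capacity here is just the median number of
    issues completed per past sprint.
    """
    capacity = median(historical_throughputs)
    return 100 * planned_issue_count / capacity

# Nine planned issues against a history of roughly 24 per sprint:
print(plan_vs_capacity(9, [22, 25, 24, 23, 26]))  # ~37.5, an under-packed plan
```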

The real question of course is, what kind of sprints are your teams running? True sprints, or Kanban-by-another-name sprints? Would you like to see for yourself?


Get the data science behind high-performance teams.