For engineering teams, the ability to accurately estimate how long an issue will take is crucial to setting expectations properly and delivering products on time. What teams are really trying to estimate is cycle time, a key metric for understanding how they are doing, but cycle time is inherently a reactive, after-the-fact measurement.

Story points are one of the most common ways engineering teams estimate, but they were created to discuss the relative effort required by issues without giving a timeframe for completion. The ambiguity behind story points is one of the reasons Pinpoint was founded. They are not accurate enough to plan around; there is no commonly agreed-upon definition of what they mean, even within the same team; and they keep business stakeholders in the dark.

Knowing how long something will take is important for sprint capacity planning, prioritization, and generally as a conversation driver. It has ramifications outside the engineering organization as well — especially for high-priority or customer-facing products, where everyone from sales and marketing to customer support needs to know when the feature will be ready. 

When we first introduced the Issue Forecast metric over a year ago, it was primarily used for team rollups — so that teams could see an estimate of how much longer it would take to complete all of the issues left in the sprint. Unfortunately, it wasn't nearly as accurate as we would have liked, sometimes off by weeks or months. Often, issues would have a very high forecast, sending teams into a panic when they saw it. Refining this metric has been a high-priority project for us as we've worked to incorporate customer feedback into the product.

Issue Forecast uses machine learning to reduce our reliance on asking engineers during sprint planning or standups how long an issue will take (usually an arbitrary number) in favor of a more accurate and predictable estimate. In creating a new model, we also wanted to be transparent about the level of confidence the model has that an issue will be completed in the projected time frame. You can see below how this is represented in the product today alongside the estimated story points. Having these two data points together helps drive conversations about how teams estimate, improving planning going forward.

[Screenshot: Issue Forecast - Cycle Time]

Below are more details on how we chose the model for the Issue Forecast. More posts are coming on how we structured the user interface, how we visualize results, and various use cases for users.

Prioritizing Transparency

One of our core principles for metrics in our product is transparency. A common critique of machine learning applications is that they are a black box, which leads people to mistrust the results. It's important not just to provide a raw number or estimate, but also to explain how we arrived at that number. To ensure that we could do that, we needed to be able to show how each data point contributed to the overall forecast.

With this in mind, we chose a model that allows us to give each issue its own customized explanation. We specifically decided not to use neural nets and other parametric models, because they make it very challenging to interpret how individual data points contribute to the model's end result.
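
As a rough illustration of the idea (the model, features, and data here are simplified stand-ins, not our production pipeline), a tree-based model paired with the SHAP library can break a single issue's forecast into per-feature contributions:

```python
# Illustrative sketch only: a tree-based model plus SHAP producing
# per-issue feature contributions. Features and data are toy stand-ins.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Toy training data: [day_of_week, priority_level, is_bug] -> cycle time (days)
X_train = np.array([[0, 1, 1], [2, 3, 0], [4, 2, 1], [1, 1, 0], [3, 4, 1]])
y_train = np.array([2.5, 7.0, 4.0, 3.0, 9.5])

model = GradientBoostingRegressor().fit(X_train, y_train)

# TreeExplainer decomposes each prediction into additive per-feature
# contributions, so every issue gets its own customized explanation.
explainer = shap.TreeExplainer(model)
issue = np.array([[3, 2, 1]])  # the issue we want to forecast
contributions = explainer.shap_values(issue)[0]

for name, value in zip(["day_of_week", "priority", "is_bug"], contributions):
    print(f"{name}: {value:+.2f} days")
```

Because these contributions are additive (the baseline plus the per-feature values sum to the prediction), the explanation is tied directly to the forecast rather than being a post-hoc guess.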

The Model in Action

The current Issue Forecast model pulls information from Jira, along with insights about each issue from other models in our product, and creates an estimate of what the issue's cycle time will be. We also display the margin of error in order to be as transparent as possible. For example, the model might estimate that an issue will take 5.5 days, while being 90% confident that it will be completed in under 7 days.
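
One standard way to produce a point estimate and an upper bound like this is quantile regression: train one model on the median and another on the 90th percentile. The sketch below shows that general technique on synthetic data; it is an assumption about how such intervals can be built, not a disclosure of our exact method.

```python
# Simplified sketch: pairing a median forecast with a 90th-percentile
# bound via quantile regression. Synthetic data; illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.RandomState(0)
X = rng.rand(200, 3)                             # encoded issue features
y = 2 + 4 * X[:, 0] + rng.exponential(2.0, 200)  # cycle times in days

median_model = GradientBoostingRegressor(loss="quantile", alpha=0.5).fit(X, y)
p90_model = GradientBoostingRegressor(loss="quantile", alpha=0.9).fit(X, y)

issue = X[:1]
print(f"Point estimate: {median_model.predict(issue)[0]:.1f} days")
print(f"90% confident it finishes within: {p90_model.predict(issue)[0]:.1f} days")
```

Read together, the two predictions yield exactly the kind of statement above: a point estimate of 5.5 days alongside a 90% bound of 7 days.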

For customers with enough historical data, the model trains specifically on their own organization's history. If there isn't enough historical data for the model to work with, we fall back on generalized global data.
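
Conceptually, the fallback is a threshold check on how much history an organization has. This hypothetical helper (the cutoff and names are assumptions, not published values) captures the idea:

```python
# Hypothetical sketch of the org-vs-global fallback. The threshold
# and function shape are assumptions, not published values.
MIN_HISTORICAL_ISSUES = 500  # assumed cutoff

def choose_training_data(org_issues: list, global_issues: list) -> list:
    """Prefer the organization's own history; otherwise use the global pool."""
    if len(org_issues) >= MIN_HISTORICAL_ISSUES:
        return org_issues
    return global_issues
```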

Here are some of the data points we use to make the Issue Forecast (a sketch of how they might be encoded follows the list):

  • The day of the week the issue is created
  • The issue’s priority level
  • The issue type: bug vs new feature
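
Each of these can be read straight off the Jira issue JSON. The encoding below is one reasonable mapping, offered as an assumption; the field access follows the shape of Jira's REST API responses.

```python
# Sketch: turning a Jira issue into model features. Field access follows
# Jira's REST API JSON shape; the specific encoding is an assumption.
from datetime import datetime

PRIORITY_LEVELS = {"Lowest": 0, "Low": 1, "Medium": 2, "High": 3, "Highest": 4}

def encode_issue(issue: dict) -> list:
    fields = issue["fields"]
    created = datetime.fromisoformat(fields["created"])  # assumes ISO-8601 timestamp
    return [
        created.weekday(),                                 # day of week created (0 = Monday)
        PRIORITY_LEVELS[fields["priority"]["name"]],       # priority level
        1 if fields["issuetype"]["name"] == "Bug" else 0,  # bug vs. new feature
    ]
```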

Even as-is, this model is more accurate than any other issue-forecasting approach available — especially story points. But the possibilities are endless for how the Issue Forecast will evolve as we bring more data into the model. In the future, we'd like to incorporate data that's directly related to the source code, like what language the code is written in, the developer's level of experience with that language, and the repository the issue will contribute to. We could also incorporate more individual and team data points to be even more specific, such as comparing an individual's or team's usual cycle time on issues in a particular language.

Our hope is that Issue Forecast helps teams have more data-driven conversations during sprint planning so they can be more efficient in planning and executing their work. The ability to predict more accurately what can and cannot be completed in a given timeframe will help keep engineers productive, so they aren't running out of things to do or feeling constantly behind as work rolls over from sprint to sprint.

Issue Forecast will help engineering teams confidently communicate with business stakeholders in advance about when a new release, feature, or bug fix is likely to be ready, tying engineering directly to business outcomes.

