This morning I had the opportunity to chat with software engineers and data scientists at the AI Dev World Conference on a topic I just happen to be v...
For the bulk of its history, software engineering has been like a factory where we can see the orders going in, we can see the product that comes out, but we can’t see much of what happens in the middle. The fulfillment process is largely mysterious.
Double-clicking on the metaphor, software engineering is like a factory without lights. We know there are people in the building, we can hear all kinds of activity, and if you’re on the factory floor, you can probably make out what your neighbor is doing. But collating this into a broader picture of what’s being done, by whom, why, and how well, all tends to require a lot of feeling around in the dark.
There’s a better way.
The unit of work, the “widget,” in our software factory is the issue: a request to engineering captured in a task management system like Jira, around which teams estimate, define requirements, plan, and track progress. True, most issues include code as an output, and code is clearly also something whose progress we should be able to see, which we address here. For this post, though, we’re primarily concerned with issues. When it comes to understanding the state of work, issues are the common currency, the coin of the realm, among engineers, leaders, and stakeholders.
All issue work has three fundamental states: backlog, in progress, and closed. As simplistic as that sounds, many software organizations lack a clear view into how much work they have in total, and into which state it falls. Knowing, for example, that (a) your backlog has 4x as many issues as you currently have in progress, (b) that the backlog has grown 20% over the past sixty days, and (c) that your closed issues have declined by 15% over the same period, tells the story of an organization whose work demand is pretty quickly outstripping its capacity.
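The arithmetic behind that story is simple enough to sketch. Here is a minimal Python example, using hypothetical snapshot counts (the issue states and numbers below are illustrative, not pulled from any real tracker), that computes the three signals named above:

```python
def pct_change(current, prior):
    """Period-over-period change in a count, as a percentage."""
    if prior == 0:
        return float("inf")
    return (current - prior) / prior * 100

# Hypothetical counts of issues per state, snapshotted sixty days apart.
prior   = {"backlog": 120, "in_progress": 30, "closed": 80}
current = {"backlog": 144, "in_progress": 36, "closed": 68}

# (a) backlog is 4x the work currently in progress
demand_ratio = current["backlog"] / current["in_progress"]  # 4.0

# (b) backlog grew 20% over the period
backlog_growth = pct_change(current["backlog"], prior["backlog"])  # 20.0

# (c) closed issues declined 15% over the same period
closed_change = pct_change(current["closed"], prior["closed"])  # -15.0
```

Three numbers, each trivial on its own; together they say demand is outrunning capacity.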
To get the specifics of that story, to understand what’s really going on, we need to delve deeper. Each of these three work states has a series of diagnostic questions we should be able to answer:
Here we want to look back, evaluating the performance, the efficiency of our ‘factory’ (or, as we also like to frame it, the engineering pipeline):
For any or all of these questions, I should be able to see the answer not only for my entire engineering organization, but also by team, timeframe (last month, last quarter, etc.), and/or type of work (bugs, tasks, epics, etc.).
If the above sounds pie-in-the-sky, it’s not surprising. Understanding software engineering performance—seeing inside the factory—has been difficult. Not because we don’t know the questions to ask, but because the usual way of trying to dig up answers is so labor-intensive: spreadsheets and custom queries, stand-ups and debates, all around work with lots of moving parts... The factory is dark for a reason.
At Pinpoint, we use signals to answer these questions. We derive the signals we need by harnessing the raw activity data of the systems where engineering work happens (e.g. Jira, GitHub), then letting our machine intelligence do the rest.
To begin, we provide a view of all work and its state. This view can be filtered by team, type of work, and/or timeframe. (1) In this example, we’ve elected to look at work over the past 90 days. We show the total number of issues in each state, as well as the percentage change from the prior period.
(2) We then analyze the backlog and actual work activity to answer the questions named above, such as whether the engineering organization’s focus and response is matching demand:
For work in progress, we can see the current state of all work started, broken down by time in state:
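Time in state falls out of an issue’s transition history. The sketch below assumes a chronological log of `(timestamp, new_state)` events, similar to what a Jira changelog records (the issue data here is hypothetical):

```python
from datetime import datetime, timedelta

# Hypothetical transition log for a single issue, oldest first.
transitions = [
    (datetime(2023, 5, 1), "backlog"),
    (datetime(2023, 5, 10), "in_progress"),
    (datetime(2023, 5, 17), "closed"),
]

def time_in_state(transitions, now=None):
    """Accumulate how long an issue spent in each state.

    Each state's duration runs from the transition that entered it
    to the transition that left it; an issue still open accrues time
    in its current state up to `now`.
    """
    durations = {}
    for (start, state), (end, _next) in zip(transitions, transitions[1:]):
        durations[state] = durations.get(state, timedelta()) + (end - start)
    last_time, last_state = transitions[-1]
    if last_state != "closed" and now is not None:
        durations[last_state] = (
            durations.get(last_state, timedelta()) + (now - last_time)
        )
    return durations
```

Bucketing those durations per issue (under a day, one to three days, over a week, and so on) yields exactly the kind of time-in-state breakdown described above.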
With the view for closed work, we provide a full performance evaluation of how, and how well, the work was delivered. This includes how fast we delivered (1), how much (2), and with what quality (3), as well as how this performance compares to the prior time period. We can also answer important questions about the way we work. For instance: Are we getting faster or slower? How is the amount of work in progress impacting our cycle time?
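The speed measure in (1) is cycle time: elapsed time from work started to work closed. A minimal sketch, again over hypothetical issue records (a real pipeline would pull the dates from the tracker):

```python
from datetime import date
from statistics import median

# Hypothetical closed issues with start and close dates.
closed_issues = [
    {"started": date(2023, 4, 1), "closed": date(2023, 4, 8)},   # 7 days
    {"started": date(2023, 4, 3), "closed": date(2023, 4, 6)},   # 3 days
    {"started": date(2023, 4, 5), "closed": date(2023, 4, 19)},  # 14 days
]

def median_cycle_time_days(issues):
    """Median days from work started to work closed."""
    return median((i["closed"] - i["started"]).days for i in issues)
```

The median (rather than the mean) keeps one long-running outlier from skewing the picture; comparing this number across periods answers the “are we getting faster or slower?” question directly.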
What software engineering is good at—what we’ve spent the past decades advancing through smarter methods and better tools—is automation. Specifically, automating the work of the software lifecycle (design, build, test, release), as well as automating the hand-offs between those stages. While these investments have sped the rate at which software can be delivered, they haven’t done much to help leaders answer the broader, business-centric questions that are second nature to most other departments:
Our aim is to wring another benefit out of these investments in automation tooling. By synthesizing the activity data across those systems, we can derive a wealth of information about the efficiency and performance of the software “factory.” It takes only the right machine learning to unearth it.
CTO and Co-Founder