This morning I had the opportunity to chat with software engineers and data scientists at the AI Dev World Conference on a topic I just happen to be v...
Every now and again, we like to scrutinize the received wisdoms of software engineering against what the data says. Is the value of a given best practice backed by measurable results? Is the practice even realistic? If the answer to either question is No, we render our verdict: ditch it. Otherwise, it’s a keeper.
The reason for linking code commits to their originating issue is traceability. Traceability is good. The linkage makes it possible to know why the code was created in the first place, and speeds up impact analysis when a given issue needs to be revisited. That in turn improves metrics like mean time to repair (MTTR).
At least, that’s the classic rationale.
In fact, traceability unlocks a lot of other things. With it, we can surface things like:
The question is less whether this practice is valuable than whether it’s realistic. You’re an engineer, you’re under the gun for a new release, it’s late, you’ve been staring at the same lines of code for nine hours, and now, at the moment of commit… you have to remember to fish around for an issue ID?
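Fishing for an issue ID usually means embedding a tracker key in the commit message. A minimal sketch of extracting such keys, assuming Jira-style identifiers like PROJ-123 (the key format and the `linked_issues` helper are illustrative assumptions, not any particular tracker's API):

```python
import re

# Jira-style issue keys, e.g. "PROJ-123". The pattern is an assumption;
# adjust it to whatever convention your issue tracker actually uses.
ISSUE_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def linked_issues(commit_message: str) -> list[str]:
    """Return every issue key referenced in a commit message."""
    return ISSUE_KEY.findall(commit_message)

print(linked_issues("PROJ-123: fix login timeout (also closes AUTH-9)"))
# prints ['PROJ-123', 'AUTH-9']
```

A pre-commit or commit-msg hook could call something like this to warn when a commit carries no issue reference at all.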
Actually, that’s a trick question. We do all of that for you using ML.
“Technical debt” is the phrase that launched a thousand best-practices ships. Everyone agrees it’s important; everyone agrees it makes a material impact; no one wants the job of trying to figure out how much. Here again, the question is less whether the practice is useful or important than whether it’s realistic.
It is. We’ve done the work.
The first step is to put technical debt into terms the company’s purse holders can understand. That means cost—dollars. More specifically, it means knowing how much technical debt is slowing you down. Which code bases are getting creaky, and how many more days, on average, are required to ship product as a result? Multiply that difference by your average labor cost, and you have a concrete figure that everyone understands. If it’s big enough, no one can ignore it either.
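The arithmetic above is simple enough to sketch. All figures here are hypothetical placeholders, purely to show the shape of the calculation:

```python
# Hypothetical figures for illustration only.
baseline_days = 12.0        # average days to ship from a healthy code base
current_days = 15.5         # average days to ship from the creaky one
releases_per_year = 10
daily_labor_cost = 4_000.0  # blended team cost per day, in dollars

# Extra days per release attributable to technical debt,
# multiplied by labor cost and release cadence.
extra_days = current_days - baseline_days
annual_debt_cost = extra_days * daily_labor_cost * releases_per_year

print(f"${annual_debt_cost:,.0f} per year")  # prints $140,000 per year
```

A figure like that, stated in dollars per year, is the kind no purse holder can ignore.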
Look, we all know the story behind story points. They were created as a way to abstract the time element out of our estimates, in large part so that others (read: management) wouldn’t use those estimates as a cudgel.
The problem is, story points (or Fibonacci numbers, or T-shirt sizes) don’t mean anything to anyone outside of the team that assigns them. That’s the whole point. This makes it tough for other dependent teams to plan their work.
The other problem is, story points didn’t make management or business stakeholders stop caring about how long something would take.
There’s a much better way. It starts with using historical actuals. If we analyze how a current project compares to projects from our past, looking at patterns that include not only the type and priority of the work but also the specific people and teams assigned, and perhaps even the time of year, we can build ML models that use past actuals to predict future timeframes.
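To make the idea concrete, here is a deliberately naive sketch of "predict from historical actuals": average the actual durations of similar past work. The data, the feature set (type and team), and the `predict_days` helper are all illustrative assumptions; a real model would use far richer features and a proper learning algorithm:

```python
from statistics import mean

# Toy historical actuals (all figures hypothetical).
history = [
    {"type": "feature", "team": "bolt", "days": 9},
    {"type": "feature", "team": "bolt", "days": 11},
    {"type": "bugfix",  "team": "bolt", "days": 3},
    {"type": "feature", "team": "core", "days": 14},
]

def predict_days(work_type: str, team: str) -> float:
    """Naive 'model': average actual duration of similar past work."""
    similar = [h["days"] for h in history
               if h["type"] == work_type and h["team"] == team]
    # Fall back to the overall average when there is no similar history.
    return mean(similar) if similar else mean(h["days"] for h in history)

print(predict_days("feature", "bolt"))  # prints 10
```

The payoff is that the prediction comes out in days, a unit every stakeholder already understands, rather than in team-local story points.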
In fact, we already did.