Pinpoint Engineering


How we built our new Agent — a full-featured Go SDK

As part of our latest release, our Agent was completely rebuilt to simplify installing integrations and getting data into our application. “Agent 4.0” was a foundational project for us: it unlocks future initiatives that require immediate access to data, and it delivers a better user experience overall.

We took a lot of what we learned from past versions of the agent to improve this one. This version is much faster, far more stable, and simpler code-wise, and it processes data in real time. The end product is a full-featured Go SDK that makes it easier to build integrations that run in both cloud and self-managed environments. It is now an open source project 🎉.

Read on to learn more about how the team built the new Agent. 

Modular Approach

One of the main differences between this new version of the agent and the previous one is that it was modular from the beginning: the integrations are no longer part of the agent, and the agent isn’t really an agent anymore; it’s an SDK. I realize that’s probably confusing, since the project is literally called “pinpt/agent” 🤔

What this means is that the project contains the SDK and all the build tools necessary to make a new integration. An integration is a free-standing project that compiles to a standalone binary. This lets us separate the integrations into their own projects (pinpt/github, pinpt/jira, etc.) and version, deploy, and scale each one independently.

Flexibility in Processing Data

The previous Agent’s limitations were mostly self-imposed, dating from a time when Pinpoint’s focus was on analyzing all historical data. Our backend was built in a way that required all data to be processed in chronological order, which meant users could not use Pinpoint until their data had been fully processed.

As we shifted our focus to analyzing and optimizing engineering activity in real time rather than solely analyzing historical trends, we were freed from chronological processing and could be more flexible. Our streaming backend now accepts and processes data as soon as it is discovered, from newest to oldest. This allows new users to see their current work as soon as they connect their integrations, while their older data is still processing.

Scaling Agents to Optimize Uptime 

If necessary, we can now scale out each integration type (Jira, GitHub, GitLab, etc.) to handle whatever request load occurs. A single deployment of each integration type can service all of our customers and provide high availability for real-time integration data processing.
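Conceptually, each integration type is its own worker pool that scales by adding workers (or replicas). A minimal sketch with illustrative names, not our production code:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// Job is an export request for one customer on one source system.
type Job struct {
	IntegrationType string // "jira", "github", "gitlab", ...
	Customer        string
}

// processAll runs a worker pool for a single integration type and
// returns how many jobs were handled. In production each pool would be
// its own deployment, so Jira capacity can grow independently of
// GitHub capacity.
func processAll(integrationType string, workers int, jobs []Job) int {
	in := make(chan Job, len(jobs))
	for _, j := range jobs {
		in <- j
	}
	close(in)

	var processed int64
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range in {
				// A real worker would run the export here.
				_ = j
				atomic.AddInt64(&processed, 1)
			}
		}()
	}
	wg.Wait()
	return int(processed)
}

func main() {
	jobs := []Job{
		{IntegrationType: "jira", Customer: "acme"},
		{IntegrationType: "jira", Customer: "globex"},
	}
	fmt.Printf("handled %d jira exports\n", processAll("jira", 2, jobs))
}
```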

Working with APIs 

We set out to unify the way we connect to Jira as a source system. As you can imagine, there are many different Jira flavors, and to be honest, a lot of the APIs seem like they were built in a vacuum. Finding the lowest common denominator proved to be quite a challenge, but we eventually got there.

Because of Jira's fragmented nature when it comes to auth and APIs, we had to decide what would provide the best possible user experience, both when setting up the integration and during data processing. We determined that an application link in Jira was the best solution: it allows for easy integration and gives us access to the full Jira API. That access let us bring in things like Agile boards, issue ranking, and the ability to create sprints. It wasn't easy, but it was well worth the effort; processing and updating Jira and other issue systems has never been faster.
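The shape of that lowest common denominator is roughly one shared interface with an implementation per Jira flavor; even something as small as building a browse link for an issue differs between Cloud and self-managed instances. A sketch with hypothetical names, not our actual code:

```go
package main

import "fmt"

// issueSource is the common surface we expose across Jira flavors
// (Cloud, Server, Data Center). Illustrative only.
type issueSource interface {
	// IssueURL builds a browse link for an issue key.
	IssueURL(key string) string
}

// jiraCloud is hosted: the URL is derived from the Atlassian site name.
type jiraCloud struct{ site string } // e.g. "example.atlassian.net"

func (j jiraCloud) IssueURL(key string) string {
	return fmt.Sprintf("https://%s/browse/%s", j.site, key)
}

// jiraServer is self-managed: the base URL is whatever the admin set up.
type jiraServer struct{ baseURL string }

func (j jiraServer) IssueURL(key string) string {
	return j.baseURL + "/browse/" + key
}

func main() {
	sources := []issueSource{
		jiraCloud{site: "example.atlassian.net"},
		jiraServer{baseURL: "https://jira.internal.example.com"},
	}
	// The rest of the pipeline only ever sees issueSource, so it is
	// indifferent to which flavor it is talking to.
	for _, s := range sources {
		fmt.Println(s.IssueURL("ENG-42"))
	}
}
```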

What’s Next?

Now that Agent 4.0 is out in the wild, we can move on to some of the other, cooler features on our roadmap, including bringing in more integrations and data sources. We are even considering a cross-platform SDK so that Go isn’t a limitation.

A couple of notes I’d get called out for not mentioning:

  • I set up a channel in our Slack group specifically for data engineering. If you are into that sort of thing, join us and share what you are working on. Here's a preview of all the fun we have. 

  • You can see the Agent in action by signing up for Pinpoint here — it's free. I would love your feedback.
  • This project was a labor of love from a lot of people, especially on our Gold Miners team.

- Robin 

