Bugs: It’s a data problem

A number of years ago, a friend complained to me, “Why does everyone seem to want to reinvent the bug tracker?!” That semi-inebriated conversation stuck as a question that has rolled around in the back of my head ever since.

A couple of years ago at Nebula, I was actively triaging bugs for a weekly development meeting. The number of issues logged was around 250. It wasn’t just bugs – big and small feature ideas were included, as was the “hey, we took a shortcut here to get this out the door for a customer, let’s get back here and clean this up” stuff. It was the whole mess of “tracking development” stuff, freely intermixed with what was far more appropriate for product management to manage. I was guiding the engineering effort, so I wallowed in this data all the time – I knew it inside and out. The reason that day stood out is that I had just hit my threshold for being overwhelmed by the information.

It was at that point that I realized that tracking bugs, development ideas, and technical debt was fundamentally a data problem, and I needed to treat it as such to have any semblance of control in using it. Summarization and aggregation were critical, or we’d never be able to make any reasonable decisions about what to focus on. And by reasonable I mean somewhat objective, because up until then everyone had been counting on me to know all this information and provide the summary – and I had been doing that intuitively and subjectively.

I call them “bugs” because that’s easy and I’m kind of lazy, and it’s kind of a cultural norm in software engineering. The software that has been most successful at wrangling this kind of data flow has generally been called a “bug tracker”: Bugzilla from back in the day, Trac, FogBugz, Jira, or more often these days GitHub Issues. “Bugs” is not technically accurate – it’s all the ephemera from development, and it starts to get intermixed with the “hey, I just want to track and make sure we do this” stuff that may not even be about the product. Once you put a tracking system in place that people actually use, it gets used for all sorts of crap.

It gets more confusing because some of the data tracked really are bug reports. Bug reports have narrative included – or at least I hope narrative exists. And free-form narrative makes it a complete pain in the butt to aggregate, summarize, and simplify.

I think that is why many folks start adding tags or summarization metadata to these lists. The most common are the priority tags – “really important” vs. “okay to do later” markers. Then there is the constant tension between fixing things and adding new things, which tends to encourage some teams to use two different, but confusingly similar, markers like “severity” vs. “priority”. In the end, neither does a particularly good job of explaining the real complexities of the collection of issues. At this point, I would personally rather just make the whole damn thing a stack rank, factoring in all the relevant pieces, rather than trying to view it along two, three, or more axes of someone else’s bucketing. In practice, it usually ends up getting bucketed – and who (individual or group) does the bucketing can be very apparent when you look through the data.
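To make the stack-rank idea concrete, here’s a minimal sketch of collapsing a few bucketed fields into a single ordering. The field names, weights, and issues are entirely made up for illustration – they aren’t pulled from any particular tracker, and the point is only that one sorted list is easier to reason about than several competing buckets.

```python
# Hypothetical sketch: collapse several bucketed fields into one stack rank.
# Field names and weights are invented for illustration only.

issues = [
    {"id": 101, "title": "crash on save", "severity": 1, "priority": 2, "customer_reports": 14},
    {"id": 102, "title": "typo in settings page", "severity": 4, "priority": 3, "customer_reports": 1},
    {"id": 103, "title": "slow startup", "severity": 3, "priority": 1, "customer_reports": 6},
]

def score(issue):
    # In this made-up scheme, lower severity/priority numbers mean "more
    # important", so invert them, and weight in how often customers hit it.
    return (5 - issue["severity"]) * 3 + (5 - issue["priority"]) * 2 + issue["customer_reports"]

stack_rank = sorted(issues, key=score, reverse=True)
for rank, issue in enumerate(stack_rank, start=1):
    print(rank, issue["id"], issue["title"])
```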

This is all fine while you’ve got “small amounts” of data. Small amounts in my case being up to about 250 reports; the exact number will vary with whoever is trying to use it. So far, I have been writing about being overwhelmed by the volume of this information. Growth over time caused most of that volume – there is always more work than you will ever be able to do. If you accelerate that flow by collecting all the crashes from remote programs – on desktops, or in browsers, or on mobile devices – the volume grows tremendously. Even with a smaller “human” flow, there is a point where just getting the narrative and data summarized falls apart, and you need something other than a human being reading and collating to summarize it all.

A few years ago Mozilla started some really amazing work on aggregate crash collection and summarization. They were tracking inputs on a scale that nobody can handle individually. They built software specifically to collect tracebacks and error messages, which it automatically aggregates and summarizes. They also set up some thoughtful process around how to handle the information. Variations on that theme have been productized into software solutions such as Crashlytics (now part of Fabric), Rollbar, or Sentry.
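The core of that aggregation idea is simple even if the real pipelines are not: group incoming crash reports by a “signature” derived from the traceback, then count the groups. Here’s a rough sketch of that grouping, with an invented report structure – real crash-aggregation systems do far more normalization and symbolication than this.

```python
# Rough sketch: group crash reports by a signature built from the top
# traceback frames, then surface the most common ones. The report
# structure here is invented for illustration.

from collections import Counter

def signature(frames, depth=3):
    # Use the top few frames as a crude grouping key.
    return " | ".join(frames[:depth])

def summarize(crash_reports):
    counts = Counter(signature(r["frames"]) for r in crash_reports)
    # Most frequent crash signatures first -- the summary a human can act on.
    return counts.most_common(10)

reports = [
    {"frames": ["save_file", "flush_buffer", "write"], "version": "1.2.0"},
    {"frames": ["save_file", "flush_buffer", "write"], "version": "1.2.1"},
    {"frames": ["render", "layout", "measure_text"], "version": "1.2.0"},
]

for sig, count in summarize(reports):
    print(count, sig)
```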

Like its cousin, the performance metrics commonly collected for hosted services (with something like Grafana or Graphite), this is informational gold. This is the detail that you want to be able to react to.

It is easy to confuse what you want to react to with how you want to react. It’s common when you’re growing a team: you start mapping process around the data. When you’re dealing with small numbers of bugs, it is common to think of each report as an exemplar – where duplicates will be closed if so reported – and you start to track not just what the issue is, but what you’re doing to fix it. (Truth is, as the volume of reports grows, you almost never get any good process for closing duplicates.)

At first it’s easy – it is either “an open issue” or it is “fixed”. Then maybe you want to distinguish between “fixed in your branch” and the fix being included within your release. Pretty soon, you will find yourself wandering in the dead marshes, trying to distinguish between all the steps in your development process – for nearly no value at all. You are no longer tracking information about the issue; you’re tracking how you’re reacting to the issue. It’s a sort of development will-o’-the-wisp, and not the nicer form from Brave, either.

The depths of this mire can be seen while watching well-meaning folks tweak Jira workflows around bugs. Jira allows massive customization and frankly encourages you down this merry path. I’m speaking from experience – I have managed Jira and its siblings in this space, made more than a few swamps, and drained a few as well.

Data about the bugs won’t change very much, but your development process will. When you bake that process into the customization of a tool, you have just added a ton of overhead to keep it in sync with what is a dynamic, very human process. Unless you want to be in the business of writing the tools, I recommend you use the tools without customization and choose them based on what they give you “out of the box”.

On top of that, keep the intentions separate: track the development process and what’s being done separately from feature requests and issues for a project. Maybe it’s obvious in hindsight, but as a data problem, it gets a lot more tractable once you’re clear on what you’re after. Track and summarize the data about your project, and track and summarize what you’re doing about it (your dev process) separately.

It turns out that this kind of thing maps beautifully to support and customer-relationship tracking tools as well. And yeah, I’ve installed, configured, and used a few of those – Helpdesk, Remedy, Zendesk, etc. The funny thing is, it’s really about the same kind of problem – collecting data and summarizing it so you can improve what you’re doing based on feedback – the informational gold that tells you where to improve.
