Alfonso Catrón

WordPress Engineer, Support & Tooling

Measuring impact of tests and tools in Support (and turning it into an OKR)

Measuring the real impact of a Test or a Tool in support work is complex.

Because it’s not just about saving time: a good tool also makes a complex thing doable for everyone. It turns a weird edge case into a standard routine. It gets added to our thinking process, to our troubleshooting flow. It changes behavior.

So a test that was hard to do and often skipped… suddenly becomes part of how we work daily. That raises the overall quality of support. It’s like getting a free medical checkup that now includes a bunch of tests we never did before. No extra effort, better outcomes.

In reality, you can’t predict the impact. It’s not something linear. It’s not “this saves 5 minutes.” It’s more like a curve that grows. Sometimes it changes everything.

1) We can start by classifying “Process Types”

Generally speaking, processes can be classified along two dimensions: how complex they are, and how often they happen. Let’s say:

  • A = high (complexity / frequency)
  • B = medium
  • C = low

So, an AA process is something complex and frequent. That’s gold. That’s what you want to automate.

A CC process is a low-complexity, low-frequency one. Not super interesting, but it can still add to the goal.

But sometimes an AC (complex but not frequent) can become super useful, because once it’s automated, it does get used more. And suddenly we’re doing checks we never did before.

And then you’ve got the OO processes. These are not just automations, they’re new ideas, new techniques, new approaches. Stuff that didn’t exist before. You can’t predict their impact. But they might change the game. They move the limits of what’s possible.
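The classification above can be sketched in a few lines of code. This is a minimal, hypothetical sketch: the letter grades (A/B/C) and the special O type come from the model above, but the function names and level labels are illustrative, not part of any real tool.

```python
# Hypothetical sketch of the process-type classification above.
# A = high, B = medium, C = low; "OO" marks a brand-new idea/technique.

def grade(level: str) -> str:
    """Map a qualitative level to its letter grade."""
    return {"high": "A", "medium": "B", "low": "C"}[level]

def classify(complexity: str, frequency: str, novel: bool = False) -> str:
    """Return a two-letter process type, e.g. 'AA', 'AC', or 'OO'."""
    if novel:
        # New idea or approach: impact is unpredictable, scored as OO
        return "OO"
    return grade(complexity) + grade(frequency)

print(classify("high", "high"))            # AA: complex and frequent, automate it
print(classify("high", "low"))             # AC: complex but rare
print(classify("low", "low", novel=True))  # OO: something that didn't exist before
```

Notice that an AC process is exactly the case discussed above: rare today, but once automated it may be run far more often.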

2) Estimate complexity

Before building a test, we have to estimate how complex the development will be. Is it something quick and easy? Will it take too much time and effort? Will we need help from Developers or IT? Can we integrate it into an existing tool, or do we need to create something new? How easy will it be for the team to adopt it?

Complexity is a key decision factor, and it is directly related to the Process Type: if an OO or AA process is complex to build, it might still be a great opportunity. But if a CC process needs a huge effort, maybe we can skip it for now.

3) Measuring, an example

How does this come into play? How can we measure all this for the purpose of OKRs, to make it visible, evaluate it, and track it?

We assign points:

  • A = 3 points
  • B = 2
  • C = 1
  • O = 5

Then, for each process we discover and automate, we score it based on its type. Let’s say in a quarter we created:

  • 2 AA processes → 2 x (3+3) = 12
  • 1 BA → 3 + 2 = 5
  • 4 CB → 4 x (2+1) = 12
  • 1 OO → (5+5) = 10

That’s 12 + 5 + 12 + 10 = 39 points.

39 points of what? You might ask… it doesn’t matter. This is a metric that becomes relevant with repetition. Next quarter you might get 100 points, then 20. A higher score means we’re pushing forward with improving our tooling. It doesn’t have to grow forever, but it helps us track where we’re going. And, over time, it will have an impact on other metrics we are already measuring, such as resolution time, response time, etc.
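The scoring model is simple enough to sketch in code. This is a hypothetical illustration using the point values above (A = 3, B = 2, C = 1, O = 5); a process scores the sum of its two letters.

```python
# Hypothetical sketch of the point system: A=3, B=2, C=1, O=5.
POINTS = {"A": 3, "B": 2, "C": 1, "O": 5}

def score(process_type: str) -> int:
    """Score a two-letter process type, e.g. 'AA' -> 6, 'OO' -> 10."""
    return sum(POINTS[letter] for letter in process_type)

# The example quarter: 2 AA, 1 BA, 4 CB, 1 OO
quarter = ["AA", "AA", "BA", "CB", "CB", "CB", "CB", "OO"]
total = sum(score(p) for p in quarter)
print(total)  # 39
```

Tallying each quarter this way makes the metric trivial to reproduce and compare over time.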

4) Track adoption

After release, measure:

  • Adoption: How many teammates use it
  • Usage: How often it’s run (or triggered)
  • Impact: Does it change the outcome of tickets? Does it empower teammates? Does it make their lives a bit better? Does it speed up resolution?

Even just a simple “used in 30 tickets this month” can show it matters.
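Even a tiny script over a ticket log can produce those two numbers. This sketch assumes a made-up log format (ticket id, tool used, teammate); the tool and user names are purely illustrative.

```python
# Hypothetical sketch: derive usage and adoption counts from a ticket log.
# The log format (ticket id, tool, teammate) is an assumption for illustration.
from collections import Counter, defaultdict

ticket_log = [
    ("T-101", "dns-check", "alice"),
    ("T-102", "dns-check", "bob"),
    ("T-103", "ssl-test", "alice"),
    ("T-104", "dns-check", "alice"),
]

# Usage: how often each tool was run
usage = Counter(tool for _, tool, _ in ticket_log)

# Adoption: how many distinct teammates used each tool
adopters = defaultdict(set)
for _, tool, user in ticket_log:
    adopters[tool].add(user)

print(usage["dns-check"])          # 3 runs
print(len(adopters["dns-check"]))  # 2 teammates
```

A report as plain as “dns-check: 3 runs, 2 teammates this month” is already enough to show whether the tool matters.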

5) After each cycle, reflect with a retrospective

End each quarter with:
  • Which tool or test had an unexpected impact?
  • What didn’t get used?
  • What process changed how we troubleshoot?
  • Are we building too many tools? Too few? Should we clean up or merge anything? This helps avoid tooling bloat and guides better decisions.

Example OKR

Using this approach, a good OKR could be:

Objective: Improve support processes through automation
Key Result: Identify and classify at least 4 support processes per quarter and implement tests or tooling improvements

Track each one’s complexity, frequency, effort, and impact using this model. Aim for a quarterly score > 30.

The score will vary depending on the types of processes (AA, CB, OO, etc.), but the point is to stay intentional, keep the rhythm, and track the impact.

A few rules that can help

  • Start by observing the struggles of the Support team, the friction points. They know better than anyone else.
  • Focus on Tests, not on Tools.
  • Don’t create new tools unless it’s really necessary. Always try first to expand what you already have: same UI, more tests. Creating 20 different tools is a failure: it will be a nightmare to use and maintain.
  • Keep tools simple. Avoid complex UIs.
  • Add complexity under the hood, not in the workflow. Workflows should be simple.
  • Don’t try to impose tools or tests. Success comes after adoption, you can’t force that.
  • Track what you build, measure, and report. Then repeat.

Remember: every automation counts. Every new test makes us better. That’s the goal. Are you already building?

