AI did not break testing

Sometimes I despair about the QA profession.

Not because the work is hard, or because quality is complex. Both of those are true, but they always have been. What gets to me is where so much of our collective energy still goes.

We have much bigger fish to fry. We have the tools to do it. And yet we are still arguing about what to call the fish.

I recently commented on a LinkedIn post that drifted, as these conversations often do, into a debate about titles. QA versus tester. Tester versus quality engineer. Whether QA should exist at all. Whether words matter.

They do matter. Precision with language helps us think more clearly. Shared understanding helps teams work better. But fighting over labels, especially when we already understand each other well enough to argue about them, does not move the profession forward.

What would move us forward is learning.

A large proportion of people enter testing and quality roles sideways. They come from support, from development, from the business. That is not a problem in itself. Some of the best people I know found testing that way. The problem is what happens next.

Because there is no widely shared baseline of education, every new generation has to relearn the same lessons. Lessons that were already understood twenty years ago. Lessons about risk, about feedback, about context, about why testing exists at all.

Worse, many of the loudest and most visible voices are still teaching the state of the profession as it was decades ago. Those materials are easy to find, well marketed, and familiar. So new testers encounter them quickly, absorb them, and pass them on. The cycle repeats.

A few people break out of it. Most of the ones I know who do eventually leave testing altogether. They become delivery leads, engineering managers, product people. Not because they stopped caring about quality, but because it is exhausting to fight the same battles over and over again.

I understand that fatigue. After twenty years, I feel it too.

What frustrates me most is not that we are still debating fundamentals, but that we are doing so at a moment when we need our collective maturity more than ever.

Instead of asking what we have learned over the last two decades, instead of applying the practices that actually helped us deliver better software, we are circling old arguments while staring at what many see as a new black box: AI.

AI is not unknowable. It is not magic. It is not even particularly new in the ways that matter here.

We already have tools for dealing with systems whose internal logic we cannot fully inspect. We have observability. We have monitoring. We have feedback loops. We have rejection mechanisms. We have guardrails.

When we work with large language models, we can observe their outputs. We can study how they respond to different inputs. We can shape our interactions based on evidence, not hope. We can reject answers that do not meet our standards before they ever reach a user. None of that requires blind trust. None of it requires pretending the system is deterministic or safe by default.
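The rejection mechanism described above can be sketched in a few lines. This is a minimal, hypothetical example, not a real library: `violates_standards`, `guarded_reply`, and the stub model are all invented names, and the checks are stand-ins for whatever standards a team actually sets.

```python
import re

def violates_standards(answer: str) -> list[str]:
    """Return the reasons an answer fails our (illustrative) standards; empty means it passes."""
    reasons = []
    if not answer.strip():
        reasons.append("empty response")
    if len(answer) > 2000:
        reasons.append("exceeds length budget")
    if re.search(r"\bas an ai\b", answer, re.IGNORECASE):
        reasons.append("boilerplate disclaimer")
    return reasons

def guarded_reply(generate, prompt: str, max_attempts: int = 3) -> str:
    """Call the model, reject answers that fail the checks, retry, then fall back."""
    for _ in range(max_attempts):
        answer = generate(prompt)
        if not violates_standards(answer):
            return answer
    # Nothing acceptable after max_attempts: the user never sees a bad answer.
    return "Sorry, I could not produce a reliable answer."

# A stub standing in for a real LLM call, so the sketch runs on its own.
def stub_model(prompt: str) -> str:
    return "As an AI, I cannot..." if "risky" in prompt else "Here is a concrete answer."

print(guarded_reply(stub_model, "normal question"))  # passes the checks
print(guarded_reply(stub_model, "risky question"))   # rejected every attempt, falls back
```

The point is not the specific checks. It is that the model's output is observed and judged against explicit criteria before anyone downstream depends on it, which is ordinary test thinking applied at runtime.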

This is not clever. It is not novel. It is simply applying the same thinking we have used for years to a slightly different kind of system.

That is why it feels so wasteful to be stuck arguing about titles and definitions. The real work is not naming roles. The real work is improving feedback, judgement, and decision making in the presence of uncertainty.

Quality has always been contextual. Testing has always been one input into understanding risk. AI does not change that. It just makes the consequences of weak thinking show up faster.

We do not need to reinvent the profession. We need to remember what we already learned, and have the discipline to apply it.
