I’ve been seeing a lot of posts on LinkedIn recently that make the same claim: speed of writing code was never the bottleneck in software engineering. The real bottleneck was always thinking, choosing the right thing to build, understanding the problem. AI has just made that painfully obvious.
I partly agree, but the framing is incomplete. What’s happening now isn’t just a revelation about where the bottleneck has always been. It’s a reversal in what gets valued, and by whom.
Skills that testers and Quality Engineering practitioners have always brought to teams (critical thinking, context awareness, judgement, problem framing) were often dismissed as soft, secondary, or less rigorous than “real” technical work. Now, as AI handles more of the execution, those same skills are being recognised as essential. This creates an opportunity for those who already valued these skills to deepen trust and demonstrate impact more clearly, while those who didn’t are starting to see why they matter. Mary Parker Follett wrote in the 1920s that “the insight to see possible new paths, the courage to try them, the judgment to measure results – these are the qualities of a leader.”1 That insight and judgement have always been central. We’re just noticing it again.
The QA and testing community has a defensive move of its own. The argument goes: AI can automate, but it can’t really test because it lacks context. That’s true, but it’s also narrow, and it misses something significant about what’s happening. AI is a democratising force. People who couldn’t or wouldn’t invest the time to learn coding just to write test automation can now build what they need without that upfront cost.
Someone who understands their testing context can now build a small tool to parse logs, generate test data, or automate a repetitive setup task without needing to become proficient in a language, a framework, and all the surrounding tooling. The barrier was never the idea. It was the technical overhead required to act on it. As Ursula K. Le Guin put it, “I don’t know how to build and power a refrigerator, or program a computer, but I don’t know how to make a fishhook or a pair of shoes, either. I could learn. We all can learn. That’s the neat thing about technologies. They’re what we can learn to do.”2 The dependency shifts. The balance changes.
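To make that concrete, here’s a minimal sketch of the kind of throwaway tool I mean: a few lines of Python that pull failing requests out of a plain-text log and summarise them. The log format, the field order, and the ERROR level layout are all assumptions for illustration; the point is the size of the investment, not the specifics.

```python
# Minimal sketch: summarise failing requests from a plain-text log.
# Assumes a hypothetical line format like:
#   2024-05-01T10:32:07 ERROR /checkout 500 request timed out
# Adjust the pattern to match whatever your own logs look like.
import re
import sys
from collections import Counter

LINE = re.compile(
    r"^(?P<ts>\S+)\s+ERROR\s+(?P<path>\S+)\s+(?P<status>\d{3})\s+(?P<msg>.*)$"
)

def summarise(lines):
    """Count ERROR lines grouped by (path, status code)."""
    failures = Counter()
    for line in lines:
        match = LINE.match(line)
        if match:
            failures[(match["path"], match["status"])] += 1
    return failures

if __name__ == "__main__":
    # Usage: python failing_requests.py app.log
    with open(sys.argv[1], encoding="utf-8") as log:
        for (path, status), count in summarise(log).most_common():
            print(f"{count:>5}  {status}  {path}")
```

That whole script is the sort of thing an AI assistant can draft from a one-line prompt, and a tester who understands the context can adapt it in minutes rather than weeks.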
That shift has implications. Critical thinking becomes more central to how work gets done, and people with that skill no longer need to wait for engineers to realise every idea. They can build tools, run small experiments, act on what they understand. This matters because critical thinking is not an innate gift. Learning to think critically about software quality takes practice, feedback, and often discomfort. It means questioning your own assumptions, sitting with ambiguity, and being wrong in ways that help you get better. But it is a learnable skill, and the people who invest in it now have more leverage than they’ve had in years.
If you’re looking to sharpen how you frame observations and communicate value to stakeholders, I’ve built a short course that walks through exactly that: small experiments, clear questions, and translating what you see into insights that get heard. You can find it at Stakeholder Ready.
The power dynamic between technical skill and critical thinking has shifted. That opens new possibilities for how QE practitioners work, what they build, and how they influence delivery. The question now is what you do with that.
1. Mary Parker Follett, Freedom and Co-ordination: Lectures in Business Organization (1949). For background on Follett, see her Wikipedia article.
2. Ursula K. Le Guin, “A Rant About Technology”, ursulakleguin.com.

