Katja Obring

You were promised speed. You got a new job instead.

You know the feature inside out. You have tested it, broken it, rebuilt the test suite around it twice. So you ask the AI to generate a few test cases. Save yourself twenty minutes. What comes back looks reasonable. The structure is right. The naming conventions are close enough. Then you start reading. The first …

Review as analysis, not authority, when working with AI

Every time the topic of using AI for code reviews comes up, someone will eventually say that it is like letting the AI mark its own homework. It sounds clever. It also sounds responsible. Underneath it sits a familiar discomfort about oversight, trust, and professional judgement. As an analogy, it points attention away from the …

Become the tester people listen to: introducing Stakeholder Ready

If you work in testing or quality engineering, you know the feeling. You see risks early. You spot patterns before anyone else. You raise concerns because you care about what happens next. And yet the conversation stalls. People nod politely, then move on. It is frustrating. Not because you want attention, but because you want to make …

Why the Experimenter’s Mindset Outlasts Automation in Software Testing

The experimenter’s mindset beats the automation mindset. A few months ago, someone at a conference asked me whether I thought testers would still have jobs in five years. It wasn’t a joke. You could hear the anxiety in the room, because the arrival of generative AI has reignited a very old fear in our industry: …
