Practical quality workshops for software teams
I help QA and engineering teams make sharper quality decisions: on AI tools, on metrics, on what’s actually worth measuring.
I’m Kat Obring. I’ve spent twenty years in software as a QA engineer, a test lead, and a Head of Delivery. I’ve seen quality initiatives fail because they tried to measure everything, ran pilots with no clear success criteria, and adopted tools because someone in a meeting said so.
The frameworks I teach are built around a single principle: evidence beats guesswork. A well-designed experiment reveals more than months of planning. And asking the right question before you start a pilot is worth more than any tool you could choose.
I run workshops for engineering and QA teams, speak at conferences, and occasionally write about quality, metrics, and the gap between what AI promises and what it delivers.
Workshops for engineering and QA teams
Both workshops are designed for delivery with your team, remote or in-person. They’re built around structured frameworks, not slides and talking. Your team leaves with something they’ve actually practised.
Deciding Fast
A structured approach to evaluating AI tools before you commit
Most AI evaluations in software teams start with a hope and end with an inconclusive result. The pilot runs for three weeks. Nobody agrees on what good would look like. The tool is either cancelled or adopted by default.
Deciding Fast is a four-hour workshop that interrupts that pattern. Your team works through a four-step framework (Question, Uncertainty, Constraints, Action), applied to a real AI tool you’re currently evaluating or considering. By the end of the session, they have a structured evaluation, a decision, and the habit of asking the right question before running a pilot.
For QA and software engineering teams who are using or considering AI tools and want a principled way to evaluate them.
4 hours, remote (requires individual devices and a Miro board). In-person delivery is available; contact me to discuss setup.
The Q.E.D. Workshop
Build a quality metrics practice your team will actually use
Most quality dashboards don’t drive decisions. They track things that are easy to measure, not things that matter. And most quality initiatives try to fix too many things at once, run for too long, and deliver too little.
The Q.E.D. Workshop gives your team a repeatable framework for quality improvement: identify one problem worth solving, design two or three metrics that actually measure it, and run a short experiment to test a solution. The cycle takes two to four weeks. Teams learn to make evidence-based quality decisions rather than arguing about what to fix next.
Q.E.D. stands for Quality-focused Experimentation and Development.
For engineering teams, quality leads, and QA engineers who want to move from gut-feel quality decisions to a shared, data-driven approach.
Half-day or full-day, depending on team size and depth. Remote or in-person.
1:1 coaching
If you’re working through something specific — a career decision, a stakeholder conversation that keeps going wrong, what to actually do about AI pressure from your manager — sometimes the most useful thing is an hour to think it through properly with someone who knows this territory.
Sessions are one hour, focused on whatever you bring. No fixed curriculum.
Single session: £100 — Book a time directly
Bundle discount: 10% off when you book 3 or 6 sessions. Send me an enquiry via the form below if you’re interested.
Learn at your own pace
If you’re an individual tester or quality engineer looking to develop your own skills, I also offer self-paced courses.
Stakeholder Ready is a five-day course on communicating quality risks in language that lands with developers, product managers, and leadership. It’s practical, short, and designed around the situations testers actually face.
"I am impressed by how simple, but concrete, questions can effectively restructure thoughts and lead to significantly clearer decisions and better outcomes."
"Kat provided a much-needed framework for evaluating AI tools without the 'six-month pilot'. She reminded us that AI is not a tool decision, but a way to protect and empower human judgment."
Hima S.
"Kat shared a practical, evidence-based approach to evaluating AI tools quickly and responsibly, helping teams cut through the hype and make confident decisions without long pilots or heavy process."
Work with me
If you’re thinking about bringing a workshop to your team, or you want to talk through what would be most useful, get in touch. I’ll reply within a couple of days.
