AI Use Case Selection

Four criteria for getting more from the AI tools you already use

You’re using AI tools in your testing work and want to get more out of them. This course gives you four criteria for working out which tasks they’re actually suited to, and a worksheet to apply them to your current work straight away.

What you’ll learn

  • Apply four criteria – risk, reversibility, context requirements, and judgment load – to the tasks you do every day
  • Identify which tasks in your current work AI will genuinely help with, and which will cost you more time than they save
  • Work through a realistic QA scenario to see how the criteria apply before using them on your own work
  • Produce a scored shortlist of 2–3 use cases with a rationale you can act on or share with your team

Who this is for

You’re a quality engineer using AI tools, either because you chose them or because they were rolled out across your team. You’re producing output, but you’re not always confident you’re using the tools on the right tasks, or sure how to tell whether you are. You want a clear framework for making that decision, not a high-level overview of AI in testing.

This course is not for people new to software testing or quality engineering. It assumes you know your domain and want to make better decisions about where AI fits within it.

“Kat shared a practical, evidence-based approach to evaluating AI tools quickly and responsibly, helping teams cut through the hype and make confident decisions without long pilots or heavy process.”

Agile Yorkshire, meetup organiser

Prerequisites

  • You work in a testing or quality engineering role
  • You’ve used at least one AI tool in a work context (regular or confident use isn’t required)
  • No prior knowledge of AI evaluation frameworks is needed

Course description

Using AI tools is straightforward. Knowing which tasks they’re worth using for is harder. Most guidance focuses on how to prompt better, but if you’ve chosen the wrong task, no prompt will fix that. The real question comes earlier: which of the things you do every day is AI actually suited to?

This course gives you four criteria for answering that question: risk, reversibility, context requirements, and judgment load. You’ll see how they apply through a realistic QA scenario, then use the Use Case Selection Worksheet to score your own tasks. You’ll finish with a shortlist of 2–3 use cases worth pursuing, along with a scored rationale you can act on.

Your instructor

Kat Obring has spent over 20 years in software testing, QA engineering, and delivery leadership. She runs workshops and courses for quality engineers navigating AI adoption, and has spoken on evidence-based AI evaluation at conferences including Agile Testing Days, HUSTEF, and Motacon. Her approach applies the same test-design thinking she’s used throughout her career: criteria first, evidence second, conclusions last.

“Kat’s workshop sharpened my day-to-day use of AI.”

Nathaniel Farnsworth

Course Content

Introduction
Welcome to the course!
Why use case selection matters
Lesson 1: The real source of AI slop
Lesson 2: The four criteria
Applying the criteria
Lesson 3: Worked example of mapping a mandated tool to the criteria
Course project: build your own use case shortlist

Enquire about a workshop with me