A few years into my QA career, my manager asked if I had any tips for handling support tickets efficiently. I said of course, and put together a wiki page (this was back before Confluence pushed the wiki out of most companies, anyway) describing exactly how I had my desktop set up. Most people never followed it. Some said it was too rigid. Honestly, I think half of them never read it at all. And that’s how it goes, right? You work out the best way to do something, document it carefully, and then it just sits there.
What I’d missed was that a system that works for one person carries hidden assumptions: shared goals, shared instincts about what matters, a shared sense of what “good enough” looks like. Years earlier, I’d worked at a bar, and we had a clear system there too: glasses in designated places, always, so you could reach for something without looking and know exactly what you’d get. That system didn’t come from a document. It came from working together until we agreed on where things went and why. Once we had that shared understanding, the instructions became unnecessary.
What I’m hearing now, in workshops and at conferences, is a version of the wiki page problem playing out with AI.
Some people on the team have figured something out and it works brilliantly for them. Others struggle to get reliable results. Some don’t feel comfortable with the tools at all and avoid them. And if you’re the manager of a quality team, or the head of QA, that’s a problem: you have people breathing down your neck asking, “Why aren’t we seeing the productivity gains we were promised when we paid for these tools?”
The obvious response is to write something down. An AI policy. Usage guidelines. A list of what the tools can and can’t be used for. It feels responsible. It’s also exactly what I did with the wiki page.
Policies are written for the organisation, not for the person doing the work. They tell people what to avoid, but they don’t help anyone decide when to use AI for a specific task, how to give it enough context, or what to do when the output isn’t right. And they assume everyone is starting from the same place, with the same instincts about risk and quality, which they aren’t.
The teams getting uneven results from AI aren’t missing better tools or a stricter policy. They’re missing a shared way of deciding when and how to use it.
A playbook addresses that differently. You define guiding principles together rather than handing down rules: which activities are worth using AI for, what good output looks like for each one, where the guardrails are and why, what to do when results fall short. It’s a living document, built by the team, tested in practice, revised as the tools and your understanding of them develop. The process of building it together is what creates the shared context that makes results consistent across the team, not just for the person who figured it out first.
Teams can absolutely do this without outside help, and I’d encourage it. The time investment to do it well is real: agreeing on principles, running sessions, pairing people to test things and teach each other what they find. For teams that want a faster start, I work through it with you in a single 60-minute session and deliver a polished starter playbook within two days. Something concrete the team can react to and build on, rather than starting from a blank page.
In the next post, I’ll go into what the playbook actually looks like and how to run the process with your team.

