Quality professionals are fighting the wrong battle

I read a post on LinkedIn this week that went, roughly, like this. The level of quality in software has never been as low as it is right now. We as quality professionals have been fighting for years to raise it, and now everyone is chasing speed, more lines of code faster, everything is broken, and nobody seems to care. Someone replied underneath, in the way only LinkedIn replies do: nobody cared about quality before AI either, so why did you expect that to change now?

The back-and-forth that followed went in the usual direction. Someone mentioned professional software craftsmen who did care before and still do. Someone else brought up the analogy that AI is a second industrial revolution. The frame kept coming back to craftsmanship on one side and slop on the other, with the industrial revolution doing real work in the middle, as if the lesson of that period were obvious and on the side of the craftsmen.

I don’t think it is.

I think the quality profession is having a moral fight about purity, and it’s the wrong fight. The conversation we should be having is what “good enough” means for software now that production has become cheap and accessible. That’s a different argument, it’s harder, and it’s the one that matters.

The slop is real, and so is the grief

I want to be clear what I am not arguing.

I am not arguing the slop isn’t real. The drop in baseline care is visible. You can watch it happen in pull requests where the prompt-generated code has the shape of competence but none of the underlying choices that a person with taste would have made. The brittle selector that looks fine until it doesn’t. The test that asserts on what was easy to assert on rather than what was worth asserting on. The function that “works” against the happy path and has nothing to say about the edges. The audience I write for uses the word “slop” for this, and the post I started with used the same word, and they are both right to use it.

I am not arguing the environmental cost is rhetorical. Training GPT-4 took something in the order of 50 million litres of water [1]. Total AI water consumption in 2025 sat somewhere between 312 and 764 billion litres, depending on whose accounting you trust [2]. The pattern of where that water is taken matters: Iowa, Arizona, Chile, parts of Spain, regions that were already under water stress and are now being asked to subsidise the affordability of an output the consumers using it can’t see the bill for.

I am not arguing the analogy holds in every direction. There is a contamination effect from AI mass production that didn’t exist in the original industrial revolution. AI-generated code spreads into shared codebases, into public libraries, into the training data that produces the next generation of models. The mass-produced furniture of the 1850s didn’t make the craftsman’s wood any worse. AI mass production of code might make the codebase the craftsman is working in degrade around them. That is a real problem and the analogy doesn’t work very well for that.

And I am not arguing the discomfort is misplaced. There is something specific that happens when you have spent twenty years building craft into a discipline and a tool arrives that doesn’t care about any of it and produces output anyway. People have a word for that feeling, and the word is grief.

All of that is true. None of it tells me the quality profession should be in a moral fight about it.

Who got a dining table

The fight about craftsmanship versus mass production has been running for nearly two centuries and is still unresolved. Hans Wegner chairs are still made. So is IKEA furniture. Both serve a purpose. What changed permanently with the industrial revolution was who got to have a dining table.

I owned an IKEA dining table for twenty years. It was not an heirloom. It would not have made it into a furniture magazine. It was a flat-pack pine thing with a slightly wobbly leg and a scratch in the laminate from a moving van. It was also the only dining table I could afford at the time, and it held up under twenty years of meals, craft projects, dinner parties, and the small ordinary uses a dining table gets put to. When I think about what that table was, I keep coming back to the fact that without IKEA I wouldn’t have had a dining table. I would have had a folding camping table. The Hans Wegner option was never on the menu.

That is the part of the industrial revolution the craftsmanship-vs-slop framing keeps skipping. The shift was about an entire category of object becoming available to people who had been shut out of it. The people who could already afford handmade furniture kept buying handmade furniture. The people who couldn’t, got something.

The fast-fashion parallel is the cleanest version of the trade-off, because it shows both the upside and the downside. A cotton t-shirt costs about 2,700 litres of water to produce [3]. The fashion industry runs at roughly 93 trillion litres a year [4], making it the second-largest industrial water consumer [5]. The cost is mostly borne by people in production regions, in Bangladesh and India and Uzbekistan and Egypt, rather than by the consumer at the till. Fast fashion is a real environmental harm and it is also, simultaneously, the reason that owning more than two changes of clothes stopped being a class marker. Both of those things are true.

AI is on the same road. It is earlier on it, and the absolute numbers are smaller: AI water consumption in 2025 is still under one percent of fashion’s footprint. But it is concentrating in the same shape of place and growing fast. There is one difference that matters for the argument. Fashion’s water cost is locked in at production. Once the t-shirt is made, the water has been spent. AI’s water cost is paid continuously at operation. The model uses water every time it answers a query, and the data centre running the model uses water for as long as it is running. That changes the shape of the obligation. The operating decisions, where data centres go, when they run, how aggressively they are cooled, are all still in play. The shift is happening either way. The version of the shift we get is being negotiated right now.

The same shift, in software

Here is the version of the AI conversation I don’t see happening, and I would like to see happening.

What AI is doing is opening up software production to people who couldn’t reach it before. A partner at Quinn Emanuel, a major US litigation firm, built the firm’s internal litigation platform on Claude with what Anthropic’s writeup describes as “virtually no coding background” [6]. He treated the model like a member of the case team, gave it chronology and key excerpts and themes, and ended up with a working internal tool. Anthropic’s own non-technical staff, in legal and security and operations, are shipping internal automations and prototype systems that wouldn’t have been built in the previous regime because the cost of building them would have been higher than the value of having them. “Vibe coding” was the Collins English Dictionary’s Word of the Year for 2025 [7].

None of these tools are heirlooms. They don’t need to be. They serve a specific purpose for a specific group of people, and they exist at all because the cost of production dropped enough that someone who couldn’t previously justify having a tool could now have one. The litigation platform at Quinn Emanuel is the AI equivalent of my IKEA dining table. Not the best version of the object. Sufficient for its purpose. Wouldn’t have existed otherwise.

And this is what I keep wanting to say to the craftsmanship camp: the consumers of software already don’t ask about the internal workings. They never have. Nobody opens their banking app and wonders whether it was hand-built by a craftsman or assembled by a non-engineer with an AI assistant. Nobody fills in a hospital intake form and pauses to check the provenance of the codebase. They ask whether it works. They have always asked whether it works. The internal-purity question, “was this produced the right way,” is a producer’s question. It has never been a consumer’s question.

The quality profession’s job, the thing the profession is actually qualified to do, is to define what working means at the new floor. That is a different job from the one a lot of the craftsmanship-defending posts seem to be defending. It is also a more interesting job, and a more durable one.

The fight worth showing up for

The “good enough” conversation is a hard conversation, which is part of why nobody is having it.

It contains real questions. Where the floor sits for safety-critical software, the avionics, medical, and finance systems where “good enough” is not a thing you get to negotiate. How to contain the contamination effect, where AI-generated code feeds back into shared libraries and into training data and shapes the next generation of output. How to keep human decision points in a pipeline that is running faster than the pipeline was designed for. How to think about quality at population scale, where the unit of evaluation is “the standard we hold across millions of products” rather than “the standard we hit on this one.” Every one of those is something the quality profession could lead on, and every one is harder than the moral question of whether AI should exist.

None of that work happens while the profession is busy losing a moral fight about purity. And the fight is being lost on its own terms. The craftsmen didn’t win in 1850, they are not going to win this one, and “the slop is bad and we should refuse it” is a complaint dressed as a strategy. The conversation about what good enough means is happening with or without the quality profession at the table. The question is whether the people who have spent their careers thinking about what quality is want to be the ones shaping that conversation, or the ones standing outside it.

Stop fighting the moral fight. It was settled in 1850 and we are losing the rerun.

Start having the harder conversation. Where does the floor sit. Who is accountable. What gets traded for affordability, and what doesn’t get traded under any circumstances. How do we keep the human decision points in place when the throughput has changed. What does contamination of shared materials look like, and how do we contain it. How do we define quality at the scale of a population of products rather than at the scale of one.

That is the conversation that benefits from twenty years of quality experience. The other one doesn’t.

The IKEA table, by the way, is still standing.


  1. Pengfei Li et al., “Making AI Less ‘Thirsty’: Uncovering and Addressing the Secret Water Footprint of AI Models”, Communications of the ACM, 2025. GPT-3 training is estimated at around 5.4 million litres of water; GPT-4 is estimated at roughly an order of magnitude higher.
  2. David Mytton and Maria Lozano, “The carbon and water footprints of data centers and what this could mean for artificial intelligence”, Science of the Total Environment, 2025.
  3. European Parliament, “The impact of textile production and waste on the environment”, 2024 update. Citing WWF.
  4. Global Fashion Agenda, “Is Fashion’s Impact on Water Being Overlooked?”. 93 billion cubic metres = 93 trillion litres.
  5. UN News, “UN launches drive to highlight environmental cost of staying fashionable”, 25 March 2019.
  6. Anthropic, “How AI is Transforming Work at Anthropic”. Quinn Emanuel partner case study and internal non-technical adoption.
  7. Collins Dictionary, “Collins’ Word of the Year 2025: AI meets authenticity as society shifts”, November 2025.
