Apples vs. Pears

Apples are better than pears!

Well – actually, pears are better than apples – if you ask me.

In the testing community, there’s ongoing trench warfare around vocabulary, primarily manifesting in embittered discussions around “automated vs. manual” test(er)s. Some people immediately declare that one or the other doesn’t even exist and that they don’t know what it means. Others proclaim that only one of them is ACTUALLY testing, while the other is mere “checking”, a vastly inferior activity.

Some, like Maaret Pyhäjärvi, take a more productive and proactive approach and suggest a new naming system. And if asked: yes, I find her model of “attended” vs. “unattended” testing more helpful.

However, the way I see things, this is not a matter of nomenclature. Like many debates in testing, it’s trying to solve the wrong problem.

Let me explain.

To my mind, the main question here is:

Is there a fundamental, qualitative difference between the two methods that makes one objectively better than the other?

To me, the answer is no. There are plenty of quantitative differences, but at the end of the day the real question is not how the test is executed – the real question is how it was conceived.

I’ll take a well-thought-out automated test suite, created by pairs or ensembles of developers and test specialists, over a list of test cases exercised by a human any day.

There are many reasons for that, and for me, the main ones are speed, reliability, and repeatability.

Speed, because if I have an automated test suite that’s strategically built and leans heavily on small, easy-to-understand, and easy-to-maintain tests, I can run a full regression suite every single time someone wants to merge a change. That’s huge.
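To make that concrete, here’s a minimal sketch of the kind of small, focused check such a suite leans on. Playwright is only an illustrative choice, and the URL and link name are hypothetical:

```ts
import { test, expect } from '@playwright/test';

// One small, easy-to-read regression check. A suite of these can run
// in parallel on every merge request and finish within minutes.
test('logged-out visitor is offered a sign-in link', async ({ page }) => {
  await page.goto('https://shop.example.test/'); // hypothetical URL
  await expect(page.getByRole('link', { name: 'Sign in' })).toBeVisible();
});
```

Hooking such a suite into the merge pipeline is then a few lines of CI configuration, so the regression run happens before every merge rather than on a nightly schedule.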

Reliability, because machines don’t have bad days and aren’t distracted by a sick father in the hospital or a kid being bullied at school. To be very clear: I work with humans, I want them to be human, and I do not, ever, blame anybody for having personal issues. That is exactly why I want to build my test suites to be independent of all this.

Repeatability is in the same vein as the previous point: a machine will do the very same thing every time. If that works fine 8,000 times, and then it doesn’t? I know something changed. And that something is relevant to the test. It’s not a typo in a password, or a mix-up of accounts between TestEnv1 and TestEnv2. It’s not necessarily a bug, but it is something I can find, assess, and address as appropriate.

However, all of the above does not mean I don’t see any value in humans using their skill to test applications. Humans have this wonderful ability to assess things quickly, taking into account a gazillion implicit rules. If I open a website, I can see immediately when it’s “broken”. A machine may not.

When I use a login flow, I know immediately that a “don’t accept pasted text” rule does not make sense, because many people use password managers (as they should!). My automated test, on the other hand, can’t deal with a password input that won’t allow pasting. Which, in this case, would actually be a good thing: the failing test would, hopefully, prompt someone to go and ask the designers about this particular requirement.
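To sketch what that could look like (again assuming Playwright; the URL, field label, and secret are made up, and the clipboard API needs a secure context plus Chromium-style permissions):

```ts
import { test, expect } from '@playwright/test';

// Simulates a user pasting a password-manager entry. If the input
// blocks pasting, this test fails, and the "no paste" requirement
// gets questioned instead of silently shipped.
test('password field accepts pasted text', async ({ page, context }) => {
  // Clipboard access must be granted explicitly (Chromium).
  await context.grantPermissions(['clipboard-read', 'clipboard-write']);
  await page.goto('https://shop.example.test/login'); // hypothetical URL

  // Put a generated secret on the clipboard, as a password manager would.
  await page.evaluate(() => navigator.clipboard.writeText('s3cret-from-manager'));

  const password = page.getByLabel('Password'); // hypothetical label
  await password.click();
  await password.press('ControlOrMeta+V'); // Ctrl+V, or Cmd+V on macOS

  await expect(password).toHaveValue('s3cret-from-manager');
});
```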

And that, really, is why I think it doesn’t matter one way or the other, and why I’m not super bothered about the terminology we fight over. There is no war to be had: choose whichever approach lets you release with reasonable (for your domain) confidence, so you can get customer feedback as early as possible.

More often than not, it’ll be a mix of multiple methods. Apples and pears.