By Tobias Müller • February 12, 2019

Automated testing makes testing obsolete!

That is a bold statement, and throughout this article I will show why it is in fact the case.

In addition you will:
(1) refresh your testing glossary in terms of “checking”, “testing” and “approval”
(2) learn why you might read statements like “testing cannot be automated” in the testing community
(3) see why automated testing will not be required in the future

Some background to bring you up to speed

Is it “testing” or “checking”? The first discussion, which has been going on for quite some time, is testing vs. checking. If you are not familiar with Michael Bolton’s Rapid Software Testing (RST) [1], you might never have heard of it. If you want in-depth details on the topic, follow the link to this article [2]. If you have some additional spare time, check out the very intense Twitter discussion that followed it [3].

To save you some time, I copied the latest definitions of testing and checking below, followed by a one-sentence summary of my own.

Before you get the summary, here are the definitions as used by Bolton et al. [2]:

“Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modeling, observation, inference, etc.”

and

“Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.”

My one-sentence summary: Testing helps you learn something new, while checking confirms something you already knew.
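To make the distinction concrete, here is a minimal sketch of a check; the word-counting “product” and the decision rule below are invented for illustration and not taken from any real tool:

```python
# A "check" in the RST sense: an algorithmic decision rule applied to a
# specific observation of the product. Both functions are hypothetical.

def count_words(text: str) -> int:
    """The 'product' under observation: a trivial word counter."""
    return len(text.split())

def check_word_count(text: str, expected: int) -> bool:
    """Decision rule: the observed count must equal the expected count."""
    return count_words(text) == expected

# The check can only confirm (or refute) what we already anticipated;
# it cannot discover behavior nobody thought to encode as a rule.
assert check_word_count("testing is not checking", 4)
```

Note that the check is only as good as the rule someone wrote down; discovering that the rule itself is wrong or incomplete is testing, not checking.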

How does “approval” fit into that? Approval is an interesting concept which implies that you learn about the software through its usage, so it can be classified as testing. Think about it like this: a first version of a new word processor was released and happily accepted by the market. Nevertheless, a bug slipped through that creates a new page every time a user triple-clicks a page (it is a weird coding bug). Version 2 fixes this problem, and there is an uproar from your user base afterwards, as 90% of your users relied on that functionality.

That is what approval is about: don't use the specification as the input on how a system should behave. Instead, use the current version of the system to identify differences between the currently accepted state (which does include bugs) and the newly anticipated state, i.e. your new software version.
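The idea can be sketched in a few lines; the snapshot file name and the triple-click outputs below are invented to mirror the word-processor story, not any real tool's format:

```python
# Approval-testing sketch: the currently accepted output, bugs
# included, is the oracle. A new version is flagged whenever it
# deviates from the approved snapshot, and a human then decides
# whether the change is a fix or a regression.
import tempfile
from pathlib import Path

def verify(received: str, approved_file: Path) -> bool:
    """Compare new output against the approved snapshot.

    On the first run there is no snapshot yet, so the received output
    becomes the approved baseline (a human should review it once).
    """
    if not approved_file.exists():
        approved_file.write_text(received)
        return True  # baseline recorded, nothing to compare yet
    return approved_file.read_text() == received

snapshot = Path(tempfile.mkdtemp()) / "triple_click.approved.txt"
# Version 1 behavior (including the triple-click page bug) is approved.
assert verify("triple click -> new page", snapshot)
# Version 2 "fixes" the bug: the deviation is detected, prompting a
# human decision rather than a silent pass.
assert not verify("triple click -> no effect", snapshot)
```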

Why do I think you must be able to differentiate between testing and checking?

It helps you to better position test automation tools in the domain of testing. If a vendor is selling you a tool for test automation, it doesn’t mean that they are selling you a tool that automates your testing. They are selling you a tool that allows you, in the best case, to automate your checking. And that is an important difference.

You still need to write your test cases (see my blog entry on that [4]). We all know that writing test cases is an extreme simplification of:

  • Understanding requirements
  • Understanding specifications
  • Understanding software implementation
  • Identifying gaps between requirements and specifications
  • Discussing with stakeholders what the software should really do

Most of the time you will be the first person who really challenges the requirements etc. And that is only happy-path testing. Think about negative testing as well. It requires a completely new test vector that you will need to cover:

  • Understanding platform limits to challenge them
  • Understanding underlying development frameworks
  • Understanding dependencies
  • Understanding deployment implications, ….

In a nutshell: to write test cases you need to learn a lot, which brings us back to the previously mentioned “testing”. Therefore, there is no tool out there today that can automate this extremely difficult process. If somebody tries to make you believe it is possible, challenge them to explain how an AI-based tool would understand the “Intended Use” of your application (I borrowed the term Intended Use from the FDA [5]).
In fact, it is a bit more than what the FDA asks for: if we also speak about negative testing, it needs to include an understanding of what a malicious use would be.

All of this leads to one single result: based on today’s technology, testing cannot be automated.

Checking, being a discipline within the context of testing, can on the other hand be automated. What a normal test automation tool should help you with is executing test cases. That is the bare minimum. Some advanced tools, e.g. TestResults.io [6], also help you to collect evidence automatically and provide you with means of approval.
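What that bare minimum plus evidence collection might look like, stripped down to a sketch: the record format below is my own assumption for illustration, not TestResults.io's actual API.

```python
# Hedged sketch of automated checking with evidence collection: run a
# scripted check, record what was expected, what was observed, and when.
import json
import time

def run_check(name, action, expected):
    """Execute one scripted check and return an evidence record."""
    observed = action()
    return {
        "check": name,
        "timestamp": time.time(),
        "expected": expected,
        "observed": observed,
        "passed": observed == expected,
    }

# Example: a trivial check whose evidence record could later be
# reviewed (approved or rejected) by a human.
evidence = run_check("addition", lambda: 2 + 2, 4)
print(json.dumps(evidence, indent=2))
```

The point of the evidence record is that a human can audit afterwards what was checked and against which expectation, rather than seeing only a green or red light.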

Will this change in the future?

As the future is not predictable, I cannot give you a straight answer. Understanding a piece of software completely (see the Halting Problem [7]) is the first step in the direction of automated testing, within the description given above. Even if that were ever solved, the next challenge is to identify the possible state transitions that are important to fulfill the Intended Use. You can imagine this might be quite hard. Why? Do you know any human out there who would reliably claim that he/she fully understands a piece of software?

Not only do we need a kind of AGI (Artificial General Intelligence) [8] to automate testing, but it also needs to be far more evolved than today’s human intelligence, based on your answer to the question about a human fully understanding a piece of software. If we ever reach this point in technology, then testing should not be a concern anymore, as the code was already created by the AGI.

This leads us to another statement: as soon as we have a real automated testing solution, we don’t need testing anymore. Which is exactly what I promised to show.

Does it really affect you?

Does all of this affect you in your daily life? This is a valid question, and I assume it does not. Still, it is always good to understand the differences.

Does it affect me, Wojciech Olchawa? Yes, it does!

Whenever I must explain to our customers why TestResults.io is superior to any other test automation tool.... errrrr.... check automation tool, this differentiation helps, as TestResults.io is also a tool that helps you with testing, not only checking. For example, the human factor, a specialized technique that automatically "checks" the SUT for deviations from previous runs, allows you to gather insights, i.e. new learnings, more quickly. Therefore, it does not only automate your checking but also assists you in your testing.

I hope this short entry helped you understand the differences between testing, checking and approval. Perhaps even more importantly, it helped you find the right tools for your daily testing. Why do we nevertheless market TestResults.io as an Automated Testing tool? Good question! Contact me and I will tell you.


1) http://www.developsense.com/courses.html
2) http://www.satisfice.com/blog/archives/856
3) https://twitter.com/marlenac/status/865234381622738945
4) https://www.testresults.io/blog/how-to-write-a-really-good-test-plans
5) https://www.fda.gov/RegulatoryInformation/Guidances/ucm073944.htm
6) https://www.testresults.io
7) https://en.wikipedia.org/wiki/Halting_problem
8) https://en.wikipedia.org/wiki/Artificial_general_intelligence

 
