Test automation and generative AI are transforming how software quality is delivered, but separating marketing hype from practical value has become increasingly difficult. New tools promise faster testing cycles, smarter automation, and fewer human errors, yet many teams struggle to understand what these technologies can realistically achieve in day-to-day testing work.
In this expert discussion, four respected voices in the software testing community — Larry Goddard, Michael Bolton, Paul Grossmann, and Tobias Müller — come together to share honest, experience-driven insights on what generative AI can and cannot do for software testing teams. Rather than focusing on bold predictions or vendor promises, the conversation is rooted in real testing experience, practical examples, and long-term thinking about quality.
Throughout the session, the panel explores real-world use cases for generative AI, ethical responsibilities when AI causes errors, the limitations of large language models, and why human testing expertise remains essential even as automation becomes more advanced. The result is a nuanced and realistic perspective on where AI fits into modern testing practices.
Cutting Through the Marketing Hype Around Generative AI
Generative AI tools are often positioned as silver-bullet solutions that promise to replace large parts of the testing process. Marketing narratives suggest that AI can automatically generate complete test suites, identify all defects, and dramatically reduce the need for human testers.
The panel challenges this narrative by explaining how generative AI actually works. These systems generate outputs based on patterns in existing data rather than true understanding. They do not comprehend software behavior, business logic, or user intent in the same way human testers do. This distinction is critical, especially when testing complex systems where context, risk, and real-world usage matter.
As a result, generative AI can be a powerful assistant when used thoughtfully, but it becomes a risky decision-maker when relied on without proper oversight. Treating AI as an authority rather than a tool can lead to blind spots, missed defects, and false confidence in test results.
ChatGPT and AI Models in Software Testing
Tools like ChatGPT and other large language models can support testers in several ways:
- Generating ideas for test scenarios
- Helping structure test cases
- Speeding up documentation and drafting prompts for exploratory testing
However, the experts emphasize that AI-generated results must always be reviewed. These tools can produce outputs that look confident and convincing while still being incorrect or incomplete.
The key takeaway: AI enhances productivity, but it does not replace critical thinking or testing expertise.
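That "assist, then review" workflow can be sketched in a few lines. This is a hypothetical illustration, not any tool's actual API: the function names are invented and `ask_llm` is a stub standing in for a real model call.

```python
# Minimal sketch of the "assist, then review" workflow: an LLM drafts
# test-scenario ideas, and a human gate decides what enters the suite.
# All names are hypothetical; ask_llm is a stub, not a real client.

def build_scenario_prompt(feature: str, risks: list[str]) -> str:
    """Compose a prompt asking a model for test-scenario ideas."""
    risk_lines = "\n".join(f"- {r}" for r in risks)
    return (
        f"Suggest test scenarios for the feature: {feature}\n"
        f"Known risk areas:\n{risk_lines}\n"
        "Return one scenario per line."
    )


def ask_llm(prompt: str) -> list[str]:
    """Stubbed model call; in practice, swap in a real LLM client."""
    return [
        "Login with an expired session token",
        "Concurrent checkout from two browser tabs",
    ]


def human_review(scenarios: list[str], approve) -> list[str]:
    """AI output is only a draft: keep what a human tester approves."""
    return [s for s in scenarios if approve(s)]
```

The point of the sketch is the last function: nothing AI-generated reaches the test suite without passing a tester's judgment about relevance and risk.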
Ethical Questions: AI Errors and the Autonomous Driving Comparison
One of the most thought-provoking parts of the discussion draws a comparison between AI in software testing and autonomous driving technology. Both are areas where automation promises increased efficiency and reduced human effort, but both raise serious questions about responsibility when something goes wrong.
The central question posed by the panel is simple: if AI causes an error, who is responsible?
Just as self-driving cars still require human accountability, AI-powered test automation must operate under clear ownership and ethical responsibility. Someone must decide what is tested, how results are interpreted, and when software is safe to release. Over-trusting automation introduces risk, especially in complex systems or regulated environments where failures can have serious legal, financial, or safety consequences.
This comparison highlights an important principle: automation does not remove responsibility. It shifts it.
Practical Value of AI in Test Automation
Despite the cautions, the panel also highlights where generative AI adds real value when used responsibly:
- Supporting large-scale test creation
- Assisting exploratory testing through pattern recognition
- Reducing repetitive manual work
- Helping testers focus on higher-value activities
AI works best as a supporting layer, not as a replacement for testers or structured testing strategies.
Measuring Human Testing Expertise in an AI World
While AI can process large volumes of data quickly and consistently, it lacks several qualities that are central to effective software testing. These include context awareness, business understanding, product intuition, and empathy for real user behavior.
Human testers bring these qualities into the testing process. They understand how software is actually used, what matters most to customers, and where failures would have the biggest impact. This expertise is essential when evaluating AI-generated outputs, deciding which issues are truly important, and ensuring that test coverage aligns with real-world risk.
Rather than diminishing the role of testers, AI makes human expertise more visible and more necessary.
Why AI Is Not Taking Testing Jobs
The panel is clear: AI is not taking testing jobs.
Instead, roles are evolving. Testers who understand how to work with AI, recognize its limitations, and apply human judgment will become even more valuable. AI shifts the focus away from repetitive execution toward analysis, strategy, and quality ownership.
Key Moments and Timestamps
| Time | Topic |
| --- | --- |
| 04:30 | Marketing Hype and ChatGPT |
| 12:25 | Ethical Aspects and Autonomous Driving |
| 25:00 | AI in Test Automation |
| 41:00 | AI-powered Tools |
| 52:49 | Measuring Human Testing Expertise |
| 01:24:20 | Why AI Is Not Taking Our Jobs |
Experience AI-Supported Test Automation in Practice
If you’re exploring how AI-powered test automation can help you automate complete user journeys without sacrificing quality or confidence, consider signing up for a TestResults demo.
Discover what it feels like to release software that actually works, with fewer flaky tests and AI support that complements human expertise.
Watch the Full Expert Discussion
If you want to dive deeper into the conversation and hear the full, unfiltered discussion from the panel, you can watch the complete session on YouTube. The experts go beyond theory and share practical perspectives on generative AI, test automation, ethics, and the evolving role of human testers.