A single penny tips a price; a form crashes when a character is inserted at the far left of a field. Findings like these show how valuable natural intelligence remains in testing. Automation tests diligently, but it mainly checks what is already known. Testing discovers the unexpected, forms hypotheses, and digs into systems until patterns and causes become visible. The distinction between testing, checking, and targeted digging through data and behavior is crucial. AI supports generation, prioritization, and repetition, but curiosity, contextual knowledge, and judgment are what create quality. Those who understand software as a socio-technical system choose their tools wisely and keep people in charge.
In this episode, I talk to Christian Brandes and Jonas Poller about testing with natural intelligence. We start with two real-world finds: the gross-net flick-flack, where toggling a value back and forth makes a price creep up by a cent, and a crash that only occurs when a character is inserted at the far left of an input field. This leads to bigger questions: How creative is AI really? What is testing, what is checking, and what should we call it when models merely rummage through data?
"I entered some number, switched back and forth several times and suddenly this number, in this case a price, increased by one cent without it actually having to do so." - Jonas Poller, Christian Brandes
Dr. Christian Brandes works as Head of Test & QA at isento GmbH. He has supported project teams for many years as a coach and consultant on software testing and quality assurance, requirements engineering, and agile development processes. The ISTQB®-certified testing specialist (including Full Advanced Level and AI Tester) has worked in various IT projects as a test manager, test architect, test designer, and test automation specialist. He is particularly interested in test process improvement, the testability of requirements, and ensuring "Quality from the Beginning". He also works as a trainer and university lecturer and is a regular speaker at specialist conferences. Publications in trade magazines and books as well as podcasts and videos round off his portfolio.
After a varied career in IT, Jonas Poller has found his true calling in software quality assurance. Since successfully completing the tester trainee program at isento, he has devoted himself to the challenges of software testing in customer projects, especially test automation and exploratory testing. With his enthusiasm for QA, he brings both traditional approaches and creative solutions to the test process. Despite being early in his career as a tester, he already impresses with expertise, curiosity, and commitment. His declared goal is to immerse himself fully in the world of quality assurance and make a real difference to product quality.
When testers talk about natural intelligence, they don't just mean the difference between humans and machines. Rather, it is about the ability to remain curious as a tester, to think outside the box, and to discover errors that a program - no matter how well trained - would not come up with on its own. In the podcast, Jonas Poller and Christian Brandes describe errors they found that no AI would have discovered by itself. At one point, Jonas clicks through the system seemingly without a plan, changes states, plays with parameters - and suddenly a price increases by a cent. Such bugs are often caused by complex processes or rare combinations that only come to light through an investigative mindset.
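The episode does not spell out the technical cause of the cent drift, but one plausible mechanism, and it is purely an assumption here, is a rounded round trip between gross and net prices whenever a form toggles the displayed value. The following Python sketch is illustrative only; the 19% VAT rate, the half-up rounding rule, and the starting price are all assumed:

```python
# Minimal sketch (not from the episode) of one assumed mechanism behind the
# "gross-net flick-flack": converting gross <-> net with rounding at every step.
from decimal import Decimal, ROUND_HALF_UP

VAT = Decimal("0.19")   # assumed VAT rate
CENT = Decimal("0.01")

def to_net(gross: Decimal) -> Decimal:
    return (gross / (1 + VAT)).quantize(CENT, rounding=ROUND_HALF_UP)

def to_gross(net: Decimal) -> Decimal:
    return (net * (1 + VAT)).quantize(CENT, rounding=ROUND_HALF_UP)

price = Decimal("10.05")
print("start:", price)
for _ in range(3):
    price = to_gross(to_net(price))   # toggle the displayed value back and forth
    print("after toggle:", price)     # 10.05 -> 10.06 on the first toggle, then stays a cent too high
```

With these assumptions, a single round trip is enough to nudge the stored value up by a cent, and the value then stays wrong no matter how often you toggle back. Only someone who repeats the toggle and watches the number closely will notice.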
In the conversation, it quickly becomes clear that many typical bugs can only be found by testing like a human and questioning expectations. Jonas describes an input field that appeared to be locked down: the developers had apparently guarded against every source of error. Yet through an unplanned copy-and-paste and an unusual cursor position, he manages to insert a value that crashes the form. No checklist, no predefined test path helps here. Only exploratory testing, creative trial and error, and a bit of luck lead to the find.
Christian talks about his training sessions with children's learning laptops. One participant has the idea of trying out the laptop the way a real child would: in the car, without a mouse - and completely new errors promptly appear. It is a good example of the value of context and experiential knowledge, which no AI develops on its own.
Brandes and Poller distinguish between "checking" and "testing". Checking covers the mechanical part: running scripts and automated checks. Testing is what many human testers are particularly good at: intuitively finding new paths, forming hypotheses, looking for surprises. With AI, a new category has been added: "digging". By this they mean churning through huge amounts of data and puzzling test cases together from training data without any real understanding. An AI can recognize trends and make suggestions - but it does not know why certain tests are particularly important.
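To make the distinction concrete, here is a deliberately small, hypothetical example of what "checking" looks like in practice: a script that confirms one documented expectation and nothing more. The function name and test are invented for illustration; they are not from the episode:

```python
from decimal import Decimal

def to_net(gross: Decimal, vat: Decimal = Decimal("0.19")) -> Decimal:
    # Same assumed gross-to-net conversion as in the sketch above.
    return (gross / (1 + vat)).quantize(Decimal("0.01"))

def test_gross_to_net_conversion():
    # Checking: confirm one documented expectation for one known input.
    assert to_net(Decimal("11.90")) == Decimal("10.00")
```

A check like this stays green forever, while the cent drift described above only surfaces when a human decides to toggle the value repeatedly, notices the change, and asks why. An AI that digs through data can propose more such assertions, but deciding which surprise is worth chasing remains the testing part.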
The podcast guests' appeal: don't eliminate human testing; keep it as a valuable element of software quality. Exploratory testing is becoming more important, precisely because many companies rely increasingly on automation and AI. Those systems, however, usually produce only surface-level quality. Recognizing and evaluating real, surprising errors is only possible with experience, intuition, and a critical eye.
The conversation ends with a sober assessment: AI can take on many tasks, but never all of them. It lacks gut feeling, an understanding of interrelationships, and responsibility for the result. In a market where more and more tests are being automated, developers and testers alike need to ask critically: Do we understand what and how we are testing, or are we blindly relying on tools and models? Christian warns against having unit tests written solely by AI - otherwise the understanding of quality is lost.
That's why training and curiosity are so important. If you've never tested or programmed by hand, you won't know whether the automated results make sense later on. Human intelligence remains the backbone of solid software testing.
AI provides support, collects data and automates laborious tasks. But humans remain at the center of software testing. Critical thinking, creativity and the ability to think outside the box are irreplaceable. Test strategies should continue to rely on a mix of automation, AI-supported methods and motivated human tinkering. This is the only way to create software that not only works - but really delivers what it promises.