Increase Team Autonomy
Budget pressure often pushes quality to the edge. Shortcuts look cheap, then cost more in defects, outages, and lost trust. AI and low-code tools speed up delivery, but they can hide weak architecture and gaps in security or performance. Testing earns its place when it is linked to outcomes: revenue protected, incidents avoided, happier users, clear KPIs. As more code comes from generators, domain knowledge gains weight, and reviews now extend beyond classic requirements to prompts and model behavior. Early feedback moves even earlier, from code back to intent. Progress takes hands-on work with the tools and small experiments. The signal is clear: treat quality as product strategy, and learn AI with purpose.
In this episode, I talk with Daniel Knott about the real pains in testing and what comes next. Why do managers cut quality when money gets tight? We look at AI and low-code tools that spit out apps fast, often without clear architecture, and we warn against skipping performance and security. We also reflect on how testers can sell their value in business terms: speak revenue, KPIs, and user happiness, not code coverage. Daniel argues that domain knowledge may come to matter more than deep coding skills as AI writes more code, and we explore prompt reviews as a new shift-left habit.
"I truly believe that we have like in five to 10 years we see a huge demand in people who are able to understand system architectures." - Daniel Knott
Daniel Knott loves digital products with high quality, whether web or native mobile applications. He has been working in the IT industry for almost 20 years, with hands-on experience testing desktop, web, and mobile applications, and has also worked as a product manager for mobile and web products. At the moment, Daniel works as Head of Engineering, helping software development teams ship great products with high quality.
Daniel wrote two books, Hands-On Mobile App Testing and Smartwatch App Testing, and is a frequent blogger and conference speaker. In 2022 he also created his YouTube channel about software testing, which has grown to more than 145k subscribers.
In a recent episode of Software Testing Unleashed, host Richie sat down with software testing veteran Daniel Knott to discuss the pressing challenges and evolving landscape of quality assurance. Daniel, who has nearly two decades of experience in the industry, opened up about a pain familiar to many testers: the enduring challenge of communicating the real value of the craft.
Despite the evolution of testing tools and techniques—from Selenium’s heyday in 2010 to today’s surge in AI-powered systems—the struggle remains. As Daniel puts it, testers are adept at finding problems, but “we are not so good in communicating and being sales advocates for our craft.” This lack of advocacy often means that, when companies face tough times, quality and testing teams are the first on the chopping block. As Richie notes, development and UX carry an obvious value, while the unique benefits of dedicated testers can be overlooked or misunderstood.
It’s a common scenario: when budgets tighten, testing and QA are seen as expendable. Daniel highlighted the shortsightedness of this approach. While developers are expected to test their own code to some extent, layering additional responsibilities—pipeline maintenance, production monitoring—can dilute focus and introduce risk. Daniel warned that replacing testers with AI or shifting their tasks to junior developers may seem viable in the short term, but it sets companies up for long-term trouble.
Looking ahead, Daniel predicts a coming need for people who deeply understand system architecture. “It’s so easy now to build your own app in a couple of minutes,” he says, but warns against the pitfalls of rushing without adequate architectural planning. The issues are not just functional—neglecting non-functional requirements like performance, security, and accessibility is a recipe for costly problems after release.
A recurring theme in the conversation was the importance of aligning testing with business objectives. Instead of focusing on technical metrics like code coverage, testers should try to speak in terms that resonate with business stakeholders—such as revenue, user satisfaction, or achieving specific KPIs. Daniel encourages testers to tailor their message to the background and priorities of their audience, making the case for quality in a way that leadership can understand and champion.
Still, bridging the gap between technical day-to-day testing and high-level business metrics isn’t simple. Daniel acknowledges this as a tough challenge, with no one-size-fits-all solution. The context of your business, your competitors, and customer feedback all play a role in shaping how testers can best demonstrate their impact.
With technologies like AI, no-code, and low-code platforms changing the development process, Daniel suggests the tester’s role doesn’t have to be as deeply technical as once thought. Instead, being a domain or business expert could become even more valuable. AI might handle the nuts and bolts of coding, but human testers will be needed to judge the quality of outputs, navigate edge cases, and understand the nuances of the business domain.
He also floated a novel idea: testing and reviewing the prompts used to generate AI-created code—a new form of “shift left” testing for our AI age. The quality of a prompt can make or break the outcome, so reviewing these early in the process could head off issues before they reach production.
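To make the idea concrete, a prompt review could start as a lightweight checklist applied before a prompt is sent to a code generator, much like a linter runs before a commit. The sketch below is purely illustrative; the section names and word-count threshold are assumptions, not anything Daniel prescribes in the episode.

```python
# Hypothetical sketch: treating prompts for AI code generation as
# reviewable artifacts, the way code is reviewed before merging.
# The required sections and the vagueness threshold are illustrative choices.

REQUIRED_SECTIONS = ("context", "acceptance criteria", "constraints")

def review_prompt(prompt: str) -> list[str]:
    """Return a list of review findings for a code-generation prompt."""
    findings = []
    lowered = prompt.lower()
    # Check that the prompt states what, for whom, and within which limits.
    for section in REQUIRED_SECTIONS:
        if section not in lowered:
            findings.append(f"missing section: {section}")
    # Very short prompts tend to leave the model guessing.
    if len(prompt.split()) < 20:
        findings.append("prompt may be too vague (under 20 words)")
    return findings

print(review_prompt("Write a login form."))
```

A vague prompt like the one above fails every check, while a prompt that spells out context, acceptance criteria, and constraints passes cleanly. The point is the habit, not the tooling: catching an underspecified prompt is cheaper than catching the defective code it produces.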
Daniel’s advice: try out new AI-powered tools, but don’t just believe the marketing. Many new offerings promise to automate test case generation, analyze user journeys for automation opportunities, and more. The key is to experiment and find the tools that fit your workflow, your team, and your context. And above all, stay grounded in testing fundamentals—those core skills aren’t going away.
To wrap up, Daniel stressed the importance of learning about AI and its implications for testing. The pace of technological change is faster than ever, and testers who embrace new tools and develop a foundational understanding of how AI works will be best positioned to add value. Yet, as he points out, the fundamental principles of testing still matter—and it’s up to practitioners to keep making their case, both within tech teams and at the leadership level.
Whether you’re a tester, developer, or quality coach, the main takeaway is clear: quality may be harder to quantify, but it’s never been more important. As software becomes easier (and faster) to build, ensuring what gets shipped is robust, secure, and truly valuable is a challenge testers are uniquely equipped to meet—if they can keep evolving, advocating, and communicating their worth.