Over the past two decades, performance testing has moved from bare-metal servers and heavy browser scripts to APIs, cloud, and Kubernetes, and the focus has shifted from lab load tests to production behavior. Performance and load testing are not the same thing: do not run a Black Friday test every sprint. Instead, watch production with observability, canary releases, and blue-green deployments; instrument early, build dashboards, and track cost.
In this episode, I talk with Leandro Melendez about how performance testing has changed in the last 20 years. Live at HUSTEF, we swap stories from bare metal and heavy browser scripts to APIs, cloud, and Kubernetes. Leandro draws a clear line between performance and load testing: do not run Black Friday tests every sprint. Watch production, use canaries, and learn from real users. He pushes observability first: build dashboards, instrument early, and think about cost.
"Nowadays with agile, with the cloud, kubernetes, environments that will Pop up and be destroyed completely. It's not the same game and I am a strong advocate of moving away from the old practices." - Leandro Melendez
Leandro helps everyone ramp up their observability, QA, and performance practices. He has over 20 years of experience in IT and over 15 in performance testing, serving multiple S&P 500 customers across the USA, Mexico, Canada, Brazil, India, Austria, and beyond. He is the author of the performance testing blog Señor Performo, where he curates a diverse set of learning material for performance testers and engineers, alongside a couple of YouTube channels in Spanish and English. He also hosts the PerfBytes Español podcast and has been a co-host of the main PerfBytes show since 2018. He is an international public speaker with keynotes, workshops, and many conference talks and webinars under his belt. And last, he is the author of The Hitchhiker's Guide To Load Testing Projects, a fun walkthrough that guides you through the phases, or levels, of an IT load testing project.
Performance testing isn't what it used to be. What was once a niche skill involving bare metal servers and cryptic scripts has become a central concern for teams managing complex cloud infrastructures and deploying cutting-edge technologies. On a recent episode of Software Testing Unleashed, host Richie sat down with performance expert Leandro Melendez, also known as Señor Performo, at the HUSTEF conference in Budapest to discuss how performance testing has changed in the last twenty years, and what that means for today's teams.
When Leandro Melendez began his career in 2007, performance testing was nearly synonymous with load testing. Tools like LoadRunner dominated, and the workflow was rooted in waterfall methodologies and physical server limitations. As Leandro Melendez describes it, there were "bare metal servers in the basement or under a desk," and performance tests were often complex, involving reverse engineering browser traffic and wrestling with session IDs. Though the process could be challenging, even "madness" in his words, it was fun in its own way.
Today, the landscape is different. Agile approaches and cloud infrastructure have changed everything. Teams no longer wait months or years for releases, nor do they worry about whether a single box can handle all user load. Now, infrastructure can scale automatically, environments are ephemeral, and APIs are everywhere.
But as Leandro Melendez points out, this doesn’t make traditional load testing obsolete—it simply reframes when and how it’s applied. For example, he cautions against "doing a massive test every sprint," since this is just "spending a lot of cloud money." Instead, companies should leverage observability and monitoring, using real production data to ensure systems perform well rather than relying solely on scripted simulations.
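To make that idea concrete, here is a minimal sketch of what "using real production data" could look like in a pipeline: rather than replaying a full load test, a check queries production telemetry and compares it against a latency budget. The Prometheus URL, metric name, and budget below are illustrative assumptions, not details from the episode.

```python
"""
Sketch: check a latency budget against production telemetry instead of
re-running a full load test every sprint. Assumes a Prometheus server is
reachable at PROM_URL and that request latencies are recorded in a
histogram named http_request_duration_seconds -- both are placeholders.
"""
import requests

PROM_URL = "http://prometheus.internal:9090"   # hypothetical endpoint
P95_BUDGET_SECONDS = 0.5                       # example performance budget

# 95th percentile latency over the last hour, taken from production data.
QUERY = (
    "histogram_quantile(0.95, "
    "sum(rate(http_request_duration_seconds_bucket[1h])) by (le))"
)

def p95_latency() -> float:
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY})
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

if __name__ == "__main__":
    p95 = p95_latency()
    status = "OK" if p95 <= P95_BUDGET_SECONDS else "BUDGET EXCEEDED"
    print(f"p95 latency: {p95:.3f}s (budget {P95_BUDGET_SECONDS}s) -> {status}")
```

A lightweight check like this can run on every build, while the heavyweight, Black Friday-style load test stays reserved for the events that actually warrant it.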
One of the biggest shifts discussed on the podcast is how performance testing has expanded beyond pure speed and capacity into new areas: cloud expenses, elastic scaling, and the impact of real user behavior. In the past, scaling a system meant physically buying hardware. Now, as Richie observes, it's "just a volume you turn on in the cloud," but with the caveat that this can quickly get expensive.
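As a rough, invented illustration of why that caveat matters, the numbers below compare provisioning for peak around the clock with scaling to match demand. The hourly rate and traffic profile are made up purely for the sketch.

```python
"""
Sketch: back-of-the-envelope cost comparison of provisioning for peak
all day versus scaling with demand. All figures are invented examples.
"""
HOURLY_RATE = 0.40       # assumed cost per instance-hour
PEAK_INSTANCES = 50      # what a "size for Black Friday" provisioning would need

# Instances actually needed per hour over one day (hypothetical profile).
hourly_demand = [5] * 7 + [20] * 10 + [50] * 3 + [10] * 4

fixed_cost = PEAK_INSTANCES * HOURLY_RATE * 24
elastic_cost = sum(hourly_demand) * HOURLY_RATE

print(f"Provision for peak all day: ${fixed_cost:.2f}")
print(f"Scale with demand:          ${elastic_cost:.2f}")
```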
Leandro Melendez uses a series of car analogies to illustrate these new priorities. Having an elastic system that automatically scales is convenient, but if your software is inefficient—"has a hole in the tank"—you’ll burn through resources and money very quickly. Modern performance testing must also measure how quickly cloud instances spin up and shut down, balancing resource conservation with responsiveness for users.
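One hedged way to put a number on that "spin up" speed is to time how long a deployment takes to become ready after a scale event. The sketch below assumes a Kubernetes cluster reachable via kubectl, and the deployment name is a placeholder; it is not a tool or script from the episode.

```python
"""
Sketch: measure how long a Kubernetes deployment takes to become ready
after a scale-up -- one of the "spin up / shut down" numbers mentioned
above. Assumes kubectl is configured for the target cluster; the
deployment name "web" is a hypothetical placeholder.
"""
import subprocess
import time

DEPLOYMENT = "web"          # hypothetical deployment name
TARGET_REPLICAS = 10

def ready_replicas() -> int:
    out = subprocess.run(
        ["kubectl", "get", "deployment", DEPLOYMENT,
         "-o", "jsonpath={.status.readyReplicas}"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return int(out) if out else 0

def time_scale_up(target: int) -> float:
    # Trigger the scale-up, then poll until all replicas report ready.
    subprocess.run(
        ["kubectl", "scale", f"deployment/{DEPLOYMENT}",
         f"--replicas={target}"],
        check=True,
    )
    start = time.monotonic()
    while ready_replicas() < target:
        time.sleep(1)
    return time.monotonic() - start

if __name__ == "__main__":
    seconds = time_scale_up(TARGET_REPLICAS)
    print(f"{DEPLOYMENT} reached {TARGET_REPLICAS} ready replicas in {seconds:.1f}s")
```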
He highlights real-world failures, such as the infamous Taylor Swift ticketing incident, to remind listeners that load testing is still vital for major events, but shouldn’t be the default for every release.
For testers and developers working today, where should they begin? Leandro Melendez encourages teams to start with observability and monitoring—not just scripts and automations. He advocates implementing "observability agents and some telemetry," so that everyone understands the system's performance metrics.
Ideally, these measurements should be established "before you even start the project," he says, but acknowledges that most teams join projects already underway. No matter when you start, make sure your system provides clear, human-friendly dashboards (not just raw metrics) that your team can access and interpret.
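As a rough sketch of what "observability agents and some telemetry" might look like in code, the example below instruments a single operation with OpenTelemetry, using console exporters so it runs standalone. In a real project the exporters would feed whatever backend powers the team's dashboards, and the service, span, and metric names here are purely illustrative.

```python
"""
Sketch: early instrumentation with OpenTelemetry. Console exporters keep
the example self-contained; names like "checkout-service" are invented.
"""
import time

from opentelemetry import trace, metrics
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    PeriodicExportingMetricReader,
)

# Wire up tracing and metrics once, ideally at application start-up.
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    BatchSpanProcessor(ConsoleSpanExporter())
)
metrics.set_meter_provider(
    MeterProvider(
        metric_readers=[PeriodicExportingMetricReader(ConsoleMetricExporter())]
    )
)

tracer = trace.get_tracer("checkout-service")       # hypothetical service name
meter = metrics.get_meter("checkout-service")
latency_ms = meter.create_histogram("checkout.duration", unit="ms")

def checkout():
    """A traced operation whose latency can land on a team dashboard."""
    with tracer.start_as_current_span("checkout"):
        start = time.monotonic()
        time.sleep(0.05)                            # stand-in for real work
        latency_ms.record(
            (time.monotonic() - start) * 1000,
            attributes={"endpoint": "/checkout"},
        )

if __name__ == "__main__":
    checkout()
```

The point is less the specific library than the habit: once spans and histograms like these exist, building the human-friendly dashboards Leandro Melendez describes becomes a configuration task rather than a retrofit.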
He also stresses flexibility in tool choice. When asked about the "best tool" for performance testing, Leandro Melendez compares it to choosing a piece of silverware at a dinner table—there’s no universal solution, and teams should seek out a combination of tools and platforms that suit their needs, rather than locking themselves into one approach.
The world of performance testing has transformed dramatically in recent years. It’s not just about pushing a server to its limits—it’s about monitoring, understanding real-world usage, controlling costs, and choosing the right tools for the job. As Leandro Melendez advises, "Know what is your performance without doing anything. Just know, wait." With the right foundation, teams can build reliable, scalable software that delivers quality experiences for users—no matter how much the underlying technology shifts.
Interested in more from Leandro Melendez? Check out his YouTube channel and LinkedIn live sessions for ongoing tips and conversation around software quality and performance.