The Test Center - Efficient Software Testing
Errors in productive operation are expensive, but so is software testing. This is why the efficiency of software testing in the project is...
"AI delivers the result quickly. The development process passes us by." - Richard Seidl
I like to go hiking. Really uphill. The path is often rocky, sometimes strenuous, sometimes surprisingly beautiful. There are all kinds of things to discover. When I reach the top, I am rewarded not only by the view, but also by the experience in my legs: the stumbling, the breathing, the conversations along the way. The nature around me. The path is part of the destination and has left its mark on me. Of course, I could also have taken the cable car up. Comfortable, fast, efficient - the view at the top is the same. Is it?
The same thing can happen to us when we use artificial intelligence. We tap in our prompt, poof, the output is ready. Finely honed texts related to our context. In the style we wanted. Or fully programmed code. Or test cases derived from requirements. Goal achieved.
But are we only interested in output? I think we need to be careful here. Because with this way of using it, a lot falls by the wayside: the struggle for clarity, learning from mistakes, pondering, creative lateral thinking. And yet it is precisely this process that sharpens our thinking, lets us grow and understand more deeply what is actually at stake.
I could probably write this column with AI. But what would I get out of it? For me, it's about dealing with topics. The kneading of thoughts in my brain. I wouldn't want to do without that, because it doesn't just result in a column, but also tons of new ideas, approaches, abstracts etc. that take me a step further.
If I only generate code, what do I learn? But if I debug my way through my own bad code - then I learn to program.
Or, hotly discussed and tried out in software testing: deriving test cases from requirements. Yay. Another ten new test cases created. But let's take a look at the role of the tester. Their task is actually not so much to create test cases. It's about communication, asking questions (even critical ones), informal networks and the stakeholders' needs for good quality. I know many testers who have the greatest overview in their projects and companies, who know all the ins and outs, and whom even developers ask about the context. Why is that? Because as a tester I have to run around to understand things, to get my answers for test cases. And all that knowledge about a system allows me to test it better. If I only generate test cases, I'm missing a big slice of the experience pie.
Don't get me wrong. I like AI. I also like using it. To think my ideas through ... as a critic of my thoughts and, above all, to avoid succumbing to the blind spots I so like to ignore. It's great how my colleague GPT keeps correcting me. Trial and error is important and has its place here, too. And automating dull routines - yes, please!
But let's stay vigilant about what we use these tools for. For higher, faster, further? More self-referential content? Lots of low-quality stuff? I think that's the wrong direction.
Perhaps all this AI stuff will ultimately make us a little more human again. Simply because we like learning, being creative, researching, making mistakes, correcting mistakes and developing ourselves further. AI can't (yet) do that for us.