Jan Jürjens has more than 25 years of practical experience in software security. His first book (2005) has been translated into Chinese. Current positions: Director of Research Projects (Fraunhofer ISST); Professor & Head, Institute of Software Engineering (University of Koblenz). Previously: Professor of Software Engineering (TU Dortmund), Senior Member/Research Fellow (Robinson College, University of Cambridge), Royal Society Industrial Fellow (Microsoft Research Cambridge), Postdoc (TU Munich), PhD in Computer Science (University of Oxford) in software security, B.Sc. in Mathematics (University of Bremen).
Artificial intelligence is not just hype, but is finding its way into more and more companies. From simple chatbots to complex forecasting algorithms - AI is everywhere. The question quickly arises: How do I actually secure these systems? Simply letting them run and hoping that nothing happens can end badly. It becomes particularly explosive when personal data or sensitive information is involved.
AI means machine learning: the software is trained with a large amount of data and can deduce things from it that nobody has programmed into it by hand. Sounds practical - and it is, but it brings new risks. You often don't know in advance how the system will react to new requests. Speaker B sums it up like this: 'If I already knew exactly how the software would behave, I wouldn't have to build it with AI. This uncertainty makes testing and safeguarding AI systems more difficult than with classic software.
Another special feature is that attackers could start during the training phase of the AI model. They can deliberately infiltrate data in order to influence or trick the AI to their advantage. This can lead to the system later making mistakes or revealing secret information that would have remained better protected.
There are many different attackers of AI-based software. Some try to introduce errors during model training. Many others start directly when the system is used. A popular target: so-called prompts, i.e. the user input used to query a chatbot or another AI. An example: a query not only gives a superficial answer, but may even spit out internal salaries if you ask cleverly enough. Gaps like this have real consequences - and at the latest when it is reported in the media that a company has lost data, it becomes expensive.
What helps? Nothing works without regular testing. For AI systems, for example, this means penetration tests with creative, unusual inputs. Testers try to behave like attackers and find vulnerabilities before someone else discovers them.
However, no one has to start without support. The OWASP consortium, known for security standards for web applications, has now developed its own catalogs for AI applications - for example the OWASP Top Ten for LLM (Large Language Models), in which typical risks are listed and described. Here, developers and companies can find many illustrative cases, best practices and tips for systematically tackling the most important issues.
However, it is not just about what is technically possible. From a legal and ethical point of view, everyone involved has a role to play: providers, operators and employees. The new AI regulations clearly stipulate who has which responsibilities. Even if a company "only" uses a purchased AI service and does not develop a model itself, it must check whether and how data is protected and processed.
Care must be taken, especially with sensitive data such as payrolls or customer information. Companies must not only ensure that no data is sent to the wrong address. They are also obliged to check whether they are allowed to use the AI for the intended purpose at all.
Securing systems is one thing; the other major weak point remains the human being. It is all too easy for users to load a confidential document into an AI application out of sheer convenience without considering where this data might be transferred to. Raising awareness helps. Clear rules and training are needed to prevent data from being accidentally leaked or employees blindly relying on the AI's answers, no matter how plausible they sound.
AI offers many opportunities - but also new risks. If you want to build or use software with AI components responsibly, you need to think about security right from the start. Helpful tools such as the AI-specific OWASP standards make it easier to get started. But at the end of the day: Testing, reworking and educating is essential - to avoid a nasty surprise at some point.