Artificial intelligence (AI) has rapidly become a powerful tool for accelerating business processes, and more and more organizations are using AI applications in their daily operations. But beware: cybercriminals are taking advantage of AI as well. Chester van den Bogaard, Cybersecurity Consultant at BDO, explains.

AI offers attackers new ways to achieve their goals, for example by generating attack code. "Imagine you are an attacker looking for code to analyze vulnerabilities in a website. You probably won't get working code right away, but AI can quickly provide usable fragments that you can turn into working attack code," Van den Bogaard explains. This allows attackers to develop and deploy new code at lightning speed. It is also possible to paste the source code of a website into an AI tool and ask it where the bugs and vulnerabilities are, which makes an attack easier. "Therefore, it is imperative that we deploy similar tools to defend our software," he says.
Fortunately, there is also a positive side to the story. "We are increasingly able to detect attacks and arm ourselves against these threats," Van den Bogaard explains. "For example, we can have AI tools analyze how to make our code and applications more secure and how to speed up detection processes." Microsoft is also developing new solutions, such as Security Copilot, which allows companies to use artificial intelligence securely within their Microsoft 365 environment.
When it comes to preventing AI-assisted cyber attacks, security testing plays a crucial role, among other things. "In security testing, professional hackers check companies' digital infrastructure (such as the applications they work with, ed.) for vulnerabilities, so that the relevant parties can fix them before they are exploited," Van den Bogaard says. "Moreover, the ability to generate attack code is already hampered by ethical safeguards in many AI applications," he explains. Still, vigilance remains necessary: "Criminal organizations and state actors often have the resources and expertise to build their own (unrestricted) AI applications that are not bound by such safeguards. That still allows them to use AI for attacks."
For many companies, the benefits of artificial intelligence outweigh the risks. "But," Van den Bogaard emphasizes, "it is precisely those risks that you must take into account. Before getting started with AI tools, make a considered choice and seek advice from an independent organization." AI also increases dependence on third parties. "Most organizations will not run AI infrastructure themselves, which increases their dependence on AI platforms and the suppliers of the tooling."
Creating policies for employees can help integrate AI safely into an organization. This also prevents customer data from unintentionally falling into the wrong hands. "After all, ChatGPT uses all the information it receives to improve the language model, including any sensitive content you enter by accident."
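Such a policy could, for instance, require masking personal data before any text is pasted into an external AI tool. A minimal sketch in Python of what that might look like, assuming simple regex-based redaction; the patterns and placeholder labels here are illustrative, not an exhaustive or production-grade filter:

```python
import re

# Illustrative patterns; a real policy would cover many more data types
# (names, account numbers, addresses) and use proper PII-detection tooling.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w-]+")
PHONE_RE = re.compile(r"\+?\d[\d \-]{7,}\d")

def redact(text: str) -> str:
    """Mask email addresses and phone-like numbers before text leaves the organization."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

print(redact("Contact jan.devries@example.com or +31 6 12345678."))
# → Contact [EMAIL] or [PHONE].
```

A gateway that applies such redaction between employees and an AI service lets people keep the productivity benefit without sensitive content ending up in someone else's training data.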
Among other things, BDO itself uses secure AI tools to provide automated cybersecurity advice. "In addition, we are going to shape security monitoring with Microsoft Security Copilot, which will allow us to detect attacks faster and better. Furthermore, we use AI in incident response investigations. For example, if someone receives a phishing email, we can analyze the data behind that email much faster."
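"The data behind that email" typically means its headers: who really sent it, where replies go, and which servers relayed it. A minimal sketch using Python's standard `email` library, assuming a raw message as input; the addresses and header values below are made up for illustration:

```python
from email import message_from_string
from email.utils import parseaddr

# A fabricated phishing message for illustration purposes only.
RAW = """\
From: "IT Helpdesk" <helpdesk@examp1e-support.com>
Reply-To: attacker@mailbox.example
Subject: Urgent: verify your account
Received: from mail.examp1e-support.com (203.0.113.7)

Click the link below to keep your account active.
"""

msg = message_from_string(RAW)

# Pull out the fields an analyst checks first: visible sender and reply path.
sender = parseaddr(msg["From"])[1]
reply_to = parseaddr(msg["Reply-To"])[1]
print("Sender:  ", sender)
print("Reply-To:", reply_to)
# A Reply-To domain that differs from the sender domain is a common phishing signal.
print("Domain mismatch:", sender.split("@")[1] != reply_to.split("@")[1])
```

An AI-assisted workflow can automate exactly this kind of triage at scale, flagging mismatched domains, suspicious relays, and look-alike sender addresses far faster than manual review.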
Europe also recognizes the dangers of AI. The European Parliament has now agreed to the latest version of the European AI Act. Van den Bogaard: "This legislation should ensure that companies, governments and consumers can use artificial intelligence safely. At the same time, alongside new rights, it is expected to bring additional obligations." It is not yet known when this law will take effect.
