Although public discussion about AI and the role of algorithms in our society has erupted, several topics remain invisible and under-researched. These include the impact of AI on the nature and extent of crime. What new forms of crime is AI leading to? Who are the victims of AI crime? And what measures can be taken to counter AI crime? After all, AI turns out to be not only a powerful tool for innovation, but also a potential threat with major social implications. The editors at Erasmus University Rotterdam discussed this with Marc Schuilenburg, Professor of Digital Surveillance at Erasmus School of Law, who recently published a book on the subject.
In his book Making Surveillance Public - Why You Should Be More Woke About AI and Algorithms (1), Schuilenburg shares his research on the deployment of AI applications, the rise of AI crime and how AI is changing the security question. He also explains why other forms of public accountability are necessary. In a forthcoming paper, Schuilenburg delves deeper into the issues surrounding the development of AI crime.
Society is digitizing and so is crime
Today's society is in the midst of a digital revolution; not only is everyday life digitizing, but crime is changing with it. In a world where technological developments are the norm, their dark side is growing too. While the discussion about AI and algorithms is in full swing, a crucial aspect remains underexposed: the impact of AI on crime. "It is important to distinguish between cybercrime, such as online fraud and cyberbullying, and AI crime, because the use of AI will greatly expand and facilitate the playing field of cybercrime. AI can also lead to completely new forms of crime that can cause more harm socially than cybercrime," Schuilenburg said.
In his paper, Schuilenburg distinguishes three forms of AI crime: crime with AI, crime directed at AI and crime by AI. In the first form, AI is used as a tool for traditional forms of crime. One can think of chatbots that lower the threshold, making crime accessible to people without technical knowledge. But also consider deepfakes and voice cloning, which can be used for criminal offenses such as spreading disinformation and for pornographic and fraudulent purposes. In the second form, an AI system is itself the target. An example is the hacking of autonomous vehicles for terrorist purposes. The third form refers to crime made possible only by AI, with human actions taking a back seat. This raises important questions of liability and criminality, as AI makes autonomous decisions that may be considered criminal under the law.
In addition, Schuilenburg emphasizes that much has changed in the frequency of online crime, and he expects a further increase: "The most recent figures show that in Western countries registered crime has dropped by more than a quarter since 2002. But that decline does not extend to cybercrime. In 2022, 15 percent of Dutch people aged 15 or older were victims of one or more forms of online crime. That's more than two million people. The scope of cybercrime is expected to increase further in the coming years, and AI crime is now coming on top of that."
Although the impact of AI crime on society will be significant, Schuilenburg argues that we should not thwart developments but steer them in the right direction: "Technological innovation is inevitable." Here Schuilenburg sees an important role for government, industry and knowledge centers in placing greater emphasis on three sets of public values when developing AI- and algorithm-based innovations for security: driving values, anchoring values and process values. "Only by taking all public values into account can we increase security for everyone in society," Schuilenburg said.
(1) https://www.elevenpub.com/en/100-15110_Making-Surveillance-Public