On Nov. 7-8, 2024, the International Centre for Counter-Terrorism (ICCT), the Institute of Security & Global Affairs at Leiden University and the National Coordinator for Counterterrorism and Security (NCTV) jointly organized the 'Blue-Sky' Workshop on Terrorist Misuse of Artificial Intelligence (AI) in The Hague.

This private event brought together 26 experts in the fields of terrorism, national security, counterterrorism and technology. These experts, drawn from academia, industry, law enforcement and policymaking, represented 11 different countries. The goal was to discuss and explore how terrorists and violent extremists could adapt to the rise of AI and potentially exploit these technologies. The two-day workshop was led by Dr. Joana Cook (ICCT, Leiden University), Dr. Bàrbara Molas (ICCT) and Dr. Graig R. Klein (Leiden University).
The workshop used a format that encouraged participants to engage in "blue-sky" thinking: a brainstorming approach in which participants generate creative, imaginative and out-of-the-box ideas without limitations or boundaries, think freely about a topic, explore possible scenarios and anticipate future developments. With this proactive approach, the workshop aimed to address a common criticism of (counter)terrorism studies, namely that the field is too reactive, by enabling participants to think creatively about new threats.
In preparation for the event, participants were invited to read an introductory concept note. This note provided an overview of how terrorists have already made use of AI and established a shared understanding of key concepts within the field of AI and terrorism.
Participants were then divided into three groups, each consisting of a diverse mix of experts from different subfields and professional groups related to terrorism and AI, along with one workshop organizer who guided the discussions. Each group participated independently in a series of five roundtable discussions, each focusing on a specific aspect of how terrorists might adapt to, and misuse, AI.
The first day began with discussions on the potential operational applications of AI within terrorism. Participants explored how AI could enhance and facilitate weapons production, as well as the risk that decentralized AI could provide instructions beyond the reach of content moderation. This naturally led to a discussion of current legal frameworks and new ways to counter AI-optimized weapons. The second session focused on the misuse of decentralized and open-source AI. Participants discussed the increasing decentralization of large language models and its impact on content creation, radicalization processes and interactions with extremist chatbots.
On the second day, session three examined the life cycle and distribution of AI-generated extremist online content. Participants compared decentralized, low-level and centralized AI with "traditional" online extremist content. Questions about how AI can facilitate cross-platform migration and increase the overall visibility and impact of extremist content took center stage. In the fourth session, participants considered how emerging AI technologies can enable states to support non-state actors, such as terrorist and violent-extremist groups. In addition, they discussed how these groups may adapt their tactics if they receive AI products, weapons and tools from state sponsors. Key topics in this discussion included how AI can aid in target identification, improve the technical capabilities of groups, and the impact of AI on communication between state sponsors and non-state groups.
The final round of discussions encouraged participants to push their creative boundaries and consider scenarios in which AI can operate completely autonomously. Diverse ideas were discussed on how AI could be used to automatically create and distribute extremist content while bypassing current legal and moderation frameworks. Participants also analyzed the risk that AI could evolve into a completely uncontrollable weapon.
After each session, participants completed work forms identifying potential targets, methods, perpetrators, means, contexts, circumstances and responses for future terrorist misuse of AI. Based on the five discussions and completed work forms, workshop moderators presented a summary of their group's insights during a concluding roundtable session.
The workshop resulted in valuable insights, threat assessments and both hopeful and dystopian scenarios surrounding the use of AI by terrorists and violent extremists. The open blue-sky approach allowed participants to explore a wider range of topics and scenarios, offering numerous opportunities for future research and collaboration. The diversity of input from experts across fields underscored both the value of this type of collaborative initiative and the success of the two-day meeting.
