Navigating Extremism in the Era of Artificial Intelligence

The emergence of Artificial Intelligence (AI), particularly generative AI, has captured significant public attention and raised profound societal concerns. The release of ChatGPT in November 2022 marked a pivotal moment, sparking extensive discussions and apprehensions about its impact. While AI holds great promise, particularly in professional domains like software development and digital marketing, it is not without its dangers, notably in the context of far-right extremism. Political leaders and the general public have expressed escalating concerns regarding the potential exploitation of AI. A May 2023 survey by the Anti-Defamation League revealed that a majority of Americans are anxious about AI’s ramifications, including the propagation of false information, radicalization, and the promotion of hate and antisemitism.

This paper presents findings from extensive monitoring of far-right channels conducted by the ICT between April and July 2023. Our analysis identifies four key themes in far-right discourse concerning AI: (i) Allegations of Bias, (ii) Antisemitic Conspiracies, (iii) Strategies to Overcome AI Limitations, and (iv) Malicious AI Usage.

The study reveals that the far-right’s engagement with AI encompasses discussions about AI’s reliability, strategies to circumvent its limitations, and nefarious applications, including autonomous attacks and disinformation campaigns.

The dialogue surrounding AI within far-right circles is deeply intertwined with antisemitic and conspiratorial beliefs. Our monitoring has found that many individuals within the far-right community attribute both the censorship of AI systems and monopolistic control over the technology to Jewish individuals.

Furthermore, far-right users are actively sharing technical tutorials and novel strategies for leveraging existing AI technology toward their own ends, particularly for bypassing the protective measures established by AI developers. Notably, users advocate both for creating their own AI models and for manipulating existing tools such as ChatGPT.

The paper also sheds light on discussions about using AI to plan and execute kinetic attacks, including methods for programming drones with GPT models, even among individuals with limited technical expertise. These discussions are likely to have real-world implications that have yet to be fully realized.

This paper serves as a critical warning for policymakers, intelligence agencies, and military entities. Understanding the far-right’s perspective and exploitation of AI is imperative for shaping proactive strategies and countermeasures in a world where AI continues to exert multifaceted influence on society.