The Novel Threat of Nihilistic Violent Extremism Online

Editor’s Note: With the forthcoming UN General Assembly meetings and the associated UN Counter Terrorism Executive Directorate convenings in New York City this month, the international rise of tech-enabled and networked nihilistic violent extremists will be on the agenda. Given the increasing frequency of nihilistic violent extremist plots, and of other instances of targeted violence in which the perpetrator held loose or mixed ideological views, this article calls for a paradigm shift in threat prevention. The authors offer recommendations that include blending behavioral threat assessment and management (BTAM) approaches with the tech sector’s trust and safety expertise, using AI to assist and help protect human analysts, fostering cross-ideological education and team collaboration, and engaging external red teams.

Nihilistic Violent Extremism (NVE) encompasses individuals who seek to carry out a variety of targeted violence, including grooming minors toward violence and sexual exploitation, as previously reported in HSToday. The DOJ has remarked that the NVE network “seeks to destroy civilized society through the corruption and exploitation of vulnerable populations” and pursues the “downfall of…the US Government.” Given the scope and online nature of NVE, some scholars and organizations have advocated for treating tech companies as first-line practitioners in the fight to prevent targeted violence that originates online. However, little consideration has been given to what that means in terms of implementation. This article examines the connection between tech and prevention frameworks for NVE with the goal of moving this conversation forward.

A Paradigm Shift in Threat Prevention

The NVE threat signals the need for a paradigm shift among prevention practitioners and researchers, similar to the earlier transition in focus from Salafi Jihadists to Racially Motivated Violent Extremists (RMVE). Just as the approach for countering Salafi Jihadist threats was inadequate for the new threats posed by RMVE, existing policies based on RMVE approaches were not designed to address the unique nature of NVE, the types of threats it poses to children online, or the collaborative interventions needed to respond to those threats. NVE threats blend personal grievances and a desire for chaos with undercurrents of various extremist beliefs, making them difficult to categorize under traditional extremism policies. This has encouraged a shift within Trust and Safety (T&S) teams away from proscriptions based on ideologies or entity lists and toward a more adaptable, behavior-based enforcement model. This is where T&S can intersect with the frameworks used by established prevention practitioners, including those in other fields.

Extremism prevention has explored a variety of new interventions, including programs funded by the now-defunct Targeted Violence and Terrorism Prevention grant program. While some of these innovative programs show promise, few can be generalized beyond the small samples on which they have been tested. Similarly, online counter-narrative campaigns have been launched to undermine extremist recruitment and hateful content, but these campaigns have rarely been subjected to rigorous empirical examination, or have sought to counter an ideology rather than a behavior tied to online harms. This highlights a critical need for greater dialogue between tech companies and traditional prevention practitioners, as both sectors face similar challenges while exploring evidence-based best practices. One possible path forward is to create a prevention framework within tech that mirrors the Behavioral Threat Assessment and Management (BTAM) approach while also incorporating the unique insights and practices of T&S teams across the industry. This integrated framework would leverage the strengths of both fields to create a more comprehensive and effective approach to violence prevention.

So, how can tech companies execute a behavioral approach effectively? We see at least three areas of focus:

AI Assistance

AI can be a powerful ally in the fight against NVE, but it is not a complete solution. It can be trained on existing databases to recognize key terms, patterns, and symbols associated with NVE recruitment and exploitation. However, these online spaces are constantly evolving; bad actors are acutely aware of platform policies and consistently work to evade automated detection systems. The goal, therefore, is not to replace human review with AI, but to use AI to assist human practitioners by more efficiently identifying and flagging concerning NVE content for human review. Using AI to reduce human reviewers’ exposure to content that can cause psychological trauma is also highly encouraged.
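To illustrate the assist-not-replace pattern described above, below is a minimal triage sketch. The pattern list, the `ReviewItem` structure, and the function names are all hypothetical placeholders invented for illustration, not terms or tooling from any real system; the point is that automation only flags and redacts, while a human makes the enforcement decision.

```python
import re
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical placeholder patterns; a real deployment would draw on a vetted,
# continuously updated database of NVE-associated terms, symbols, and behaviors.
FLAGGED_PATTERNS = [re.compile(p, re.IGNORECASE)
                    for p in (r"\bexample_term_a\b", r"\bexample_term_b\b")]

@dataclass
class ReviewItem:
    text: str                 # original content, retained for the human reviewer
    matches: List[str]        # which patterns fired, for triage context
    redacted_preview: str     # masked view shown first, to limit reviewer exposure

def redact(text: str, matches: List[str]) -> str:
    """Mask matched spans so reviewers can triage without full exposure."""
    for m in matches:
        text = text.replace(m, "#" * len(m))
    return text

def triage(text: str) -> Optional[ReviewItem]:
    """AI-assisted step: flag and queue content; humans make the final call."""
    matches = [m.group(0) for p in FLAGGED_PATTERNS for m in p.finditer(text)]
    if not matches:
        return None  # no automated signal; nothing is queued
    return ReviewItem(text=text, matches=matches,
                      redacted_preview=redact(text, matches))
```

In practice the pattern list would be replaced by trained classifiers, and the redacted preview by image blurring or similar exposure-reduction tooling; the structure (flag, redact, queue for human judgment) stays the same.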

Education and Cross-Team Collaboration

Effective enforcement against NVE-related content requires a cohesive approach across a tech organization’s disparate teams, ideally led by a key point of contact empowered to make decisions across those teams and push efforts forward to meaningfully reduce harms. Unlike other extremist ideologies that may fall relatively neatly under a single policy, NVE’s varied nature means policy violations span numerous categories and thus multiple teams. This makes a cross-functional approach essential for preventing NVE harms. Teams with different enforcement criteria can work together through shared level-setting, ongoing dialogue, and common descriptions and definitions of NVE harms. This ensures consistency and prevents gaps in policies and enforcement that bad actors can exploit. This model should also be expanded beyond individual technology and social media companies, broadening cooperation and task-force purview into a whole-of-industry effort.

External Red Teaming

Tech companies cannot anticipate every way in which their platforms may be exploited by NVE actors, nor should they be expected to. Tech thrives when it can focus on innovation and meeting the needs of its users. To fill this gap, external red teams can bring in subject matter experts to identify where safety measures can be improved. These external organizations are not beholden to a company’s internal outcomes and can have greater insight into how platforms may be abused by NVE actors. Red teams should be cross-disciplinary, ideally drawing practitioners from various sectors for a diversity of perspectives and mechanisms to deploy. This threat will not be countered solely through arrests or through the hashing and removal of CSAM content.
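For context on the hash-and-remove workflow mentioned above: platforms typically compare digests of uploaded media against shared lists of known harmful content. Below is a minimal sketch using an exact SHA-256 lookup; the hash set is a placeholder invented for illustration, and real deployments generally rely on vetted industry lists and perceptual hashes (such as PhotoDNA) so that visually similar, not merely byte-identical, files match.

```python
import hashlib

# Placeholder standing in for an industry-shared list of known-bad digests;
# real systems use vetted lists and perceptual (similarity-based) hashing.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"bytes of a known harmful file").hexdigest(),
}

def should_remove(file_bytes: bytes) -> bool:
    """Return True when the file's SHA-256 digest matches a known-bad entry."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES
```

Exact hashing is trivially evaded by changing a single byte, which illustrates why the article argues that hashing and removal alone cannot counter this threat.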

Conclusion: Safety by Implementation

Ultimately, moving tech companies into greater dialogue with traditional prevention practitioners, while focusing on internal procedures to prevent NVE exploitation, establishes a framework for responding to NVE’s evolving engagement and threat. As the recent DHS report on BTAM notes, successful prevention “requires coordinated effort among various stakeholders, including administrators, teachers, counselors, psychologists, families, and community partners…” In this model, tech companies are one of numerous community partners working to prevent vulnerable individuals, particularly youths, from being drawn into networks that could harm them and our communities.

The goal for both the tech and the prevention community is to find ways to de-escalate and off-ramp individuals before—not after—they reach a crisis point or commit an act of violence. The dynamics of online spaces have become so dire that not engaging is no longer an option; it only worsens the risk. By embracing a strategy of safety-by-implementation and fostering a collaborative dialogue, tech companies can become a vital part of a broader, more effective prevention ecosystem.

Dr. Amy Cooter has studied extremist groups in the U.S. for two decades and consults globally with other experts and practitioners about ongoing and emerging threats. She has testified to U.S. Congress regarding domestic extremists’ recruitment of veterans and provided written testimony regarding militia involvement in January 6th. She has served as an expert consultant on federal hate crime trials. Dr. Cooter is the author of Nostalgia, Nationalism, and the US Militia Movement, and her work may also be found in a variety of other outlets including The Conversation, Scientific American, The Washington Post, High Country News, and more.

In addition to conducting novel research, Dr. Cooter builds on her previous teaching experience at Vanderbilt and elsewhere to conduct seminars with various stakeholders including law enforcement. She and other ICDE staff are working to ensure that rigorous work to investigate and prevent extremism harms including child exploitation online continues, as does supportive mentorship of the next generation of practitioners.

Matthew Kriner specializes in researching and analyzing militant accelerationism, US domestic violent extremism, transnational far-right extremism, extremist exploitation of digital and social media technologies, and threat assessment and radicalization. Matt is a consultant for the US Department of State, and he regularly briefs tech companies as well as US, Canadian, New Zealand, and UK law enforcement and intelligence agencies on militant accelerationism and emerging terrorist threats. Matt also serves as an expert witness for US DOJ on a prominent 764 case, as well as the Royal Canadian Mounted Police (RCMP) and the Public Prosecution Service of Canada (PPSC) on a prominent Terrorgram Collective terrorism case.

Matt’s research has been featured throughout online and written media, including NPR, NBC, Rolling Stone, The Atlantic, and more. His research has also been included in governmental reporting, including two written expert testimonies to the US House’s January 6th Committee and the 2024 Countering Violent Extremism report from the US Government Accountability Office.

Pete Kurtz-Glovas (he/him) is a researcher and non-profit professional focused on extremism and democracy, bringing a unique perspective through his background in public health approaches to problems such as homelessness and addiction. He has worked with the Global Project Against Hate and Extremism conducting investigations into US- and internationally based hate groups and extremist activity, and helped launch the Community Advisory Resource and Education (CARE) Center’s pilot program while at the Polarization & Extremism Research & Innovation Lab (PERIL). Pete has provided comment on extremism in the United States to Business Insider, NPR, and Al Jazeera English, and has published research and analysis through the Global Network on Extremism & Technology.
