Editor’s Note: With the forthcoming UN General Assembly meetings and the associated UN Counter Terrorism Executive Directorate convenings in New York City this month, the international rise of tech-enabled and networked nihilistic violent extremists will be on the agenda. Given the increasing frequency of nihilistic violent extremist plots, and of other instances of targeted violence in which the perpetrator held loose or mixed ideological views, this article calls for a paradigm shift in threat prevention. The authors offer innovative recommendations, including blending behavioral threat assessment and management (BTAM) approaches with the tech sector’s trust and safety know-how, using AI to assist and help protect human analysts, fostering cross-ideological education and cross-team collaboration, and engaging external red teams.
Nihilistic Violent Extremism (NVE) describes individuals who seek to carry out a variety of targeted violence, including grooming minors toward violence and sexual exploitation, as previously reported in HSToday. The DOJ has remarked that NVE “seeks to destroy civilized society through the corruption and exploitation of vulnerable populations” and pursues the “downfall of…the US Government.” Given the scope and online nature of NVE, some scholars and organizations have advocated for treating tech companies as first-line practitioners in the fight to prevent targeted violence that originates online. However, little consideration has been given to what that means in terms of implementation. This article examines the connection between tech and prevention frameworks for NVE with the goal of moving this conversation forward.
A Paradigm Shift in Threat Prevention
The NVE threat signals the need for a paradigm shift among prevention practitioners and researchers, similar to the previous transition in focus from Salafi Jihadists to Racially Motivated Violent Extremists (RMVE). Just as the approach for countering Salafi Jihadist threats was inadequate for the new threats posed by RMVE, the existing policies based on RMVE approaches were not designed to address the unique nature of NVE, the types of threats it poses to children online, or the collaborative interventions needed to respond to those threats. NVE threats blend personal grievances and a desire for chaos with undercurrents of various extremist beliefs, making them difficult to categorize under traditional extremism policies. This has encouraged a shift within Trust and Safety (T&S) teams away from proscriptions based on ideologies or entity lists and toward a more adaptable, behavior-based enforcement model. This is where T&S can intersect with the frameworks used by established prevention practitioners, including those in other fields.
Extremism prevention has explored a variety of new interventions, including programs funded by the now-defunct Targeted Violence and Terrorism Prevention grant program. While some of these innovative programs show promise, few can be generalized beyond the small samples on which they have been tested. Similarly, online counter-narrative campaigns have been launched to undermine extremist recruitment and hateful content, but these campaigns have rarely been subjected to rigorous empirical examination, and they have often sought to counter an ideology rather than a behavior tied to online harms. This highlights a critical need for greater dialogue between tech companies and traditional prevention practitioners, as both sectors face similar challenges while exploring evidence-based best practices. One possible path forward is to create a prevention framework within tech that mirrors the Behavioral Threat Assessment and Management (BTAM) approach while also incorporating the unique insights and practices of T&S teams across the industry. Such an integrated framework would leverage the strengths of both fields to create a more comprehensive and effective approach to violence prevention.
So, how can tech companies execute a behavioral approach effectively? We see at least three areas of focus:
AI Assistance
AI can be a powerful ally in the fight against NVE, but it is not a complete solution. It can be trained on existing databases to recognize key terms, patterns, and symbols associated with NVE recruitment and exploitation. However, these online spaces are constantly evolving; bad actors are acutely aware of platform policies and work consistently to evade and thwart automated detection systems. The aim, therefore, is not to replace human review with AI, but to use AI to assist human practitioners by identifying and flagging concerning NVE content for human review more efficiently. AI should also be used to reduce human reviewers’ exposure to content that can cause psychological trauma.
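To make the assist-not-replace model concrete, the sketch below shows what an AI-assisted triage loop might look like. It is a minimal illustration, not any platform’s actual system: the signal lexicon, the threshold, and the score_nve_risk scorer are hypothetical placeholders standing in for a trained, continually updated classifier.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical signal lexicon; a production system would use a trained
# classifier that is continually retrained as NVE terminology evolves.
KNOWN_NVE_SIGNALS = ["example-term-a", "example-symbol-b"]

REVIEW_THRESHOLD = 0.5  # tuned against the platform's false-positive tolerance

@dataclass(order=True)
class ReviewItem:
    sort_key: float  # negative risk score, so heapq pops highest risk first
    content_id: str = field(compare=False)
    risk_score: float = field(compare=False)
    signals: list[str] = field(compare=False, default_factory=list)

def score_nve_risk(text: str) -> tuple[float, list[str]]:
    """Stand-in scorer: returns a 0-1 risk score plus the signals that
    fired, so reviewers can see *why* an item was flagged."""
    hits = [s for s in KNOWN_NVE_SIGNALS if s in text.lower()]
    return min(1.0, 0.4 * len(hits)), hits

review_queue: list[ReviewItem] = []

def triage(content_id: str, text: str) -> None:
    """AI assists prioritization; humans make the final call on anything queued."""
    score, hits = score_nve_risk(text)
    if score >= REVIEW_THRESHOLD:
        heapq.heappush(review_queue, ReviewItem(-score, content_id, score, hits))
```

Surfacing the fired signals alongside each flagged item is one way such a design can reduce trauma exposure: reviewers can often prioritize or escalate from the metadata without immediately viewing the raw content.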
Education and Cross-Team Collaboration
Effective enforcement against NVE-related content requires a cohesive approach across a tech organization’s disparate teams, ideally led by a key point of contact empowered to make decisions across those teams and push efforts forward to meaningfully reduce harm. Unlike other extremist ideologies that may fall relatively neatly under a single policy, NVE’s varied nature means policy violations span numerous categories and thus multiple teams, making a cross-functional approach essential. Teams with different enforcement criteria can work together through shared level-setting, ongoing dialogue, and common descriptions and definitions of NVE harms. This ensures consistency and prevents gaps in policy and enforcement that bad actors can exploit. The model should also be expanded beyond individual technology and social media companies, broadening cooperation and task-force purview into a whole-of-industry effort.
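One lightweight way to operationalize those shared definitions is a common, machine-readable harm taxonomy that every team enforces against. The schema and entries below are illustrative assumptions, not an industry standard; the point is that each entry names an observable behavior along with every policy area and team it touches.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HarmDefinition:
    behavior: str                  # observable behavior, not an ideology label
    policy_areas: tuple[str, ...]  # every policy the behavior can violate
    owning_teams: tuple[str, ...]  # every team that must coordinate on it

# Illustrative entries only; real definitions would emerge from the
# cross-team level-setting described above.
NVE_HARM_TAXONOMY = (
    HarmDefinition(
        behavior="coercing a minor to produce self-harm content",
        policy_areas=("child_safety", "self_harm", "violent_extremism"),
        owning_teams=("child_safety", "wellbeing", "counter_extremism"),
    ),
    HarmDefinition(
        behavior="sharing graphic violence to desensitize a target",
        policy_areas=("graphic_violence", "harassment"),
        owning_teams=("content_moderation", "counter_extremism"),
    ),
)

def teams_for(behavior: str) -> tuple[str, ...]:
    """Return every team that must be looped in for a behavior, so no
    single policy silo handles an NVE case alone."""
    for entry in NVE_HARM_TAXONOMY:
        if entry.behavior == behavior:
            return entry.owning_teams
    return ()
```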
External Red Teaming
Tech companies cannot anticipate every way in which their platforms may be exploited by NVE actors, nor should they be expected to. Tech thrives when it can focus on innovation and meeting the needs of its users. To fill this gap, external red teams can bring in subject matter experts to identify where safety measures can be improved. These external organizations are not beholden to a company’s internal outcomes and can offer greater insight into how platforms may be abused by NVE actors. Such red teams should be cross-disciplinary, ideally drawing practitioners from various sectors for a diversity of perspectives and mechanisms to deploy. This threat will not be countered solely through arrests or the hashing and removal of CSAM content.
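For context on that last point, hash-and-remove pipelines work roughly like the exact-match sketch below. This is a deliberately simplified toy under stated assumptions: production systems match against shared industry hash databases and use perceptual hashing (e.g., PhotoDNA) to catch altered copies, which exact hashing cannot.

```python
import hashlib

# Illustrative, empty hash set. Real deployments match against shared
# industry databases (e.g., NCMEC hash lists) and use perceptual hashes
# such as PhotoDNA, which also catch visually altered copies; the exact
# hashing shown here matches only byte-identical files.
KNOWN_HARM_HASHES: set[str] = set()

def is_known_harmful(media: bytes) -> bool:
    """Exact-match check of media bytes against a known-harm hash list."""
    return hashlib.sha256(media).hexdigest() in KNOWN_HARM_HASHES
```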
Conclusion: Safety by Implementation
Ultimately, moving tech companies into greater dialogue with traditional prevention practitioners, while focusing on internal procedures to prevent NVE exploitation, establishes a framework for responding to the evolving NVE threat. As a recent DHS report on BTAM notes, successful prevention “requires coordinated effort among various stakeholders, including administrators, teachers, counselors, psychologists, families, and community partners…” In this model, tech companies are one of numerous community partners working to prevent vulnerable individuals, particularly youths, from being drawn into networks that could harm them and our communities.
The goal for both the tech and prevention communities is to find ways to de-escalate and off-ramp individuals before—not after—they reach a crisis point or commit an act of violence. The dynamics of online spaces have become so dire that not engaging is no longer an option; it only compounds the risk. By embracing a strategy of safety-by-implementation and fostering collaborative dialogue, tech companies can become a vital part of a broader, more effective prevention ecosystem.

