Washington D.C.
Friday, February 13, 2026

Tech Interventions for Novel Nihilistic Violent Extremist Threats

Editor’s Note: Trust and Safety teams within the tech sector are primarily focused on detecting content or behaviors that violate terms of use, enforcing those terms, and reporting criminal conduct to authorities as necessary. While that work is clearly important, there is a noticeable lack of focus on preventing harm through positive interventions for at-risk users. This article discusses options for both platform-specific and cross-platform interventions.

The recent tragedy at Evergreen High in Colorado is the latest attack to highlight how novel extremist threats present themselves online. As researchers and tech platforms continue to adapt to these novel threats, it is important that tech companies remain solutions-oriented in the broadest way possible. The question isn’t whether tech companies should act, but how they can do so effectively.   

While the problem is vast and has cross-platform impacts, the space is also ripe for innovative solutions whose success can be measured, demonstrated, and adapted by others. These innovations must happen within individual platforms, as well as within the tech ecosystem at large. We believe these potential innovations will also be most effective when they are grounded in a realistic understanding of what tech can and cannot do. This article seeks to lay the groundwork for those working to create these innovations by identifying potential paths forward for tech platforms and the tech ecosystem as they face these novel threats.  

Interventions Platforms Can Do Alone 

A crucial first step for any platform is to identify how it can implement, test, and refine early-stage interventions. By engaging in this cycle of implementation, testing, and refinement, companies can identify data-backed interventions that are effective on their platform. One simple way to approach this is to pilot interventions and use pre- and post-tests to measure their ability to decrease harmful content and/or redirect users toward pro-social behavior. After piloting and testing these new interventions, companies can begin to understand how their users respond to different strategies and refine those strategies accordingly.
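To make the testing step concrete, a pre/post pilot can be evaluated with a simple two-proportion comparison of harmful-content rates before and after an intervention. The sketch below is illustrative only: the counts are hypothetical, and a real evaluation would need a proper experimental design with control groups and multiple metrics.

```python
import math

def two_proportion_z(pre_flagged, pre_total, post_flagged, post_total):
    """Two-proportion z-test for a drop in the harmful-content rate
    after a pilot intervention. Returns (rate drop, z-statistic)."""
    p_pre = pre_flagged / pre_total
    p_post = post_flagged / post_total
    # Pooled proportion under the null hypothesis of no change.
    pooled = (pre_flagged + post_flagged) / (pre_total + post_total)
    se = math.sqrt(pooled * (1 - pooled) * (1 / pre_total + 1 / post_total))
    return p_pre - p_post, (p_pre - p_post) / se

# Hypothetical pilot: 240 of 10,000 posts flagged before, 180 of 10,000 after.
drop, z = two_proportion_z(240, 10_000, 180, 10_000)
print(f"rate drop: {drop:.4f}, z = {z:.2f}")
```

A z-statistic above roughly 1.96 suggests the observed drop is unlikely to be chance at the 5% level, though a real pilot should also compare against a control group to rule out seasonal or platform-wide trends.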

We don’t yet have robust scientific literature on what does and does not work in the early prevention stages. Leading research labs such as PERIL are testing the impact of digital literacy curricula in schools, which have the potential for broad implications at the level of primary prevention. On-platform counter messaging aimed at users seeking content from extremist groups may also offer an opportunity for primary prevention. However, it’s important to remember that the wrong messaging may only push users deeper into extremism. Testing interventions in controlled settings first mitigates the risks of plowing ahead too quickly with new strategies.

Additionally, much of the data that would inform success metrics for digital inoculation or primordial prevention sits behind companies’ operational sensitivity barriers. This limits researchers’ ability even to view the data, let alone assess it and generate insights. While we wait for researchers and companies to make more data available, soft intervention strategies do show some promise. These early-stage interventions, which do not require bans or censorship, have the potential to discourage harmful behaviors. In 2019, for example, Instagram implemented a feature, as part of a broader anti-bullying push, that alerts users when their comment is similar to others that have been reported. This “nudge” was intended to gently shift behaviors in a more positive, pro-social direction. Since the feature launched, Meta has reported that nearly 50% of users who receive this soft intervention either edit their comment before posting or choose not to post it at all.
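The basic mechanism behind such a nudge can be sketched in a few lines. Instagram’s actual system is not public and almost certainly relies on trained classifiers; the toy version below uses simple string similarity against a hypothetical corpus of previously reported comments, with an illustrative threshold.

```python
import difflib

# Hypothetical corpus of comments that other users previously reported.
REPORTED = [
    "you are a total loser and everyone knows it",
    "nobody wants you here, just leave",
]

def nudge_needed(draft, reported=REPORTED, threshold=0.8):
    """Return True if a draft comment closely resembles a previously
    reported one. A toy stand-in for the similarity models platforms
    actually use; the matching method and threshold are assumptions."""
    draft_lower = draft.lower()
    return any(
        difflib.SequenceMatcher(None, draft_lower, r).ratio() >= threshold
        for r in reported
    )

print(nudge_needed("You are a total loser and everyone knows it!"))  # near-duplicate
print(nudge_needed("Great point, thanks for sharing."))
```

When `nudge_needed` fires, the platform would show the gentle prompt before posting rather than blocking the comment, which is what distinguishes a soft intervention from enforcement.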

Similar “nudge” strategies are needed to address the rise of sadistic extortion and the growing tendency within online true crime communities to glorify and emulate school shooters. Specific nihilistic violent extremist (NVE) networks and mass shooter fixation cultures online are proliferating faster than tactical enforcement can keep up and are becoming more nuanced than traditional policy development can accommodate. This can be seen in the distinctive way NVE networks glorify the deaths of violent actors, and in how that glorification connects to their own thoughts of self-harm. NVE actors are often motivated toward violence in part by an unchecked desire for self-annihilation, and their audiences are similarly primed to engage with suicide and self-harm content. This suicidal thinking does not emerge by chance, and mainstream social media platforms can serve as an important connector to resources for people showing early signs of depression and suicidal ideation. For example, searches on TikTok for “suicide” first show information for local resources, and Instagram offers a “Get Support” prompt that helps users find assistance. Nudges that work in other Trust and Safety focus areas, like suicide and self-harm, can be applied in these newer ecosystems and begin to chip away at the feeder pathways into more egregious and hybridized threat environments.

Interventions the Tech Ecosystem Can Do Together 

While individual companies can and should take decisive action to prevent harms by extremists on their platforms, the problem requires an ecosystem-wide approach. No single platform can solve this problem in isolation, especially when bad actors habitually jump, or funnel people, across different platforms. With complex harms such as sadistic extortion, it is particularly important that a sector-level response include prevention organizations that have supported and deradicalized individuals offline, so their best practices can be adapted to digital spaces. Groups already on the ground, such as those that work with at-risk youth or former extremists, can inform platform policies and designs. Additionally, organizations with specialized knowledge of niche digital communities and their unique behaviors are invaluable assets to any prevention program that aims to stay relevant and timely.

When working together to address harms, it may be helpful for organizations to share information in order to limit lateral jumps by bad actors and the people they are attempting to exploit. There are important privacy considerations regarding the sharing of user data. However, even understanding key words, syntax, or naming conventions within an online subgroup can be helpful for evaluating risk. In cases of repeated policy violations on a platform, such as ban evasion, it can also be helpful to share usernames with other platforms. A similar example of this type of sector-level coordination is the GIFCT Hash Sharing Database, which assigns numerical hashes to terrorist content and makes that unique code available to all member companies to ingest, thereby limiting the content’s spread. We suggest establishing a threat actor exchange for known harmful actors, giving companies the ability to share and ingest key insights about user behavior connected to negative outcomes across multiple platforms.
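The core idea of hash sharing can be sketched compactly: members contribute a fingerprint of known harmful content, never the content itself, and other members check uploads against the shared set. Note that GIFCT’s real database uses perceptual hashing so that near-duplicates still match; the sketch below substitutes a cryptographic hash (SHA-256), which only catches exact copies, and all names and inputs are illustrative.

```python
import hashlib

class HashSharingDB:
    """Toy sketch of a cross-platform hash exchange. Real systems use
    perceptual hashes so edited re-uploads still match; SHA-256 here
    matches exact byte-for-byte copies only."""

    def __init__(self):
        self._hashes = set()

    @staticmethod
    def fingerprint(content: bytes) -> str:
        return hashlib.sha256(content).hexdigest()

    def contribute(self, content: bytes) -> str:
        # A member platform shares only the hash, never the content.
        digest = self.fingerprint(content)
        self._hashes.add(digest)
        return digest

    def is_known(self, content: bytes) -> bool:
        # Other members screen uploads against the shared hash set.
        return self.fingerprint(content) in self._hashes

db = HashSharingDB()
db.contribute(b"<bytes of known violating content>")
print(db.is_known(b"<bytes of known violating content>"))  # exact copy matches
print(db.is_known(b"<bytes of an unrelated upload>"))
```

A threat actor exchange would extend the same pattern from content fingerprints to behavioral signals, with the added privacy and due-process questions the paragraph above raises.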

Companies may also benefit from bringing in external experts to evaluate and improve their policy and enforcement approaches and to deepen understanding of how bad actors use one platform as part of a broader strategy. Organizations like the Institute for Counter Digital Extremism (ICDE) can be tasked with red teaming: evaluating new features and policies through the lens of how harmful threat actors and NVE groups think and how they are likely to use and navigate those features. This is particularly relevant for platforms like Discord, a group chat platform targeted at gaming communities where users join servers operated and moderated by other users. Platforms like Roblox and AI companies would also benefit from such an approach, as their products are highly exploitable by ill-intentioned actors. External experts who specialize in threat assessment and extremist behavior can provide a more realistic and unbiased assessment of vulnerabilities while pursuing threats across digital boundaries. This approach ultimately protects business interests and public reputation, as it demonstrates proactive vulnerability assessment and generates actionable solution pathways for companies’ Trust and Safety teams.

By expanding the common understanding of the tech-ecosystem to also include extremist subject matter experts and prevention practitioners whose work leads them to tech platforms, we can bring in thought leaders and expand the boundaries of what is possible. 

Interventions that Platforms Can’t Do  

It is also important to remember that there are limits to what tech can accomplish with regard to extremism prevention. Extremism has a significant offline component. Research has shown that youth who are attracted to extremist groups generally report more adverse childhood experiences (ACEs) in their home lives, or otherwise face environmental challenges that are beyond the purview of any social media company. At the same time, social media can potentially reduce stress from adverse experiences among adolescents who are marginalized due to race, ethnicity, gender expression, or sexuality, and some online communities can help youth of color develop a more positive sense of self. This underscores that social media alone cannot account for individual engagement in extremist activity online, and that we must continue addressing factors beyond the digital landscape to fully prevent extremist harms.

Grappling with extremists who bridge the online and offline worlds to recruit and groom youth poses significant and unique challenges. Among those challenges is reconciling the value of corporate transparency with the very real risk that bad actors will use that transparency to their advantage. Human rights organizations rightly push for greater transparency from tech companies to prevent authoritarian abuses and defend democracy. But extremists also leverage this open information to fine-tune their tactics and evade detection. This tension is a central paradox of the digital age: a more transparent internet, while intended to protect democracy, can also be exploited by those who seek to destroy it. For example, some actors have weaponized police reporting requirements by developing sophisticated imitation capabilities and filing requests for information on targets they seek to harass and abuse. So effective are these imitation bids that companies have errantly disclosed user data to imposters. Companies have since strengthened their user disclosure practices and firmed up their law enforcement engagement and dialogue to prevent a recurrence. But the lesson stands: tech cannot solve these problems alone.

Conclusions 

While there are certainly problems that span different tech platforms, it is important to remember that the ways those problems impact users and companies will differ from platform to platform. The same is true of interventions. Casual games that use VR technology may need to learn how different gestures, or even the proximity between players making those gestures, could violate their terms of service. Solving that problem will look very different from the way a social media company intervenes in harassment via DMs.

The reduction and loss of research and programmatic grant opportunities, such as those from the DHS Science and Technology Directorate and the Targeted Violence and Terrorism Prevention grant program, deeply impairs our collective ability to generate evidence-based insights and to implement them appropriately with technology and social media companies. However, while platforms will differ in how they address these issues, there is also a great deal that can be done through cooperation.

At ICDE, we have established a preliminary framework for an upstream prevention service to support the tech sector. Our primary goal with this framework is to build a threat exchange system around the behaviors, tactics, and other signals of the worst of the worst harmful actors online. Through this assembly of behavioral signals, companies can extract actionable insights and contribute their own observations to strengthen the broader community’s understanding of how users are being targeted, manipulated, exploited, and ultimately harmed. However, this framework remains in the preliminary stages of deployment, as funding opportunities for such an initiative remain sparse, particularly in the aftermath of US federal funding losses.

VIOLENCE PREVENTION NOTICE: Warning signs often appear before violent acts. If someone you know makes general or specific threats, shows unusual interest in weapons, or fixates on previous violent incidents, you’re not overreacting by taking action. Ask direct questions and help them connect with professional support (or alert authorities if danger is immediate). Your intervention can prevent tragedy.

Dr. Amy Cooter has studied extremist groups in the U.S. for two decades and consults globally with other experts and practitioners about ongoing and emerging threats. She has testified to U.S. Congress regarding domestic extremists’ recruitment of veterans and provided written testimony regarding militia involvement in January 6th. She has served as an expert consultant on federal hate crime trials. Dr. Cooter is the author of Nostalgia, Nationalism, and the US Militia Movement, and her work may also be found in a variety of other outlets including The Conversation, Scientific American, The Washington Post, High Country News, and more.

In addition to conducting novel research, Dr. Cooter builds on her previous teaching experience at Vanderbilt and elsewhere to conduct seminars with various stakeholders including law enforcement. She and other ICDE staff are working to ensure that rigorous work to investigate and prevent extremism harms including child exploitation online continues, as does supportive mentorship of the next generation of practitioners.

Matthew Kriner specializes in researching and analyzing militant accelerationism, US domestic violent extremism, transnational far-right extremism, extremist exploitation of digital and social media technologies, and threat assessment and radicalization. Matt is a consultant for the US Department of State, and he regularly briefs tech companies as well as US, Canadian, New Zealand, and UK law enforcement and intelligence agencies on militant accelerationism and emerging terrorist threats. Matt also serves as an expert witness for US DOJ on a prominent 764 case, as well as the Royal Canadian Mounted Police (RCMP) and the Public Prosecution Service of Canada (PPSC) on a prominent Terrorgram Collective terrorism case.

Matt’s research has been featured throughout online and written media, including NPR, NBC, Rolling Stone, The Atlantic, and more. Matt also has had his research included in governmental reporting, including two written expert testimonies to the US House’s January 6th Committee and the 2024 Countering Violent Extremism report from the US Government Accountability Office.

Pete Kurtz-Glovas (he/him) is a Researcher and Non-Profit Professional with a focus on Extremism and Democracy, bringing a unique perspective through his background in public health approaches to problems such as homelessness and addiction. He has worked with the Global Project Against Hate and Extremism conducting investigations into US and Internationally based hate groups and extremist activity, and helped launch the Community Advisory Resource and Education (CARE) Center’s pilot program while at the Polarization & Extremism Research & Innovation Lab (PERIL). Pete has provided comment on extremism in the United States to Business Insider, NPR, Al Jazeera English and has published research and analysis through the Global Network on Extremism & Technology.
