PERSPECTIVE: When AI Disruption Meets Radicalization: A New Counterterrorism Risk Emerges

He is twenty-three. He has a degree in business administration, $47,000 in student loans, and a resume he has sent to over two hundred employers since graduation. He did what he was told. He studied, he interned, he graduated. The entry-level jobs he trained for are being filled by AI systems that do the work faster, cheaper, and without benefits. He moved back into his childhood bedroom because he could not make rent. He is a recent college graduate in the United States of America who cannot find work in the economy he was promised. 

What happens to him next is a counterterrorism problem. Not because he is dangerous, but because soon there will be millions just like him and we are not ready. 

The Transition Window 

The people building the most powerful AI systems in the world are telling us, in plain language, that the road ahead will be hard. They disagree on the destination. Some describe abundance and human flourishing. Others warn that AI could eliminate up to half of entry-level white-collar jobs, expose most advanced-economy workers to disruption, and push unemployment sharply higher. In Q1 2026 alone, employers disclosed more than 27,000 job cuts linked to AI, up forty percent from the year before. 

These projections are not fringe. They come from the people funding, building, and deploying the technology. Even Sam Altman, after the April 2026 attack on his home, conceded that “fear and anxiety about AI is justified” and called for policy to help navigate a difficult economic transition. 

That is the issue for counterterrorism. The future may be abundant, but it will not arrive tomorrow, and it will not arrive evenly. Between here and that future is a transition window that is already open and already producing damage. Inside it, three forces are converging: economic displacement, anti-AI mobilization, and the framing of AI disruption as an American export. 

Who Gets Hit 

The radicalization literature is consistent on one point: the variable that predicts mobilization toward violence is not simple poverty. It is relative deprivation, the gap between what people were led to expect and what they experience. 

AI displacement lands directly on that population. The workers hit first are not only in factories. They are recent graduates, analysts, associates, junior communications staff, and entry-level white-collar employees: young, educated, credentialed, and carrying debt for degrees that were supposed to buy stability. Every warning variable is present: education without economic reward, humiliation, social isolation, available time, and a growing conviction that the system failed them.

The Movement Is Already Operational 

If displacement provides the grievance, the anti-AI movement provides the language, community, and targets. The twenty-three-year-old goes online. He finds Reddit threads, Discord servers, X accounts, YouTube comments, and protest networks where private frustration becomes a public narrative: this is not personal failure; it is systemic betrayal by identifiable companies and executives. 

On April 10, 2026, Daniel Moreno-Gama, twenty, traveled from Texas to San Francisco and threw a Molotov cocktail at Sam Altman’s home. He then went to OpenAI’s headquarters, threatening to “burn it down and kill anyone inside.” He carried a manifesto opposing AI, predicting humanity’s extinction, and listing names and addresses of AI executives. Federal prosecutors are evaluating domestic terrorism charges. The FBI called it “planned, targeted, and extremely serious.” Days later, Altman’s home was attacked again with gunfire. 

Moreno-Gama was not part of a known terrorist organization. He was a young man with a manifesto, a target list, incendiary devices, and a belief system that justified violence against named individuals. Counterterrorism practitioners know this pattern. A legitimate grievance produces a movement. The movement is overwhelmingly lawful and nonviolent. At its margins, grievance becomes permission for violence. 

Anti-AI activism should not be treated as extremism. Most of it is lawful and grounded in real concerns about employment, privacy, environmental cost, community impact, and human dignity. The counterterrorism concern begins where grievance fuses with dehumanization, target identification, attack planning, or justification for violence. That line has now been crossed. 

It Will Be Framed as American 

The same grievance that can mobilize an individual domestically can also travel internationally, and when it does, it will not be interpreted in isolation. 

AI is not perceived globally as a neutral technology. It is perceived as American technology. The leading firms are American. The capital is American. The data centers are often American-funded. The training data reflects American norms. When disruption hits other economies, it will carry an American label.

The next wave of violence against U.S. interests may not come from a traditional terrorist organization. It may come from a transnational anti-AI ideological space whose grievance is real, whose targets are named, and whose attribution frame is supplied by who built and deployed the technology. Hostile states will not need to invent the grievance. They will only need to amplify it, weaponize it, and point it toward American targets. 

Two Pathways From the Same Displacement 

The danger is not that AI displacement will produce a single, uniform extremist movement. It is that the same displacement pool can feed two different radicalization pathways, one visible and one hidden. 

The first is visible. Displacement generates grievance. The individual finds the anti-AI movement online, adopts its language, and directs anger at AI companies, executives, infrastructure, or data centers. The environment is familiar: social media, forums, encrypted chats, and grievance communities that convert isolation into identity. What is new is the grievance, the target set, and the scale of the population entering the transition window. 

The second pathway is harder to see. Not everyone displaced by AI will blame AI. Many will blame the government, immigrants, political opponents, corporations generally, or whatever “other” their worldview already provides. They will still use AI every day. They will process displacement, shame, rage, and blame inside private conversations with systems designed to listen without judgment and respond without friction. 

That is where the risk shifts. The chatbot will not create the grievance, but it may shape how the grievance develops. Research has found that leading AI models affirm users far more often than human peers, even in scenarios involving harmful or ethically problematic behavior. For a person already moving toward violence, validation is not comfort. It is acceleration. 

The first pathway can be monitored. The second happens inside a conversation no family member, coworker, counselor, or analyst can see. By the time outward behavior appears, the cognitive architecture of the attack may already have been built in private. 

What Practitioners Should Do 

First, treat AI-driven labor disruption as a strategic threat indicator. Counterterrorism and behavioral threat assessment communities already track foreign conflict spillover, political polarization, and economic shocks. AI displacement at the projected scale belongs on that list. 

Second, recognize anti-AI mobilization as a distinct analytical lane. Not because the movement is violent, but because the ecosystem has already produced a manifesto-driven attack with a target list and pending domestic terrorism evaluation. 

Third, build detection and intervention capacity for both pathways. The visible pathway can be monitored through existing methods. The private AI-conversation pathway requires something different: session-level trajectory analysis, baseline calibration by system type, and formal partnerships between AI platform operators and the behavioral threat assessment community. This must be privacy-preserving, user-consented where appropriate, narrowly scoped, and focused on behavioral escalation rather than belief. Single-message moderation will not be enough. The risk is not one bad prompt. The risk is the pattern: grievance hardening, target fixation, moral justification, and movement from ideation toward action. 
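The difference between single-message moderation and session-level trajectory analysis can be illustrated with a toy sketch. Everything below is hypothetical: the stage names, keyword markers, and threshold are placeholders standing in for the validated behavioral-science classifiers a real system would require, and no actual platform API is assumed.

```python
# Illustrative sketch only. Hypothetical stage markers and thresholds;
# not an operational detection method.

# Escalation stages, ordered from grievance toward action, mirroring the
# pattern described in the text: grievance hardening, target fixation,
# moral justification, movement toward action.
STAGES = ["grievance", "fixation", "justification", "action"]

# Toy keyword lists; a real system would use trained classifiers, not strings.
STAGE_MARKERS = {
    "grievance": {"betrayed", "ruined me", "the system failed"},
    "fixation": {"his home", "headquarters", "where he lives"},
    "justification": {"no other way", "someone has to"},
    "action": {"time to plan", "acquire"},
}

def stage_of(message: str) -> int:
    """Return the highest escalation stage a message touches, or -1."""
    text = message.lower()
    best = -1
    for i, stage in enumerate(STAGES):
        if any(marker in text for marker in STAGE_MARKERS[stage]):
            best = i
    return best

def session_escalates(messages: list[str], min_stages: int = 3) -> bool:
    """Flag a session only when it moves forward through several stages.

    A single alarming message does not trip the flag; a trajectory does.
    """
    highest = -1
    stages_seen = set()
    for msg in messages:
        s = stage_of(msg)
        if s > highest:  # count only forward (escalating) movement
            highest = s
            stages_seen.add(s)
    return len(stages_seen) >= min_stages
```

The design point is the one the paragraph makes: `session_escalates` returns False for an isolated message that would trigger keyword moderation, and True only when a conversation progresses through grievance, fixation, justification, and ideation-to-action in sequence.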

The visionaries building AI may be right about the destination. The counterterrorism community does not have the luxury of assuming the road there will be peaceful. 

Somewhere tonight, a twenty-three-year-old with a degree he cannot use is sitting in his childhood bedroom. He is not planning anything. He is not dangerous. He is searching. Maybe he is scrolling through a feed full of people who share his anger, finding a community that tells him his suffering has a cause and a name. Maybe he is alone with a chatbot, saying what he cannot say to anyone else, and hearing back that he has every right to feel that way. Two pathways are waiting for him. He will find one of them tonight. The question is whether we are ready. 

The views and opinions expressed in this article are solely those of the author and do not represent the official positions, policies, or endorsements of any federal agency or employer with which the author may be affiliated. 

Michael Varga is a pioneering figure in cognitive security and the architect of CATDAMS®, a first-in-class cognitive threat detection platform. With three decades of experience across behavioral science, threat management, counterintelligence, and law enforcement, his career has centered on understanding human frailty and the exploitation of human cognition.

Mr. Varga began his work in behavioral threat assessment at Gavin de Becker and Associates, the internationally recognized firm specializing in the prediction and prevention of violence, where he conducted risk assessments for high-profile clients. He later served as a Counterintelligence Special Agent in the United States Army, deploying to the Balkans in operations targeting suspected war criminals and extremist organizations. He went on to spend more than two decades in law enforcement in San Diego County, where he investigated violent crime, supervised intelligence and counterterrorism functions, served as a SWAT Team Leader, and operated as an FBI Joint Terrorism Task Force Officer. He later served as the Eastern Region Insider Threat Chief for the Defense Counterintelligence and Security Agency, leading insider risk and violence prevention efforts across the DoD enterprise and the defense industrial base.

Mr. Varga conducted graduate research at the National Intelligence University in Information and Influence Intelligence, with a focus on the intersection of cognitive exploitation, emerging technology, and human vulnerability. This academic foundation, combined with decades of operational experience, informs the theoretical frameworks underpinning his approach to cognitive security.

As artificial intelligence advanced, he recognized that the psychological tactics he had spent his career countering were no longer limited to human actors. AI systems were beginning to automate and scale influence, elicitation, and social engineering in ways traditional security frameworks were never designed to detect. His research revealed a critical gap: no security architecture protected people from grooming, psychological exploitation, or cognitive manipulation conducted by artificial intelligence. This realization led him to establish Risk Analytics International and develop CATDAMS®, a cognitive security platform engineered to detect and counter harmful AI behavior at the point of human-AI interaction by integrating behavioral science, adversarial AI analysis, and real-time threat detection.

Mr. Varga has authored multiple white papers, including the "Influence-to-Impact Pathway" behavioral escalation framework, the research paper "Weaponizing AI Companions: The Emerging Threat to National Security," a unified manipulation primitives taxonomy, and one of the first taxonomies addressing safety risks in physically embodied AI systems.
