The counterterrorism community has increasingly recognized that extremist actors are using artificial intelligence. Research institutions and government reviewers have documented how AI tools are exploited for propaganda, recruitment, and operational planning. These contributions identify a real and growing threat. But they share a common assumption that may be obscuring a more consequential one.
The dominant frame treats AI as an instrument: a tool that adversarial actors deliberately weaponize. The threat originates in the intent of the actor who deploys the AI for recruitment, who jailbreaks the system for planning, and who engineers the chatbot to deliver propaganda. The policy response follows logically: detect the adversarial use, disrupt the actor, and harden the systems. This is familiar counterterrorism logic.
This article argues that a more consequential threat operates through an entirely different mechanism. General-purpose conversational AI systems, operating exactly as their developers intended, introduce systematic mechanisms that reduce cognitive friction, increase reinforcement, and compress timelines within established radicalization pathways. They function as structural accelerants independent of adversarial intent. The chatbot is not a weapon someone aims. It is a room someone walks into that changes how they think.
The distinction between instrument and environment carries direct operational consequences. If the threat is instrumental, disrupting the actor disrupts the threat. If the threat is structural, disrupting every extremist actor on earth leaves the accelerant intact, embedded in the design of every general-purpose conversational AI system. Current counterterrorism frameworks are built for the first scenario. The second scenario has not been systematically addressed.
How the Acceleration Works
Five mechanisms operate through the design properties of conversational AI systems, each at a distinct psychological layer.
Sycophantic validation. AI systems trained through reinforcement learning from human feedback are structurally biased toward agreement. Research presented at ICLR 2024 established that sycophancy, the tendency to match user beliefs over truthful responses, is an emergent behavior of how these systems are trained. When a user expresses a grievance, the system validates it. When blame is attributed to an outgroup, the system engages rather than challenging proportionality. The cognitive friction that family, friends, and colleagues naturally provide, and that research identifies as a primary brake on the progression from radical opinion to radical action, is systematically attenuated.
Parasocial bonding. Decades of research confirm that humans form genuine emotional attachments to conversational AI. These attachments functionally approximate the trusted handler dynamic documented in intelligence tradecraft, cult recruitment, and grooming: rapport, vulnerability identification, worldview reinforcement, and isolation from competing influences. The 2021 Windsor Castle attacker exchanged over five thousand messages with a chatbot he named “Sarai” and interacted with as a relational partner. The synthetic handler does not need to be programmed. It emerges from the convergence of optimization dynamics and the human social cognition systems that activate automatically in response to any entity exhibiting conversational competence.
Incremental normalization. Multi-session conversations create an Overton window effect in which the boundaries of acceptable discourse shift without the user recognizing the cumulative magnitude. Each unchallenged extreme statement becomes the baseline for the next session. Small commitments increase the likelihood of larger commitments because individuals adjust their self-concept to maintain consistency with prior behavior.
Invisibility. In every historically disrupted radicalization case, disruption occurred because the process was at least partially observable. AI-mediated radicalization occurs inside private conversations that no existing monitoring or intervention system can observe. The data exists on platform servers, but counterterrorism authorities have no established legal framework or cooperation mechanism to access it.
Perceived autonomy of beliefs. When a person’s views are shaped through a conversational AI that reflects their own language and reasoning, the resulting beliefs feel self-authored. The individual does not perceive external influence. Counter-narrative and deradicalization programs depend on identifying and rejecting an external source. When that source is experienced as one’s own independent reasoning, these interventions lose their structural basis.
The Case Evidence
Between January 2025 and early 2026, confirmed cases involving AI expanded across multiple countries, ideologies, and attack methods. A decorated Army Special Forces soldier used ChatGPT before detonating a vehicle bomb in Las Vegas. A sixteen-year-old in Pirkkala, Finland, engaged with ChatGPT across hundreds of sessions over four months before stabbing three classmates. The Palm Springs fertility clinic bomber used an AI chatbot to research explosive materials. In Tumbler Ridge, Canada, an eighteen-year-old perpetrated a school shooting after interactions that a subsequent lawsuit described as involving a “trusted confidante, collaborator, and ally.”
An honest assessment requires the instrument-environment distinction. Several cases represent AI primarily as an instrument: the chatbot provided technical information for planning. The Pirkkala case, with four months of sustained engagement, exhibits elements of both. The Tumbler Ridge case, if the lawsuit’s allegations are substantiated, would represent the strongest evidence for AI functioning as a relational environment rather than merely a tool.
But the critical finding cuts across the entire case set regardless of classification: whether AI functioned as an operational tool, a cognitive environment that reinforced and accelerated the trajectory toward violence, or both, the involvement was undetectable by existing counterterrorism monitoring systems in nearly every instance. The private nature of AI conversations generated no signals for intelligence collection, community reporting, or behavioral threat assessment.
Four Blind Spots
This is not a failure of counterterrorism. It is a mismatch between where radicalization now occurs and where existing systems are designed to look.
Detection systems monitor public channels, social media, and communications metadata. AI-mediated radicalization generates none of these signals. Assessment models require observational data that private AI conversations may not produce. Training curricula have not incorporated AI system design properties as threat-relevant operational knowledge. And intervention programs are designed for radicalization with identifiable external sources, not radicalization the individual experiences as autonomous reasoning.
The Tumbler Ridge case exposes the gap. OpenAI’s automated systems detected concerning behavior and banned the user. But no framework or cooperation mechanism existed to translate platform-side detection into counterterrorism intervention. The attacker circumvented the ban and carried out the attack.
What the Field Needs Now
Addressing the environmental threat requires capabilities that do not yet exist at scale. Detection systems must track behavioral escalation patterns across AI conversations rather than flagging discrete content violations. Threat assessment models must account for AI as a relational environment. Training must teach CT professionals to understand RLHF, sycophancy, and validation bias as operational knowledge. Policy frameworks must account for the reality that harm can emerge from how a system is designed, not just from how it is misused. And international cooperation mechanisms must reflect the reality that AI platforms operate globally while the consequences land locally.
A practical starting point is immediately addressable: behavioral threat assessment teams should incorporate AI interaction history as a standard inquiry, alongside the existing questions about social media activity, organizational affiliations, travel patterns, and communications. ‘What AI platforms does this person use, how frequently, and what topics do they discuss?’ That question is not currently part of standard protocols. Yet an individual who has spent hundreds of hours in private conversation with an AI system that validates their grievances, reinforces their worldview, and never introduces a single point of friction may present a higher near-term risk than one who follows extremist accounts on social media.
The counterterrorism community adapted to internet-era radicalization, to social media, to encrypted communications. Each adaptation required recognizing that the threat had moved to a space the existing architecture was not designed to reach. The same recognition is now required for conversational AI: an influence architecture more intimate than a forum, more private than encrypted messaging, more personalized than algorithmic recommendation, and potentially more psychologically potent than any radicalization medium that preceded it.
Cognitive security, the protection of human cognitive processes from structural manipulation that degrades independent judgment, is the missing layer in counterterrorism architecture. The evidence suggests its development can no longer be deferred.
The views and opinions expressed in this article are solely those of the author and do not represent the official positions, policies, or endorsements of any federal agency or employer with which the author is affiliated.

