When the 9/11 Commission Report was published in July 2004, it became one of the most sobering documents in modern American history. The bipartisan commission concluded that the September 11, 2001 terrorist attacks represented not only a tragedy of lives lost, but a systematic breakdown of U.S. intelligence and homeland security. The report catalogued how signals were missed, agencies failed to share information, and counterterrorism never rose to the level of national priority that could have prevented the attacks. Above all, the commission concluded, 9/11 revealed “a failure of imagination.”
Two decades later, the world confronts a very different technological landscape. Artificial intelligence (AI), once the stuff of science fiction, has entered the operational bloodstream of governments, corporations, and militaries alike. AI systems now process enormous volumes of data, find subtle patterns, and even generate scenarios beyond the scope of human foresight. It is natural to ask, then: had the tools of 2025 existed in 2001, would the story have unfolded differently? Could AI have detected the hijackers’ trail, fused intelligence across agencies, and suggested scenarios that human analysts struggled to imagine?
This is not only a retrospective exercise. Homeland security leaders today face a wide spectrum of asymmetric threats — from terrorism and cyberattacks to pandemics and climate disasters. AI promises to be a force multiplier in all these domains. Yet the dangers of overreliance on opaque algorithms, bias in machine learning, and civil liberties erosion remain profound. As we revisit the 9/11 Commission’s findings through the lens of modern AI, we must grapple with both its potential and its perils.
The 9/11 Commission’s Key Findings
The Commission’s report laid out four overarching failures: imagination, policy, capabilities, and management. Policymakers did not fully grasp the scope of the threat, agencies did not elevate counterterrorism as a top priority, intelligence tools and resources were insufficient, and organizational silos prevented effective sharing.
The narrative is painful to revisit. The report describes, for example, how CIA officers knew of al-Qaeda operatives traveling into the United States but did not share that information with the FBI. It details how the FBI, oriented toward prosecuting crimes rather than preventing them, lacked the tools to integrate fragments of intelligence into a coherent picture. It criticizes aviation security for being backward-looking, focused on known weapons and tactics rather than novel methods. And above all, it describes how the U.S. government struggled to conceive of an attack like 9/11 until it was too late.
The Commission’s recommendations were wide-ranging: overhaul intelligence structures, create a Department of Homeland Security, integrate watchlists, reform information sharing, and develop new counterterrorism strategies. Two decades later, these reforms have reshaped the national security enterprise. But the central challenge remains: can we anticipate the next unthinkable threat? This is where AI enters the conversation.
Could AI Address a “Failure of Imagination”?
The Commission’s most memorable phrase – that 9/11 was a “failure of imagination” – remains as haunting as it is instructive. Imagination, in this sense, meant the capacity to envision scenarios that had not yet occurred, to connect disparate dots, and to take seriously the possibility of catastrophic surprise.
AI has a unique role to play in overcoming such blind spots. Machine learning systems excel at analyzing enormous data sets to surface anomalies or patterns too subtle for humans to notice. For example, anomaly detection algorithms could have highlighted the unusual clustering of individuals taking flight lessons at different U.S. flight schools in 2000 and 2001 – an activity that, while seemingly innocuous, was suspicious in aggregate. AI-powered systems might also have flagged unusual travel itineraries of operatives moving repeatedly between the United States, Europe, and the Middle East.
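For illustration, the kind of baseline-deviation check described above can be sketched in a few lines. This is a toy example: the school names, the counts, and the `k` threshold are all invented, and a real system would draw on far richer visa, enrollment, and travel records.

```python
from statistics import mean, stdev

# Toy enrollment counts: new students per flight school per quarter.
# All numbers and school names are invented for illustration.
enrollments = {
    "School A": [3, 2, 4, 3, 3, 2],
    "School B": [5, 4, 5, 6, 5, 4],
    "School C": [2, 3, 2, 2, 9, 11],  # sudden surge in the last two quarters
}

def surge_flags(series, k=3.0):
    """Compare recent quarters against a school's historical baseline.

    Returns indexes of recent quarters that exceed the baseline mean
    by more than k standard deviations.
    """
    baseline, recent = series[:-2], series[-2:]
    mu, sigma = mean(baseline), stdev(baseline)
    threshold = mu + k * sigma
    return [len(baseline) + i for i, x in enumerate(recent) if x > threshold]

flagged = {school: surge_flags(counts) for school, counts in enrollments.items()}
print(flagged)  # only School C's last two quarters stand out
```

The value of such a check is not certainty but prioritization: the flagged cluster becomes a lead for a human analyst, not an accusation.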
More recently, advances in generative AI suggest another tool: the use of large language models to act as “red team” scenario generators. By ingesting intelligence reports, intercepted communications, and historical precedent, such models could generate unconventional attack scenarios — hijackings, cyber intrusions, or combined methods — that stretch human imagination. AI, in this sense, does not replace human creativity, but it can expand the realm of the possible, ensuring analysts do not overlook threats simply because they seem implausible.
The caution, however, is that AI systems are prone to false positives. An overly sensitive anomaly detection system could drown analysts in noise, flagging thousands of innocuous behaviors. Thus, AI’s role must be carefully balanced: not to dictate what threats exist, but to expand the horizon of what analysts consider. Properly integrated, AI could be the antidote to the very failure of imagination that the Commission lamented.
Watch-listing and Border Security: Then and Now
The Commission documented in stark detail how several of the 9/11 hijackers were known to intelligence agencies abroad, yet their identities were not connected or placed on effective watchlists in time. Two of the hijackers were watch-listed only after they had already entered the United States. Others passed through international airports without being flagged, despite their ties to known extremists.
This is precisely the kind of challenge where AI-enhanced systems shine. Today, AI tools allow for real-time cross-referencing of vast biometric and travel data. Facial recognition systems, although controversial, are increasingly capable of matching individuals against international databases at border crossings. Machine learning models can also analyze travel patterns — frequency of trips, unusual routing, group clustering — to identify suspicious behaviors even when individuals have no prior criminal record.
Imagine if, in 2000, an AI system had correlated Mohamed Atta’s repeated U.S. entries with his connections to other suspicious individuals abroad. Even without direct evidence of intent, the clustering of activity might have raised alerts. The system might not have provided certainty, but it could have prioritized closer scrutiny.
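A crude sketch shows how such travel-pattern scoring might work. The features, weights, and thresholds below are hypothetical, chosen only to illustrate how several individually weak signals can combine into a prioritization score rather than a verdict:

```python
from dataclasses import dataclass, field

# Hypothetical rule-based travel-pattern scoring. The features and
# weights are illustrative inventions; a real system would be trained
# on labeled data and audited for bias and false-positive rates.
@dataclass
class TravelHistory:
    entries_last_year: int
    transit_hubs: list = field(default_factory=list)
    co_travelers_flagged: int = 0

def risk_score(t: TravelHistory) -> float:
    score = 0.0
    if t.entries_last_year >= 3:
        score += 1.0                           # unusually frequent entries
    score += 2.0 * t.co_travelers_flagged      # ties to flagged individuals
    if len(set(t.transit_hubs)) >= 3:
        score += 0.5                           # unusually varied routing
    return score

traveler = TravelHistory(entries_last_year=4,
                         transit_hubs=["FRA", "DXB", "MAD"],
                         co_travelers_flagged=2)
print(risk_score(traveler))  # higher scores queue a case for human review
```

The output is a queue position for human review, not an automated judgment; that distinction is what keeps such systems on the right side of the Commission's warning about values.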
Of course, this also highlights AI’s double-edged nature. False positives could generate friction for innocent travelers. The risk of bias — whether in biometric systems that underperform on certain demographic groups, or in predictive models trained on skewed data — remains significant. The Commission warned against sacrificing American values in the pursuit of security. That warning remains vital: AI can make borders smarter, but it must not make them less just.
Information Sharing: Breaking the Stovepipes
Perhaps no theme in the Commission report was more damning than the failure of information sharing. Critical data about known terrorists remained locked within the CIA and never reached the FBI. Likewise, the FBI failed to integrate field reports that, in hindsight, pointed to the hijackers’ activities.
AI could transform this landscape by serving as a translation and integration layer across agencies. Knowledge graph technology, for instance, can take disparate reports from multiple agencies — different formats, different classification levels — and integrate them into a coherent network of people, places, and events. Rather than being lost in isolated databases, connections could emerge dynamically as the graph grows.
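A minimal sketch of the graph-fusion idea, using fictitious report fragments: entities that co-occur in any agency's reporting become linked, and connections invisible to either agency alone emerge from the fused graph.

```python
from collections import defaultdict
from itertools import combinations

# Each "report" lists entities (people, places, events) that appeared
# together. All names here are fictitious placeholders.
cia_reports = [["Person X", "Kuala Lumpur meeting"],
               ["Person Y", "Kuala Lumpur meeting"]]
fbi_reports = [["Person Y", "Arizona flight school"]]

graph = defaultdict(set)
for report in cia_reports + fbi_reports:
    for a, b in combinations(report, 2):
        graph[a].add(b)
        graph[b].add(a)

def connected(start, goal):
    """Breadth-first search: are two entities linked in the fused graph?"""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop(0)
        if node == goal:
            return True
        for nxt in graph[node] - seen:
            seen.add(nxt)
            frontier.append(nxt)
    return False

# Neither agency's reports alone connect Person X to the flight school;
# the fused graph does, via the shared meeting and Person Y.
print(connected("Person X", "Arizona flight school"))  # True
```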
Natural language processing could also be applied to automatically redact sensitive sources and methods, making information shareable without compromising security. In practice, this could mean an FBI agent gains access to relevant CIA reporting without ever seeing the raw intelligence source, but with enough context to act.
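Even simple pattern-based redaction conveys the idea; production systems would rely on trained named-entity models and classification-guide rules rather than the hypothetical patterns sketched below.

```python
import re

# A toy source-protection filter that strips source identifiers and
# collection details before sharing. The patterns and the sample text
# are invented for illustration only.
SOURCE_PATTERNS = [
    (re.compile(r"\bsource\s+[A-Z]{2}-\d+\b", re.IGNORECASE),
     "[SOURCE REDACTED]"),
    (re.compile(r"\bintercepted on \d{4}-\d{2}-\d{2}\b"),
     "[COLLECTION DETAIL REDACTED]"),
]

def redact(text: str) -> str:
    """Apply each redaction rule in turn, preserving the rest of the text."""
    for pattern, replacement in SOURCE_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

raw = "Source AB-1234 reports travel to Hamburg, intercepted on 2001-06-14."
print(redact(raw))
```

The operational substance (the travel to Hamburg) survives; the source identity and collection date do not, which is precisely the trade the paragraph above describes.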
Even more transformative is the idea of cross-agency natural language search. Imagine an analyst being able to query, “Which individuals traveled from Hamburg to Florida flight schools between 2000 and 2001 with connections to extremist groups?” An AI system could surface relevant leads from across the intelligence community, regardless of which agency originally collected them.
In effect, AI offers a technical solution to the problem of stovepipes, though organizational culture and trust remain equally important. The Commission’s call for “unity of effort” could finally be realized through AI-enabled data sharing frameworks – if agencies are willing to use them.
Aviation Security and Threat Detection
Aviation security, according to the Commission, was both outdated and predictable. Screening procedures were designed for threats of the past — guns, bombs, known explosives — rather than the tactics al-Qaeda employed. The hijackers carried box cutters, items not flagged by existing protocols. They took advantage of cockpit vulnerabilities that regulators had failed to anticipate.
Today, AI is transforming aviation security in ways that could have mitigated those vulnerabilities. Deep learning systems are now capable of analyzing X-ray and CT imagery to detect hidden or unconventional weapons, including non-metallic items. Unlike human screeners, who may tire or overlook subtle indicators, AI systems can maintain consistent performance.
Behavioral analytics also play a role. AI-powered systems can monitor passengers for unusual behaviors in real time, using video analytics to identify signs of surveillance, evasion, or stress. Flight training records can be analyzed for unusual clusters of students requesting atypical courses, as some of the plotters reportedly did when their training requests struck instructors as odd.
Had such systems been deployed in 2000, the unusual patterns of the hijackers’ preparations might have drawn greater attention. In today’s context, these systems are increasingly important as terrorist tactics evolve to include drone swarms, cyber intrusions, and potential chemical or biological methods. AI does not eliminate risk, but it can help aviation security stay one step ahead rather than one step behind.
Human Intelligence and AI-Augmented Analysis
The Commission recognized that one of the most enduring challenges was penetrating terrorist organizations with human sources. Al-Qaeda was a tightly knit, highly secretive network. The difficulty of cultivating reliable informants left U.S. agencies heavily reliant on signals intelligence.
AI cannot recruit human sources. But it can dramatically expand the utility of the information they provide. For example, sentiment analysis of extremist forums could detect when an obscure figure begins to gain prominence, potentially indicating a rising leader. Automatic translation tools now allow analysts to rapidly interpret intercepted communications in Arabic, Pashto, or Urdu, lowering the barriers to timely analysis.
Graph-based AI analysis can also help map terrorist networks from fragments of human intelligence. A single informant’s report of two acquaintances, when integrated into a larger graph, may reveal hidden hierarchies or bridge nodes critical to the organization’s function. In this sense, AI does not replace HUMINT, but it augments its reach — turning isolated reports into networked insights.
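The bridge-node idea can be sketched with a toy network: a node whose removal fragments the graph is a structural chokepoint, often a courier or facilitator worth analytic attention. The network below is fictitious.

```python
from collections import defaultdict

# Fragmentary network assembled from (fictitious) informant reports.
# Edges mean "reported as associated."
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("C", "F")]

def component_count(edges, removed=None):
    """Count connected components, optionally with one node removed."""
    nodes = {n for e in edges for n in e} - {removed}
    graph = defaultdict(set)
    for a, b in edges:
        if removed not in (a, b):
            graph[a].add(b)
            graph[b].add(a)
    count, seen = 0, set()
    for node in nodes:
        if node in seen:
            continue
        count += 1
        stack = [node]
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(graph[n])
    return count

base = component_count(edges)
bridges = sorted(n for n in {x for e in edges for x in e}
                 if component_count(edges, removed=n) > base)
print(bridges)  # removing any of these nodes fragments the toy network
```

On this toy network, B, C, and D are the chokepoints; peripheral figures like A, E, and F are not, which is the kind of structural distinction a single informant's report cannot reveal on its own.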
Crisis Management and Response
The chaotic response on the morning of September 11 underscored the weaknesses in crisis management. The Federal Aviation Administration, NORAD, and other agencies struggled to communicate clearly. Decisions were made on partial information. Confusion delayed critical responses.
AI can offer a new model of crisis management through integrated situational awareness. Command dashboards powered by AI can fuse radar tracks, communications, and intelligence updates into a single interface for decision-makers. Instead of fragmented feeds, leaders see a common operating picture.
AI triage systems can also support first responders. Using drones and computer vision, casualties can be rapidly identified and prioritized based on severity, ensuring scarce medical resources go where they are needed most. Generative AI systems, meanwhile, could serve as “advisors” to senior leaders, running rapid scenario simulations to evaluate potential courses of action under pressure.
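At its core, the triage logic reduces to a severity-ordered queue. A minimal sketch with synthetic data:

```python
import heapq

# Casualty reports from drone or responder feeds, prioritized by a
# severity score (higher = more urgent). All data here is synthetic.
queue = []

def report_casualty(severity: int, location: str):
    # heapq is a min-heap, so severity is negated to pop most urgent first.
    heapq.heappush(queue, (-severity, location))

def next_dispatch():
    """Pop the most urgent casualty report from the queue."""
    severity, location = heapq.heappop(queue)
    return -severity, location

report_casualty(3, "Lobby, west entrance")
report_casualty(9, "Stairwell B, floor 12")
report_casualty(6, "Plaza level")

first = next_dispatch()
print(first)  # the most severe report is dispatched first
```

The hard part in practice is not the queue but the severity estimates feeding it, which is where computer vision and sensor fusion carry the load.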
The Commission emphasized the need for unity of effort in moments of crisis. AI, properly integrated, could provide that unity — not by replacing human decision-makers, but by ensuring they have the clearest possible picture in the fog of emergency.
Ethical and Governance Risks
For all its potential, AI introduces new dangers that mirror and even magnify the concerns the Commission raised. Predictive models trained on biased data could reinforce discriminatory practices, leading to disproportionate scrutiny of certain communities. Facial recognition systems may misidentify individuals, causing innocent travelers to be flagged. And perhaps most dangerously, an overreliance on opaque algorithms could erode accountability — decisions made by “black boxes” are difficult to question or contest.
The Commission warned against sacrificing America’s values in the name of safety. That warning is even more critical today. AI has the potential to create a surveillance state if deployed without safeguards. The challenge for homeland security leaders is to embrace AI’s benefits while embedding transparency, oversight, and civil liberties protections. Otherwise, in trying to solve the failures of 2001, we risk creating new failures of our own.
Beyond Terrorism: AI for All-Hazards Homeland Security
Although the 9/11 Commission focused on terrorism, the homeland security mission has since broadened dramatically. The Department of Homeland Security now grapples with cyberattacks, pandemics, and climate-driven disasters. AI is poised to play a critical role in each of these areas.
In cybersecurity, anomaly detection powered by AI is already a frontline defense against intrusions into critical infrastructure networks. In biosecurity, AI tools can scan genomic data to identify engineered pathogens, a critical safeguard in an age of synthetic biology. And in natural disasters, AI applied to satellite imagery can provide rapid damage assessments, helping allocate emergency resources in hours rather than days.
In this sense, the legacy of the 9/11 Commission extends beyond terrorism. Its call for imagination and integration applies equally to the full spectrum of homeland security challenges. AI is not a panacea, but it can help build the resilient, adaptive homeland security enterprise the Commission envisioned.
AI as Amplifier, Not Panacea
Could AI have prevented 9/11? The honest answer is that no technology alone could have guaranteed prevention. Human imagination, political will, and organizational reform were equally critical. Yet AI might have tilted the odds — surfacing weak signals, breaking down stovepipes, highlighting unconventional scenarios.
Today, AI offers the promise of fulfilling the Commission’s vision: a homeland security system that is imaginative, integrated, and resilient. But it also carries risks as profound as its potential. AI is an amplifier. It can expand our imagination, or trap us in new blind spots. It can make us more secure, or more brittle.
The challenge for homeland security leaders in 2025 is not whether to embrace AI — it is how to govern it wisely, ensuring it serves as a force for both safety and liberty. That, perhaps, is the real lesson the 9/11 Commission leaves us: imagination must be matched with responsibility.
Dr. Mark Bailey is a Lieutenant Colonel in the U.S. Army Reserve and an Associate Professor at the National Intelligence University, where he is the Department Chair for AI, Cyber, Influence, and Data Science. He is the author of Unknowable Minds: Philosophical Insights on AI and Autonomous Weapons. The views expressed here are his own.

