Artificial intelligence is becoming an integral part of the systems that sustain daily life. Utilities use it to forecast energy demand, transportation networks rely on it to route traffic, and cybersecurity teams deploy it to detect intrusions at machine speed. The trend is unmistakable: the nation’s critical infrastructure is being woven together with machine learning algorithms.
Yet this development carries a risk that is often underappreciated. Unlike older technologies, AI, and machine learning in particular, does not simply follow programmed instructions. It learns patterns from massive amounts of data and then applies those patterns in ways that are not always predictable. When that unpredictability is introduced into the power grid, into air traffic control, or into the systems defending critical networks, the consequences are not just technical glitches. They are potential national crises.
The central challenge is what AI researchers call the control problem: how do we ensure that AI systems behave in ways that remain safe, reliable, and aligned with human intentions? For critical infrastructure, this question is not just theoretical, but urgently practical.
When AI Becomes the Operator
A decade ago, infrastructure automation meant a set of rules: if demand spikes, open this circuit; if a server shows suspicious activity, quarantine it. AI changes this equation. Instead of following fixed instructions, an AI system identifies correlations that even its designers may not fully understand.
Consider the electric grid. Utilities now deploy AI to balance loads across a patchwork of renewable and traditional energy sources. The models learn consumption patterns, predict weather impacts, and decide how to route power across thousands of substations. Most of the time, this works without issue. But when the system encounters conditions it has never seen before – an unprecedented heat wave, or a cyberattack injecting false data – it may react in ways no human operator can anticipate. A well-trained algorithm may conclude, with statistical confidence, that cutting off power to a region is the “optimal” choice, even if it creates cascading failures.
In transportation, similar risks emerge. AI could be integrated into air traffic management to help optimize flight paths and reduce fuel consumption. But a model trained on historical patterns might falter in a rare crisis – a volcanic eruption disrupting airspace, or a coordinated cyberattack feeding the system misleading inputs. What looks like efficiency in routine conditions becomes brittleness under stress.
Cyber defense brings its own hazards. Machine learning intrusion detection systems can spot subtle anomalies in network traffic that humans would miss. But adversaries have learned to craft attacks specifically designed to evade AI systems, slipping past defenses by exploiting their blind spots. In some cases, attackers could deliberately “poison” the training data so that the model itself becomes biased toward overlooking certain behaviors. The very tools designed to secure networks become an avenue of compromise.
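The poisoning risk can be made concrete with a toy sketch. The detector below is a deliberately simplified Python illustration, not any real intrusion-detection product: it flags traffic that exceeds a mean-plus-three-standard-deviations threshold learned from training data, and a handful of attacker-inflated training samples is enough to stretch that threshold until a later attack passes unnoticed.

```python
import statistics

def fit_threshold(samples, k=3.0):
    """Fit a simple mean + k*stdev anomaly threshold to 'normal' traffic volumes."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return mu + k * sigma

# Clean baseline: typical per-minute connection counts (illustrative values).
clean = [100, 102, 98, 101, 99, 103, 97, 100, 102, 98]
clean_threshold = fit_threshold(clean)

# Poisoned baseline: the attacker slips a few inflated samples into training,
# stretching the threshold so later attack traffic looks "normal".
poisoned = clean + [400, 420, 410]
poisoned_threshold = fit_threshold(poisoned)

attack_volume = 250
print(attack_volume > clean_threshold)     # detector trained on clean data flags it
print(attack_volume > poisoned_threshold)  # poisoned detector misses the same attack
```

Real intrusion-detection models are far more complex, but the failure mode is the same: whoever can influence the training data can influence what the defender never sees.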
The Adversary’s Advantage
These vulnerabilities present opportunities for adversaries. Sophisticated state actors and criminal groups alike understand that AI systems are only as good as their training data and their assumptions.
Imagine an adversary seeding false data into the sensors that feed an energy-management AI. The system could be tricked into misallocating resources on a sweltering summer day, triggering rolling blackouts. Or consider an attacker spoofing GPS signals used by AI-guided transportation systems. The result could be gridlock in major cities, slowing military mobilization or disrupting emergency services.
Even when attacks are not technically successful, the mere perception of AI-driven failure can be damaging. A blackout or a transportation accident attributed to an “algorithm error” erodes public trust in the institutions meant to safeguard daily life. Disinformation campaigns can amplify these doubts, portraying the government as reckless for entrusting essential services to machines that ordinary citizens do not understand.
Building Safeguards into the System
The answer is not to abandon AI. The efficiency gains are too significant, and in some cases, the complexity of modern infrastructure demands machine learning. The real challenge is to build safeguards that anticipate failure and contain the risks when failure occurs.
The first safeguard is redundancy. AI should augment, but not replace, traditional controls. Manual overrides, independent monitoring systems, and clear fallback protocols are essential. A human operator must be able to step in and reassert control when an AI system produces unexpected recommendations. This requires more than putting a person “in the loop.” It means designing systems so that human judgment can override machine speed when it matters.
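One way to make "human judgment can override machine speed" concrete is a bounded-autonomy rule: the AI's recommendation is applied automatically only inside a pre-approved envelope, and anything outside that envelope reverts to the last safe state and escalates to an operator. The sketch below is illustrative Python; the setpoint names and the 10 percent limit are assumptions, not drawn from any real grid control system.

```python
def dispatch_setpoint(ai_recommendation, last_safe_setpoint, max_step=0.10):
    """
    Accept an AI load-dispatch recommendation only if it stays within a
    bounded step of the last human-approved setpoint; otherwise hold the
    safe value and flag the decision for operator review.
    """
    deviation = abs(ai_recommendation - last_safe_setpoint) / last_safe_setpoint
    if deviation <= max_step:
        return ai_recommendation, False   # within envelope: apply automatically
    return last_safe_setpoint, True       # out of envelope: hold safe value, escalate

# A drastic recommendation (cut load to 55% of the approved level) is not
# applied at machine speed; it is held and routed to a human.
setpoint, needs_review = dispatch_setpoint(ai_recommendation=0.55, last_safe_setpoint=1.0)
print(setpoint, needs_review)
```

The design choice matters: the machine is free to act quickly within limits humans have already vetted, and slow, deliberate judgment takes over precisely when the system proposes something unprecedented.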
The second safeguard is rigorous testing. Just as cybersecurity teams run penetration tests, AI systems should be subjected to “red-team” exercises that deliberately stress their assumptions. These exercises should introduce adversarial data, simulate rare events, and force the system into edge cases. Only by breaking these systems in controlled environments can we begin to understand their vulnerabilities.
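A red-team harness for this kind of stress testing is simple in outline: enumerate hostile or rare inputs, run them through the model, and check invariants that no sane output should ever violate. The Python below is a toy sketch; the forecaster stub and the invariant bounds are invented for illustration.

```python
def model_predict(demand_mw):
    """Stand-in for a learned load forecaster (hypothetical; real models are opaque)."""
    return 1.02 * demand_mw  # naive: extrapolates the recent trend

def red_team_cases():
    """Edge cases a red team might inject: sensor dropout, spikes, corrupt readings."""
    return [0.0, -50.0, 1e6, float("nan")]

def violates_invariants(prediction):
    """Flag outputs no grid operator should ever act on: NaN, negative, or absurd."""
    return prediction != prediction or prediction < 0 or prediction > 2e5

failures = [x for x in red_team_cases() if violates_invariants(model_predict(x))]
print(f"{len(failures)} edge cases produced unsafe outputs")
```

Each failure found this way in a controlled environment is one fewer surprise during a real heat wave or a real attack.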
The third safeguard is transparency and accountability. Even if the inner workings of a neural network remain opaque, operators need clear records of what the system did and why it appeared to do so. Logging, auditing, and after-action review are critical, not only for immediate response but for building institutional learning. When an AI system fails, the question should not be “why did the machine fail?” but “how do we prevent the same blind spot from recurring?”
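The logging discipline described here amounts to recording, for every consequential decision, what the model saw, what it did, and which model version did it. A minimal sketch, with hypothetical field names and identifiers, might look like:

```python
import json
import time

def log_decision(log, inputs, action, model_version):
    """Append an auditable record of what the system saw and what it did."""
    log.append({
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "action": action,
    })

audit_log = []
log_decision(audit_log,
             inputs={"region": "west", "forecast_mw": 4200},  # hypothetical values
             action="shed_load",
             model_version="lf-2.3")                          # hypothetical version tag

# After an incident, reviewers can replay exactly what the model saw and did:
print(json.dumps(audit_log[0], indent=2))
```

Even when the model's internal reasoning stays opaque, a complete input-and-action trail turns "why did the machine fail?" into an answerable question for the after-action review.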
Finally, oversight must be institutionalized. Regulators and policymakers should treat AI in critical infrastructure as a high-risk technology. Just as nuclear facilities and aviation systems are subject to extraordinary scrutiny, AI-mediated power and transportation systems require standards, reporting requirements, and compliance regimes. This is not simply a technical issue but a governance issue.
Resilience as the Ultimate Goal
The goal is not perfect control, which is unattainable. The goal is resilience: the ability of critical infrastructure to absorb shocks, adapt under stress, and recover quickly. AI can be part of that resilience, but only if it is embedded in a framework that expects failure and prepares for it.
This perspective shifts the conversation from trust to stewardship. Instead of asking, “Can we trust the AI?” we should be asking, “Have we built the infrastructure so that when the AI errs – as it inevitably will – the consequences are limited, and recovery is swift?”
Examples from recent years underscore the point. The 2021 Texas power crisis revealed how infrastructure systems can fail when confronted with unanticipated stressors. Although AI was not the central cause, it is easy to imagine how reliance on poorly tested models could compound such a crisis. In aviation, the Boeing 737 MAX tragedies demonstrated how automated systems, insufficiently transparent to pilots, can have deadly consequences. These are cautionary tales that foreshadow what could occur as AI becomes more deeply embedded in critical infrastructure.
Securing the Unpredictable
Artificial intelligence is here to stay in America’s critical infrastructure. It offers speed, efficiency, and capabilities that no human workforce alone can match. But its unpredictability is not a side effect – it is inherent to how it works. That unpredictability is manageable only if it is recognized, tested, and contained.
For homeland security professionals, the mandate is clear. Safeguards must be built in from the start, oversight must be strengthened, and resilience must be prioritized. The AI control problem is not an abstract research challenge. It is a practical security issue, unfolding right now in the power plants, transportation networks, and cyber defenses that sustain our way of life.
The future of critical infrastructure will depend not only on how well we harness AI, but on how wisely we prepare for its inevitable failures. The stakes are high, but the path forward is clear: build systems that expect the unexpected, and ensure that when AI falters, the nation does not.
—
Dr. Mark Bailey is a Lieutenant Colonel in the U.S. Army Reserve and an Associate Professor at the National Intelligence University, where he is the Department Chair for AI, Cyber, Influence, and Data Science. He is the author of Unknowable Minds: Philosophical Insights on AI and Autonomous Weapons. The views expressed here are his own.

