The acceleration of generative artificial intelligence (AI) technologies in recent years has reinforced a simple truth: information advantage is—and will remain—the dominant strategic edge of the 21st century.
AI is no longer a mere convenience. Used effectively, it can increase efficiency, extract insight, and compress decision timelines—making it indispensable to defense and intelligence operations. The race for better AI is not just a competition for more subscribers and better features; it’s a critical part of Great Power competition in the 21st century.
Protecting National Interests While Deploying AI at Scale
The Office of the Director of National Intelligence’s 2025 Annual Threat Assessment underscored that “China almost certainly has a multifaceted, national-level strategy designed to displace the United States as the world’s most influential AI power by 2030.”
Securely implementing AI in national security networks, however, introduces serious architectural, operational, and governance challenges.
Architecturally, these networks are intentionally segmented by classification level, with any cross-domain connectivity tightly controlled by the National Security Agency’s National Cross Domain Strategy & Management Office (NCDSMO). Yet effective intelligence analysis requires integrating data from multiple domains.
From a security perspective, the data must remain readable for AI processing but inert, to prevent technical compromise of the model or the secure systems it touches. From a governance and compliance perspective, meanwhile, training data must be traceable so that inaccurate or improperly sourced information can be removed if needed.
The rigorous separation between classification domains adds a layer of complexity, but the challenges above closely parallel the imperatives in the Cybersecurity and Infrastructure Security Agency’s recent memo on AI data security, which noted that “Successful data management strategies must ensure that the data has not been tampered with at any point throughout the entire AI system lifecycle; is free from malicious, unwanted, and unauthorized content; and does not have unintentional duplicative or anomalous information.”
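Two of those imperatives, detecting post-ingest tampering and flagging duplicate records, lend themselves to a simple mechanical check. The sketch below is purely illustrative (the record IDs and manifest shape are invented, not drawn from the memo): a content-hash manifest built at ingest time lets a pipeline later verify that training records are unmodified and spot exact duplicates.

```python
import hashlib

def fingerprint(record: bytes) -> str:
    """SHA-256 content hash used as the record's integrity fingerprint."""
    return hashlib.sha256(record).hexdigest()

def build_manifest(records: list[bytes]) -> dict[str, str]:
    """Map a record id to its hash at ingest time; duplicates share a hash."""
    return {f"rec-{i}": fingerprint(r) for i, r in enumerate(records)}

def verify(records: list[bytes], manifest: dict[str, str]) -> list[str]:
    """Return ids of records whose current hash no longer matches the manifest."""
    return [f"rec-{i}" for i, r in enumerate(records)
            if manifest.get(f"rec-{i}") != fingerprint(r)]

def duplicates(manifest: dict[str, str]) -> set[str]:
    """Return hashes that appear more than once in the manifest."""
    seen, dups = set(), set()
    for h in manifest.values():
        if h in seen:
            dups.add(h)
        seen.add(h)
    return dups
```

This catches exact tampering and exact duplication only; near-duplicate or anomalous content requires richer analysis than a hash can provide.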
Fortunately, many of the leading-edge cross-domain technologies already assessed by the NCDSMO can create an architecture capable of addressing these challenges effectively and efficiently.
Transferring data between security domains has always required precision. The key constraint is ensuring data can flow from a lower classification environment to a higher one without risking a reverse flow of sensitive information. What’s changed is the sheer volume of data required for effective AI modeling.
Defense and security agencies looking to implement high-side AI should look to the next generation of protocol-filtering diodes, which not only enforce directional flow, but also do so at the scale and speed that AI demands.
Another challenge of cross-domain data transfer to support AI is the security risk posed by unstructured data. Traditional cross-domain transfers were built around structured formats and strict protocol filters: high-side systems expected data in specific formats, and anything that did not fit the rubric was discarded, reducing the risk that malicious code would cross the diode.
However, generative AI thrives on unstructured data: text, images, audio, and video. Boundary filters must therefore evolve to preserve the useful content within files while ensuring that malicious code and hidden data designed to subvert the AI model do not pass into training data.
Applying advanced Content Disarm and Reconstruct (CDR) technology — which is designed to scan through documents, create a schema based on their known good contents, and produce a new, safe copy containing only that good content in an identical file format — can mitigate the risk of unstructured data flowing cross-domain to feed an AI model.
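Real CDR products handle rich formats like Office documents and images; the toy sketch below applies the same pattern to the simplest possible case, plain text, to show the shape of the idea. Nothing here models any vendor’s implementation: rather than scanning for badness, it extracts only content known to be good (printable characters) and writes it into a brand-new file of the same format, discarding everything else by construction.

```python
import string

# Known-good content for this toy format: printable characters,
# excluding vertical tab and form feed.
SAFE_CHARS = set(string.printable) - set("\x0b\x0c")

def disarm_text(raw: bytes) -> bytes:
    """Produce a new, safe copy containing only allowlisted characters."""
    text = raw.decode("utf-8", errors="ignore")  # undecodable bytes are dropped
    clean_lines = []
    for line in text.splitlines():
        # Rebuild each line from scratch, keeping only safe characters.
        clean_lines.append("".join(ch for ch in line if ch in SAFE_CHARS))
    return "\n".join(clean_lines).encode("utf-8")
```

The key property is that the output is constructed, not sanitized in place: any byte not positively matched against the schema simply never appears in the new copy.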
The final challenge, posed both by a multi-domain AI environment and by the complexities of intelligence analysis, is maintaining data provenance for accountability and derivative classification purposes.
Data Centric Security (DCS) applications can apply metadata labels to files that are preserved through the CDR process and re-verified at cross-domain ingress points. Certain sensitive data types (e.g., data collected pursuant to specific legal authorities) can then be fed into separate Retrieval Augmented Generation (RAG) databases, making that data more easily severable from the AI model. This will facilitate more compliant outputs from the model in case the relevant department or agency is directed to purge data holdings, furthering the government’s mission of high-integrity, accountable operations and analysis.
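The severability idea above can be sketched in a few lines. This is an invented toy (the class name, the authority labels, and the string-match retrieval are all illustrative, and label verification at ingress is assumed to have already happened): each legal authority gets its own retrieval store, so a purge order can be honored by dropping one store without touching the model or the other holdings.

```python
from collections import defaultdict

class SeverableStore:
    """Toy RAG-style store partitioned by the authority label on each document."""

    def __init__(self) -> None:
        # One retrieval index per authority label.
        self._stores: dict[str, list[str]] = defaultdict(list)

    def ingest(self, document: str, authority: str) -> None:
        """File the document under its (already verified) authority label."""
        self._stores[authority].append(document)

    def retrieve(self, query: str) -> list[str]:
        """Naive retrieval: return every document containing the query term."""
        return [doc for docs in self._stores.values()
                for doc in docs if query in doc]

    def purge(self, authority: str) -> int:
        """Remove all holdings under one authority; return the count removed."""
        return len(self._stores.pop(authority, []))
```

A production system would use real vector retrieval and audited deletion, but the partitioning principle is the same: severability is a property of the architecture, decided at ingest time, not something bolted on at purge time.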
Looking to the Future
As the national security community races to harness the strategic advantage of AI, we must confront a critical truth: immature integration can undermine the very missions AI is meant to support.
Secure, cross-domain architectures, backed by technologies like protocol-filtering diodes, content disarm and reconstruction, and data-centric labeling, aren’t just technical niceties; they are operational imperatives.
If we want AI to serve as a force multiplier rather than a vulnerability, we must treat its deployment as a mission-critical system design challenge, not just a data science experiment. In a world where near-peer adversaries are investing heavily in AI to leapfrog U.S. capabilities, we can’t afford to fall behind. Getting AI right isn’t just about innovation; it’s about maintaining information dominance in an era of accelerating competition.