PERSPECTIVE: Artificial Intelligence Has Mighty Defensive Benefits for Cyber Warfighter

The Defense Advanced Research Projects Agency (DARPA) announced recently that it would commit more than $2 billion in funding over the next half-decade toward the development of artificial intelligence (AI) to support the warfighter. While specific details of the new program, dubbed “AI Next,” have yet to be released, defense researchers should expect a succession of Broad Agency Announcements (BAAs) aimed at AI researchers in the coming years.

With this bold move, DARPA seeks to advance U.S. military usage of AI for applications ranging from clearance processing to optimizing power consumption. But the implications for cybersecurity could be the most profound – and realizing them will require firm commitment from our military, government, industry, and citizenry. It is in this context that this short note is written – namely, with the goal of helping to explain the positive implications of AI for cyber defense.

Certainly, the general societal risks of AI are well-documented, with prominent commentators like Elon Musk describing the potential drawbacks of artificial intelligence run amok. As with any technology, however, designers, manufacturers and users must include reasonable design and even ethical considerations in their work. If they do, then the belief here is that AI can be an overwhelmingly positive force for effective cybersecurity in our military.

By way of background, traditional approaches to cyber defense are either preventive, where effort is made to stop something bad from happening, or reactive, where effort is made to respond to something bad that has already begun. It doesn’t take a Ph.D. in computer science to recognize the advantages of preventing incidents. But, sadly, most cybersecurity experts from our military and industry sectors believe that preventing 100 percent of inbound attacks is currently not possible.

The reasons for this negative view are many, including insufficient funding, unavailable trained staff, and increasingly open systems. But the primary, and perhaps the most valid, concern with cyber defense in any military or government context involves the growing size, scale, and power of attacks from our national adversaries. In short, most cyber defensive teams believe that their protections cannot keep up – and that they need something new.

This is where AI will be of great use: it can accurately detect early indicators of attack, such as new evidence of malware. It does this through fast automation, powerful platforms, and advanced algorithms. Such AI systems are designed to execute on distributed processors, in an arrangement not unlike neurons in a brain. The resulting parallelism allows new problems to be solved with AI and its subfield of methods known as machine learning.

To understand how AI is used for cybersecurity, let’s start with a greatly simplified view of how advanced machine learning can be used to detect a hand-drawn letter symbol, even if the handwriting is not perfect. Suppose, for this example, that we have three coordinating AI processors that each ingest some portion of the image of that written letter. The first looks at the top, the second looks at the middle, and the third looks at the bottom of the written letter.

Let’s imagine that the processor examining the top of the image sees what appears to be a short line from left to right. It might be wobbly, but the geometry in the image recognition software concludes that a short horizontal bar is present. Now, let’s imagine that the processor examining the middle of the image recognizes a vertical line running roughly up-and-down. And, finally, let’s assume that the processor viewing the bottom of the image sees nothing.

The most reasonable conclusion from the three processors is that a short top bar connected at its middle to a longer vertical line is most likely a capital T. This detection process is only accurate if the processors perform their recognition tasks properly. And certainly, if more processors are available to perform more fine-grained recognition, then overall accuracy can be increased. This, in essence, is also how machine learning programs recognize images of cats.
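The three-processor walkthrough above can be sketched in a few lines of Python. Everything here is illustrative: the 5x5 grid, the region splits, and the hand-written feature checks are hypothetical stand-ins for the patterns a trained model would learn on its own.

```python
# Illustrative sketch: three simple "processors" each examine one region of a
# small binary image, and a combiner interprets their three votes.

def top_has_horizontal_bar(rows):
    # A horizontal bar: several filled pixels in a single row, even if wobbly.
    return any(sum(row) >= 3 for row in rows)

def middle_has_vertical_line(rows):
    # A vertical line: the same column filled in every middle row.
    return any(all(row[c] for row in rows) for c in range(len(rows[0])))

def bottom_is_empty(rows):
    # The third processor sees nothing at the bottom of the image.
    return all(not any(row) for row in rows)

def classify(image):
    # Split the image into top, middle, and bottom regions and combine votes.
    top, middle, bottom = image[:2], image[2:4], image[4:]
    if (top_has_horizontal_bar(top)
            and middle_has_vertical_line(middle)
            and bottom_is_empty(bottom)):
        return "T"
    return "unknown"

# A slightly imperfect capital T on a 5x5 grid (1 = ink, 0 = blank).
letter = [
    [1, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
print(classify(letter))  # prints "T"
```

A real system would not hard-code these checks; it would learn them from thousands of labeled examples. But the division of labor – many small recognizers feeding a combiner – is the same idea.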

For cybersecurity, the process is analogous. Let’s presume that we want to detect the presence of malware. We know that traditional indicators used to detect malware today, even by military teams, require an exact match on the name of the file, the location where it has been discovered, and something called a hash of the file – a numeric value calculated from the file’s contents that serves as a compact fingerprint.
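The file hash mentioned above can be computed with Python’s standard hashlib module. The sample bytes below are made up, but the behavior shown is real: changing even one byte of a file produces a completely different fingerprint, which is exactly why exact-match indicators are so brittle against attackers who make trivial modifications.

```python
import hashlib

# Pretend these are the bytes of a malware file (contents are made up).
original = b"MZ\x90\x00 pretend these are the file's bytes"
modified = original + b"!"  # the attacker appends a single byte

# SHA-256 produces a 64-character hexadecimal fingerprint of the contents.
h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(modified).hexdigest()

print(h1)
print(h2)
print(h1 == h2)  # prints False: the exact-match indicator no longer fires
```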

If we replace this traditional scheme with AI, then the first processor can be trained to recognize a range of typical file names for malware. The second processor can be trained to recognize typical locations where malware is likely to be discovered. And the third processor can be trained to detect known hash values from a threat intelligence feed. None of these requires a perfect match – just as with recognition of the imperfectly handwritten letter T.

In theory, all three of these processors would work together to determine whether a given file presented to the AI was, in fact, malware. This fuzzier process, as with modern letter or cat image recognition using AI, is only as accurate as its component tasks, but it becomes more robust and accurate as those tasks are decomposed into smaller, more granular activities. Thus, more processors, like more neurons in the brain, produce more accurate results.
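One way to picture this fuzzier, multi-processor scoring is the sketch below. The indicator lists, the equal weighting, and the threshold are all hypothetical; a real system would learn these from threat intelligence and training data rather than hard-code them. The point is that no single check has to match perfectly for the file to be flagged.

```python
import hashlib

# Hypothetical indicators; a real system would draw these from threat feeds.
SUSPICIOUS_NAME_FRAGMENTS = ["svch0st", "crypt", "dropper"]
SUSPICIOUS_DIRS = ["/tmp", "C:\\Users\\Public"]
KNOWN_BAD_HASHES = {"0" * 64}  # placeholder hash set

def name_score(filename):
    # Partial match on the file name, not an exact one.
    return 1.0 if any(f in filename.lower() for f in SUSPICIOUS_NAME_FRAGMENTS) else 0.0

def location_score(path):
    # Does the file sit in a directory where malware is often dropped?
    return 1.0 if any(path.startswith(d) for d in SUSPICIOUS_DIRS) else 0.0

def hash_score(contents):
    # Exact hash lookup is still one signal among several, not the whole test.
    return 1.0 if hashlib.sha256(contents).hexdigest() in KNOWN_BAD_HASHES else 0.0

def is_probably_malware(path, filename, contents, threshold=0.5):
    # Average the three votes; no single processor has to match perfectly.
    score = (name_score(filename) + location_score(path) + hash_score(contents)) / 3
    return score >= threshold

print(is_probably_malware("/tmp", "svch0st.exe", b"hello"))  # two of three checks fire: True
```

Here the file is flagged even though its hash is unknown, because the name and location checks both fire. That is the essential difference from the traditional exact-match scheme.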

The likelihood grows each day that our adversaries can cause increasingly serious damage to U.S. critical national infrastructure. The use of AI now offers us hope that the prevention of malware becomes a more tractable goal in the cyber defense of our nation. Commercial tools from companies such as Cylance, JASK, and Deep Instinct are already demonstrating the practical viability of this claim in industrial contexts. So, this is not science fiction.

Congratulations to the DARPA team for allocating this large pool of research funding. Let’s hope that a significant portion of the money goes toward improving our cyber defenses using techniques for detecting malware that can reduce our nation’s risk and make our military more effective. And for those of you AI researchers interested in cyber, watch for BAAs from DARPA in this important area!


The views expressed here are the writer’s and are not necessarily endorsed by Homeland Security Today, which welcomes a broad range of viewpoints in support of securing our homeland.

Dr. Ed Amoroso is currently Chief Executive Officer of TAG Cyber LLC, a global cyber security advisory, training, consulting, and media services company supporting hundreds of companies across the world. Ed recently retired from AT&T after thirty-one years of service, beginning in Unix security R&D at Bell Labs and culminating as Senior Vice President and Chief Security Officer of AT&T from 2004 to 2016. He is author of six books on cyber security and dozens of major research and technical papers and articles in peer-reviewed and major publications. Ed holds the BS degree in physics from Dickinson College, the MS/PhD degrees in Computer Science from the Stevens Institute of Technology, and is a graduate of the Columbia Business School.
