Introduction
When people say Artificial Intelligence (AI) in 2025, it stirs up fears of data analytics gone wrong: a massive computer, far away, running with no guardrails and a mechanical, maniacal evil laugh. But is it true? Will current AI really lead to the dystopian futures we have long feared? That question is beyond the scope of this article, but what is in scope is the emergence of a cyber battlefield between black-hat attackers and white-hat defenders. The good versus the bad, with AI tools at their fingertips.
Brief history of AI
AI isn’t actually new; the term was coined in a 1955 proposal for a research project at Dartmouth College and has grown through iterations of capabilities and features. Five years earlier, Alan Turing proposed his Turing Test, which asked whether an interrogator could tell a human from a computer; if not, the computer passes the test and can be said to think. Moving forward, we have had machine learning (ML, a subfield of AI) for decades. Think of this as a program that can iterate over options, use some deterministic method to decide correct from incorrect, and, over many iterations, make far more correct choices than incorrect ones.
Those in the cybersecurity field are often faced with determining whether an action is right or wrong, allowed or disallowed. This lends itself well to ML solutions, since there is usually a binary decision, or a number of binary decisions that, viewed in aggregate, provide enough data points for a human to judge the validity of an action.
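This binary-decision framing can be made concrete with a toy classifier. The minimal sketch below assumes two invented features (failed logins per minute and megabytes sent outbound) and hand-labeled examples; it uses a textbook perceptron, not any particular security product:

```python
# A toy binary classifier in the ML spirit described above: learn a
# benign/malicious boundary from labeled examples. The features are
# invented for illustration: (failed_logins_per_min, mb_sent_out).
# Label 1 = malicious, 0 = benign.
training_data = [
    ((0.0, 0.1), 0), ((1.0, 0.0), 0), ((0.5, 0.2), 0),
    ((9.0, 5.0), 1), ((7.5, 8.0), 1), ((10.0, 3.0), 1),
]

def train_perceptron(data, epochs=20, lr=0.1):
    """Classic perceptron: nudge the weights toward each mistake."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

w, b = train_perceptron(training_data)

def classify(x1, x2):
    """1 = flag as malicious, 0 = allow."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

A real system would use far more features and far more data, but the shape of the problem, many small allowed/disallowed judgments learned from labeled history, is the same.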
The Rise of AI in Cybersecurity
Evolution from traditional security measures to AI-powered solutions
When we look at the traditional tools of cybersecurity, we focus on prevention, detection, and reaction mechanisms, which can be based in hardware or software. On the hardware side, we used devices like firewalls to rapidly analyze communications and protocols and determine whether they were valid, coupled with devices such as Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS). These devices were designed specifically to “see” anomalous or malicious activity. The main difference between the two is that a detection system can only alert, while a prevention system can take action to mitigate or deny activity determined to be undesired (malicious). Early versions of these security devices used signatures to match pre-known malicious indicators, what we today call Indicators of Compromise (IOCs). When a signature matched, an alert was sent to the analyst, or, in the case of an Intrusion Prevention System, the offending activity was stopped, shunned, or otherwise acted upon according to predetermined criteria. All of these signatures and actions had to be coded into the systems’ software and constantly updated with new signatures, based on malicious activity discovered in other networks or conceived as potential threats by security experts. The approach was reactionary in nature and often caused lag in detection and mitigation.
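To make the signature approach concrete, here is a minimal sketch. The event fields (`file_md5`, `dst_ip`) are hypothetical, and the IOC values are stand-ins: the well-known EICAR test-file hash and a reserved documentation IP address, not real threat intelligence:

```python
# A minimal sketch of signature-based detection: scan an event for
# pre-known malicious indicators (IOCs). Values are illustrative only.
KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}  # the EICAR test file's well-known MD5
KNOWN_BAD_IPS = {"203.0.113.66"}                         # TEST-NET-3 documentation address

def match_signatures(event: dict) -> list[str]:
    """Return the list of IOC matches for a single network/file event."""
    alerts = []
    if event.get("file_md5") in KNOWN_BAD_HASHES:
        alerts.append("known-malware-hash")
    if event.get("dst_ip") in KNOWN_BAD_IPS:
        alerts.append("known-bad-destination")
    return alerts

# An IDS would only raise these alerts; an IPS would also block the activity.
print(match_signatures({"dst_ip": "203.0.113.66", "file_md5": "0" * 32}))
```

The weakness is visible in the code itself: the sets of known-bad values must be constantly updated, and anything not already in them passes silently.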
This is where early AI came into play, and this is where AI becomes not just a defensive tool, but a potential offensive or malicious tool.
Capabilities of AI in analyzing data
To understand the evolution of AI, it is important to define the general types of AI:
AI as a general category aims to emulate human intelligence: the ability of a system to reason, learn, and act in ways that would normally require human intelligence, or to perform analysis at a scale that surpasses human ability. Within that definition there are multiple layers or levels of sophistication, but we will define three.
Machine learning: The simplest form of AI, which uses algorithms to find patterns in data sets and make future predictions. These pattern decisions are based on previously known “good” or “bad” decisions. ML is very good at tasks such as finding patterns in images, for example identifying cancer in medical imagery. It is also good for cybersecurity because we can use it to match signatures in our IDS/IPS solutions.
Deep learning: A further subdivision of machine learning that uses neural networks (modeled after human neural networks) and more complex algorithms, allowing it to analyze unstructured data. Unstructured data is information that does not have a set format; images, audio, video, large text files, and, increasingly, disparate data sets all fall into that category. Unlike broader machine learning, deep learning allows the AI to make multiple correlations of data points, draw conclusions about those correlations, “weigh” the confidence of each, and choose the strongest ones. Another approach deep learning introduced was chaining multiple algorithms, with one feeding into the next, to refine its conclusions. In image analysis, for example, one algorithm may be optimized to find edges and isolate specific items within the image, then pass that information to another algorithm optimized to identify those items based on other detectable factors. Deep learning is where major advances toward self-aware AI, or Artificial General Intelligence (AGI), will likely come from.
Lastly, we have the sudden rise of generative AI (GenAI), which has really sparked the average person’s interest in the abilities of AI. Generative AI models are deep learning models that can generate new content, hence the “generative” part. Depending on the model, the output can vary widely; in the last couple of years we have seen everything from images to stories to near-real-life video and audio. It is important to point out the difference from previous AI models, which could only return output drawn directly from the data sets they had access to (think of the results of a Google search, which are preexisting data). Generative AI, based on its knowledge of previous data, can create new, previously unknown output. This is an extremely powerful advance in AI.
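The chained-algorithm idea from the deep learning discussion can be sketched with plain functions standing in for the stages. Everything below is a hand-written toy, assuming a grayscale image given as rows of pixel values; real systems would use learned neural-network layers, not these stand-ins:

```python
# A minimal sketch of the "pipeline" idea: one stage's output feeds the next.
def find_edges(image_rows):
    """Stage 1: crude edge detector (absolute difference of neighboring pixels)."""
    return [[abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]
            for row in image_rows]

def score_object(edge_rows, threshold=50):
    """Stage 2: confidence that something is present, from the share of strong edges."""
    strong = sum(1 for row in edge_rows for v in row if v >= threshold)
    total = sum(len(row) for row in edge_rows)
    return strong / total if total else 0.0

# Toy 3x4 grayscale "image" with a sharp vertical boundary in the middle.
image = [[0, 0, 255, 255]] * 3
confidence = score_object(find_edges(image))   # stage 1 feeds stage 2
```

The point is the structure, not the math: each stage specializes, and later stages reason over what earlier stages extracted.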
So now that we have discussed the basics of AI, how will the cyber landscape and cyber battlefield be shaped? The first thing I would point out is that AI will be a tool for continuing the battle between defenders and aggressors, the malicious actors.
AI-Driven Offensive Tactics
Sophisticated Attack Methods
When it comes to offensive or malicious actions, the impact of AI we have seen so far is primarily around speed and analysis. AI, and specifically GenAI, allows a malicious actor to rapidly develop tools such as exploits and new code. An exploit is the means to take advantage of a vulnerability, and a vulnerability can be as simple as a weak password or as complex as a multi-layered memory-based technique. Malicious actors discover a vulnerability and either use existing exploits or develop their own; GenAI helps with the latter.
GenAI is great at generating new code for complex problems, so with a few prompts into a generative AI interface, complex code can be produced. While this code still needs to be reviewed and tested, it provides a huge gain in time savings. Even within the traditional software development world, it is estimated that in 2024 about 20% of all new code was AI generated, and that number is expected to grow rapidly, with around 90% of developers reporting that they use AI tools (TechCircle, 2023)i.
Through GenAI code generation, more complex code can be developed rapidly, and less mature developers can quickly overcome their lack of sophistication and deploy malicious attacks. It basically lowers the barrier to entry for some actors.
Code generation is only one way GenAI speeds up attacks; analysis of existing code is also impacted. Traditionally, if malicious actors wanted to find new vulnerabilities in code, often called zero-days, they faced a laborious process of code review and fuzzing (attempting to break the program) to find flaws they could then write exploits for. GenAI has proven to be a boon to software and application reverse engineering. In 2023, zero-day attacks increased by 50%, and the use of GenAI is believed to be one of the major contributing factors.
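Fuzzing itself is easy to sketch. The example below assumes an invented `fragile_parser` with a planted bug (a stand-in for real parsing code, not any actual library), and simply hurls random strings at it until something crashes:

```python
# A minimal sketch of fuzzing, the laborious process described above:
# feed a target random inputs and record which ones crash it.
import random
import string

def fragile_parser(data: str) -> int:
    """Toy target with a planted bug: raises on inputs containing '%n'."""
    if "%n" in data:
        raise ValueError("format-string style bug triggered")
    return len(data)

def fuzz(target, runs=50_000, seed=1234):
    """Throw short random printable strings at `target`; collect crashers."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        candidate = "".join(
            rng.choice(string.printable) for _ in range(rng.randint(1, 8))
        )
        try:
            target(candidate)
        except Exception:
            crashes.append(candidate)  # each crash is a potential vulnerability
    return crashes

crashing_inputs = fuzz(fragile_parser)
```

Real fuzzers like AFL are coverage-guided rather than purely random, but the core loop, generate, run, observe failures, is exactly this, and it is the triage of those failures that GenAI-assisted analysis accelerates.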
Next, when we look at attack methods like phishing, we see another widespread use of GenAI. Phishing is a social-engineering method in which malicious actors attempt to convince humans to take actions that bypass traditional cybersecurity mechanisms. It most often arrives as email but can also come via telephony or video communications. Phishing attacks generally take one of two forms. The first is broad: a large volume of emails sent fairly blindly to many individuals with the expectation that some small set will fall for the scam. This is the base “phishing” attack, and GenAI can make it more realistic and grammatically correct than before; it will even produce phishing emails in multiple languages, letting a monolingual malicious actor reach victims without the language barrier. The second is spear (or whale) phishing, which is much more focused: it targets a smaller audience with a more tailored message. While GenAI can help with the wording of the message, traditional AI/ML can help decide whom to target and how to entice them. Using traditional ML to parse large data sets and refine targets, coupled with GenAI’s ability to produce tailored content, is a powerful one-two punch.
Emerging Threats
There are also threats we have not yet seen emerge but can anticipate in the near future. One has huge potential to change the malicious-actor model in an exponential manner: a hybrid methodology built on new AI technologies. First, the concept of an AI agent needs to be defined. An AI agent is an AI system that, given basic guidance, can extrapolate the human’s intent and act without further human intervention. Advanced AI agents may also be able to add functionality, including self-propagation, to achieve that intent. AI agents have a multitude of legitimate uses, but for our discussion they can be used to further a malicious attack; coupled with other exploitation techniques or frameworks, an agent can become an autonomous exploitation agent, determining which systems to attack or infect without human oversight and even adding new functionality to facilitate attacks.
One of the largest threats is deepfakes. Deepfakes are GenAI content created with the express intent of passing as real. For example, a completely GenAI-created video of an actor was used to promote a product they never endorsed. We have already seen deepfake-enabled fraud attempts, and as the technology advances, we fully expect to see more of these attacks. They can be as simple as a phone call claiming to be from a loved one: the AI-generated voice can be very convincing, especially under heightened stress, such as a call from a “loved one” crying and needing money immediately for a fictitious emergency. In early 2024, a financial worker was tricked into paying out $25 million when malicious actors used a deepfake of the company’s Chief Financial Officer (CNN, 2024)ii.
Stay Tuned Next week for AI Defense in the Cyber Battlefield: Part II
i “Adobe Integrates Firefly’s AI Abilities into Illustrator.” TechCircle, 14 June 2023, www.techcircle.in/2023/06/14/adobe-integrates-firefly-s-ai-abilities-into-illustrator.
ii “Deepfake Video Call of CFO Costs Hong Kong Company $25 Million in Huge Scam.” CNN, 4 Feb. 2024, www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html.

