The speed at which deepfake technology is advancing should deeply unsettle anyone charged with protecting public safety, ensuring continuity of government, or leading during a crisis. What was once the stuff of science fiction is now an operational reality: AI-generated audio and video convincing enough to derail emergency response, incite panic, and corrode institutional trust in seconds.
We must stop treating deepfakes as a future threat. They are here, and they are already being used to manipulate, deceive, and destabilize. This goes beyond cybersecurity or digital ethics; it is a direct threat to crisis leadership and homeland security. A fake video of a mayor announcing an evacuation, a phony emergency alert about military activity, a synthetic voice impersonating a trusted official: these are not far-off hypotheticals. They are plausible attack vectors in the modern information battlespace, and we have to be ready for them.
And we’re not without evidence.
In 2022, a deepfake video of Ukrainian President Volodymyr Zelenskyy circulated on social media, falsely showing him telling Ukrainian troops to surrender to Russian forces.1 Although the video was quickly debunked, it briefly gained traction before platform moderators could respond, exposing a significant vulnerability in wartime communications: the ease with which adversaries can inject falsehoods into the public domain during high-stress, high-conflict situations.
More recently, in 2024, a series of deepfake images emerged after Hurricane Helene purporting to show disaster survivors in distress.2 The images were entirely AI-generated, yet they circulated widely online, shaping emotional narratives and directing attention and resources based on fabrications. These were not harmless. They influenced public perception and created noise that emergency responders had to work around during an already chaotic response environment.
Even financial institutions have felt the impact. In early 2024, a multinational firm was defrauded of $25 million after criminals used a deepfake impersonation of a company executive to authorize a fraudulent transfer.3 Imagine this same tactic applied to emergency operations: a fake voice authorizing shelter closures, evacuations, or counterterrorism actions. The implications are chilling.
Deepfakes strike at the heart of what emergency managers and homeland security professionals rely on: clear, trusted, timely communication. In a field where seconds matter and confidence is everything, introducing doubt, even briefly, can be catastrophic. And yet, our national security infrastructure is still catching up. Our current posture is mainly reactive, and prevention, once again, is undervalued despite the high cost of inaction.
So, what now?
Rapid Investment in Detection Capabilities: Federal agencies, state fusion centers, and local emergency management teams should have access to the best real-time tools for flagging and verifying synthetic media. These tools must be interoperable, automated, and embedded into operational workflows—not just tools for forensic analysts after an incident.
Develop and Rehearse Operational Countermeasures: We need disinformation contingency protocols—real-world response playbooks for when a deepfake sows confusion during an incident. Just as we prepare for power outages or cyberattacks, we must prepare for synthetic media events. This includes rumor control teams, trusted spokesperson redundancy, and multi-platform rapid response strategies.
Invest in Public Digital Literacy: Our adversaries—from lone actors to hostile nation-states—thrive on public confusion. We cannot afford a digitally naïve population when trust is the currency of stability. Teaching the public how to spot manipulated content, and why it matters, is as critical as distributing sandbags before a flood or stocking shelters during hurricane season.
Institutionalize the Threat: Deepfakes must be treated as a persistent adversary, not a niche tech problem. This means updating continuity of operations (COOP) and continuity of government (COG) plans to account for synthetic disinformation. It means incorporating synthetic media threats into joint exercises, tabletop drills, and training curricula. And yes, it means investing in prevention—something that remains an uphill battle in a reaction-driven policy environment.
If we wait to act until a deepfake triggers a mass casualty incident, disrupts a coordinated response, or undermines a national security decision, we will have already failed.
Crisis leadership in the 21st century must evolve to meet the challenges of a synthetic reality. The truth is no longer self-evident—it must be verified, defended, and communicated with even greater clarity. The stakes are no longer theoretical. We’re in it now. The question is: Are we ready to lead through the noise?
Endnotes
1 https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia
2 https://www.forbes.com/sites/larsdaniel/2024/10/04/hurricane-helena-deepfakes-flooding-social-media-hurt-real-people/
3 https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk
The views expressed are those of the Author and do not represent the FBI, the State of Illinois, any U.S. government agency, any University, or a private sector organization.