PERSPECTIVE: Deepfakes and the Erosion of Trust in Homeland Security

The speed at which deepfake technology is advancing should be deeply unsettling to anyone charged with protecting public safety, ensuring continuity of government, or leading during a crisis. What was once the stuff of science fiction is now an operational reality: AI-generated audio and video so convincing that it can derail emergency response, incite panic, and corrode institutional trust in seconds.

We must stop treating deepfakes as a future threat. They're here and already being used to manipulate, deceive, and destabilize. This goes beyond cybersecurity or digital ethics; it is a direct threat to crisis leadership and homeland security. A fake video of a mayor announcing an evacuation, a phony emergency alert about military activity, a synthetic voice impersonating a trusted official: these are not far-off hypotheticals. They are plausible attack vectors in the modern information battlespace, and we have to be ready for them.

And we’re not without evidence.

In 2022, a deepfake video of Ukrainian President Volodymyr Zelenskyy was circulated on social media, falsely showing him telling Ukrainian troops to surrender to Russian forces.[1] Although the video was quickly debunked, it briefly gained traction before platform moderators could respond, and it exposed a significant vulnerability in wartime communications: the ease with which adversaries can inject falsehoods into the public domain during high-stress, high-conflict situations.

More recently, in 2024, a series of deepfake images emerged after Hurricane Helene purporting to show disaster survivors in distress.[2] The images were entirely AI-generated, yet they circulated widely online, shaping emotional narratives and directing attention and resources based on fabrications. They were not harmless: they influenced public perception and created noise that emergency responders had to work around during an already chaotic response environment.

Even financial institutions have felt the impact. In early 2024, a multinational firm was defrauded of $25 million after criminals used deepfake audio of a company executive to authorize a fraudulent transfer.[3] Imagine the same tactic applied to emergency operations: a fake voice authorizing shelter closures, evacuations, or counterterrorism actions. The implications are chilling.

Deepfakes strike at the heart of what emergency managers and homeland security professionals rely on: clear, trusted, timely communication. In a field where seconds matter and confidence is everything, introducing doubt, even briefly, can be catastrophic. And yet our national security infrastructure is still catching up. Our current posture is mainly reactive, and prevention, once again, is undervalued despite the high cost of inaction.

So, what now?

Rapid Investment in Detection Capabilities: Federal agencies, state fusion centers, and local emergency management teams need access to the best available real-time tools for flagging and verifying synthetic media. These tools must be interoperable, automated, and embedded into operational workflows, not reserved for forensic analysts after an incident.

Develop and Rehearse Operational Countermeasures: We need to create disinformation contingency protocols: real-world response playbooks for when a deepfake sows confusion. Just as we prepare for power outages or cyberattacks, we must prepare for synthetic media events. This includes rumor control teams, trusted spokesperson redundancy, and multi-platform rapid response strategies.

Invest in Public Digital Literacy: Our adversaries, from lone actors to hostile nation-states, thrive on public confusion. We cannot afford a digitally naive population when trust is the currency of stability. Teaching the public how to spot manipulated content, and why it matters, is as critical as distributing sandbags before a flood or stocking shelters during hurricane season.

Institutionalize the Threat: Deepfakes must be treated as a persistent adversary, not a niche tech problem. This means updating continuity of operations (COOP) and continuity of government (COG) plans to account for synthetic disinformation. It means incorporating synthetic media threats into joint exercises, tabletop drills, and training curricula. And yes, it means investing in prevention—something that remains an uphill battle in a reaction-driven policy environment.

If we wait to act until a deepfake triggers a mass casualty incident, disrupts a coordinated response, or undermines a national security decision, we will have already failed.

Crisis leadership in the 21st century must evolve to meet the challenges of a synthetic reality. The truth is no longer self-evident—it must be verified, defended, and communicated with even greater clarity. The stakes are no longer theoretical. We’re in it now. The question is: Are we ready to lead through the noise?

Endnotes

1. https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia
2. https://www.forbes.com/sites/larsdaniel/2024/10/04/hurricane-helena-deepfakes-flooding-social-media-hurt-real-people/
3. https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk

The views expressed are those of the author and do not represent the FBI, the State of Illinois, any U.S. government agency, any university, or any private sector organization.

Claire Moravec is a nationally recognized security and risk executive with over a decade of experience protecting people, assets, and operations across government, technology, and industry. A former FBI intelligence leader and state homeland security executive, she has built and scaled global programs in security, trust and safety, and crisis management. She currently serves as the Director of Intelligence Operations for Social Media Exploitation at Sentinel.

As Deputy Homeland Security Advisor to the Governor of Illinois and the State’s inaugural Deputy Director of Homeland Security, Claire led major initiatives in targeted violence and terrorism prevention, school and campus safety, critical infrastructure protection, statewide interoperability, and grants management. She also represented Illinois on the U.S. Secret Service Executive Steering Committee for the 2024 Democratic National Convention (NSSE), coordinating statewide public safety and interagency operations.

At the FBI, Claire served as an intelligence and operations leader and founding member of the Social Media Exploitation (SOMEX) team. She also served on the Joint Terrorism Task Force (JTTF) in both FBI Chicago and FBI New York, supporting complex counterterrorism investigations targeting al-Qa’ida, ISIS, and the Al-Nusra Front. In that capacity, she played a key role in the capture of two high-value terrorism targets by leveraging classified intelligence, human sources, and interagency partnerships. She was awarded the FBI Medal of Excellence (2017) for leading digital threat-disruption operations targeting nation-state actors within the Bureau’s National Covert Operations Section.

In the private sector, Claire advanced trust and safety efforts as Senior Manager of Response Operations at Snap Inc. (Snapchat), overseeing high-risk incident management and user protection. As Chief Operating Officer of a threat intelligence startup, she helped design and operationalize analytical frameworks for behavioral and insider threat assessment. Across sectors, she has advised on some of the nation’s most complex security challenges—translating evolving risk landscapes into actionable strategies that strengthen enterprise resilience and organizational trust.

Claire holds degrees from Columbia University and Loyola University Chicago, completed executive education through the Naval Postgraduate School’s Center for Homeland Defense and Security, and is a FEMA Vanguard Executive Crisis Leaders Fellow. She also serves as an adjunct professor at the University of California, University of Denver, and Saint Xavier University, teaching graduate and undergraduate courses in emergency management, criminology, and clinical social work.

