Abstract
This article presents insights from a graduate capstone project focused on the operational use of predictive artificial intelligence (AI) in emergency management (EM) during civil unrest. Using a simulated protest scenario near critical infrastructure, the project demonstrated how AI tools, such as time-series forecasting, geographic information system (GIS) mapping, and sensor fusion, can improve tactical decision-making, resource deployment, and situational awareness. Originally aligned with Executive Order (EO) 14110, the project now pivots to guidance from the National Institute of Standards and Technology (NIST) AI Risk Management Framework, following the rescission of EO 14110. The article outlines lessons in interagency coordination, civil liberties protection, and trust-building between AI outputs and human responders. Recommendations include practitioner-first design, pre-event data sharing protocols, and a call for federal AI playbooks tailored for EM professionals. As federal policy shifts, the article argues that enduring principles of transparency, proportionality, and ethical human-AI teaming must remain at the core of responsible innovation.
This article presents findings from a 2025 capstone project completed as part of the Master of Professional Studies in Homeland Security program at The George Washington University. The project examined how predictive artificial intelligence (AI) can enhance emergency management (EM) operations during protest-driven civil unrest, with a focus on protecting critical infrastructure and safeguarding civil liberties.
The capstone fused operational realism with policy and academic rigor, modeling an AI-enabled response plan for a hypothetical regional protest near a vulnerable energy facility. The central thesis asserts that predictive AI, when responsibly integrated at the tactical edge, can significantly improve situational awareness, coordination, and mitigation efforts during high-risk events.
As the federal AI policy landscape shifts, most recently with the rescission of Executive Order 14110 and the issuance of newer directives, this project remains rooted in enduring values: transparency, proportionality, and the ethical use of technology in public safety missions.
Operational Context
Between 2024 and 2025, the United States experienced a surge in protest activity driven by economic uncertainty, political polarization, and growing public distrust in government institutions. Often unstructured and dynamic, these demonstrations posed unique operational challenges for emergency managers, particularly when they occurred near soft targets, such as power substations or water treatment facilities.
In one modeled scenario, organizers announced a protest near a regional power plant. While expected to remain peaceful, risk indicators suggested potential for escalation, including flash crowds, misinformation-fueled disruption, or adversarial opportunism. Local EM teams had limited visibility into crowd trajectories and relied on reactive dispatching, often responding too late to prevent service interruptions or to keep emergency calls from overwhelming 911 systems. This environment called for a shift from static planning to dynamic, predictive risk mitigation.
Capstone Demonstration: AI-Enhanced EM Response
The capstone project modeled a predictive decision-support tool designed to integrate AI-driven forecasts into the Emergency Operations Center (EOC) planning cycle. It demonstrated the ability to simulate protest evolution, identify infrastructure risk zones, and recommend proactive deployment strategies.
Forecasting and Risk Modeling
Using historical protest data, weather patterns, social media sentiment, and geographic information system (GIS) overlays, the model forecasted likely crowd convergence points and escalation zones. Machine learning techniques, specifically random forest and long short-term memory (LSTM) time-series models, enabled the prediction of time-sensitive flashpoints.
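The capstone's forecasting relied on trained random forest and LSTM models; as a minimal, hypothetical stand-in, the sketch below illustrates the underlying feature-fusion idea by scoring map grid cells for escalation risk from normalized indicators. All field names, thresholds, and weights here are illustrative assumptions, not the capstone's actual model parameters.

```python
# Minimal illustrative risk scorer: fuses normalized indicators into a
# per-grid-cell escalation score. Weights and fields are hypothetical
# stand-ins, not the capstone's trained random forest / LSTM models.

from dataclasses import dataclass

@dataclass
class CellInputs:
    cell_id: str
    historical_protests: float   # events per year
    sentiment_negativity: float  # 0.0 (calm) .. 1.0 (hostile)
    km_to_infrastructure: float  # distance to nearest critical asset

def escalation_score(c: CellInputs) -> float:
    """Weighted fusion of normalized indicators into a 0..1 risk score."""
    hist = min(c.historical_protests / 12.0, 1.0)             # cap at monthly
    proximity = max(0.0, 1.0 - c.km_to_infrastructure / 5.0)  # within 5 km
    # Illustrative weights; a real model would learn these from data.
    return 0.4 * hist + 0.35 * c.sentiment_negativity + 0.25 * proximity

cells = [
    CellInputs("A1", historical_protests=10, sentiment_negativity=0.8,
               km_to_infrastructure=0.5),
    CellInputs("B2", historical_protests=2, sentiment_negativity=0.3,
               km_to_infrastructure=4.0),
]
ranked = sorted(cells, key=escalation_score, reverse=True)
for c in ranked:
    print(c.cell_id, round(escalation_score(c), 3))
```

The output of a sketch like this, ranked highest-risk first, is what would feed the GIS overlay: each cell's composite score, rather than any single indicator, drives where planners look first.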
Real-World Vignette
In one simulation, the model flagged a likely protest route that bypassed conventional barriers and approached a critical access point behind the power facility. Local responders had initially focused on the primary ingress point. Acting on the model’s recommendations, they pre-staged law enforcement and public safety personnel at the secondary location, successfully de-escalating the situation before it disrupted operations.
Situational Awareness and Tactical Planning
A custom dashboard fused traffic sensors, radio frequency (RF) anomaly detection (simulated via Radiant Expanse), and emergency call mapping into a real-time visual interface. This allowed incident commanders to simulate “what-if” scenarios, view AI-recommended deployments, and anticipate infrastructure stressors.
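To make the fusion step concrete, the hypothetical sketch below merges three simulated feeds (traffic counts, RF anomaly flags, and 911 call volume) into a per-zone composite stress score of the kind an EOC dashboard might surface. The zone names, saturation thresholds, and the 0.6 alert cutoff are assumptions for illustration, not values from the capstone system or Radiant Expanse.

```python
# Hypothetical sensor-fusion sketch: merges simulated traffic counts,
# RF anomaly flags, and 911 call volumes into a per-zone stress score.
# All feeds and thresholds are illustrative stand-ins.

traffic = {"north_gate": 420, "service_road": 95}    # vehicles/hour
rf_anomalies = {"north_gate": 1, "service_road": 4}  # flagged signals
calls_911 = {"north_gate": 3, "service_road": 9}     # calls in last hour

def zone_stress(zone: str) -> float:
    """Normalize each feed to 0..1 and average into a composite score."""
    t = min(traffic[zone] / 500.0, 1.0)     # saturates at 500 veh/hr
    r = min(rf_anomalies[zone] / 5.0, 1.0)  # saturates at 5 anomalies
    c = min(calls_911[zone] / 10.0, 1.0)    # saturates at 10 calls
    return (t + r + c) / 3.0

for zone in traffic:
    flag = "ELEVATED" if zone_stress(zone) >= 0.6 else "normal"
    print(f"{zone}: {zone_stress(zone):.2f} {flag}")
```

Note how the quieter service road, not the busy north gate, crosses the alert threshold here: the composite view surfaces the combination of RF anomalies and call volume that no single feed would flag on its own, which is the point of fusing feeds into one picture.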
Tools, Inputs, and Frameworks Used
Lessons Learned from the Capstone
1. Coordination Is the Bottleneck
Technology is only as effective as the human relationships surrounding it. Lessons drawn from coursework in interagency cooperation and strategic change management proved vital. EM response required matrixed coordination with law enforcement, utilities, and local government, many of whom had never worked from a common operational picture.
2. Civil Liberties Require Operational Design
The project initially aligned with principles from Executive Order (EO) 14110, “Safe, Secure, and Trustworthy Development and Use of AI” (White House, 2023). However, this EO was rescinded in January 2025.
Its replacement, “Preventing Woke AI in the Federal Government” (White House, 2025), redirects the focus to “truth-seeking” and “ideological neutrality.” While those themes reflect current administrative goals, they shift attention away from civil liberties concerns, such as privacy, bias mitigation, and risk transparency. To remain future-proof, the capstone was re-anchored to the National Institute of Standards and Technology’s (NIST’s) AI Risk Management Framework (2023), which continues to guide AI deployment across federal agencies, independent of changing EOs.
3. Operational Trust in AI Must Be Built
AI-generated recommendations were met with skepticism by EM staff unfamiliar with the underlying logic. Trust was gained through:
- Explainable output (“why” this route was flagged)
- Confidence scoring
- Scenario prompts tied to National Incident Management System (NIMS) protocols
EM teams responded more favorably when models were embedded within their own tactical workflow rather than imposed from the outside.
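The first two trust mechanisms above, explainable output and confidence scoring, can be sketched as a simple record format: every recommendation carries its strongest contributing factors and an overall confidence so EOC staff can see why a route was flagged. The factor names and scoring scheme below are hypothetical, not taken from the capstone system.

```python
# Hypothetical "explainable output" record: each AI recommendation is
# packaged with its top contributing factors and a confidence score.
# Factor names and the averaging scheme are illustrative only.

def explain_recommendation(factors: dict, top_n: int = 2) -> dict:
    """Package a flag with its strongest drivers and an overall confidence."""
    ranked = sorted(factors.items(), key=lambda kv: kv[1], reverse=True)
    confidence = sum(factors.values()) / len(factors)
    return {
        "recommendation": "pre-stage units at secondary access point",
        "top_factors": ranked[:top_n],
        "confidence": round(confidence, 2),
    }

rec = explain_recommendation({
    "route_bypasses_barriers": 0.9,
    "crowd_trajectory_match": 0.7,
    "historical_incidents": 0.3,
})
print(rec["top_factors"], rec["confidence"])
```

A record in this shape answers the "why was this route flagged" question directly in the dashboard, which is what moved staff from skepticism toward use in the capstone's experience.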
What Must Happen Next
The capstone serves as a blueprint for how predictive AI can be responsibly integrated into emergency operations. To scale this work, the following steps are recommended:
1. Create Practitioner-First AI Interfaces
Invest in tools designed for incident commanders and public safety officers, not just data scientists. Simple visualizations and NIMS-aligned prompts help drive adoption.
2. Establish Pre-Event Data Protocols
Agencies need pre-existing agreements for cross-jurisdictional data sharing prior to an event. Waiting until crisis onset leads to fragmentation.
3. Develop Federal AI Playbooks for EM
The Cybersecurity and Infrastructure Security Agency, Federal Emergency Management Agency (FEMA), and academic partners should co-develop operational playbooks for AI in emergency management. These would include:
- Protest flashpoint detection
- Infrastructure risk modeling
- Drone/RF anomaly triage
- Ethical guardrails in high-tempo environments
4. Launch a Federal AI-in-EM Working Group
A Department of Homeland Security– or FEMA-led task force could bring together state/local EM officials, policy advisors, and technologists to craft shared protocols and identify scalable pilot programs. This group should prioritize nonpartisan standards and align with civil liberties protections.
Conclusion
The rescission of EO 14110 underscores a key truth: policy may shift, but principles must endure. AI can empower emergency management professionals to plan more effectively, respond faster, and protect more efficiently, but only when embedded in frameworks of trust, transparency, and interagency cooperation.
This capstone effort serves as a call to action: Let us move beyond theoretical debates about AI ethics and toward the operationalization of responsible innovation. Practitioner-scholars are uniquely positioned to lead this transformation from within, ultimately benefiting the entire field.
References
National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF). https://www.nist.gov/itl/ai-risk-management-framework
The White House. (2023, October 30). Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence
The White House. (2025, July 23). Executive Order 14319: Preventing Woke AI in the Federal Government. https://www.federalregister.gov/documents/2025/07/28/2025-14217/preventing-woke-ai-in-the-federal-government