Continuing our three-part analysis of the homeland security landscape, Homeland Security Today now presents Part II of the Threat Forecast for 2026. Our Editorial Board, columnists, and community of subject-matter experts bring decades of practical experience defending and protecting America, and their assessments of the threats facing our nation reflect both enduring concerns and emerging challenges.
This year’s forecasts are delivered amidst significant transition: changes in leadership across Department of Homeland Security agencies, evolving federal priorities, and a threat landscape that continues to grow in complexity. Our experts examine familiar adversaries alongside newer risks, from the persistent threat of terrorism to the rapid advancement of technologies that both enable and challenge our security efforts.
Several themes emerge across these assessments: the enduring and evolving nature of terrorist threats, both foreign and domestic; the accelerating role of artificial intelligence, unmanned systems, and other emerging technologies in the hands of both defenders and adversaries; and the multidimensional challenges that cut across traditional categories, from infrastructure vulnerabilities to gaps in strategic foresight.
As noted in Part I, this year’s forecasts cover three interconnected areas of concern.
- Part I, Terrorism – examining threats from ISIS and its affiliates, lone actors and small cells, soft-target vulnerabilities, and the convergence of extremist movements
- Part II, Emerging Technology – addressing necessary policy frameworks for advanced air mobility, cybersecurity, and AI; unmanned aerial systems in law enforcement; identity and credential security; and the exploitation of AI by criminal organizations
- Part III, Systemic Risks – exploring infrastructure security, lifeline resilience, facilities protection, foresight gaps, and the intersection of major events with evolving threat streams
Some assessments appear here as excerpts; full versions are available via the links provided. We encourage you to read the comprehensive analyses our contributors have shared, and to share these articles alongside your own perspectives with us via LinkedIn (tag @GTSC’s Homeland Security Today). The homeland security mission depends on practitioners in the field, and we want to ensure this conversation reflects the full scope of our community’s knowledge and concerns.
Whether confronting a resurgent adversary or anticipating the next disruptive technology, these assessments offer a clear-eyed look at the challenges ahead and a foundation for the strategies we’ll need to meet them.
Part II, Emerging Technology
Leading Complexity: Necessity for Emerging Tech Policy Frameworks in 2026
Patricia Cogswell, Partner, Defense & Security, Guidehouse; Editorial Board Member, Homeland Security Today; former Deputy Administrator, Transportation Security Administration; former Assistant Director, Immigration and Customs Enforcement
[Excerpt]
In last year’s threat forecast, I wrote about the complexity of the threat environment – with its variety of threat actors and the number of threat vectors – as well as the potential for any of these threats to overwhelm the capacity of our systems and processes. I also addressed the challenge leaders face in managing even one of these threats, much less several converging to create cascading disruptions.
Building on that analysis, this year I want to examine how these dynamics are playing out in practice. For years, we’ve heard about the lack of coherent, flexible policy frameworks for advanced air mobility (AAM), cybersecurity, and artificial intelligence (AI), and the resulting conflicts in direction and expectations. These gaps leave us vulnerable to threats – especially those from nation-state competitors and transnational criminal organizations – and slow innovation by U.S. governmental organizations, the private sector, and our international partners.
Those frameworks are now emerging – both in the United States and abroad – and the way these three domains intersect deserves closer examination.
Where We’ll See Agreement and Certainty
Expect alignment on foundational principles, such as safety, security, and ethical use, to continue to mature across all three areas. Aviation authorities globally are converging on baseline standards for AAM operations and certification, including airspace integration and pilot certification. Cybersecurity frameworks are similarly stabilizing – with the NIST Cybersecurity Framework 2.0 (CSF 2.0) emphasizing governance, zero-trust architectures, supply chain risk mitigation, and resilience against ransomware – reflecting shared lessons from recent attacks. AI governance is also advancing, with transparency and accountability requirements – such as algorithmic audits and risk-based oversight – gaining traction as the EU’s AI Act enters its general application phase in 2026, and U.S. agencies implement OMB mandates for inventories and governance structures.
These developments are increasingly interconnected: cybersecurity assurance practices like software bills of materials and secure update pipelines are being embedded into AAM infrastructure and AI-enabled flight systems; AI risk frameworks are shaping aviation automation and predictive maintenance; and incident reporting obligations under the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA) will apply to AAM operators and AI service providers. The result is a growing convergence of governance and assurance principles across the triangle of cybersecurity, AI, and AAM.
Where Gaps and Conflicts Will Persist
Despite progress, significant divergence remains. In AAM, operational rules for urban environments vary widely, particularly around airspace management and local permissions for vertiports and energy infrastructure, creating uncertainty for future cross-border commercial operations as well as future markets for cross-border AAM airframe sales. Authorities to counter unmanned aerial systems, for both governments and the private sector, similarly haven’t progressed as needed, leaving gaps in security for airports and critical infrastructure.
Cyber defense maturity differs significantly across sectors, reflecting disparities in available funding as well as willingness to invest in protection and mitigation. Enforcement rigor also varies: some regions prioritize privacy or national security, while others lean on voluntary measures adopted by recognized private sector organizations. AI governance is perhaps the most fragmented. The EU’s AI Act imposes strict risk classifications and obligations for high-risk and general-purpose AI systems, while U.S. policy continues to favor a standards-based approach anchored in NIST frameworks.
To read the remainder of this threat forecast, including Cogswell’s insight about the impact on security and what security leaders should do, click here.
Assessing the Homeland Risk Landscape in a Fractured Era
Kenneth Bible, former Chief Information Security Officer, DHS; Editorial Board Member, Homeland Security Today
Once again, the new year begins with a couple of challenging events that, regardless of political persuasion, color an assessment of the greatest threats to our Homeland in 2026.
First, the American raid on Venezuela to remove Nicolas Maduro from the presidency, based on U.S. court indictments, stunned many. The breathtaking scope of the actions taken, and the follow-on demands for the new President to cut ties with powerful nation-state allies, triggered concerns across a number of sectors. Subsequently, a series of shootings of civilians associated with ICE and CBP agents in Minneapolis and Portland further exacerbated the high level of political polarization surrounding the current Administration’s immigration policies.
Based on this polarization, self-radicalization and violent domestic extremism remain near the top of my risk assessment. While such extremism may be motivated by both far-right and far-left extremist groups, the growing concern is the lone actor motivated by political beliefs and influenced by false narratives. We can expect increasing efforts by foreign actors to amplify such false narratives, deepen societal divisions, and undermine trust in federal institutions.
One attack vector of increasing concern is the continued proliferation of drone technology – the weaponization of which has become frighteningly accessible to the lay person. Given the devastating effectiveness of such technology in the war between Russia and Ukraine, combined with the continued lack of effective regulations and the lagging definition of legal authorities for countering the threat, urgent action is needed.
Cybersecurity threats continue to weigh on both the public and private sectors in the United States. Nation-state actors such as China, Russia, North Korea, and Iran may increase cyber operations, particularly to disrupt critical infrastructure and undermine public confidence, in retaliation for the reported U.S. employment of cyber effects on Venezuelan power distribution during the operations earlier in the month. While visibility into these threats has increased, the lack of specific intelligence may blunt an effective response. And while Congress is more engaged in the topic – most notably due to the telecom impacts of Salt Typhoon – specific actions will likely take time, or only be mandated in the wake of a catastrophic incident.
As noted last year, an adversary able to impact critical infrastructure such as power and water systems could challenge the response capabilities of the largely commercial entities operating such infrastructure. And while some laudable voluntary efforts have emerged to mitigate the most egregious risks (namely the DEF CON Franklin Project), the role of the government at all levels to mitigate such risks remains undefined.
AI is Redefining the Criminal Threat Landscape—and Law Enforcement Must Respond
Doug Cook, Director, Defense & Security, Guidehouse; former Section Chief, Information Technology Applications and Data Division, FBI
The criminal environment confronting the United States in 2026 looks very different from the threats many agencies were built to address. Transnational criminal networks, synthetic drug producers, and cyber-enabled fraud groups have adopted artificial intelligence (AI) in ways that are fundamentally reshaping how they operate and how quickly they adapt, reducing the effectiveness of the methods law enforcement uses to disrupt and dismantle them. We must adapt our model.
Across multiple threat domains, AI has become a force multiplier for our adversaries. Cartels are adapting quickly, using new alliances and technology-enabled logistics to move synthetic drugs and other contraband through shifting routes. These organizations already rely on AI-supported communications, encrypted platforms, and synthetic identities that make traditional targeting and attribution more difficult.
At the same time, the synthetic drug market has entered a more dangerous phase, with fentanyl increasingly mixed with compounds like xylazine and nitazenes to evade scheduling laws and complicate forensic analysis. Criminal networks are now exploiting AI to design novel analogs, optimize synthesis, and conceal supply chains—accelerating innovation and detection evasion. These combinations raise lethality and create new challenges for investigators and prosecutors who must establish origin, composition, and intent.
But nowhere is the shift more visible than in the world of AI-driven fraud. Deep-fake technology now enables criminals to impersonate trusted executives, family members, or government officials with near-perfect realism. Some operations overseas have turned fraud into an industrial process, using AI-generated scripts, cloned voices, and synthetic personas to victimize individuals and institutions at scale. These schemes are no longer fringe threats; they can move millions in minutes.
The common thread across these developments is clear: AI has lowered the barrier to entry for complex crime, expanded the reach of sophisticated actors, and eroded many of the cues people traditionally rely on to assess what is real. Criminals understand this advantage, and they are exploiting it.
For law enforcement, meeting this moment requires a shift in mindset. Federal guidance has directed agencies to leverage AI to innovate and create streamlined acquisitions to support its adoption. AI tools can help identify deep-fake media, map criminal networks, analyze massive data sets, and accelerate digital evidence review. But these capabilities must be adopted carefully, with clear standards that preserve evidentiary integrity and public trust.
The threat landscape is evolving quickly, and AI is at the center of that evolution. Recognizing how adversaries use it and ensuring we are prepared to counter it is essential. The coming year will test our ability to adapt. It will also determine whether emerging technologies strengthen our defenses or widen the gap between the challenges we face and the tools we rely on to address them.
The Evolving Role of Unmanned Aircraft Systems in Law Enforcement: Opportunities and Challenges Ahead in 2026
Daniel Odom, Director, Defense & Security, Guidehouse; 2025 Homeland Security Today Market Maven Award Winner; former Section Chief, Technology and Data Innovation Section, Counterterrorism Division, FBI
As homeland security professionals prepare for 2026, the integration of unmanned aircraft systems (UAS), commonly known as drones, into law enforcement operations will reach new heights. Over 1,500 state and local public safety agencies already employ drones for tasks ranging from search and rescue to tactical surveillance and disaster response. Advancements in beyond visual line of sight (BVLOS) operations, propelled by the Federal Aviation Administration’s (FAA’s) proposed Part 108 rulemaking, promise to expand these capabilities significantly. Drones as “first responders” will enhance situational awareness, reduce risks to officers, and enable faster, more efficient public safety responses—particularly valuable for large-scale events like the FIFA World Cup.
The benefits are compelling: drones provide cost-effective aerial oversight compared to manned helicopters, improve officer safety, and deliver real-time intelligence in emergencies. In 2026, normalized BVLOS flights could support routine uses in crowd monitoring and rapid deployment over vast areas, all while complying with evolving FAA standards for detect-and-avoid technology and remote identification.
However, these advantages come with notable challenges. Privacy concerns remain paramount, as drones equipped with high-resolution cameras and sensors can inadvertently—or intentionally—capture sensitive data, raising Fourth Amendment issues. Public skepticism about data misuse persists, with civil liberties groups highlighting risks to free assembly and potential biases in deployment.
Looking to 2026, homeland security professionals must prioritize balanced implementation: robust policies for data retention, transparent community engagement, and strict warrant requirements for non-emergency surveillance. At the same time, the threat from malicious drones is growing. The Federal Bureau of Investigation’s new National Counter-Unmanned Training Center (NCUTC) in Huntsville, Alabama, will train agencies nationwide to detect, assess, and counter hostile drone activity.
In summary, 2026 will mark a pivotal year for UAS in law enforcement—offering transformative tools for safety while necessitating safeguards against abuse and emerging threats. Proactive planning, ethical guidelines, and investment in both offensive and defensive capabilities will ensure drones enhance, rather than erode, public trust and security.
When Trust Becomes The Attack Surface
Jennifer Ewbank, Founder, Andaman Strategic Advisors; Editorial Board Member, Homeland Security Today; former Deputy Director of CIA for Digital Innovation
For decades, homeland security professionals have focused on protecting systems: networks, facilities, borders, and databases. But in 2026, consequential threats to the homeland will increasingly target something more fragile and more human: trust itself. Synthetic identities generated or augmented by artificial intelligence (AI) are no longer cutting-edge tools used by sophisticated nation states alone. They are becoming scalable instruments for bypassing verification, impersonating authority, and quietly embedding risk inside the institutions Americans rely on every day.
These are not just better forgeries. AI-enabled synthetic identities combine convincing documents, realistic voices, facial imagery, behavioral patterns, and digital histories into cohesive personas that can pass many of today’s checks. When these identities are used to access benefits systems, infiltrate contractor ecosystems, manipulate customer service channels, or impersonate officials during moments of urgency, the result goes far beyond financial losses. It erodes confidence in the systems meant to help people in moments of need. In emergency response, public safety, healthcare, or disaster relief (domains where speed and trust matter most), this erosion can quickly become a homeland security problem.
From an intelligence perspective, this evolution is familiar, though the scale has changed dramatically. Over the course of 2025, the convergence of more powerful generative AI models and the rise of darkweb marketplaces offering deepfakes-as-a-service has transformed deception into a scalable commercial product. What once required time, technical skill, financial resources, and access can now be purchased and deployed on demand. Voice clones, synthetic video, forged documents, and fully packaged digital personas are being sold the way malware once was. Deception now moves through markets and platforms, enabling adversaries to commoditize impersonation and overwhelm defenses that have not yet adapted to this new reality.
Meeting this challenge in 2026 will require a shift in mindset as much as technology. Identity can no longer be treated as a solved problem or a static credential. Leaders across federal, state, local, and private-sector organizations should assume that convincing impersonation is the baseline threat environment, not an edge case. That means hardening identity workflows, building friction where consequences are highest, training frontline personnel to recognize manipulation cues, and designing verification processes that account for human psychology as well as technical assurance. Protecting the homeland has always required adapting to how adversaries think. In the age of synthetic identity, it will also require protecting how we decide whom and what to trust.
As synthetic identity becomes a baseline threat, homeland security leaders should reassess where trust is assumed rather than verified. High-consequence actions, such as access to benefits, emergency authorities, infrastructure controls, sensitive data, or vendor systems, should trigger stronger, layered verification that does not rely on a single source, such as voice, video, or documents alone. Organizations should treat identity workflows as critical infrastructure, subject to red-team testing and impersonation simulations. Equally important, frontline personnel should be trained to recognize manipulation cues and social engineering tactics, especially when under time pressure.
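Ewbank's recommendation that high-consequence actions never hinge on a single verification source can be sketched in a few lines of code. The sketch below is purely illustrative – the signal names, weights, and thresholds are hypothetical, not drawn from any DHS or NIST standard – but it captures the core rule: no single spoofable signal, however strong, should unlock a high-consequence action on its own.

```python
from dataclasses import dataclass

# Illustrative only: signal names, weights, and thresholds below are
# hypothetical examples, not a real verification standard.

@dataclass
class VerificationSignal:
    name: str      # e.g. "voice_match", "document_check", "hardware_token"
    passed: bool   # did this check succeed?
    weight: int    # relative strength of the signal

def layered_decision(signals, required_score, min_distinct_sources=2):
    """Approve only when enough *independent* signals pass.

    Two conditions must both hold: the combined weight of passing
    signals meets the threshold, AND at least `min_distinct_sources`
    separate signals passed. A single strong signal (e.g. a cloned
    voice passing a voice check) is never sufficient alone.
    """
    passed = [s for s in signals if s.passed]
    score = sum(s.weight for s in passed)
    return score >= required_score and len(passed) >= min_distinct_sources

# A deepfaked voice call alone should not unlock a high-consequence action.
signals = [
    VerificationSignal("voice_match", True, 2),      # spoofable by AI voice clones
    VerificationSignal("document_check", False, 2),  # forged document rejected
    VerificationSignal("hardware_token", False, 3),  # phishing-resistant factor absent
]
print(layered_decision(signals, required_score=4))  # → False (voice alone: denied)
```

The design choice worth noting is the second condition: requiring multiple distinct sources means an adversary must defeat several unrelated checks simultaneously, which is exactly the "layered verification that does not rely on a single source" the assessment calls for.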
Stop Playing Defense: Identity and Authenticity Are Mission-Critical
Donna Roy, Strategic Advisor, National Security Segment, Guidehouse; 2024 Homeland Security Today Trailblazer; former CIO & COO, Consumer Financial Protection Bureau; former Executive Director, Information Sharing and Services Office, Department of Homeland Security
Michael Eder, Partner, Defense & Security, Guidehouse; former Principal at Grant Thornton
In 2026, one of the biggest threats to homeland security will be identity compromise and fake content at scale. The use of AI is not just accelerating attacks; it’s rewriting the playbook. If mission owners don’t act now, the homeland security community will be chasing symptoms, while adversaries own the narrative.
Adversarial AI is here, and it has changed the speed and scale of attacks impacting your mission. AI-driven campaigns that couple tailored social engineering with automated credential replay, session theft, and model-generated lures at machine speed cannot be combated with legacy cyber hygiene practices and outdated multi-factor authentication (MFA) solutions.
Homeland Security Scenario: Securing FIFA 2026 and LA28
Imagine the opening match of FIFA 2026. Stadium gates stall as ticketing systems lock up, broadcast feeds flicker, and social media erupts with convincing but fabricated videos of crowd panic. In minutes, confidence collapses, operations seize, and public safety is at risk. This scenario isn’t far-fetched; something like it has happened before. While working at the Department of Homeland Security (DHS), my team and I worked through just such chaos while supporting security operations during the disruption of a major sporting event.
On February 3, 2013, at the Superdome in New Orleans, Super Bowl XLVII was set for an unforgettable halftime show. Beyoncé delivered a flawless performance, complete with a Destiny’s Child reunion, but moments later, the unexpected happened: the stadium went dark. The blackout struck just after the kickoff of the third quarter, shortly after Beyoncé’s set ended. Scoreboards, lighting, and signage all failed, leaving only emergency lights. Fans, players, and broadcasters were frozen in place. The outage lasted 34 minutes, sparking speculation and jokes that Beyoncé blew the grid.
Behind the scenes, this wasn’t just an inconvenience; it was a potential security crisis. We were lucky. We had a trusted platform with strong authentication and trusted content, the HSIN (Homeland Security Information Network). HSIN served as the secure backbone for real-time coordination, enabling law enforcement, emergency management and cyber professionals to share situational updates, verify that the outage wasn’t a cyberattack or terror incident, maintain operational continuity, and communicate with venue personnel.
Fast forward to the decade of sport and two of its highest-profile events – FIFA 2026 and LA28 – and imagine the lights going out again. Even the most choreographed, high-profile events can face unpredictable disruptions. The 2013 Super Bowl blackout shows why strong authentication, interoperable credentials, and trusted content remain mission critical.
FIFA World Cup 2026 is, in the minds of our adversaries using AI, an attractive digital battleground. Spanning 104 matches across 16 venues in the United States, Canada, and Mexico, FIFA 2026 is the largest soccer tournament ever. Its reliance on digital systems for ticketing, stadium operations, transportation, and more makes it a prime target for cybercriminals and state-sponsored actors. Imagine live broadcasts and emergency warnings dropping during an outage across multiple venues in different countries, with a limited ability to communicate with fans and teams. Now imagine fake content spreading at speed, on fans’ phones and on screens across the venues. Engineering trust in this complex battleground isn’t optional – and it is possible.
To read the remainder of this threat forecast, including Roy and Eder’s insight into what can be done, click here.