Homeland Security Today is proud to share our Editorial Board and expert community’s 2024 Threat Forecast. In an election year, with tremendous risks and vulnerabilities facing the nation, we take stock each year by asking our cadre of experienced homeland security practitioners what they would like to share with you, our community of readers. This year is our most comprehensive collection to date, drawn from a diverse group of professionals who have served both in and outside government.
This year’s piece is presented in three parts:
- Terrorism – experts discuss both external and internal threats from terrorists, terrorist groups, and lone wolves.
- Cyber & Advanced Technology – experts discuss the varied and persistent threats from cyber attackers and from rapidly advancing technology.
- Internal Threat – this year many of our experts cited the numerous internal threats to our nation and our democracy.
This collection underscores the varied nature of the threats against our way of life, and the ferocity of those who wish us harm – personally, economically, militarily. We hope this compilation provides some insight into what you already know, and alerts you to some challenges you perhaps have not considered. If you are in the homeland security community and would like to weigh in on something you do not see here, please reach out to [email protected] with the subject line: Threats for 2024. Please provide a bio if you would like to be considered for publication.
Key takeaways:
- Internal dissension and disagreement put America and its institutions at great peril, from our elections to foreign policy to border security.
- Misinformation is widely misunderstood, and our ability to combat it will determine the outcome of many critical challenges facing the country.
- Misunderstanding the activity, strength, and strategy of foreign nations like Iran, China, Russia, and North Korea, and of foreign terrorist groups like Hezbollah, Hamas, Al-Qaeda, and ISIS, leads to a heightened risk and threat environment.
- Our nation must devote more focus to strategic foresight and guard against “strategic surprise” by nurturing our people’s understanding of, and mastery of, complexity.
- Harnessing the potential of AI and quantum computing while balancing the need for security will be pivotal to our future – collaboration is key to understanding the technology and its implications.
- Specific threats like lone-wolf terrorism, drones, and biological threats – and vulnerabilities like a lack of preparedness for natural disasters – continue to increase.
- Ransomware attacks are increasing rapidly and pose a considerable threat to critical infrastructure.
PART II: CYBER, AI, & ADVANCED TECHNOLOGY
Balancing Bureaucratic Oversight with Opportunity
As a technology and data professional, I served as both a contractor and a federal executive for over four decades. While I am more excited than ever about technology innovation opportunities, I am worried that we, as a country, won’t move fast enough to adopt approaches that both protect the American public’s interests and sustain a pace of innovation that matches or exceeds that of threat actors operating under less careful rules.
As I read and re-read the White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, published in October 2023, my concern grows over how much we need to do to get this balance right and the timeframe in which to do it. The EO establishes an aggressive and comprehensive set of Federal Government oversight mandates intended to balance innovation and safeguards. In particular, the EO sets deadlines to build, in the first half of 2024, the framework that will determine how quickly we can advance over the next few years. Especially complex is how this framework will enable agencies to navigate cross-jurisdictional issues, along with other agencies and industry partners, in order to accomplish their missions.
In my time at the Department of Homeland Security, I developed a deep appreciation for the collaboration and communication needed to operate within and across the Homeland Security Enterprise, as well as for the complexity of providing actionable guidance to Federal, State, local, Tribal, Territorial, and International agencies, and private sector partners. I’ve seen how difficult the balance is – with overly prescriptive guidance or regulations, FSLTTIP entities may be unable to comply and still innovate; with less direction, these same entities may be unable to make timely decisions or focus the resources needed.
In HSToday’s series reflecting on twenty years after 9/11, I wrote about the risk that the volume of available data would exceed the collective ability of the Homeland Security community to analyze it, find the most relevant pieces, and make the analytical judgments their missions require. In the two years since, ChatGPT and generative artificial intelligence have arrived, providing opportunities to vastly improve our ability and speed. Many agencies, however, are still working to establish the right combination of safeguards and permissions to allow their workforce to leverage GenAI tools. Fortunately, DHS is out front, leading the way.
We need a faster pace of adoption. The EO can provide that impetus, driving planning, policymaking, risk assessment, and implementation. We need more people who are comfortable balancing risk in complex environments. Outpacing the threat will rely on our collective ability to collaborate and communicate, and to act quickly on risks as they arise while we put to use the innovations best able to secure our homeland.
Donna Roy
Former CIO & COO, Consumer Financial Protection Bureau
Former Executive Director, Information Sharing and Services Office (IS2O), U.S. Department of Homeland Security
Strategic Advisor, National Security Segment, Guidehouse
Without effective government oversight and regulation, AI projects that lack proper ethical controls and privacy protections will be deployed across all domains, with potentially devastating economic and discriminatory effects. AI systems developed by foreign governments and deployed as consumer products will be very difficult to detect and stop. That said, our initial focus should be on regulating and protecting our software supply chain. We are all fully aware of the consequences of false narratives and misinformation.
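To make the supply chain point concrete: one of the most basic protections is verifying that a downloaded dependency or installer matches a digest its producer published out-of-band before it is installed or executed. The following is a minimal sketch in Python, not a full supply chain control; the expected digest is a placeholder, not a reference to any real package.

```python
import hashlib
import sys

# Digest published out-of-band by the artifact's producer (placeholder;
# in practice this would come from a signed release manifest).
EXPECTED_SHA256 = "replace-with-vendor-published-digest"

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file without reading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    artifact = sys.argv[1]  # path to the downloaded dependency or installer
    actual = sha256_of(artifact)
    if actual != EXPECTED_SHA256:
        sys.exit(f"Integrity check FAILED for {artifact}: got {actual}")
    print(f"Integrity check passed for {artifact}")
```

Package managers offer the same idea natively; pip’s hash-checking mode, for example, refuses to install a dependency whose digest does not match the one pinned in the requirements file.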
Industry experts are predicting rapid advances in Quantum Computing this year. Cyber defense and readiness can be enhanced by the computing power of Quantum Computers. Conversely, bad actors, especially state-sponsored cyber organizations, will have access to the same technology. Quantum Computing can potentially give critical infrastructure networks a more hardened posture and give governments greater cyber offensive capabilities. It is critically important that we lead the race in developing quantum technology and, just as important, that we protect our intellectual property.
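For a sense of scale on the quantum risk: Grover’s algorithm speeds up brute-force key search quadratically, roughly halving the effective bit strength of a symmetric key, while Shor’s algorithm breaks widely deployed public-key schemes such as RSA outright. The back-of-the-envelope sketch below, in plain Python with no quantum libraries, shows the symmetric-key arithmetic.

```python
def grover_effective_bits(key_bits: int) -> int:
    """Grover's search needs on the order of 2**(n/2) quantum queries to
    brute-force an n-bit key, so effective strength is roughly halved."""
    return key_bits // 2

for cipher, bits in [("AES-128", 128), ("AES-256", 256)]:
    print(f"{cipher}: ~2^{bits} classical trials, "
          f"~2^{grover_effective_bits(bits)} quantum queries")
```

This is why AES-256 is generally considered quantum-resilient (roughly 128 bits of effective strength remain), and why migration urgency centers on public-key cryptography, the focus of NIST’s post-quantum standardization effort.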
Antonio Villafana
Former Chief Information Officer, Office of Health Affairs, U.S. Department of Homeland Security
Former Nuclear, Biological, and Chemical Specialist, U.S. Army
Chief Information Officer, Virginia Department of Human Resource Management
HSToday Editorial Board Member
For those hoping to see an ebb in cyberattacks in 2024, keep dreaming. Reality has an ugly habit of being far more unrelenting. Take 2023. Please. We will continue to see similar, significant activity in the cyber world in 2024, including developments in Artificial Intelligence, supply chain attacks, insider threat actors (both malicious and merely negligent), and foreign malign influence operations, including hybrid threats.
Perhaps most concerning is the ongoing threat of ransomware, which remains a top instigator of data breaches. A recent study found that the rate of ransomware attacks in financial services grew from 55% in 2022 to 64% in 2023. But it’s not just big banks that are targets. One of the most significant legacies of the Colonial Pipeline attack is the demonstration that a relatively unsophisticated cyber gang could turn a single compromised password into its personal ATM. As humans remain the main access point for cyberattacks, every organization is vulnerable, especially small- and mid-sized enterprises (SMEs).
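Because a single compromised password was enough in the Colonial Pipeline case, one cheap, widely available control is screening credentials against known breach corpora before accepting them. Below is a minimal sketch using the public Pwned Passwords range API, which uses k-anonymity so that only the first five characters of the password’s SHA-1 hash ever leave the machine; it assumes the third-party requests library and network access.

```python
import hashlib
import requests

def breach_count(password: str) -> int:
    """Return how many times a password appears in the Pwned Passwords corpus.
    Only the first 5 hex characters of the SHA-1 hash are sent to the API;
    the matching suffix is searched for locally."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    n = breach_count("Password123!")
    print(f"Seen in {n} breaches" if n else "Not found in known breaches")
```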
Schools, hospitals, cities, counties, and other government organizations are increasingly being extorted as well. While these public sector organizations are under-resourced when it comes to cybersecurity, many have access to funds sufficient to interest cyber hackers. The late Alan Paller, founder of the SANS Institute, used to say: “Washington loves to admire a problem.” His point remains a rallying cry: Let’s stop talking about cybersecurity and start acting.
Fortunately, there is a specific plan SMEs can follow to defend against ransomware. The Institute for Security and Technology has created a Blueprint for Ransomware Defense. Based on a subset of the Center for Internet Security’s Critical Security Controls, the Blueprint provides a clear path to protecting against over 70% of the attack techniques associated with ransomware. There is much to be concerned about in defending our nation against cyberattacks. In 2024, let’s start implementing known, effective defenses that will make us stronger and more resilient.
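Acting on a blueprint starts with visibility: an organization cannot protect services it does not know are exposed. As one small first step in that spirit (illustrative, not drawn from the Blueprint itself), the sketch below enumerates every TCP socket listening on a host, the raw material for a CIS-style asset and service inventory; it assumes the third-party psutil package and may need elevated privileges to see all processes’ sockets.

```python
import psutil

# Enumerate listening TCP sockets on this host -- the starting point for
# an inventory of exposed services.
listening = sorted(
    {(conn.laddr.ip, conn.laddr.port)
     for conn in psutil.net_connections(kind="inet")
     if conn.status == psutil.CONN_LISTEN},
    key=lambda addr: addr[1],
)
for ip, port in listening:
    print(f"listening: {ip}:{port}")
```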
Brian de Vallance
Former DHS Assistant Secretary for Legislative Affairs, HSToday Editorial Board Member
Artificial Intelligence (“AI”) holds great promise, yet also great potential for harm. To mitigate that harm across the globe, there are strategies that governments worldwide can consider:
- Ethical AI Frameworks: Governments can establish ethical AI frameworks and guidelines that AI developers and users must follow. These frameworks should emphasize transparency, fairness, accountability, and the protection of human rights. International organizations and standards bodies, such as the IEEE and ISO, can contribute to the development of global AI standards.
- Regulation and Legislation: Governments can enact laws and regulations specific to AI systems, including those that address privacy, data security, discrimination, bias, and safety. These regulations should be flexible to accommodate the rapidly evolving nature of AI technology.
- Oversight and Monitoring: Governments can establish regulatory bodies or agencies responsible for overseeing AI development and deployment. These bodies can monitor AI systems for compliance with regulations, investigate violations, and impose penalties when necessary.
- Risk Assessment and Impact Studies: Governments can require organizations to conduct risk assessments and impact studies before deploying AI systems in critical areas such as healthcare, transportation, and criminal justice. These assessments can help identify potential risks and mitigate harm.
- Algorithmic Transparency and Explainability: Encourage transparency in AI algorithms and require developers to explain how their systems make decisions, especially in high-stakes contexts like autonomous vehicles, healthcare, and finance (a minimal code illustration follows this list).
- Fairness and Non-Discrimination: Governments should promote the development of AI systems that do not discriminate against individuals or groups based on gender, race, ethnicity, or other sensitive attributes. Regulatory frameworks can help ensure fairness in AI decision-making.
- Education and Training: Invest in AI education and training programs to build a workforce capable of understanding, developing, and regulating AI technology effectively.
- International Collaboration: Foster collaboration with other countries to create a global approach to AI governance. International agreements and partnerships can help harmonize regulations and standards, making it more challenging for organizations to engage in harmful practices in one jurisdiction and evade responsibility in another.
- Research and Development: Invest in research on the safety and ethics of AI. Governments can fund research projects and establish research centers focused on AI ethics, safety, and security.
- Public Engagement: Governments should actively involve the public in discussions about AI policies and regulations. Public input can help shape ethical guidelines and ensure that AI systems serve the best interests of society.
- Enforcement and Accountability: Establish mechanisms to hold AI developers and users accountable for any harm their systems may cause. This includes penalties for non-compliance, data breaches, and discriminatory practices.
- Red Teaming and Ethical Hacking: Encourage independent auditing, red teaming, and ethical hacking of AI systems to identify vulnerabilities and potential harms.
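As a minimal illustration of the explainability item above, the sketch below uses scikit-learn’s permutation importance, a model-agnostic technique that estimates how much each input feature drives a model’s decisions by shuffling one feature at a time and measuring the drop in held-out accuracy. The dataset and model here are synthetic stand-ins, and no single technique should be assumed sufficient for regulatory purposes.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a high-stakes decision dataset.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops:
# larger drops mean the model leans harder on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f} "
          f"(+/- {result.importances_std[i]:.3f})")
```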
It’s important to recognize that achieving these goals will require ongoing collaboration and adaptation as AI technology continues to evolve. The balance between fostering innovation and ensuring safety and ethical use of AI will be an ongoing challenge, and governments must continually reassess and update their strategies and regulations.
Tom Cellucci
Former Senior Counselor and First Chief Commercialization Officer, U.S. Department of Homeland Security
Chairman & CEO to several public and private sector organizations
Worldwide Elections: With 2024 dubbed the biggest year in election history – including elections in over 60 countries, representing about half the world’s population – there is increased geopolitical risk around the globe in an already uncertain environment (Israel/Gaza, Ukraine/Russia, China, Iranian proxies targeting shipping channels and Israeli IT systems, and the North Korean missile program, to name a few). Worldwide elections are trending toward greater risk of disinformation and related impacts during administration turnover. Look for government elections and associated transitions to increase geopolitical risk in 2024.
International Terrorist Organizations: With worldwide geopolitical and U.S. budget focus on nation states and their proxies, a coverage vacuum is created, potentially enabling expansion by traditional international terrorist organizations. Risks include continued expansion of ISIS in Iraq, Syria, and Afghanistan; Al-Shabaab influence in Somalia and Kenya; Hezbollah expansion from its operations in Lebanon and Syria; and Al-Qaeda affiliates operating around the world, particularly in Yemen (AQAP). Expect a renewed focus on international terrorist organizations in 2024.
Domestic Unrest: Even before the Israel-Hamas war, antisemitic and related threats were on the rise in the U.S. Disinformation could exacerbate tensions around any number of controversial topics (civil rights, abortion, governmental powers, immigration, etc.).
Anticipate higher domestic security requirements in 2024 as U.S. law enforcement and security capabilities, already taxed by rising levels of crime, will likely be further stretched.
Ongoing Cyber Threats: Criminal and state-actor adversaries will continue to exploit highly complex physical and technology supply chains, and related vulnerabilities, for data exfiltration and disruptive effect. Islamic Revolutionary Guard Corps affiliates sought to exploit operational technology in U.S. water treatment plants late last year. Ongoing cybersecurity attacks focused on critical infrastructure are a trend that is unlikely to abate.
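The water treatment intrusions reportedly succeeded against controllers reachable directly from the internet, often protected only by default credentials. A first-line check an asset owner can run, against hosts it owns or is authorized to test, is confirming that control-system ports are not reachable from outside. The sketch below tests basic TCP reachability; the port number is commonly associated with the PLC remote-access protocol implicated in those incidents, but treat it, and the placeholder addresses, as illustrative assumptions.

```python
import socket

PLC_PORT = 20256                       # assumed PLC remote-access port
HOSTS = ["192.0.2.10", "192.0.2.11"]   # placeholder addresses (TEST-NET range)

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in HOSTS:
    state = "EXPOSED" if is_reachable(host, PLC_PORT) else "closed/filtered"
    print(f"{host}:{PLC_PORT} -> {state}")
```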
Lee Kair
Former Assistant Administrator for Security Operations at the U.S. Transportation Security Administration
Principal, The Chertoff Group
HSToday Editorial Board Member
Read the other 2024 Threat Forecasts here.
- Terrorism – experts discuss both external and internal threats from terrorists, terrorist groups, and lone wolves.
- Internal Threat – this year many of our experts cited the numerous internal threats to our nation and our democracy.