In 2023, the traditionally loose relationships among China, Russia, Iran, and North Korea strengthened as these actors found common cause in countering U.S. and Western influence. Throughout 2024, the wars in Ukraine and the Middle East further strengthened their resolve. As the Office of the Director of National Intelligence put it in the 2024 Annual Threat Assessment, “An ambitious but anxious China, a confrontational Russia, some regional powers, such as Iran, and more capable non-state actors are challenging longstanding rules of the international system as well as U.S. primacy within it.” While kinetic warfare has dominated the headlines as China, Russia, Iran, and their proxies have benefited from military coordination within this alignment, a much broader and largely unseen war has been waged on today’s cyber and virtual battlefields. These actors have strategically increased the focus and resources devoted to influence campaigns, wielding disinformation and misinformation as weapons against the U.S. and its allies, and they are achieving startling and effective results.
The New Virtual Battlefield
Most Americans now spend a significant portion of their waking hours connected to the internet. Unsurprisingly, much of the information we receive about world events and policy is accessed online, with, according to the Pew Research Center, nearly 50% of Americans getting their news from social media sites such as X, TikTok, Facebook, and Instagram. Americans are not unusual in this respect; the pattern held for 24 of 40 countries in a 2023 study. At the same time, bots account for nearly half of all internet traffic globally, with so-called “bad bots” responsible for a third. This combination of massive audience reach and technological vulnerability has set the stage for rapid and effective influence campaigns aimed at radicalization and the destabilization of the West.
Bad actors are capitalizing on disinformation capabilities and internet platform weaknesses to attack an array of U.S. democratic institutions and response capabilities. Over the past few years, disinformation campaigns have sought to disrupt disaster response with the aim of magnifying humanitarian and economic harm. U.S. policy has been the target of influence campaigns seeking to sway popular sentiment on responses to Ukraine, the South China Sea, and the Middle East. The World Economic Forum has identified the significant risks disinformation poses to the global economy, including through political coercion campaigns. Lastly, national unity itself has been a strategic target of bad actors seeking to weaken our democratic institutions and enhance their own positioning in the global order.
One of the most egregious examples of disinformation used as coordinated warfare occurred as part of the Hamas terrorist attack on October 7, 2023. According to Cyabra, a social media monitoring firm, in the single day after the October 7 terrorist attacks on Israel, roughly one in four accounts on Facebook, Instagram, TikTok, and X posting about the conflict appeared to be fake, promulgating disinformation to recharacterize the atrocities of the attack. Most shockingly, many of the profiles had been created more than a year before the attack and were activated only when it began. This magnified the psychological terror of the attack and revealed careful planning and orchestration well in advance.
Recent events show that this threat vector continues at full force. On September 4, 2024, the Departments of Justice and State and the Treasury’s Office of Foreign Assets Control announced a series of law enforcement actions related to a Russian influence campaign aimed at sowing discord and unrest in the United States and abroad. Specifically, a Russian state-controlled media outlet “co-opted online commentators by funneling them nearly $10 million to pump pro-Russia propaganda and disinformation across social media to U.S. audiences.” The sophisticated effort penetrated both technological safeguards and U.S.-based media influencers. In response, the U.S. took a series of actions including indicting two Russian employees of the outlet, imposing a series of sanctions, introducing a new visa restriction policy, making Foreign Missions Act determinations, and announcing a $10 million Rewards for Justice offer.
As this range of examples shows, bad actors are capitalizing on our ever-changing technological and behavioral ecosystem. To address this evolving threat, we must continue to invest in and evolve our capabilities.
Holistic Challenges Require Holistic Solutions
To address these threat vectors, a comprehensive collaborative strategy is needed.
First, U.S. leadership across all branches of government, responsible media, and civil society must continue to call out bad actors and their influence campaigns, explaining to the American public how these actors seek to spread disinformation and misinformation, and their motives for doing so. Undeniable evidence shows that China, Russia, Iran, and their proxies have been using disinformation campaigns to harm U.S. and allied interests. Think tanks are nearly unanimous in identifying this growing problem, including the Council on Foreign Relations, the Center for Strategic and International Studies, the Brookings Institution, and the Carnegie Endowment for International Peace, among many others. It is time to unite capabilities across the public and private sectors to address this vulnerability.
Second, addressing disinformation requires a continued whole-of-government approach. U.S. economic, homeland security, foreign policy, and health interests are all targets of disinformation campaigns. A cohesive national strategy that brings together all facets of government (defense, foreign policy, economic, health, and homeland security) is critical to addressing the threat. Bad actors are capitalizing on the most challenging aspect of countering disinformation: the argument that taking action infringes upon the First Amendment. This argument has caused partisan divisions, often slowing or stymieing efforts to address disinformation and giving bad actors more opportunity to exploit technological vulnerabilities. To address both issues, a collaborative public-private body should be established to mature the analysis and strategy across domains. In 2020, the Cyberspace Solarium Commission published an extensive report and recommendations on combating disinformation. Given the global volatility of the past two years and increased investment by bad actors, a similar effort should be relaunched.
Third, as continued cyberattacks demonstrate, such as Microsoft’s recent discovery and notification of Iranian efforts to hack the campaigns of both Republican and Democratic candidates, private sector partners are on the front lines of our security. They are critical for both identifying and removing dangerous and inflammatory disinformation content. Advances have been made in flagging and, in some cases, removing content, though practices are not uniform across platforms, and action is often taken only after misinformation has reached significant viewership. Private-sector partners also have their own incentives to act: protecting their integrity and their stewardship of shareholder interests. It is in the best interest of both government and the private sector to work together so that the platforms most exposed to this risk (e.g., X, Facebook, YouTube, and Instagram) are supported. Significant progress has been made, but until the threat is acknowledged and reduced, and a more comprehensive partnership is recognized and established, more needs to be done.
Lastly, meaningful work has been done to understand the dynamic approach necessary to combat disinformation. This is a well-researched threat with specific strategies to address it. For example, the Carnegie Endowment for International Peace released an extensive evidence-based strategy to counter disinformation, which includes journalistic, technological, cyber, and AI techniques that can and should be integrated into the aforementioned whole-of-government approach. Many of these techniques are already in use across government, including the Department of Homeland Security’s Center for Prevention Programs and Partnerships (CP3), the Department of State’s Global Engagement Center, the Cybersecurity and Infrastructure Security Agency, and the FBI’s Foreign Influence Task Force, as well as in the private and non-profit sectors. Continued research, supported by bipartisan and non-partisan institutions, is needed to maximize investment in effective strategies.
Conclusion
The U.S. and the world are in the crosshairs of escalating conflicts across Europe, the Middle East, and Asia. While significant focus has been placed on traditional military and diplomatic responses, bad actors are actively exploiting the virtual frontier. To ensure the stability and vitality of the nation, we must take a systemic approach to recognizing, partnering against, and thwarting disinformation and misinformation campaigns aimed at weakening our domestic and foreign policy framework. This approach requires buy-in and leadership from all branches of government, a whole-of-government strategy and governance mechanism, greater private-sector collaboration, and the application of well-designed and documented techniques. All the mechanisms exist today; the U.S. needs only the leadership, strategy, and collaboration to elevate and effectively address this critical threat.