Intelligence professionals – even those who live in a world of classified and compartmentalized information – rely heavily upon open-source information. Open-source information is publicly available and includes news stories, government and academic literature, social media posts, public forum chatter, and more. Open-source information is relatively easy to obtain and is a convenient way for analysts to understand the background details of issues, determine trends, and garner more specific information about a wide range of topics.
In the past decade, there has been a massive increase in available open-source information, due largely to the rise of media tech giants. These organizations have increased the availability and ease of obtaining information, but they have not contributed to an increase in the reliability of information.
In recent months, it has become evident that the same algorithms created by the FAANG companies (Facebook, Amazon, Apple, Netflix, and Google) to let millions of users easily share information are also being used to squelch, hide, or censor certain stories – and even entire sources – because an activist or special interest group disagrees with their message. These actions are anathema to good intelligence analysis, as they intentionally skew information both qualitatively and quantitatively.
While many predicted the information age would bring renewed critical thinking, learning, and debate in the war of ideas, instead we are witnessing a war against ideas – or rather, against ideas we don’t like.
Much like how intelligence has expanded beyond the realm of the federal government in the last two decades, information warfare is now being waged in political, social, and commercial arenas. Intelligence professionals and researchers of all types need to understand what this warfare looks like and what language is being used to define it so they can create accurate intelligence products in spite of it.
How Automation Suppresses or Promotes Information
The war on information has entered an alarming new phase in which boosting or killing ideas can be automated. Algorithms – cleverly hidden within a database, search engine, or social media platform – are coded by technicians and enable practices such as the shadow ban: covertly preventing a message or a user from reaching a wide audience, usually without the user's knowledge.
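To make the mechanism concrete, here is a minimal sketch of how a shadow ban might be wired into a feed-building routine. This is purely illustrative – the function, field names, and banned-user list are hypothetical, not drawn from any real platform – but it shows the defining trait: the suppressed author still sees their own posts, so nothing appears wrong from their side.

```python
# Hypothetical shadow-ban filter inside a feed pipeline.
# The banned-user set would be maintained elsewhere, invisible to the user.
SHADOW_BANNED = {"user_42"}

def build_feed(posts, viewer):
    """Return the posts a given viewer actually sees."""
    visible = []
    for post in posts:
        author = post["author"]
        # The author always sees their own content, so the ban stays covert.
        if author in SHADOW_BANNED and viewer != author:
            continue  # silently dropped for everyone else
        visible.append(post)
    return visible

posts = [
    {"author": "user_42", "text": "suppressed voice"},
    {"author": "user_7", "text": "ordinary post"},
]

# The banned author's feed looks normal; everyone else's quietly omits them.
print([p["text"] for p in build_feed(posts, viewer="user_42")])  # ['suppressed voice', 'ordinary post']
print([p["text"] for p in build_feed(posts, viewer="user_7")])   # ['ordinary post']
```

The covert element is the whole point: an ordinary ban is visible and contestable, whereas this filter produces no signal the affected user can detect.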
Activists are also exploiting the algorithms that power search engines to promote websites favorable to their business or cause and squelch those that are unfavorable. While most practitioners of search engine optimization (SEO) work legitimately to improve a site's visibility in search results, it is becoming apparent that every ranking algorithm can be broken or manipulated with the right amount of resources and motivation.
So while there have been numerous recent and highly visible stories about certain websites or persons having their opinions and messages downplayed or banned outright, equally vigorous attempts are under way to artificially promote favorable messages.
Astroturfing is a term for an attempt by a special interest group to portray an idea or cause as having wide-ranging support or appeal when it does not. Astroturfers set up fake follower accounts or drive traffic to sites favorable to their cause using automated web crawlers or bots, making it appear as if many people are interested in or aligned with that viewpoint. All of these techniques are attempts to manipulate public opinion, as well as the algorithms that determine a message's importance and ranking.
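From the analyst's side, astroturfed amplification often leaves a detectable signature: the same message pushed verbatim by many nominally distinct accounts. The sketch below is a toy heuristic along those lines – the function name, threshold, and sample data are all hypothetical, and real detection work is far more involved – but it illustrates the kind of pattern an analyst might screen for.

```python
# Toy heuristic: identical text posted by many distinct accounts is a
# classic signature of bot-driven amplification. Threshold and field
# names are illustrative only.
def flag_possible_astroturf(posts, min_accounts=3):
    """Return message texts repeated verbatim by at least min_accounts authors."""
    authors_by_text = {}
    for post in posts:
        authors_by_text.setdefault(post["text"], set()).add(post["author"])
    return [text for text, authors in authors_by_text.items()
            if len(authors) >= min_accounts]

sample = [
    {"author": "bot_1", "text": "Everyone supports Measure X!"},
    {"author": "bot_2", "text": "Everyone supports Measure X!"},
    {"author": "bot_3", "text": "Everyone supports Measure X!"},
    {"author": "alice", "text": "Mixed feelings about Measure X."},
]

print(flag_possible_astroturf(sample))  # ['Everyone supports Measure X!']
```

In practice astroturfers vary wording to defeat exact-match checks, so real tooling layers in signals like account creation dates, posting cadence, and near-duplicate text – but the underlying idea is the same: many voices, one source.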
How Analysts Can Stay Updated with Emerging Manipulative Techniques
New techniques, such as shadow banning and astroturfing, are constantly emerging and can have major implications for how analysts conduct research and analysis. Therefore, it's important for analysts to know about these techniques and understand their design and purpose.
One way to do that is to map these terms to clarify whether they are intended to promote or demote an idea or message based on the bias or perspectives of activists or special interest groups. By listing and displaying these terms in a chart like the one below, analysts can more clearly understand each term according to its function:
This is not an all-inclusive list, as any list of techniques can change and evolve over time.
The key to understanding many of these techniques is that they are not new at all; they’ve only become automated. Prior to the increase of open-source information and social media use, attempts to promote favorable ideas or downplay unfavorable ideas were conducted using more traditional techniques. For example, newspaper editors, television producers, news reporters, and advertisers routinely censored or promoted paid content. Pundits, politicians and activists spun media stories, called for boycotts against those who supported things they didn’t like, and applied other external pressures to organizations to change their actions or messages. Throughout history, the need to inform the public has always taken a back seat to the need to persuade them.
For analysts, it’s important to understand new and emerging manipulation techniques, regardless of the format, in order to mitigate their effects on information collection, research, and analysis.
The views expressed here are the writer’s and are not necessarily endorsed by Homeland Security Today, which welcomes a broad range of viewpoints in support of securing our homeland. To submit a piece for consideration, email [email protected]. Our editorial guidelines can be found here.