As insider threat initiatives proliferate across industry and government, professionals are increasingly being relied upon to make quick and accurate assessments of people, their intentions and surrounding circumstances. Annual Insider Threat Training and Vigilance campaigns promote workforce awareness and encourage employees to report regardless of how certain they may be.
Does more reporting lead to faster and more accurate insider threat detection?
As history shows, accurately judging an employee's level of trustworthiness is not as straightforward as it seems. Incorrect determinations of trust, whether over- or under-confident, can have significant and long-term ramifications for the individual, the organization, and national security. In 2015, Xiaoxing Xi, one of the world's leading experts on superconducting thin films and interim chairman of the Physics Department at Temple University in Philadelphia, was arrested for espionage. Despite being cleared of all charges four months before his court date, Xi was left with no savings, enormous legal fees, and a damaged reputation.
Incidents such as these should not be surprising. Years of scientific research have shown that our intuition is anything but rational. When there is no standard by which to identify and differentiate threats from non-threats, we are naturally inclined to respond based on what we believe to be true, rather than what is objectively true. Preconceptions, expectations, and emotional tone can easily override critical thinking and bias what we focus on, who we focus on, and what we think may be occurring. Be it in medicine, the courtroom, the military, DNA analysis, or fingerprint analysis, intuitive decisions can lead to negative outcomes.
Insider threat reporting in the workplace remains minimal, and a system that lacks consistency and relies on personal judgment cannot accurately detect threats or assess risk. As the dragnet expands to detect insider threats, so does the responsibility of those involved to collect the most accurate information possible. Understanding the limits of objectivity when we judge others, and adopting strategies to overcome their effects, are arguably two of the most important and overlooked areas facing security and insider threat professionals today.
This is the first of a three-part series.
More Reporting Isn’t (Necessarily) Better Reporting
It might seem safe to assume that if someone initiates a report, the information is well-informed, accurate, and based on real evidence. However, this is not always the case.
In October 2014, “Sherry” Chen, a hydrologist employed by the National Oceanic and Atmospheric Administration (NOAA), was falsely accused of illegally accessing information about the nation’s dams and covering up a meeting with a high-ranking foreign official. Even after the federal charges were dropped, Chen was unable to work based on allegations of “untrustworthiness, misrepresentation, misuse of a federal database, and lack of candor” in the workplace. Years later, an administrative judge on the case explained that the false charges and allegations were due to “mishandling of the situation on a number of different levels.”
How do such serious allegations come about, if not based on credible evidence? A review of court documents suggests that a number of erroneous factors – such as an overemphasis on negative events, a failure to weigh positive evidence, inattention to source credibility, and a failure to consider more plausible explanations for the behaviors – likely contributed to the outcomes.
Four Reasons Why We Fall for False Reporting
The human tendency to assign too much weight to first impressions can compromise the accuracy of our observations, interpretations, and ultimately our reporting about an individual. No matter who we are, we tend to believe information that confirms what we already think, put more stock in people whom we view as similar, and base our judgments on information that is easily accessible but imperfect, rather than seeking out comprehensive and complete evidence.
There are many reasons why insider threats are hard to detect, and some of them are based on faulty reasoning. Improve insider threat detection by challenging your thinking.
Reason 1: We think we have a good idea about whom to trust and whom not to trust
In an office environment, there are hundreds of factors that affect how we feel about someone and whether we trust them. However, trust is subjective. Whether and to what degree we have confidence in a co-worker is based on how likeable, benevolent, honest, and accepted by others he or she appears at a given moment. As cases of espionage show, the more likeable a person appears, the less likely his or her behavior is to be scrutinized or reported. Conversely, people who are disliked not only garner more negative attention (making them more susceptible to being reported), but are also more likely to have their actions misinterpreted.
The fact that we are quick to trust, when there is no obvious reason not to, leads to biased decision-making. Unless insider threat professionals are made aware of how natural biases can affect observations, some individuals may be reported for reasons unrelated to risk, while others who may in fact pose a real risk will be overlooked.
Reason 2: We assume if someone goes to the trouble of reporting, it must be true
Sound investigative principles require that all source information be vetted prior to further use and application. However, given time constraints, minimal training, and our propensity to trust, reports are often accepted at face value.
Unless there is an obvious reason to question it, we often accept the premises that:
- The directions and criteria, including the specific words and phrases used, are clear, objective, and tied to a conceptual framework.
- The source understood exactly what information to collect and how to identify it.
- The source of the report is well-informed, unbiased, and has no stake in the outcome.
- The report contains accurate observations of what actually happened and is not colored by opinion.
- Alternative explanations were ruled out before the report was made.
- The individual is certain that there is something of concern and would not have made the report otherwise.
While this may be true for some types of reports, accepting insider threat reporting at face value increases the likelihood of false accusations (even when the reporter believes them to be true).
Reason 3: We think that human behavior is pretty simple
Even in the most controlled circumstances, forecasting how someone is likely to act at a future time is an incredibly complex process. There are thousands of variables to choose from, any of which could theoretically correlate with an individual's decisions and actions and result in an almost infinite number of responses – making the act of prediction highly error prone. Someone who appears to be low risk today may encounter unforeseen life circumstances that dramatically increase his or her risk in the future.
Moreover, every person responds to influencing factors differently. Even the same person cannot be expected to respond the same way twice. Given the many complexities of human behavior, accurately determining who will remain steadfast over time – based on minimal and subjective reporting – is a significant challenge.
Reason 4: We trust that our common sense will lead to the right conclusion even when the directions are vague
On the face of it, allowing professionals to make common-sense judgments seems rational. However, the more unstructured and discretion-based a system is, the more difficult it is to ensure accountability and transparency. When a task is ill-defined or there is systemic pressure to get results, our tendency toward bias is even more pronounced. Opinion-based systems in the workplace, such as 360-degree feedback programs intended to assess performance and increase productivity, can penalize people who are different, less social, or simply not well-liked. Vague questions, lack of specific guidance, and opinion-based judgments lead to misinterpretation and can harm employees and the system in the long run.
For example, a behavior that is associated with counterintelligence and security problems is “extensive use of copy, facsimile, or computer equipment to reproduce or transmit classified material that may exceed job requirements.” However, what is “extensive” to one person may not be to another, especially if the observer is unaware of the person’s workload and responsibilities.
Vague indicators such as these can generate more reports and more false positives, delaying attention to evidence-based threats.