
Why ‘See Something, Say Something’ Isn’t Enough to Detect the Next Insider Threat

“If You See Something, Say Something” has been an effective mantra for transportation safety and homeland security since 9/11. But is it the right catchphrase for insider threat awareness?

We know we should report the ticking briefcase in the corner of the subway station, but what about the co-worker whose schedule has changed?

When individuals notice something that is overtly out of the norm, directly relates to impending violence, or has a high likelihood of being “bad” regardless of the situation, saying something is almost always better than not. A 2018 report on Mass Attacks in Public Spaces by the U.S. Secret Service confirms what most people already suspect: there are concrete, observable risk indicators that, if reported, can lead to significant risk mitigation. A car abandoned in the middle of a busy intersection with thick black smoke around it, or a teenager who withdraws from friends, quits his job, spends hours on violent jihadi websites and purchases a ticket to the Middle East, suggests that a problem is likely brewing. See Something, Say Something works well in situations where it would be hard to generate a benign, alternative explanation.

What happens, however, when the risk indicator requires a great deal of inference – as when employees are asked to report behaviors that only indirectly relate to threats? When there are equally plausible explanations for what is occurring, or when it’s unclear what the problematic behavior is or looks like, See Something, Say Something isn’t enough.

Even though employees tend to under-report security-related incidents in workplace and healthcare settings, giving them free rein to report any behavior carries its own risks – it can easily lead to unwarranted investigations that waste time and resources and harm the reputations of individuals and companies. Vague terminology, inconsistent decision rules, and inattention to source credibility all undermine the quality of insider threat reporting.

This is the second article in a three-part series on investigating insider threats. Part three will focus on strategies for evaluating and improving the quality of insider threat reporting.

Four Reasons Why It Will Be Harder to Catch the Next Insider Threat

***

‘See Something, Say Something’ glosses over context

Anyone adept at exploiting others knows that to maintain access, you have to fit in. Psychological research supports the intuition that people tend to trust others in the absence of a reason not to. Someone who doesn’t exhibit “concerning” behavior is usually not thought to be a concern.

Unlike threats to public safety, the “something” that we see in the workplace is often hard to discern and can rarely be interpreted on its own. To understand whether a behavior may be indicative of an insider threat, a clear understanding of the circumstances in which the behavior arose is necessary.

This is especially important because accurate risk determinations depend on synthesizing multiple indicators from different sources and across time. Additional indicators, no matter how many are collected, add up to a valid threat picture only if each one has been clearly interpreted and contextualized.

Past risk indicators are rarely effective in detecting future threats

Because we often forget that behavior is situation-dependent, applying lessons learned from past cases can bias what we look for and identify as threats. For example, since “unexplained wealth” is relevant to a subset of espionage cases, the Center for Development of Security Excellence (CDSE) suggests that the following type of incident is reportable:

During a meeting, a coworker shows off an expensive new watch. When asked about affording such a luxury, he becomes uncomfortable and offers no explanation.

The idea that the individual is an insider threat, was illegally paid, and used the money to buy an “expensive watch” (or received it as an improper gift) is possible – but it assumes a great deal of the reporter. To report accurately, the reporter would need to know the brand and cost of the watch, that the owner purchased it himself, and that it was not a gift. Whether the owner actually felt discomfort is unknowable, and the notion that he owed his coworkers an explanation is simply wrong. This type of scenario is overly specific, prone to misperception, and would require a great deal of investigative work if it were taken at face value.

When guidance is vague, ‘something’ could just as well be ‘anything’

The ambiguities in the National Industrial Security Program Operating Manual (NISPOM), which requires all contract employees to report any “adverse information” about cleared employees, make it less likely that what is “seen” and “said” will be meaningful:

“Adverse information consists of any information that negatively reflects on the integrity or character of a cleared employee, that suggests that his or her ability to safeguard classified information may be impaired, or that his or her access to classified information clearly may not be in the interest of national security.”

The lack of clarity and detail in the guidance means that there are few restrictions on what is reported and how it is reported. When reporting instructions are unclear, evaluation criteria are non-existent, and terminology is overly broad, professionals make sense of risk on their own terms – which, in turn, compromises the credibility of what is reported.

For example:

  • Because words like “impaired” or “integrity” are vague and opinion-based, people can disagree on whether these terms apply in a specific instance.
  • The assumption that everyone is equally qualified to assess everyone else – regardless of each individual’s training, education, position, mental health, history of misconduct or relationship to the person being reported – is doubtful.
  • The idea that complex issues can be settled by simple observations – such as whether a person’s “access to classified information” is or is not in the “interest of national security” – is highly questionable.

Knowing ‘something’ is good – knowing how that ‘something’ is known is even better

There is no doubt that See Something, Say Something works in the extreme, such as when an infrequent visitor to an office is suddenly seen taking an item that – classified or not – does not belong to him or her. When a report concerns a co-worker who belongs in the office, however, considering how the information was obtained becomes important.

No one interprets the world the same way. Since we are mostly unaware of our own biases and misperceptions, people frequently (and in good faith) misconstrue what they “see.” Errors in observation increase when a task is complicated, the facts are ambiguous, or a judgment is being made from secondhand information.

The Counterintelligence Reporting Essentials (CORE), developed by PERSEREC, provides guidance and examples of cases that represent “clear violations” and where “no judgment is required” by the person reporting. However, while many of the behaviors on the list are undeniably of concern, in practical terms some may be harder to report than others.

For example:

“. . . you see someone removing classified material from the work area without appropriate authorization, either by physically taking it home or on travel, or by e-mailing or faxing it out of the office . . .”[1]

Whether a co-worker has “appropriate authorization,” or is acting outside the scope of his or her authority, is often unknowable. Employees who work on classified projects in shared office spaces often have different levels of access, different responsibilities, and different schedules, and do not sit in direct sight of one another. Accurately identifying that a co-worker is emailing or faxing the wrong person requires that the observation be made at close range. Given the difficulty and focused attention required to obtain this information, at times the insider threat concern may be more relevant to the reporter of the information than to the subject of the report.

***

As companies push to implement government policies and guidelines to protect against insiders, it is important to take a closer look at what professionals are being asked to do and whether they are positioned to make fair and accurate determinations. While a See Something, Say Something approach encourages awareness and reminds people of the importance of reporting, it is not enough when it comes to insider threat detection.

[1] Wood, S., Crawford, K. S., & Lang, E. L. (2005). Reporting of counterintelligence and security indicators by supervisors and coworkers (No. PERS-TR-05-6). Monterey, CA: Defense Personnel Security Research Center, p. B-8.
Judy Philipson, Ph.D.
Dr. Judy Philipson is President of Behavioral Sciences Group LLC. A recognized expert in threat detection, risk assessment, operational tradecraft, and investigative methods, she has over 20 years of experience providing advice, analysis, and training to clients in the Intelligence Community, Department of Defense, Law Enforcement and Homeland Security. From 2009-2014, she served as Social Influence Advisor to the Counterterrorism Center at the Central Intelligence Agency. She is a Senior Associate Fellow at Narrative Strategies, LLC, a consultant to the Insider Threat Management Group, and a Lecturer in Criminology and Criminal Justice at the University of Maryland, College Park. She has a Ph.D. in clinical psychology from Drexel University where she focused on forensic populations and issues relating to deception and risk assessment.
