PERSPECTIVE: Use AI and Machine Learning to Mold a Modern Security Clearance Process

The security clearance process is broken, plagued by backlogs and delays that threaten the homeland security of the United States. A recent 41.5 percent decrease in the backlog of security clearances, from 725,000 to 424,000 cases, tells a misleading story about the efficiency of the system. The decrease was driven by a surge in hiring; clearance processing times have not improved, and the backlog remains more than double the identified healthy goal of 200,000 cases. In March 2019, Director of the National Counterintelligence and Security Center William Evanina and Deputy Director of the Office of Personnel Management (OPM) Michael Rigas recognized that true reform requires an overhaul of the security clearance system rooted in innovation.

Future screening and vetting systems must be enabled by machine learning (ML) and artificial intelligence (AI) to operate efficiently and to optimize human performance. ML/AI-enabled technology can provide the key to reform while maintaining privacy and security: it can reduce false positives through faster, more accurate screening and support continuous vetting mechanisms that save time and resources. Evanina and Rigas made three recommendations in their vision of reform, all of which can be enabled by ML/AI capabilities:

  • First, the expansion of the Continuous Evaluation program;
  • Second, an accommodation of the evolution of how people interact with technology; and
  • Third, the founding of a Trusted Information Provider Program.

Evanina and Rigas believe that these three solutions can streamline efforts and improve the clearance process. I agree. Enabled by ML/AI, these advances can give dedicated investigators and analysts the tools to properly vet the government’s current and potential workforce, which is imperative to national security in an era marked by identity fraud, deepfakes, and cyber threats.

Continuous Evaluation

Continuous Evaluation (CE) technology reviews the backgrounds of individuals with access to sensitive materials to confirm their continued eligibility for a security clearance. Currently, periodic reviews are conducted every five to 15 years, a practice that creates time and data gaps that can lead to missed national security or insider threats. Even an agency that spent its entire budget on analysts could not scour all publicly available electronic information (PAEI) with the accuracy and speed required.

The success of screening and vetting entities relies on the ability of screeners, analysts, and law enforcement professionals to make sense of massive amounts of data. Digital activity leaves behind an echo – airline travel, social media activity, and online news and blog posts are all examples. These echoes form measurable identities that give a timely reflection of, and context for, people’s behavior in the physical and cyber worlds. This kind of PAEI makes Continuous Evaluation processes more effective. By using Continuous Evaluation that employs PAEI, the government can move beyond the resource-heavy five- to 15-year re-evaluation cycle, improving the quality of the process while creating a nimbler labor force.

What does PAEI look like now?

Available data include all publicly available electronic information that each of us creates in the course of our daily lives. However, clearance processes are not currently equipped to handle most of these data. PAEI needs to be employed in monitoring and clearing processes as younger employees, who have deeper digital footprints and communicate in different ways than their colleagues, enter government service. Despite the need to process various forms of PAEI, Evanina admitted that the government cannot yet use this information at scale in security clearance processes.

Applying ML/AI to PAEI can be game-changing for analysts and public security. Ultimately, an online presence is an indication of a person’s behavior, both digitally and in the physical world. By identifying patterns of behavior associated with illicit activity – money laundering, terrorist activity, or trafficking – ML/AI can learn to measure indicators of illicitness with precision. Not only can this technology use PAEI to measure potential threat, but it can also measure the absence of threat. This allows analysts and investigators to adjudicate cases more effectively, while improving privacy and civil liberties by subjecting to further investigation only those individuals who have shown patterns of illicit activity.

Mistrust

Mistrust within government clearance processes occurs at both the individual and systems levels. Government institutions mistrust employees who are not vetted at all, and government agencies mistrust one another’s vetting. The proposed Trusted Information Provider Program is intended to streamline efforts and reduce case buildup.

The ideal Trusted Information Provider Program would partner with government and private entities to avoid duplicated work. Currently, some federal departments can enroll in the DoD’s Continuous Evaluation program, but the option does not exist across all departments, and enrollment is not required where it is available. For example, former Under Secretary of Homeland Security for Intelligence and Analysis Charles said that employees who already held security clearances from intelligence and defense agencies were subjected by DHS to additional vetting processes. In short, an employee in the government security sphere who already holds a clearance must be cleared again just to make a lateral move. This is not only duplicative; it also actively threatens the national security of the United States.

If a Trusted Information Provider were an organization that provided an AI/ML-enabled screening and vetting tool, all three problems identified by Rigas and Evanina could be solved with a single tool. AI- and ML-enabled technologies offer the transformative reform necessary for a modern security clearance process, enabling the government to screen and vet more accurately. The world is changing, and the way we use data to secure it must change as well. Ultimately, the use of AI and ML in the government security clearance process will increase the safety and security of the U.S. government and its efforts at home and abroad.

The views expressed here are the writer’s and are not necessarily endorsed by Homeland Security Today, which welcomes a broad range of viewpoints in support of securing our homeland.

Gary M. Shiffman, Ph.D., brings passion, creativity, and scientific rigor to the mission of making the world a more difficult place for illicit actors and networks. He works primarily through the application of behavioral science to the world of organized violence. In 2012, Dr. Shiffman founded the software company Giant Oak (www.giantoak.com) to bring government-funded research and development to U.S. national security and to compliance professionals in the financial services industry. He seeks to democratize advanced analytics so that organizations, regardless of size, can benefit from best practices in countering illicit activities. Dr. Shiffman teaches at Georgetown University, and his second book, The Economics of Organized Violence: How Behavioral Science Can Transform our View of Crime, Insurgency and Terrorism, is forthcoming in late 2019 from Cambridge University Press. Dr. Shiffman’s past professional experiences have qualified him as an expert on the intersections of big data, business, national security, and social science. Prior to his work at Giant Oak, Dr. Shiffman served as Managing Director of the Chertoff Group, Senior Vice President and General Manager of Risk Management Solutions at L-3, and Chief of Staff at U.S. Customs and Border Protection. Additionally, Dr. Shiffman has served as a Senior Policy Advisor to the U.S. Senate Leadership, advised international law firms on anti-terrorism and homeland defense issues, and served in policy, planning, and operational positions in the U.S. Department of Defense. Dr. Shiffman proudly served his country and is a decorated U.S. Navy Gulf War veteran. He earned his Ph.D. in Economics from George Mason University, his M.A. in Security Studies from the School of Foreign Service at Georgetown University, and his B.A. in Psychology from the University of Colorado.
