Washington D.C.
Tuesday, March 19, 2024

PERSPECTIVE: Use AI and Machine Learning to Mold a Modern Security Clearance Process

The security clearance process is broken, plagued by backlogs and delays that threaten the homeland security of the United States. A recent 41.5 percent decrease in the backlog of security clearances, from 725,000 to 424,000 cases, tells a misleading story about the efficiency of the system. The decrease was due to a growth in hiring, but the fact remains that clearance processing time has not improved and that the backlog is double the identified healthy goal of 200,000 cases. In March 2019, Director of the National Counterintelligence and Security Center William Evanina and Deputy Director of the Office of Personnel Management (OPM) Michael Rigas recognized that true reform of the security clearance system requires an overhaul rooted in innovation.

Future screening and vetting systems must be enabled by machine learning (ML) and artificial intelligence (AI) to be efficient and to optimize human performance. ML/AI-enabled technology can provide the key to reform while maintaining privacy and security. ML/AI can reduce false positives through faster screening and can provide continuous vetting mechanisms that save time and resources. Evanina and Rigas made three recommendations in their vision of reform, all of which can be enabled by ML/AI capabilities:

  • First, the expansion of the Continuous Evaluation program;
  • Second, an accommodation of the evolution of how people interact with technology; and
  • Third, the founding of a Trusted Information Provider Program.

Evanina and Rigas believe that these three solutions can streamline efforts and improve the clearance process. I agree. Enabled by ML/AI, these advances can provide dedicated investigators and analysts with tools to properly vet the government’s current and potential workforce, which is imperative to national security in an era marked by identity fraud, deep fakes, and cyber threats.

Continuous Evaluation

Continuous Evaluation (CE) technology reviews the backgrounds of individuals with access to sensitive materials to confirm their continued security clearance eligibility. Currently, periodic reviews are conducted every five to 15 years, a practice that leaves time and data gaps that can result in missed national security or insider threats. Even an agency that spent its entire budget on analysts could not scour all publicly available electronic information (PAEI) with both the accuracy and speed needed.

The success of screening and vetting entities relies on the ability of screeners, analysts, and law enforcement professionals to make sense of massive amounts of data. Digital activity leaves behind an echo – examples include airline travel, social media activity, and online news and blog posts. These echoes form measurable identities that give a timely reflection and context of people’s behavior in the physical and cyber worlds. This kind of PAEI makes Continuous Evaluation processes more successful. By using Continuous Evaluation that employs PAEI, the government can evolve beyond the resource-heavy five- to 15-year re-evaluation assessments, improving the quality of the process while creating a nimbler labor force.

What does PAEI look like now?

Available data include all publicly available electronic information created by each of us in the course of our daily lives. However, clearance processes are not currently equipped to handle most of these data. PAEI needs to be employed for monitoring and clearing processes as younger employees, who have deeper digital footprints and communicate in different ways than their colleagues, begin to enter government service. Despite the need to process various forms of PAEI, Evanina admitted that the government does not have the ability to use this information at scale in government security clearance processes.

Applying ML/AI to PAEI can be game-changing for analysts and public security. Ultimately, an online presence is an indication of a person’s behavior, both digitally and in the physical world. By identifying patterns of behavior associated with illicit activity, such as money laundering, terrorist activity, or trafficking, ML/AI can learn to measure indicators of illicit behavior with precision. Not only can this technology use PAEI to measure potential threat, but it can also measure the absence of threat. This allows analysts and investigators to better adjudicate cases while improving privacy and civil liberties, because only individuals who have shown patterns of illicit activity are investigated further.
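The triage logic described above can be illustrated with a minimal sketch. This is not any agency’s actual system; the feature names, weights, and threshold are invented for illustration. The point is the privacy-preserving shape of the workflow: a model scores PAEI-derived behavioral signals, and only cases above a risk threshold are escalated to a human analyst, while the rest are cleared without further scrutiny.

```python
# Toy sketch of ML-assisted triage for continuous evaluation.
# Feature names, weights, and the threshold are hypothetical.

RISK_WEIGHTS = {
    "unexplained_foreign_transfers": 0.5,
    "contact_with_sanctioned_entities": 0.4,
    "identity_inconsistencies": 0.3,
}

# Cases scoring below this threshold are cleared with no human review.
ESCALATION_THRESHOLD = 0.6


def risk_score(features: dict) -> float:
    """Combine binary behavioral flags into a risk score capped at 1.0."""
    score = sum(
        RISK_WEIGHTS[name]
        for name, present in features.items()
        if present and name in RISK_WEIGHTS
    )
    return min(score, 1.0)


def triage(cases: dict) -> dict:
    """Escalate only cases whose score meets the threshold; clear the rest."""
    return {
        case_id: ("escalate" if risk_score(f) >= ESCALATION_THRESHOLD else "clear")
        for case_id, f in cases.items()
    }


cases = {
    "A-100": {
        "unexplained_foreign_transfers": True,
        "contact_with_sanctioned_entities": True,
        "identity_inconsistencies": False,
    },
    "A-101": {
        "unexplained_foreign_transfers": False,
        "contact_with_sanctioned_entities": False,
        "identity_inconsistencies": True,
    },
}
print(triage(cases))  # A-100 (score 0.9) escalated; A-101 (score 0.3) cleared
```

In a real system the hand-set weights would be replaced by a trained model and the threshold by an adjudicated policy, but the structure is the same: the machine narrows the pool, and humans decide only the flagged cases.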

Mistrust

Mistrust within government clearance processes occurs at both the individual and systems levels. Government institutions mistrust employees who are not vetted at all, and various government agencies mistrust the vetting of other agencies. The proposed Trusted Information Provider Program is intended to streamline efforts and reduce case buildup.

The ideal Trusted Information Provider Program would partner with government and private entities to avoid duplicated work. Currently, some federal departments are able to enroll in the DoD’s Continuous Evaluation program, but this option does not exist across all departments, and it is not required in those where it is available. For example, former DHS Under Secretary for Intelligence and Analysis Charles said that employees who already held security clearances from intelligence and defense agencies were required by DHS to undergo additional vetting. In short, an employee in the government security sphere who already holds a clearance must be cleared again just to make a lateral move. This is not only duplicative; it also actively threatens the national security of the United States.

If a Trusted Information Provider were an organization that provided an AI/ML-enabled screening and vetting tool, all three problems identified by Rigas and Evanina could be solved with a single tool. AI- and ML-enabled technologies offer the transformative reform necessary for a modern security clearance process, enabling the government to screen and vet more accurately. The world is changing, and the way we use data to secure it must also change. Ultimately, the use of AI and ML in the government security clearance process will increase the safety and security of the U.S. government and its efforts at home and abroad.

The views expressed here are the writer’s and are not necessarily endorsed by Homeland Security Today, which welcomes a broad range of viewpoints in support of securing our homeland. To submit a piece for consideration, email [email protected]. Our editorial guidelines can be found here.

Gary M. Shiffman, Ph.D.
Gary M. Shiffman is an applied micro-economist and business executive working to combat organized violence, corruption, and coercion. He received his BA from the University of Colorado in Psychology, his MA from Georgetown University in National Security Studies, and his PhD in Economics from George Mason University. His academic work is complemented by his global operational experiences, including his service as a U.S. Navy Surface Warfare Officer in the Pacific Fleet with tours in the Gulf War; as an official in the Pentagon and a Senior Executive in the US Department of Homeland Security; as a National Security Advisor in the US Senate; and as a business leader at a publicly traded corporation. Currently, Dr. Shiffman serves as the CEO of Giant Oak, Inc., and the CEO of Consilient, both of which are machine learning and artificial intelligence companies building solutions to support professionals in the fields of national security and financial crime. He dedicates time to Georgetown University’s School of Foreign Service, teaching the next generation of national security leaders. Dr. Shiffman published The Economics of Violence: How Behavioral Science Can Transform Our View of Crime, Insurgency, and Terrorism with Cambridge University Press in 2020.
