
PERSPECTIVE: How to Effectively Use Data to Catch Insider Threats in Trusted Workforce 2.0

Terry Albury. Reality Leigh Winner. Daniel Hale. Who are they? Each one a cleared insider who betrayed the country’s trust by providing classified information to outside sources.

In fact, within the past two years the Department of Justice has received reports of an unprecedented increase in leaks of classified and proprietary information. This disturbing trend, combined with an achingly slow security clearance investigative process, has helped usher in the federal government’s Trusted Workforce 2.0 (TW 2.0) initiative – a bold and welcome move in support of government and industry’s need to attract and retain a reliable workforce. The continued security clearance backlog – just below 400,000 as of July – poses an unsustainable risk that the executive and legislative branches of government, and industry, are no longer willing or able to tolerate.

Moving firmly into the 21st century, the information security industry and the U.S. government are quietly embracing the potential of technology as a means to identify security risks and to stem leaks of classified information. TW 2.0’s new framework for the security clearance process aligns the investigative criteria for security, suitability, and credentialing requirements at each stage of an individual’s security clearance investigation. Notably, it also introduces a strong automated data and technology component to the process. Continuous Evaluation (CE), the near real-time identification of significant security issues, will become the heart of TW 2.0.

Heavily based on automated record checks and reliant on good data management, CE scans specific public and proprietary databases to flag potential security issues in a timely manner. This information, coupled with flexible analytics, such as Machine Learning (ML)-based algorithms, Artificial Intelligence (AI) and behavioral science methods, is a powerful tool for the government and industry to more effectively and efficiently screen potential and current employees.
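
As a rough illustration of what an automated record check might look like in code, the sketch below scans a single subject’s records from several data sources and collects anything that matches a simple issue rule. The source names, data elements, and thresholds are hypothetical placeholders invented for this example; they are not drawn from any actual CE implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Record:
    source: str      # e.g., "credit_bureau", "court_filings" (hypothetical sources)
    field: str       # data element reported by the source
    value: float     # numeric value for that element

# Hypothetical issue rules: a record is flagged when its predicate returns True.
ISSUE_RULES: dict[str, Callable[[Record], bool]] = {
    "delinquent_debt": lambda r: r.field == "past_due_balance" and r.value > 5000,
    "new_lien":        lambda r: r.field == "tax_lien_count" and r.value >= 1,
}

def scan_subject(records: list[Record]) -> list[str]:
    """Return the names of issue rules triggered by any of the subject's records."""
    return [name for name, rule in ISSUE_RULES.items()
            if any(rule(r) for r in records)]

# Example: two records from automated checks, one of which trips a rule.
flags = scan_subject([
    Record("credit_bureau", "past_due_balance", 7200.0),
    Record("court_filings", "tax_lien_count", 0),
])
print(flags)  # ['delinquent_debt']
```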

AI engines and ML are used to identify patterns, trends and, potentially, intent from textual, visual and contextual data. Ultimately, the goal is to apply ML, AI and data across the whole employment and security lifecycle of each individual, including ongoing risk assessments of commercial companies in the public sector supply chain.

With the right investment of time and capital, the time it takes to complete the initial application and onboarding processes could be reduced to less than 24 hours. Once an individual is enrolled in CE, the personnel security process shifts to continual vetting and evaluation, flagging potential risks, such as insider threats, far more quickly than the old processes.

So, what do ML and AI bring to the table? ML is the study of algorithms and statistical models that can be highly precise. Combined with the processing power of modern computers (think AI), ML performs a specific task without human interaction, relying instead on patterns, inference and behavioral tendencies. From this, a mathematical model built on ever-increasing pools of sample data, sometimes called “Training Data,” allows the system to mature and make more accurate predictions about individual outcomes. AI comes in differing models: linear (like IBM’s Watson) and others that are more intuitive. Both models are needed, because the linear model generates results that are repeatable and can stand up against lawsuits and administrative challenges related to decisions on hiring and security clearances.
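
To make the “Training Data” idea concrete, here is a minimal sketch of a linear model (logistic regression) fit to labeled sample data and then used to score a new case. The features and figures are synthetic placeholders, not real vetting criteria; the point is only that a linear model’s coefficients can be inspected and its outputs are repeatable, which is the property cited above for defending hiring and clearance decisions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic "training data": each row is a past case described by two
# hypothetical features (count of derogatory records, months since last
# incident), labeled 1 if it was ultimately adjudicated as a risk.
X_train = np.array([[0, 36], [1, 24], [4, 3], [5, 1], [2, 12], [0, 48]])
y_train = np.array([0, 0, 1, 1, 0, 0])

# A linear model: its learned coefficients can be read directly, so the same
# inputs always yield the same, explainable score.
model = LogisticRegression().fit(X_train, y_train)
print("coefficients:", model.coef_, "intercept:", model.intercept_)

# Score a new, unseen case; the output is a probability, not a decision.
new_case = np.array([[3, 6]])
print("estimated risk:", model.predict_proba(new_case)[0, 1])
```

As more labeled cases accumulate, the model is simply refit on the larger pool, which is what allows the system to “mature” over time.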

A program that provides access to trended financial and credit data can enhance CE’s effectiveness as it continues to expand and improve. Historically, the government pulled raw credit reports and paid for them both in the investigative stage of the clearance process and again once the individual was enrolled in CE. Contrast that with a “push” model, in which data is “pushed” (via triggers) to the government only as a change occurs. The “push” model creates efficiencies and saves money when integrated, better aligning the investigation mission within the TW 2.0 framework. In every sense, the “pull” approach is reactive and may delay informing the government of important changes. The “push” model, by its very nature, is proactive in informing the government of significant changes that may indicate a person is heading toward financial distress. It is inefficient to have an outside system analyze credit data when first-line analysis accesses the right data, at the right time, and in a way that best informs the government quickly, securely, and in the format it requires.
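
The efficiency argument can be sketched with back-of-the-envelope arithmetic. The population size, pull cadence, and change rate below are illustrative assumptions for the sake of the comparison, not actual CE volumes or vendor figures.

```python
# Illustrative comparison of report volume under "pull" vs. "push" monitoring.
# All numbers are made-up assumptions chosen only to show the arithmetic.
population = 100_000          # cleared individuals enrolled in CE
pulls_per_year = 4            # quarterly report pulls under the "pull" model
change_rate = 0.08            # share of individuals with a reportable change per year

pull_reports = population * pulls_per_year          # every report, every cycle
push_reports = int(population * change_rate)        # only when a trigger fires

print(f"pull model: {pull_reports:,} reports/year")   # 400,000
print(f"push model: {push_reports:,} reports/year")   # 8,000
```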

For example, individuals are placed under continuous monitoring, and listeners (aka bots) are placed on their files to monitor the rules/triggers requested by the client. These rules are flexible and completely customizable, can be established on any data element identified in credit data, and their thresholds are easy to modify and change. Triggers provide immediate notifications, or flags, to the government when a threshold for a trigger is exceeded. Credit reports do not need to be repeatedly pulled by the client at regular intervals, as in the “pull” model, since triggers deliver credit data only when a flag is set (meaning credit data is “pushed” to the client only when there is a change in a data element that matters to the client’s particular organization). Further, only the “push” model is truly continuous, providing immediate notification when key financial security indicators are identified in credit files. In summary, triggers provide near real-time data on only the information relevant to the agency’s established security criteria and accompanying guidelines to which an individual must adhere to maintain a clearance. Triggers also remove the burden of manual, cumbersome analysis from the government.
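
A minimal sketch of what such a listener and its triggers might look like in code follows. The data elements, thresholds, subject identifier, and notification step are all hypothetical; the point is that the client defines the rules, the thresholds are trivially adjustable, and data is pushed only when a rule is exceeded.

```python
from dataclasses import dataclass

@dataclass
class Trigger:
    data_element: str   # credit-file element the client wants watched (hypothetical)
    threshold: float    # flag when the new value exceeds this level

# Client-defined rules; thresholds can be tuned without changing the listener.
triggers = [
    Trigger("past_due_balance", 2500.0),
    Trigger("credit_utilization_pct", 90.0),
]

def notify_agency(subject_id: str, element: str, value: float) -> None:
    print(f"FLAG {subject_id}: {element} = {value}")  # stand-in for a secure data feed

def on_file_change(subject_id: str, element: str, new_value: float) -> None:
    """Listener invoked whenever a monitored credit-file element changes."""
    for t in triggers:
        if element == t.data_element and new_value > t.threshold:
            # Only now is anything "pushed" to the client agency.
            notify_agency(subject_id, element, new_value)

# Example: one change trips a trigger; the other stays below threshold and is ignored.
on_file_change("subject-042", "past_due_balance", 3100.0)
on_file_change("subject-042", "credit_utilization_pct", 45.0)
```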

Executive Order 13467, as amended on April 24, mandates that DoD take on the responsibility for conducting initial and periodic background investigations government-wide by Sept. 30. With the Defense Counterintelligence and Security Agency (DCSA) poised to take on this monumental task, there is a strong opportunity for furthering its engagement with and support of the TW 2.0 framework and CE’s groundbreaking capabilities.

Human cycles are best used for strategic work; software and machine learning are more than up to the task of helping eliminate the security clearance logjam and making CE more effective. Wise use of these tools will result in a robust, expedited security clearance process and ensure public safety – a “win-win” for everyone.

The views expressed here are the writer’s and are not necessarily endorsed by Homeland Security Today, which welcomes a broad range of viewpoints in support of securing our homeland. To submit a piece for consideration, email [email protected]. Our editorial guidelines can be found here.

Jonathan McDonald
Jonathan McDonald, Executive Vice President, Public Sector, TransUnion, leads TransUnion’s Public Sector business which provides a suite of mission-critical solutions to help U.S. federal, state and local government agencies manage risk and reduce costs. He has hands-on experience managing large technology programs within various government agencies. At the state level, he headed a number of programs focused on entity verification, non-obvious relationship analytics, fraud detection and investigations solutions. He's a former U.S. Marine and has a bachelor's degree in statistics and technology, and an MBA from the University of Maryland Robert H. Smith School of Business.
