Washington D.C.
Tuesday, December 7, 2021

How the TSA Can Help Protect Its AI Systems from Attack

TSA must mitigate potential vulnerabilities that can be difficult to see within complex AI and ML algorithms.

The Department of Homeland Security (DHS) has begun implementing an ambitious plan to incorporate artificial intelligence (AI) and machine learning (ML) algorithms to screen airplane passengers and their luggage more efficiently and accurately. The Transportation Security Administration (TSA) will use synthetic training – which uses large amounts of data to train machines and generates recommendations in just minutes – to significantly reduce security delays at airports, improve the detection of concealed weapons, and enhance the safety of air travel.

But how can the TSA ensure the safety of these systems themselves? How can it protect its AI and ML systems from potential vulnerabilities and cyber threats?

Poison Attacks

Performing synthetic training for optimal ML performance at great speed will require the TSA to draw on massive data sets. The task will become even more complex when technologies like facial recognition are incorporated. These technologies can be 99.97 percent accurate but require a great deal of processing power.

An adversary who gains access to the system can easily manipulate the pixels in an image or insert corrupted data designed to compromise the entire ML process – a tactic known as a poisoning attack. Manipulation of any kind – whether intentional or not – could, at best, create inconveniences for passengers. At worst, it could put people's lives in danger.
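To make the poisoning risk concrete, here is a toy sketch – entirely synthetic data and a deliberately simple nearest-centroid classifier, not any real screening model – showing how a handful of mislabeled training points can flip the classification of a borderline item:

```python
# Toy illustration of a label-flipping poisoning attack on a
# nearest-centroid classifier. All data is synthetic and the features
# are one-dimensional stand-ins for image features.

def centroid(points):
    """Mean of a list of 1-D feature values."""
    return sum(points) / len(points)

def train(samples):
    """samples: list of (feature, label) pairs; returns per-class centroids."""
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    """Assign x to the class with the nearest centroid."""
    return min(model, key=lambda y: abs(model[y] - x))

# Clean data: "benign" items cluster near 1.0, "threat" items near 9.0.
clean = [(1.0, "benign"), (1.2, "benign"), (0.8, "benign"),
         (9.0, "threat"), (9.2, "threat"), (8.8, "threat")]

print(predict(train(clean), 6.0))     # borderline item -> "threat"

# Poisoned data: a few injected points carry flipped labels, dragging the
# "benign" centroid toward the threat region.
poisoned = clean + [(8.4, "benign"), (8.6, "benign"), (8.5, "benign")]

print(predict(train(poisoned), 6.0))  # same item now -> "benign"
```

Only three mislabeled records were needed to change the outcome, which is why the integrity controls described below matter so much.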

Defend Against Potential AI and ML Vulnerabilities

As such, the TSA must mitigate potential vulnerabilities that can be difficult to see within complex AI and ML algorithms. There are several ways to do this.

Ensure only authorized personnel have access to data models.

Maintaining data integrity is critically important and requires limiting the number of people who have access to data sets. Therefore, the TSA must actively monitor who has access to data and adjust access rights as necessary to ensure only authorized individuals can train or modify the data sets. This practice also helps protect data sets from unauthorized users who may attempt to alter the data or algorithms.
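In practice this means an explicit allowlist of roles and an audit trail of every access decision. A minimal sketch – with a hypothetical in-memory role registry, not any real identity system – might look like this:

```python
# Minimal sketch of least-privilege access control over training data.
# The role names and registry are illustrative assumptions.

AUTHORIZED_ROLES = {"ml-engineer", "data-steward"}

access_log = []  # audit trail: (user, roles, allowed)

def can_modify_training_data(user, roles):
    """Allow only explicitly authorized roles, and log every check."""
    allowed = bool(AUTHORIZED_ROLES & set(roles))
    access_log.append((user, tuple(roles), allowed))
    return allowed

print(can_modify_training_data("alice", ["ml-engineer"]))  # True
print(can_modify_training_data("bob", ["analyst"]))        # False
```

Logging denials as well as grants is the point: the audit trail is what lets security teams spot unauthorized modification attempts later.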

Train models to recognize and flag attack methods.

Incorporate potential hacker tactics into model training. For example, train the algorithm to recognize when a user attempts to alter an image to achieve a desired outcome, such as making a handgun appear to be a benign object. The machine can then begin to recognize and flag anomalies, alerting security professionals to attempted manipulations.
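As a crude stand-in for that kind of learned detection, the sketch below profiles trusted data and flags inputs that deviate too far from it. The features, thresholds, and data are illustrative assumptions, not from any TSA system:

```python
# Sketch of a simple anomaly flag: learn per-feature statistics from
# trusted examples, then flag inputs far outside that distribution.

import statistics

def fit(clean_rows):
    """Per-feature (mean, stdev) learned from trusted training rows."""
    cols = list(zip(*clean_rows))
    return [(statistics.mean(c), statistics.stdev(c)) for c in cols]

def is_anomalous(profile, row, z_threshold=3.0):
    """Flag the row if any feature lies far outside the clean profile."""
    for (mean, stdev), x in zip(profile, row):
        if stdev > 0 and abs(x - mean) / stdev > z_threshold:
            return True
    return False

# Synthetic "image feature" rows standing in for real training data.
clean = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9], [1.0, 2.05]]
profile = fit(clean)

print(is_anomalous(profile, [1.05, 2.0]))  # in-distribution -> False
print(is_anomalous(profile, [1.0, 9.0]))   # manipulated feature -> True
```

A production detector would be a trained model rather than a z-score test, but the workflow is the same: learn what clean data looks like, then surface anything that doesn't.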

Regularly inspect data sets for possible anomalies.

Routinely examine ML training data sets to ensure they remain free of tampering. Because accurate, trusted data is the most important component of any AI or ML algorithm, this practice should be exercised frequently. Set a weekly or even daily schedule to retrain and evaluate models to ensure their reliability and efficacy.
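One simple, automatable form of this inspection is an integrity fingerprint: hash the data set when it is vetted, then compare on every scheduled check. The record format below is a hypothetical illustration:

```python
# Sketch of a scheduled integrity check: hash the vetted training records
# and compare against that baseline on each later inspection.

import hashlib

def fingerprint(records):
    """Stable SHA-256 digest over an ordered list of records."""
    h = hashlib.sha256()
    for record in records:
        h.update(record.encode("utf-8"))
    return h.hexdigest()

# Baseline recorded when the data set was last reviewed by a human.
vetted = ["img-001,benign", "img-002,threat", "img-003,benign"]
baseline = fingerprint(vetted)

# Later, during a daily check: one label has been quietly flipped.
current = ["img-001,benign", "img-002,benign", "img-003,benign"]

if fingerprint(current) != baseline:
    print("ALERT: training data changed since last review")
```

Hashing catches silent modification; it doesn't judge data quality, so it complements rather than replaces the retraining and evaluation schedule described above.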

Use AI and ML to Continuously Monitor and Respond to Network Threats

The practices above address threats at the micro level – right at the point of data collection and processing. But it's equally important to monitor threats at the macro level by keeping close tabs on the infrastructure supporting this massive, data-intensive undertaking.

To truly protect AI and ML algorithms, the TSA must continuously monitor activities across the entire breadth of its network infrastructure, including the cloud systems used to house and crunch the data sets. The organization must identify potential anomalies within these systems in real time and trace them back to their sources for quick remediation, preferably through AI- and ML-driven automation.

Indeed, AI and ML can be enormously beneficial in identifying and responding to possible threats when a risk arises. AI and ML systems can detect unauthorized behaviors and anomalies. The more anomalous activity the systems detect, the better they become at responding to these issues, to the point where they can counter and block attacks before they come to fruition. As such, AI and ML aren’t just good for baggage and passenger screening processes – they can effectively protect the algorithms and data used to enhance them.
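At its simplest, that kind of automated monitoring means baselining normal activity and flagging sources that deviate sharply from it. The sketch below uses a median-based rate check with made-up addresses and thresholds – a stand-in for the ML-driven detection described above, not a real network tool:

```python
# Sketch of rate-based network anomaly detection: flag any source
# generating far more events than the typical (median) source.

from collections import Counter
from statistics import median

def flag_sources(events, factor=10):
    """Return sources producing more than `factor` times the median volume."""
    counts = Counter(events)
    baseline = median(counts.values())
    return {src for src, n in counts.items() if n > factor * baseline}

# Synthetic event stream: two normal sources and one noisy outlier.
events = ["10.0.0.1"] * 3 + ["10.0.0.2"] * 4 + ["10.0.0.99"] * 50

print(flag_sources(events))  # {'10.0.0.99'}
```

A real deployment would learn richer behavioral baselines over time, which is exactly the feedback loop described above: the more anomalies the system sees, the better its responses become.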

Protecting Passengers Requires Protecting Data

Twenty years after 9/11, the DHS and TSA are still looking for ways to improve our nation’s airport security. In doing so, it’s also critical to bolster the safety of the security systems in use. This will require setting up multiple defenses – one at the data level and another at the infrastructure level – to keep their AI and ML algorithms secure and reliable.

Single Post Template - Magazine PRO Homeland Security Today
Brandon Shopp
Brandon Shopp is the vice president of product strategy for security, compliance, and tools at SolarWinds. He served as director of product management since November 2011, assuming the title and responsibilities of senior director of product management in July 2013. Prior to SolarWinds, Shopp was the vice president of product management at AlienVault, from August 2016 until February 2018, and the senior director of products at Embarcadero Technologies, from July 2015 until August 2016. Shopp has a proven success record in product delivery and revenue growth, with a wide variety of software product, business model, M&A, and go-to-market strategies experience. Shopp holds a B.B.A. from Texas A&M University.
