Facial recognition has become a common technology. It can be found in action every day, as millions of people unlock their personal devices with just a glance and social networks identify users in photos before they are tagged. But the many uses of facial recognition reach far beyond these instances. It is a tool that is being used increasingly by law enforcement, the Department of Homeland Security (DHS) and the Transportation Security Administration (TSA) to help investigations, strengthen security measures and, in light of COVID-19, eliminate the need for close human interaction.
Although it offers several benefits in terms of security and identity verification, facial recognition technology is a highly polarizing issue – one that has recently come under attack for racial bias – and poses significant privacy concerns. Several major players in facial recognition have suspended or removed their programs for law enforcement in response to racial profiling and bias repeatedly demonstrated by the technology.
These are not the only issues with facial recognition. The technology still needs further development to eliminate – or greatly reduce – security vulnerabilities. While the use of facial recognition and related technology at airports and other security checkpoints can reduce physical contact, and potentially infections, during the pandemic, it may also expand the attack surface, presenting adversaries with a new target.
McAfee’s Advanced Threat Research (ATR) team reflected on the growth of facial recognition technology and the critical decisions it enables. Could flaws in the underlying technology be exploited to bypass the systems that rely on it? We considered this question and wanted to know whether facial recognition systems – specifically, systems that emulate a passport scanner for identity verification – were more, or less, susceptible to error than humans.
The McAfee ATR team set out to see if a carefully crafted passport-style ‘adversarial image’ could be incorrectly classified as a targeted individual. To run the tests, we implemented a physical system like those used to verify passport identification in airports. Using machine learning, we created an image that looked like one person to the human eye but was identified as someone else by our system’s facial recognition algorithm. This allowed us to trick the system into incorrectly validating the wrong individual. (More information about McAfee’s ATR study on facial recognition can be found on the McAfee Labs website.)
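Attacks of this kind fall into the broad class of “adversarial examples”: inputs altered by small, human-imperceptible perturbations that flip a model’s decision. The full details of our model and image pipeline are in the study linked above; purely as an illustration of the general principle, the toy sketch below applies a fast-gradient-sign-style step to a hypothetical linear “identity score.” Every name and parameter here (the linear model, `eps`, the scores) is illustrative and does not reflect McAfee’s actual method or any real face recognition system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a face recognition model: a linear score
# over a 64-dimensional feature vector. Positive score => the system
# accepts the input as the *target* identity.
dim = 64
w = rng.normal(size=dim)
b = -0.5

def identity_score(x):
    """Positive => classified as the target identity."""
    return float(w @ x + b)

# The attacker's true face features: scored well below the acceptance
# threshold, so the unmodified image is (correctly) rejected.
x_attacker = rng.normal(scale=0.01, size=dim)

# Fast-gradient-sign-style step: nudge every feature by at most `eps`
# in the direction that increases the target score. For a linear model
# the gradient of (w·x + b) with respect to x is simply w.
eps = 0.2  # perturbation budget: small per-feature change
x_adv = x_attacker + eps * np.sign(w)

print("original score:   ", identity_score(x_attacker))  # negative: rejected
print("perturbed score:  ", identity_score(x_adv))       # positive: accepted
print("max feature change:", np.max(np.abs(x_adv - x_attacker)))  # == eps
```

The point of the sketch is that each individual feature moves only by `eps`, yet the many tiny, gradient-aligned changes accumulate into a large swing in the model’s score – the same intuition behind crafting an image that looks unchanged to a human but is re-identified by the classifier.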
Had our tests been a real scenario, the passport scanner would have identified the system attacker – an individual on a no-fly list – as a different person with no flying restrictions. The attacker would have passed through security and passport identification unnoticed and consequently been permitted to board their flight – clearly an unacceptable outcome.
If critical tasks historically performed by humans, such as identity authentication, are going to continue to be handed off to evolving technology, we must ensure a framework is in place to determine acceptable bounds for resilience and performance under adverse conditions.
The purpose of our research was not to denigrate facial recognition technology. Without research, there can be no progress. Our goal is to show that reliance on automated systems and machine learning such as facial recognition could provide malicious cyber actors the opportunity to bypass passport identification and other critical systems if the inherent security flaws of these technologies go unchecked.
It’s important that security specialists work closely with the vendors and implementers of critical systems, employing data science and security research to close any gaps that could weaken them. We look to the community for a standard that can measure the reliability of machine learning systems in the presence of adversarial examples like those described above. Facial recognition is an evolving technology that needs perfecting in a number of ways, and we must ensure that security is one of them.