Academics at Cardiff University in Wales, UK, have conducted the first independent academic evaluation of Automated Facial Recognition (AFR) technology across a variety of major policing operations.
The project by the Universities’ Police Science Institute evaluated South Wales Police’s deployment of Automated Facial Recognition across several major sporting and entertainment events in Cardiff city over more than a year.
The study found that while AFR can enable police to identify persons of interest and suspects where they would probably not otherwise have been able to do so, considerable investment and changes to police operating procedures are required to generate consistent results.
Researchers employed a number of research methods to develop a rich picture and systematically evaluate the use of AFR by police across multiple operational settings. This is important as previous research on the use of AFR technologies has tended to be conducted in controlled conditions. Using it on the streets and to support ongoing criminal investigations introduces a range of factors impacting the effectiveness of AFR in supporting police work.
The technology works in two modes:
Locate is the live, real-time application that scans faces within CCTV feeds in an area. It searches for possible matches against a pre-selected database of facial images of individuals deemed to be persons of interest by the police.
Identify, on the other hand, takes still images of unidentified persons (usually captured via CCTV or mobile phone camera) and compares these against the police custody database in an effort to generate investigative leads. Evidence from the research found that in 68% of submissions made by police officers in the Identify mode, the image was not of sufficient quality for the system to work.
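Both modes reduce to the same underlying operation: comparing a probe face against a database of enrolled images and returning candidates above a similarity threshold. The report does not describe the vendor's actual algorithm, so the following is a purely illustrative sketch, assuming a hypothetical face-embedding model that maps faces to feature vectors compared by cosine similarity; the names `FaceRecord` and `match_against_watchlist` are invented for illustration.

```python
from dataclasses import dataclass
import math

@dataclass
class FaceRecord:
    person_id: str
    embedding: list[float]  # feature vector from a (hypothetical) face-encoding model

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def match_against_watchlist(probe: list[float],
                            watchlist: list[FaceRecord],
                            threshold: float = 0.8) -> list[tuple[str, float]]:
    """Return watchlist entries whose similarity to the probe face
    meets the threshold, best match first."""
    scored = ((r.person_id, cosine_similarity(probe, r.embedding)) for r in watchlist)
    return sorted((s for s in scored if s[1] >= threshold), key=lambda s: -s[1])
```

In the Locate mode the probe would come from a live CCTV frame and the watchlist would be the pre-selected database of persons of interest; in the Identify mode the probe is a submitted still image and the database is the police custody collection. The 68% failure figure reflects probes whose quality is too poor for any such comparison to be meaningful.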
Over the period of the evaluation, however, the accuracy of the technology improved significantly and police got better at using it. The Locate system was able to correctly identify a person of interest around 76% of the time. A total of 18 arrests were made in live Locate deployments during the evaluation, and in excess of 100 people were charged following investigative searches during the first eight to nine months of the AFR Identify operation (late July 2017 to March 2018).
The report suggests that it is more helpful to think of AFR in policing as ‘Assisted Facial Recognition’ rather than a fully ‘Automated Facial Recognition’ system. ‘Automated’ implies that the identification process is conducted solely by an algorithm, when in fact, the system serves as a decision-support tool to assist human operators in making identifications. Ultimately, decisions about whether a person of interest and an image match are made by police operators. It is also deployed in uncontrolled environments, and so is impacted by external factors including lighting, weather and crowd flows.
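The 'assisted' framing above can be made concrete with a small human-in-the-loop sketch. This is an assumption-laden illustration, not the force's actual workflow: the algorithm only proposes candidate matches, and a (here stubbed-out) operator callback makes the final identification decision.

```python
from typing import Callable

def review_alerts(alerts: list[tuple[str, float]],
                  operator_confirms: Callable[[str, float], bool]) -> list[str]:
    """Filter algorithm-generated alerts through a human decision.

    `alerts` are (person_id, similarity_score) pairs proposed by the system;
    `operator_confirms` stands in for the police operator who visually
    compares the live face with the watchlist image and decides.
    """
    confirmed = []
    for person_id, score in alerts:
        if operator_confirms(person_id, score):  # human makes the final call
            confirmed.append(person_id)
    return confirmed
```

The design point is simply that the similarity score is advisory: no identification is acted on unless the operator agrees, which is why 'decision-support tool' describes the system better than 'automated'.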
Deputy Chief Constable Richard Lewis from South Wales Police said the report provides a balanced perspective of the technology and will hopefully help to demystify some of the misunderstandings and misinformation that have proliferated across the press.
AFR on trial
As part of the evaluation study, an attempt was made to test the system in a relatively controlled way. This involved researchers enrolling images of themselves on a watchlist and then deliberately passing through the ‘field of vision’ of the AFR camera to see if it matched them. This experiment was conducted as part of the Wales v Italy rugby game deployment on the 11th March 2018. The purpose was to determine what factors (e.g. walking speed and direction, clothing, and image quality) impact upon the AFR platform’s ability to match an individual to their watchlist image.
The trial involved seven volunteers having their ‘custody’ photographs enrolled onto the AFR watchlist for the event. Each individual then carried out multiple ‘walk-throughs’ of the areas covered by the cameras under various conditions; in total, there were over 90 walk-throughs. The conditions included different walking speeds and directions, an ‘on phone’ walk with the head at an angle, a group walk-through, and various props such as scarves, hats, glasses and facial hair.
The match rate was very high, between 76% and 81%. For some volunteers, the recognition score was higher when walking head-on towards the camera, while for others it was higher when walking diagonally. It is not clear why the results showed such a pattern, but it does suggest that there is no direct relationship between walking direction and recognition.
The researchers were also interested in the impact of ‘props’ on the system. These were common accessories that people would be likely to wear at various times of the year. In particular, two different types of glasses were worn by different individuals: one ‘normal’ pair and one set that deliberately obscured the eyes and bridge of the nose. A false moustache for the male volunteers and a headscarf for the women were also included. A baseball cap and glasses appeared most likely to hinder facial recognition. The cap obscured the forehead and cast a shadow over the rest of the face; if pulled down far enough, the system was completely unable to ‘read’ the face. The glasses (especially the larger pair) obscured the eyes.
The trial also found that the quality of custody images was important. One poor-quality image, in which the subject’s eyes were partly closed and their head was at an angle, was included in the trial, and the volunteer concerned was missed by the system on several occasions.
The research and the accompanying trial show that facial recognition has a vital role to play in law enforcement and security, but they also highlight the challenges that both the system and its users face, challenges which may be less pronounced in more controlled settings such as transport hubs.