Ten years ago this April, a pair of homemade bombs exploded at the finish line of the Boston Marathon, killing three people and injuring hundreds. The Federal Bureau of Investigation (FBI) collaborated with other law enforcement agencies to launch one of the biggest manhunts in U.S. history, including the largest aerial fleet ever assembled by the FBI. Agents pored over videos and photographs, searching for clues and running down leads.
At the outset, they didn’t know who or what they were looking for. They recruited backup from around the country to sift digital evidence. Those efforts led to the identification and apprehension of the bombing suspects, a pair of brothers.
A decade later, the Boston Marathon bombing stands as a watershed moment for law enforcement – a turning point in the way agencies analyze data collected from many sources. Although the manhunt succeeded, marshalling human capital on that scale to analyze data is unfeasible in most cases. Since the bombing, automated tools have been changing the way law enforcement organizations use data. Investigators also discovered that a failure to share information about the brothers before the bombing may have cost law enforcement the chance to stop them. It is one example among many in which difficulty sharing information affected the outcome of risk management.
In the intervening years, technological advancements have made it easier than ever for law enforcement and public safety organizations to extract from data the intelligence that supports their missions. Artificial intelligence and machine learning (AI/ML) tools are capable of generating actionable intelligence that helps law enforcement to predict, stop, respond to, and solve crimes – and keep the public safe. Operating on a scale that far exceeds human capabilities, AI/ML analyzes vast quantities of data to discern patterns and insights that otherwise would remain hidden.
Today, law enforcement agencies’ adoption of these tools is at a tipping point – in use in pockets, but not yet adopted at the enterprise level. Depending on what happens next, AI/ML could catalyze big changes in law enforcement agencies’ collective skill sets, processes, and business models.
As law enforcement and public safety communities increase their use of AI/ML, it’s important that they do so in a way that maximizes the benefits of this powerful technology while sidestepping potential pitfalls. These are not plug-and-play tools. Successfully deploying AI/ML involves a number of considerations: acquiring the right technology, training staff, and avoiding cultural impediments through active change management. AI is not magic; it requires a human element to manage the results. Organizations must take pains to avoid unintended consequences that could impede the missions of law enforcement and potentially harm citizens.
Most important, agencies should understand that AI/ML tools are force multipliers – not force replacements. The best use of AI/ML technology is to inform decisions made by humans, not to make decisions for them.
AI/ML has potential to reinvent the way law enforcement agencies do their work. By completing labor-intensive tasks faster and more accurately than humans, the tools promise to deliver greater efficiency, faster investigations, and deeper insights. Liberating human workers from the drudgery of mundane tasks enables them to focus on higher-level work.
AI/ML excels at ingesting massive amounts of data from disparate sources – body cameras, CCTV, Ring doorbells, iPhone video, weather feeds, and other data sets – and making sense of it. It analyzes data to understand why things happened, which helps law enforcement organizations solve crimes. AI/ML tools further excel at predictive analysis, extrapolating from historical data to develop crime-prevention models.
An AI algorithm designed to extract meaning from digital images could, if supplied with data from multiple sources, predict the likely route a getaway car will take upon leaving the scene of a bank robbery; the prediction would vary with weather conditions, time of day, mode of transportation, and other variables. Facial recognition tools could identify robbers caught on camera and, aided by a network of cameras and sensors, track their movements and distinguishing markings. Even if the robbers concealed their faces, law enforcement could identify potential matches using gait analysis – essentially a fingerprint in motion – which catalogues the signature ways in which people move.
The proliferation of 5G networks and edge computing has made it far easier to apply AI/ML capabilities in real time. Tools can learn to review video, recognize car types, and understand when vehicles run red lights or travel the wrong direction on a one-way street. Integrated with gunshot-detection (shot-spotter) technologies, they can identify subjects and relay information in real time to responding officers. AI/ML can flag an inordinate number of cars gathered at a park. Depending on the types of sensors providing data, AI/ML tools could identify a vehicle that is speeding, stolen, or flagged in an Amber Alert.
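The vehicle checks described above amount to rules applied to each observation as it arrives. A minimal sketch of that idea, with plate numbers, speed limits, and watch lists invented purely for illustration:

```python
# Hypothetical sketch of rule-based vehicle flagging, the kind of check an
# AI/ML pipeline running at the network edge might apply to each sighting.
# The plates, limits, and watch lists below are invented for illustration.

STOLEN_PLATES = {"ABC123"}       # assumed stolen-vehicle watch list
AMBER_ALERT_PLATES = {"XYZ789"}  # assumed Amber Alert watch list

def flags(plate: str, speed_mph: float, limit_mph: float) -> list[str]:
    """Return the reasons, if any, that this sighting should be flagged."""
    found = []
    if speed_mph > limit_mph:
        found.append("speeding")
    if plate in STOLEN_PLATES:
        found.append("stolen")
    if plate in AMBER_ALERT_PLATES:
        found.append("amber_alert")
    return found

# A speeding stolen vehicle triggers two flags; an ordinary one triggers none.
assert flags("ABC123", 70, 55) == ["speeding", "stolen"]
assert flags("DEF456", 40, 55) == []
```

In practice the "rules" would be learned models rather than hand-written conditions, but the output is the same: a short list of reasons a human officer should take a closer look.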
Roadblocks to implementation of AI/ML in law enforcement include cultural inertia, poor data literacy among workers, insufficient budget, and a lack of common technical standards. The lack of consistent communication among law enforcement agencies, including challenges in sharing data, is an impediment, as well.
The good news is that law enforcement has a lot of data to share. Challenges to collaboration stem from the autonomy of independent law enforcement agencies. Every state, county, city, town, and parish has its own databases and technology stack.
Connecting data is hard, but technology exists to make sharing easier. More challenging, perhaps, is aligning cultures, policies, and governance around data sharing. Unless agencies collect similar data under compatible schemas, their data sets can't be brought together easily.
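The schema problem is concrete: two agencies can describe the same incident in formats that cannot be joined until each is mapped into a shared shape. A minimal sketch, with field names, date formats, and the offense-code table all assumed for illustration:

```python
from datetime import datetime

# Hypothetical records of the same incident from two agencies whose
# schemas differ in field names, date formats, and offense encoding.
agency_a = {"incident_dt": "2023-04-15T14:49:00", "loc": "Boylston St", "offense_code": "EXP"}
agency_b = {"occurred": "04/15/2023 14:49", "address": "Boylston St", "offense": "explosion"}

CODE_MAP = {"EXP": "explosion"}  # assumed code table for Agency A

def normalize_a(rec):
    """Map Agency A's schema into the shared schema."""
    return {
        "timestamp": datetime.fromisoformat(rec["incident_dt"]),
        "location": rec["loc"],
        "offense": CODE_MAP[rec["offense_code"]],
    }

def normalize_b(rec):
    """Map Agency B's schema into the shared schema."""
    return {
        "timestamp": datetime.strptime(rec["occurred"], "%m/%d/%Y %H:%M"),
        "location": rec["address"],
        "offense": rec["offense"],
    }

merged = [normalize_a(agency_a), normalize_b(agency_b)]
# Once normalized, the two records are directly comparable.
assert merged[0] == merged[1]
```

Writing the `normalize_*` mappings is the easy part; agreeing on the shared schema – and on the policies governing who may query it – is where the real work lies.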
Inadequate data sharing has consequences. Like the Boston Marathon bombers, the assailant in last year’s mass shooting in Highland Park, outside Chicago, had come to the attention of law enforcement before the attack, which killed seven people. Local law enforcement officers had removed weapons from the shooter’s house before the July 2022 shooting, yet state police later approved a license required to buy a firearm in Illinois. The shooter had also posted online photos of himself with a gun inside a school.
An AI-enabled social media monitoring program could have aggregated and analyzed relevant data, potentially triggering greater scrutiny by police.
Last fall, a man of color was driving a Jeep outside Atlanta when a police officer stopped and arrested him, telling him he was wanted on charges of using stolen credit cards outside New Orleans. The man had never been to Louisiana, but he resembled a suspect whose image had been caught on surveillance video. A facial recognition algorithm had made a false match. He spent a week in jail before being released.
The episode illustrates AI/ML’s potential for misuse – and the necessity of taking every precaution when adopting these tools. Facial recognition based on grainy surveillance video can be an effective means of compiling a pool of suspects, but it shouldn’t be used to make a positive identification. Two-factor identification is a best practice, according to experts.
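The two-factor principle can be stated in a few lines of logic: a facial-recognition hit is only a lead, and it becomes actionable only with independent corroboration. A hedged sketch, in which the score scale, threshold, and evidence labels are all illustrative assumptions rather than any real system's API:

```python
# Sketch of the "two-factor" rule: a face match alone never justifies an
# arrest. The threshold and evidence names below are invented assumptions.

MATCH_THRESHOLD = 0.90  # assumed minimum similarity to even count as a lead

def is_actionable(face_score: float, corroborations: list[str]) -> bool:
    """A lead becomes actionable only with at least one independent factor,
    e.g. a fingerprint, a license-plate hit, or a verified witness account."""
    if face_score < MATCH_THRESHOLD:
        return False  # not even a lead
    return len(corroborations) >= 1

# A high face score alone should not trigger an arrest:
assert not is_actionable(0.97, [])
# The same score plus independent evidence can:
assert is_actionable(0.97, ["license_plate_match"])
```

Had a gate like this been in place in the Atlanta case, the grainy-video match would have remained what it was – a lead to investigate, not an identification.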
Similarly, law enforcement must be vigilant to ensure that AI/ML tools don’t inadvertently perpetuate cultural biases that could be harmful to groups or individuals. AI/ML is only as good as the data it learns from. Police records, for example, can reflect biases that lead to higher rates of arrest in certain geographical areas or among particular groups of people. Feeding that information into an AI/ML tool could keep the bias alive.
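One practical safeguard is to audit historical data for skew before it is used for training. A minimal sketch on synthetic records (the areas and arrest outcomes are invented): compare arrest rates across areas and treat a large gap as a prompt to investigate the data's provenance, not as a fact about the areas themselves.

```python
from collections import Counter

# Synthetic records for illustration only: each row is one police contact,
# with the area it occurred in and whether it led to an arrest.
records = [
    {"area": "A", "arrest": True},  {"area": "A", "arrest": True},
    {"area": "A", "arrest": False}, {"area": "B", "arrest": True},
    {"area": "B", "arrest": False}, {"area": "B", "arrest": False},
]

def arrest_rate_by_area(recs):
    """Per-area arrest rate: a basic pre-training audit of the data."""
    totals, arrests = Counter(), Counter()
    for r in recs:
        totals[r["area"]] += 1
        arrests[r["area"]] += r["arrest"]  # True counts as 1
    return {area: arrests[area] / totals[area] for area in totals}

rates = arrest_rate_by_area(records)
# Area A's rate is double Area B's – a flag to examine how the data was
# collected before any model learns from it.
assert rates == {"A": 2 / 3, "B": 1 / 3}
```

Real bias audits are far more involved – they weigh base rates, enforcement intensity, and reporting practices – but even this crude check makes the point that skew must be found before training, because a model will faithfully reproduce whatever its data contains.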
AI/ML has tremendous potential to help law enforcement agencies fulfill their missions. Adopting those tools will be a multi-stage process. Getting the full benefit will take years. What’s important is to get started. Failure to adopt, manage, and mature AI/ML will put public safety and national security at risk.
To begin, look for low-risk opportunities to use AI/ML and get immediate results. Use tools to more efficiently handle large volumes of mundane tasks that are prone to error when done by humans. Use the technology to look for patterns that would elude human workers.
Finally, raise your organization’s data literacy. In the near future, proficiency with data and AI/ML technology will be as commonplace as spreadsheets and presentation decks are today.
Winston Chang, Snowflake; Steve Ly, ServiceNow; and Noel Hara, NTT Data Services, contributed to this article.