
PERSPECTIVE: Exploring Potential, Navigating Challenges for AI and Homeland Security

Artificial intelligence (AI) is an emerging and disruptive technology that is radically changing how data is used and exploited across a range of societal activities, including many aspects of homeland security – from intelligence support to investigations and enforcement operations to emergency response for disasters, epidemics and pandemics, and other emergencies.

Revolutionary Potential and Challenges

AI is revolutionizing business processes in all sectors of the economy. In short, AI involves software agents and algorithms that automate computational processes traditionally performed by humans. This form of AI is referred to as “narrow AI”: computer or intelligent systems that have been taught, or have learned through machine learning, to perform specific tasks. These tasks involve recognizing patterns – in images, in social network structures, or in online terrorist activity, as in a UK Home Office project to detect ISIS propaganda. This can clearly be beneficial in detecting potential threats or aiding the diagnosis of disease. It is much different from Artificial General Intelligence (AGI), in which machines actually develop adaptable, “human-like” intellect.
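To make the idea of “narrow AI” concrete, the minimal Python sketch below (using the scikit-learn library, with entirely hypothetical training texts and labels) shows how a supervised classifier can be taught to flag messages that resemble known propaganda. It illustrates the general technique only, not any specific Home Office system.

    # Minimal sketch of a "narrow AI" task: a supervised text classifier that
    # learns to flag messages resembling known propaganda. The training texts
    # and labels are hypothetical placeholders; a real system would rely on
    # vetted, lawfully obtained data and human review of every flag.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = ["join the cause and travel now",    # hypothetical examples
                   "community picnic this saturday",
                   "recruitment video for new fighters",
                   "school fundraiser bake sale"]
    train_labels = [1, 0, 1, 0]                        # 1 = flag for analyst review

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(train_texts, train_labels)

    # The model only scores similarity to patterns it has already seen;
    # an analyst, not the algorithm, makes the final determination.
    print(model.predict_proba(["new recruitment post"])[0][1])

The classifier does nothing beyond the narrow task it was trained for, which is precisely what distinguishes narrow AI from AGI.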

This “robotic” intelligence is, for now, still in the realm of science fiction. That said, narrow AI has the potential to revolutionize all facets of homeland security operations. Yet the consequences of this technological advance – and the underlying power of algorithms – are still largely unknown. Exploring these consequences and potentials is essential, because the ramifications entail profound legal, ethical, policy, and doctrinal issues.

Algorithms, Machine Learning and Neural Networks

AI consists of a complementary range of applications. These are made possible through “machine learning,” in which large amounts of data allow a computer to “see” many variations and patterns, enabling its software to carry out specific tasks such as understanding speech, interpreting photos, or detecting threat communications. “Neural networks” are interconnected algorithms that feed and share data among each other, using models to recognize patterns and provide “expert” advice or automate tasks. Programs built on algorithms – heuristic guides to understanding a situation or a pattern in data – are used to enable machine learning.
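As an illustration of the layered, interconnected structure described above, the short sketch below builds a toy feed-forward neural network in Python with NumPy; the weights, input features, and “threat likelihood” interpretation are assumptions made only for the example.

    # Minimal sketch of a feed-forward neural network: layers of weighted
    # connections transform an input (e.g., a few sensor readings) into a
    # score between 0 and 1. The weights are random here; "learning" would
    # adjust them from labeled examples.
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input layer -> hidden layer
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden layer -> output score

    def forward(x):
        hidden = np.maximum(0, x @ W1 + b1)              # ReLU activation
        return 1 / (1 + np.exp(-(hidden @ W2 + b2)))     # sigmoid squashes to 0..1

    sensor_reading = np.array([0.2, 0.9, 0.1, 0.4])      # hypothetical features
    print(forward(sensor_reading))                       # illustrative "threat" score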

Algorithms are initially developed and tested using the heuristic input of human experts (supervised learning) and by employing game theory, but as the AI “learns” the domain it creates its own framework for analysis and continued machine learning (unsupervised and reinforcement learning). Generative Adversarial Networks (GANs) are one machine learning approach (algorithmic model) in which competing neural networks attempt to solve problems and develop novel responses to the problems they are used to address.
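The sketch below shows the GAN idea in miniature using the PyTorch library: a generator and a discriminator are trained against each other on a toy one-dimensional dataset. The data, network sizes, and training settings are illustrative assumptions, not a fielded model.

    # Minimal sketch of a Generative Adversarial Network (GAN): a generator
    # learns to produce samples resembling "real" data while a discriminator
    # learns to tell real from generated. The data here is a toy 1-D Gaussian;
    # real applications might use images, text, or network traffic.
    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
    opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(1000):
        real = torch.randn(32, 1) * 0.5 + 2.0         # toy "real" distribution
        fake = G(torch.randn(32, 8))                  # generated samples

        # Train the discriminator: label real samples 1, generated samples 0
        d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
                 loss_fn(D(fake.detach()), torch.zeros(32, 1))
        opt_D.zero_grad(); d_loss.backward(); opt_D.step()

        # Train the generator: try to make the discriminator score fakes as real
        g_loss = loss_fn(D(fake), torch.ones(32, 1))
        opt_G.zero_grad(); g_loss.backward(); opt_G.step()

    # Generated samples should drift toward the toy distribution centered near 2.0
    print(G(torch.randn(5, 8)).detach().squeeze())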

AI Potential in Homeland Security

AI has many potential applications for homeland security. Software-based AI tools can help process, distribute, and analyze data from a broad range of emergency response and communications systems. These could provide alerts and warnings, perform user-defined tasks, and exploit data from a range of sensors (e.g., CCTV; chemical, biological, and radiological detectors; gunshot detectors; and sound and human speech – natural language). Monitoring computer networks to detect cyberthreats and achieve information surety is a key cybersecurity component of homeland security. A range of both structured and unstructured data sources and applications (apps), connected through application programming interfaces (APIs), can extend the sensing capability to edge devices (smartphones, tablets, and connected sensors arrayed in the IoT – Internet of Things).
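As a simple illustration of fusing such feeds into alerts, the sketch below applies per-sensor thresholds to hypothetical readings; the sensor names, scores, and thresholds are assumptions standing in for the real detector feeds and APIs a fielded system would use.

    # Minimal sketch of fusing heterogeneous sensor feeds into alerts.
    # A fielded system would ingest live feeds (CCTV analytics, CBRN
    # detectors, gunshot sensors, network monitors) through their own APIs
    # and route alerts to a dispatch or fusion-center console.
    from dataclasses import dataclass

    @dataclass
    class Reading:
        sensor: str      # e.g., "radiological", "gunshot", "network_ids"
        location: str
        score: float     # normalized 0..1 confidence from the sensor's own model

    THRESHOLDS = {"radiological": 0.7, "gunshot": 0.8, "network_ids": 0.9}

    def alerts(readings):
        """Yield messages for readings that exceed their sensor-specific threshold."""
        for r in readings:
            if r.score >= THRESHOLDS.get(r.sensor, 1.0):
                yield f"ALERT [{r.sensor}] at {r.location}: score={r.score:.2f}"

    feed = [Reading("gunshot", "Transit Hub 3", 0.91),       # hypothetical readings
            Reading("radiological", "Port Gate B", 0.35)]
    for a in alerts(feed):
        print(a)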

These apps could support all homeland security operations. Applications include terrorism early warning and investigative support for intelligence analysts; cyberattack recognition for information surety; pattern detection for epidemics; aiding the diagnosis of disease (in Europe, AI-aided dispatch is assessing heart attack potential during ambulance response); consequence assessment (such as projecting the potential fate and transport of CBRN agents, tipping points for epidemics, and fire weather and burn potential for wildland fires); and disaster response. In addition, these apps can be used for mission and incident action planning (including development of alternative courses of action) and deployment planning – including deployment of police enforcement and high-visibility preventive patrols in transit, airport, aviation, and maritime (port) settings.
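As a worked example of one item above, epidemic “tipping points,” the sketch below uses the classic SIR model, in which an outbreak can take off when the basic reproduction number R0 (the transmission rate divided by the recovery rate) exceeds 1. The parameter values are illustrative only.

    # Minimal sketch of an epidemic "tipping point" calculation with the
    # classic SIR model: an outbreak grows while the effective reproduction
    # number (R0 times the susceptible fraction) exceeds 1.
    def sir_step(s, i, r, beta, gamma, dt=1.0):
        """Advance susceptible/infected/recovered fractions by one time step."""
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        return s - new_infections, i + new_infections - new_recoveries, r + new_recoveries

    beta, gamma = 0.3, 0.1            # hypothetical transmission and recovery rates
    print("R0 =", beta / gamma)       # R0 > 1 implies the outbreak can take off

    s, i, r = 0.99, 0.01, 0.0         # initial population fractions
    for day in range(120):
        s, i, r = sir_step(s, i, r, beta, gamma)
    print(f"infected fraction after 120 days: {i:.3f}")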

AI also has the potential to enable the integration of human decision support with robotic and automated platforms such as sensors and drones (unmanned aerial systems and vessels). In the future these interfaces are likely to include unmanned sentry and weapons systems. There is significant debate within the military about the use of AI and emerging lethal autonomous weapons systems (LAWS) and about humanitarian responsibilities under the laws of armed conflict/International Humanitarian Law (IHL). These human rights controversies can be expected to carry over into counterdrug, countergang, counterterrorism, and border security operations.

Privacy and civil liberties concerns over the exploitation of identity data, including digital and DNA signatures (i.e., Identity Intelligence – I2), are also evident and must be addressed. Eliminating biases, which can compromise the accuracy and predictive potential of AI while raising privacy and civil liberties concerns (especially Fourth Amendment considerations), and ensuring transparency are essential to successful AI application. False positives, such as those encountered during AI-driven facial recognition exploitation of CCTV in the UK, constitute another controversial possibility that must be addressed, likely through continued refinement of heuristics and emphasis on human oversight of interpretation.
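The short sketch below illustrates the threshold trade-off behind such false positives and the role of human review; the candidate scores and the match threshold are hypothetical.

    # Minimal sketch of why match thresholds matter for face-recognition
    # false positives. Raising the threshold cuts false positives at the
    # cost of missing true matches; every automated "match" still goes to
    # a human analyst for interpretation.
    candidate_scores = {"candidate_A": 0.62, "candidate_B": 0.91, "candidate_C": 0.58}
    MATCH_THRESHOLD = 0.85    # hypothetical operating point

    for person, score in candidate_scores.items():
        if score >= MATCH_THRESHOLD:
            print(f"{person}: possible match (score {score}) -> refer to analyst")
        else:
            print(f"{person}: below threshold, no automated action")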

AI could also be exploited by a range of adversaries and threat vectors, including hostile states, violent/armed non-state actors, transnational organized crime groups, and hybrid actors. These threats include data manipulation, fraud, and deception, and they demand the development of AI surety and “counter-AI” operations.

Avoiding the Hype: Standards and Training

While AI has many potential benefits, few operational managers in the homeland security and public safety disciplines have experience in exploiting and leveraging its potential. Essentially, the rapid pace of AI development demands new skills and approaches for homeland security practitioners. AI is an emerging and evolving capability, and many will seek to jump on the trend before understanding its complexities and potential biases. Analysts and decision-makers who will use AI derivatives for operational decisions such as targeting or resource deployment need to know the limitations of AI software, its underlying concepts, and its ethical and legal dimensions. While baseline standards for developing AI are emerging – such as the IEEE Standards on Artificial Intelligence and Automated Systems (P7000 series) – few applied standards exist, and homeland security practitioners should become engaged in the standards development process to ensure operational relevance and the ethical and legal deployment and use of AI systems.

Doctrine and training in AI integration for agency executives, commanders, managers, supervisors, and operators must also be developed. Optimally, this standards requirement and doctrine development will be coordinated by the Department of Homeland Security, incorporating input from response disciplines through professional organizations such as the InterAgency Board, the International Association of Law Enforcement Analysts (IALEA), the International Association of Chiefs of Police (IACP), and the International Association of Fire Chiefs (IAFC). Civil-military interoperability is also essential, and international interoperability could be ensured by coordinating with Interpol. Use of AI tools for mission planning and decision-making must be integrated into wargaming and homeland security response exercises at the tactical, operational, and strategic levels for all emergency response and intelligence disciplines.

Conclusion

The rapid emergence of AI has left significant gaps in the legal, ethical, and operational frameworks needed to make these tools effective and positive instruments for protecting society. Harnessing the positive potential of these tools while ensuring accountability and countering the threats posed by criminal and other malicious actors exploiting them will require a concerted effort among all sectors of the homeland security community.


The views expressed here are the writer’s and are not necessarily endorsed by Homeland Security Today, which welcomes a broad range of viewpoints in support of securing our homeland. To submit a piece for consideration, email [email protected]. Our editorial guidelines can be found here.

Dr. John P. Sullivan
Dr. John P. Sullivan was a career police officer, now retired. Throughout his career he has specialized in emergency operations, terrorism, and intelligence. He is an Instructor in the Safe Communities Institute (SCI) at the University of Southern California, Senior El Centro Fellow at Small Wars Journal, and Contributing Editor at Homeland Security Today. He served as a lieutenant with the Los Angeles Sheriff’s Department, where he served as a watch commander, operations lieutenant, headquarters operations lieutenant, service area lieutenant, tactical planning lieutenant, and in command and staff roles for several major national special security events and disasters. Sullivan received a lifetime achievement award from the National Fusion Center Association in November 2018 for his contributions to the national network of intelligence fusion centers. He has a PhD from the Open University of Catalonia, an MA in urban affairs and policy analysis from the New School for Social Research, and a BA in Government from the College of William & Mary.
