
Understanding AI Risk: I Promise This Column Wasn’t Written by ChatGPT (Yet)


When talking about emerging technologies in a security context, it is useful to begin with a discussion of both opportunities and risks, in order to maintain a balanced risk-management approach to security. Society-shifting technologies almost always bring significant benefits: more efficient delivery of functionality, greater availability and access, and the ability to scale solutions. Emerging technologies also usually improve the way security operations themselves are delivered. Think about the major technological revolutions of the last 25 years – smartphones, social media, cloud computing, connected cities, autonomous vehicles and satellite-based services, just to name a few – and that rule holds. Each has brought significant societal benefits, including enabling enhanced security operations. But each has also come with a cost, not the least of which is creating novel risks for the homeland and the American people.

The same is undoubtedly true of Artificial Intelligence (AI), which hit the consumer mainstream with OpenAI’s ChatGPT earlier this year. ChatGPT is an example of generative AI, which produces new content based on its algorithms and training data.

The Department of Homeland Security’s 2020 AI Strategy (https://www.dhs.gov/sites/default/files/publications/dhs_ai_strategy.pdf) defines artificial intelligence as “automated, machine-based technologies with at least some capacity for self-governance that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” AI is often coupled with machine learning, a reminder that it is digital machines that make artificial intelligence scalable. While pinpointing the exact onset of artificial intelligence – and what it is and isn’t in a world of marketing around techno-wizardry – remains a subject of debate, suffice it to say that we are amidst, or on the cusp of, the AI revolution. As such, it is important to develop a frame for how to think about AI in terms of homeland security.

On the opportunity side, artificial intelligence has already begun to enhance security. AI-driven algorithms have improved decision-making around screening, enabled deeper transparency into supply chain risks, scaled front-line cyber defenses and enabled self-healing systems, which are part of resilience. While these capabilities are not yet deployed at scale, you can imagine a world where front-line security decision-making is dramatically improved by enhanced visibility and by risk-evaluation priorities set through continuous learning, while security interventions – particularly in cyberspace – are routinized and learned from. As the DHS Strategy states, AI will let DHS “more effectively or efficiently accomplish our mission to secure the homeland.” The same will be true for state and local governments and corporate security operations. (We have highlighted some of these use cases previously in Homeland Security Today.)

Enhanced mission accomplishment is not the end of the story of AI’s impact on homeland security, however. The opportunities associated with emerging technologies come with a flip side: emerging risks. So, how should homeland security professionals think about the risks associated with AI?

Homeland security risk management doctrine suggests that the early steps in risk management are to define the decision context, identify the risk, and then assess it. For now, I think security professionals are struggling with that risk management process for AI, partially because the decision context is unclear. There is an emerging policy imperative for governments to mitigate AI risk, but it has generally rested on the belief that this is still an emerging technology rather than a present reality in which steps need to be taken now to address future risks.

As for risk identification, I group AI risks into four areas: 1) algorithmic integrity; 2) risks to civil liberties and privacy; 3) adversaries’ AI-generated tactics, techniques, and procedures; and 4) the unknown.
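For technically minded readers, here is a minimal, purely illustrative sketch of how these four categories might seed a simple risk register. The category names come from this column; the scoring scale, example entries, and priority calculation are hypothetical placeholders, not drawn from any actual assessment.

# Purely illustrative: a toy risk register organized around the four
# AI risk categories named in this column. The scores and entries are
# hypothetical placeholders, not a real assessment.
from dataclasses import dataclass
from enum import Enum


class AIRiskCategory(Enum):
    ALGORITHMIC_INTEGRITY = "algorithmic integrity"
    CIVIL_LIBERTIES_PRIVACY = "civil liberties and privacy"
    ADVERSARIAL_TTPS = "AI-generated adversary tactics, techniques, and procedures"
    UNKNOWN = "the unknown"


@dataclass
class RiskEntry:
    category: AIRiskCategory
    description: str
    likelihood: int  # 1 (low) to 5 (high) -- hypothetical qualitative scale
    impact: int      # 1 (low) to 5 (high)

    @property
    def priority(self) -> int:
        # Simple likelihood-times-impact screening score, for illustration only
        return self.likelihood * self.impact


register = [
    RiskEntry(AIRiskCategory.ALGORITHMIC_INTEGRITY,
              "Integrity attack on a screening model's training pipeline", 2, 4),
    RiskEntry(AIRiskCategory.ADVERSARIAL_TTPS,
              "AI-assisted malware that dynamically probes for vulnerabilities", 3, 4),
]

for entry in sorted(register, key=lambda e: e.priority, reverse=True):
    print(f"{entry.priority:>2}  {entry.category.value}: {entry.description}")

Even a toy structure like this makes plain how much of the hard work – credible likelihood and impact estimates – is still missing, a gap discussed below.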

The first, algorithmic integrity, relates to the underlying instructions that tell the AI how to operate. It is essentially the framework – and, at times, the guardrails – around continuous learning, and it is the area most frequently cited as needing transparency and oversight. Algorithms, however, can be targets of actors who want to undermine AI functionality or sow distrust and chaos. Thus, protecting those algorithms from integrity attacks is key to securing AI systems.

The second risk, related to the first, is that the way AI operates – in terms of risk prioritization, for example – could undermine core values. There is a risk that AI carries an underlying bias or reaches too far into information that is intended to be private. It may not be immediately obvious when this is occurring, and AI therefore risks enabling technology to impinge on important freedoms.

Another risk category is the traditional one for new technologies: adversaries using innovation for malevolent purposes. Whether it is AI-designed malware or offensive-oriented AI built to innovate attacker frameworks and dynamically identify vulnerabilities, a quicker-learning adversarial system can destabilize standing security practices and more rapidly find the gaps in them.

The final risk is the unknown – the elements of AI most associated with science fiction: the AI taking on a “life of its own.” In general, researchers believe this risk is low for the generative AI in use today, as opposed to sentient AI – AI that possesses consciousness, or the ability to perceive and understand the world. While no sentient AI currently exists, some have expressed concerns about the potential existential risks of developing such advanced systems. The fear is that a sentient AI could become too intelligent and autonomous, leading to unintended consequences and the potential for the AI to act against human interests.

While those are identified risks, I am not aware of any credible risk assessment that quantifies or qualifies AI risk, so data-driven prioritization of homeland security risks is a long way away.

And ChatGPT itself agrees that AI risks are not well understood. I asked ChatGPT+ the question, “Do you think AI risks are well understood?” Its response:

“Based on the current research and discussions within the AI community, it appears that the risks associated with AI are not yet fully understood. While there is a growing awareness of the potential risks, there is still much work to be done to fully understand the complex nature of these risks and the best ways to mitigate them.”

Now, ChatGPT does not have the benefit of sitting in rooms having these discussions with risk-management professionals, but, having done some of that, I don’t have any reason to disagree with its conclusion. When risks are not well-understood, of course, risk mitigation is difficult.

The policy instruments put in place thus far generally call for more oversight and transparency of algorithms, greater efforts to understand technical capabilities, and continuous evaluation of functionality. These are reasonable starting points, but it will be important to evaluate how successful they are and to look for additional strategies. To support this, I would like to see foundational research on what metrics an AI risk assessment should use and what the strategic outcomes, in terms of risk mitigation, should be for homeland security priorities.

There is always a sense that emerging risks are too complicated to understand and that there are too many unknowns. Narrowing those unknowns and creating a framework for risk management is always a good start.

(* – Although the title is accurate, I do need to thank ChatGPT for serving as a research partner and providing active edits to strengthen the piece.)

Bob Kolasky
Bob Kolasky is the Senior Vice President for Critical Infrastructure at Exiger, LLC, a global leader in AI-powered supply chain and third-party risk management solutions. Previously, Mr. Kolasky led the Cybersecurity and Infrastructure Security Agency’s (CISA) National Risk Management Center. In that role, he oversaw the Center’s efforts to facilitate a strategic, cross-sector risk management approach to cyber and physical threats to critical infrastructure. As head of the National Risk Management Center, Mr. Kolasky was responsible for developing integrated analytic capability to analyze risk to critical infrastructure and for working across the national community to reduce risk. As part of that, he co-chaired the Information and Communications Technology Supply Chain Risk Management Task Force and led CISA’s efforts to support development of a secure 5G network. He also served on the Executive Committee for the Election Infrastructure Government Coordinating Council. Earlier, Mr. Kolasky served as the Deputy Assistant Secretary and Acting Assistant Secretary for Infrastructure Protection (IP), where he led the coordinated national effort to partner with industry to reduce the risk posed by acts of terrorism and other cyber or physical threats to the nation’s critical infrastructure, including election infrastructure. Mr. Kolasky has served in a number of other senior leadership roles for DHS, including acting Deputy Under Secretary for NPPD before it became CISA and Director of the DHS Cyber-Physical Critical Infrastructure Integrated Task Force to implement Presidential Policy Directive 21 on Critical Infrastructure Security and Resilience, as well as Executive Order 13636 on Critical Infrastructure Cybersecurity.
