
PERSPECTIVE: 6 Considerations to Prepare for Artificial Intelligence Surprises

Not too long ago, executives, business strategists, commanders and policy makers alike were reading up on all things cyber. They asked questions like: What does cybersecurity mean for my organization? Should cybersecurity be a part of my strategy? How does the cyber element enable us to reach our goals and achieve our mission? Policy makers had questions of their own: What parts of cyber need regulation? How can cyber policy empower businesses? Should there be policy regulating cyber warfare? While many cyber questions remain unresolved and many answers remain contested, thinking about cybersecurity has advanced significantly.

Today, executives, business strategists, commanders and policy makers are reading up on Artificial Intelligence (AI). The wave of questions is similar to the cyber questions of a few years ago. One question that tends to be an afterthought, however, is due diligence and quality control for the new technology. This issue should be at the forefront of the discussion of algorithmic design: due diligence on AI should be part of both the technological rollout and the oversight of its use.

This article aims to broaden readers' AI mindset by outlining six artificial intelligence due diligence and quality control items that those in leadership should be aware of as they plan to incorporate AI into their organization and its process supply chain.

1. Overconfidence
2. More Data ≠ Solution
3. Adversarial AI
4. Hollowing Out of Decision-Making
5. Algorithmic Regimes
6. Unknown Unknowns

1. Overconfidence

In data we trust.

In his book “Future Crimes,” about the state of cybersecurity and crime, former FBI Special Agent Marc Goodman includes a chapter called “In Screen We Trust.” In it he describes our blind trust in everything we see on screens and warns that cyber attacks can also come in the form of data manipulation, which means that what appears on the screen may not be true.

With artificial intelligence becoming our second set of eyes, ears and cognitive processors, there is a risk that the next blind trust will be in data. In a New York Times-hosted chat with Thomas Friedman on the future of humanity, Yuval Noah Harari explained that “we have created such a complicated world that we’re no longer able to make sense of what is happening.” Harari was referring to the many devices coming online, the way our lives now play out across platforms and the technological advances relating to our genome. There is a tremendous amount of information to process in the 21st century, and human brain capacity is not up to the task of understanding and processing it all. Artificial intelligence has a big role to play in helping us perceive the world around us and in serving as a decision-making infrastructure, but it is important to manage our expectations of, and confidence in, the technology.

In November, three airports in Hungary, Latvia and Greece became the testing grounds for an AI-powered lie detector developed by iBorderCtrl, a $5 million project. The technology is a virtual border-guard avatar that assesses travelers' faces and determines whether they are lying in response to standard border control questions. Should the avatar's algorithm determine that a passenger is lying, it changes its tone and becomes more skeptical in its questioning. The scientists who created the system expect to achieve an 85 percent success rate, although it has only been tested on 32 people. This raises many questions and concerns: How can such a small test sample account for different cultural expressions, mannerisms and races? Who were these 32 people? How was the success rate determined? Will the system discriminate against certain populations? Have any measures been taken to avoid this?

Confidence in algorithmic accuracy needs to be tempered with an understanding that the designers of AI are humans and humans inherently have human bias. This bias can be derived from their cultural upbringing, personal experiences, groupthink and many other factors. Mathematician and author of Weapons of Math Destruction Dr. Cathy O’Neil cautioned, “Algorithms are being presented and marketed as an objective fact. A much more accurate description of an algorithm is that it is an opinion embedded in math.”

As leaders look to incorporate and rely on AI, they should ask their engineers and third-party AI vendors to provide an expected range of accuracy, supplemented with an explanation of what that range means for the organizational bottom line. Leaders can then determine how much risk they are willing to absorb in exchange for the advantages the AI will bring.
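To make that “expected range of accuracy” concrete, consider the iBorderCtrl figures above: roughly 85 percent accuracy measured on only 32 people. The following is a minimal sketch (the count of 27 correct out of 32 is assumed for illustration, since only the rounded percentage is reported) of how a standard binomial confidence interval turns a single accuracy number into an honest range:

```python
# Minimal sketch: a Wilson score interval for an accuracy figure
# measured on a very small test sample (assumed: 27 correct out of 32).
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (default z gives ~95%)."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

low, high = wilson_interval(successes=27, n=32)   # 27/32 is roughly 84 percent
print(f"95% interval: {low:.0%} to {high:.0%}")   # roughly 68% to 93%
```

An accuracy figure quoted without such a range, especially one derived from a 32-person test, tells a leader very little about the risk actually being absorbed.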

2. More Data ≠ Solution

Just give it more data…

There seems to be a general enthusiasm about how AI and Machine Learning (ML) can manage on their own, and that all they need is data, lots of data. For the most part this is reasonable: the more information that can be processed, the better the understanding of a situation and the better the insights a human or an algorithm can gain. A tremendous amount of knowledge will be unlocked in medicine as algorithms churn through massive amounts of genomic data, X-rays, test results and more. However, there are circumstances where “more data” is not the solution, and not the answer to biased AI.

This is particularly an issue in areas where the data collected is already biased because of existing and pervasive social discrimination. One such example is data relating to gender and salaries, career progression and resumes. Due to social inequalities, women have struggled to reach leadership positions, and there is a notable lack of female representation in STEM fields. With regard to salary, it is common for women to earn less than men for the same job and the same experience; according to the U.S. Census Bureau, women earn 80.5 cents for every dollar a man makes. No amount of data can correct this bias. It is important to recognize when this is the case and when measures need to be taken to algorithmically correct for inherently biased data.
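As a minimal sketch of this point (the salary numbers below are invented for illustration; only the 80.5-cent ratio comes from the figure cited above), consider how an estimate built from biased observations behaves as the sample grows:

```python
# Minimal sketch: when the data-generating process itself is biased, a larger
# sample only converges more tightly to the biased value.
import numpy as np

rng = np.random.default_rng(42)
fair_salary = 100_000        # hypothetical unbiased "ground truth"
bias_factor = 0.805          # observed data reflect the cited pay gap

for n in (100, 10_000, 1_000_000):
    observed = rng.normal(loc=fair_salary * bias_factor, scale=5_000, size=n)
    print(f"n = {n:>9,}   estimated salary = {observed.mean():,.0f}")

# The estimate settles near 80,500 no matter how large n grows: more data
# narrows the error bars around the biased value, it does not recover 100,000.
```

The error bars shrink, but around the wrong number; only a deliberate correction, not more collection, addresses the underlying bias.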

Amazon faced this problem with an AI recruitment tool it was using to filter through resumes and select top candidates, because the tool's selections discriminated against women. Most of the resumes on which the AI was trained came from men, so “Amazon’s system taught itself that male candidates were preferable.” While Amazon did edit the algorithm to be gender-neutral, it could not guarantee that the system wouldn’t devise another discriminatory way to filter candidates, and in the end the company stopped using it. While Amazon did not elaborate on the details, the case gives developers and users of artificial intelligence plenty of food for thought.

It is important to consider that there are occasions where more data does not correct for social bias, and that additional effort and testing will be needed to reduce it.

3. Adversarial AI

AI can have blind spots…

Scientists and engineers work endlessly to make sure that their algorithms function as intended and as expected, and they attempt to control for potential errors that may occur. For example, those who work with facial recognition try to make sure that their algorithms still work when a person puts on glasses, grows a beard or wears a hat. These scenarios are common, and while they may reduce the accuracy of the algorithm, they are elements that are incorporated into the design.

However, AI designers also have to consider malicious actions that will be taken against their algorithms. This falls into the category of Adversarial AI, which sits at the intersection of machine learning and computer security. To mitigate the risk, AI designers test their systems with adversarial examples to see how they respond; the results allow them to calibrate the AI against such manipulation.

Unfortunately, scientists and engineers cannot anticipate every possible attack on their algorithm, or every unintended way in which it will be used.

Adversarial examples are a practical concern in AI development, and there have already been several highly cited cases, such as the 3D-printed turtle that fooled a neural network into classifying it as a rifle, and research examining how AI responds to graffiti on road signage. The result of the latter was discouraging: the system was unable to recognize a stop sign when the words ‘love’ and ‘hate’ were written on its face.
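For readers who want to see the mechanism behind such attacks, the following is a minimal, hand-built sketch, not drawn from any of the cases above, of the gradient-sign style of perturbation commonly used in adversarial-example research, applied to a toy logistic-regression classifier:

```python
# Minimal, illustrative sketch of an adversarial example against a toy model:
# a fast-gradient-sign-style perturbation on a hand-built logistic regression.
import numpy as np

# Toy "trained" model: 200 small weights standing in for a learned classifier.
w = 0.1 * np.where(np.arange(200) % 2 == 0, 1.0, -1.0)
b = 0.0

def score(x):
    """Probability the model assigns to the positive class."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A legitimate input: only 10 features carry signal, giving a confident score.
x = np.zeros(200)
x[:10] = 2.0 * np.sign(w[:10])
print("clean input score:    ", round(score(x), 3))      # ~0.88, classified positive

# Gradient-sign step: nudge EVERY feature slightly against the gradient.
p = score(x)
grad = p * (1 - p) * w               # d(score)/dx for a logistic model
epsilon = 0.2                        # small per-feature budget (assumed)
x_adv = x - epsilon * np.sign(grad)
print("perturbed input score:", round(score(x_adv), 3))  # ~0.12, now classified negative

# Each feature moved by only 0.2, yet the 200 tiny nudges add up and the
# prediction flips. Real attacks on image classifiers exploit the same effect
# with changes too small for a human to notice.
```

The toy numbers are contrived, but the underlying effect, many individually negligible nudges aligned against the model's gradient, is the same class of manipulation that the turtle and stop-sign examples exploit at much higher dimension.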

Leaders need to recognize that artificial intelligence is not infallible; it is susceptible to malicious attacks that play on its algorithmic perception of the world and its trained response mechanisms.

Just as cyber insurance came about due to the rise in cyber attacks, leaders will find themselves in a similar situation with regard to adversarial attacks, which can be seen as the next type of zero-day attack.

4. Hollowing Out of Decision-Making

Let’s define human agency before it is lost…

As artificial intelligence becomes more accurate and more woven into operational processes, it will increasingly make autonomous decisions that humans feel comfortable delegating to it, and in other cases it will assist humans in making decisions through assessments and predictive analysis. In this sense, artificial intelligence will become a “decision-making infrastructure.”

While there are many tasks that will be beneficial to delegate to AI, particularly when it has a high accuracy rate, leaders and users of AI across the organizational supply chain will need to thoughtfully assess if, when and where the use of AI may inadvertently create a hollowing out of decision-making.

Due diligence in this circumstance calls on organizational leaders and strategists to review their organization’s decision-making chain. They will have to define, justify and purposefully explain the reduction of human agency as decisions get delegated to algorithms. Ultimately, we all will have to get better at understanding when and how decision-making assistance will best support us, and when it will add an unacceptable layer of unexplainable outsourced decision-making. This requires internal reflection on the values of an organization and its mission.

However, in this circumstance it is not just the algorithmic design that will need to be scrutinized, but the full autonomous supply chain, including how data is autonomously collected; what is collected will matter as much as what is not collected as we relinquish human agency in certain decisions.

Such questions are being rigorously discussed at the United Nations Convention on Certain Conventional Weapons Group of Governmental Experts meetings on Lethal Autonomous Weapons Systems. Member nations have debated the degrees and types of human involvement throughout the life cycle of autonomous weapons; while discussions continue, this is a useful example of due diligence in assessing the hollowing out of decision-making due to AI.

5. Algorithmic Regimes

Multi-cultural algorithms exist and are among us.

On a daily basis we use artificial intelligence, whether to conduct a search online, book a flight, interact with our phone’s voice assistant, or scroll through our social media feeds. It is easy to forget that the algorithms powering these activities were all created by humans, and that those humans were born and raised in particular countries, environments and cultures that shaped their perspective on the world. Many assume that algorithms are “objective” when in reality they reflect their creators in ways we don’t know, understand or realize.

Sociologist Dr. Polina Aronson explores what she calls emotional regimes in relation to the algorithmic design of AI. In her essay “Can Emotion Regulating Tech Translate Across Cultures,” she describes an image that went viral on Russian social media: a screenshot of Siri and its Russian Yandex equivalent responding to the same prompt. The user wrote “I feel sad.” Siri responded “I wish I had arms so I could give you a hug,” while the Yandex assistant responded “No one promised you it was going to be easy.” It was an excellent illustration of how the culture of technologists is woven into the technology they design and of how they view and perceive the world. There is nothing wrong with either answer; they simply reflect two different emotional perspectives on how to respond to sadness.

Artificial emotional intelligence is being developed predominantly in a few pockets of the world, while the rest of the world consumes it. In these circumstances, due diligence means recognizing that AI designed for human-machine interaction, particularly when dealing with emotions, does not necessarily translate across cultures. As organizations consider who the market is for this type of emotional AI, they will have to be mindful of cultural differences.

6. Unknown Unknowns

The AI surprises to come will surprise us.

Developers of artificial intelligence work diligently to control for errors in the data, human bias and changes in the context in which the AI is used. Algorithms are researched and tested for accuracy and reliability. Despite all this, however, unknown unknowns will arise.

One example is the Microsoft chatbot Tay, which was designed to be a friendly “teenage girl.” Tay had a Twitter account and autonomously tweeted and interacted with others. As it interacted on Twitter and absorbed data from the tweets addressed to it, the chatbot became racist and a Hitler supporter. This all took place in less than 24 hours; Microsoft had to shut down Tay’s account only 16 hours after it was released.

Tay is a simple example of a well-intentioned algorithmic design gone wrong, and there will be other unknown-unknown AI surprises in the future. To best prepare, adverse AI surprises and algorithmic expectations should be addressed and managed through a continuous, open conversation between organizational leaders and the designers of AI.

The views expressed here are the writer’s and are not necessarily endorsed by Homeland Security Today, which welcomes a broad range of viewpoints in support of securing our homeland. To submit a piece for consideration, email [email protected]. Our editorial guidelines can be found here.

Lydia Kostopoulos
Dr. Lydia Kostopoulos’ (@LKCYBER) work lies at the intersection of people, strategy, technology, education, and national security. She addressed United Nations member states on the military effects panel at the Convention on Certain Conventional Weapons Group of Governmental Experts (GGE) meeting on Lethal Autonomous Weapons Systems (LAWS). Formerly the Director for Strategic Engagement at the National Defense University, a Principal Consultant for PA, and a higher education professor teaching national security at several universities, she has professional experience spanning three continents, several countries and multi-cultural environments. She speaks and writes on disruptive technology convergence, innovation, tech ethics, and national security. She lectures at the National Defense University and the Joint Special Operations University, is a member of the IEEE-USA AI Policy Committee, participates in NATO’s Science for Peace and Security Program, and during the Obama administration received the U.S. Presidential Volunteer Service Award for her pro bono work in cybersecurity. In efforts to raise awareness on AI and ethics she is working on a reflectional art series [#ArtAboutAI] and a game about emerging technology and ethics called Sapien2.0, which is expected to be out early 2019. http://www.lkcyber.com
