
NIST Proposes Approach for Reducing Risk of Bias in Artificial Intelligence

In an effort to counter the often pernicious effects of biases in artificial intelligence (AI), which can damage people’s lives and public trust in AI, the National Institute of Standards and Technology (NIST) is advancing an approach for identifying and managing these biases — and is requesting the public’s help in improving it.

NIST outlines the approach in A Proposal for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), a new publication that forms part of the agency’s broader effort to support the development of trustworthy and responsible AI. NIST is accepting comments on the document until Aug. 5, 2021, and the authors will use the public’s responses to help shape the agenda of several collaborative virtual events NIST will hold in the coming months. This series of events is intended to engage the stakeholder community and gather its feedback and recommendations for mitigating the risk of bias in AI.

“Managing the risk of bias in AI is a critical part of developing trustworthy AI systems, but the path to achieving this remains unclear,” said NIST’s Reva Schwartz, one of the report’s authors. “We want to engage the community in developing voluntary, consensus-based standards for managing AI bias and reducing the risk of harmful outcomes that it can cause.”

AI has become a transformative technology because it can often make sense of information more quickly and consistently than humans can. AI now plays a role in everything from disease diagnosis to the digital assistants on our smartphones. But as AI’s applications have grown, so has our realization that its results can be thrown off by biases in the data it is fed — data that captures the real world incompletely or inaccurately.

Moreover, some AI systems are built to model complex concepts, such as “criminality” or “employment suitability,” that cannot be directly measured or captured by data in the first place. These systems use other factors, such as area of residence or education level, as proxies for the concepts they attempt to model. The imprecise correlation of the proxy data with the original concept can contribute to harmful or discriminatory AI outcomes, such as wrongful arrests, or qualified applicants being erroneously rejected for jobs or loans.
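The proxy problem described above can be made concrete with a small simulation. The sketch below is purely hypothetical and is not taken from the NIST report: the groups, the proxy score, and every number in it are invented for illustration. It models two groups of applicants who are equally qualified on average, but whose proxy score (standing in for something like a neighborhood-based signal) is shifted down for one group for reasons unrelated to qualification. A decision rule that sees only the proxy then wrongly rejects qualified applicants from that group far more often.

```python
import random

random.seed(0)

def make_applicant(group):
    # True qualification is independent of group membership.
    qualified = random.random() < 0.5
    # Hypothetical proxy score: it tracks qualification only loosely...
    proxy = (0.6 if qualified else 0.4) + random.gauss(0, 0.15)
    # ...and is shifted down for group "B" for reasons unrelated
    # to qualification (the imprecise-correlation problem).
    if group == "B":
        proxy -= 0.2
    return qualified, proxy

def approve(proxy, threshold=0.5):
    # The decision rule uses only the proxy, never true qualification.
    return proxy >= threshold

def false_rejection_rate(group, n=10_000):
    # Fraction of *qualified* applicants the rule rejects.
    rejected = total = 0
    for _ in range(n):
        qualified, proxy = make_applicant(group)
        if qualified:
            total += 1
            if not approve(proxy):
                rejected += 1
    return rejected / total

rate_a = false_rejection_rate("A")
rate_b = false_rejection_rate("B")
print(f"Qualified applicants wrongly rejected: A={rate_a:.1%}, B={rate_b:.1%}")
```

Even in this toy setup, group B’s qualified applicants are rejected at a much higher rate than group A’s, despite the two groups being identical on the attribute the system claims to measure — the kind of harmful outcome the report attributes to imprecise proxies.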

The approach the authors propose for managing bias involves a conscientious effort to identify and manage bias at different points in an AI system’s lifecycle, from initial conception to design to release. The goal is to involve stakeholders from many groups both within and outside the technology sector, bringing in perspectives that traditionally have not been heard.

“We want to bring together the community of AI developers of course, but we also want to involve psychologists, sociologists, legal experts and people from marginalized communities,” said NIST’s Elham Tabassi, a member of the National AI Research Resource Task Force. “We would like perspective from people whom AI affects, both from those who create AI systems and also those who are not directly involved in its creation.”

The NIST authors’ preparatory research involved a literature survey that included peer-reviewed journals, books and popular news media, as well as industry reports and presentations. It revealed that bias can creep into AI systems at all stages of their development, often in ways that differ depending on the purpose of the AI and the social context in which people use it.

“An AI tool is often developed for one purpose, but then it gets used in other very different contexts,” Schwartz said. “Many AI applications also have been insufficiently tested, or not tested at all in the context for which they are intended. All these factors can allow bias to go undetected.”

Because the team members recognize that they do not have all the answers, Schwartz said that it was important to get public feedback — especially from people outside the developer community who do not ordinarily participate in technical discussions.

“We know that bias is prevalent throughout the AI lifecycle,” Schwartz said. “Not knowing where your model is biased, or presuming that there is no bias, would be dangerous. Determining methods for identifying and managing it is a vital next step.”

Comments on the proposed approach can be submitted by Aug. 5, 2021, by downloading and completing the template form and sending it to ai-bias@list.nist.gov. More information on the collaborative event series will be posted on this page.

Read more at NIST

Homeland Security Today (http://www.hstoday.us)
The Government Technology & Services Coalition's Homeland Security Today (HSToday) is the premier news and information resource for the homeland security community, dedicated to elevating the discussions and insights that can support a safe and secure nation. A non-profit magazine and media platform, HSToday provides readers with the whole story, placing facts and comments in context to inform debate and drive realistic solutions to some of the nation’s most vexing security challenges.
