NIST Introduces ARIA Program to Evaluate and Verify AI Capabilities and Impacts

The National Institute of Standards and Technology (NIST) has launched a groundbreaking new program designed to assess and verify the capabilities and impacts of artificial intelligence (AI). This initiative, known as Assessing Risks and Impacts of AI (ARIA), aims to provide organizations and individuals with a comprehensive understanding of AI technologies’ validity, reliability, safety, security, privacy, and fairness once deployed.

The ARIA program’s introduction follows several significant announcements by NIST, including the 180-day mark of the Executive Order on trustworthy AI and the strategic vision unveiled by the U.S. AI Safety Institute.

“To fully grasp the implications of AI on our society, we must evaluate its functionality in realistic scenarios — and that’s the core mission of this program,” stated U.S. Commerce Secretary Gina Raimondo. “Through ARIA and our broader efforts to support the Commerce Department’s role under President Biden’s Executive Order on AI, NIST and the U.S. AI Safety Institute are utilizing every resource to mitigate AI risks while maximizing its benefits.”

Laurie E. Locascio, Under Secretary of Commerce for Standards and Technology and NIST Director, emphasized the program’s practical applications. “The ARIA program is tailored to meet the evolving needs as AI technology becomes more widespread,” she said. “This initiative will bolster the U.S. AI Safety Institute, enhance NIST’s extensive collaboration with the research community, and establish dependable methods for testing and evaluating AI’s real-world functionality.”

ARIA builds upon the AI Risk Management Framework released by NIST in January 2023, operationalizing the framework’s recommended risk measurement techniques. The program aims to develop new methodologies and metrics to quantify how well AI systems maintain safe functionality within societal contexts.

“Assessing impacts involves more than evaluating a model in a controlled environment,” explained Reva Schwartz, lead of NIST’s ARIA program. “ARIA will evaluate AI systems in real-world contexts, considering interactions between humans and AI technologies in everyday settings. This approach provides a comprehensive view of the overall effects of these technologies.”

The outcomes of the ARIA program will support NIST’s broader initiatives, including those through the U.S. AI Safety Institute, to establish a foundation for safe, secure, and trustworthy AI systems. This program represents a significant step forward in understanding and managing the risks and benefits associated with AI.

Matt Seldon
Matt Seldon, BSc., is an Editorial Associate with HSToday. He has over 20 years of experience in writing, social media, and analytics. Matt has a degree in Computer Studies from the University of South Wales in the UK. His diverse work experience includes positions at the Department for Work and Pensions and a wide range of responsibilities at private-sector companies. Throughout his career he has written and edited blogs and online content for promotional and educational purposes, and has run social media campaigns on platforms including Google, Microsoft, Facebook, and LinkedIn. His educational campaigns have covered topics including public-sector charity volunteering and personal finance goals.
