
First National Artificial Intelligence Advisory Committee Report Lays Out ‘Logistical to Innovative’ Road Map for Governance, R&D

Recommendations include creating the role of a Chief Responsible AI Officer, an AI Research and Innovation Observatory potentially at the NSF, and a United States Digital Service Academy.

The National Artificial Intelligence Advisory Committee recommended in its year-one report to the president a full-court press to stay ahead of the risks and opportunities posed by rapidly evolving technologies including generative AI and encouraged stronger guidance and guardrails to govern the acquisition and use of AI.

The Commerce Department established the NAIAC in September 2021 in response to the National AI Initiative Act of 2020, with the goal of advising the president and other federal agencies on a range of issues related to AI, and with the National Institute of Standards and Technology (NIST) providing administrative support. The committee worked with the National AI Initiative Office (NAIIO) in the White House Office of Science and Technology Policy to recruit high-level members for NAIAC.

In April 2022, the appointment of 27 members from academia, industry, nonprofits and civil society was announced; that May, 26 members convened for the first NAIAC meeting. Last week, NIST announced a new Public Working Group on Generative AI to gather input on guidance in the development of generative AI, support NIST testing and evaluation of the technology, and explore future opportunities for the use of generative AI. Applications for the new working group are being accepted through July 9.

In a letter to the president at the outset of the report, committee Chair Miriam Vogel of EqualAI, Inc., and Vice Chair James Manyika of Google stressed that “the world has changed dramatically” in the year since the committee was brought together, as AI “now dominates the public discourse, catalyzing both excitement and concern across the globe.”

“Direct and intentional action is required to realize AI’s benefits, reduce potential risks, and guarantee equitable distribution of its benefits across our society,” Vogel and Manyika wrote. “With the acceleration of AI adoption comes a parallel imperative to ensure its development and deployment is guided by responsible governance. Such governance begins with a crucial first step: alignment on standards and best practices. And because its training and use has no physical borders, its governance must be workable and understandable for users throughout society, operating in the wide landscape of legal jurisdictions.”

“A framework for AI governance must start by evaluating an AI system’s potential risks and benefits in a particular use case and for a particular audience. Only then can we determine whether and how to proceed with its development or deployment and ensure that AI systems are worthy of our trust,” they added, noting the potential of public trust being diminished by AI errors or bias as well as the potential of AI misuse “to cause significant harm, like cyber intrusions or the spread of misinformation.”

For its first year, the committee focused on leadership in trustworthy artificial intelligence, leadership in research and development, supporting the U.S. workforce and providing opportunity, and international collaboration. The report is written not only for the president but members of Congress, AI innovators and policymakers, and stakeholders in a national conversation on AI governance.

The report outlines 14 objectives for engaging with AI “from the logistical to the innovative.” The first is to “operationalize trustworthy AI governance” with an approach that protects against risks “while allowing the benefits of values-based AI services to accrue to the public.” To achieve this, the committee advocates supporting public and private adoption of the NIST AI Risk Management Framework.

The next objective is to “bolster AI leadership, coordination, and funding in the White House and across the U.S. government,” with the recommended actions to “empower and fill vacant AI leadership roles in the Executive Office of the President,” “fund NAIIO to fully enact their mission” as well as fund NIST’s AI work, and create the role of a Chief Responsible AI Officer (CRAIO) perhaps in the Office of Management and Budget or NAIIO by executive order. NAIAC also recommends the establishment of an Emerging Technology Council – potentially led by the vice president and composed of cabinet and key White House leaders – to “coordinate and drive technology policy across the U.S. government and ensure that the opportunities and challenges associated with these technologies are addressed in a holistic and ethical manner.”

With the objective of organizing and elevating AI leadership in federal agencies, NAIAC recommends an executive order ensuring AI leadership and coordination at each department and agency – with each agency having a senior-level official (either existing chief technology officers and/or chief information officers, or newly appointed chief AI officers) “sufficiently resourced and empowered to determine whether an AI tool is appropriate to adopt in the first place — and if so, institute oversight for AI development, deployment, and use within the agency.” The committee also recommends continuing to implement congressional mandates and executive orders on AI with increased appropriations for OMB, the Office of Personnel Management (OPM), and the General Services Administration (GSA).

The next objective focuses on empowering small and medium-sized organizations for trustworthy AI development and use, creating a multi-agency task force that includes industry stakeholders and representatives from the Small Business Administration (SBA), NIST, GSA, and the NSF Directorate for Technology, Innovation and Partnerships. “All of this entity’s efforts — best practices, validation measures, voluntary standards, training materials, and so forth — should be made freely available to the public using standard open-source and Creative Commons (CC) licenses. This would ensure that the translational efforts provide maximal public benefit,” the report states. “…Industry guidance and insights will be critical to ensure that the translational knowledge and capabilities produced by this entity are relevant and useful in the development of more trustworthy AI.”

NAIAC said the objective of ensuring “AI is trustworthy and lawful and expands opportunities” should “ensure sufficient resources for AI-related civil rights enforcement” to assist the U.S. government in identifying and reducing “potential algorithmic discrimination.” In the R&D plank, the objective of supporting sociotechnical research on AI systems includes developing a research base and community of experts focused on this work, while the objective of creating an AI Research and Innovation Observatory, potentially at the NSF, intends to “inform stakeholders across the government of progress to help steer the co-evolution of AI technology and policy, maximizing the impact of the U.S. government’s investments in AI.”

The objective of creating a “large-scale national AI research resource” acknowledges that “there is no perfect design for this type of large-scale national research resource; different plans could be proposed, each with distinct pros and cons” and recommends that diverse stakeholders be involved in such an endeavor. “NAIAC cautions against the further centralization of power within industry in the attempt to create a national AI cloud, and supports, in the strongest terms, the NAIRR’s approaches to create a distributed and rotational resourcing model to promote a true alternative to our current AI landscape,” the report adds.

In a quest to “modernize federal labor market data for the AI era,” the committee advocates supporting Labor Department efforts, as “AI-driven tools coupled with real-time labor market data can enable workers to not only adapt to a changing workplace, but also thrive.” For the objective of scaling an AI-capable federal workforce in the face of “an acute digital talent shortage,” the report recommends developing an approach to train the current and future federal workforce for the AI era and training a new generation of AI-skilled civil servants through the creation of a United States Digital Service Academy (“an accredited, degree-granting university helping to meet the AI talent needs of agencies across the federal government in the mold of the U.S. military service academies”), along with the potential creation of a Digital Service Academic Compact with accredited U.S. colleges and universities “in the mold of the Community College of the Air Force.” NAIAC also encourages investing in AI opportunities for the federal workforce, boosting short-term federal AI talent including through the potential creation of a civilian National Reserve Digital Corps, and reforming immigration policies “to attract and retain international tech talent” given the “numerous insurmountable obstacles for immigrants to stay after enjoying our first-class higher education institutions, or provide the critical tech skills necessary for our AI economy to thrive.”

In the plank of international cooperation, the committee recommends continuing to cultivate international collaboration and leadership on AI such as funding NIST, “in coordination with the Department of State, to internationalize the AI Risk Management Framework (AI RMF) through formal translations, workshops at strategic multilateral institutions, and technical assistance to foreign governments.” The report also suggests creating a multilateral coalition for the State Department and NOAA “to accelerate AI for climate efforts,” expanding international cooperation on AI diplomacy, and expanding international cooperation on AI R&D with the recommendation that the National Science Foundation and State Department establish the U.S.-based Multilateral AI Research Institute (MAIRI).

“We spent the first year of a three-year term understanding the effort and resources required to advise on pressing opportunities and concerns about AI,” the report says. “As a result, we are optimistic and energized about the impact we expect to offer in the coming years. We are also keenly aware that we have much more to learn about current and planned government activities and interests involving AI, and much more work to do to realize and achieve our mandate.”

Noting the opportunities and challenges posed by generative AI, NAIAC plans to realign its working groups for 2023-2024 and “will consider the various mechanisms available on a shorter time frame, given the pace of AI development and deployment.”

“We plan to focus our work on both existing areas and new issues, including Generative AI — both the opportunities and guardrails,” NAIAC said of its work ahead. “We will consider how AI can be used to create social solutions. We will also explore how work and our workforce will be impacted by AI, and how to ensure more people can equitably benefit from these systems. We will also continue to explore opportunities for international collaboration and sustained U.S. leadership in AI and other emerging technologies.”

Bridget Johnson
Bridget Johnson is the Managing Editor for Homeland Security Today. A veteran journalist whose news articles and analyses have run in dozens of news outlets across the globe, Bridget first came to Washington to be online editor and a foreign policy writer at The Hill. Previously she was an editorial board member at the Rocky Mountain News and syndicated nation/world news columnist at the Los Angeles Daily News. Bridget is a terrorism analyst and security consultant with a specialty in online open-source extremist propaganda, incitement, recruitment, and training. She hosts and presents in Homeland Security Today law enforcement training webinars studying a range of counterterrorism topics including conspiracy theory extremism, complex coordinated attacks, critical infrastructure attacks, arson terrorism, drone and venue threats, antisemitism and white supremacists, anti-government extremism, and WMD threats. She is a Senior Risk Analyst for Gate 15 and a private investigator. Bridget is an NPR on-air contributor and has contributed to USA Today, The Wall Street Journal, New York Observer, National Review Online, Politico, New York Daily News, The Jerusalem Post, The Hill, Washington Times, RealClearWorld and more, and has myriad television and radio credits including Al-Jazeera, BBC and SiriusXM.
