In a landmark move to marry innovation with integrity, the Biden administration has introduced a comprehensive strategy directing federal agencies to use Artificial Intelligence (AI) technologies responsibly. The initiative, announced by the Office of Management and Budget (OMB), mandates a dual focus: amplifying the benefits of AI while diligently mitigating its potential risks.
Central to this directive is the establishment of “binding requirements” for agencies, obligating them to conduct thorough evaluations of AI tools prior to deployment. These measures aim to ensure that AI applications do not compromise the rights or safety of American citizens. Vice President Kamala Harris emphasized the administration’s commitment to responsible AI utilization, stating, “When government agencies use AI tools, we will now require them to verify that those tools do not endanger the rights and safety of the American people.”
The OMB has set a deadline of December 1, 2024, for federal bodies to implement robust safeguards against risks such as algorithmic bias and invasion of privacy. “These safeguards include a range of mandatory actions to reliably assess, test, and monitor AI’s impacts on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI,” OMB wrote in a fact sheet. This includes mandatory transparency about AI usage, comprehensive risk assessments, and the introduction of mechanisms for public oversight.
Illustrating the administration’s approach, Harris outlined specific scenarios where AI’s impact would be closely scrutinized. For instance, the Transportation Security Administration will offer travelers the ability to opt out of AI-based facial recognition, and the Department of Veterans Affairs must demonstrate the absence of racial bias in AI-powered diagnostic tools.
OMB’s directive covers an array of AI applications, from internally developed projects to those acquired from external contractors. Agencies that fail to meet these guidelines must discontinue use of non-compliant AI systems, barring exceptional circumstances.
This initiative also anticipates the establishment of AI governance boards across agencies, tasked with overseeing AI adoption and ensuring adherence to the new standards. The creation of such boards, along with the designation of chief AI officers, signifies a structured approach to AI oversight at the highest levels of federal administration.
OMB Director Shalanda Young articulated the broader vision of this policy, highlighting AI’s potential to “improve public services” while maintaining a conscientious stance on its deployment. This includes facilitating cross-agency collaboration and easing the integration of AI technologies in a manner that safeguards public interest.
The administration’s proactive stance on AI extends to workforce development, with plans to bring at least 100 AI professionals into federal roles by this summer. This hiring push is part of the broader “AI talent surge” called for by President Joe Biden, underscoring the government’s commitment to leading by example in the ethical deployment of AI technologies.
Furthermore, the OMB is seeking public and industry input on fostering a competitive AI vendor ecosystem and incorporating risk management practices into federal contracts. This consultation process reflects the administration’s dedication to collaborative governance and the promotion of AI that serves the public good, setting a precedent for responsible technological stewardship in the digital age.