
Administration Secures Voluntary Commitments from Eight Additional Artificial Intelligence Companies to Manage the Risks Posed by AI

The companies commit to internal and external security testing of their AI systems before their release. 

In July, the Biden-Harris Administration secured voluntary commitments from seven leading AI companies to help advance the development of safe, secure, and trustworthy AI.

Today, U.S. Secretary of Commerce Gina Raimondo, White House Chief of Staff Jeff Zients, and senior administration officials are convening additional industry leaders at the White House to announce that the Administration has secured a second round of voluntary commitments from eight companies—Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability—that will help drive safe, secure, and trustworthy development of AI technology.

The Administration is developing an Executive Order and will continue to pursue bipartisan legislation to help America lead the way in responsible AI development.

These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI—safety, security, and trust—and mark a critical step toward developing responsible AI. As the pace of innovation continues to accelerate, the Biden-Harris Administration will continue to take decisive action to keep Americans safe and protect their rights.

Today, these eight leading AI companies commit to:

Ensuring Products are Safe Before Introducing Them to the Public

  • The companies commit to internal and external security testing of their AI systems before their release. This testing, which will be carried out in part by independent experts, guards against some of the most significant sources of AI risk, such as biosecurity and cybersecurity threats, as well as AI’s broader societal effects.
  • The companies commit to sharing information across the industry and with governments, civil society, and academia on managing AI risks. This includes best practices for safety, information on attempts to circumvent safeguards, and technical collaboration.

Building Systems that Put Security First

  • The companies commit to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. These model weights are the most essential part of an AI system, and the companies agree that it is vital that the model weights be released only when intended and when security risks are considered.
  • The companies commit to facilitating third-party discovery and reporting of vulnerabilities in their AI systems. Some issues may persist even after an AI system is released, and a robust reporting mechanism enables them to be found and fixed quickly.

Earning the Public’s Trust

  • The companies commit to developing robust technical mechanisms to ensure that users know when content is AI-generated, such as a watermarking system. This action enables creativity and productivity with AI to flourish while reducing the dangers of fraud and deception.
  • The companies commit to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use. These reports will cover both security risks and societal risks, such as the effects on fairness and bias.
  • The companies commit to prioritizing research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy. The track record of AI shows the potential magnitude and prevalence of these dangers, and the companies commit to rolling out AI that mitigates them.
  • The companies commit to developing and deploying advanced AI systems to help address society’s greatest challenges. From cancer prevention to mitigating climate change to so much in between, AI—if properly managed—can contribute enormously to the prosperity, equality, and security of all.

In developing these commitments, the Administration consulted with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK. These commitments complement Japan’s leadership of the G-7 Hiroshima Process, the United Kingdom’s Summit on AI Safety, and India’s leadership as Chair of the Global Partnership on AI.

The Office of Management and Budget will soon release draft policy guidance for federal agencies to ensure that the development, procurement, and use of AI systems are centered on safeguarding the American people’s rights and safety.

Read more at the White House

