Nice one, Joe, get those non-binding voluntary pledges in early
Seven top AI development houses have promised to test their models, share research, and develop methods to watermark machine-generated content in a bid to make the technology safer, the White House announced on Friday.
Leaders from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI vowed to work toward tackling safety, security, and trust issues in artificial intelligence, we’re told.
"Artificial intelligence offers enormous promise and great risk," the Biden-Harris administration said [PDF]. "To make the most of that promise, America must safeguard our society, our economy, and our national security against potential risks.

"The companies developing these pioneering technologies have a profound obligation to behave responsibly and ensure their products are safe."
Those orgs have agreed to voluntarily conduct internal and external security audits to ensure their products are safe in high-risk areas, such as cybersecurity and biosecurity, before they are made generally available. Some of that testing will reportedly be performed by independent experts.