
OpenAI’s Sam Altman To Congress: Regulate Us, Please!


In a wide-ranging and historic congressional hearing Tuesday, the creator of the world’s most powerful artificial intelligence called on the government to regulate his industry.
“There should be limits on what a deployed model is capable of and then what it actually does,” declared Sam Altman, CEO and cofounder of OpenAI, referring to the underlying AI which powers such products as ChatGPT. He called on Congress to establish a new agency to license large-scale AI efforts, create safety standards, and carry out independent audits to ensure compliance with safety thresholds.
The hearing, run by Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology, and the law, heard testimony from Mr. Altman, renowned AI researcher Gary Marcus, and IBM’s chief privacy and trust officer, Christina Montgomery. It was an amicable question-and-answer session that ran nearly three hours, but it highlighted the gap between legislators who lack an understanding of AI and AI experts who have no experience legislating.
The three witnesses unanimously stressed the urgency of balancing advances in AI research with adequate safeguards and regulation. While generative AI, the flavor of artificial intelligence behind ChatGPT, has the potential to transform fields such as healthcare, physics, biology, and climate modeling, it can also be used to spread disinformation and exacerbate societal inequalities.
As AI continues to evolve and becomes an increasingly integral part of our lives, the hearing marked a significant step toward comprehensive understanding and governance of the AI landscape. But it also highlighted how little even AI researchers themselves understand about how the most powerful generative systems do what they do.
“We need to know more about how the models work,” said Marcus.
Large language models such as GPT-4, the generative AI behind the latest iteration of ChatGPT, have read more text than any single person possibly could and can sound human in their responses. But their creators know little more about how the AI actually reasons than a lion tamer knows what a lion truly thinks.
On the issue of liability for any harm caused by AI tools, the AI experts themselves suggested that companies responsible for deploying AI products should be held accountable.
Altman acknowledged that if someone is harmed by an AI tool, that person should be able to sue the company behind it. He does not believe Section 230, which shields social media companies from liability for content posted on their platforms, is the correct framework for AI companies.
