
European Union lawmakers approve world-leading artificial intelligence law


European Union lawmakers have approved the Artificial Intelligence Act, which acts as a global signpost for other nations in regulating artificial intelligence technology.
European Union lawmakers gave final approval to the 27-nation bloc’s artificial intelligence law Wednesday, putting the world-leading rules on track to take effect later this year.
Lawmakers in the European Parliament voted overwhelmingly in favor of the Artificial Intelligence Act, five years after regulations were first proposed. The AI Act is expected to act as a global signpost for other governments grappling with how to regulate the fast-developing technology.
"The AI Act has nudged the future of AI in a human-centric direction, in a direction where humans are in control of the technology and where it — the technology — helps us leverage new discoveries, economic growth, societal progress and unlock human potential," Dragos Tudorache, a Romanian lawmaker who was a co-leader of the Parliament negotiations on the draft law, said before the vote.
Big tech companies generally have supported the need to regulate AI while lobbying to ensure any rules work in their favor. OpenAI CEO Sam Altman caused a minor stir last year when he suggested the ChatGPT maker could pull out of Europe if it can’t comply with the AI Act — before backtracking to say there were no plans to leave.
Here’s a look at the world’s first comprehensive set of AI rules:
HOW DOES THE AI ACT WORK?
Like many EU regulations, the AI Act was initially intended to act as consumer safety legislation, taking a "risk-based approach" to products or services that use artificial intelligence.
The riskier an AI application, the more scrutiny it faces. The vast majority of AI systems are expected to be low risk, such as content recommendation systems or spam filters. Companies can choose to follow voluntary requirements and codes of conduct.
High-risk uses of AI, such as in medical devices or critical infrastructure like water or electrical networks, face tougher requirements like using high-quality data and providing clear information to users.
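The risk-based tiering described above can be sketched as a simple lookup. Note this is purely an illustration of the idea, not the Act's legal text: the tier names and example use cases below are simplified assumptions drawn from the examples in this article.

```python
from enum import Enum

class RiskTier(Enum):
    # Simplified tiers for illustration; the Act defines more categories.
    MINIMAL = "voluntary requirements and codes of conduct"
    HIGH = "mandatory obligations: high-quality data, clear user information"

# Hypothetical mapping of use cases to tiers, based on the article's examples.
EXAMPLE_USES = {
    "spam filter": RiskTier.MINIMAL,
    "content recommendation": RiskTier.MINIMAL,
    "medical device": RiskTier.HIGH,
    "critical infrastructure control": RiskTier.HIGH,
}

def obligations(use_case: str) -> str:
    """Return the (simplified) obligations attached to a use case."""
    return EXAMPLE_USES[use_case].value
```

The core idea the sketch captures: scrutiny scales with risk, so most low-risk systems face only voluntary commitments, while high-risk uses carry binding obligations.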
