
AI Action Summit review: Differing views cast doubt on AI’s ability to benefit whole of society


Governments, companies and civil society groups gathered at the third global AI summit to discuss how the technology can work for the benefit of everyone in society, but experts say competing imperatives mean there is no guarantee these visions will win out
During the third global artificial intelligence (AI) summit in Paris, dozens of governments and companies outlined their commitments to making the technology open, sustainable and aligned with the “public interest”, but AI experts believe there is a clear tension in the direction of travel.
Speaking with Computer Weekly, AI Action Summit attendees highlighted how AI is caught between competing rhetorical and developmental imperatives.
They noted, for example, that while the emphasis on AI as an open, public asset is promising, there is worryingly little in place to prevent further centralisations of power around the technology, which is still largely dominated by a handful of powerful corporations and countries.
They added that key political and industry figures – despite their apparent commitments to more positive, socially useful visions of AI – are making a worrying push towards deregulation, which could undermine public trust and create a race to the bottom in terms of safety and standards.
Despite these tensions, there is consensus that the summit opened up more room for competing visions of AI, even if there is no guarantee these will win out in the long run.
The Paris summit follows the inaugural AI Safety Summit hosted by the UK government at Bletchley Park in November 2023, and the second AI Seoul Summit in South Korea in May 2024, both of which largely focused on risks associated with the technology and placed an emphasis on improving its safety through international scientific cooperation and research.
To expand the scope of discussions, the AI Action Summit was organised around five dedicated work streams: public service AI, the future of work, innovation and culture, trust in AI, and global governance.
During the previous summit in Seoul, tech experts and civil society groups said that while there was a positive emphasis on expanding AI safety research and deepening international scientific cooperation, they had concerns about the domination of the AI safety field by narrow corporate interests.
In particular, they stressed the need for mandatory AI safety commitments from companies; socio-technical evaluations of systems that take into account how they interact with people and institutions in real-world situations; and wider participation from the public, workers and others affected by AI-powered systems.
However, despite the expanded scope of the AI Action Summit, many of these concerns remain in some form.

AI Action Summit developments
Over the course of the two-day summit, two major initiatives were announced: the Coalition for Environmentally Sustainable AI, which aims to bring together “stakeholders across the AI value chain for dialogue and ambitious collaborative initiatives”; and Current AI, a “public interest” foundation launched by French president Emmanuel Macron that seeks to steer the development of the technology in more socially beneficial directions.
Backed by 10 governments – Finland, France, Germany, Chile, India, Kenya, Morocco, Nigeria, Slovenia and Switzerland – as well as an assortment of philanthropic bodies and private companies (including Google and Salesforce, which are listed as “core partners”), Current AI aims to “reshape” the AI landscape by expanding access to high-quality datasets; investing in open source tooling and infrastructure to improve transparency around AI; and measuring its social and environmental impact.
European governments and private companies also partnered to commit around €200bn to AI-related investments, currently the largest public-private AI investment in the world. In the run-up to the summit, Macron announced that France would attract €109bn worth of private investment in datacentres and AI projects “in the coming years”.
The summit ended with 61 countries – including France, China, India, Japan, Australia and Canada – signing the Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet, which affirmed a number of shared priorities.
This includes promoting AI accessibility to reduce digital divides between rich and developing countries; “ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all”; avoiding market concentrations around the technology; reinforcing international cooperation; making AI sustainable; and encouraging deployments that “positively” shape labour markets.
However, the UK and US governments refused to sign the joint declaration. While it is still not clear exactly why, a spokesperson for prime minister Keir Starmer said at the time that the government would “only ever sign up to initiatives that are in UK national interests”.
Throughout the event, AI developers and key political figures from the US and Europe – including US vice-president JD Vance, Macron, and European Commission president Ursula von der Leyen – decried regulatory “red tape” around AI, arguing it is holding back innovation.
Vance, for example, said “excessive regulation of the AI sector could kill a transformative industry”, while both Macron and European Union (EU) digital chief Henna Virkkunen strongly indicated that the bloc would simplify its rules and implement them in a business-friendly way to help AI on the continent scale. “We have to cut red tape – and we will,” von der Leyen added.
There were also several developments in the immediate wake of the summit. These include the EU gutting its AI liability directive, which focused on providing recourse to people when their rights have been infringed by AI systems, and the rebranding of the UK’s AI Safety Institute to the AI Security Institute (AISI), which means it will no longer consider bias and freedom of expression issues, and will focus more narrowly on the security of the technology.

AI at a crossroads
Of those Computer Weekly spoke with, many identified a clear tension in the direction of travel set for the technology over the course of the summit.
For example, although key political figures were espousing rhetoric about the need for open, inclusive, sustainable and public interest AI in one breath, in the next they were decrying regulatory red tape, while committing hundreds of billions to proliferating the technology without clear guardrails.
For Sandra Wachter – a professor of technology and regulation at the Oxford Internet Institute, with a focus on AI ethics and law – it is unclear which red tape political figures such as Macron, Vance and von der Leyen were even referring to.
“I often ask people to list the laws that are standing in the way of progress,” she said. “In many areas, we don’t even have laws, or when we do have laws they aren’t good enough to actually address this, so I don’t see how any of this is holding AI back.”
Highlighting common AI booster rhetoric championed by political and industry figures, Wachter said she would like to see the conversation flipped on its head: “If my technology is so beneficial, if my products are so good for everyone, why wouldn’t I guarantee its safety by holding myself to account?”
Commenting on the EU’s decision to quietly rescind its AI liability directive in the wake of the summit, Wachter said that while other avenues still exist to challenge harmful automated decision-making, the decision represents a worrying potential sea change for AI regulation.
“It worries me a lot because it’s been done under the ‘We need to foster innovation’ banner, but what type of innovation? For whom? Who wins if we have biased, unsustainable, misleading, deceptive AI?” she said, adding that it is not clear to her how the lives of every citizen will be improved by people not being able to get their day in court if AI has harmed them.
“Is it the eight billionaires, or the other eight billion people? It’s very clear that most people will not benefit from a system that isn’t tested, that isn’t safe, that is racist, and that destroys the planet … so this idea that regulation is holding back innovation is completely misguided.”
Wachter added that AI is an “inherently problematic technology, in that its problems are rooted in how it works”, meaning that if it is going to be used for any kind of greater good, “then you have to make sure that you hold those negative side effects back as much as possible”.
Further warning against the dangers of creating a false dichotomy between innovation and regulation, Linda Griffin, vice-president of global affairs at Mozilla, said: “We should be very sceptical of claims against regulation.”
She added that she personally finds the anti-regulation rhetoric worrying: “Innovation, growth and profits for a handful of the biggest companies in the world does not mean innovation and growth for the rest of us.”
Gaia Marcus, director of the Ada Lovelace Institute (ALI), also came away from the summit “feeling like we’re at something of a crossroads when it comes to AI development and deployment”, arguing that governments need to build out the incentives to make sure any AI systems deployed in their jurisdictions are both safe and trustworthy.
She added that it was especially important both to ensure alternative models and systems are built outside the walled gardens of big tech, so governments “won’t be paying extortionate rents to a few technology companies for a generation”, and to introduce incentives that ensure the safety of general-purpose AI systems at the bottom of the technology stack, which everything else is built on top of.
Commenting on the current inflection point of AI, Marcus said: “One path is really about winners and losers, about pushing corporate interests or a narrow set of national interests ahead of the public interest, which we’d say is a path to nowhere, and then the other path is about nations working together to build a world where AI works for people in society.”
For international cooperation to be successful, Marcus said that – in the same way there are shared standards and norms around aviation or pharmaceuticals – it is key to create “shared infrastructure for building and testing AI systems”.
She added: “There’ll be no greater barrier to the transformative potential of AI than fading public confidence”, and that like-minded countries which recognise the costs of unaddressed risks must find other forums to continue building the safety agenda. “For a summit that was framed around action, we really wanted to see governments urgently coming together to start building the incentives, institutions and alternatives that will enable broad access and enjoyment of the benefits of AI.”