
Scarlett Johansson's AI row has echoes of Silicon Valley's bad old days

A clash with the actor has echoes of the macho tech giants of the past, says Zoe Kleinman.
“Move fast and break things” is a motto that continues to haunt the tech sector, some 20 years after it was coined by a young Mark Zuckerberg.
Those five words came to symbolise Silicon Valley at its worst – a combination of ruthless ambition and a rather breathtaking arrogance – profit-driven innovation without fear of consequence.
I was reminded of that phrase this week when the actor Scarlett Johansson clashed with OpenAI. Ms Johansson said she and her agent had both declined requests for her to be the voice of its new product for ChatGPT – and that when it was unveiled, it sounded just like her anyway. OpenAI denies that it was an intentional imitation.
It’s a classic illustration of exactly what the creative industries are so worried about – being mimicked and eventually replaced by artificial intelligence.
Last week Sony Music, the largest music publisher in the world, wrote to Google, Microsoft and OpenAI demanding to know whether any of its artists’ songs had been used to develop AI systems, saying they had no permission to do so.
There are echoes in all this of the macho Silicon Valley giants of old: seeking forgiveness rather than permission as an unofficial business plan.
But the tech firms of 2024 are extremely keen to distance themselves from that reputation.
OpenAI wasn’t shaped from that mould. It was originally created as a non-profit organisation that would reinvest any extra profits back into the business.
In 2019, when it formed a profit-oriented arm, the company said the profit side would be led by the non-profit side, and that a cap would be imposed on the returns investors could earn.
Not everybody was happy about this shift – it was said to have been a key reason behind co-founder Elon Musk’s decision to walk away.
When OpenAI CEO Sam Altman was suddenly fired by his own board late last year, one of the theories was that he wanted to move further away from the original mission. We never found out for sure.
But even if OpenAI has become more profit-driven, it still has to face up to its responsibilities.
In the world of policy-making, almost everyone is agreed that clear boundaries are needed to keep companies like OpenAI in line before disaster strikes.
So far, the AI giants have largely played ball on paper. At the world’s first AI Safety Summit six months ago, a bunch of tech bosses signed a voluntary pledge to create responsible, safe products that would maximise the benefits of AI technology and minimise its risks.
Those risks, originally identified by the event organisers, were the proper stuff of nightmares. When I asked back then about the more down-to-earth threats to people posed by AI tools discriminating against them, or replacing them in their jobs, I was quite firmly told that this gathering was dedicated to discussing the absolute worst-case scenarios only – this was Terminator, Doomsday, AI-goes-rogue-and-destroys-humanity territory.