
Snoop Dogg, sentient AI and the ‘Arrival Mind Paradox’


Don’t be so trusting: Personified AI agents seem fun and helpful, but their widespread deployment is a real threat and we are unprepared.
Like many longtime technologists, I am deeply worried about the dangers of AI, both for its near-term risk to society and its long-term threat to humanity. Back in 2016 I put a date on my greatest concerns, warning that we could achieve Artificial Superintelligence by 2030 and Sentient Superintelligence soon thereafter.
My words got attention back then, but often for the wrong reasons — with criticism that I was off by a few decades. I hope the critics are correct, but the last seven years have only made me more concerned that these milestones are rapidly approaching and we remain largely unprepared.

The likely risks of superintelligence?
Sure, there’s far more conversation these days about the “existential risks” of AI than in years past, but the discussion often jumps directly to movie plots like WarGames (1983), in which an AI almost causes a nuclear war by misinterpreting human objectives, or The Terminator (1984), in which an autonomous weapons system evolves into a sentient AI that turns against us with an army of red-eyed robots. Both are great movies, but do we really think these are the likely risks of a superintelligence?
Of course, an accidental nuclear launch or autonomous weapons gone rogue are real threats, but they happen to be dangers that governments already take seriously. On the other hand, I am confident that a sentient superintelligence would be able to easily subdue humanity without resorting to nukes or killer robots. In fact, it wouldn’t need to use any form of traditional violence. Instead, a superintelligence will simply manipulate humanity to meet its own interests. 
I know that sounds like just another movie plot, but the AI systems that big tech is currently developing are being optimized to influence society at scale. This isn’t a bug in their design efforts or an unintended consequence — it’s a direct goal. 
After all, many of the largest corporations working on AI systems have business models that involve selling targeted influence. We’ve all seen the damage this can do to society thanks to years of unregulated social media. That said, traditional online influence will soon look primitive, because widely deployed AI systems will be able to target users individually through personalized, interactive conversations.

Hiding behind friendly faces
It was less than two years ago that I wrote pieces here in VentureBeat about “AI micro-targeting” and the looming dangers of conversational manipulation. In those articles, I explored how AI systems would soon be able to manipulate users through interactive dialog. My warning back then was that corporations would race to deploy artificial agents that are designed to draw us into friendly conversation and impart influence objectives on behalf of third-party sponsors. I also warned that this tactic would start out as text chat, but would quickly become personified as voice dialog coming from friendly faces: Artificial characters that users will come to trust and rely upon. 
Well, at Meta Connect 2023, Meta announced it will deploy an army of AI-powered chatbots on Facebook, Instagram and WhatsApp through partnerships with “cultural icons and influencers” including Snoop Dogg, Kendall Jenner, Tom Brady, Chris Paul and Paris Hilton.
“This isn’t just gonna be about answering queries,” Mark Zuckerberg said about the technology. “This is about entertainment and about helping you do things to connect with the people around you.”
In addition, he indicated that while the chatbots are text-based for now, voice-powered versions will likely be deployed early next year.