
Google’s use of AI to mimic humans is unethical and bad UX


Google CEO Sundar Pichai took a giant leap in the wrong direction this week. At his company’s I/O developer conference, Pichai wowed the crowd by using a virtual assistant to fool people, making unsuspecting humans the target of laughter and raising serious ethical questions about future uses for AI.
Pichai played for the audience a recording of a conversation between Google Assistant and a hair salon receptionist.
The virtual assistant’s utterance of “Mm-hmm” generated peals of laughter from those eavesdropping on the equivalent of a prank call (1:56:36 in the video of the announcement). Listen to this poor sap of a hair salon receptionist! She’s completely falling for it! Let’s listen and see what happens next!
The call’s conclusion generated more laughter and applause from the ranks of I/O attendees and the Google CEO. “That was a real call you just heard,” Pichai said as the chuckling continued. He explained that the call was powered by a new technology called Google Duplex. “It brings together all of our investments over the years: natural language understanding, deep learning, text to speech.”
As if outwitting a hair salon receptionist weren’t enough, Pichai then played the same trick on a restaurant worker, with the juxtaposition of the employee’s heavily accented English and the Google Assistant’s use of “er” and “mm-hmm” provoking still more sniggering.
“Again,” Pichai said, shaking his head at the wonderful hilarity of it all, “that was a real call.”
Google’s drive to make Assistant’s voice patterns indistinguishable from humans raises the question: Why? Just because it can be done is not reason enough to do it.
I can imagine that user experience experts shook their heads at this deception. While it’s certainly more pleasant to interact with a human-sounding voice assistant than with a mechanized robotic one, when I’m interacting with one, I want to know it. What did Google lose by not teaching its bot to introduce itself? When calling the hair salon, it could have said, “Hi, I’m Google Assistant calling on behalf of a client.”
It should have done so.
While that could be jarring for the receptionist — and the restaurant employee, and perhaps for you and me, assuming we eventually receive these calls — our acceptance of such interactions will grow with time. There’s little reason to think that AI calls won’t one day be as common as voice menus.
But our acceptance of seemingly autonomous voice assistants will depend on trust. And trust demands being able to distinguish when we’re talking to a human and when we’re talking to an AI.
A world where AI-powered text, voice, and video are indiscernible from their human-generated analogs is a scary one, and the technologies to create it are already here. It’s not what Alan Turing envisioned when he proposed his eponymous test.
The best way to prevent such a dystopian society — the so-called Infocalypse — is to apply the same ethics that govern human interaction to interactions with AI. Namely, honesty and truthfulness.
Pichai made two ethical mistakes at I/O. The first was turning the hair salon and restaurant employees into dupes. It was no better than a prank phone call staged for the amusement of his developer guests.
Pichai’s second offense was encouraging the development of an AI that impersonates humans. Little good can come from this; it risks setting a dangerous precedent. Instead of Duplex, Google ought to name this new technology Deception.
UPDATE 5/11/2018: In response to criticism over the Duplex-powered Assistant not identifying itself as a bot, Google issued the following statement: “We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product.”
This story originally appeared on Medium. Copyright 2018.
Blaise Zerega is the editor in chief of All Turtles, an AI startup studio, and the former EIC of VentureBeat.
