Voice-controlled assistants are proliferating, and opening them to third-party app makers is proving to be a key to success — for some more than others
"AI is the new UI" may be a cliché now. But back in 2011, when Apple first released Siri, the ability to control a mobile device by talking to it through an intelligent assistant was revolutionary. Granted, Siri wasn't as smart as HAL in the movie "2001: A Space Odyssey" or Eddie, the shipboard computer in "The Hitchhiker's Guide to the Galaxy," but it made enough of an impact on consumer technology to spawn a stream of similar intelligent assistants.
Siri was soon followed by Amazon’s Alexa, Microsoft’s Cortana, and Google’s Assistant. And these will likely be joined soon by many others, including Samsung’s Bixby, which is based on technology Samsung acquired when it bought Viv, a company founded by the people behind Siri.
And just as the iPhone took off when Apple opened it up to third-party app makers, the key to the success of these intelligent assistants may well be the ability for third-party developers to access them and employ them as a user interface to their applications.
It's an idea that's not been lost on the technology companies behind these assistants. Here's a look at how the "big four" assistants — Alexa, Siri, Cortana, and Google Assistant — are being used now, where they differ, and what comes next.
Amazon has been particularly successful in driving third-party adoption of Alexa. The company first made its Alexa Skills Kit available to developers in June 2015, and six months later over 130 skills were available. (Skills, in Amazon parlance, are applications that can be accessed on one of Amazon’s Echo devices using Alexa as the user interface.) Since then, the development of Alexa skills has exploded. By September 2016 over 3,000 were available, and in February 2017 Amazon announced that the number of skills had burst through the 10,000 mark. That means over 10,000 applications use Alexa as their user interface.
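To make the skill model concrete, here is a minimal sketch of what a skill's backend can look like, assuming the common setup of an AWS Lambda function written in Python that receives Alexa's JSON request and returns the documented response envelope. The intent name (`HelloIntent`) and all speech text are illustrative, not part of any real skill.

```python
# Minimal sketch of an Alexa skill backend as an AWS Lambda handler.
# Alexa POSTs a JSON request describing what the user said; the skill
# replies with a JSON envelope containing the speech to read back.
# The intent name ("HelloIntent") and speech text are illustrative.

def build_response(speech_text, end_session=True):
    """Wrap plain text in the response envelope Alexa expects."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    """Entry point: dispatch on the request type Alexa sends."""
    request = event["request"]
    if request["type"] == "LaunchRequest":
        # User opened the skill without asking for anything specific.
        return build_response("Welcome. Ask me to say hello.", end_session=False)
    if request["type"] == "IntentRequest" and request["intent"]["name"] == "HelloIntent":
        # User's utterance matched the skill's custom intent.
        return build_response("Hello from a custom skill.")
    return build_response("Sorry, I didn't understand that.")
```

The voice interaction model (which utterances map to which intents) is defined separately in the Alexa developer console; the backend only sees the resolved intent name, which is what keeps handlers like this one small.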
Perhaps more significant is the availability of Alexa Voice Service (AVS). Released in 2015, AVS allows manufacturers to build Alexa into connected products that have a microphone and speaker. Chinese manufacturer Tongfang has announced plans to integrate Alexa into its Seiki, Westinghouse, and Element Electronics smart TVs using a microphone built into the remote controls, enabling owners to use Alexa to carry out actions such as searching channel listings and managing the TV settings.
Chinese phone manufacturer Huawei has also announced plans to build Alexa into its Mate 9 smartphone, and car makers such as Ford are planning to build Alexa into their vehicles to enable drivers to carry out actions such as playing music or setting destinations on the navigation system. Ford owners will also be able to use specially developed skills on Echo devices to carry out functions on their cars such as activating remote start or locking and unlocking doors.
When it comes to voice-enabling third-party devices, James McQuivey, an analyst at Forrester Research, says that Amazon has a huge advantage over its rivals. That's because it has been working on Alexa for two years, and it can draw on its experience with its AWS cloud. "If you are working on a washing machine, then any problems have probably already been solved, Alexa has been tested, and someone may already have deployed it for that use," he says. "Amazon has realized that for Alexa to be deployed like this it has to handle the cloud, security, and so on, and it has learned how to do that from AWS."