
Who's Speaking? Speaker Recognition With Watson Speech-to-Text API


IBM Watson’s Speech To Text API now supports real-time speaker diarization, which distinguishes between speakers. Learn how to take advantage of this feature.
Distinguishing between two people in a conversation is difficult, especially when you are hearing them virtually or for the first time. The same is true when multiple voices interact with AI/cognitive systems, virtual assistants, and home assistants like Alexa or Google Home. To address this, Watson's Speech to Text API has been enhanced to support real-time speaker diarization.
In the post about building “WatBot,” a popular chatbot built on Watson services, several readers asked us to include the SpeakerLabels setting in our code sample.
Real-time speaker diarization is a need we've heard about from many businesses around the world that rely on transcribing large volumes of voice conversations every day. Imagine you operate a call center and need to act while customer and agent conversations are still in progress: providing product-related help, alerting a supervisor about negative feedback, or flagging calls tied to customer promotions. Until now, calls were typically transcribed and analyzed after they ended. With Watson's speaker diarization capability, that data is available immediately.
To experience speaker diarization with the Watson Speech to Text API on IBM Bluemix, head to the demo and play sample audio 1 or 2. If you check the input JSON, you will see the optional “speaker_labels” parameter set to true; this is what tells the service to distinguish between speakers in a conversation.
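As a rough sketch of what that input looks like, here is a "start" message of the kind sent over the service's WebSocket interface with diarization enabled (the surrounding field values here are illustrative, not copied from the demo):

```python
import json

# Sketch of a WebSocket "start" message for Watson Speech to Text.
# The key line for diarization is "speaker_labels": True; the other
# fields shown are typical but illustrative.
start_message = {
    "action": "start",
    "content-type": "audio/wav",   # format of the audio that follows
    "interim_results": True,       # stream partial transcripts
    "speaker_labels": True,        # enable real-time speaker diarization
}

payload = json.dumps(start_message)
print(payload)
```

Everything else about the request stays the same; the single optional flag is what turns diarization on.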
Part of the output JSON after real-time speech-to-text conversion shows a speaker label assigned to each speaker in the conversation.
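To sketch what that output looks like, the response carries a speaker_labels array in which each entry maps a time span to a numeric speaker id; those spans can be joined back to the word timestamps in the transcript. The words, timestamps, and confidences below are made up for illustration:

```python
import json

# Illustrative response fragment in the shape returned when
# speaker_labels is enabled (values are invented for this example).
response = json.loads("""
{
  "results": [
    {
      "alternatives": [
        {
          "transcript": "hello yeah hi this is Tom ",
          "timestamps": [
            ["hello", 0.68, 1.19],
            ["yeah", 1.47, 1.93],
            ["hi", 2.15, 2.39],
            ["this", 2.39, 2.57],
            ["is", 2.57, 2.73],
            ["Tom", 2.73, 3.14]
          ]
        }
      ],
      "final": true
    }
  ],
  "speaker_labels": [
    {"from": 0.68, "to": 1.19, "speaker": 2, "confidence": 0.5, "final": false},
    {"from": 1.47, "to": 1.93, "speaker": 1, "confidence": 0.5, "final": false},
    {"from": 2.15, "to": 2.39, "speaker": 1, "confidence": 0.5, "final": false},
    {"from": 2.39, "to": 2.57, "speaker": 1, "confidence": 0.5, "final": false},
    {"from": 2.57, "to": 2.73, "speaker": 1, "confidence": 0.5, "final": false},
    {"from": 2.73, "to": 3.14, "speaker": 1, "confidence": 0.5, "final": false}
  ]
}
""")

# Join each word with its speaker id by matching start times.
speaker_by_start = {lbl["from"]: lbl["speaker"]
                    for lbl in response["speaker_labels"]}
words = response["results"][0]["alternatives"][0]["timestamps"]
labeled = [(speaker_by_start[start], word) for word, start, _end in words]
print(labeled)
# → [(2, 'hello'), (1, 'yeah'), (1, 'hi'), (1, 'this'), (1, 'is'), (1, 'Tom')]
```

Grouping consecutive words that share a speaker id is then all it takes to render the transcript as a two-party dialogue.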
Refer to the WatBot repository to see how to enable speaker diarization in an existing Android app. You can use the other Watson SDKs to achieve the same result.
From chatbots to home assistants like Alexa and Google Home, and from call centers to medical services, the possibilities are endless.
