A new model called DolphinGemma can analyze dolphin sounds and generate plausible sound sequences, accelerating a decades-long research project.
Google is collaborating with researchers to learn how to decode dolphin vocalizations "in the quest for interspecies communication."
The DolphinGemma AI model, announced today, aims to decode the clicks, whistles, and squawks dolphins make to enhance "our potential connection with the marine world."
Google trained the Gemini-backed model on a "vast, labeled dataset" of dolphin sounds compiled by the Wild Dolphin Project (WDP). The WDP has led the world's longest-running underwater research project since 1985, Google says.
"I've been waiting for this for 40 years," says Dr. Denise Herzing, research director and founder of the Wild Dolphin Project, in an accompanying video. "Feeding dolphin sounds into an AI model like DolphinGemma will give us a really good look at if there are subtleties that humans can't pick out. You're going to understand what priorities they have, what they talk about."
Google's AI Is Learning to Talk to Dolphins for 'Interspecies Communication'