
McAfee unveils Project Mockingbird to stop AI voice clone scams


McAfee has introduced Project Mockingbird as a way to detect AI-generated deepfakes that use audio to scam consumers with fake news and other schemes.
In a bid to combat the escalating threat posed by AI-generated scams, McAfee created its AI-powered Deepfake Audio Detection technology, dubbed Project Mockingbird.
Unveiled at CES 2024, the big tech trade show in Las Vegas, this innovative technology aims to shield consumers from cybercriminals wielding manipulated, AI-generated audio to perpetrate scams and manipulate public perception.
In these scams, such as the one in the video attached, scammers start a video with a legitimate speaker, such as a well-known newscaster. The video then cuts to fake material in which the speaker utters words the real person never actually said. It's a deepfake, with both audio and video, said Steve Grobman, CTO of McAfee, in an interview with VentureBeat.
“McAfee has been all about protecting consumers from the threats that impact their digital lives. We’ve done that forever, traditionally, around detecting malware and preventing people from going to dangerous websites,” Grobman said. “Clearly, with generative AI, we’re starting to see a very rapid pivot to cybercriminals, bad actors, using generative AI to build a wide range of scams.”
He added, “As we move forward into the election cycle, we fully expect there to be use of generative AI in a number of forms for disinformation, as well as legitimate political campaign content generation. So, because of that, over the last couple of years, McAfee has really increased our investment in how we make sure that we have the right technology that will be able to go into our various products and backend technologies that can detect these capabilities that will then be able to be used by our customers to make more informed decisions on whether a video is authentic, whether it’s something they want to trust, whether it’s something that they need to be more cautious around.”
If used in conjunction with other hacked material, deepfakes could easily fool people. For instance, Insomniac Games, the maker of Spider-Man 2, was hacked and had its private data leaked onto the web. Cybercriminals could slip deepfake content into such a leak, where it would be hard to distinguish from the genuine material stolen from the victim company.
“What we’re going to be announcing at CES is really our first public sets of demonstrations of some of our newer technologies that we built,” Grobman said. “We’re working across all domains. So we’re working on technology for image detection, video detection, text detection. One that we’ve put a lot of investment into recently is deepfake audio. And one of the reasons is if you think about an adversary creating fake content, there’s a lot of optionality to use all sorts of video that isn’t necessarily the person that the audio is coming from. There’s the classic deepfake, where you have somebody talking, and the video and audio are synchronized. But there’s a lot of opportunity to have the audio track on top of B-roll or other video, where the person in the picture is not the narrator.”

Project Mockingbird
Project Mockingbird detects whether the audio is truly the human person or not, based on listening to the words that are spoken. It’s a way to combat the concerning trend of using generative AI to create convincing deepfakes.
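McAfee has not disclosed how Project Mockingbird's models work. As a toy illustration of the general idea behind audio deepfake detection — extracting acoustic features from a waveform and scoring them, where a real detector would feed such features into a trained classifier — here is a minimal sketch. The feature choice (spectral flatness) and threshold logic are illustrative assumptions, not McAfee's method:

```python
import numpy as np

def spectral_flatness(frame: np.ndarray) -> float:
    """Geometric mean / arithmetic mean of the power spectrum.
    Values near 1.0 indicate noise-like audio; near 0.0, tonal audio."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

def score_clip(samples: np.ndarray, frame_len: int = 1024) -> float:
    """Average spectral flatness across frames -- one example feature a
    detector might compute before handing off to a trained model."""
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len, frame_len)]
    return float(np.mean([spectral_flatness(f) for f in frames]))

# Sanity check on synthetic audio: a pure tone is highly tonal
# (low flatness), while white noise scores much higher.
t = np.linspace(0, 1, 16000, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
noise = np.random.default_rng(0).normal(size=16000)
print(score_clip(tone) < score_clip(noise))  # True
```

A production system would use many such features (or learned embeddings) and a classifier trained on labeled real and synthetic speech, rather than a single hand-picked statistic.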
Creating deepfakes of celebrities in porn videos has been a problem for a while, but most of those are confined to deepfake video sites. It’s relatively easy for consumers to avoid such scams. But with the deepfake audio tricks, the problem is more insidious, Grobman said. You can find plenty of these deepfake audio scams sitting in posts on social media, he said. He is particularly concerned about the rise of these deepfake audio scams in light of the coming 2024 U.S. Presidential election.
The surge in AI advancements has facilitated cybercriminals in creating deceptive content, leading to a rise in scams that exploit manipulated audio and video. These deceptions range from voice cloning to impersonate loved ones soliciting money to manipulating authentic videos with altered audio, making it challenging for consumers to discern authenticity in the digital realm.
