
Google to ramp up AI efforts to ID extremism on YouTube


Last week Facebook solicited help with what it dubbed “hard questions” — including how it should tackle the spread of terrorism propaganda on its platform.
Yesterday Google followed suit with its own public pronouncement, via an op-ed in the FT newspaper, explaining how it’s ramping up measures to tackle extremist content.
Both companies have been coming under increasing political pressure, especially in Europe, to do more to quash extremist content — with politicians in the UK and Germany, among others, pointing the finger of blame at platforms such as YouTube for hosting hate speech and extremist content.
Europe has suffered a spate of terror attacks in recent years, with four in the UK alone since March. And governments in the UK and France are currently considering whether to introduce a new liability for tech platforms that fail to promptly remove terrorist content — arguing that terrorists are being radicalized with the help of such content.
Earlier this month the UK’s prime minister also called for international agreements between allied, democratic governments to “regulate cyberspace to prevent the spread of extremism and terrorist planning”.
In Germany, meanwhile, a proposal that includes big fines for social media firms that fail to take down hate speech has already gained government backing.
Besides the threat of fines being written into law, Google has a commercial incentive: YouTube faced an advertiser backlash earlier this year over ads being displayed alongside extremist content, with several companies pulling their ads from the platform.
Google subsequently updated the platform’s guidelines to stop ads being served against controversial content, including videos containing “hateful content” and “incendiary and demeaning content,” so their makers could no longer monetize them via Google’s ad network. For this measure to succeed, though, the company still needs to be able to reliably identify such content.
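To make the mechanics concrete, here is a toy sketch of the gating this implies: identification happens first (by reviewers or classifiers), and ad serving then checks the resulting labels. The label names, Video structure and eligible_for_ads helper are hypothetical illustrations, not YouTube’s actual systems.

```python
# A toy sketch of the demonetization gate the updated guidelines imply.
# Assumption: policy review attaches labels to a video; ad serving consults them.
from dataclasses import dataclass, field

BLOCKED_LABELS = {"hateful", "incendiary", "demeaning"}  # illustrative labels

@dataclass
class Video:
    video_id: str
    policy_labels: set[str] = field(default_factory=set)  # filled in by review

def eligible_for_ads(video: Video) -> bool:
    """A video is monetizable only if review applied no blocked label to it."""
    return not (video.policy_labels & BLOCKED_LABELS)

print(eligible_for_ads(Video("abc123", {"incendiary"})))  # False: no ad revenue
```

The gate itself is trivial; the hard part, as the article notes, is producing those labels accurately and at scale in the first place.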
Rather than requesting ideas for combating the spread of extremist content, as Facebook did last week, Google is simply stating what its plan of action is — detailing four additional steps it says it’s going to take, and conceding that more action is needed to limit the spread of violent extremism.
“While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now,” writes Kent Walker, Google’s general counsel, in a blog post.
The four additional steps Walker lists are: expanded use of machine learning to identify extremist and terrorism-related videos; more independent experts in YouTube’s Trusted Flagger program; a tougher stance on borderline videos that don’t clearly violate its policies, which will be placed behind warnings and stripped of monetization, recommendations and comments; and expanded counter-radicalization efforts building on the Redirect Method, which uses targeted ads to steer potential recruits toward counter-extremist videos.
Despite increasing political pressure over extremism — and the attendant bad PR (not to mention threat of big fines) — Google is evidently hoping to retain its torch-bearing stance as a supporter of free speech by continuing to host controversial hate speech on its platform, just in a way that means it can’t be directly accused of providing violent individuals with a revenue stream. (Assuming it’s able to correctly identify all the problem content, of course.)
Whether this compromise will please either side on the ‘remove hate speech’ vs ‘retain free speech’ debate remains to be seen. The risk is it will please neither demographic. The success of the approach will also stand or fall on how quickly and accurately Google is able to identify problem content — and policing content at such scale is inevitably a hard problem.
It’s not clear exactly how many thousands of content reviewers Google employs at this point — we’ve asked and will update this post with any response.
Facebook recently added 3,000 reviewers to its headcount, bringing its total to 7,500. CEO Mark Zuckerberg also wants to apply AI to the content identification problem but has previously said it’s unlikely to be able to do this successfully for “many years”.
Touching on what Google has been doing already to tackle extremist content, i.e. prior to these additional measures, Walker adds: “We have thousands of people around the world who review and counter abuse of our platforms. Our engineers have developed technology to prevent re-uploads of known terrorist content using image-matching technology. We have invested in systems that use content-based signals to help identify new videos for removal. And we have developed partnerships with expert groups, counter-extremism agencies, and the other technology companies to help inform and strengthen our efforts.”
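Walker’s “image-matching technology” is not publicly specified, but a common approach to catching re-uploads is perceptual hashing, where near-identical frames map to nearly identical fingerprints. The sketch below uses a difference hash (dHash); the scheme, the distance threshold and the function names are assumptions for illustration, not Google’s actual system.

```python
# A minimal sketch of perceptual image matching for re-upload detection,
# assuming a dHash-style fingerprint. Requires Pillow (pip install Pillow).
from PIL import Image

def dhash(path: str, size: int = 8) -> int:
    """Reduce a frame to a 64-bit fingerprint of horizontal brightness gradients."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            bits = (bits << 1) | (left > right)  # 1 if brightness falls left-to-right
    return bits

def is_known_content(path: str, known_hashes: set[int], max_distance: int = 5) -> bool:
    """Flag an upload if its hash lies within a few bits of any stored fingerprint."""
    candidate = dhash(path)
    return any(bin(candidate ^ h).count("1") <= max_distance for h in known_hashes)
```

Because the hash captures brightness gradients rather than exact bytes, a re-encoded or resized copy of a known frame typically lands within a few bits of the stored fingerprint, while unrelated frames do not.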
