Smarter software and human experts will help Google police content glorifying terror and violence.
Google has unveiled four measures it will use to tackle the spread of terror-related material online.
The measures include smarter software that can spot extremist material and greater use of human experts to vet content.
It said terrorism was an “attack on open societies” and tackling its influence was a critical challenge.
It said it had long worked to remove terrorist content, but acknowledged that more had to be done.
The steps it plans to take were outlined in an editorial published in the Financial Times newspaper.
The steps apply mainly to Google’s video-sharing site YouTube.
It also said it would work with Facebook, Microsoft and Twitter to establish an industry body that would produce technology smaller companies could use to police problematic content.
“Extremists and terrorists seek to attack and erode not just our security, but also our values, the very things that make our societies open and free,” wrote Kent Walker, Google’s general counsel. “We must not let them.”
Labour MP Yvette Cooper said Google’s pledge to take action was “welcome”.
As chair of the House of Commons Home Affairs Select Committee, Ms Cooper oversaw a report that was heavily critical of social networks and their efforts to root out illegal content.
“The select committee recommended that they should be more proactive in searching for – and taking down – illegal and extremist content, and invest more of their profits in moderation,” she said.
“News that Google will now proactively scan content and fund the trusted flaggers who help to moderate its site is therefore important and welcome, though there is still more to do,” she added.
Google’s announcement comes a few days after Facebook made a similar pledge, saying it would deploy artificial intelligence software to police what people post.