
Facebook’s content moderation rules dubbed “alarming” by child safety charity


The Guardian has published details of Facebook’s content moderation guidelines covering controversial issues such as violence, hate speech and self-harm culled from more than 100 internal training manuals, spreadsheets and flowcharts that the newspaper has seen.
The documents set out in black and white some of the contradictory positions Facebook has adopted for dealing with different types of disturbing content as it tries to balance taking down material with holding its preferred line on 'free speech'. This goes some way towards explaining why the company continues to run into moderation problems, as does the tiny number of people it employs to review and judge flagged content.
The internal moderation guidelines show, for example, that Facebook allows the sharing of some photos of non-sexual child abuse, such as depictions of bullying, and will only remove or mark up content if there is deemed to be a sadistic or celebratory element.
Facebook also permits imagery showing animal cruelty, with only content deemed "extremely upsetting" marked as disturbing.
And the platform apparently allows users to live stream attempts to self-harm, because it says it "doesn't want to censor or punish people in distress".
When it comes to violent content, Facebook's guidelines allow videos of violent deaths to be shared, marked as disturbing, on the grounds that they can help create awareness of issues. Certain types of generally violent written statements, such as those advocating violence against women, are also allowed to stand, because Facebook's guidelines require what it deems "credible calls for action" before violent statements are removed.
The policies also include guidelines for how to deal with revenge porn. For this type of content to be removed, Facebook requires that three conditions are fulfilled, including that the moderator can confirm a lack of consent via a "vengeful context" or from an independent source, such as a news report.
Other details from the guidelines show that anyone with more than 100,000 followers is designated a public figure and so denied the protections afforded to private individuals; and that Facebook changed its policy on nudity following the outcry over its decision to remove an iconic Vietnam war photograph depicting a naked child screaming. It now allows for “newsworthy exceptions” under its “terror of war” guidelines. (Although images of child nudity in the context of the Holocaust are not allowed on the site.)
The exposé of internal rules comes at a time when the social media giant is under mounting pressure for the decisions it makes on content moderation.
In April, for example, the German government backed a proposal to levy fines of up to €50 million on social media platforms that fail to remove illegal hate speech promptly. A UK parliamentary committee has also this month called on the government to look at imposing fines for content moderation failures. And earlier this month an Austrian court ruled that Facebook must remove posts deemed to be hate speech, and do so globally, rather than just blocking their visibility locally.
At the same time Facebook’s live streaming feature has been used to broadcast murders and suicides, with the company apparently unable to preemptively shut off streams.
In the wake of the problems with Facebook Live, earlier this month the company said it would be hiring 3,000 extra moderators, bringing its total headcount for reviewing posts to 7,500. However, this remains a drop in the ocean for a service that has close to two billion users, who collectively share billions of pieces of content daily.
Asked for a response to Facebook's moderation guidelines, a spokesperson for the UK's National Society for the Prevention of Cruelty to Children described the rules as "alarming" and called for independent regulation of the platform's moderation policies, backed up with fines for non-compliance.
“This insight into Facebook’s rules on moderating content is alarming to say the least,” the spokesperson told us. “There is much more Facebook can do to protect children on their site. Facebook, and other social media companies, need to be independently regulated and fined when they fail to keep children safe.”
In its own statement responding to the Guardian’s story, Facebook’s Monika Bickert, head of global policy management, said: “Keeping people on Facebook safe is the most important thing we do. We work hard to make Facebook as safe as possible while enabling free speech. This requires a lot of thought into detailed and often difficult questions, and getting it right is something we take very seriously. Mark Zuckerberg recently announced that over the next year, we’ll be adding 3,000 people to our community operations team around the world — on top of the 4,500 we have today — to review the millions of reports we get every week, and improve the process for doing it quickly.”
She also said Facebook is investing in technology to improve its content review process, including looking at how it can do more to automate content review — although it’s currently mostly using automation to assist human content reviewers.
“In addition to investing in more people, we’re also building better tools to keep our community safe,” she said. “We’re going to make it simpler to report problems to us, faster for our reviewers to determine which posts violate our standards and easier for them to contact law enforcement if someone needs help.”
CEO Mark Zuckerberg has previously talked about using AI to help parse and moderate content at scale — although he also warned such technology is likely years out.
Facebook is clearly pinning its long-term hopes for the massive content moderation problem it is saddled with on future automation. However, the notion that algorithms can intelligently judge such human complexities as when nudity may or may not be appropriate is very much an article of faith on the part of the techno-utopians.
The harder political reality for Facebook is that pressure from the outcry over its current content moderation failures will force it to employ a lot more humans to clean up its act in the short term.
Added to that, as these internal moderation guidelines show, Facebook's stated aim of balancing openness and free expression with 'safety' is inherently contradictory, and it invites exactly the sort of problems the company keeps running into with content moderation controversies.
