Facebook has revealed that it has purged “tens of thousands” of fake accounts in the UK ahead of a general election next month.
The BBC reported this non-specific figure earlier today, with Facebook also saying it is monitoring the repeated posting of the same content or a sharp increase in messaging, and flagging accounts displaying such activity.
Providing more detail on these measures, Facebook told us: “These changes help us detect fake accounts on our service more effectively — including ones that are hard to spot. We’ve made improvements to recognize these inauthentic accounts more easily by identifying patterns of activity — without assessing the content itself. For example, our systems may detect repeated posting of the same content, or an increase in messages sent. With these changes, we expect we will also reduce the spread of material generated through inauthentic activity, including spam, misinformation, or other deceptive content that is often shared by creators of fake accounts.”
Facebook has previously been accused of liberal bias for allegedly demoting conservative views in its Trending Topics feature — which likely explains why it’s so keen to specify that the systems it has built to try to suppress the spread of certain types of “inauthentic” content do not assess “the content itself”.
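Facebook has not said how these detectors are implemented, but the behaviors it describes (repeated posting of identical content and sudden spikes in messaging) lend themselves to simple heuristics. The sketch below is purely illustrative: the function name, inputs and thresholds are hypothetical and not drawn from Facebook’s systems.

```python
from collections import Counter

def looks_inauthentic(posts, daily_message_counts):
    """Flag an account on activity patterns alone, without reading the content itself."""
    # Signal 1: repeated posting of identical content.
    duplicate_ratio = 0.0
    if posts:
        most_common_count = Counter(posts).most_common(1)[0][1]
        duplicate_ratio = most_common_count / len(posts)

    # Signal 2: a sharp spike in messaging volume versus the account's recent baseline.
    spike = False
    if len(daily_message_counts) >= 8:
        baseline = sum(daily_message_counts[:-1]) / (len(daily_message_counts) - 1)
        spike = daily_message_counts[-1] > 5 * max(baseline, 1)

    # Hypothetical cut-offs; a production system would combine many more signals.
    return duplicate_ratio > 0.8 or spike
```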
Another fake-news-related tweak Facebook says it has brought to the UK to try to combat the spread of misinformation is to take note of whether people share an article they’ve read — the rationale being that if a lot of people read something but don’t share it, that may be because the information is misleading.
“We’re always looking to improve News Feed by listening to what the community is telling us. We’ve found that if reading an article makes people significantly less likely to share it, that may be a sign that a story has misled people in some way. In December, we started to test incorporating this signal into ranking, specifically for articles that are outliers, where people who read the article are significantly less likely to share it. We’re now expanding the test to the UK,” Facebook said of the change.
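Facebook has not published how this signal is calculated, but the idea of penalizing outliers on a read-to-share ratio can be sketched with made-up numbers. Everything below, from the function name to the thresholds, is a hypothetical illustration rather than Facebook’s ranking code.

```python
def read_not_shared_penalty(reads, shares, typical_share_rate):
    """Demote outlier articles that people read but are unusually unlikely to share."""
    if reads < 1000:                      # too little data to judge
        return 0.0
    share_rate = shares / reads
    # Only clear outliers, far below the typical read-to-share rate, are demoted.
    if share_rate < 0.2 * typical_share_rate:
        return 0.5                        # hypothetical ranking demotion factor
    return 0.0

# Example: 50,000 people read the article but only 40 shared it, against a
# typical share rate of 2%, so it would be a candidate for demotion.
print(read_not_shared_penalty(reads=50_000, shares=40, typical_share_rate=0.02))
```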
The company has also taken out adverts in UK national newspapers displaying tips to help people spot fake news — having taken similar steps in France last month prior to its presidential election.
In a statement about its approach to tackling fake news in the UK, Facebook’s director of policy for the country, Simon Milner, claimed the company is “doing everything we can”.
“People want to see accurate information on Facebook and so do we. That is why we are doing everything we can to tackle the problem of false news,” he said. “We have developed new ways to identify and remove fake accounts that might be spreading false news so that we get to the root of the problem. To help people spot false news we are showing tips to everyone on Facebook on how to identify if something they see is false. We can’t solve this problem alone so we are supporting third party fact checkers during the election in their work with news organisations, so they can independently assess facts and stories.”
A spokesperson told us that Facebook’s ‘how to spot’ fake news ads are running in UK publications including The Times, The Telegraph, Metro and The Guardian.
Tips the company is promoting include being skeptical of headlines; checking URLs to see where the information comes from; asking whether photos look like they have been manipulated; and cross-referencing with other news outlets to see whether more than one source is reporting the same story.
Facebook does not appear to be running these ads in the UK newspapers with the largest readerships, such as The Sun and The Daily Mail, which suggests the exercise is mostly a PR drive: the company wants to be seen to be taking some very public steps to fight the political hot potato of fake news.
The political temperature on this issue is not letting up for Facebook. Last month, for example, a UK parliamentary committee said the company must do more to combat fake news — criticizing it for not responding fast enough to complaints.
“They can spot quite quickly when something goes viral. They should then be able to check whether that story is true or not and, if it is fake, blocking it or alerting people to the fact that it is disputed. It can’t just be users referring the validity of the story. They have to make a judgment about whether a story is fake or not,” argued select committee chairman Damian Collins.
Facebook has also been under growing pressure in the UK for not swiftly handling complaints about the spread of hate speech, extremist and illegal content on its platform — and earlier this month another parliamentary committee urged the government to consider imposing fines on it and other major social platforms for content moderation failures, in a bid to drive up moderation standards.
On top of that, Facebook’s specific role in influencing elections will again face scrutiny later today, when the BBC’s Panorama program screens an investigation into how content spread via Facebook during the US election and the UK’s Brexit referendum — including how much money the social networking giant makes from fake news.
The BBC is already teasing this spectacularly awkward clip of Milner being interviewed for the program where he is repeatedly asked how much money the company makes from fake news — and repeatedly fails to provide a specific answer.
Facebook declined to comment when we asked about the program’s claims.
Safe to say, there are some very awkward questions for Facebook here (as there have been for Google too recently, relating to ads being served alongside extremist content on YouTube). And while Milner says the company aspires to reduce “to zero” the money it makes from fake news, it’s clearly not yet in a position to say it does not financially benefit from the spread of misinformation.
And while it’s also true that some traditional media outlets can and do benefit from spreading falsehoods — earlier this year, for example, The Daily Mail was effectively branded a source of fake news by Wikipedia editors, who voted to exclude it as a source for the website on the grounds that the information it contains is “generally unreliable” — the issue with Facebook goes beyond any individually skewed editorial agenda. It’s about a massively scalable distribution technology whose core philosophy is to operate without any pre-emptive editorial checks and balances at all.
The point is, Facebook’s staggering size combined with the algorithmic hierarchy of its News Feed, which can create feedback loops of popularity, means its product can act as an amplification platform for fake news.