Why Facebook’s fake news filter won’t work

At the UN, Colin Powell holds a model vial of anthrax while arguing that Iraq is likely to possess WMDs, 2003. Wikicommons/United States Government. Some rights reserved.

Last week, Facebook made a significant intervention into the debate around ‘fake news’, trialling a new feature (for now, just in the US) which both alerts users when an article they are trying to share has been disputed by fact checkers, and appends a disclaimer if the user decides to share it.

This is a significant escalation from Facebook’s previous response to the issue, a community-led reporting feature which was widely praised as an example of responsible practice by a tech company. So far, the new feature has not received much scrutiny from the digital rights community. It should; the implications are troubling.

Before we go into why, it’s useful to think first about where the concept of fake news comes from. The phrase came to prominence in the context of the US election, as part of a broader story of Russian interference. Fears over Russia have continued to frame the debate in the US – see the (now debunked) PropOrNot list, and the recently introduced bill to investigate RT America – but fake news has since become a global phenomenon.

Despite this, pinning down what fake news actually refers to can be difficult. In December last year, Hillary Clinton memorably described an “epidemic of malicious fake news and false propaganda”, a confusing elision of different types of media which points to a wider definitional instability. Anything and everything can now be described as ‘fake news’, whether that’s polls, the entire media, or even individual people. Acknowledging this, one of the producers behind the recent CBS 60 Minutes special on fake news took pains to clarify that the programme’s focus was “not the ‘fake news’ that is invoked by politicians against the media for stories that they don’t like”, but rather “stories that are provably false, have enormous traction in the culture, and are consumed by millions of people.”

Bilge?

What is only hinted at in this formulation (with the phrase ‘enormous traction’) is the role of the digital environment – and social media in particular – which is often posited as the key driver of fake news and the related phenomenon of ‘post-truth politics’. In a Guardian interview on this topic, the editor of Snopes – one of the four fact-checking outfits which will power Facebook’s new tool – described social media in terms of an “opening of the sluice-gate”; “the bilge”, as he put it, “keeps coming faster than you can pump.” Like Clinton’s description of an “epidemic of malicious fake news”, social media is presented here as uncontrollable, riddled with infection – and toxic.

If only we could close the gates again! Before social networks, so the story goes, news – at least in open media markets like the US – was real and authoritative, based on fact rather than hysteria. “We all know that politicians have lied before,” an op-ed in The Humanist acknowledged in 2015. “Yet I sense a shift in the landscape of post-truth America. We’ve crossed some kind of frontier.”

When considering these arguments, it is important to remember that in 2003, several years before the advent of Facebook, virtually every US newspaper, including the New York Times and the Washington Post, published articles vouching for the existence of weapons of mass destruction in Iraq – claims which were later comprehensively debunked. Does this qualify as fake news? If not – why not?

When is a fact checker a fake?

Activists rally in Bryant Park in New York prior to marching to the New York Times building in midtown Manhattan on Saturday, March 25, 2017. Richard B. Levine/SIPA USA/PA Images. All rights reserved.

To be clear, I am not trying to argue that the digital environment cannot, in some cases, exacerbate the spread of misinformation, or facilitate its transmission. The internet’s radical empowerment of freedom of expression and access to information, while overwhelmingly positive for democracy and participation, also of course carries the potential for abuse. Facebook and other businesses have a role in making sure that their platforms are secure, healthy spaces for debate, freedom of expression and assembly. This requires thoughtful product design and user policies, which may include measures to deal with deliberate misinformation.

But there are clear problems with the approach Facebook is currently trialling. First of all, its very premise – that it is possible to unproblematically assess the veracity of news using fact checkers – does not stand up to scrutiny. Fact checkers are not themselves immune to accusations of partisan bias. And even if they were, an obvious philosophical problem remains: is there even such a thing as objective truth? What we understand as fact is inextricable from questions of power, representation, geography and time. It’s important to remember, when considering the implementation of a fake news filter on the world’s largest communications platform, that people used to think the earth was flat, and doctors used to recommend smoking to patients.

To some this might seem like an academic, abstract problem, especially since most of the articles affected by Facebook’s filter would probably be egregious and offensive – like the article used in the feature’s US trial, which claims Irish people were brought to America as slaves.

But consider how a fake news filter might shape the way a user experiences their timeline if, for instance, one in every ten articles were to appear with a disclaimer. Perhaps this would discourage that user from reading, or sharing, an inaccurate story; or it would give them, at least, a more critical framework through which to assess it. Undoubtedly this is the outcome that Facebook would like to see.

But what about the stories which aren’t flagged up by the fact checkers? Mistakes – whether minor or serious – are not uncommon, even among highly respected media organisations, and are often only discovered after publication; the Washington Post, for example, had to quietly qualify or withdraw two of its biggest stories last year. A fact checker would be of little use here. Indeed, the silence of Facebook’s fact-checking feature on a given article could even subconsciously encourage a user to let their guard down when reading it, and suspend their critical faculties. It is hard to see how this would improve or enrich political and intellectual culture.

Fake news, and the anxieties and structural problems for which it serves as a proxy, isn’t going anywhere. Facebook’s initiative is just one of many in the pipeline: in Germany, a draft law currently under consideration would impose fines of up to €50 million on platforms found to host fake news, while Factmata, a Google-backed startup, aims to apply a Facebook-style fact-checking system to search engines.

Developing a critical faculty

It’s beyond the scope of this article, brief and speculative as it is, to offer solutions, other than to suggest that, rather than seeking a silver bullet, we need a more holistic view of the phenomenon – one which centres the critical faculties of people, and attends to the structural factors which make people turn to ‘fake news’ in the first place.

A recent statement by the Organisation for Economic Co-operation and Development (OECD), suggesting schools should teach children how to spot fake news, potentially offers a useful starting point, and warrants further discussion. The Democracy Fund’s announcement of a $1 million fund to tackle misinformation is also welcome, particularly in its acknowledgement that a range of solutions – including stronger independent media organisations – is going to be needed.

Above all, we need a much broader conversation on this issue. All of us – businesses, civil society, media organisations and technical communities – have a role to play in this debate. It would be unwise to leave it just to the fact checkers.