Published on April 27, 2017 | Written by Emma Hinchliffe
Five months after the U.S. presidential election, Facebook is elaborating on its plan to stem the flow of propaganda through its platform — even though it still says fake news on Facebook in 2016 was “marginal” compared to total political discussion.
In a white paper structured around the topic of civic engagement, the social network outlined how it plans to stop the spread of misinformation.
“We believe civic engagement is about more than just voting — it’s about people connecting with their representatives, getting involved, sharing their voice, and holding their governments accountable,” Facebook’s threat intelligence manager Jen Weedon, threat intelligence analyst William Nuland, and Chief Security Officer Alex Stamos wrote in the white paper released Thursday.
“Given the increasing role that Facebook is playing in facilitating civic discourse, we wanted to publicly share what we are doing to help ensure Facebook remains a safe and secure forum for authentic dialogue,” they wrote.
“Information operations” is the term Facebook uses to describe “actions taken by organized actors (governments or non-state actors) to distort domestic or foreign political sentiment, most frequently to achieve a strategic and/or geopolitical outcome.”
That term encompasses “false news, disinformation, or networks of fake accounts aimed at manipulating public opinion.” Those networks of fake accounts are called “false amplifiers.”
Facebook groups this misinformation and abuse into three categories: targeted data collection, content creation (what's usually called fake news), and false amplification, meaning the networks of fake accounts and spam that spread misinformation. To combat each, the company is taking a few specific steps.
To stop bad actors from collecting data, Facebook is promoting security options like two-factor authentication. The social network will also offer custom recommendations to users who have been targeted by attackers and preemptively notify users at risk of being targeted.
Significantly, Facebook also promised in its white paper to work with government bodies responsible for election protections to notify people at risk.
As for false amplification, Facebook ran a case study around the 2016 election. The tech giant concluded that "the reach of the content shared by false amplifiers was marginal compared to the overall volume of civic content shared during the U.S. election."
“In short, while we acknowledge the ongoing challenge of monitoring and guarding against information operations, the reach of known operations during the US election of 2016 was statistically very small compared to overall engagement on political issues,” Facebook’s report said.
At the end of its white paper, Facebook acknowledged the need for wider efforts on these issues but for the most part outlined initiatives it already has going. The social network works with campaigns and political parties to counter security risks; offers governments guidance on how to do this themselves; runs the Facebook Journalism Project for news organizations; and supports media literacy programs for Facebook users.
To do all this, Facebook had to expand its definition of abuse of the social network beyond account hacking to include things like fake news.
“These are complicated issues and our responses will constantly evolve, but we wanted to be transparent about our approach,” Facebook’s report said.