Misinformation and extremism spreading unchecked. Hate speech sparking conflict and violence in the U.S. and abroad. Human traffickers sharing a platform with baby pictures and engagement announcements.
Internal documents obtained by USA TODAY show that, despite its mission to bring people closer together, Facebook knew users were being driven apart by a wide range of dangerous and divisive content on its platforms.
The documents were part of the disclosures made to the Securities and Exchange Commission by Facebook whistleblower Frances Haugen. A consortium of news organizations, including USA TODAY, reviewed the redacted versions received by Congress.
The documents provide a rare glimpse into the internal decisions made at Facebook that affect nearly 3 billion users around the globe.
Concerned that Facebook was prioritizing profits over the well-being of its users, Haugen reviewed thousands of documents over several weeks before leaving the company in May.
The documents, some of which have been the subject of extensive reporting by The Wall Street Journal and The New York Times, detail company research showing that toxic and divisive content is prevalent in posts boosted by Facebook and shared widely by users.
Concerns about how Facebook operates and its impact on teens have united congressional leaders. More political fallout could come when Haugen testifies Monday before the British Parliament.
Facebook is now facing the most intense scrutiny it has encountered since it launched in 2004.
CEO Mark Zuckerberg has defended the company and its practices, sharing in an internal staff memo that “it’s very important to me that everything we build is safe and good for kids.”
The company’s spokesman Andy Stone said in a statement to USA TODAY, “At the heart of these stories is a premise which is false. Yes, we’re a business and we make profit, but the idea that we do so at the expense of people’s safety or wellbeing misunderstands where our own commercial interests lie. The truth is we’ve invested $13 billion and have over 40,000 people to do one job: keep people safe on Facebook.”
Nick Clegg, Facebook’s vice president of global affairs, expressed a similar sentiment in an extensive memo to staff on Saturday obtained by USA TODAY. Clegg told staff that they “shouldn’t be surprised to find ourselves under this sort of intense scrutiny.”
“I think most reasonable people would acknowledge social media is being held responsible for many issues that run much deeper in society – from climate change to polarization, from adolescent mental health to organized crime,” Clegg said. “That is why we need lawmakers to help. It shouldn’t be left to private companies alone to decide where the line is drawn on societal issues.”
On Sunday, Sen. Richard Blumenthal, D-Conn., chair of the Consumer Protection Subcommittee that heard Haugen’s testimony, told CNN that Facebook “ought to come clean and reveal everything.”
The spread of misinformation
The documents reveal the internal discussions and scientific experimentation surrounding misinformation and harmful content being spread on Facebook.
A change rolled out in 2018 to the algorithm that prioritizes what users see in their News Feed was supposed to encourage “meaningful social interactions” and strengthen bonds with friends and family.
Facebook researchers discovered the algorithm change was exacerbating the spread of misinformation and harmful content, and they were actively experimenting with ways to demote and contain that content, documents show.
News Feeds with violence and nudity
Facebook’s research found that users with low digital literacy skills were significantly more likely to see graphic violence and borderline nudity in their News Feed.
The people most harmed by the influx of disturbing posts were Black, elderly and low-income users, among other vulnerable groups, the research found. Facebook also conducted numerous in-depth interviews and in-home visits with 18 of these users over several months. The researchers found that exposure to disturbing content in their feeds made the users less likely to use Facebook and exacerbated the trauma and hardships they were already experiencing.
Among the researchers’ findings: A 44-year-old in a precarious financial situation followed Facebook pages that posted coupons and savings deals and was bombarded with unknown users’ posts promoting financial scams. A person who had used a Facebook group for Narcotics Anonymous and totaled his car was shown alcoholic beverage ads and posts about cars for sale. Black users were consistently shown images of physical violence and police brutality.
By contrast, borderline hate posts appeared much more frequently in high-digital-literacy users’ feeds. Whereas low-digital-literacy users were unable to avoid nudity and graphic violence in their feeds, the research suggested people with better digital skills used them to seek out hate-filled content more effectively.
Curbing harmful content
The documents show the company’s researchers tested various ways to reduce the amount of misinformation and harmful content served to Facebook users.
Tests included straightforward engineering fixes that would demote viral content that was negative, sensational, or meant to provoke outrage.
In April 2019, company officials debated dampening the virality of misinformation by demoting “deep reshares” of content where the poster is not a friend or follower of the original poster.
Facebook found that users encountering posts more than two reshares removed from the original post are four times as likely to see misinformation.
By demoting that content, Facebook would be “easily scalable and could catch loads of misinfo,” wrote one employee. “While we don’t think it is a substitute for other approaches to tackle misinfo, it is comparatively simple to scale across languages and countries.”
Other documents show Facebook deployed this change in several countries – including India, Ethiopia and Myanmar – in 2019, but it’s not clear whether Facebook stuck with this approach in these instances.
How to moderate at-risk countries
Facebook knows of potential harms from content on its platform in at-risk countries but does not have effective moderation – either from its own artificial intelligence screening or from employees who review reports of potentially violating content, the documents show.
Another document, based on data from 2020, offered proposals to change the moderation of content in Arabic to “improve our ability to get ahead of dangerous events, PR fires, and integrity issues in high-priority At-Risk Countries, rather than playing catch up.”
A Facebook employee made several proposals, the records show, including hiring individuals from less-represented countries. Because dialects can vary by country or even region, the employee wrote, reviewers might not be equipped to handle reports written in unfamiliar dialects. While Moroccan and Syrian dialects were well represented among Facebook’s reviewers, Libyan, Saudi Arabian and Yemeni dialects were not.
“With the size of the Arabic user base and potential severity of offline harm in almost every Arabic country – as every Arabic nation save Western Sahara is on the At-Risk Countries list and deals with such severe issues as terrorism and sex trafficking – it is surely of the highest importance to put more resources to the task of improving Arabic systems,” the employee wrote.
One document from late 2020 sampled more than 1,000 hate speech reports to Facebook in Afghanistan, finding deficiencies in everything from the accuracy of translation in local languages in its community standards to its reporting process. (Afghanistan was not listed among Facebook’s three tiers of at-risk countries in a document Haugen collected before her departure in May, which was before the United States’ withdrawal.)
The report found that, in one 30-day set of data, 98% of hate speech was removed reactively in response to user reports, while just 2% was removed proactively by Facebook.
The document recommended Facebook allow employees in its Afghanistan market to review its classifiers to refine them and add new ones.
“This is particularly important given the significantly lower detection of Hate Speech contents by automation,” it said.
Platform enables human trafficking
Facebook found that its platform “enables all three stages of the human exploitation lifecycle” – recruitment, facilitation and exploitation – via complex real-world networks, according to internal documents.
Though Facebook’s public-facing community standards claim the company removes content that facilitates human exploitation, internal documents show it has failed to do so.
Facebook has investigated the issue for years, proposing policy and technical changes to help combat exploitation on its platforms, records show. But it’s unclear whether those changes were adopted. In at least one case, Facebook deactivated a tool that was proactively detecting exploitation, according to internal documents.
In October 2019, prompted by a BBC investigation, Apple threatened to remove Facebook and Instagram from its App Store after finding content promoting domestic servitude, a crime in which a domestic worker is trapped in his or her employment, abused and either underpaid or not paid at all. An internal document shows Facebook had been aware of the issue before Apple’s warning.
In response to Apple’s threat, Facebook conducted a review and identified more than 300,000 pieces of potentially violating content on Facebook and Instagram, records show. It took action on 133,566 items and blocked violating hashtags.
Contributing: Mike Snider