Facebook whistleblower Frances Haugen came out swinging in her October 4 statement to the US Senate: “Facebook’s products harm children, stoke division, weaken our democracy, and much more.” Internal documents leaked by Haugen to the Wall Street Journal, the Securities and Exchange Commission, and Congress revealed that Facebook prioritizes profits over the health and well-being of individuals, society, and our government.
But one element of her testimony hasn’t yet captured the headlines, and it is far more dangerous. According to Haugen, the rise of social media also leads to “violence that harms and even kills people.” This statement made me gasp. I have been studying civil wars for 30 years, and it was the first time someone affiliated with a social media platform had publicly acknowledged what experts have been observing since the mid-2000s. After declining in the 1990s, violence began to increase around the same time that social media became ubiquitous: violent protests, hate crimes, domestic terror, ethnic conflict, and even civil war. Experts suspected that social media made it easier for the more extreme members of societies to spread their message, radicalize followers, and build and mobilize a base. And as various forms of political violence escalated in the 2010s, they began to wonder whether the recommendation engines at companies like Facebook, which are optimized for engagement and thus prioritize polarizing content, were accelerating the hate and division.
The problem was that no one outside of Facebook could prove it. Because Facebook controlled all the data and refused to share most of it, there was no way to demonstrate a direct link between what people were seeing on their Facebook feeds and the rise in hate and violence around the world.
Then came Frances Haugen. An insider, she had access to tens of thousands of pages of in-house Facebook research showing that this connection likely existed, and she was determined to share it to prevent further damage. She said out loud what many civil war experts suspected: that Facebook had long known its algorithms were inciting hate and violence. It knew that its content-ranking system prioritized angry, extreme, and often false information over less engaging feel-good content and the truth. It knew that the changes it had made to its platform and algorithms over the years were making societies less safe and less stable. And it did nothing.
But look more closely at Facebook’s expansion into particular regions and countries, and the threat becomes clear. Take sub-Saharan Africa. When civil wars began to increase in the mid-2000s, sub-Saharan Africa was the one region of the globe where violence continued to decrease. This puzzled researchers, because the region had many of the risk factors for civil war: weak governance, ethnically factionalized populations, and key societal groups excluded from power. But sub-Saharan Africa also had the lowest rates of Internet penetration in the world. That began to change around 2014, when social media became a primary means of communication. Facebook, YouTube, and Twitter made inroads in the region starting in 2015, and as they did, the level of conflict rose. In Ethiopia, for example, longstanding tensions between Tigrayans and Oromos began to boil over in 2019, when a series of fake videos posted online claimed that local officials were arming young men. And in the Central African Republic, hate speech promoted on social media has recently been stoking divisions between Muslims and Christians.
Myanmar is an even more disturbing case. The country had a history of ethnic conflict long before Facebook existed. One of the fault lines was between the majority Buddhist population and the Muslim minorities living along Myanmar’s borders, whom many Buddhists regarded as immigrants from neighboring countries. Facebook launched in Myanmar in 2011, and a year later a group of Buddhist ultranationalists used the platform to target Muslim populations throughout the country, blaming them for local violence and describing them as invaders and a threat to the Buddhist majority. Before long, Myanmar’s military leaders were using Facebook to bolster their power by posting hate speech and false news stories. In June 2012, local mobs began a systematic campaign to expel Rohingya Muslims from the Rakhine region of Myanmar. In the years that followed, dozens of journalists, human rights organizations, foreign governments, and even citizens of Myanmar alerted Facebook to the spread of hate speech and misinformation on its platform. But Facebook remained silent, refusing to acknowledge the problem, even though the platform was the main news source for most citizens of Myanmar, as the company well knew. By January 2018, an estimated 24,000 Rohingya had been killed, and 700,000 of the nearly one million Rohingya had been forced to flee, in what became the largest human exodus in Asia since the Vietnam War.
These cases suggest that as social media penetrates a country and gains a larger share of people’s attention, ethnic factions are likely to grow, social divisions are likely to widen, resentment toward immigrants will almost certainly increase, and violence could easily expand. But such conclusions were just theories until Haugen leaked Facebook’s internal research. That research supports what we have been observing around the world: open, unregulated social media platforms appear to be an accelerant for the conditions that lead to civil war.