Between 2016 and 2018, systematic hate speech and disinformation spread across Facebook in Myanmar, fueling violence against the Rohingya Muslim minority that culminated in mass killings, rape, and the displacement of over 700,000 people. A UN Fact-Finding Mission concluded Facebook played a "determining role" in the genocide. Internal documents obtained through lawsuits and whistleblowers revealed the company had two Burmese-speaking content moderators for a country of 18 million Facebook users, knew its recommendation algorithm amplified extremist military propaganda, and repeatedly chose growth over safety interventions.
When Facebook launched in Myanmar in 2011, it arrived in a country just beginning to emerge from decades of military dictatorship. The platform offered something unprecedented: free access through mobile partnerships that waived data charges. For millions of people getting online for the first time, Facebook wasn't just a social network—it was the internet itself.
By 2014, Facebook had 7.3 million users in Myanmar, many of whom used the terms "Facebook" and "internet" interchangeably. The company's Internet.org initiative, launched in 2013 and later rebranded Free Basics, accelerated this consolidation by providing zero-rated access to Facebook while limiting exposure to the broader web. This created an information ecosystem with a single point of control, and that control belonged to a company headquartered 8,000 miles away in Menlo Park, California.
The disparity between Facebook's market penetration and its safety infrastructure was not an accident. Internal documents later disclosed by whistleblower Frances Haugen revealed that company executives knew the platform lacked adequate moderation for most of the world's languages but prioritized growth metrics above safety systems. In Myanmar, this choice would have genocidal consequences.
Facebook employees began warning leadership about hate speech targeting Myanmar's Rohingya Muslim minority as early as 2013. These warnings came not from distant observers but from people inside the company who could see the content spreading across the platform. The Rohingya, a stateless ethnic minority rendered non-citizens by Myanmar's 1982 Citizenship Law, were being systematically dehumanized in posts that compared them to animals, called them "Bengali dogs," and advocated their extermination.
In 2015, the international free expression organization ARTICLE 19 provided Facebook with detailed documentation of anti-Rohingya hate speech and warned that the platform's inadequate moderation had created "dangerous levels of ethnic and religious hatred." Facebook met with ARTICLE 19 representatives multiple times. The company acknowledged the problem. But it failed to implement substantive changes.
"We weren't doing what we should have been doing to stop calls for violence against the Rohingya."
Facebook internal memo — 2018

This internal admission, disclosed three years later through the Haugen documents, confirmed what civil society groups had been saying for years: Facebook knew its platform was being used to incite violence and chose not to act with urgency commensurate to the risk.
The pattern was consistent: warning, acknowledgment, insufficient response. By 2016, hate speech against the Rohingya had become pervasive on Facebook in Myanmar. Military-linked accounts and civilian ultranationalist Buddhist groups used the platform to spread propaganda portraying the Rohingya as illegal immigrants, terrorists plotting to establish an Islamic state, and an existential threat to Buddhist Myanmar. Facebook's algorithm, designed to maximize engagement by surfacing content that generated strong reactions, systematically amplified this material.
Facebook's core business model depends on keeping users engaged. The longer people stay on the platform, the more advertisements they see, and the more revenue the company generates. To maximize engagement, Facebook developed recommendation algorithms that prioritize content likely to produce reactions—particularly strong emotional reactions like anger.
A 2022 Amnesty International investigation documented that hate speech against the Rohingya generated angry reactions at approximately five times the rate of neutral content. This meant that Facebook's algorithm, operating exactly as designed, gave anti-Rohingya posts massive distribution advantages. The system wasn't neutral. It was structurally biased toward the most inflammatory content.
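The underlying mechanism can be reduced to a few lines. The sketch below is a hypothetical illustration of engagement-weighted ranking, not Facebook's actual system; the field names and weights are assumptions, with the heavier weight on angry reactions standing in for the reported tendency of emotional reactions to count for more than likes.

```python
# Hypothetical sketch of engagement-weighted feed ranking.
# All names and weights are illustrative assumptions, not Facebook's real values.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int
    angry_reactions: int

# Assumed weights: signals of strong emotion count for more, so posts that
# provoke anger outrank calmer posts with comparable reach.
WEIGHTS = {"likes": 1.0, "comments": 2.0, "shares": 3.0, "angry_reactions": 5.0}

def engagement_score(post: Post) -> float:
    """Score a post purely by predicted engagement; the text is never inspected."""
    return (WEIGHTS["likes"] * post.likes
            + WEIGHTS["comments"] * post.comments
            + WEIGHTS["shares"] * post.shares
            + WEIGHTS["angry_reactions"] * post.angry_reactions)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The highest-scoring posts are shown first, regardless of what they say.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("neutral news item", likes=120, comments=10, shares=5, angry_reactions=2),
    Post("inflammatory rumor", likes=40, comments=60, shares=30, angry_reactions=80),
])
# The rumor ranks first: 40 + 120 + 90 + 400 = 650 versus 120 + 20 + 15 + 10 = 165.
```

A ranking function like this has no concept of truth or harm; whatever reliably provokes reactions is what gets distributed.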
This wasn't a bug. It was a feature—one that Facebook's own researchers had identified as problematic in multiple internal reports. A 2016 internal presentation found that 64% of people who joined extremist groups on Facebook did so because the platform's recommendation tools suggested those groups. Users weren't seeking out hate groups; Facebook was connecting them to hate groups.
In Myanmar, this mechanism transformed the platform into a radicalization engine. Users who viewed one anti-Rohingya post would be recommended similar content. The recommendation system created filter bubbles where increasingly extreme views were normalized through repetition and the appearance of consensus. When someone joined a group spreading hate speech, Facebook would suggest they join additional groups with similar content.
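The group-recommendation loop can be sketched just as simply. The example below is purely hypothetical: the group names, the tone scores, and the similarity measure are invented for illustration, but they show how a recommender that only optimizes for "more like what you already joined" drifts toward ever more extreme suggestions.

```python
# Hypothetical sketch of a "more like what you joined" group recommender.
# Group names, scores, and the similarity measure are illustrative assumptions.

# Each group is tagged with a tone score from 0 (neutral) to 1 (extreme).
GROUPS = {
    "local news":        0.1,
    "community events":  0.2,
    "nationalist memes": 0.6,
    "anti-Rohingya":     0.9,
}

def recommend(joined: list[str], catalog: dict[str, float]) -> str:
    """Suggest the unjoined group closest in tone to the user's current groups."""
    profile = sum(catalog[g] for g in joined) / len(joined)  # average tone so far
    candidates = [g for g in catalog if g not in joined]
    return min(candidates, key=lambda g: abs(catalog[g] - profile))

# Starting from a neutral group, each accepted suggestion shifts the profile
# further, until the most extreme group becomes the "closest" recommendation.
joined = ["local news"]
for _ in range(3):
    joined.append(recommend(joined, GROUPS))
print(joined)
# ['local news', 'community events', 'nationalist memes', 'anti-Rohingya']
```

Nothing in the loop selects for extremism by name; the drift falls out of optimizing for similarity and engagement alone, which is the dynamic described in the paragraphs above.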
Myanmar's military, the Tatmadaw, had ruled the country from 1962 to 2011. Even after the nominal transition to civilian government, senior military officers including Commander-in-Chief Min Aung Hlaing retained significant political power. As Facebook became Myanmar's primary communication platform, the military recognized its strategic value.
Beginning in 2012, military-linked accounts began systematically using Facebook to spread anti-Rohingya propaganda. The operation was sophisticated. It included official military accounts, fake accounts impersonating local activists and Buddhist monks, and coordinated networks that amplified military messaging. Posts were designed to generate maximum emotional reaction: graphic images with false captions, fabricated stories of Rohingya violence against Buddhists, and explicit calls for ethnic cleansing.
Reuters journalists Wa Lone and Kyaw Soe Oo, investigating a September 2017 massacre in Inn Din village where soldiers and Buddhist civilians executed ten Rohingya men and boys, found that Facebook posts had been used to coordinate the attack. Participants shared the posts, discussed logistics in Facebook Messenger, and posted photographs of the killings afterward. The journalists were arrested in December 2017 and sentenced to seven years in prison for their reporting.
Facebook did not ban Min Aung Hlaing's account until August 2018—one year after the genocide had begun and only after sustained international pressure. By that point, his posts had reached millions of people, contributing to the normalization of violence that made systematic ethnic cleansing possible.
On August 25, 2017, Myanmar military forces launched coordinated attacks on Rohingya villages across Rakhine State. The operation, described by the military as "clearance operations" in response to militant attacks, quickly became systematic ethnic cleansing.
Soldiers went village to village, killing civilians, burning homes, and committing widespread sexual violence. The UN Fact-Finding Mission later documented these as acts of genocide executed with intent to destroy the Rohingya as a group. Satellite imagery showed 392 Rohingya villages burned. Witness testimony described children thrown into fires, women gang-raped in front of their families, and men executed en masse.
Throughout the violence, Facebook remained the primary communication infrastructure. Rohingya refugees later testified that Facebook posts had been used by military units to coordinate movements and by civilian mobs to identify Rohingya for targeting. The posts provided specific information: which villages to attack, which roads Rohingya were using to flee, where displaced people were hiding.
The UN Fact-Finding Mission's September 2018 report made an extraordinary finding: Facebook had played a "determining role" in the genocide. The investigators concluded that hate speech on the platform had "substantively contributed to the level of acrimony and dissension and conflict" and that without Facebook's amplification, the genocide might not have reached the scale and intensity that it did.
The Frances Haugen disclosures in 2021 provided documentary evidence of Facebook's internal knowledge: memos acknowledging the company's failure to stop calls for violence against the Rohingya, research showing that its recommendation systems steered users toward extremist groups, and records of how thin Burmese-language moderation remained as the user base grew.
These documents demolished Facebook's public defense that it had been unaware of the problems or had acted as quickly as possible. The company had possessed detailed knowledge of the risks for years. It had received specific warnings from both internal employees and external organizations. It had the technical and financial resources to address the problems. It chose not to.
"The mechanisms of ethnic cleansing described in the UN reports are not abstract. They are code, running at scale."
Maung Zarni — Scholar and human rights advocate, 2019

Facebook's response to international condemnation followed a familiar pattern. In March 2018, six months after the peak of the violence, Mark Zuckerberg publicly acknowledged for the first time that Facebook had failed in Myanmar. In a CNN interview, he said the company "could have done more."
In August 2018, Facebook banned Min Aung Hlaing and other senior military officials—a year after the genocide began. The company claimed it had finally developed sufficient evidence to conclude these accounts violated its community standards, an explanation that prompted the question: if it took a year to gather sufficient evidence against the commander-in-chief of a military conducting genocide, what did Facebook consider sufficient evidence?
In November 2018, Facebook released a commissioned human rights assessment conducted by Business for Social Responsibility. The BSR report confirmed that Facebook had "not sufficiently assessed or addressed the heightened risks" in Myanmar and that the platform had been used to "foment division and incite offline violence." Critics noted that Facebook had commissioned the assessment only after the genocide, suggesting it was designed more for legal protection than genuine accountability.
Facebook increased its Myanmar-focused staff and content moderation capacity, but these resources came only after 700,000 people had been displaced and an estimated 10,000 killed. The timing demonstrated that Facebook could implement safety measures when facing reputational and legal consequences, raising the question of why such measures had not been implemented when facing only moral consequences.
In December 2021, Rohingya refugees filed a $150 billion class action lawsuit against Meta in California federal court. The complaint alleged that Facebook's algorithms and business model had facilitated genocide, that the company knew its platform was being used to incite violence, and that it had failed to take adequate preventive action.
The lawsuit cited specific mechanisms: Facebook's algorithm amplified hate speech because it generated engagement; the recommendation system connected users to extremist content; fake accounts operated for years despite violating Facebook's stated policies; and military propaganda was given massive distribution without fact-checking or warning labels.
Meta's defense centers on Section 230 of the Communications Decency Act, which generally shields internet platforms from liability for user-generated content. The company argues it is not responsible for what users post and that imposing liability would make platforms legally unworkable.
Plaintiffs counter that Section 230 protections should not apply when a platform's algorithms actively amplify harmful content. They argue there is a meaningful distinction between passively hosting user content and actively recommending it to millions of people. The case remains pending, and its outcome could establish precedent for algorithmic accountability.
Myanmar was not an isolated incident. Internal Facebook documents revealed similar safety failures across multiple countries. In India, WhatsApp forwards incited lynch mobs that killed dozens of people. In Ethiopia, Facebook posts were used to coordinate ethnic violence during the Tigray conflict. In the Philippines, the platform amplified President Duterte's violent anti-drug rhetoric.
The pattern was consistent: Facebook prioritized growth in developing countries; it failed to invest proportionally in content moderation and safety systems for those countries; local actors exploited the platform to incite violence; Facebook responded only after international attention; and the company characterized its failures as mistakes rather than systemic priorities.
Amnesty International's 2022 report on Myanmar argued that Facebook's failures violated the company's responsibility to respect human rights under international human rights standards. The report documented that Facebook possessed the resources and technical capacity to prevent the harms but chose not to deploy them until reputational and legal consequences materialized.
The Myanmar genocide raises fundamental questions about platform accountability that remain unresolved. If a company builds communication infrastructure used by millions of people, what responsibility does it bear for how that infrastructure is used? If an algorithm systematically amplifies certain types of content, is the platform neutral or is it making editorial decisions? When does failure to prevent foreseeable harm become complicity in that harm?
Facebook's defense has consistently been that it is a neutral platform, that users generate the content, and that the company cannot be expected to monitor billions of posts in hundreds of languages. This defense becomes more difficult when internal documents show the company knew specific harms were occurring and had the capacity to address them but chose not to because doing so would reduce engagement and thus revenue.
"Facebook had a choice between safety and engagement. It chose engagement."
Amnesty International report — March 2022

The Rohingya genocide was not caused solely by Facebook. It was the result of decades of discrimination, military authoritarianism, ultranationalist Buddhist ideology, and specific decisions by military leaders to commit atrocities. But the UN finding that Facebook played a "determining role" indicates that the platform was not merely incidental to the genocide—it was structurally significant to how the genocide unfolded.
What makes the Myanmar case particularly damning is the timeline. Facebook was warned in 2013. It had five years to implement safeguards before the August 2017 attacks. The company had the resources, the technical capacity, and the specific information needed to act. It chose growth over safety until the international cost of that choice became unbearable.
The documentary evidence from the Facebook Papers, UN investigations, and civil society reports establishes several facts beyond reasonable dispute:
Facebook knew its platform was being used to spread hate speech against the Rohingya years before the genocide. The company received warnings from both internal employees and external organizations, with specific examples of violating content.
Facebook's moderation capacity was grossly inadequate for the Myanmar market. Two Burmese-speaking content moderators in 2014 for 7.3 million users was not an oversight—it was a resource allocation decision.
Facebook's algorithm systematically amplified anti-Rohingya content because hate speech generated higher engagement than neutral content, and the algorithm was designed to maximize engagement.
The company possessed the technical and financial resources to address these problems but chose not to implement solutions until after the genocide and only under international pressure.
What remains contested is the appropriate legal and moral framework for evaluating these facts. Is a platform that builds communication infrastructure and controls its distribution through algorithms merely a neutral intermediary? Or does it bear responsibility for foreseeable harms that its choices enable and amplify?
The Rohingya genocide will not be the last time these questions arise. As long as platforms prioritize engagement over safety and deploy their infrastructure globally without proportional investment in safety systems, the pattern will repeat. Myanmar simply provided the clearest documentation of the consequences.