After much anticipation and delay, an independent consultancy finally released its report on the conduct of Meta — the social media giant that runs Facebook, Instagram, and WhatsApp — during the events of May 2021 in Israel-Palestine.
Following a bout of censorship during that violent month — which witnessed a mass Palestinian uprising, Israeli repression, and a war on Gaza — Meta commissioned Business for Social Responsibility (BSR) to conduct a review into its moderation policies for Arabic and Hebrew language content across all three platforms, and to produce a human rights due diligence report.
Among its key findings, the BSR report observed not only that Meta’s censorship violated Palestinians’ fundamental rights, but also that the company did not apply its content moderation policies equally to the two languages: Arabic content was overly moderated, while Hebrew content was largely untouched.
The conclusions are far from surprising. In fact, they firmly validate the lived experience of the majority of Palestinian users across all of Meta’s platforms, who have long argued that the company’s censorship practices are both discriminatory and systematic. The findings further add to the heaps of evidence, documented over many years, showing that Meta is far from a neutral intermediary when it comes to Israel-Palestine.
Still, while the report is a welcome outcome for transparency and accountability, it also falls short of recognizing the larger context that underpins Meta’s biased policies and actions: biases that emerged not by accident, but by design.
Shadowbans and overenforcement
During Israel’s brutal crackdown on Palestinian protestors in the Old City of Jerusalem and the neighborhood of Sheikh Jarrah in late April and May 2021, along with the military onslaught on Gaza and the uprising that ensued, many Palestinians took to social media to document, by the minute, the Israeli regime’s violence and human rights abuses. They also used the platforms to debunk disinformation about what was happening on the ground, and to share an authentic, alternative narrative to both mainstream press coverage and Israeli government propaganda.
Almost immediately, social media companies, including Meta, began clamping down on Palestinian speech. Accounts belonging to Palestinian activists, journalists, and eyewitnesses were arbitrarily suspended and their content systematically taken down. Some users also experienced shadowbans soon after they expressed public support and solidarity with Palestinians, with others finding that their Palestine-related posts had greatly reduced visibility among their followers.
At the same time, dozens of Israeli group chats of the “Death to Arabs” variety were formed on WhatsApp to organize pogroms against Palestinian communities both inside Israel and in the occupied West Bank. Racist slurs, incitement to violence, and even direct calls for murder and genocide directed at Palestinians in Hebrew went undeterred on Facebook and Instagram.
According to BSR’s findings, Meta’s overenforcement of its policies on Arabic content — which included erroneous and arbitrary takedowns and suspensions — had an “adverse impact” on Palestinians’ rights to freedom of expression, freedom of association and assembly, political participation, bodily security, non-discrimination, freedom from incitement, and access to remedy.
Most notably, BSR found that Meta censored Arabic content at a higher rate than Hebrew content during that period, and further found that the detection rate of “potentially violating Arabic content” was much higher than Hebrew. This is because Meta has built classifiers — predictive algorithms that assess whether a piece of content fits into a “class” that violates the platform’s policies — to automatically detect and remove hostile Arabic speech, while there are none for Hebrew.
Although BSR states that Meta’s bias against Palestinians is “unintentional,” this characterization of bias misses the mark on how institutional and structural discrimination and racism actually operate. In other words, the company’s content moderation system is discriminatory not only due to its selective application, but because of its very design.
Take, for example, Meta’s terrorism-related guidelines: the so-called “Dangerous Organizations and Individuals” policy, or DOI. While the company refuses to publicly state who it classifies and bans as “dangerous” or “terrorist,” a leaked list of 4,000 persons and groups shows that it disproportionately targets Muslim communities from the Middle East and South Asia.
This partly explains why, according to BSR, “Meta’s DOI policy and the list are more likely to impact Palestinian and Arabic-speaking users, both based upon Meta’s interpretation of legal obligations, and in error.”
Whereas Meta has bent this rule, among others, in the context of Russia’s invasion of Ukraine — even allowing Ukrainians to freely praise the neo-Nazi Azov Regiment as a force for self-defense — no such conflict-sensitive exceptions have ever been made for Palestinians, who are struggling against a no less brutal military occupation.
Moreover, the company’s treatment of the global non-Western majority, where only crumbs of investment and resources are allocated, is itself a structural problem that affects Palestine. From Myanmar to Ethiopia, Meta treats non-English languages and communities outside of the United States and Europe as a non-priority, despite the fatal consequences of unmoderated hate speech and incitement to violence.
The double standards witnessed in Israel-Palestine are therefore intertwined with deeper problems at the heart of the social media giant’s global practices. Contrary to Meta’s Corporate Human Rights Policy — which was launched just two months before the May crisis — the company has consistently shown blatant disregard for protecting the most vulnerable communities across its platforms.
Meta is thus not acting in blissful ignorance. Its rapid response to the Russian invasion of Ukraine demonstrates that the company can act when it wants to. Yet despite thorough documentation of censorship, disinformation, targeted violence, and hate speech against Palestinians, it has failed to take any meaningful and serious action.
In fact, the same violations are repeated over and over again. For instance, as soon as violence erupted in Jerusalem in April 2022 — almost a year after the Sheikh Jarrah protests — Facebook shut down the page of the Palestinian news site Al Qastal while it broadcast live from the Israeli occupation forces’ violent raid on Al-Aqsa Mosque. Continuously turning a blind eye to the adverse impact of its actions on an oppressed population, despite the weight of evidence, makes it clear that Meta’s bias is indeed intentional.
The result of activism
Meta’s response to the BSR report has so far been underwhelming. For one, Meta did not publicly acknowledge any wrongdoing: it footnoted that its statement “should not be construed as an admission, agreement with, or acceptance of any of the findings, conclusions, opinions or viewpoints identified by BSR, nor should the implementation of any suggested reforms be taken as admission of wrongdoing.”
For another, despite acknowledging BSR’s 21 non-binding recommendations to address the negative impacts of its actions on Palestinian rights, Meta provided no concrete timeline for taking action. These important recommendations include reviewing the company’s DOI policy and its designation of deceased historical figures, building classifiers for Hebrew language content, and providing users with transparency over enforcement actions such as feature limiting (shadowbanning). Meta also rejected one BSR recommendation that called for funding public research to examine the company’s legal counter-terrorism obligations versus its current policies and actions.
It is crucial to note that Meta’s commissioning of the report did not come out of pure goodwill, but rather from the persistent public and private campaigning of Palestinian, regional, and global activists and human rights groups calling on the company to stop silencing speech on Palestine. Now that the findings are out, we must continue to demand that Meta respect people’s rights and hold it accountable for its censorship.
At this point, Meta cannot evade responsibility for its biased moderation of Palestinian content. Systems are not created in a vacuum; they are a sum of corporate decisions. To not create classifiers for Hebrew hate speech despite its prevalence is a decision. Protecting pro-Zionist speech while deleting direct documentation of Israeli rights abuses is a decision. Answering censorship requests from an occupying power against its occupied population is a decision. It’s time for Meta to decide otherwise.