From Myanmar to the Philippines, the internet giant's record speaks volumes about its commitment, or lack thereof, to tackling hate speech and violence.
Full disclosure: Facebook is one of the sponsors of The Media Rumble, the annual event organised by Newslaundry in partnership with Teamwork Arts.
In February, Bharatiya Janata Party leader Kapil Mishra made an incendiary speech, recorded and widely shared on social media, that was blamed as the immediate trigger for the communal violence in Northeast Delhi, which left at least 53 people dead and 200 injured.
In June, during a meeting with the company's 25,000 employees, Facebook chief Mark Zuckerberg obliquely referenced Mishra's speech while outlining the dangerous relationship between social media and violence. But by the time Facebook took down the post, it had already been widely shared and, within hours, riots had broken out.
Facebook, which once offered hope to free speech advocates, is today embroiled in accusations that the company has done little to curb hate speech on its platform. In India, the world’s largest social network is facing tough questions regarding its soft approach to the regulation of hateful content.
Two days ago, Reuters reported that 11 Facebook employees had written an open letter to their leadership, demanding that "company leaders acknowledge and denounce 'anti-Muslim bigotry' and ensure more 'policy consistency'", and that the policy team be made more diverse. "The Muslim community at Facebook would like to hear from Facebook leadership on our asks," the letter said.
The same day, Facebook announced that it was “expanding” on its “Dangerous Individuals and Organizations policy” to “address organisations and movements that have demonstrated significant risks to public safety”. In the process, Facebook said it had already removed over 790 groups, 100 pages and 1,500 ads tied to QAnon — a conspiracy theory that US president Donald Trump himself has supported. Facebook also claimed to have blocked over 300 hashtags across Facebook and Instagram, and additionally imposed restrictions on over 1,950 groups and 440 pages on Facebook and over 10,000 accounts on Instagram.
Facebook announced in 2017 that it would do its best to fact-check content and reduce the spread of misinformation. But over the last few years, a series of instances of offline violence in Sri Lanka, Myanmar, the Philippines, and now India, can be traced back to online content that was allowed to stay up for far too long.
It is not that Facebook does not recognise its role in, for example, the "very real human rights impact" of the 2018 violence in Sri Lanka. On several occasions, the platform has admitted to aggravating offline crimes. But recent revelations show that Facebook's slow response is not entirely the result of human error; it also reflects deliberate decisions to ensure the politically powerful are not held accountable.
Let’s take a look at what has emerged about Facebook’s role in inciting violence across three countries.
Myanmar
Through 2016 and 2017, Myanmar's military and Buddhist militias are said to have massacred, tortured and raped the Rohingya, a Muslim minority, and burned down their villages, forcing over eight lakh of them to flee to neighbouring Bangladesh. In November 2019, the Gambia took Myanmar to the International Court of Justice, accusing it of committing genocide against the Rohingya. In January 2020, the court urged Myanmar to "take all measures within its power" to prevent further physical or mental harm to the Rohingya community, including by the military. It also ordered the preservation of evidence related to the genocide allegations.
But where does Facebook come in?
In March 2018, UN human rights experts said Facebook had played a “determining role” in spreading violence in Myanmar. In November that year, Facebook admitted that its platform had been unwittingly used to incite violence and that it wasn’t doing enough to prevent the platform from being used to “foment division and incite offline violence”. The company committed to providing information to the investigators of the human rights violations in Myanmar.
To this end, Facebook said it would preserve content, including information on accounts and pages that had been removed in August and October 2018.
In June this year, the Gambia filed an application in a United States federal court seeking information on officials and military units from the accounts and pages “preserved” by Facebook. The Gambia had filed a similar application against Twitter in May, but the case was withdrawn “presumably because Twitter agreed to cooperate”, Time reported.
Facebook, however, rejected the Gambia's request, claiming it was "extraordinarily broad" and "unduly intrusive or burdensome". It also claimed that handing over the information would violate a section of the Stored Communications Act which, Time reported, is a US federal law that "prevents social media companies from releasing communications and data to third parties on a whim".
But, as Time pointed out, "The law is intended to protect the privacy of individuals, not shield unlawful actions of State actors." And since most of the pages Facebook took down were public or contained information meant for public viewing, the law does not require the company to keep them private.
Sri Lanka
In March 2018, a Facebook post in Sinhalese read: "Kill all Muslims, do not spare even an infant. They are dogs." Six days later, the post was still up, even as violence in Sri Lanka's Kandy left two people dead, 450 Muslim homes and shops vandalised, and 60 vehicles burnt. The post had been reported for violating the company's community standards, as the Guardian noted, but that didn't seem to matter.
In May this year, two years after the violence, Facebook admitted to not taking strict action against abusive content on its platform in Sri Lanka. "We recognise, and apologise for, the very real human rights impacts that resulted," it said. Facebook announced that it would hire content moderators who spoke and understood local languages, and deploy technology to detect signs of hate speech and keep abusive content from spreading. According to Facebook, it currently has "more than 35,000 people working on safety and security".
The company also recently announced that, from August this year onward, it would release its community standards enforcement report quarterly in order to "effectively track our progress and demonstrate our continued commitment to making Facebook and Instagram safe and inclusive".
Yet in the run-up to the 2019 presidential election, the Guardian reported that Gotabaya Rajapaksa, now president, shared fake news reports on his official Facebook page that went viral, despite AFP having debunked them. The Guardian said: "Sri Lankan civil society groups are sounding the alarm about Facebook's policies in advance of the...presidential election, with warnings that the company's controversial decision to allow politicians to advertise misinformation is 'inappropriate and incendiary to boot'."
Facebook later told the Guardian that it had "teams of people dedicated to protecting Sri Lanka's upcoming election" and that it hoped to "play a positive role in the democratic process".
Interestingly, the current Sri Lankan prime minister, Mahinda Rajapaksa, met Ankhi Das, the top public policy executive at Facebook India, last January to discuss an "array of issues", including the "increasing circulation of fake news". Das has been in the news this month after the Wall Street Journal named her in a story on how the platform's hate speech rules "collide" with Indian politics.
The Philippines
In the Philippines, where smartphones outnumber people, Facebook is accused of colluding with Rodrigo Duterte's authoritarian regime to legitimise violence and suppress dissent.
In 2015, Facebook launched internet.org in the Philippines, an app that gave the entire population free access to select internet services, including, of course, the Facebook ecosystem. In 2016, Duterte was elected president.
Ahead of the election, Facebook held training sessions with candidates, including Duterte, on how best to use the platform. As Bloomberg reported, Duterte's team "constructed a social media apparatus unlike that of any other candidate in the race". It deployed smear campaigns and threats of violence, yet Facebook, instead of condemning this, called Duterte the "undisputed king of Facebook conversations".
In February 2017, Leila de Lima was arrested on charges of drug trafficking. The senator had initiated an investigation into extrajudicial killings carried out by Duterte's "death squads". Before her arrest on seemingly trumped-up charges, she had been targeted with viral Facebook posts and photographs about, among other things, her "promiscuous sexual behaviour".
According to BuzzFeed, this was part of a narrative "propagated by Duterte". Facebook admitted to BuzzFeed that de Lima's photos "violated its policies and were removed" and that it had tried to stop fake news from being spread – but only after de Lima was put behind bars.
Facebook has also been denounced by Maria Ressa, one of the country's most prominent journalists, who has taken on the Duterte regime over corruption and the drug war. Ressa has consistently been critical of Facebook, holding the social media giant responsible for the spread of disinformation and hatred.
In an interview with the Centre for International Governance Innovation, Ressa said she had gathered data on fake Facebook accounts and taken it to the company. "I said, 'This is really alarming, look at these numbers.' The people I met with were shocked but didn't know what to do. At the end of the meeting, I said, 'You have to do something, because [if not] Trump could win.' And we all laughed because that didn't seem possible then…and then in November, he won."
“Facebook broke democracy in many countries around the world,” Ressa added, “including in mine.”