Why Facebook Keeps Failing in Ethiopia

Published on 23rd November 2021

In late October, Dejene Assefa, an Addis Ababa–based activist known for his appearances on state television in Ethiopia, posted a message to his more than 120,000 followers on Facebook. The post exhorted his compatriots to rise up across the country and murder members of the Tigrayan ethnic group. “The war is with those you grew up with, your neighbor,” he wrote in Amharic, one of the main languages of Ethiopia. “If you can rid your forest of these thorns … victory will be yours.” The message was shared nearly 900 times and attracted over 2,000 reactions. Many of the replies echoed the call to violence and promised to heed Dejene’s advice.

Ethiopia’s federal army is currently engaged in a brutal civil war with rebel groups, mostly from the Tigray region. In Addis Ababa, police have reportedly conducted citywide raids — dragging people identified as having Tigrayan ancestry out of homes, businesses, even churches. On Facebook, calls for the murder and mass internment of ethnic Tigrayans have proliferated.

“The content is some of the most terrifying I’ve ever seen anywhere,” Timnit Gebru, a former Google data scientist and leading expert on bias in AI, who is fluent in Amharic, told Rest of World. “It was literally a clear and urgent call to genocide. This is reminiscent of what was seen on Radio Mille Collines in Rwanda.” Radio Television Libre des Mille Collines, a station set up by Hutu extremists in Rwanda, broadcast calls to violence that helped spark the genocide in the country in 1994.

Facebook knows the risks of misinformation and hate speech in Ethiopia, and it knows it doesn’t have a handle on dangerous content. In internal messages and documents, released as part of the so-called Facebook Papers leaks and seen by Rest of World, the company acknowledged in 2020 that it had insufficient moderation capabilities in Amharic, and that language barriers had prevented users from reporting problematic content. To try to fill the gap in its understanding of the Ethiopian context, the company proposed using “network-based models,” an opaque, data-driven, and experimental mechanism.

The recent surge in hate speech on Facebook in Ethiopia, which has been linked to violence, demonstrates that the company has still not fixed its problems there, a year into a civil war that has divided the country along ethnic lines.

“What I saw was shocking. It was not one random person with 10 or 100 or 1,000 followers. It was a group of leaders, with hundreds of thousands of followers, clearly instructing people what to do,” Gebru said.

“The most shocking part was the urgency, and the horrifying way in which the words were designed to make people act now.”

Ethiopia’s civil war broke out in late 2020, when the country’s prime minister, Abiy Ahmed, sent troops to the northern Tigray region to oust fighters who had attacked federal military bases. At first, federal government forces, eventually backed by soldiers from neighboring Eritrea, seemed to have the upper hand, but they have since been pushed back by a sustained counterattack. Human rights groups have documented atrocities on both sides, but a joint investigation by the Office of the United Nations High Commissioner for Human Rights and the Ethiopian Human Rights Commission found abuses committed by government forces, including massacres of ethnic Tigrayans and the use of rape as a weapon against what could amount to thousands of women. The U.S. government is considering declaring the campaign a genocide.

In Ethiopia, where journalists have been jailed and state media outlets censor all news of abuses by state and allied forces, the government’s response has been buttressed by an army of social media activists and personalities, who manufacture consent for the conduct of its forces. Several have large followings on Facebook, which has more than 6 million users in the country. These accounts have often singled out journalists, human rights activists, and anyone critical of the Ethiopian military, labeling them “traitors.”

Dehumanizing language targeting ethnic minorities has become normalized. In July, with the tide of war shifting, a frustrated Abiy Ahmed launched a Facebook tirade, vowing to crush the “cancerous” rebels he also described as “weeds.” Ethiopian government–backed Facebook accounts began using the terms to loosely refer to the entire ethnic Tigrayan population. 

The rhetoric online was not just a reflection of the political environment in the country; it likely contributed to worsening violence.

Facebook’s internal documents show that this year it identified at least two campaigns by diaspora groups, one mainly based in Egypt and the other partly based in Sudan, which were allegedly trying to stoke ethnic divisions. One was affiliated with the Fano militia, which has been accused of human rights abuses; the other with an Oromo group that was calling for violence against the state. Oromo militants are also waging war against the federal government.

“Content on Facebook has had real-life impacts on civilians,” Yohannes Ayalew, a former lecturer in law at Bahir Dar University in Ethiopia, now a PhD candidate at Monash University in Australia, told Rest of World. He pointed to a surge in hate speech and calls for “revenge” on Facebook after the murder of the Ethiopian singer and activist Hachalu Hundessa in June 2020. That led to brutal mob violence, in which hundreds of ethnic Amhara civilians and members of other minorities in Ethiopia’s Oromia region were murdered.

During testimony before a U.S. Senate subcommittee in October, Facebook whistleblower Frances Haugen said the company’s failures in Ethiopia could match those in Myanmar, where U.N. officials alleged that the company had played a prominent role in facilitating genocidal violence.

Internal documents show the reasons the company failed. Facebook knew it didn’t have sufficient coverage in local languages to proactively identify hate speech or calls to violence. It also collected few reports from users to help it identify problematic content, a shortfall it attributed to low digital literacy, reporting interfaces that confuse Ethiopian users, and a lack of local-language support.

In June 2020, employees reviewing the platform’s “signals” — the data, collected from users and partners, that it uses to understand problematic content — said that they found “significant gaps” in the most at-risk countries, especially in Myanmar and Ethiopia, “showcasing that our current signals may be inadequate.” A report this fall found that even among the “tier 1” at-risk countries, Ethiopia was an outlier, with the “lowest completion rate” for user reports.

The papers contain several references to how weak Facebook’s signals are in “ARCs” — at-risk countries.

Aware of its poor coverage in Ethiopia, Facebook proposed a different approach to tackling the problem: network-based moderation. The company began to invest in earnest in this approach after the 2016 U.S. presidential election, in response to allegations of Russian interference on its platform, Evelyn Douek, a lecturer at Harvard Law School and an expert in social media moderation, told Rest of World. Rather than using specific words or phrases to directly identify hate speech or misinformation, network-based moderation works by identifying patterns of behavior that are consistent with malicious activity.
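Facebook has released few specifics about how these models work, but the broad mechanics can be sketched. What follows is a minimal, hypothetical illustration in Python, not the company’s actual system: it links accounts that reshare the same post within minutes of one another, builds a graph of those ties, and flags unusually large clusters for human review. Every name, threshold, and data format here is an assumption made for illustration, using the open-source networkx library.

    # Hypothetical sketch of network-based moderation, NOT Facebook's system:
    # link accounts that reshare the same post within a short window, then
    # surface dense clusters of coordinated accounts for human review.
    from collections import defaultdict
    from itertools import combinations

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    WINDOW_SECONDS = 300    # assumed: shares within 5 minutes count as coordinated
    MIN_CLUSTER_SIZE = 10   # assumed: ignore small, likely organic groups

    def build_coordination_graph(shares):
        """shares: iterable of (account_id, post_id, unix_timestamp) tuples."""
        by_post = defaultdict(list)
        for account, post, ts in shares:
            by_post[post].append((account, ts))

        graph = nx.Graph()
        for events in by_post.values():
            for (a, ta), (b, tb) in combinations(events, 2):
                if a != b and abs(ta - tb) <= WINDOW_SECONDS:
                    # Each near-simultaneous reshare of the same post
                    # strengthens the tie between the two accounts.
                    prior = graph.get_edge_data(a, b, default={"weight": 0})
                    graph.add_edge(a, b, weight=prior["weight"] + 1)
        return graph

    def flag_suspicious_clusters(graph):
        """Group linked accounts into communities; return the large ones."""
        communities = greedy_modularity_communities(graph, weight="weight")
        return [c for c in communities if len(c) >= MIN_CLUSTER_SIZE]

Crucially, nothing in this sketch reads the text of any post; it operates only on who shared what, and when, which is why such models are attractive where language coverage is thin.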

“You can understand why they might do it in countries where they don’t have the language capacity, because it doesn’t rely on understanding the content of individual posts,” Douek said.

However, Douek said that this is “an especially opaque form of content moderation,” and one that the company has released few details about. This form of moderation relies on the platform’s own research and data, which it rarely shares with external researchers, and on its own models. 

Documents released as part of the Facebook Papers show that this approach is still experimental, and that it isn’t clear whether its models work in the context of hate speech, even in the U.S., where the company is based and where it has the greatest volume of data. “The networks surrounding organized hate are another clear example of harmful networks that our current policies and procedures can’t handle,” reads one internal document examining the networks that share and boost certain content, including white supremacist content in the U.S. Other documents refer to the complexity of mapping network effects, and how far the company has to go to understand them. Even so, staff seemed keen to try these models out in Ethiopia and Myanmar, “where our classifier and language signals are weaker, and where network-based models may be able to help the most.”

A spokesperson for Meta, the recently renamed holding company that owns Facebook, said in a statement: “As of now, we haven’t used this new protocol to disrupt networks in Ethiopia.”

Douek echoed other researchers, saying that even if network-based moderation does what it’s supposed to, it’s very unlikely to be enough on its own. “You cannot enter a market without the language understanding or the contextual understanding or political expertise and expect this kind of moderation to be sufficient or prevent harm,” she said. “It’s just not adequate.”

In one leaked document, Facebook staff said that once the company identifies a vulnerable country, whether because of a spike in reported problematic speech or because of an active conflict there, it can take up to a year to implement enforcement measures.

The reality may be longer. The company recognized its issues in Ethiopia as early as 2019, but in May 2021, in one conversation about measuring the prevalence of hate speech in the run-up to elections in Ethiopia, an employee wrote, “We don’t have coverage for Ethiopia due to lack of human review capacity there.” 

Analysts said that the company clearly wasn’t ready for the latest upsurge in hate and calls for violence. Accusations of “treason” against ethnic Tigrayans have become commonplace over the last few months.

In August, Ethiopian state media commentator Muktar Ousman, who has over 210,000 followers on Facebook, made exactly this accusation; two months later, two ethnically Tigrayan university lecturers were murdered, killings that Muktar celebrated in posts on Facebook and Twitter, where he has 168,000 followers.

The violence has spread beyond Tigrayan communities. Another ethnic minority, the Qemant, has come under attack by government forces and allied militias. Thousands of Qemant civilians have fled their homes for the safety of neighboring Sudan this year.

In late September, a Facebook post alleged, without evidence, that terrorists from a Qemant village had hijacked a bus and killed two people. The post drew hundreds of reactions. The next day, the village was pillaged and burned, in an attack that lasted several days.

Between late October and early November, Tigrayan fighters made their most significant gains of the war, capturing the cities of Dessie and Kombolcha, about 400 kilometers from the Ethiopian capital, Addis Ababa. The loss of two strategic cities on the highway connecting the country to neighboring Djibouti led to another escalation in violent, sectarian rhetoric on Facebook. Ethiopian government–affiliated accounts accused Tigrayan residents of the two cities of acting as spies for the rebels.

“While safety work in Ethiopia has been going on for a long time, we know that the risks on the ground right now are higher,” Mercy Ndegwa, public policy director for East and Horn of Africa at Meta, said in a statement sent to Rest of World. “We … stand ready to take additional action to meet the demands of this ongoing human rights situation.”

Facebook said it has recently designated Ethiopia a “temporary high-risk location,” and promised to remove posts promoting violence and misinformation. It said it had added the Oromo Liberation Army, an armed rebel group, to a list of blacklisted entities.

In October, Gebru took to Twitter to express alarm about Dejene’s tirade and reported it to Facebook. It took nearly 24 hours for the post to be taken down. It isn’t clear on what grounds it was removed, and Gebru said she was initially told the post didn’t violate the site’s community standards. The content was widely shared and can still be found, word for word, on the pages of other government supporters.

In early November, Facebook removed a post by Prime Minister Abiy Ahmed, in which he called on citizens to rise up and “bury” the rebels, for violating its rules on inciting violence. It was an important intervention, analysts said, but a day after Abiy’s post was taken down, Addis Ababa mayor Adanech Abiebie took to Facebook to applaud volunteers conducting citywide neighborhood-watch searches for rebel sympathizers, adding, “Without any doubt the junta [a term used to refer to Tigrayan rebels] will be buried wherever they roam!” That post has yet to be removed from the platform.

“It seems to me that what Facebook is doing is a lip service for its bombshell criticisms in all corners,” Ayalew, from Monash University, said.

Zecharias Zelalem is a freelance journalist from Ethiopia. His work has appeared in Al-Jazeera, Quartz, the Addis Standard and Open Democracy.

Peter Guest is the enterprise editor for Rest of World.

Courtesy: Rest of World

