Ethiopia's Tigray conflict: What are Facebook and Twitter doing about hate speech?
Social media giants Facebook and Twitter have come under fire over their roles in the ongoing conflict in Ethiopia.
Critics say they are not doing enough to prevent the spread of hate speech and incitement to violence on their platforms - a charge the companies reject.
We've looked at some examples and what is being done to deal with them.
How influential is social media in Ethiopia?
Only about 10% of Ethiopia's population uses Facebook, according to the company. Internet adoption in the country is relatively low, and some areas lack broadband infrastructure.
However, social media is widely used by Ethiopians abroad, and Facebook posts by Prime Minister Abiy Ahmed regularly receive tens of thousands of likes and thousands of shares.
The Ethiopian government - and those who oppose it - track what appears on social media closely.
For example, Facebook - whose parent company is now called Meta - was publicly criticised by the government because it removed a post by Mr Abiy urging citizens to "bury" Tigrayan rebels, who were advancing southwards.
"Organisations like Facebook, that has been used to spread violence, has shown its true colours by deleting our prime minister's message," a government statement said.
A Meta spokesperson told the BBC: "At Meta, we remove content from individuals or organisations that violates our community standards, no matter who they are."
How much hate speech is there?
Rights group Amnesty International says it has seen a significant rise during the current conflict in social media posts that clearly incite violence and use ethnic slurs, adding that many of these "have gone unchecked".
Last month, Facebook whistle-blower Frances Haugen said of the social media giant's engagement-based ranking: "In places like Ethiopia, it's literally fanning ethnic violence."
There are also examples of incitements to violence on other platforms like Twitter.
We have obscured parts of the tweet above, posted in support of a Tigrayan rebel group, because it urges violence against Mr Abiy.
And in the example below, from a pro-Amhara Facebook account, we have chosen not to show all the text or the full image for similar reasons.
"My frustration is that they're not taking action on the clearest examples," says Timnit Gebru, a data scientist who follows Ethiopia's social media.
She was among those who recently campaigned for the removal of a post that called on people to get rid of "traitors" to the government - language campaigners believed was an "incitement to genocide".
Facebook initially said the post did not violate its policies, but later removed it.
"Even after they finally take down specific posts... they don't take action against the accounts," Ms Gebru says.
A group calling itself Network Against Hate-speech has set up its own Facebook page to flag unacceptable content, and urges readers to report further examples they find online.
The BBC also found posts that had been on Facebook for weeks and were only deleted for violations when we highlighted them.
But the company also said some of the posts we flagged were not in violation of its policy, and so were not removed.
Both pro- and anti-government users have also used the in-built reporting tools on Facebook and Twitter to try to shut down their opponents.
This has led to accusations that the social media platforms favour one side over the other.
What do social media companies say?
Twitter says it has been monitoring the situation in Ethiopia and is committed to "protecting the safety of the conversation on Twitter".
Earlier this month, the company temporarily disabled its Trends section for Ethiopia, which is meant to show the most popular topics at any given time.
"We hope this measure will reduce the risks of co-ordination that could incite violence or cause harm," it says.
But some pro-government accounts complained that this occurred just when a pro-government hashtag was among the top trends.
Facebook says it has invested in safety and security measures in Ethiopia for more than two years and has built capacity to detect hateful and inflammatory content posted in Amharic, Oromo, Somali and Tigrinya.
"We have a dedicated team, which includes native language speakers, closely monitoring the situation on the ground, who are focused on making sure we're removing harmful content," says a Meta spokesperson.
However, Facebook did not make clear how many people it employed to review content in each of those languages.
Much of the content goes unchecked, argues Mulatu Alemayehu from Addis Ababa University, who follows social media conversations.
He also says that "either knowingly or unknowingly, those posts written [in languages] other than Amharic [the most widely spoken] are less vulnerable to being reported and blocked".
This, he suggests, has created a perception that accounts posting in Amharic are being targeted disproportionately, while content in other languages goes under the radar.
Facebook says it has also made it easier for human rights organisations and civil society groups to alert it to potentially harmful content, and that it uses technology to automatically detect hateful content posted in Amharic and Oromo.
Between May and October 2021, the company says it acted on more than 92,000 pieces of content in Ethiopia on Facebook and Instagram, "about 98% of which was detected before it was reported by people to us".