Unilever threatens to pull ads from Facebook and Google
Unilever has threatened to withdraw ads from platforms like Google and Facebook if they do not do enough to police extremist and illegal content.
Unilever said consumer trust in social media is now at a new low.
"We cannot have an environment where our consumers don't trust what they see online," said Unilever's chief marketing officer Keith Weed.
He said it was in the interest of digital media firms to act before "advertisers stop advertising".
Mr Weed said companies could not continue to support an online advertising industry where extremist material, fake news, child exploitation, political manipulation, racism and sexism were rife.
"It is acutely clear from the groundswell of consumer voices over recent months that people are becoming increasingly concerned about the impact of digital on wellbeing, on democracy - and on truth itself," Mr Weed said.
"This is not something that can brushed aside or ignored. "
Unilever has pledged to:
- Not invest in platforms that do not protect children or create division in society
- Only invest in platforms that make a positive contribution to society
- Tackle gender stereotypes in advertising
- Only partner with companies creating a responsible digital infrastructure
According to research firm Pivotal, Facebook and Google accounted for 73% of all digital advertising in the US in 2017.
During 2017, Google brought in £4.4bn in revenue from online advertising, while Facebook collected £1.8bn, according to eMarketer.
Experts in digital media say that more buyers of advertising will have to join Unilever to spur change.
"The advertising ecosystem contains so many players, so for Facebook and Google to see any dent in the profits they make, there will need to be many companies that not only put their hat in the ring, but also follow through on these threats," Sam Barker, a senior analyst at Juniper Research told the BBC.
The discussion over how online platforms tackle unsavoury and extremist content is not new - it has been rising in volume over the last few years.
At the World Economic Forum in Davos last month, Prime Minister Theresa May called on investors to put pressure on tech firms to tackle the problem much more quickly.
In December, the European Commission warned the likes of Facebook, Google, YouTube, Twitter and other firms that it was considering legislation if self-regulation continued to fail.
For their part, in 2017 both Facebook and Google announced measures to improve the detection of illegal content.
Facebook said it was using artificial intelligence to spot images, videos and text related to terrorism, as well as clusters of fake accounts, while Google announced it would dedicate more than 10,000 staff to rooting out violent extremist content on YouTube in 2018.
Slow response
"Facebook and Google's response to fake news, brand safety and content prompted by hate or extremism has been very slow," Karin von Abrams, a principal analyst at eMarketer told the BBC.
"Yes they are now addressing these problems, but they should have been quicker to put money into these things. The efforts they're making are not enough at the moment to weed out these comments and content."
Ms von Abrams said that many in the digital media industry felt the tech giants had swung the power balance in the global advertising space primarily in their favour, and that the balance needed to be redressed.
However, despite their considerable power, she did not feel that the likes of Facebook and Google could afford to anger enormous commercial organisations with multi-billion pound advertising budgets.
"In the current situation, advertisers would lose out," she said. "It may be we're reaching a tipping point - fast moving consumer goods companies will pursue this...they cannot not consider the erosion of consumer trust in their brands."
A Facebook spokesperson told the BBC: "We fully support Unilever's commitments and are working closely with them."
The BBC contacted Google for comment and is waiting for a response.