Regulator Ofcom to have more powers over UK social media


New powers will be given to the watchdog Ofcom to force social media firms to act over harmful content.

Until now, firms such as Facebook, TikTok, YouTube, Snapchat and Twitter have largely been self-regulating.

The companies have defended their own rules about taking down unacceptable content, but critics say independent rules are needed to keep people safe.

It is unclear what penalties Ofcom will be able to enforce to target violence, cyber-bullying and child abuse.

There have been widespread calls for social media firms to take more responsibility for their content, especially after the death of Molly Russell, who took her own life after viewing graphic content on Instagram.

The government has now announced it is "minded" to grant new powers to Ofcom - which currently only regulates the media and the telecoms industry, not internet safety.

Ofcom will have the power to make tech firms responsible for protecting people from harmful content such as violence, terrorism, cyber-bullying and child abuse - and platforms will need to ensure that content is removed quickly.

They will also be expected to "minimise the risks" of it appearing at all.

The regulator has just announced the appointment of a new chief executive, Dame Melanie Dawes, who will take up the role in March.

Molly Russell's family found she had been accessing distressing material about depression and suicide on Instagram

"There are many platforms who ideally would not have wanted regulation, but I think that's changing," said Digital Secretary Baroness Nicky Morgan.

"I think they understand now that actually regulation is coming."

Julian Knight, chair-elect of the Digital, Culture, Media and Sport Committee, which scrutinises social media companies, called for "a muscular approach" to regulation.

"That means more than a hefty fine - it means having the clout to disrupt the activities of businesses that fail to comply, and ultimately, the threat of a prison sentence for breaking the law," he said.

In a statement, Facebook said it had "long called" for new regulation, and said it was "looking forward to carrying on the discussion" with the government and wider industry.

New powers

Communication watchdog Ofcom already regulates television and radio broadcasters, including the BBC, and deals with complaints about them.

This is the government's first response to the Online Harms consultation it carried out in the UK in 2019, which received 2,500 replies.

The new rules will apply to firms hosting user-generated content, including comments, forums and video-sharing - that is likely to include Facebook, Snapchat, Twitter, YouTube and TikTok.

The intention is that government sets the direction of the policy but gives Ofcom the freedom to draw up and adapt the details. By doing this, the watchdog should have the ability to tackle new online threats as they emerge without the need for further legislation.

A full response will be published in the spring.

Children's charity the NSPCC welcomed the news.

"Too many times social media companies have said: 'We don't like the idea of children being abused on our sites, we'll do something, leave it to us,'" said chief executive Peter Wanless.

"Thirteen self-regulatory attempts to keep children safe online have failed.

"Statutory regulation is essential."


Seyi Akiwowo set up the online abuse awareness group Glitch after suffering sexist and racist harassment when a video of her speaking in her role as a councillor was posted on a neo-Nazi forum.

"When I first suffered abuse the response of the tech companies was below [what I'd hoped]," she said.

"I am excited by the Online Harms Bill - it places the duty of care on these multi-billion pound tech companies."

Global regulation

In many countries, social media platforms are permitted to regulate themselves, as long as they adhere to local laws on illegal material.

Germany introduced the NetzDG Law in 2018, which states that social media platforms with more than two million registered German users have to review and remove illegal content within 24 hours of it being posted, or face fines of up to €50m (£42m).

Australia passed the Sharing of Abhorrent Violent Material Act in April 2019, introducing criminal penalties for social media companies, possible jail sentences of up to three years for tech executives, and financial penalties of up to 10% of a company's global turnover.

China blocks many western tech giants including Twitter, Google and Facebook, and the state monitors Chinese social apps for politically sensitive content.