Taylor Swift deepfakes spark calls in Congress for new legislation


US politicians have called for new laws to criminalise the creation of deepfake images, after explicit faked photos of Taylor Swift were viewed millions of times online.

The images were posted on social media sites, including X and Telegram.

US Representative Joe Morelle called the spread of the pictures "appalling".

In a statement, X said it was "actively removing" the images and taking "appropriate actions" against the accounts involved in spreading them.

It added: "We're closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed."

While many of the images appear to have been removed at the time of publication, one photo of Swift was reportedly viewed 47 million times before being taken down.

The name "Taylor Swift" is no longer searchable on X, nor are related terms such as "Taylor Swift AI" and "Taylor AI".

Deepfakes use artificial intelligence (AI) to create fake images or videos of someone by manipulating their face or body. A study in 2023 found that there has been a 550% rise in the creation of doctored images since 2019, fuelled by the emergence of AI.

There are currently no federal laws against the sharing or creation of deepfake images, though there have been moves at state level to tackle the issue.

In the UK, the sharing of deepfake pornography became illegal as part of its Online Safety Act in 2023.

Democratic Rep Morelle, who last year unveiled the proposed Preventing Deepfakes of Intimate Images Act - which would have made it illegal to share deepfake pornography without consent - called for urgent action on the issue.

He said the images and videos "can cause irrevocable emotional, financial, and reputational harm - and unfortunately, women are disproportionately impacted".

Pornography accounts for the overwhelming majority of deepfakes posted online, and women make up 99% of those targeted in such content, according to the State of Deepfakes report published last year.

"What's happened to Taylor Swift is nothing new," Democratic Rep Yvette D Clarke posted on X. She noted that women had been targeted by the technology "for years", adding that with "advancements in AI, creating deepfakes is easier & cheaper".

Republican Congressman Tom Kean Jr agreed, saying that it is "clear that AI technology is advancing faster than the necessary guardrails".

"Whether the victim is Taylor Swift or any young person across our country, we need to establish safeguards to combat this alarming trend," he added.

Swift has not spoken publicly about the images, but the Daily Mail reported that her team is "considering legal action" against the site which published the AI-generated images.

Worries about AI-generated content have increased as billions of people vote in elections this year across the globe.

This week, a fake robocall claiming to be from US President Joe Biden sparked an investigation. It is thought to have been made by AI.
