X (formerly Twitter) has taken swift action to protect its users from further exposure to "sensitive content," blocking searches for Taylor Swift amid a surge of pornographic deepfake images depicting the American pop star.
Users attempting to search for Taylor Swift on the platform encountered error messages indicating that searches for her name had been temporarily disabled.
I can't believe these Taylor Swift photos that just came out #TaylorSwiftAI pic.twitter.com/RxcVb36cs2
— Ape𝕏 (@Apex644864791) January 26, 2024
"This is a temporary action and done with an abundance of caution as we prioritize safety on this issue," Joe Benarroch, head of business operations at X, said in a statement.
Google's English Dictionary defines deepfake as "a video of a person in which their face or body has been digitally altered so that they appear to be someone else, typically used maliciously or to spread false information."
Swifties Take Action
Last week's emergence of sexually explicit and abusive deepfake images of Swift prompted a response from both the platform and her dedicated fanbase, known as "Swifties."
Swifties launched a counteroffensive on X and initiated a #ProtectTaylorSwift campaign to flood the platform with positive images of the singer and report accounts posting or sharing the deepfakes.
I GOT ONE OF THEM #ProtectTaylorSwift pic.twitter.com/A8LLQdmq1A
— hanna⸆⸉ 💋 (@so_it_goes_123) January 25, 2024
Reality Defender, a group specializing in detecting deepfakes, reported a surge in nonconsensual pornographic material featuring Swift, particularly on the Musk-owned platform.
The group identified numerous unique AI-generated images, some depicting Swift in football-related scenarios that objectified her and, in some cases, showed acts of violence against her likeness.
Deepfake technology has become a growing concern in recent years as the tools behind it have become more accessible and easier to use.
A 2019 report from AI firm DeepTrace Labs highlighted the disproportionate targeting of women by deepfake creators. Hollywood actors and K-pop singers were among the primary victims identified in the report.
Recently, deepfake technology has also been used to impersonate Anatoly Yakovenko, co-founder of the blockchain platform Solana, in fake giveaway ads on YouTube and X.
X to Hire More Content Moderators
Meanwhile, X CEO Linda Yaccarino announced the establishment of a new "Trust and Safety center of excellence" in Austin, Texas, aimed at fighting child sexual exploitation (CSE) on the platform.
"While X is not the platform of choice for children and minors – users between 13-17 account for less than 1% of our daily U.S. users – we have made it more difficult for bad actors to share or engage with CSE material on X," Yaccarino said in a statement.
"We are improving our detection mechanisms to find more reportable content on the platform to report to the National Center for Missing and Exploited Children (NCMEC)."
In 2023, X suspended 12.4 million accounts for violating CSE policies, a significant increase from 2.3 million suspensions in 2022. Additionally, the platform referred 850,000 reports to NCMEC last year, including those made through X's first fully automated reporting system.
To bolster its efforts, X is in the process of hiring 100 content moderators for its new Austin office dedicated to combating online child abuse.
Editing by Katherine 'Makkie' Maclang