When deepfake explicit pictures of Taylor Swift recently began going viral on X (formerly Twitter), the platform eventually deleted the original poster's account, then made the pop star's name unsearchable, though certain search terms still surfaced the pornographic content.
In short, X couldn't figure out how to keep the images off its platform. This doesn't bode well for the average person who becomes a victim of a nonconsensual deepfake image or video.
After all, if social media platforms can't protect one of the world's most famous people from deepfake abuse, they certainly don't guarantee safety for unknown users, who can't lean on lawyers, publicists, and a fervent fan base for help.
Adam Dodge, a licensed attorney and founder of Ending Tech-Enabled Abuse (EndTAB), says the lack of safeguards, regulation, and robust legal protection leaves victims, who are predominantly women, to manage the fallout of nonconsensual explicit or pornographic deepfake images and videos that feature their likeness.
Dodge argues that burdening an already traumatized person with these tasks only magnifies the violation she's experienced. But unfortunately, doing it yourself is currently the main way to handle deepfake abuse.
SEE ALSO: Deepfakes of Taylor Swift have gone viral. How does this keep happening?

If you become the victim of deepfake abuse, here are six steps you can take toward protecting yourself:
1. Acknowledge the harm

Dodge says victims may hear that deepfake pornography doesn't cause real harm because the images or videos aren't real. He urges victims not to believe that rationale.
Instead, he frames AI image-based abuse as a form of violence, particularly against women. Fake images and videos can damage a woman's reputation and professional prospects, and can be used by strangers to harass and bully her online and offline. Dealing with their removal is also exhausting and emotional. In other words, consider this type of abuse against women as part of a spectrum of violence that leads to real trauma.
Before a victim starts the arduous process of dealing with deepfakes, Dodge recommends they take a moment to note that what happened is not their fault, and to validate what they're experiencing.
"Acknowledging the harm is really important for the victim, and for the people supporting them, and for the people who are creating and sharing these things, so they understand it's a deeply violent and harmful act."
There are also resources to help support victims. The U.S.-based Cyber Civil Rights Initiative has an image abuse helpline, along with a thorough guide for what to do once you've become a victim. In the United Kingdom, victims can turn to the Revenge Porn Helpline, which aids victims of intimate image abuse.
2. Document the abuse

Currently, Dodge says, the majority of AI image-based abuse happens through two mediums.
One type is perpetrated through apps that let users take an existing image of someone and turn it into a fake nude using the app's AI-powered algorithm.
The second type of abuse is generated by deepfake face-swapping apps, which can superimpose someone's face onto a preexisting pornographic image or video. Though fake, the resulting image or video is surprisingly realistic.
A third, growing type of abuse can be traced to text-to-image generators, which are capable of turning word prompts into fake nude or explicit images. (Mashable is not publishing the names of these apps due to concerns over further publicizing them for perpetrators.)
Regardless of the format used, victims should do their best to document every instance of AI image-based abuse via screenshots or saved image and video files. These screenshots and files can be used in takedown requests and legal action, when possible. For a step-by-step guide to documenting evidence, review the Cyber Civil Rights Initiative's guide.
Still, gathering this evidence can further traumatize victims, which is why Dodge recommends they enlist the help of a "support circle" to do this work.
"If [victims] do need to report it, having evidence is really critical," Dodge says.
3. Request takedowns from platforms

Social media platforms let people report when a user has posted nonconsensual images of them online. Historically, these takedown requests have helped victims whose real intimate images were shared without permission, but Dodge says victims of AI image-based abuse can use the same tool.
Each platform has its own process. For a thorough listing of online removal policies for major apps, social media platforms, and dating sites, consult the Cyber Civil Rights Initiative's guide.
Dodge also recommends the free tool offered by StopNCII.org, a nonprofit that supports victims of nonconsensual intimate image abuse. The organization's tool lets victims select an image or video of themselves that's been shared without their consent, then generates a digital fingerprint, or hash, of that content on their own device as a way of flagging it. The victim never uploads the image or video itself, so it never leaves their possession.
The organization then shares the hash with its partners, which include companies like Facebook, Reddit, and TikTok. The partners are then primed to detect content matching the digital fingerprint, and each removes any matches found on its platform.
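To make the mechanism concrete, here is a minimal sketch of client-side fingerprinting in Python. This is an illustration under stated assumptions, not StopNCII's actual implementation: the organization's real fingerprinting algorithm and partner integration are not public, so the open-source imagehash library, the phash function, and the matching threshold below are stand-ins chosen only to show the general idea.

```python
# Minimal sketch of client-side image fingerprinting, in the spirit of
# StopNCII's approach: the hash leaves the device, the image never does.
# Assumption: the `imagehash` and `Pillow` packages (pip install imagehash Pillow)
# stand in for whatever fingerprinting StopNCII actually uses.
from PIL import Image
import imagehash

def fingerprint(path: str) -> imagehash.ImageHash:
    """Compute a perceptual hash of a local image file (never uploaded)."""
    with Image.open(path) as img:
        return imagehash.phash(img)  # 64-bit perceptual hash

def likely_match(hash_a: imagehash.ImageHash,
                 hash_b: imagehash.ImageHash,
                 max_distance: int = 8) -> bool:
    """Hashes within a small Hamming distance likely depict the same image,
    even after re-encoding or resizing. The threshold here is illustrative."""
    return (hash_a - hash_b) <= max_distance  # '-' is Hamming distance for ImageHash

# A victim would compute fingerprint("my_photo.jpg") locally and submit only
# the hash string (str(h)); a partner platform could then compare hashes of
# newly uploaded content against its list of flagged fingerprints.
```

The design choice worth noting is the use of a perceptual hash rather than a cryptographic one: a perceptual hash changes only slightly when an image is resized or recompressed, so re-uploaded copies can still be matched, while the image itself is never transmitted.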
4. De-index the content from search engines

Bing and Google allow people to submit requests to de-index fake and nonconsensual pornographic images and videos from their search results. Dodge recommends that victims use this strategy to limit the discoverability of AI image-based abuse.
Google and Bing each publish step-by-step instructions for submitting these requests on their support pages.
It's important to make these requests specifically of each company. This month, NBC News found that Google and Bing search results surfaced nonconsensual deepfake porn in response to certain queries, raising questions about how frequently the companies patrolled their indexes for such content in order to remove it.
5. Explore your legal options

As of 2021, more than a dozen states, including California, Texas, and New York, had laws related to deepfake imagery, according to the Cyber Civil Rights Initiative. If you live in a state with laws prohibiting the creation of deepfake pornography or AI image-based abuse, you may be able to file a police report or sue the perpetrator. Internationally, sharing deepfake porn just became a crime in England and Wales.
Even in the many U.S. states that don't bar this type of abuse, Dodge says other related laws may apply to a victim's case, including those against cyberstalking, extortion, and child pornography.
Still, Dodge says that many police departments are unprepared and lack the resources and staff to investigate these cases, so it's important to manage expectations about what's possible. Additionally, some victims, particularly those who are already marginalized in some way, may choose not to report nonconsensual deepfakes to the authorities for various reasons, including lack of trust in law enforcement.
6. Remove your personal data from broker sites

Dodge says that victims of nonconsensual intimate imagery are sometimes targeted by strangers online if their personal information becomes connected with the content.
Even if this hasn't happened yet, Dodge recommends opting out of data broker sites that collect your personal information and sell it to anyone for a fee. Such brokers include companies like Spokeo, PeekYou, PeopleSmart, and BeenVerified. Victims will have to go to each broker to request removal of their personal information, though a service like DeleteMe can monitor and remove such data for a fee. DeleteMe charges a minimum of $129 for an annual subscription, which scans for and removes personal information every three months.
Google also has a free tool for removing some personal information from its search results.
Given how rapidly AI-powered image tools are proliferating, Dodge can't imagine a future without nonconsensual explicit images generated by AI.
Even a few years ago, committing such abuse required computing power, time, and technical expertise, he notes. Now these tools are easy to access and use.
"We couldn't be making it easier," says Dodge.
Topics: Artificial Intelligence, Social Good