LmCast :: Stay tuned in

Deepfake ‘Nudify’ Technology Is Getting Darker—and More Dangerous

Recorded: Jan. 27, 2026, noon

Original

By Matt Burgess | Security | Jan 26, 2026, 6:30 AM

Sexual deepfakes continue to get more sophisticated, capable, easy to access, and perilous for millions of women who are abused with the technology.

Photograph: Getty Images

Open the website of one explicit deepfake generator and you’ll be presented with a menu of horrors. With just a couple of clicks, it offers you the ability to convert a single photo into an eight-second explicit video clip, inserting women into realistic-looking graphic sexual situations. “Transform any photo into a nude version with our advanced AI technology,” text on the website says.

The options for potential abuse are extensive. Among the 65 video “templates” on the website are a range of “undressing” videos in which the women being depicted remove clothing—but there are also explicit video scenes named “fuck machine deepthroat” and various “semen” videos. Each video costs a small fee to generate; adding AI-generated audio costs more.

The website, which WIRED is not naming to limit further exposure, includes warnings saying people should only upload photos they have consent to transform with AI. It’s unclear whether there are any checks to enforce this.

Grok, the chatbot created by Elon Musk’s companies, has been used to create thousands of nonconsensual “undressing” or “nudify” bikini images—further industrializing and normalizing the process of digital sexual harassment. But it’s only the most visible tool—and far from the most explicit. For years, a deepfake ecosystem comprising dozens of websites, bots, and apps has been growing, making it easier than ever before to automate image-based sexual abuse, including the creation of child sexual abuse material (CSAM). This “nudify” ecosystem, and the harm it causes to women and girls, is likely more sophisticated than many people understand.

“It’s no longer a very crude synthetic strip,” says Henry Ajder, a deepfake expert who has tracked the technology for more than half a decade. “We’re talking about a much higher degree of realism of what’s actually generated, but also a much broader range of functionality.” Combined, the services are likely making millions of dollars per year. “It’s a societal scourge, and it’s one of the worst, darkest parts of this AI revolution and synthetic media revolution that we’re seeing,” he says.

Over the past year, WIRED has tracked how multiple explicit deepfake services have introduced new functionality and rapidly expanded to offer harmful video creation. Image-to-video models now typically need only one photo to generate a short clip. A WIRED review of more than 50 “deepfake” websites, which likely receive millions of views per month, shows that nearly all of them now offer explicit, high-quality video generation and often list dozens of sexual scenarios women can be depicted in.

Meanwhile, on Telegram, dozens of sexual deepfake channels and bots have regularly released new features and software updates, such as different sexual poses and positions.
For instance, in June last year, one deepfake service promoted a “sex-mode,” advertising it alongside the message: “Try different clothes, your favorite poses, age, and other settings.” Another posted that “more styles” of images and videos would be coming soon and that users could “create exactly what you envision with your own descriptions” using custom prompts to AI systems.

“It’s not just, ‘You want to undress someone.’ It’s like, ‘Here are all these different fantasy versions of it.’ It’s the different poses. It’s the different sexual positions,” says independent analyst Santiago Lakatos, who along with media outlet Indicator has researched how “nudify” services often use big technology companies’ infrastructure and have likely made significant money in the process. “There’s versions where you can make someone [appear] pregnant,” Lakatos says.

A WIRED review found more than 1.4 million accounts were signed up to 39 deepfake creation bots and channels on Telegram. After WIRED asked Telegram about the services, the company removed at least 32 of the deepfake tools. “Nonconsensual pornography—including deepfakes and the tools used to create them—is strictly prohibited under Telegram’s terms of service,” a Telegram spokesperson says, adding that the company removes such content when it is detected and removed 44 million pieces of content that violated its policies last year.

Lakatos says that, in recent years, multiple larger “deepfake” websites have solidified their market position and now offer APIs to other people creating nonconsensual image and video generators, allowing more services to spring up. “They’re consolidating by buying up other different websites or nudify apps. They’re adding features that allow them to become infrastructure providers.”

So-called sexual deepfakes first emerged toward the end of 2017 and, at the time, required technical knowledge to create sexual imagery or videos. The widespread advances in generative AI systems over the past three years, including the availability of sophisticated open source photo and video generators, have made the technology more accessible, more realistic, and easier to use.

General deepfake videos of politicians and of conflicts around the world have been created to spread misinformation and disinformation. But sexual deepfakes have continually caused widespread harm to women and girls. At the same time, laws to protect people have been slow to be implemented or have not been introduced at all.

“This ecosystem is built on the back of open-source models,” says Stephen Casper, a researcher working on AI safeguards and governance at the Massachusetts Institute of Technology, who has documented the rise in deepfake video abuse and its role in generating nonconsensual intimate imagery. “Oftentimes it’s just an open-source model that has been used to develop an app that then a user uses,” Casper says.

The victims and survivors of nonconsensual intimate imagery (NCII), including deepfakes and other nonconsensually shared media, are nearly always women. False images and nonconsensual videos cause huge harm, including harassment, humiliation, and feeling “dehumanized.” Explicit deepfakes have been used to abuse politicians, celebrities, and social media influencers in recent years.
But they have also been used by men to harass colleagues and friends, and by boys in schools to create nonconsensual intimate imagery of their classmates.

“Typically, the victims or the people who are affected by this are women and children or other types of gender or sexual minorities,” says Pani Farvid, associate professor of applied psychology and founder of The SexTech Lab at The New School. “We as a society globally do not take violence against women seriously, no matter what form it comes in.”

“There’s a range of these different behaviors where some [perpetrators] are more opportunistic and do not see the harm that they’re creating, and it is based on how an AI tool is also presented,” Farvid says, adding that some AI companion services can target people with gendered services. “For others, this is because they are in abusive rings or child abuse rings, or they are folks who are already engaging in other forms of violence, gender-based violence, or sexual violence.”

One Australian study, led by the researcher Asher Flynn, interviewed 25 creators and victims of deepfake abuse. The study concluded that a trio of factors—increasingly easy-to-use deepfake tools, the normalization of creating nonconsensual sexual images, and the minimization of harms—could affect prevention of, and responses to, the still-growing problem. Unlike the widespread public sharing of nonconsensual sexual images created using Grok on X, explicit deepfakes were more likely to be shared privately with victims or their friends and family, the study found. “I just simply used the personal WhatsApp groups,” one perpetrator told the researchers. “And some of these groups had up to 50 people.”

The research found four primary motivations for the deepfake abuse; of the 10 perpetrators interviewed, eight identified as men. The motivations included sextortion, causing harm to others, getting reinforcement or bonding from peers, and curiosity about the tools and what they could do with them.

Multiple experts WIRED spoke to said many of the communities developing deepfake tools have a “cavalier” or casual attitude toward the harms they cause. “There's this tendency of a certain banality of the use of this tool to create NCII or even to have access to NCII that are concerning,” says Bruna Martins dos Santos, a policy and advocacy manager at Witness, a human rights group.

For some abusers creating deepfakes, the technology is about power and control. “You just want to see what’s possible,” one abuser told Flynn and fellow researchers involved in the study. “Then you have a little godlike buzz of seeing that you’re capable of creating something like that.”

Summarized

Deepfake “nudify” technology is rapidly evolving, presenting a growing and increasingly dangerous threat directed primarily at women. As deepfake expert Henry Ajder explains, the technology has moved beyond crude synthetic strips to generate remarkably realistic, high-quality videos, often depicting explicit sexual scenarios. One service, which WIRED is not naming, offers 65 video templates ranging from “undressing” clips to far more explicit scenes, a sign of how accessible and sophisticated these tools have become.

The issue isn’t merely the creation of harmful imagery; it’s the industrialization and normalization of the process. Grok, the chatbot built by Elon Musk’s companies, has been used to create thousands of nonconsensual “undressing” images, and it is only the most visible part of a much larger ecosystem. Dozens of deepfake websites and Telegram channels offer these services, and some of the larger sites now provide APIs that let further services build on their infrastructure. The scale is stark: a WIRED review found more than 1.4 million accounts signed up to 39 deepfake creation bots and channels on Telegram.

Several factors contribute to this escalating threat. First, the availability of open-source models has dramatically lowered the barrier to entry. Second, increasingly sophisticated generative AI systems produce more realistic output. Third, a “cavalier” or casual attitude toward the harms is pervasive within some deepfake communities, compounded by opportunistic behavior, the normalization of creating nonconsensual imagery, and the minimization of the harms caused.

Motivations for creating these deepfakes vary. In interviews with 10 perpetrators, eight of whom identified as men, researchers found four primary drivers: sextortion, causing harm to others, reinforcement or bonding with peers, and curiosity about what the tools could do. Rather than being posted publicly, the resulting images were more often shared privately, sometimes through WhatsApp groups of up to 50 people, underscoring how personal and targeted the dissemination can be.

Experts such as Pani Farvid emphasize that victims are overwhelmingly women and children, reinforcing a broader pattern of gender-based violence. The deepfakes cause immense harm, including harassment, humiliation, and a profound sense of dehumanization. Researcher Asher Flynn’s study identified three dynamics shaping prevention and response: increasingly easy-to-use tools, the normalization of creating nonconsensual images, and the minimization of the harm they cause.

Legal protections and responses, meanwhile, have been slow to materialize, and the technology is spreading within an existing landscape of online harassment that offers victims limited legal recourse. Addressing the problem will require a multifaceted response, including regulation and legal frameworks, technological countermeasures, and broader societal awareness.