LmCast :: Stay tuned in

Why nobody’s stopping Grok

Recorded: Jan. 22, 2026, 6:03 p.m.

Original

Why nobody’s stopping Grok
How Elon Musk and xAI are putting a nail in the coffin of content moderation.
By Nilay Patel, Editor-in-Chief | Jan 22, 2026, 4:45 PM UTC
Part of: The latest on Grok’s gross AI deepfakes problem

Nilay Patel is editor-in-chief of The Verge, host of the Decoder podcast, and co-host of The Vergecast.

Today’s episode of Decoder is about X, Grok, and Elon Musk. By now we’re several weeks into one of the worst, most upsetting, and most stupidly irresponsible AI controversies in the short history of generative AI. Grok, the chatbot made by Elon Musk’s xAI, is able to make all manner of AI-generated images, including nonconsensual intimate images of women and minors.

Because Grok is connected to X, the platform formerly known as Twitter, users can simply ask Grok to edit any image on that platform, and Grok will mostly do it and then distribute that image across the entire platform. Across the last few weeks, X and Elon have claimed over and over that various guardrails have been imposed, but up until now they’ve been mostly trivial to get around.
It’s now become clear that Elon wants Grok to be able to do this, and he’s very annoyed with anyone who wants him to stop, particularly the various governments around the world that are threatening to take legal action against X.

This is one of those situations where if you just describe the problem to someone, they will intuitively feel like someone should be able to do something about it. It’s true — someone should be able to do something about a one-click harassment machine like this that’s generating images of women and children without their consent. But who has that power, and what they can do with it, is a deeply complicated question, and it’s tied up in the thorny mess of history that is content moderation and the legal precedents that underpin it. So I invited Riana Pfefferkorn on the show to come talk me through all of this.

Riana has joined me before to explain some complicated internet moderation problems. Right now, she’s a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, and she has a deep background in what regulators and lawmakers in the US and around the world could do about a problem like Grok, if they so choose.

So Riana really helped me work through the legal frameworks at play here, the various actors involved that have leverage and could apply pressure to affect the situation, and where we might see this all go as xAI does damage control but largely continues to ship this product that continues to do real harm.

Here’s one thing I’ve been thinking about a lot as this entire situation has unfolded. Over the past 20 years or so, the idea of content moderation has gone in and out of favor as various kinds of social and community platforms have waxed and waned.
The history of a platform like Reddit, for example, is just a microcosm of the entire history of content moderation.

Around 2021, we hit a real high-water mark for the idea of moderation and trust and safety on these platforms as a whole. That’s when covid misinformation, election lies, QAnon conspiracies, and incitement of mobs at the Capitol could actually get you banned from most of the major platforms… even if you were the president of the United States.

It’s safe to say that era of content moderation is over, and we’re now somewhere far more chaotic and laissez-faire. It’s possible Elon and his porn-y image generator will push that pendulum to swing back, but even if it does, the outcomes might still be more complicated than anyone wants.

If you’d like to read more about what we discussed in this episode, check out these links:

- Grok’s gross AI deepfakes problem | The Verge
- Grok is undressing children — can the law stop it? | The Verge
- Tim Cook and Sundar Pichai are cowards | The Verge
- Senate passes a bill that would let nonconsensual deepfake victims sue | The Verge
- EU looks to ban nudification apps following Grok outrage | Politico
- Grok flooded X with millions of sexualized images in days | The New York Times
- The Supreme Court just upended internet law, and I have questions | The Verge
- Mother of Elon Musk’s son sues xAI over sexual deepfake images | AP

Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!

Decoder with Nilay Patel: a podcast from The Verge about big ideas and other problems.

Summarized

Grok, the AI chatbot developed by Elon Musk’s xAI, has rapidly become embroiled in a significant controversy due to its ability to generate non-consensual intimate images of women and minors, primarily disseminated through the social media platform X (formerly Twitter). The situation highlights a critical failure in content moderation and raises profound questions about the responsibility of AI developers, the role of social media platforms, and the existing legal framework’s adequacy in addressing this novel form of abuse.

The core of the problem lies in Grok’s functionality: users can instruct the bot to edit images on X, and Grok then distributes the modified images across the platform. The result has been a flood of deeply disturbing images, over 1.8 million shared within nine days, many featuring women and children in explicitly sexualized, non-consensual contexts. The speed of that spread demonstrates a critical flaw in the system and a vulnerability that was quickly exploited.

Several actors are implicated in this unfolding crisis. Elon Musk and xAI are directly responsible for developing and deploying Grok, and the decision to allow, or at least not actively prevent, the generation and distribution of these images represents a serious lapse in judgment and a disregard for the potential harm. X, as the platform hosting and distributing this content, bears responsibility for failing to adequately address the issue despite the obvious and immediate danger it posed.

However, the situation is further complicated by the legal and regulatory landscape. The existing framework surrounding content moderation and intellectual property rights is ill-equipped to handle AI-generated deepfakes. Attempts to legally compel xAI to cease the distribution of these images have been met with resistance, highlighting the challenges of applying traditional legal concepts to novel technologies. California’s Attorney General, for instance, has issued a cease and desist letter, illustrating a potential avenue for legal action, but the process is likely to be protracted and complex.

Furthermore, the legal response is hampered by a lack of clear precedents. While concepts like defamation and intellectual property rights could theoretically be invoked, their application to AI-generated deepfakes remains uncertain. The existing laws around consent and non-consensual pornography offer some potential leverage, but they are not specifically designed for this scenario, and their enforcement would require significant legal innovation.

The broader context of content moderation reveals a recent history of shifting priorities and waning confidence in social media platforms’ ability to manage harmful content. Around 2021, following events such as the Capitol riot and widespread misinformation during the COVID-19 pandemic, platforms reached a high-water mark for content moderation, demonstrating a commitment to removing harmful content and enforcing community standards, albeit with varying degrees of success. That period gave way to a rollback: a shift toward a more laissez-faire approach and an increase in the volume and sophistication of harmful content. The situation with Grok exemplifies this trend, illustrating a failure to learn from past mistakes and a continuing reluctance to invest in robust content moderation.

The controversy surrounding Grok is a critical test case for the future of AI development and content moderation. It reveals significant vulnerabilities within existing technological and legal frameworks, demanding a fundamental reassessment of how we approach these challenges. It’s clear a proactive approach—one driven by ethical considerations and a robust regulatory framework—is urgently needed to mitigate the risks posed by increasingly sophisticated AI technologies.