LmCast :: Stay tuned in

OpenAI shelves erotic chatbot ‘indefinitely’

Recorded: March 26, 2026, noon

Original Summarized

OpenAI shelves erotic chatbot 'indefinitely' | The Verge
Much like Sora, OpenAI says it's pausing plans for ChatGPT's 'adult mode' to focus on core products.
By Jess Weatherbed, News Reporter — Mar 26, 2026, 11:58 AM UTC
Image: The Verge / Cathryn Hutton

OpenAI has paused plans to release a sexualized "adult mode" for ChatGPT, in its latest move to refocus on the company's core products. According to The Financial Times, the erotic chatbot has been shelved "indefinitely" after facing pushback from employees and investors due to the problematic and harmful effects sexualized AI content can have on society.

This decision comes in the wake of OpenAI also discontinuing its text-to-video AI platform Sora, citing "internal discussion about our broader research priorities." It's the latest side quest to be dropped by the company after CEO Sam Altman declared a "code red" in December, suggesting that competitors like Google and Anthropic are starting to close in on the ChatGPT-maker's once-unassailable lead.

OpenAI wants to spend more time researching the long-term effects of sexually explicit chats and emotional attachments before making a product decision, The Financial Times reports, but said there was currently no "empirical evidence." Last week, The Wall Street Journal also reported that the adult mode had been delayed amid internal concerns surrounding moderation and safeguarding children.

OpenAI has made a significant strategic shift, indefinitely shelving its plans for an "adult mode" within ChatGPT, a project initially conceived as a sexually explicit chatbot. The decision, reported by *The Financial Times*, was driven by internal concerns about the potential harms of sexualized AI content and by a broader refocusing of the company's priorities. It follows OpenAI's discontinuation of its text-to-video AI platform, Sora, which the company attributed to a reassessment of its longer-term research direction after CEO Sam Altman's "code red" declaration in December signaled growing competitive pressure from Google and Anthropic.

The rationale behind the postponement centers on OpenAI's desire to investigate the long-term effects of sexually explicit conversations and of emotional attachments formed in AI interactions, questions on which the company said it currently has no "empirical evidence." Earlier reporting, including from *The Wall Street Journal*, had highlighted internal anxieties about moderation and child safety in the adult mode. Shelving the project reflects a broader pattern of caution at OpenAI as it navigates the ethical and societal implications of its technology: a deliberate move away from sensitive, potentially problematic applications and toward core product development and a deeper understanding of the long-term effects of advanced AI systems, while also responding to a perceived need to address competitive pressure.