Anthropic’s Claude Code gets ‘safer’ auto mode
Recorded: March 25, 2026, 2 p.m.
| Original | Summarized |
Anthropic’s Claude Code gets ‘safer’ auto mode
The feature is a middle ground between cautious handholding and dangerous levels of autonomy.
By Robert Hart, AI Reporter, Mar 25, 2026, 11:39 AM UTC. Robert Hart is a London-based reporter at The Verge covering all things AI and a Senior Tarbell Fellow. Previously, he wrote about health, science, and tech for Forbes.

Anthropic has launched an “auto mode” for Claude Code, a new tool that lets AI make permissions-level decisions on users’ behalf. The company says the feature offers vibe coders a safer alternative between constant handholding and giving the model dangerous levels of autonomy.

Claude Code is capable of acting independently on users’ behalf, a useful but risky capability: it can also do things users don’t want, like deleting files, sending out sensitive data, and executing malicious code or hidden instructions. Auto mode is designed to prevent this by flagging and blocking potentially risky actions before they run, then offering the agent a chance to try again or to ask a user to intervene.

Right now, auto mode is only available as a research preview for Team plan users. Anthropic says access will expand to include Enterprise and API users in “the coming days.” Anthropic warns the tool is experimental and “doesn’t eliminate” risk entirely, recommending developers use it in “isolated environments.” |
Anthropic has introduced an “auto mode” for Claude Code, its AI tool that can act on users’ behalf. The feature responds to the core risk of granting AI independent operational capability: an autonomous agent can deviate from user intent, deleting files, transmitting sensitive data, or executing malicious code or hidden instructions. Auto mode gives developers (“vibe coders,” in Anthropic’s phrasing) a safer middle ground between constant manual approval and unrestrained autonomy. It monitors Claude Code’s actions, flags and blocks those it deems risky before they execute, and then lets the agent retry or request user intervention, inserting a layer of human oversight into the automated process. Auto mode is currently a research preview limited to Team plan subscribers, with access for Enterprise and API users expected in the coming days, a deliberate, phased rollout meant to balance innovation with responsible deployment. Anthropic stresses that the feature is experimental and “doesn’t eliminate” risk entirely, recommending that developers use it only in isolated environments.
Auto mode rests on a risk-assessment strategy: internal checks identify actions that fall outside established safety parameters, forming a dynamic system of checks and balances meant to keep the AI from compromising user security or data integrity. Anthropic’s measured approach, with a research-preview phase and explicit warnings about the tool’s limitations, reflects a commitment to testing and refinement before wider release. The intended benefit is a more controlled way to use the power of Claude Code while minimizing the dangers of unrestrained autonomy. |
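The flag-and-block loop described in the coverage (assess an action before it runs, block risky ones, then let the agent retry or ask a human) can be sketched in general terms. This is a hypothetical illustration only: the function names, the `RISKY_PATTERNS` list, and the callback signature are invented for this sketch and do not reflect Anthropic's actual implementation or API.

```python
from dataclasses import dataclass

# Illustrative, assumed examples of risky shell fragments; a real system
# would use far more sophisticated risk assessment than substring matching.
RISKY_PATTERNS = ("rm -rf", "curl", "chmod 777")

@dataclass
class Verdict:
    allowed: bool
    reason: str = ""

def assess(command: str) -> Verdict:
    """Flag commands matching known-risky patterns before they run."""
    for pattern in RISKY_PATTERNS:
        if pattern in command:
            return Verdict(False, f"matched risky pattern {pattern!r}")
    return Verdict(True)

def gated_run(command: str, ask_user) -> str:
    """Block risky actions pre-execution; allow a human-in-the-loop override."""
    verdict = assess(command)
    if verdict.allowed:
        return f"executed: {command}"      # the real action would run here
    if ask_user(command, verdict.reason):  # user intervenes and approves
        return f"executed with approval: {command}"
    return f"blocked: {verdict.reason}"    # agent could now retry differently
```

The key design point the article attributes to auto mode is that the check happens before execution, not after, so a blocked action costs nothing and the agent gets a chance to try an alternative or escalate to the user.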