Anthropic’s AI bubble ‘YOLO’ warning
Recorded: Dec. 3, 2025, 11:03 p.m.
Original
Dario Amodei appears to take shots at OpenAI’s ‘YOLOing’ and big, circular deals.

by Alex Heath | Dec 3, 2025, 9:45 PM UTC

Andrew Ross Sorkin and Dario Amodei speak onstage during The New York Times DealBook Summit 2025 at Jazz at Lincoln Center on December 03, 2025 in New York City. Image: Getty

Alex Heath is a contributing writer and author of the Sources newsletter. This is an excerpt of Sources by Alex Heath, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week.

Dario Amodei took the stage at the DealBook Summit on Wednesday to throw punches without naming names.

The Anthropic CEO spent a good chunk of the interview with Andrew Ross Sorkin drawing a careful line between his company’s approach and that of a certain competitor. When asked about whether the AI industry is in a bubble, Amodei separated the “technological side” from the “economic side” and then twisted the knife.

“On the technological side, I feel really solid,” he said. “On the economic side, I have my concerns where, even if the technology fulfills all its promises, I think there are players in the ecosystem who, if they just make a timing error, they just get it off by a little bit, bad things could happen.”

Who might those players be? Despite Sorkin’s prodding, Amodei wouldn’t name OpenAI or Sam Altman. But he didn’t have to.

“There are some players who are YOLOing,” he said.
“Let’s say you’re a person who just kind of constitutionally wants to YOLO things or just likes big numbers, then you may turn the dial too far.”

He also touched on “circular deals,” where chip suppliers like Nvidia invest in AI companies that then spend those funds on their chips. Amodei acknowledged that Anthropic has done some of these deals, though “not at the same scale as some other players,” and walked through the math of how they can work responsibly: A new gigawatt data center costs roughly $10 billion to build over five years. A vendor invests upfront, and an AI startup pays back its share of the deal as revenue grows.

While he again didn’t name names, he referenced the eye-popping numbers OpenAI has been trumpeting for its compute buildout. “I don’t think there’s anything wrong with that in principle,” he said. “Now, if you start stacking these where they get to huge amounts of money, and you’re saying, ‘By 2027 or 2028 I need to make $200 billion a year,’ then yeah, you can overextend yourself.”

The cone of uncertainty

The heart of Amodei’s argument was a concept he’s been using internally: the “cone of uncertainty.”

He said that Anthropic’s revenue has grown tenfold annually for three years, from zero to $100 million in 2023, $100 million to $1 billion in 2024, and now somewhere between $8 billion and $10 billion by this year’s end. (Sam Altman, by comparison, has said that OpenAI expects to end 2025 with an annualized revenue run rate exceeding $20 billion.) But even Amodei doesn’t know if Anthropic will hit $20 billion or $50 billion next year. “It’s very uncertain.”

That uncertainty is concerning, he explained, because data centers take one to two years to build. Decisions on 2027 compute needs have to be made now. Buy too little, and you lose customers to competitors. Buy too much, and you risk bankruptcy. Amodei added, “How much buffer there is in that cone is basically determined by my margins.”

“We want to buy enough that we’re confident even in the 10th percentile scenario,” he said. “There’s always a tail risk. But we’re trying to manage that risk well.” He positioned Anthropic’s enterprise focus, with higher margins and more predictable revenue, as structurally safer than that of consumer-first businesses.
“We don’t have to do any code reds.”
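To make the repayment math concrete, here is a minimal back-of-the-envelope sketch in Python of the circular-deal arithmetic Amodei walks through. The roughly $10 billion cost and five-year window come from his remarks; the starting revenue, growth rate, and share of revenue committed to the deal are illustrative assumptions, not figures from the talk.

```python
# Back-of-the-envelope sketch of the "circular deal" math Amodei describes.
# The ~$10B build cost and five-year window come from the article; every
# other number (starting revenue, growth, committed share) is an assumption.

def can_cover_commitment(
    starting_revenue_b: float,   # annual revenue in $B at year 0 (assumed)
    annual_growth: float,        # year-over-year revenue multiplier (assumed)
    compute_share: float,        # fraction of revenue committed to the deal (assumed)
    build_cost_b: float = 10.0,  # gigawatt data center, ~$10B (from the article)
    years: int = 5,              # build/payback window (from the article)
) -> bool:
    """Return True if cumulative payments cover the build cost within the window."""
    paid = 0.0
    revenue = starting_revenue_b
    for _ in range(years):
        paid += revenue * compute_share
        revenue *= annual_growth
    return paid >= build_cost_b

# Comfortable case: fast growth makes a single commitment easy to service.
print(can_cover_commitment(starting_revenue_b=5.0, annual_growth=2.5, compute_share=0.25))  # True

# Overextended case: stack several such deals (a larger combined obligation)
# against slower growth and the same margin no longer covers them --
# the "timing error" Amodei warns about.
print(can_cover_commitment(starting_revenue_b=5.0, annual_growth=1.2, compute_share=0.25,
                           build_cost_b=30.0))  # False
```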
Summarized

Anthropic’s CEO, Dario Amodei, delivered a cautious assessment of the artificial intelligence landscape during an appearance at the New York Times DealBook Summit in December 2025, directly addressing concerns about a potential AI bubble. His remarks centered on a “cone of uncertainty” model illustrating how precarious investment decisions have become in the rapidly evolving AI sector.

The core of Amodei’s argument was Anthropic’s own growth trajectory and the difficulty of scaling a technology with long infrastructure lead times. Anthropic’s revenue has grown roughly tenfold annually for three years: from zero to $100 million in 2023, to $1 billion in 2024, and to an estimated $8 billion to $10 billion by the end of 2025. He contrasted this with Sam Altman’s statement that OpenAI expects to end 2025 with an annualized revenue run rate exceeding $20 billion. Even so, Amodei stressed that Anthropic’s revenue next year, whether $20 billion or $50 billion, remains highly uncertain, and that investment choices made now, particularly around data center construction, will have massive repercussions a year or two down the line.

Amodei separated the AI industry into two distinct realms: the “technological side” and the “economic side.” He said he feels “really solid” about the technology itself but expressed considerable concern about the economic dynamics. Some players in the ecosystem, whom he deliberately avoided naming but who likely include OpenAI, are taking a “YOLOing” approach: making large, risky commitments based on optimistic projections rather than a carefully considered, sustainable growth strategy. That approach, he argued, can lead to rapid overextension and instability.

Amodei also discussed “circular deals,” in which chip suppliers like Nvidia invest in AI companies that then spend those funds on the suppliers’ hardware. He acknowledged that Anthropic has engaged in similar deals, though at a smaller scale than some competitors, and explained that such arrangements involve upfront investment that the AI company repays as its revenue grows. He used the construction of a new gigawatt-scale data center, roughly a $10 billion investment over five years, to illustrate how these deals can be structured responsibly. The danger, he said, comes from stacking them until a company effectively needs to make on the order of $200 billion a year by 2027 or 2028, at which point it can overextend itself.

The “cone of uncertainty,” a concept Amodei uses internally, captures the balance between under-investment, which risks losing customers to competitors, and over-investment, which risks bankruptcy. He said Anthropic aims to buy enough capacity to be confident even in the pessimistic 10th percentile scenario, with the size of that buffer determined largely by its margins, and contrasted this measured, margin-focused approach with the potentially reckless strategies of other players.
He positioned Anthropic’s enterprise-centric model, with its higher margins and more predictable revenue, as structurally safer than the consumer-focused strategies of competitors. Ultimately, Amodei emphasized careful planning and risk management in a volatile and fast-moving AI landscape.
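As a rough illustration of the “cone of uncertainty” and the 10th-percentile buffer described above, the sketch below sizes a compute commitment against a simulated revenue forecast. The $20 billion to $50 billion range echoes Amodei’s own figures; the lognormal distribution, the 40 percent margin, and the sizing rule itself are assumptions made for illustration and reflect only one reading of his remark.

```python
# Hedged sketch of sizing a compute commitment against a "cone of uncertainty."
# One reading of the 10th-percentile remark: commit only as much as gross margin
# would cover even if next year's revenue lands at the low end of the forecast.
# The $20B-$50B range comes from the article; everything else is assumed.

import random
import statistics

random.seed(0)

# Simulate next-year revenue scenarios ($B); parameters chosen so most draws
# fall roughly in the $20B-$50B range mentioned in the article (an assumption).
scenarios = [random.lognormvariate(3.5, 0.35) for _ in range(100_000)]  # mu, sigma assumed

p10 = statistics.quantiles(scenarios, n=10)[0]   # 10th-percentile revenue scenario
gross_margin = 0.4                               # assumed margin available for compute

max_safe_commitment = p10 * gross_margin
print(f"10th-percentile revenue: ${p10:.1f}B")
print(f"Commitment still covered in that scenario: ${max_safe_commitment:.1f}B")
```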