Judge sides with Anthropic to temporarily block the Pentagon’s ban
Recorded: March 27, 2026, 2 a.m.
| Original | Summarized |
Judge Lin wrote that ‘punishing Anthropic … is classic illegal First Amendment retaliation.’

by Hayden Field, Senior AI Reporter | Mar 27, 2026, 12:33 AM UTC

Image: Cath Virginia / The Verge, Getty Images

After Anthropic’s weeks-long standoff with the Pentagon, the company won one milestone: a judge granted Anthropic a preliminary injunction in its lawsuit, which sought to reverse its government blacklisting while the judicial process plays out.

“The Department of War’s records show that it designated Anthropic as a supply chain risk because of its ‘hostile manner through the press,’” Judge Rita F. Lin, a district judge in the Northern District of California, wrote in the order, which will go into effect in seven days.
“Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation.”

A final verdict could be weeks or months out.

Anthropic spokesperson Danielle Cohen said in a Thursday statement, “We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”

“I do think this case touches on an important debate,” Judge Lin said during the Tuesday hearing. “On the one hand, Anthropic is saying that its AI product, Claude, is not safe to use for autonomous lethal weapons and domestic mass surveillance. Anthropic’s position is that if the government wants to use its technology, the government has to agree not to use it for those purposes. On the other hand, the Department of War is saying that military commanders have to decide what is safe for its AI to do.”

Judge Lin went on to say, “It’s not my role to decide who’s right in that debate… The Department of War decides what AI product it wants to use and buy. And everyone, including Anthropic, agrees that the Department of War is free to stop using Claude and look for a more permissive AI vendor.” She added, “I see the question in this case as being … whether the government violated the law when it went beyond that.”

It all started with a memo sent by Defense Secretary Pete Hegseth on Jan. 9, calling for “any lawful use” language to be written into any AI services procurement contract within 180 days, which would include existing contracts with companies like Anthropic, OpenAI, xAI, and Google.
Anthropic’s negotiations with the Pentagon stretched on for weeks, hinging on two “red lines” that the company did not want the military to cross with its AI: domestic mass surveillance and lethal autonomous weapons (that is, AI systems with the power to kill targets with no human involvement in the decision-making process). The rollercoaster series of events that followed has included a barrage of social media insults, a formal “supply chain risk” designation with the potential to significantly handicap Anthropic’s business, competing AI companies swooping in to make deals, and an ensuing lawsuit.

With its lawsuit, Anthropic argues that it was punished for speech protected under the First Amendment, and it’s seeking to reverse the supply chain risk designation.

It’s rare, and potentially even unheard of until now, for a US company to be named a supply chain risk, a designation typically reserved for non-US companies potentially linked to foreign adversaries. Anthropic’s designation raised eyebrows nationwide and caused bipartisan controversy over concerns that disagreeing with a presidential administration could lead to outsized retribution for a business in any sector.

Anthropic’s own business has been significantly affected by the designation, according to its court filings, which say that it has “received outreach from numerous outside partners … expressing confusion about what was required of them and concern about their ability to continue to work with Anthropic” and that “dozens of companies have contacted Anthropic” for guidance or information about their rights to terminate usage.
Depending on how broadly the government prohibits its contractors’ work with Anthropic, the company alleges that revenue ranging from hundreds of millions to multiple billions of dollars could be at risk.

During Tuesday’s hearing, both parties had a chance to respond to Judge Lin’s questions, which were released in a document the day prior and hinged on matters like whether Hegseth lacked authority to issue certain directives and why Anthropic was named a supply chain risk. The judge also asked, in her pre-released questions, about the circumstances under which a government contractor could face termination for using Anthropic’s technology in its work — for instance, “if a contractor for the Department uses Claude Code as a tool to write software for the Department’s national security systems, would that contractor face termination as a result?”

The judge also seemed to admonish the Department of War over Hegseth’s X post, which, per Anthropic’s earlier court filings, caused widespread confusion by stating that “effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”

“You’re standing here saying, ‘We said it but we didn’t really mean it,’” Judge Lin said during the hearing, later pressing on why Hegseth wrote the post barring contractors from working with Anthropic instead of simply designating Anthropic as a supply chain risk.

In a series of questions, Judge Lin asked whether the Department of War plans to terminate contractors on the basis of their work with Anthropic even if it’s separate from their work with the department, and a representative for the Department of War responded, “That is my understanding.”

Judge Lin asked, “Let’s say I’m a military contractor. I don’t provide IT to the military. I provide toilet paper to the military.
I’m not going to be terminated for using Anthropic — is that accurate?” The representative for the Department of War responded, “For non-DoW work, that is my understanding.” But when the judge asked whether a military contractor providing IT services to the Department of War, but not for national security systems, could be terminated for using Anthropic, the representative did not give a concrete answer.

During the hearing, Judge Lin cited one of the amicus briefs, which she said used the term “attempted corporate murder.” She said, “I don’t know if it’s ‘murder,’ but it looks like an attempt to cripple Anthropic.”

“We are continuing to be irreparably injured by this directive,” a lawyer for Anthropic said during the hearing, citing Hegseth’s nine-paragraph X post.

In a recent court filing, the Department of Defense alleged that Anthropic could “attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations” if it felt the military was crossing its red lines — a theoretical situation the Pentagon said it deemed an “unacceptable risk to national security.” The judge’s pre-released questions seem to challenge that statement, or at least request more information on it, asking, “What evidence in the record shows that Anthropic had ongoing access to or control over Claude after delivering it to the government, such that Anthropic could engage in such acts of sabotage or subversion?”
|
Anthropic has secured a preliminary injunction against the Pentagon, temporarily blocking the department’s ban on doing business with the company. The decision, rendered by Judge Rita F. Lin in the Northern District of California, stemmed from a lawsuit arguing that the Department of War’s designation of Anthropic as a supply chain risk was illegal First Amendment retaliation: punishment for the company’s public criticism of the government’s contracting position. The underlying dispute centers on Anthropic’s two “red lines”: it refuses to let the military use its AI model, Claude, for autonomous lethal weapons or domestic mass surveillance, while the Department of War insists that military commanders, not the vendor, must decide what uses are safe. Judge Lin drew a distinction between the Department’s undisputed authority to choose which AI products it buys and uses, and unlawful retaliation against a company for protected speech. The judge admonished the Department over a contentious X post by Defense Secretary Pete Hegseth barring contractors from doing business with Anthropic, and, citing an amicus brief, said the government’s actions looked like an attempt to “cripple” the company. The injunction also reflects the harm the supply chain risk designation has already inflicted on Anthropic’s commercial relationships, with hundreds of millions to billions of dollars in revenue allegedly at risk. The ruling sets the stage for a protracted legal battle over the boundaries of government procurement authority and the protection of free speech in the context of emerging technologies. A final verdict remains weeks or months away.
Anthropic’s spokesperson, Danielle Cohen, expressed gratitude for the court’s swift action and reiterated the company’s commitment to working productively with the government while protecting its own interests and those of its customers and partners. The case raises broader questions about the relationship between the government and technology companies, particularly the ethical limits on AI development and deployment and the legal framework governing their interactions. It is also a rare, possibly unprecedented, instance of a US company being designated a supply chain risk, a label typically reserved for non-US entities potentially linked to foreign adversaries, fueling debate about governmental overreach and its impact on innovation and business operations. |