Pentagon’s ‘Attempt to Cripple’ Anthropic Is Troubling, Judge Says
Recorded: March 25, 2026, 3 a.m.
Original:
Pentagon’s ‘Attempt to Cripple’ Anthropic Is Troubling, Judge Says
By Paresh Dave | Artificial Intelligence | Mar 24, 2026 6:13 PM
During a hearing Tuesday, a district court judge questioned the Department of Defense’s motivations for labeling the Claude AI developer a supply-chain risk.
[Photograph: Chip Somodevilla/Getty Images]

The US Department of Defense appears to be illegally punishing Anthropic for trying to restrict the use of its AI tools by the military, US district judge Rita Lin said during a court hearing on Tuesday.

“It looks like an attempt to cripple Anthropic,” Lin said of the Pentagon designating the company a supply-chain risk. “It looks like [the department] is punishing Anthropic for trying to bring public scrutiny to this contract dispute, which of course would be a violation of the First Amendment.”

Anthropic has filed two federal lawsuits alleging that the Trump administration’s decision to designate the company a security risk amounted to illegal retaliation. The government slapped the label on Anthropic after it pushed for limitations on how its AI could be used by the military. Tuesday’s hearing came in a case filed in San Francisco.

Anthropic is seeking a temporary order to pause the designation. The relief, Anthropic hopes, would help convince some of the company’s skittish customers to stick with it just a bit longer. Lin can issue a pause only if she determines that Anthropic is likely to win the overall case. Her ruling on the injunction is expected in the next few days.

The dispute has sparked a broader public conversation about how artificial intelligence is increasingly being used by the armed forces, and whether Silicon Valley companies should give deference to the government in determining how the technology they develop is deployed.

The Department of Defense, which now calls itself the Department of War (DoW), has argued that it followed procedures and appropriately determined that Anthropic’s AI tools could no longer be relied upon to operate as expected during critical moments. It has asked Lin not to second-guess its assessment about the threat it claims Anthropic poses to national security.

“The worry is that Anthropic, instead of merely raising concerns and pushing back, will say we have a problem with what DoW is doing and will manipulate the software … so it doesn’t operate in the way DoW expects and wants it to,” Trump administration attorney Eric Hamilton said during Tuesday’s hearing.

Lin said that it was Defense Secretary Pete Hegseth’s role—not hers—to decide whether Anthropic is an appropriate vendor for the department. But Lin said it’s up to her to determine whether Hegseth violated the law by taking steps beyond simply canceling Anthropic’s government contracts.
Lin said it was “troubling” to her that the security designation and directives more broadly limiting use of Anthropic’s AI tool Claude by government contractors “don’t seem to be tailored to stated national security concerns.”

As Anthropic’s spat with the government escalated last month, Hegseth posted on X that “effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”

But on Tuesday, Hamilton acknowledged that Hegseth has no legal authority to bar military contractors from using Anthropic for work unrelated to the Department of Defense. When asked by Lin why Hegseth would have posted that, Hamilton said, “I don’t know.”

Lin further questioned Hamilton about whether the Pentagon had considered taking less punitive measures to move the department away from using Anthropic’s tools. She described the supply-chain-risk designation as a powerful authority typically reserved for foreign adversaries, terrorists, and other hostile actors.

Michael Mongan, a WilmerHale attorney representing Anthropic, said it was extraordinary for the government to go after a “stubborn” negotiating partner with the designation.

The Pentagon has said it is working to replace Anthropic technologies over the coming months with alternatives from Google, OpenAI, and xAI. It also said it has put measures in place to prevent Anthropic from engaging in any tampering during the transition. Hamilton said he didn’t know if it was even possible for Anthropic to update its AI models without permission from the Pentagon; the company says it is not.

A ruling in the other case, at the federal appeals court in Washington, DC, is expected to come soon without a hearing.
Summarized:
The Department of Defense, now operating as the Department of War (DoW), is facing scrutiny after US district judge Rita Lin characterized its actions against Anthropic, developer of the Claude AI models, as an apparent “attempt to cripple” the company. Lin’s concerns center on the DoW’s designation of Anthropic as a supply-chain risk, which she suggested may be illegal retaliation for the company’s efforts to restrict military use of its AI tools; punishing Anthropic for bringing public scrutiny to the underlying contract dispute, she noted, would violate the First Amendment. Anthropic has filed two federal lawsuits alleging that the Trump administration’s security-risk label amounted to retaliation. The crux of the dispute is Anthropic’s push for limitations on Claude’s deployment within the military, a stance the DoW argues could compromise national security.

During Tuesday’s hearing, Lin found it “troubling” that the designation and the associated limits on government contractors’ use of Claude do not appear tailored to the stated national security concerns, and she described the supply-chain-risk designation as a powerful authority typically reserved for foreign adversaries, terrorists, and other hostile actors. Trump administration attorney Eric Hamilton defended the department, saying it followed established procedures and arguing that Anthropic might manipulate its software so that it fails to operate as the DoW expects during critical operations. But Hamilton acknowledged that Defense Secretary Pete Hegseth has no legal authority to enforce his earlier post on X declaring that “effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” and, asked why Hegseth posted it, answered “I don’t know.” Michael Mongan, a WilmerHale attorney representing Anthropic, called it extraordinary for the government to wield the designation against a “stubborn” negotiating partner.

The legal battle feeds a broader debate about the armed forces’ growing use of artificial intelligence and how much deference Silicon Valley companies should give the government in deciding how their technology is deployed. The DoW is transitioning away from Anthropic’s technology toward alternatives from Google, OpenAI, and xAI, and says it has put measures in place to prevent any tampering during the transfer; Hamilton said he did not know whether Anthropic could even update its AI models without the Pentagon’s permission, and the company says it cannot. Lin’s ruling on Anthropic’s request to temporarily pause the designation is expected within days, while a second case, before the federal appeals court in Washington, DC, is expected to be decided soon without a hearing.