Anthropic Supply-Chain-Risk Designation Halted by Judge
Recorded: March 27, 2026, 4 a.m.
Original
Paresh Dave, Business | Mar 26, 2026, 7:33 PM

A judge temporarily blocked the Trump administration's designation, clearing the way for Anthropic to keep doing business without the label starting next week.

Photo-Illustration: WIRED Staff; Getty Images

Anthropic won a preliminary injunction barring the US Department of Defense from labeling it a supply-chain risk, potentially clearing the way for customers to resume working with the company. The ruling on Thursday by Rita Lin, a federal district judge in San Francisco, is a symbolic setback for the Pentagon and a significant boost for the generative AI company as it tries to preserve its business and reputation.

"Defendants' designation of Anthropic as a 'supply chain risk' is likely both contrary to law and arbitrary and capricious," Lin wrote in justifying the temporary relief. "The Department of War provides no legitimate basis to infer from Anthropic's forthright insistence on usage restrictions that it might become a saboteur."

Anthropic and the Pentagon did not immediately respond to requests to comment on the ruling.

The Department of Defense, which under Trump calls itself the Department of War, has relied on Anthropic's Claude AI tools for writing sensitive documents and analyzing classified data over the past couple of years. But this month, it began pulling the plug on Claude after determining that Anthropic could not be trusted.
Pentagon officials cited numerous instances in which Anthropic allegedly placed or sought to put usage restrictions on its technology that the Trump administration found unnecessary. The administration ultimately issued several directives, including designating the company a supply-chain risk, which have had the effect of slowly halting Claude usage across the federal government and hurting Anthropic's sales and public reputation. The company filed two lawsuits challenging the sanctions as unconstitutional. In a hearing on Tuesday, Lin said the government had appeared to illegally "cripple" and "punish" Anthropic.

Lin's ruling on Thursday "restores the status quo" to February 27, before the directives were issued. "It does not bar any defendant from taking any lawful action that would have been available to it" on that date, she wrote. "For example, this order does not require the Department of War to use Anthropic's products or services and does not prevent the Department of War from transitioning to other artificial intelligence providers, so long as those actions are consistent with applicable regulations, statutes, and constitutional provisions."

The ruling suggests the Pentagon and other federal agencies are still free to cancel deals with Anthropic and ask contractors that integrate Claude into their own tools to stop doing so, but without citing the supply-chain-risk designation as the basis.

The immediate impact is unclear because Lin's order won't take effect for a week. And a federal appeals court in Washington, DC, has yet to rule on the second lawsuit Anthropic filed, which focuses on a different law under which the company was also barred from providing software to the military.

But Anthropic could use Lin's ruling to demonstrate to some customers concerned about working with an industry pariah that the law may be on its side in the long run.
Lin has not set a schedule to make a final ruling.

Summarized
Anthropic, a generative AI company, won a significant legal victory when a federal judge temporarily blocked the Trump administration's designation of the company as a supply-chain risk. Judge Rita Lin, presiding over the case in San Francisco, ruled that the Department of Defense's actions were "likely both contrary to law and arbitrary and capricious," citing a lack of legitimate basis for inferring potential sabotage from Anthropic's insistence on usage restrictions for its Claude AI tools.

The ruling, which takes effect one week after its issuance, essentially restores the status quo as of February 27, prior to the administration's directives, and prevents the Department of War from citing the supply-chain-risk designation as a basis for halting Claude usage across federal agencies. At an earlier hearing, Lin had said the government appeared to have illegally "crippled" and "punished" Anthropic. Her ruling clarifies that it does not compel the Department of War to use Anthropic's products or services, nor prevent a transition to alternative AI providers, provided those actions comply with applicable regulations, statutes, and constitutional provisions.

The decision follows Anthropic's filing of two lawsuits challenging the sanctions as unconstitutional. The immediate impact remains uncertain: the order will not take effect for a week, and a federal appeals court in Washington, DC, has yet to rule on Anthropic's second lawsuit, which concerns a different law under which the company was barred from providing software to the military. Even so, the victory may reassure customers wary of working with a company under scrutiny from the Trump administration, and it suggests the law may be on Anthropic's side in the long run.

The Department of Defense, under the Trump administration, had relied on Anthropic's Claude AI tools for drafting sensitive documents and analyzing classified data, but began pulling the plug after objecting to usage restrictions Anthropic placed or sought to place on its technology, prompting the legal challenges.