LmCast :: Stay tuned in

Senate Democrats are trying to ‘codify’ Anthropic’s red lines on autonomous weapons and mass surveillance

Recorded: March 25, 2026, 6:02 p.m.

Original

Adam Schiff doesn’t want to rely on the Pentagon or AI CEOs’ word when it comes to autonomous weapons.

by Lauren Feiner, Senior Policy Reporter, The Verge | Mar 25, 2026, 3:05 PM UTC
Image: The Verge, Getty Images
Part of: AI vs. the Pentagon: killer robots, mass surveillance, and red lines

Lauren Feiner is a senior policy reporter at The Verge, covering the intersection of Silicon Valley and Capitol Hill. She spent 5 years covering tech policy at CNBC, writing about antitrust, privacy, and content moderation reform.

Anthropic’s fight with the Pentagon is expanding to Congress. Sen. Adam Schiff (D-CA) is working on a new bill to “codify” Anthropic’s red lines and ensure humans make the ultimate decisions in questions of life and death, and Sen. Elissa Slotkin (D-MI) recently introduced a bill to limit the Defense Department’s ability to use AI for mass surveillance of Americans.

The Trump administration blacklisted Anthropic earlier this month after it set limits on how the military could use its AI models, designating it a supply-chain risk. Anthropic has filed suit, accusing the government of violating its constitutional rights. It has insisted that the Pentagon avoid using its products for fully autonomous weapons and mass domestic surveillance — resisting a deal signed by major competitor OpenAI.
Anthropic is waiting to hear whether a court will block the administration’s decision to label it a supply-chain risk.

“I was alarmed to see the Pentagon take aim at Anthropic because Anthropic was simply trying to insist on policies that the vast majority of American people agree with,” Schiff told The Verge in a phone interview last week. “The idea that they would therefore then try to turn around and kill the company, kill one of the preeminent leaders of AI is such a hostile, dictatorial kind of an act. They would set back America’s leadership in AI, and Anthropic is one of the very best.”

Schiff’s office is still drafting the legislation, but he said the aim is to ensure AI isn’t used for “certain illicit purposes.” Slotkin introduced a similar bill last week, the AI Guardrails Act, to reinforce protections against domestic mass surveillance and the use of autonomous lethal weapons without human intervention. It’s not yet clear how Schiff’s bill will differ or align on key points, though it covers similar ground. Schiff spokesperson Ruby Robles Perez said his office continues to talk with stakeholders and industry leaders before finalizing the bill. Slotkin’s bill restricts the Department of Defense’s ability to use AI to detonate a nuclear weapon or track people or groups in the US, but also outlines how the Defense Secretary can notify Congress in the event that “extraordinary circumstances” necessitate the use of AI to deploy autonomous lethal weapons.

In the bill Schiff is drafting, the specifics of what constitutes an autonomous weapon or domestic surveillance are still under discussion, but he said his office is also looking to existing frameworks from the Biden administration. “We haven’t resolved all of those questions yet, including how this language would apply to those who were non-citizens, but people who are lawfully in the country are deserving of protection. 
And then as a human rights matter, it may go beyond that as well,” Schiff said.

One principle guiding this effort is the idea of a human in the loop. “Whenever a technology has the capability of taking a human life, there needs to be a human operator in the chain of command. We don’t want to delegate that kind of responsibility over life and death to an algorithm,” Schiff said.

But that doesn’t mean there’s no role for AI on the battlefield. “There are certainly circumstances in which, because AI can operate faster than human beings can, you want AI to be able to tip and cue information for human operators either that need to take steps to defend the country or that need to adjust given what it can see in real time on the battlefield,” Schiff said. “So the applications are very significant. They can be very beneficial from a national security and defense perspective. But they can also mean life or death. They can mean distinguishing between a civilian target and a military target, or getting those things wrong.”

With a Democratic minority in both houses, the bill’s short-term success may depend on Republicans’ willingness to be seen as critical of the administration. With midterms approaching, passing new legislation will only get harder through the end of the year, though the balance of power in Congress could shift if Democrats regain one or both chambers. It could still take at least another week or two to unveil the proposal, but Schiff is looking at legislative vehicles like the National Defense Authorization Act (NDAA) to move it forward.

“There’s certainly bipartisan support in the public for these kinds of limitations,” Schiff said. “As always, you confront the issue that when you’re taking steps to prevent any kind of administrative abuse, it raises issues with some of my colleagues about whether it can be read as an implicit criticism of the administration. 
So we’ll have to deal with that, but I’m hoping that we can make it bipartisan.”

Since Anthropic put up its fight with the Pentagon, OpenAI has scrambled to defend its reasons for signing terms that have garnered public pushback. Even with OpenAI saying it will insist on the same terms, Schiff said he’d rather not have to place that trust in the Pentagon or any CEO. “I would have a lot more confidence, frankly, if these were statutory requirements, than relying on the lawfulness of the Pentagon or the word of an AI CEO,” he said.

Summarized

Senators Adam Schiff and Elissa Slotkin are pursuing legislation to “codify” Anthropic’s red lines on the Department of Defense’s use of autonomous weapons systems and mass surveillance. The push follows the Trump administration’s decision to blacklist Anthropic as a supply-chain risk after the AI firm set limits on the military’s use of its models, specifically prohibiting fully autonomous weapons and mass domestic surveillance. Anthropic has responded with a lawsuit alleging a violation of its constitutional rights. Schiff’s bill, still in the drafting stage, aims to ensure human oversight in applications with life-or-death consequences, while Slotkin’s recently introduced AI Guardrails Act would restrict the Defense Department’s ability to use AI to detonate a nuclear weapon or to track people or groups within the United States.

The principle driving this legislative push is a “human in the loop” framework, reflecting the dangers of delegating lethal decision-making to algorithmic systems. Schiff emphasized that the goal is to keep responsibility for life-and-death decisions out of the hands of algorithms, a position mirrored by Slotkin’s bill. Both proposals highlight the ethical and operational risks posed by autonomous weapons and mass surveillance, reflecting broader anxieties about the rapid advancement of artificial intelligence. Schiff’s office is still refining the bill’s language, including how to define “autonomous weapons” and “domestic surveillance,” drawing on existing Biden administration frameworks and aiming for broad applicability: people lawfully in the country would be protected, the bill may extend further as a human rights matter, and questions about how it applies to non-citizens remain unresolved.

Slotkin’s bill builds in a narrow exception: the Defense Secretary can notify Congress when “extraordinary circumstances” necessitate deploying autonomous lethal weapons, underlining a cautious approach to the technology. The legislation faces near-term challenges, including the impending midterm elections, which could shift the balance of power in Congress, and potential friction with colleagues wary of criticizing the administration. Schiff acknowledged these political headwinds but remains optimistic, pointing to bipartisan public support for the proposed limitations. He expressed reservations about relying solely on the word of AI CEOs or the Pentagon, preferring statutory requirements for greater accountability, and his strategy involves moving the legislation through vehicles such as the National Defense Authorization Act (NDAA), which offers a pathway for broader consideration within the defense establishment.

Ultimately, Schiff’s and Slotkin’s efforts represent a proactive response to mounting concerns about the unchecked development and deployment of advanced AI, particularly in military and surveillance contexts. The legislative process is ongoing and the bills’ specifics remain subject to debate and refinement, but the core objective, safeguarding human rights and ethical considerations in the age of artificial intelligence, is clear.