Trump takes another shot at dismantling state AI regulation
Recorded: March 20, 2026, 9 p.m.
| Original | Summarized |
The new policy blueprint bowed to bipartisan pressure on child safety but still prioritized AI acceleration.

by Hayden Field, Senior AI Reporter | Mar 20, 2026, 6:17 PM UTC

Illustration by Cath Virginia / The Verge | Photos from Getty Images

Hayden Field is The Verge's senior AI reporter. An AI beat reporter for more than five years, her work has also appeared in CNBC, MIT Technology Review, Wired UK, and other outlets.

The Trump administration on Friday unveiled its new legislative blueprint for AI regulation, and the seven-point plan includes a clear message: The federal government should avoid many AI regulations beyond a set of child safety rules, and it should bar states from messing with the "national strategy to achieve global AI dominance."

The plan advises Congress to protect minors using AI services with more safeguards and to take action to prevent electricity costs from spiking due to AI infrastructure. It encourages "youth development and skills training" to boost familiarity with AI tools, without much further detail. But it suggests taking a wait-and-see approach to whether training AI models on copyrighted material without permission is legal, and it maintains a long-running Republican push to limit whether states can enact their own AI laws.

The entire document and all its provisions, however, will only take effect if Congress adopts them and passes them into law.

The blueprint encourages passing laws similar to the Take It Down Act, which was signed into law in May 2025 and bars nonconsensual AI-generated "intimate visual depictions," requiring certain platforms to rapidly remove them.
The document also endorses age verification, suggesting that Congress "establish commercially reasonable, privacy protective, age assurance requirements (such as parental attestation) for AI platforms and services likely to be accessed by minors." Age-gating is controversial from a privacy standpoint and carries significant surveillance implications. The document proposes other child protection measures, such as limiting AI models' ability to train on minors' data and restricting targeted advertising based on that data. (It does not seek to prohibit those practices for children's data, just limit them.) At the same time, it states that Congress "should avoid setting ambiguous standards about permissible content, or open-ended liability, that could give rise to excessive litigation."

In the age of deepfakes, when AI-generated videos look more real than ever and a fake video of a politician can instantly propagate global conspiracy theories, the new policy blueprint seeks to "consider establishing a federal framework protecting individuals from the unauthorized distribution or commercial use of AI-generated digital replicas of their voice, likeness, or other identifiable attributes." (That could mean finally creating a federal likeness law.) But it also says lawmakers should provide "clear exceptions" for parody, news reporting, satire, and other First Amendment-protected use cases.

The blueprint also discourages Congress from taking up AI copyright issues. "Although the Administration believes that training of AI models on copyrighted material does not violate copyright laws, it acknowledges arguments to the contrary exist and therefore supports allowing the Courts to resolve this issue," it says.
"Congress should not take any actions that would impact the judiciary's resolution of whether training on copyrighted material constitutes fair use."

In another section, the blueprint raises concerns about large-scale scams and fraud that are increasingly powered by AI, stating that Congress should "augment existing law enforcement efforts to combat AI-enabled impersonation scams and fraud that target vulnerable populations such as seniors," although it provides no further details.

The Trump administration is continuing to lean into the pro-federal, anti-state approach to AI regulation that it has been promoting (so far unsuccessfully) for nearly a year. The blueprint says Congress should "preempt state AI laws that impose undue burdens" and avoid "fifty discordant" standards for companies, adding that states "should not be permitted to regulate AI development, because it is an inherently interstate phenomenon with key foreign policy and national security implications." Other legal protections for AI companies are baked in, too, such as the idea that states shouldn't be allowed to "penalize AI developers for a third party's unlawful conduct involving their models."

But in the child-privacy section, the document does allow states some limited wiggle room, stating that Congress shouldn't preempt states from "enforcing their own generally applicable laws protecting children, such as prohibitions on child sexual abuse material, even where such material is generated by AI." The allowance comes after numerous figures from both parties expressed concern about overturning local child safety laws, including nearly 40 attorneys general from US states and territories.

The overall goal, as in earlier Trump administration proposals, is speeding up AI development.
"The United States must lead the world in AI by removing barriers to innovation [and] accelerating deployment of AI applications across sectors," the document states, adding that Congress should find ways to make federal datasets available to AI companies and academics in "AI-ready formats for use in training AI models and systems." It doesn't specify which types of federal datasets it seeks to make publicly available for AI training.

The plan also definitively answers a long-standing question in AI regulation: whether there should be a single federal body responsible for AI oversight or whether regulation should be left to each sector. It says that Congress "should not create any new federal rulemaking body to regulate AI"; instead, it will "support development and deployment of sector-specific AI applications through existing regulatory bodies with subject matter expertise."

President Trump signed an executive order last July seeking to prevent "woke AI" by banning government agencies from using models that "incorporated" topics like systemic racism. He recently ordered all agencies to blacklist the "Radical Left AI company" Anthropic for setting limits on military use of its models, something Anthropic alleges violates its First Amendment rights.
At the same time, the blueprint states that the government "must defend free speech and First Amendment protections, while preventing AI systems from being used to silence or censor lawful political expression or dissent." It goes further, saying Congress should explicitly prevent the government from "coercing" AI providers "to ban, compel, or alter content based on partisan or ideological agendas," and that if government agencies censor expression on AI platforms or dictate the information they provide, Congress should provide a way for Americans to "seek redress."

Last month, we saw the first bipartisan effort to address higher utility bills in communities with data centers nearby, and the new AI policy framework seems to address those concerns on both sides of the aisle, saying that Congress should find ways to make sure that "residential ratepayers do not experience increased electricity costs as a result of new AI data center construction and operation." But, it says, Congress should streamline federal permits for data center construction and operation, making it easier for AI companies to "develop or procure on-site and behind-the-meter power generation." In other words, data center construction should still be full-speed-ahead, but community members shouldn't have to literally pay the price on their monthly bills.

© 2026 Vox Media, LLC. All Rights Reserved |
The Trump administration has released a seven-point legislative blueprint for AI regulation, reflecting a continued strategy of minimizing federal oversight and prioritizing AI acceleration. The plan bows to bipartisan pressure on child safety while explicitly seeking to limit state authority over AI development and deployment. Its core argument is that AI is an inherently interstate issue with significant foreign policy and national security implications, justifying a federal-centric approach. The blueprint advocates for Congress to enact child safety safeguards, including age verification requirements for AI platforms and limits on training models on minors' data, echoing concerns raised by numerous state attorneys general. At the same time, it seeks to preempt state laws that impose "undue burdens" on AI companies, arguing that states should not be permitted to regulate AI development at all. It encourages Congress to adopt measures similar to the Take It Down Act, which addresses nonconsensual AI-generated intimate visual depictions, and calls for protections against unauthorized distribution or commercial use of AI-generated replicas of individuals' likenesses. On copyright, the blueprint takes a cautious approach, acknowledging competing arguments but supporting judicial resolution, and it discourages Congress from creating any new federal rulemaking body, instead promoting sector-specific oversight through existing regulatory agencies with subject matter expertise. The plan also addresses escalating AI-enabled fraud and scams targeting vulnerable populations, proposing augmented law enforcement efforts, although specific details remain sparse. Finally, it highlights making federal datasets available to AI companies and academics in AI-ready formats, supporting the administration's broader goal of fostering AI development.
The blueprint also incorporates elements aligned with President Trump's previous executive orders, addressing concerns about "woke AI" and seeking to prevent government coercion of AI providers to censor lawful political expression. It emphasizes protection of free speech and First Amendment rights in AI systems and proposes a mechanism for redress if government agencies attempt to silence or alter content based on partisan agendas. The administration's stance reflects an ongoing effort to foster innovation by limiting prescriptive regulation. The plan's provisions on utility costs from AI data center construction also attempt to bridge divides, seeking streamlined permitting processes and encouraging on-site power generation while shielding residential ratepayers from increased electricity costs. On child safety, the blueprint concedes ground, allowing states to continue enforcing generally applicable laws protecting children, including prohibitions on AI-generated child sexual abuse material; this carve-out came only after considerable bipartisan pressure, including from nearly 40 state and territorial attorneys general. The overall strategy, as Hayden Field reports, is to accelerate AI development by removing barriers to innovation and deploying AI applications across sectors, while limiting interference from state-level regulation. |