America’s coming war over AI regulation
In 2026, states will go head to head with the White House’s sweeping executive order.

By Michelle Kim
January 23, 2026

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

In the final weeks of 2025, the battle over regulating artificial intelligence in the US reached a boiling point. On December 11, after Congress failed twice to pass a law banning state AI laws, President Donald Trump signed a sweeping executive order seeking to block states from regulating the booming industry. Instead, he vowed to work with Congress to establish a “minimally burdensome” national AI policy, one that would position the US to win the global AI race. The move marked a qualified victory for tech titans, who have been marshaling multimillion-dollar war chests to oppose AI regulations, arguing that a patchwork of state laws would stifle innovation.

In 2026, the battleground will shift to the courts. While some states might back down from passing AI laws, others will charge ahead, buoyed by mounting public pressure to protect children from chatbots and rein in power-hungry data centers. Meanwhile, dueling super PACs bankrolled by tech moguls and AI-safety advocates will pour tens of millions into congressional and state elections to seat lawmakers who champion their competing visions for AI regulation.

Trump’s executive order directs the Department of Justice to establish a task force to sue states whose AI laws clash with his vision of light-touch regulation. It also directs the Department of Commerce to starve states of federal broadband funding if their AI laws are “onerous.” In practice, the order may target a handful of laws in Democratic states, says James Grimmelmann, a law professor at Cornell Law School. “The executive order will be used to challenge a smaller number of provisions, mostly relating to transparency and bias in AI, which tend to be more liberal issues,” Grimmelmann says.

For now, many states aren’t flinching. On December 19, New York’s governor, Kathy Hochul, signed the Responsible AI Safety and Education (RAISE) Act, a landmark law requiring AI companies to publish the protocols they use to ensure the safe development of their AI models and to report critical safety incidents. On January 1, California debuted the nation’s first frontier AI safety law, SB 53—which the RAISE Act was modeled on—aimed at preventing catastrophic harms such as biological weapons or cyberattacks. While both laws were watered down from earlier iterations to survive bruising industry lobbying, they struck a rare, if fragile, compromise between tech giants and AI safety advocates.

If Trump targets these hard-won laws, Democratic states like California and New York will likely take the fight to court. Republican states with vocal champions of AI regulation, like Florida, might follow suit. Trump could face an uphill battle.
“The Trump administration is stretching itself thin with some of its attempts to effectively preempt [legislation] via executive action,” says Margot Kaminski, a law professor at the University of Colorado Law School. “It’s on thin ice.”

But Republican states that are anxious to stay off Trump’s radar, or that can’t afford to lose federal broadband funding for their sprawling rural communities, might retreat from passing or enforcing AI laws. Win or lose in court, the chaos and uncertainty could chill state lawmaking. Paradoxically, the Democratic states that Trump wants to rein in—armed with big budgets and emboldened by the optics of battling the administration—may be the least likely to budge.

In lieu of state laws, Trump promises to create a federal AI policy with Congress. But the gridlocked and polarized body won’t be delivering a bill this year. In July, the Senate killed a moratorium on state AI laws that had been inserted into a tax bill, and in November, the House scrapped an encore attempt in a defense bill. In fact, Trump’s bid to strong-arm Congress with an executive order may sour any appetite for a bipartisan deal.

The executive order “has made it harder to pass responsible AI policy by hardening a lot of positions, making it a much more partisan issue,” says Brad Carson, a former Democratic congressman from Oklahoma who is building a network of super PACs backing candidates who support AI regulation. “It hardened Democrats and created incredible fault lines among Republicans,” he says. While AI accelerationists in Trump’s orbit—AI and crypto czar David Sacks among them—champion deregulation, populist MAGA firebrands like Steve Bannon warn of rogue superintelligence and mass unemployment. In response to Trump’s executive order, Republican state attorneys general joined a bipartisan letter urging the FCC not to supersede state AI laws.

With Americans increasingly anxious about how AI could harm mental health, jobs, and the environment, public demand for regulation is growing. If Congress stays paralyzed, states will be the only ones acting to keep the AI industry in check. In 2025, state legislators introduced more than 1,000 AI bills, and nearly 40 states enacted over 100 laws, according to the National Conference of State Legislatures.

Efforts to protect children from chatbots may inspire rare consensus. On January 7, Google and Character Technologies, the startup behind the companion chatbot Character.AI, settled several lawsuits with families of teenagers who killed themselves after interacting with the bot. Just a day later, the Kentucky attorney general sued Character Technologies, alleging that its chatbots drove children to suicide and other forms of self-harm. OpenAI and Meta face a barrage of similar suits. Expect more to pile up this year.

Without AI laws on the books, it remains to be seen how product liability laws and free speech doctrines apply to these novel dangers. “It’s an open question what the courts will do,” says Grimmelmann. While litigation brews, states will move to pass child safety laws, which are exempt from Trump’s proposed ban on state AI laws. On January 9, OpenAI inked a deal with a former foe, the child-safety advocacy group Common Sense Media, to back a California ballot initiative, the Parents & Kids Safe AI Act, setting guardrails around how chatbots interact with children.
The measure proposes requiring AI companies to verify users’ ages, offer parental controls, and undergo independent child-safety audits. If passed, it could be a blueprint for states across the country seeking to crack down on chatbots.

Fueled by widespread backlash against data centers, states will also try to regulate the resources needed to run AI. That means bills requiring data centers to report their power and water use and foot their own electricity bills. If AI starts to displace jobs at scale, labor groups might float AI bans in specific professions. A few states concerned about the catastrophic risks posed by AI may pass safety bills mirroring SB 53 and the RAISE Act.

Meanwhile, tech titans will continue to use their deep pockets to crush AI regulations. Leading the Future, a super PAC backed by OpenAI president Greg Brockman and the venture capital firm Andreessen Horowitz, will try to elect candidates who endorse unfettered AI development to Congress and state legislatures. They’ll follow the crypto industry’s playbook for electing allies and writing the rules. To counter this, super PACs funded by Public First, an organization run by Carson and former Republican congressman Chris Stewart of Utah, will back candidates advocating for AI regulation. We might even see a handful of candidates running on anti-AI populist platforms.

In 2026, the slow, messy process of American democracy will grind on. And the rules written in state capitals could decide how the most disruptive technology of our generation develops far beyond America’s borders, for years to come.