LmCast :: Stay tuned in

The Bay Area’s animal welfare movement wants to recruit AI

Recorded: March 24, 2026, 2:23 a.m.

Original Summarized

The Bay Area’s animal welfare movement wants to recruit AI | MIT Technology Review

You need to enable JavaScript to view this site.

Skip to ContentMIT Technology ReviewFeaturedTopicsNewslettersEventsAudioMIT Technology ReviewFeaturedTopicsNewslettersEventsAudioArtificial intelligenceThe Bay Area’s animal welfare movement wants to recruit AIAt the Sentient Futures Summit, animal welfare advocates pushed to make AI care about animals—and asked if AI might be sentient too.
By Michelle Kimarchive pageGrace Huckinsarchive pageMarch 23, 2026Sarah Rogers/MIT Technology Review | Getty Images In early February, animal welfare advocates and AI researchers gathered in stocking feet at Mox, a scrappy, shoes-free coworking space in San Francisco. Yellow and red canopies billowed overhead, Persian rugs blanketed the floor, and mosaic lamps glowed beside potted plants.  In the common area, a wildlife advocate spoke passionately to a crowd lounging in beanbags about a form of rodent birth control that could manage rat populations without poison. In the “Crustacean Room,” a dozen people sat in a circle, debating whether the sentience of insects could tell us anything about the inner lives of chatbots. In front of the “Bovine Room” stood a bookshelf stacked with copies of Eliezer Yudkowsky’s If Anyone Builds It, Everyone Dies, a manifesto arguing that AI could wipe out humanity.  The event was hosted by Sentient Futures, an organization that believes the future of animal welfare will depend on AI. Like many Bay Area denizens, the attendees were decidedly “AGI-pilled”—they believe that artificial general intelligence, powerful AI that can compete with humans on most cognitive tasks, is on the horizon. If that’s true, they reason, then AI will likely prove key to solving society’s thorniest problems—including animal suffering. Related StoryHow AGI became the most consequential conspiracy theory of our timeRead next To be clear, experts still fiercely debate whether today’s AI systems will ever achieve human- or superhuman-level intelligence, and it’s not clear what will happen if they do. But some conference attendees envision a possible future in which it is AI systems, and not humans, who call the shots. Eventually, they think, the welfare of animals could hinge on whether we’ve trained AI systems to value animal lives. 
“AI is going to be very transformative, and it’s going to pretty much flip the game board,” said Constance Li, founder of Sentient Futures. “If you think that AI will make the majority of decisions, then it matters how they value animals and other sentient beings”—those that can feel and, therefore, suffer. Like Li, many summit attendees have been committed to animal welfare since long before AI came into the picture. But they’re not the types to donate a hundred bucks to an animal shelter. Instead of focusing on local actions, they prioritize larger-scale solutions, such as reducing factory farming by promoting cultivated meat, which is grown in a lab from animal cells. 
Related StoryInside effective altruism, where the far future counts a lot more than the presentRead next The Bay Area animal welfare movement is closely linked to effective altruism, a philanthropic movement committed to maximizing the amount of good one does in the world—indeed, many conference attendees work for organizations funded by effective altruists. That philosophy might sound great on paper, but “maximizing good” is a tricky puzzle that might not admit a clear solution. The movement has been widely criticized for some of its conclusions, such as promoting working in exploitative industries to maximize charitable donations and ignoring present-day harms in favor of  issues that could cause suffering for a large number of people who haven’t been born yet. Critics also argue that effective altruists neglect the importance of systemic issues such as racism and economic exploitation and overlook the insights that marginalized communities might have into the best ways to improve their own lives. When it comes to animal welfare, this exactingly utilitarian approach can lead to some strange conclusions. For example, some effective altruists say it makes sense to commit significant resources to improving the welfare of insects and shrimp because they exist in such staggering numbers, even though they may not have much individual capacity for suffering.  Now the movement is sorting out how AI fits in. At the summit, Jasmine Brazilek, cofounder of a nonprofit called Compassion in Machine Learning, opened her sticker-stamped laptop to pull up a benchmark she devised to measure how LLMs reason about animal welfare. A cloud security engineer turned animal advocate, she’d flown in from La Paz, Mexico, where she runs her nonprofit with a handful of volunteers and a shoestring budget.  Brazilek urged the AI researchers in the room to train their models with synthetic documents that reflect concern for animal welfare. “Hopefully, future superintelligent systems consider nonhuman interest, and there is a world where AI amplifies the best of human values and not the worst,” she said.  The power of the purse  The technologically inclined side of the animal welfare movement has faced some major setbacks in recent years. Dreams of transitioning people away from a diet dependent on factory farming have been dampened by developments such as the decimation of the plant-based-meat company Beyond Meat’s stock price and the passage of laws banning cultivated meat in several US states. Related StoryAlternative meat could help the climate. Will anyone eat it?Read next AI has injected a shot of optimism. Like much of Silicon Valley, many attendees at the summit subscribe to the idea that AI might dramatically increase their productivity—though their goal is not to maximize their seed round but, rather, to prevent as much animal suffering as possible. Some brainstormed how to use Claude Code and custom agents to handle the coding and administrative tasks in their advocacy work. Others pitched the idea of developing new, cheaper methods for cultivating meat using scientific AI tools such as AlphaFold, which aids in molecular biology research by predicting the three-dimensional structures of proteins. But the real talk of the event was a flood of funding that advocates expect will soon be committed to animal welfare charities—not by individual megadonors, but by AI lab employees.  Much of the funding for the farm animal welfare movement, which includes nonprofits advocating for improved conditions on farms, promoting veganism, and endorsing cultivated meat, comes from people in the tech industry, says Lewis Bollard, the managing director of the farm animal welfare fund at Coefficient Giving, a philanthropic funder that used to be called Open Philanthropy. Coefficient Giving is backed by Facebook cofounder Dustin Moskovitz and his wife, Cari Tuna, who are among a handful of Silicon Valley billionaires who embrace effective altruism

“This has just been an area that was completely neglected by traditional philanthropies,” such as the Gates Foundation and the Ford Foundation, Bollard says. “It’s primarily been people in tech who have been open to [it].” Related StoryEric Schmidt: Why America needs an Apollo program for the age of AIRead next The next generation of big donors, Bollard expects, will be AI researchers—particularly those who work at Anthropic, the AI lab behind the chatbot Claude. Anthropic’s founding team also has connections to the effective altruism movement, and the company has a generous donation matching program. In February, Anthropic’s valuation reached $380 billion and it gave employees the option to cash in on their equity, so some of that money could soon be flowing into charitable coffers. The prospect of new funding sustained a constant buzz of conversation at the summit. Animal welfare advocates huddled in the “Arthropod Room” and scrawled big dollar figures and catchy acronyms for projects on a whiteboard. One person pitched a $100 million animal super PAC that would place staffers with Congress members and lobby for animal welfare legislation. Some wanted to start a media company that creates AI-generated content on TikTok promoting veganism. Others spoke about placing animal advocates inside AI labs. “The amount of new funding does give us more confidence to be bolder about things,” said Aaron Boddy, cofounder of the Shrimp Welfare Project, an organization that aims to reduce the suffering of farmed shrimp through humane slaughter, among other initiatives.  The question of AI welfare But animal welfare was only half the focus of the Sentient Futures summit. Some attendees probed far headier territory. They took seriously the controversial idea that AI systems might one day develop the capacity to feel and therefore suffer, and they worry that this future AI suffering, if ignored, could constitute a moral catastrophe. Related StoryMinds of machines: The great AI consciousness conundrumRead next AI suffering is a tricky research problem, not least because scientists don’t yet have a solid grip on why humans and other animals are sentient. But at the summit, a niche cadre of philosophers, largely funded by the effective altruism movement, and a handful of freewheeling academics grappled with the question. Some presented their research on using LLMs to evaluate whether other LLMs might be sentient. On Debate Night, attendees argued about whether we should ironically call sentient AI systems “clankers,” a derogatory term for robots from the film Star Wars, asking if the robot slur could shape how we treat a new kind of mind.  “It doesn’t matter if it’s a cow or a pig or an AI, as long as they have the capacity to feel happiness or suffering,” says Li.  In some ways, bringing AI sentience into an animal welfare conference isn’t as strange a move as it might seem. Researchers who work on machine sentience often draw on theories and approaches pioneered in the study of animal sentience, and if you accept that invertebrates likely feel pain and believe that AI systems might soon achieve superhuman intelligence, entertaining the possibility that those systems might also suffer may not be much of a leap.
“Animal welfare advocates are used to going against the grain,” says Derek Shiller, an AI consciousness researcher at the think tank Rethink Priorities, who was once a web developer at the animal advocacy nonprofit Humane League. “They’re more open to being concerned about AI welfare, even though other people think it’s silly.” Related StoryIs this the end of animal testing?Read nextBut outside the niche Bay Area circle, caring about the possibility of AI sentience is a harder sell. Li says she faced pushback from other animal welfare advocates when, inspired by a conference on AI sentience she attended in 2023, she rebranded her farm animal welfare advocacy organization as Sentient Futures last year. “Many people were extremely confident that AIs would never become sentient and [argued that] by investing any energy or money into AI welfare, we’re just burning money and throwing it away,” she says. Matt Dominguez, executive director of Compassion in World Farming, echoed the concern. “I would hate to see people pulling money out of farm animal welfare or animal welfare and moving it into something that is hypothetical at this particular moment,” he says. Still, Dominguez, who started partnering with the Shrimp Welfare Project after learning about invertebrate suffering, believes compassion is expansive. “When we get someone to care about one of those things, it creates capacity for their circle of compassion to grow to include others,” he says. by Michelle Kim & Grace HuckinsShareShare story on linkedinShare story on facebookShare story on emailPopularA “QuitGPT” campaign is urging people to cancel their ChatGPT subscriptionsMichelle KimMoltbook was peak AI theaterWill Douglas HeavenHow Pokémon Go is giving delivery robots an inch-perfect view of the worldWill Douglas HeavenMeet the Vitalists: the hardcore longevity enthusiasts who believe death is “wrong”Jessica HamzelouDeep DiveArtificial intelligenceA “QuitGPT” campaign is urging people to cancel their ChatGPT subscriptionsBacklash against ICE is fueling a broader movement against AI companies’ ties to President Trump.
By Michelle Kimarchive pageMoltbook was peak AI theaterThe viral social network for bots reveals more about our own current mania for AI as it does about the future of agents.
By Will Douglas Heavenarchive pageHow Pokémon Go is giving delivery robots an inch-perfect view of the worldExclusive: Niantic's AI spinout is training a new world model using 30 billion images of urban landmarks crowdsourced from players.
By Will Douglas Heavenarchive pageOpenAI is throwing everything into building a fully automated researcherAn exclusive conversation with OpenAI’s chief scientist, Jakub Pachocki, about his firm's new grand challenge and the future of AI.
By Will Douglas Heavenarchive pageStay connectedIllustration by Rose WongGet the latest updates fromMIT Technology ReviewDiscover special offers, top stories,
upcoming events, and more.Enter your emailPrivacy PolicyThank you for submitting your email!Explore more newslettersIt looks like something went wrong.
We’re having trouble saving your preferences.
Try refreshing this page and updating them one
more time. If you continue to get this message,
reach out to us at
customer-service@technologyreview.com with a list of newsletters you’d like to receive.The latest iteration of a legacyFounded at the Massachusetts Institute of Technology in 1899, MIT Technology Review is a world-renowned, independent media company whose insight, analysis, reviews, interviews and live events explain the newest technologies and their commercial, social and political impact.READ ABOUT OUR HISTORYAdvertise with MIT Technology ReviewElevate your brand to the forefront of conversation around emerging technologies that are radically transforming business. From event sponsorships to custom content to visually arresting video storytelling, advertising with MIT Technology Review creates opportunities for your brand to resonate with an unmatched audience of technology and business elite.ADVERTISE WITH US© 2026 MIT Technology ReviewAboutAbout usCareersCustom contentAdvertise with usInternational EditionsRepublishingMIT Alumni NewsHelpHelp & FAQMy subscriptionEditorial guidelinesPrivacy policyTerms of ServiceWrite for usContact uslinkedin opens in a new windowinstagram opens in a new windowreddit opens in a new windowfacebook opens in a new windowrss opens in a new window

The Bay Area’s animal welfare movement is exploring a radical new strategy: recruiting artificial intelligence to address animal suffering, a move underpinned by the belief that advanced AI, or Artificial General Intelligence (AGI), is on the horizon. This exploration was vividly illustrated at the Sentient Futures Summit in San Francisco, a gathering of animal welfare advocates and AI researchers convened by Constance Li and her organization, Sentient Futures. Attendees, largely influenced by the “AGI-pilled” perspective – the conviction that AGI is imminent – recognized that if AGI becomes a reality, it could fundamentally shift the landscape of solutions to societal problems, including animal welfare.

The movement’s approach, heavily influenced by effective altruism—a philosophy focused on maximizing positive impact—has drawn both support and criticism. Critics argue that effective altruism can prioritize long-term, hypothetical harms over immediate needs and may overlook systemic issues like racism and economic inequality. Despite these critiques, the group’s funding largely comes from tech industry figures, particularly those associated with Anthropic, reflecting a growing interest within the Silicon Valley community fueled by the potential of AI.

Jasmine Brazilek, co-founder of Compassion in Machine Learning, highlighted the nascent efforts to develop benchmarks for assessing an AI's understanding of animal welfare, showcasing her work evaluating Large Language Models (LLMs) for their reasoning capabilities. The group’s brainstorming encompassed various applications, including leveraging Claude Code and AlphaFold to streamline advocacy work and potentially revolutionize cultivated meat production. However, they also engaged in a more philosophical debate: the possibility of AI sentience and its associated moral implications.

This questioning of AI welfare wasn’t solely based on utilitarian grounds. Researchers, often funded by effective altruism, were exploring whether AI systems could potentially develop subjective experiences akin to suffering. The discussion extended to provocative considerations—such as ironically labeling sentient AI systems “clankers”—reflecting a willingness to confront challenging, speculative scenarios. Ultimately, the movement’s willingness to confront this prospect, stemming from concerns voiced by individuals like Derek Shiller at Rethink Priorities, distinguishes it from more traditional animal welfare approaches.

The shift towards AI is fueled by a changing funding landscape. While initial investments came from individual tech millionaires, like Dustin Moskovitz, the prospect of substantial funding from AI lab employees, particularly at Anthropic, has bolstered optimism. This convergence—connecting highly specialized AI research with a long-standing animal welfare movement—represents a profound and uniquely Bay Area development.