An AI model trained on prison phone calls now looks for planned crimes in those calls

The model is built to detect when crimes are being “contemplated.”

By James O'Donnell
December 1, 2025

A US telecom company trained an AI model on years of inmates’ phone and video calls and is now piloting that model to scan their calls, texts, and emails in the hope of predicting and preventing crimes.

Securus Technologies president Kevin Elder told MIT Technology Review that the company began building its AI tools in 2023, using its massive database of recorded calls to train AI models to detect criminal activity. It created one model, for example, using seven years of calls made by inmates in the Texas prison system, but it has been working on building other state- or county-specific models.

Over the past year, Elder says, Securus has been piloting the AI tools to monitor inmate conversations in real time (the company declined to specify where this is taking place, but its customers include jails holding people awaiting trial, prisons for those serving sentences, and Immigration and Customs Enforcement detention facilities).

“We can point that large language model at an entire treasure trove [of data],” Elder says, “to detect and understand when crimes are being thought about or contemplated, so that you’re catching it much earlier in the cycle.”

As with its other monitoring tools, investigators at detention facilities can deploy the AI features to monitor randomly selected conversations or those of individuals suspected of criminal activity, according to Elder. The model will analyze phone and video calls, text messages, and emails and then flag sections for human agents to review. These agents then send them to investigators for follow-up.

In an interview, Elder said Securus’ monitoring efforts have helped disrupt human trafficking and gang activities organized from within prisons, among other crimes, and said its tools are also used to identify prison staff who are bringing in contraband. But the company did not provide MIT Technology Review with any cases specifically uncovered by its new AI models.

People in prison, and those they call, are notified that their conversations are recorded. But this doesn’t mean they’re aware that those conversations could be used to train an AI model, says Bianca Tylek, executive director of the prison rights advocacy group Worth Rises.

“That’s coercive consent; there’s literally no other way you can communicate with your family,” Tylek says. And since inmates in the vast majority of states pay for these calls, she adds, “not only are you not compensating them for the use of their data, but you’re actually charging them while collecting their data.”

A Securus spokesperson said the use of data to train the tool “is not focused on surveilling or targeting specific individuals, but rather on identifying broader patterns, anomalies, and unlawful behaviors across the entire communication system.” They added that correctional facilities determine their own recording and monitoring policies, which Securus follows, and did not directly answer whether inmates can opt out of having their recordings used to train AI.
Other advocates for inmates say Securus has a history of violating their civil liberties. For example, leaks of its recordings databases showed the company had improperly recorded thousands of calls between inmates and their attorneys.

Corene Kendrick, the deputy director of the ACLU’s National Prison Project, says the new AI tool enables invasive surveillance, and courts have set few limits on that power. “[Are we] going to stop crime before it happens because we’re monitoring every utterance and thought of incarcerated people?” Kendrick says. “I think this is one of many situations where the technology is way far ahead of the law.”

The company spokesperson said the tool’s function is to make monitoring more efficient amid staffing shortages, “not to surveil individuals without cause.”

Securus will have an easier time funding its AI tool thanks to the company’s recent win in a battle with regulators over how telecom companies can spend the money they collect from inmates’ calls. In 2024, the Federal Communications Commission issued a major reform, shaped and lauded by advocates for prisoners’ rights, that forbade telecoms from passing the costs of recording and surveilling calls on to inmates. Companies were allowed to continue charging inmates a capped rate for calls, but prisons and jails were ordered to pay for most security costs out of their own budgets.
Negative reactions to this change were swift. Associations of sheriffs (who typically run county jails) complained they could no longer afford proper monitoring of calls, and attorneys general from 14 states sued over the ruling. Some prisons and jails warned they would cut off access to phone calls.

While it was building and piloting its AI tool, Securus held meetings with the FCC and lobbied for a rule change, arguing that the 2024 reform went too far and asking that the agency again allow companies to use fees collected from inmates to pay for security.

In June, Brendan Carr, whom President Donald Trump appointed to lead the FCC, said the agency would postpone all deadlines for jails and prisons to adopt the 2024 reforms, and even signaled that it wants to help telecom companies fund their AI surveillance efforts with the fees paid by inmates. In a press release, Carr wrote that rolling back the 2024 reforms would “lead to broader adoption of beneficial public safety tools that include advanced AI and machine learning.”

On October 28, the agency went further: it voted to pass new, higher rate caps and to allow companies like Securus to pass security costs related to recording and monitoring calls (such as storing recordings, transcribing them, or building AI tools to analyze them) on to inmates.

A spokesperson for Securus told MIT Technology Review that the company aims to balance affordability with the need to fund essential safety and security tools. “These tools, which include our advanced monitoring and AI capabilities, are fundamental to maintaining secure facilities for incarcerated individuals and correctional staff and to protecting the public,” they wrote.

FCC commissioner Anna Gomez dissented in last month’s ruling. “Law enforcement,” she wrote in a statement, “should foot the bill for unrelated security and safety costs, not the families of incarcerated people.” The FCC will be seeking comment on these new rules before they take final effect.