Why chatbots are starting to check your age
Confirming which users are kids is politically fraught and a technical nightmare. Here’s what moves from OpenAI and the FTC tell us.

By James O’Donnell, January 26, 2026
Photo illustration by Sarah Rogers/MITTR | Photos: Getty

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

How do tech companies check if their users are kids? This question has taken on new urgency recently thanks to growing concern about the dangers that can arise when children talk to AI chatbots. For years Big Tech asked for birthdays (that one could make up) to avoid violating child privacy laws, but they weren’t required to moderate content accordingly.

Two developments over the last week show how quickly things are changing in the US and how this issue is becoming a new battleground, even among parents and child-safety advocates.

In one corner is the Republican Party, which has supported laws passed in several states that require sites with adult content to verify users’ ages. Critics say this provides cover to block anything deemed “harmful to minors,” which could include sex education. Other states, like California, are coming after AI companies with laws to protect kids who talk to chatbots (by requiring them to verify who’s a kid). Meanwhile, President Trump is attempting to keep AI regulation a national issue rather than allowing states to make their own rules. Support for various bills in Congress is constantly in flux.

So what might happen? The debate is quickly moving away from whether age verification is necessary and toward who will be responsible for it. This responsibility is a hot potato that no company wants to hold.

In a blog post last Tuesday, OpenAI revealed that it plans to roll out automatic age prediction. In short, the company will apply a model that uses factors like the time of day, among others, to predict whether a person chatting is under 18. For those identified as teens or children, ChatGPT will apply filters to “reduce exposure” to content like graphic violence or sexual role-play. YouTube launched something similar last year.

If you support age verification but are concerned about privacy, this might sound like a win. But there’s a catch. The system is not perfect, of course, so it could classify a child as an adult or vice versa. People who are wrongly labeled under 18 can verify their identity by submitting a selfie or government ID to a company called Persona. Selfie verifications have issues: They fail more often for people of color and those with certain disabilities. Sameer Hinduja, who co-directs the Cyberbullying Research Center, says the fact that Persona will need to hold millions of government IDs and masses of biometric data is another weak point. “When those get breached, we’ve exposed massive populations all at once,” he says.

Hinduja instead advocates for device-level verification, where a parent specifies a child’s age when setting up the child’s phone for the first time. This information is then kept on the device and shared securely with apps and websites. That’s more or less what Tim Cook, the CEO of Apple, recently lobbied US lawmakers to call for. Cook was fighting lawmakers who wanted to require app stores to verify ages, which would saddle Apple with lots of liability.
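OpenAI has not published how its age-prediction model works beyond naming the time of day as one input, so the sketch below is purely illustrative of the general pattern its blog post describes: a classifier scores a handful of behavioral signals, and accounts scored as likely under 18 get the stricter content policy, with the selfie/ID check offered as the appeal route. Every signal, weight, and threshold here is an invented placeholder, not OpenAI’s actual system.

```python
from dataclasses import dataclass

# Hypothetical signals -- OpenAI has only said its model uses factors "like the
# time of day"; everything else here is an invented stand-in for illustration.
@dataclass
class SessionSignals:
    local_hour: int           # hour of day the user is chatting, 0-23
    account_age_days: int
    avg_message_length: float

def predict_minor_score(sig: SessionSignals) -> float:
    """Toy stand-in for a trained age-prediction model: returns a 0-1 score
    for "this user is probably under 18". The weights are made up."""
    score = 0.0
    if 15 <= sig.local_hour <= 21:    # e.g. after-school evening hours
        score += 0.3
    if sig.account_age_days < 30:
        score += 0.3
    if sig.avg_message_length < 40:
        score += 0.2
    return min(score, 1.0)

def policy_for(sig: SessionSignals, threshold: float = 0.5) -> dict:
    """Gate content rules on the prediction, as the blog post describes:
    likely minors get reduced exposure to content like graphic violence and
    sexual role-play; anyone wrongly flagged can appeal via the ID check."""
    likely_minor = predict_minor_score(sig) >= threshold
    return {
        "graphic_violence": "reduced" if likely_minor else "default",
        "sexual_roleplay": "reduced" if likely_minor else "default",
        "offer_id_verification": likely_minor,
    }

print(policy_for(SessionSignals(local_hour=16, account_age_days=10,
                                avg_message_length=25.0)))
```

The device-level alternative Hinduja describes can be sketched just as loosely: a parent records an age once during device setup, the value never leaves the device, and apps can only ask a coarse yes-or-no question. This is a conceptual outline, not Apple’s or any platform’s real API.

```python
from typing import Optional

# Conceptual sketch of a device-level age signal -- not any platform's actual
# API. No birthdate or government ID ever leaves the device; apps only learn
# the answer to a yes/no question.
class DeviceAgeSignal:
    def __init__(self) -> None:
        self._birth_year: Optional[int] = None   # stored only on this device

    def set_by_parent(self, birth_year: int) -> None:
        self._birth_year = birth_year

    def is_under(self, age: int, current_year: int) -> bool:
        # Design choice (one of several possible): if no age was ever set,
        # err toward treating the user as a possible minor.
        if self._birth_year is None:
            return True
        return (current_year - self._birth_year) < age

# An app or website queries the device instead of collecting an ID itself:
signal = DeviceAgeSignal()
signal.set_by_parent(birth_year=2012)
print(signal.is_under(18, current_year=2026))   # True -> apply minor defaults
```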
More signals of where this is all headed will come on Wednesday, when the Federal Trade Commission—the agency that would be responsible for enforcing these new laws—is holding an all-day workshop on age verification. Apple’s head of government affairs, Nick Rossi, will be there. He’ll be joined by higher-ups in child safety at Google and Meta, as well as a company that specializes in marketing to children.

The FTC has become increasingly politicized under President Trump (his firing of the sole Democratic commissioner was struck down by a federal court, a decision that is now pending review by the US Supreme Court). In July, I wrote about signals that the agency is softening its stance toward AI companies. Indeed, in December, the FTC overturned a Biden-era ruling against an AI company that allowed people to flood the internet with fake product reviews, writing that it clashed with President Trump’s AI Action Plan.

Wednesday’s workshop may shed light on how partisan the FTC’s approach to age verification will be. Red states favor laws that require porn websites to verify ages (but critics warn this could be used to block a much wider range of content). Bethany Soye, a Republican state representative who is leading an effort to pass such a bill in her state of South Dakota, is scheduled to speak at the FTC meeting. The ACLU generally opposes laws requiring IDs to visit websites and has instead advocated for an expansion of existing parental controls.

While all this gets debated, though, AI has set the world of child safety on fire. We’re dealing with increased generation of child sexual abuse material, concerns (and lawsuits) about suicides and self-harm following chatbot conversations, and troubling evidence of kids’ forming attachments to AI companions. Colliding stances on privacy, politics, free expression, and surveillance will complicate any effort to find a solution.

Write to me with your thoughts.
Growing concern about children’s interactions with AI chatbots is driving a significant shift in how tech companies approach age verification. Anxieties about child safety, including increased generation of child sexual abuse material, self-harm linked to chatbot conversations, and kids forming unhealthy attachments to AI companions, have prompted a flurry of legislative activity and corporate responses. For years, Big Tech relied on collecting birthdays, which users could easily fake and which carried no obligation to moderate content accordingly. The question of who checks ages, and how, has since become a complex and politically charged battleground.
OpenAI, the maker of ChatGPT, is at the center of this change with plans to roll out automatic age prediction. Using signals such as the time of day a person is chatting, the company intends to predict which users are under 18 and apply filters that limit their exposure to potentially harmful content. YouTube rolled out a similar age-estimation system last year. The approach has an obvious weakness: the model will inevitably make mistakes, labeling some minors as adults and some adults as minors.
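No error rates have been published for any of these systems, so the numbers below are made up, but the trade-off is easy to make concrete: wherever the threshold on a predicted-minor score is set, some minors slip through as adults while some adults get pushed into the selfie-or-ID fallback, and moving the threshold only trades one error for the other. A toy sketch:

```python
# Toy illustration of the misclassification trade-off; the scores and labels
# are invented and do not reflect any real system's performance.
users = [
    # (true_is_minor, predicted_minor_score)
    (True, 0.9), (True, 0.7), (True, 0.4),    # the last minor looks "adult-like"
    (False, 0.2), (False, 0.1), (False, 0.6), # the last adult looks "minor-like"
]

def error_rates(threshold: float) -> tuple[float, float]:
    """Return (share of minors missed, share of adults wrongly flagged)."""
    minors = [s for is_minor, s in users if is_minor]
    adults = [s for is_minor, s in users if not is_minor]
    missed_minors = sum(s < threshold for s in minors) / len(minors)
    flagged_adults = sum(s >= threshold for s in adults) / len(adults)
    return missed_minors, flagged_adults

for t in (0.3, 0.5, 0.8):
    missed, flagged = error_rates(t)
    print(f"threshold={t}: minors treated as adults={missed:.0%}, "
          f"adults sent to ID check={flagged:.0%}")
```

Raising the threshold here cuts the number of adults sent to the ID check to zero but lets two of the three minors through, which is exactly the tension the fallback process is meant to absorb.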
To address this, OpenAI is working with Persona, an identity-verification company, so that users who are wrongly flagged as minors can prove their age by submitting a selfie or a government-issued ID. That fallback has problems of its own. Selfie-based checks fail more often for people of color and for people with certain disabilities, raising concerns about bias and accessibility. And as Sameer Hinduja, co-director of the Cyberbullying Research Center, points out, storing millions of government IDs and large amounts of biometric data creates a substantial breach risk.
The drive for age verification is not solely a technological challenge; it is deeply intertwined with evolving political and regulatory frameworks. US states are enacting laws that require age verification for sites with adult content, though critics argue these could be used to suppress a much wider range of lawful material. In South Dakota, Republican state representative Bethany Soye is leading an effort to pass such a bill. The ACLU, by contrast, opposes requiring IDs to visit websites and advocates expanding existing parental controls instead, reflecting a broader tension between child safety, privacy, and free expression.
The situation is further complicated by the increasing politicization of the issue under the Trump administration. The FTC’s shifting stance, including its reversal of a Biden-era ruling against an AI company on the grounds that the ruling clashed with Trump’s AI Action Plan, shows how political considerations now shape its enforcement decisions. Wednesday’s FTC workshop is expected to shed light on how the agency will approach age verification. Nick Rossi, Apple’s head of government affairs, will take part, alongside child-safety leads from Google and Meta and a company that specializes in marketing to children.
For now, the focus is on the technical implementation of age prediction and verification, but the conversation extends far beyond identification. The debate raises fundamental questions about privacy, surveillance, freedom of expression, and the appropriate role of technology in safeguarding children. The shifting and often contradictory stances of government agencies and private companies underscore how unsettled this landscape remains, and James O’Donnell’s reporting highlights how quickly it is evolving.