LmCast :: Stay tuned in

Our approach to age prediction

Recorded: Jan. 21, 2026, 11:03 a.m.

Original Summarized

Our approach to age prediction

January 20, 2026 | Safety, Company

Building on our work to strengthen teen safety.

We’re rolling out age prediction on ChatGPT consumer plans to help determine whether an account likely belongs to someone under 18, so the right experience and safeguards can be applied to teens. As we’ve outlined in our Teen Safety Blueprint and Under-18 Principles for Model Behavior, young people deserve technology that both expands opportunity and protects their well-being.

Age prediction builds on protections already in place. Teens who tell us they are under 18 when they sign up automatically receive additional safeguards to reduce exposure to sensitive or potentially harmful content. This also enables us to treat adults like adults and let them use our tools in the way that they want, within the bounds of safety. We previously shared our early plans for age prediction, and today we’re sharing more detail as the rollout is underway.

How age prediction works

ChatGPT uses an age prediction model to help estimate whether an account likely belongs to someone under 18.
The model looks at a combination of behavioral and account-level signals, including how long an account has existed, typical times of day when someone is active, usage patterns over time, and a user’s stated age. Deploying age prediction helps us learn which signals improve accuracy, and we use those learnings to continuously refine the model over time.

Users who are incorrectly placed in the under-18 experience will always have a fast, simple way to confirm their age and restore their full access with a selfie through Persona, a secure identity-verification service. Users can check if safeguards have been added to their account and start this process at any time by going to Settings > Account.

When the age prediction model estimates that an account may belong to someone under 18, ChatGPT automatically applies additional protections designed to reduce exposure to sensitive content, such as:

- Graphic violence or gory content
- Viral challenges that could encourage risky or harmful behavior in minors
- Sexual, romantic, or violent role play
- Depictions of self-harm
- Content that promotes extreme beauty standards, unhealthy dieting, or body shaming

This approach is guided by expert input and rooted in academic literature about the science of child development, and it acknowledges known differences among teens in risk perception, impulse control, peer influence, and emotional regulation. While these content restrictions help reduce teens’ exposure to sensitive material, we are focused on continually improving these protections, especially to address attempts to bypass our safeguards.
When we are not confident about someone’s age or have incomplete information, we default to a safer experience.

In addition to these safeguards, parents can choose to customize their teen’s experience further through parental controls, including setting quiet hours when ChatGPT cannot be used, controlling features such as memory or model training, and receiving notifications if signs of acute distress are detected.

What’s next

We’re learning from the initial rollout and continuing to improve the accuracy of age prediction over time. We will closely track the rollout and use those signals to guide ongoing improvements.

In the EU, age prediction will roll out in the coming weeks to account for regional requirements. For more detail, visit our help page. While this is an important milestone, our work to support teen safety is ongoing. We’ll continue to share updates on our progress and what we’re learning, in dialogue with experts including the American Psychological Association, ConnectSafely, and Global Physicians Network.

Author: OpenAI

OpenAI’s approach to age prediction centers on providing a safe, age-appropriate experience for young users of ChatGPT, specifically those under 18. The initiative, detailed in the “Teen Safety Blueprint” and “Under-18 Principles for Model Behavior,” reflects a commitment to balancing opportunity with protection, acknowledging adolescents’ known differences in risk perception, impulse control, peer influence, and emotional regulation. At the core of the system is an age prediction model that continuously refines its accuracy by analyzing behavioral and account-level signals. These signals form a multifaceted assessment: account longevity, typical times of day when someone is active, usage patterns over time, and, crucially, the user’s directly stated age.
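The actual model's features and architecture are not public, so as a rough illustration only, the idea of combining such signals into a single under-18 likelihood can be sketched like this (every feature, weight, and name below is an invented assumption, not OpenAI's implementation):

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    stated_age: int               # age the user entered at sign-up
    account_age_days: int         # how long the account has existed
    late_night_ratio: float      # fraction of activity between 22:00 and 06:00
    weekday_daytime_ratio: float  # fraction of activity during school hours

def under_18_score(s: AccountSignals) -> float:
    """Combine signals into a rough under-18 likelihood in [0, 1].

    Hypothetical hand-tuned weights for illustration; a real system
    would learn weights from data and use far richer features.
    """
    score = 0.0
    if s.stated_age < 18:
        score += 0.6  # a stated age under 18 is the strongest single signal
    if s.account_age_days < 90:
        score += 0.1  # newer accounts carry less history to contradict it
    # Activity concentrated outside school hours weakly suggests a student schedule.
    score += 0.2 * (1.0 - s.weekday_daytime_ratio)
    score += 0.1 * s.late_night_ratio
    return min(score, 1.0)
```

For example, an account with a stated age of 16, created 30 days ago, mostly active in the evening, would score high, while a long-lived account stating age 35 and active during business hours would score low.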

The immediate effect of the age prediction model is the automatic application of enhanced safeguards when an account is flagged as potentially belonging to a minor. These safeguards reduce exposure to categories of content considered particularly concerning for young users: graphic violence or gory content; viral challenges that could encourage risky or harmful behavior; sexual, romantic, or violent role play; depictions of self-harm; and content that promotes extreme beauty standards, unhealthy dieting, or body shaming. This layered approach is not a static solution: OpenAI emphasizes a continuous improvement process, prioritizing enhanced protections while actively addressing attempts to circumvent the safeguards.

A key element of the system, and a vital recourse for users incorrectly categorized, is the integration of Persona, a secure identity-verification service: users placed in the under-18 experience by mistake can confirm their age with a selfie and restore full access. When the model is uncertain or has incomplete information, the platform defaults to the safer under-18 experience. Users can check their safeguard status and start verification at any time through Settings > Account, reflecting a user-centric design focused on control and transparency.
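The decision policy described above, where verification overrides prediction and uncertainty falls back to the safer tier, can be sketched as follows (the threshold value, tier names, and function signature are all assumptions for illustration):

```python
from enum import Enum
from typing import Optional

class Experience(Enum):
    ADULT = "adult"
    UNDER_18 = "under_18"

def choose_experience(score: Optional[float],
                      verified_adult: bool,
                      threshold: float = 0.5) -> Experience:
    """Pick an experience tier from a predicted under-18 score.

    A completed identity verification overrides the prediction entirely;
    a missing score (incomplete information) defaults to the safer tier.
    """
    if verified_adult:
        return Experience.ADULT      # verification wins over prediction
    if score is None:
        return Experience.UNDER_18   # incomplete information: default safe
    return Experience.UNDER_18 if score >= threshold else Experience.ADULT
```

The key design choice this illustrates is asymmetry: a wrong "under-18" decision is recoverable through verification, while a wrong "adult" decision exposes a minor to unsafe content, so ties and unknowns resolve toward the restricted experience.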

Beyond the core age prediction mechanism, OpenAI offers parental controls that provide an additional layer of customized safety. These controls extend to setting quiet hours during which ChatGPT cannot be used, managing features like memory or model training, and receiving notifications when potential signs of acute distress are detected. This combination of automated safeguarding and deliberate parental oversight forms a multi-pronged defensive strategy.
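A quiet-hours rule like the one parents can configure is a simple time-window check; the only subtlety is a window that crosses midnight. This sketch is an assumption about how such a check could work, not OpenAI's actual control surface:

```python
from datetime import time

def in_quiet_hours(now: time, start: time, end: time) -> bool:
    """Return True if `now` falls inside a parent-configured quiet window.

    Handles windows that wrap past midnight (e.g. 22:00-07:00) as well
    as windows contained within a single day.
    """
    if start <= end:                      # window within a single day
        return start <= now < end
    return now >= start or now < end      # window wraps past midnight

# Example: with quiet hours set to 22:00-07:00, a request at 23:30
# falls inside the window, while one at noon does not.
```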

The program's development is ongoing rather than a one-time release. OpenAI intends to closely monitor the rollout, using incoming data to systematically improve the accuracy of the age prediction model over time. This iterative process underscores a commitment to learning and adaptability. To keep its approach well grounded, OpenAI maintains ongoing dialogue with external experts, including the American Psychological Association, ConnectSafely, and the Global Physicians Network, partnerships that support informed, research-backed strategies.

The rollout is phased: in the European Union ("EU"), age prediction will arrive in the coming weeks, adjusted to comply with regional requirements. OpenAI describes this as a significant milestone while noting that its work to support teen safety is ongoing, and the company will continue to share updates on its progress and findings.