Age Verification Is Reaching a Global Tipping Point. Is TikTok’s Strategy a Good Compromise?
Recorded: Jan. 24, 2026, 11:01 a.m.
Jason Parham, Culture | Jan 23, 2026 11:11 AM

TikTok’s new age-detection tech seems like a better solution than automatically banning youth accounts. But experts say it still requires social platforms to surveil users more closely.

Photograph: Kristian Bocsi/Getty Images

Governments worldwide are moving to limit children’s access to social media as lawmakers question whether platforms are capable of enforcing their own minimum age requirements. TikTok recently became the latest tech giant to give in to regulatory pressure when it announced that it would implement a new age-detection system across Europe to better keep kids under 13 off the platform.

The system, which follows a yearlong pilot in the UK meant to proactively identify and remove underage users, relies on a combination of profile data, content analysis, and behavioral signals to evaluate whether an account possibly belongs to a minor. (TikTok requires users to be at least 13 to sign up.) According to a statement from the company, the system does not automatically ban users; it flags accounts it suspects are run by users under 13 and forwards them to human moderators for review. TikTok did not respond to a request for comment.

The European rollout comes amid a global conversation about the negative effects of social media on children, and as governments debate stricter age-based regulatory approaches.
Australia last year became the first country to ban social media for children under 16, including the use of Instagram, YouTube, Snap, and TikTok. The European Parliament is also advocating for mandatory age limits, while Denmark and Malaysia are considering bans for children under 16.

“We are in the middle of an experiment where American and Chinese tech giants have unlimited access to the attention of our children and young people for hours every single day almost entirely without oversight,” Christel Schaldemose, a Danish lawmaker and vice president of the European Parliament, said in November during a parliamentary session that, according to Reuters, “called for an EU-wide ban on access for children under 16 to online platforms, video-sharing sites, and AI companions without parental consent and an outright ban for those younger than 13.”

Advocacy groups in Canada are similarly calling for the creation of a dedicated regulatory body to address online harms affecting young people following the flood of sexualized deepfakes generated on X by its AI chatbot Grok. OpenAI likewise announced that it was rolling out age-prediction software for ChatGPT to determine whether an account likely belongs to someone under 18 so the correct safeguards can be applied. As age verification becomes a new online norm, policymakers are attempting to profoundly reshape the internet of the future.
In the US, 25 states have already enacted some form of age-verification legislation. “Legislatures in the US, just in the calendar year 2026, are likely to pass dozens or possibly hundreds of new laws requiring online age authentication,” says Eric Goldman, a law professor and associate dean at Santa Clara University who has argued that any “government-compelled censorship” should automatically be looked at as “constitutionally suspect.”

“Unless something dramatically changes,” Goldman says, “regulators around the globe are building a legal infrastructure that will require most websites and apps to be age-authenticated.”

As platforms act to address age verification, is TikTok’s strategy of monitoring users instead of banning kids outright a good compromise? That depends on how you feel about digital surveillance.

“This is a fancy way of saying that TikTok will be surveilling its users’ activities and making inferences about them,” says Goldman. Because platform governance is often tied to political motives, and policy solutions sometimes expose children to more harm than help, Goldman refers to age-verification mandates as “segregate-and-suppress laws.”

“Users probably aren’t thrilled about this extra surveillance, and any false positives—like incorrectly identifying an adult as a child—will have potentially major consequences for the wrongly identified user.” Goldman adds that even if this is the right approach for TikTok, most services don’t have enough data about their users to reliably guess people’s ages, so the approach is not really scalable across other platforms.

Though TikTok’s UK pilot led to the removal of thousands of accounts belonging to children under 13, the company also acknowledged that no globally accepted method exists to verify age without undermining user privacy.

Alice Marwick, director of research at the tech policy nonprofit Data & Society, says TikTok’s age-detection tech does seem marginally better than automatic bans, but it still
requires the platform to surveil users ever more closely. “This will inevitably expand systematic data collection, creating new privacy risks without any clear evidence that it improves youth safety,” she says. “Any systems that try to infer age from either behavior or content are based on probabilistic guesses, not certainty, which inevitably proceed with errors and bias that are more likely to impact groups that TikTok’s moderators do not have cultural familiarity with.”

The hypocrisy of the process is not lost on Goldman. Last October, in testimony before the New Zealand Parliament’s education and workforce committee, he noted that if the goal of age verification is to keep children safer, “it is cruelly ironic to force children to regularly disclose highly sensitive private information and increase their exposure to potentially life-changing data-security violations.”

The European Union functions as a test bed that nudges platforms toward global defaults, and other countries are already taking note.

“Historically, if you look at the internet, it was borderless, no-holds-barred, and the rule of law within a country was irrelevant in many ways. But we’re starting to see a shift in that now,” says Lloyd Richardson, director of technology at the Canadian Centre for Child Protection. “Organizationally, we believe that the road they’re going down in Australia is the right approach in terms of having a social media delay.”

Site-wide bans in Canada, what Richardson calls “the nuclear option,” seem unlikely anytime soon. In 2024, Canadian lawmakers introduced the Online Harms Act, which, among its many conditions, would have established a digital safety oversight board to manage and enforce legislation, in addition to appointing an ombudsman to field concerns from social media users. The bill never passed.

“We shouldn’t really put the trust of what’s developmentally appropriate into the hands of big technology companies.
We need to look at developmental experts to answer those questions,” Richardson adds. “We’re not suggesting that regulation is a silver bullet that’s going to solve all those problems, but it would certainly be a helpful place to start with all of these issues we’re dealing with. A delay to 16 years old is much better for children.”

The debate around online child safety raises broader questions about whether technology alone can resolve what is fundamentally a policy and societal challenge. Marwick worries that the core issue isn’t the sophistication of the age-detection method, “it’s whether large-scale age-gating is the right tool to make kids safer online, improve their health and wellness, or give them more control over their digital experiences.” The current system just “creates lots of friction and data collection without necessarily improving outcomes for users.”

While TikTok’s new approach may be viable under EU regulatory frameworks, it’s much harder to see it working in the US. “Here, the legal exposure is significantly higher,” says Jess Miers, an assistant professor at the University of Akron School of Law, given that many state laws are getting repeatedly tied up in First Amendment litigation. She adds that without a federal privacy law, “there are no meaningful guardrails on how this data is stored, shared, or abused not just by the companies collecting it but by the government itself. It could be handed to ICE. It could be used to target women searching for reproductive care. It could be used against LGBTQ+ teens seeking information on gender-affirming treatment. And it absolutely will be used to chill speech.”

TikTok is no exception. The company said the appeals process for its age-detection tech relies on the third-party verification vendor Yoti, and also uses traditional verification tools such as credit cards and government-issued IDs—mechanisms that raise concerns about privacy and trust.
Yoti, which is also used by Spotify and Meta’s Facebook, has drawn criticism from users worried about excessive data collection and potential leaks. The UK company says it has done more than 1 billion age checks and completes, on average, an estimated 1 million per day.

In an email, a Yoti spokesperson told WIRED that it estimates ages without identifying individuals and that, for systems like TikTok’s, it permanently deletes images after an age result is given. The company says it has never reported a data breach related to facial age estimation.

“We have no need and no desire to keep the image used for age checks, so we don’t,” the spokesperson said.

“People do a lot of fearmongering when it comes to age verification,” Richardson says, adding that there is a lot of disinformation around the topic. “But there are absolutely ways to do age verification without AI face scanning, without the disclosure of personal information.”

Jason Parham is a senior writer at WIRED, where he covers internet culture, the future of sex, and the intersection of race and power in America. His WIRED cover story “A People’s History of Black Twitter” was adapted into a Hulu docuseries in 2024.