Reddit accounts with ‘fishy’ bot-like behavior will soon need to prove they’re human
Recorded: March 25, 2026, 6:02 p.m.
| Original | Summarized |
Human verification ‘will be rare and will not apply to most users,’ according to Reddit CEO Steve Huffman.

by Emma Roth, News Writer | Mar 25, 2026, 4:10 PM UTC
Illustration: The Verge

Reddit is taking new steps to identify bots on the platform — a process that may require some users to confirm that they’re human. In a post on Wednesday, Reddit CEO Steve Huffman writes that the company will introduce a labeling system for accounts registered as bots, and ask users with “automated” or “fishy behavior” to verify that they’re human using methods like fingerprint scanning or submitting their ID.

With this update, developers can register automated accounts with Reddit, which will then receive an “[APP]” label. However, Reddit also notes that it will be on the lookout for unlabeled accounts with suspicious behavior. “If something suggests an account isn’t human, including automation (hi, web agents), we may ask it to confirm there’s a person behind it,” Huffman writes, adding that these cases “will be rare and will not apply to most users.”

Reddit will ask users behind suspected bot accounts to verify that they’re human, and is exploring several verification methods to do so without actually identifying who the person is. That includes asking users to complete a passkey check, such as scanning their fingerprint on a smartphone, or entering a PIN.
It’s also looking into using third-party biometric services, like the Sam Altman-backed World ID, which uses an eyeball-scanning orb to verify humanness.

Huffman brings up third-party ID verification services as well, which he says are “the least secure, least private, and least preferred” verification method. He adds that the UK and Australia already require it to support this type of verification. Suspected bot accounts that are unable to verify their humanness “may be restricted,” according to Huffman.

Last year, Reddit began testing account verification for brands and individual users. Huffman hinted at launching a bot verification system in a letter to shareholders in February, and floated the idea of using Face ID to verify a user’s humanness during an interview on TBPN this week.

Along with this update, Huffman says Reddit is going to make reporting suspected bots “easier and more flexible” — though the platform isn’t going to come down too hard on all accounts using AI to write. “We’ll monitor its usage and see what happens as we crack down even more on automated accounts,” Huffman says.
“Our current focus is to ensure there is a real, live human behind the accounts you’re seeing.”
|
Reddit, the prominent social media platform, is implementing a multifaceted strategy to combat the proliferation of bot accounts, driven by concerns about platform integrity and user experience. According to a statement from CEO Steve Huffman, the company will introduce a labeling system for accounts registered as bots and will actively pursue verification of accounts exhibiting “automated” or “fishy” behavior. This initiative stems from a growing recognition of the detrimental impact of automated accounts – including “web agents” – on genuine user engagement and the overall quality of discussion on the platform. Huffman indicated that verification will be rare and will not apply to most users.

The core of Reddit’s approach is a staged process. Developers can register automated accounts, which will then receive an “[APP]” label signifying their non-human origin. Reddit’s monitoring systems will also scrutinize unlabeled accounts for suspicious activity: if something suggests an account isn’t human, the platform will ask the person behind it to verify their humanity.

This verification will draw on a range of methods, with a passkey check – such as fingerprint scanning on a smartphone or PIN entry – as the most immediate option. Reddit is also exploring third-party biometric services such as the Sam Altman-backed World ID, which uses an eyeball-scanning device. Huffman flagged the drawbacks of third-party ID verification services, calling them “the least secure, least private, and least preferred” method, while noting that the UK and Australia already require the platform to support this type of verification.
Should an account fail to complete the verification process, Huffman stated that it could be subject to restrictions. This tiered approach balances the need to protect the platform from malicious automation against a desire to minimize disruption for legitimate users. A key element of the strategy is simplifying the reporting process for suspected bot accounts, giving users a more flexible and efficient way to flag potentially problematic ones. While Huffman acknowledged that accounts may continue to use AI to write, the primary focus remains ensuring that genuine human users are behind the accounts driving conversations on Reddit.

Huffman foreshadowed this move in a February letter to shareholders and in an interview on TBPN this week, in which he floated the idea of using Face ID to verify a user’s humanness. The implementation aligns with broader industry efforts to identify and mitigate bots: as automated accounts grow more sophisticated – whether used for spam, manipulation, or spreading misinformation – platforms face pressure to take proactive measures to safeguard their integrity. Reddit’s strategy extends its earlier account-verification efforts for brands and individual users to cover a wider range of automated activity and to explore more robust verification technologies. Continued monitoring of AI usage on the platform will be critical to its success, as Reddit assesses the impact of automated content on community dynamics and works to ensure a productive and engaging user experience. |