LmCast :: Stay tuned in

This plugin uses Wikipedia’s AI-spotting guide to make AI writing sound more human

Recorded: Jan. 22, 2026, 6:03 p.m.


A ‘Humanizer’ skill for Claude removes phrases and patterns based on a guide that Wikipedians use to spot AI-generated text.

By Emma Roth, News Writer | The Verge | Jan 22, 2026, 2:48 PM UTC

A new tool aims to help AI chatbots generate more human-sounding text — with the help of Wikipedia’s guide for detecting AI, as reported by Ars Technica. Developer Siqi Chen says he created the tool, called Humanizer, by feeding Anthropic’s Claude the list of tells that Wikipedia’s volunteer editors put together as part of an initiative to combat “poorly written AI-generated content.”

Wikipedia’s guide contains a list of signs that text may be AI-generated, including vague attributions, promotional language like describing something as “breathtaking,” and collaborative phrases, such as “I hope this helps!” Humanizer, which is a custom skill for Claude Code, is supposed to help the AI assistant avoid detection by removing these “signs of AI-generated writing from text, making it sound more natural and human,” according to its GitHub page.

The GitHub page provides some examples of how Humanizer might help Claude catch some of these tells, including by changing a sentence that described a location as “nestled within the breathtaking region” to “a town in the Gonder region,” as well as adjusting a vague attribution, like “Experts believe it plays a crucial role,” to “according to a 2019 survey by…” Chen says the tool will “automatically push updates” when Wikipedia’s AI-detecting guide is updated.

It’s only a matter of time before the AI companies themselves start adjusting their chatbots against some of these tells, too, as OpenAI has already addressed ChatGPT’s overuse of em dashes, which has become an indicator of AI content.
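The actual Humanizer is a prompt-based Claude Code skill, not a script, but the underlying idea — flagging characteristic phrases from Wikipedia's guide — can be illustrated with a small pattern matcher. This is a hypothetical sketch: the category names and the handful of phrases below are sample assumptions drawn from the examples quoted in the article, not the skill's real rule set.

```python
import re

# Illustrative only: a tiny sample of AI-writing "tells" in the spirit of
# Wikipedia's guide. The real skill covers far more patterns than these.
AI_TELLS = {
    "promotional language": [r"\bbreathtaking\b", r"\bnestled (?:in|within)\b"],
    "collaborative phrasing": [r"\bI hope this helps\b", r"\blet me know if\b"],
    "vague attribution": [r"\bexperts believe\b", r"\bmany argue\b"],
}

def find_tells(text: str) -> list[tuple[str, str]]:
    """Return (category, matched phrase) pairs found in the text."""
    hits = []
    for category, patterns in AI_TELLS.items():
        for pattern in patterns:
            for m in re.finditer(pattern, text, flags=re.IGNORECASE):
                hits.append((category, m.group(0)))
    return hits

sample = ("Nestled within the breathtaking Gonder region... "
          "Experts believe it plays a crucial role.")
print(find_tells(sample))
```

A detector like this only flags candidate phrases; the rewriting step — replacing a vague attribution with a concrete source, say — still needs a human or a language model in the loop, which is why Humanizer is built as a Claude skill rather than a find-and-replace tool.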

This Verge article details the creation of “Humanizer,” a custom skill for Anthropic’s Claude AI assistant. Developed by Siqi Chen, the tool draws on Wikipedia’s established guide for identifying AI-generated text. That guide, compiled by Wikipedia’s volunteer editors, lists common characteristics of poorly written AI content, including overly promotional language (“breathtaking”), collaborative phrasing (“I hope this helps!”), and vague attributions. Humanizer’s function is to automatically remove these “tells,” thereby making Claude’s responses sound more natural and less detectable as AI-generated. Chen designed Humanizer to update automatically as Wikipedia’s guide evolves, so Claude continues to avoid detection over time. This reflects a broader trend: AI companies, including OpenAI, are proactively addressing identifiable patterns in their chatbots’ outputs, such as the over-reliance on em dashes as a sign of AI content. The article highlights ongoing efforts to refine AI-generated writing by combating its most readily apparent indicators.