LmCast :: Stay tuned in

Wikipedia bans AI-generated articles

Recorded: March 26, 2026, 5 p.m.

Original

Wikipedia bans AI-generated articles | The Verge
Wikipedia editors can only use AI for basic copy editing or translations.
By Emma Roth | Mar 26, 2026, 3:02 PM UTC

Wikipedia will no longer allow editors to write or rewrite articles using AI. The update, which was added to Wikipedia’s guidelines late last week, cites the tendency for AI-written articles to violate “several of Wikipedia’s core content policies” as the reason for the ban.

The change applies to the English version of Wikipedia and will still allow editors to use AI in certain scenarios. That includes using large language models to “suggest basic copyedits” to their writing, but only if it “does not introduce content of its own.” Editors can also use AI to translate articles from another language’s Wikipedia into English. However, they still must follow the site’s rules on LLM-assisted translations, which require editors to have enough knowledge of the original language to confirm the accuracy of the translation.

The new policy warns that some people “may have similar writing styles to LLMs,” and editors will need to find more than just “stylistic or linguistic signs” to justify potential restrictions on their editing capabilities. “It is best to consider the text’s compliance with core content policies and recent edits by the editor in question,” the guidelines state.

Related: How Wikipedia is fighting AI slop content

Wikipedia editors have been contending with AI-generated articles for months now, leading the community to implement a new policy to allow for the “speedy deletion” of poorly written articles. Editors also formed WikiProject AI Cleanup, an initiative meant to combat AI-written content and help others identify it.

This most recent change to Wikipedia’s guidelines was proposed by Chaotic Enby, sparking a lengthy discussion between editors. The proposal eventually passed with “overwhelming support,” concluding that the policy “targets blatantly problematic issues with LLM use, while still giving leeway for what are seen as decent uses for it.”

Summarized

Wikipedia has implemented a significant policy shift aimed at curbing the spread of low-quality content generated by large language models (LLMs). The change, adopted after extensive community discussion, restricts editors from using AI to write or substantially rewrite articles. The core rationale is that LLM-written articles tend to violate several of Wikipedia’s core content policies, including neutrality, verifiability, and sourcing, producing what the community has come to call “slop content.”

The policy now permits AI only for narrow tasks such as basic copyediting and translation. Editors may use an LLM to suggest copyedits to their own writing, provided it does not introduce content of its own, and may use AI-assisted translation only if they know the original language well enough to verify the translation’s accuracy, preserving human oversight and contextual understanding. The guidelines also caution that some people naturally write in styles resembling LLM output, so stylistic or linguistic signs alone are not enough to justify restricting an editor; reviewers should instead weigh the text’s compliance with core content policies and the editor’s recent contributions. This approach reflects a judgment that AI, in its current state, lacks the nuanced judgment and contextual awareness needed to consistently uphold Wikipedia’s standards.

The formal policy change was proposed by the editor Chaotic Enby and passed after a lengthy community debate with overwhelming support, reflecting widespread concern that AI could undermine the platform’s long-standing commitment to accuracy and reliability. It also builds on existing efforts: the community had earlier adopted a “speedy deletion” process for poorly written AI-generated articles, and editors formed WikiProject AI Cleanup, an initiative to combat AI-written content and help others identify it. Together, these measures show a concerted, collaborative response that prioritizes editorial quality over editing convenience.

Ultimately, the revised policy is a calculated move to safeguard the integrity and trustworthiness of Wikipedia’s vast repository of knowledge. It acknowledges the potential utility of AI tools while setting firm boundaries against the erosion of quality that could damage the platform’s core mission. The stance is pragmatic and adaptive, consistent with Wikipedia’s history of responding to technological change while protecting the fundamental principles that have underpinned its success.