LmCast :: Stay tuned in

Anyone can try to edit Grokipedia 0.2 but Grok is running the show

Recorded: Dec. 3, 2025, 7:03 p.m.

Original

Anyone can try to edit Grokipedia 0.2 but Grok is running the show

A chatbot is editing xAI’s Wikipedia knockoff, and the results are as messy as you’d expect.

By Robert Hart, AI Reporter | Dec 3, 2025, 5:47 PM UTC | The Verge

Robert Hart is a London-based reporter at The Verge covering all things AI and a Senior Tarbell Fellow. Previously, he wrote about health, science and tech for Forbes.

Elon Musk envisions Grokipedia — xAI’s AI-generated, anti-woke spin on Wikipedia — as a definitive monument to human knowledge, something complete and truthful enough to etch in stone and preserve in space. In reality, it’s a hot mess, and it’s only getting worse now that anyone can suggest edits.

Grokipedia was not always editable. When it first launched in October, its roughly 800,000 Grok-written articles were locked. I thought it was a mess then, too — racist, transphobic, awkwardly flattering to Musk, and in places straight-up cloned from Wikipedia — but at least it was predictable. That changed a few weeks ago, when Musk rolled out version 0.2 and opened the door for anyone to propose edits.

Proposing edits on Grokipedia is simple, so simple that the site apparently doesn’t feel a need to give instructions on how to do it. You highlight some text, click the “Suggest Edit” button, and fill in a form with a summary of the proposed change, with an option to suggest content and provide supporting sources. Reviewing those suggestions is Grok, xAI’s problematic, Musk-worshipping AI chatbot. Grok, yes, the chatbot, will also be the one making the actual changes to the site. Most edits on Wikipedia don’t require approval, but there is an active community of human editors who watch the “recent changes” page closely.

It’s not very clear what changes Grok is making, though. The system is confusing and isn’t very transparent. Grokipedia tells me there have been “22,319” approved edits so far, though I’ve no way of seeing what these edits were, on what pages they happened, or who suggested them.
It contrasts with the well-documented editing logs on Wikipedia, which can be sorted by pages, users, or, in the case of anonymous users, IP addresses. My hunch is that many of Grokipedia’s edits are adding internal links to other Grokipedia pages within articles, though I’ve no firm evidence beyond scrolling through a few pages.

The closest I got to seeing where edits were actually happening was on the homepage. There’s a small panel below the search bar displaying five or so recent updates on a rotation, though these only give the name of the article and say that an unspecified edit has been approved. Not exactly comprehensive. They are entirely at the mercy of whatever users feel like suggesting, leading to a confusing mix of stories. Elon Musk and religious pages were the only things that seemed to come up frequently when I looked, interspersed with things like the TV shows Friends and The Traitors UK and requests to note the potential medical benefits of camel urine.

On Wikipedia, there is a clear timeline of edits outlining what happened, who did what, and the reasons for doing so, with viewable chat logs for contentious issues. There are also copious guidelines on editing style, sourcing requirements, and processes, and you can directly compare edited versions of the site to see exactly what changed and where. Grokipedia had no such guidelines — and it showed: many requests were a jumbled mess — but it did have an editing log. It was a nightmare that only hinted at transparency. The log — which shows only a timestamp, the suggestion, and Grok’s decision with its often-convoluted AI-generated reasoning — must be scrolled through manually in a tiny pop-up at the side of the page, with no way to skip ahead or sort by time or type of edit. It’s frustrating even with only a few edits, and it doesn’t show where changes were actually implemented. With more edits, it would be completely unusable.

Unsurprisingly, Grok doesn’t seem to be the most consistent editor. It makes for confounding reading at times, and the edit logs betray the lack of clear guidelines for wannabe editors. For example, the editing log for Musk’s biographical page shows many suggestions about his daughter, Vivian, who is transgender. Editors variously suggest using her name and pronouns in line with her gender identity and those assigned at birth. While it’s almost impossible to follow what happened precisely, Grok’s decision to edit incrementally meant there was a confusing mix of both throughout the page.

As a chatbot, Grok is amenable to persuasion. For a suggested edit to Musk’s biographical page, a user suggested “the veracity of this statement should be verified,” referring to a quote about the fall of Rome being linked to low birth rates. In a reply far wordier than it needed to be, Grok rejected the suggestion as unnecessary. For a similar request with different phrasing, Grok reached the opposite conclusion, accepting the suggestion and adding the kind of information it had previously said was unnecessary. It isn’t too taxing to imagine how one might game requests to ensure edits are accepted.

While this is all technically possible on Wikipedia, the site has a small army of volunteer administrators — selected after a review process or election — to keep things in check. They enforce standards by blocking accounts or IP addresses from editing and locking down pages in cases of vandalism or edit wars. It’s not clear Grokipedia has anything in place to do the same, leaving it completely at the mercy of random people and a chatbot that once called itself MechaHitler. The issue showed itself on several pages related to World War II and Hitler, for example. I found repeated (rejected) requests to note the dictator was also a painter and that far fewer people had died in the Holocaust than actually did.
The corresponding pages on Wikipedia were “protected,” meaning they could only be edited by certain accounts. There were also detailed logs explaining the decision to protect them. If the editing system — or the site in general — were easier to navigate, I’m sure I’d find more examples.

Pages like these are obvious targets for abuse, and it’s no surprise they’re among the first hit by malicious editors. They won’t be the last, and with Grokipedia’s chaotic editing system and Grok’s limited guardrails, it may soon be hard to tell what’s vandalism and what isn’t. At this rate, Grokipedia doesn’t feel poised for the stars; it feels poised to collapse into a swamp of barely readable disinformation.

Grokipedia 0.2: A Chaotic Experiment in AI-Driven Knowledge

xAI’s Grokipedia, an AI-generated attempt to replicate Wikipedia, is currently a deeply problematic and largely unpredictable creation. Initially locked down, the platform was opened to public edit suggestions with version 0.2. This decision, driven by Elon Musk, has unleashed a torrent of proposed edits, many of them flawed and some deeply unsettling.

The initial state of Grokipedia, while messy, at least offered predictability. Its roughly 800,000 articles were written by Grok, xAI’s AI chatbot, and, despite exhibiting biases and a strong echo of Musk’s own views, the output was consistent. Opening the platform to edits, however, has turned Grokipedia into a hotbed of inconsistency and some genuinely concerning behavior.

The editing process itself is remarkably simple: highlight some text, click “Suggest Edit,” and fill in a form. Anyone can propose changes, with Grok, the chatbot, acting as both reviewer and editor. Crucially, the system lacks transparency. Grokipedia reports 22,319 approved edits so far, but the details — which pages were altered, who suggested the changes, and the rationale behind them — are largely obscured. Hart’s hunch is that many of these edits simply add internal links to other Grokipedia pages, though he has no firm evidence beyond scrolling through a few pages.

These limitations stand in stark contrast to Wikipedia’s well-documented editing logs, which provide a clear timeline of changes, user attribution, and detailed rationale. Grokipedia offers only a confusing pop-up log: a manually scrolled pane showing just a timestamp, the suggestion, and Grok’s automated response, with no ability to sort or skip ahead. This makes it exceptionally difficult to assess the system’s overall impact or identify areas of concern.

The chaos isn’t merely a matter of minor stylistic variations or factual errors. Suggested edits frequently target sensitive areas. The editing log for Elon Musk’s biographical page shows many competing suggestions about his daughter, Vivian, who is transgender, and Grok’s incremental decisions left a confusing mix of names and pronouns throughout the page. Pages related to World War II and Hitler, meanwhile, drew repeated (rejected) requests to minimize the Holocaust’s death toll. The absence of safeguards — no human administrators to enforce standards or lock down pages — exacerbates these risks, making Grokipedia a potentially dangerous repository of misinformation.

The very nature of Grok’s editing process — its inconsistent, persuadable handling of suggestions — creates an inherently unstable system. A chatbot making editorial decisions based on user input does not lend itself to quality control. While Musk frames Grokipedia as a “definitive monument to human knowledge,” its current state is far from that goal. Instead, it resembles a rapidly evolving, largely incoherent, and potentially harmful reflection of the internet’s biases, one more likely to collapse into a swamp of disinformation than to become a reliable source of knowledge.