LmCast :: Stay tuned in

Dead Internet Theory

Recorded: Jan. 19, 2026, 10:03 a.m.

Original

Dead Internet Theory - Dmitry Kudryavtsev

Dmitry Kudryavtsev
Jan 18, 2026 · 6 min read
in Artificial Intelligence · #Technology · #Internet

The other day I was browsing my one-and-only social network — which is not a social network, but I’m tired of arguing with people online about it — HackerNews.
It’s like this dark corner of the internet, where anonymous tech-enthusiasts, scientists, entrepreneurs, and internet-trolls, like to lurk.
I like HackerNews.
It helps me stay up to date on recent tech news (like Cloudflare acquiring Astro, which makes me happy for the Astro team, but also sad and worried, since I really like Astro and big tech has a tendency to ruin things); it mostly avoids politics; and it’s not a social network.
And, in the fashion of HackerNews, I stumbled upon someone sharing their open-source project.
It’s great to see people work on their projects and decide to show them to the world.
I think people underestimate the fear of actually shipping stuff, which involves sharing it with the world.
Glancing at the comment section, I saw other anonymous participants questioning how much of said open-source project was AI-generated.
I grabbed my popcorn, and started to follow this thread.
More accusations started to appear: the commit timeline does not make sense; the code has AI-generated comments; etc.
And at the same time, the author tried to reply to every comment, insisting that they wrote it 100% without using AI.

I don’t mind people using AI to write code, even though I tried to resist it myself, until eventually succumbing to it.
But I think it’s fair to disclose the use of AI, especially in open-source software.
People on the internet are, mostly, anonymous, and it’s not always possible to verify the claims or expertise of particular individuals.
But as the amount of code grows, with everyone using AI to generate whatever app they want, it’s impossible to verify every piece of code we are going to use.
So it’s fair to know, I think, if some project is AI generated and to what extent.
In the end, LLMs are just probabilistic next-token generators.
And while they are getting extremely good at most simple tasks, they have the potential to wreak havoc with harder problems or edge-cases (especially if there are no experienced engineers, with domain knowledge, to review the generated code).
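To make “probabilistic next-token generator” concrete, here is a toy sketch. The three-word vocabulary and hand-picked logits are made up for illustration; a real LLM does the same softmax-and-sample step over a vocabulary of roughly a hundred thousand tokens, with logits produced by a neural network:

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, rng=random):
    """Sample one token, weighted by the softmaxed logits."""
    probs = softmax(logits)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Toy example: the model heavily favors "right".
vocab = ["right", "wrong", "banana"]
logits = [4.0, 1.0, -2.0]
probs = softmax(logits)
```

Even with one token heavily favored, the output is sampled, not looked up, which is why the same prompt can yield different completions and why low-probability mistakes still slip through on hard problems.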

As I was following this thread, I started to see a pattern: the comments of the author looked AI generated too:

The use of em-dashes, which on most keyboards require a special key combination that most people don’t know; and while in Markdown two dashes render as an em-dash, that’s not true on HackerNews (hence you often see -- in HackerNews comments, where the author is probably used to a Markdown renderer turning it into an em-dash)
The notorious “you are absolutely right”, which no living human ever used before, at least not that I know of
The other notorious “let me know if you want to [do that thing] or [explore this other thing]” at the end of the comment
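The tells above can be sketched as a toy heuristic. Everything here is hypothetical (the pattern list, the function names are mine, not from any real detector): each signal alone proves nothing — plenty of humans type em-dashes — so at best this ranks comments for closer human reading:

```python
import re

# Hypothetical heuristic, not a real AI detector: it only flags the
# three surface "tells" discussed above.
AI_TELLS = [
    ("em-dash", re.compile("—")),
    ("sycophantic opener",
     re.compile(r"\byou(?:'| a)re absolutely right\b", re.IGNORECASE)),
    ("trailing offer",
     re.compile(r"let me know if you (?:want|would like)", re.IGNORECASE)),
]

def score_comment(text: str) -> list[str]:
    """Return the names of the tells present in a comment."""
    return [name for name, pattern in AI_TELLS if pattern.search(text)]
```

For example, `score_comment("You're absolutely right — let me know if you want a full rewrite.")` trips all three patterns, while an ordinary comment trips none.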

I sat there, refreshing the page, watching the author get confronted over the use of AI in both their code and their comments, all while they claimed not to have used AI at all.
Honestly, I was thinking I was going insane.
Am I wrong to suspect them?
What if people DO USE em-dashes in real life?
What if English is not their native language and in their native language it’s fine to use phrases like “you are absolutely right”?
Is this even a real person?
Are the people who are commenting real?
And then it hit me.
We have reached the Dead Internet.
The Dead Internet Theory claims that since around 2016 (a whopping 10 years already), the internet has been mostly dead, i.e. most interactions are between bots, and most content is machine-generated, either to sell you stuff or to game SEO (in order to sell you stuff).
I’m ashamed (no: proud) to say that I spent a good portion of my teenage years on the internet, chatting with, and learning from, real people who knew more than me.
Back in the early 2000s, there were barely any bots on the internet.
The average non-tech human didn’t know anything about phpBB forums, or the weird people with pseudonyms who hung out in there.
I spent countless hours inside IRC channels and on phpBB forums, learning things like network programming, OS development, game development, and of course web development (which has been my profession for almost two decades now).
I’m basically a graduate of the Internet University.
Back then, nobody had doubts that they were talking to a human-being.
Sure, you could think that you spoke to a hot girl, who in reality was a fat guy, but hey, at least they were real!
But today, I no longer know what is real.
I saw a picture on LinkedIn, from a real tech company, posting about their “office vibes” and their happy employees.
And then I went to the comment section, and sure enough, the picture was AI-generated (mangled text that doesn’t make sense, weird hand artifacts).
It was posted by an employee of the company, it showed other employees of said company, and it was altered with AI to showcase a different reality.
Hell, maybe the people on the picture do not even exist!
And these are mild examples.
I don’t use social networks (and no, HackerNews is not a social network), but I hear horror stories about AI generated content on Facebook, Xitter, TikTok, ranging from photos of giants that built the pyramids in Egypt, all the way to short videos of pretty girls saying that the EU is bad for Poland.
I honestly got sad that day.
Hopeless, I would even say.
AI is easily available to the masses, which allows them to generate a shitload of AI slop.
People no longer need to write comments or code; they can just feed a prompt to AI agents that will generate the next “you are absolutely right” masterpiece.
I like technology.
I like software engineering, and the concept of the internet where people could share knowledge and create communities.
Were there malicious actors back then on the internet?
For sure.
But what I am seeing today makes me question whether the future we are headed toward is one where technology is still useful.
Or, rather, a future where bots talk with bots, and human knowledge just gets recycled and repackaged into “10 steps to fix the [daily problem] you are having” for the sake of selling you more stuff.
© 2026 Dmitry Kudryavtsev | @skwee357. Unless otherwise noted, all content is generated by a human. Content is licensed under CC BY-NC 4.0.

Summarized

Dmitry Kudryavtsev’s essay “Dead Internet Theory” presents a critical reflection on the evolving nature of online interactions, particularly in the context of artificial intelligence (AI) and its increasing influence on digital content creation. The piece begins with a personal anecdote about Kudryavtsev’s experience on HackerNews, a platform he describes as a “dark corner of the internet” where tech enthusiasts and anonymous users engage in discussions. While browsing this space, he encounters a thread questioning the authenticity of an open-source project’s codebase, with users accusing the author of using AI-generated content. The debate escalates as critics point to anomalies in the project’s commit history and code comments, while the author insists they wrote everything manually. Kudryavtsev acknowledges his own reluctance to use AI in coding, though he admits succumbing to its convenience. However, he emphasizes the importance of transparency about AI’s role in software development, particularly in open-source projects where trust and verification are paramount. He argues that while AI excels at simple tasks, its limitations in handling complex or edge cases necessitate human oversight. This incident prompts him to consider a broader phenomenon: the “Dead Internet,” a theory describing the growing dominance of machine-generated content and automated interactions online.

The Dead Internet Theory posits that since around 2016, the internet has shifted toward a state where most interactions are mediated by bots or AI systems rather than genuine human engagement. This transformation, Kudryavtsev suggests, has eroded the authenticity of digital spaces, replacing organic communication with algorithmically generated content designed to manipulate user behavior or optimize for commercial interests. The essay draws on his personal history with the internet, recalling a time in the early 2000s when online communities were populated by real people, albeit often with pseudonyms and occasional deception. He contrasts this era with the present, where even basic interactions—such as comments on social media or code submissions on platforms like HackerNews—are increasingly suspect. Kudryavtsev’s skepticism is rooted in specific observations, such as the prevalence of AI-generated text patterns, like frequent use of em-dashes (a character most users don’t know how to type) or phrases like “you are absolutely right,” which he claims no human would naturally employ. These anomalies, he argues, suggest a growing reliance on AI tools that generate content in ways that mimic human behavior but lack the nuance and intentionality of real interaction.

The author’s concerns extend beyond coding to broader aspects of digital culture, including the proliferation of AI-generated imagery and videos. He cites a LinkedIn post from a tech company showcasing “office vibes” with AI-mangled photographs, where the images exhibit artifacts and incoherent text. This example highlights his fear that even professional content is being manipulated to create misleading narratives, blurring the line between reality and fabrication. Kudryavtsev also references anecdotal reports of AI-generated content on platforms like Facebook, X (formerly Twitter), and TikTok, ranging from implausible historical claims to fabricated videos of individuals making inflammatory statements. These examples underscore his belief that AI’s accessibility has democratized content creation but at the cost of authenticity, enabling users to produce and disseminate material that is indistinguishable from human-generated work. The essay reflects a growing unease about the implications of this trend, particularly in an age where trust in digital media is already fragile.

Kudryavtsev’s critique of the “Dead Internet” is not merely a lament for lost authenticity but also a warning about the structural consequences of AI’s dominance. He questions whether the internet, once a space for knowledge-sharing and collaboration, is becoming a feedback loop of recycled content optimized for engagement rather than enlightenment. The essay touches on the commercialization of online spaces, where AI-generated material is often designed to extract user attention or drive consumption. This shift, he argues, risks reducing human creativity and critical thinking to mere inputs for algorithms that prioritize virality over value. Kudryavtsev’s own background as a software engineer and long-time internet user informs his perspective, as he reflects on the early days of online communities where human interaction was central to learning and innovation. He contrasts this with his current experience, where he feels increasingly disconnected from the digital world, unsure whether interactions are genuine or generated by systems designed to mimic human behavior.

A recurring theme in the essay is the tension between technological progress and its unintended consequences. While Kudryavtsev acknowledges the benefits of AI, such as its ability to automate repetitive tasks and assist in complex problem-solving, he is wary of its misuse. He highlights the dangers of relying on AI without transparency or accountability, particularly in fields like software development where errors can have significant real-world impacts. The case of the open-source project’s disputed codebase exemplifies this concern: if users cannot verify whether a piece of software was written by a human or an AI, the integrity of the entire ecosystem is compromised. Kudryavtsev also raises ethical questions about the use of AI in generating content that mimics human voices or perspectives, arguing that such practices risk eroding the diversity of thought and expression that once defined the internet.

The essay’s tone oscillates between nostalgia for a bygone era of online interaction and apprehension about the future. Kudryavtsev’s personal anecdotes—such as his time spent in IRC channels and phpBB forums during the early 2000s—serve as a counterpoint to his current skepticism, illustrating how the internet’s character has changed. He recalls an era when users engaged in meaningful discussions about technology and culture, driven by genuine curiosity rather than algorithmic curation. This contrast highlights his fear that the internet is becoming a space where human agency is increasingly marginalized, replaced by systems that prioritize efficiency and scalability over authenticity. The phrase “Dead Internet” encapsulates this idea, suggesting a digital realm where interactions are hollow and content is devoid of human intention.

Kudryavtsev’s argument also touches on the broader implications of AI for society, particularly in terms of information literacy and critical thinking. He notes that the ease with which AI can generate convincing text or images makes it difficult for users to distinguish between real and artificial content, a challenge that extends beyond technical domains into areas like journalism, politics, and social discourse. This erosion of trust, he suggests, could have far-reaching consequences, undermining the ability to engage in informed debate or make decisions based on reliable information. The essay’s closing reflections express a sense of pessimism about the future, as Kudryavtsev questions whether technology will remain a tool for human empowerment or become a force that perpetuates cycles of misinformation and manipulation.

Ultimately, “Dead Internet Theory” is a meditation on the intersection of technology, authenticity, and human connection. Kudryavtsev’s essay does not offer a definitive solution to the challenges he outlines but instead calls attention to the need for vigilance in navigating an increasingly AI-driven digital landscape. His critique is grounded in personal experience and a deep understanding of the internet’s evolution, making it a compelling reflection on the state of online culture. By highlighting the subtle signs of AI’s influence—such as peculiar formatting choices or repetitive phrasing—he underscores the ways in which machine-generated content can infiltrate even the most ostensibly human spaces. The essay serves as a cautionary tale, urging readers to remain critical of the information they encounter and to preserve the values of transparency, accountability, and human creativity in an age where these principles are under threat.