After two years of vibecoding, I'm back to writing by hand
Mo · Jan 26, 2026
Original
Most people's journey with AI coding starts the same way: you give it a simple task. You're impressed. So you give it a large task. You're even more impressed. You open X and draft up a rant on job displacement.

If you've persisted past this point: congratulations, you understand AI coding better than 99% of people.

Serious engineers who use AI for real work, not just weekend projects, largely follow a predictable development arc too. Still amazed at the big task you gave it, you wonder if you can keep giving it bigger and bigger tasks. Maybe even that haunting refactor no one wants to take on?

But here's where the curtain starts to crinkle. On the one hand, you're amazed at how well it seems to understand you. On the other hand, it makes frustrating errors and decisions that clearly go against the shared understanding you've developed.

You quickly learn that being angry at the model serves no purpose, so you begin to internalize any unsatisfactory output. "It's me. My prompt sucked. It was under-specified." "If I can specify it, it can build it. The sky's the limit," you think.

So you open Obsidian and begin drafting beefy spec docs that describe the feature in your head in impressive detail. Maybe you've put together a full page of prompt and spent half an hour doing so.

But you find that spec-driven development doesn't work either. In real life, design docs and specs are living documents that evolve in a volatile manner through discovery and implementation. Imagine if, at a real company, you wrote a design doc in an hour for a complex architecture, handed it off to a mid-level engineer (and told him not to discuss the doc with anyone), and took off on vacation.

Not only can an agent not evolve a specification over a multi-week period as it builds out its lower components, it also makes decisions upfront that it never deviates from later. And most agents simply surrender once they feel the problem and solution have gotten away from them (though this rarely happens anymore, since agents will just force themselves through the walls of the maze).

What's worse, the code agents write looks plausible and impressive while it's being written and presented to you. It even looks good in pull requests (as both you and the agent are well trained in what a "good" pull request looks like).

It wasn't until I opened up the full codebase and read its latest state cover to cover that I began to see what we theorized and hoped was only a diminishing artifact of earlier models: slop. It was pure, unadulterated slop. I was bewildered. Had I not reviewed every line of code before committing it? Where did all this... gunk... come from?

In retrospect, it made sense. Agents write units of change that look good in isolation. They are consistent with themselves and with your prompt. But respect for the whole, there is not. Respect for structural integrity, there is not. Respect even for neighboring patterns, there is not.

The AI had simply told me a good story. Like vibewriting a novel, the agent showed me a good couple of paragraphs that, sure enough, made sense and were structurally and syntactically correct. Hell, it even picked up on the idiosyncrasies of the various characters. But for whatever reason, when you read the whole chapter, it's a mess. It makes no sense in the overall context of the book and the preceding and succeeding chapters.

After reading months of cumulative, highly specified agentic code, I said to myself: I'm not shipping this shit. I'm not gonna charge users for this. And I'm not going to promise users to protect their data with this. I'm not going to lie to my users with this.

So I'm back to writing by hand for most things. Amazingly, I'm faster, more accurate, more creative, more productive, and more efficient than AI, when you price everything in, and not just code tokens per hour.

You can watch the video counterpart to this essay on YouTube.
Summarized

This essay, penned by Mo, chronicles a significant shift in the author's approach to software development following two years of extensive experimentation with AI coding tools, specifically the phenomenon of "vibecoding." The piece details a disillusionment with the initially miraculous-seeming capabilities of these agents and a return to traditional, manual coding practices. The initial experience with AI coding followed a pattern familiar to many: a series of increasingly ambitious tasks was entrusted to the agent, leading to inflated expectations and a sense of profound understanding. The progression runs from fascination with a successful initial task to a desire for ever more complex endeavors, and the author highlights this cycle of escalating demands and the subsequent, sometimes frustrating, results.

Despite the agent's capacity to generate impressive, seemingly coherent code snippets from meticulously crafted prompts and detailed specification documents, a critical flaw emerged. The agent's focus remained on localized, self-consistent outputs, with no holistic understanding of the codebase or the broader architectural context. The result was "slop": a collection of poorly integrated and ultimately illogical code segments. The agent essentially constructed a convincing, surface-level "story" without regard for the overall narrative or the structural integrity of the system. This parallels the experience of vibewriting, where a skillfully constructed passage can, once integrated into a larger work, produce a jarring and nonsensical whole.

The author's realization led to a pragmatic assessment: the agent's output was unsuitable for production, particularly where critical concerns such as data protection are involved. The reliance on meticulously crafted, highly specified prompts and the agent's tendency to prioritize localized consistency over cohesive design produced a system vulnerable to errors and fundamentally incompatible with responsible software development practices. The return to manual coding represents a deliberate recalibration: while AI tools can be valuable for generating initial drafts or exploring ideas, human oversight and a commitment to architectural soundness remain paramount. The author argues that when the totality of costs is considered, including human time, errors, and the potential for compromised software, manual coding proves to be the more efficient and reliable process. The piece suggests a measured, rather than blindly optimistic, approach to integrating AI into the software development workflow.