Everyone in Seattle hates AI
Recorded: Dec. 4, 2025, 3:05 a.m.
Original
Everyone in Seattle Hates AI — Jonathon Ready

I grabbed lunch with a former Microsoft coworker I've always admired—one of those engineers who can take any idea, even a mediocre one, and immediately find the gold in it. I wanted her take on Wanderfugl 🐦, the AI-powered map I've been building full-time. I expected encouragement. At worst, overly generous feedback because she knows what I've sacrificed.

Instead, she reacted to it with a level of negativity I'd never seen her direct at me before. When I finally got her to explain what was wrong, none of it had anything to do with what I built. She talked about Copilot 365. And Microsoft AI. And every miserable AI tool she's forced to use at work. My product barely featured. Her reaction wasn't about me at all. It was about her entire environment.

Her PM had been laid off months earlier. The team asked why. Their director told them it was because the PM org "wasn't effective enough at using Copilot 365." I laughed nervously. This director got up in a group meeting and said that someone lost their job over this?

After a pause I tried to share how much better I've been feeling—how AI tools helped me learn faster, how much they accelerated my work on Wanderfugl. I didn't fully grok how tone-deaf I was being, though. She's drowning in resentment.

I left the lunch deflated and weirdly guilty, like building an AI product made me part of the problem. But then I realized this was bigger than one conversation. Every time I shared Wanderfugl with a Seattle engineer, I got the same reflexive, critical, negative response. This wasn't true in Bali, Tokyo, Paris, or San Francisco—people there were curious, engaged, and wanted to understand what I was building. But in Seattle? Instant hostility the moment they heard "AI."

When I joined Microsoft, there was still a sense of possibility. Satya was pushing "growth mindset" everywhere. Leaders talked about empowerment and breaking down silos. And even though there was always a gap between the slogans and reality, there was room to try things. I leaned into it. I pushed into areas nobody wanted to touch, like Windows update compression, because it lived awkwardly across three teams. Somehow, a 40% improvement made it out alive. Leadership backed it. The people trying to kill it shrank back into their fiefdoms. It felt like the culture wanted change.

That world is gone. When the layoff directive hit, every org braced for impact. Anything not strictly inside the org's charter was axed. I went from shipping a major improvement in Windows 11 to having zero projects overnight. I quit shortly after. In hindsight, getting laid off with severance might've been better than watching the culture collapse in slow motion.

Then came the AI panic. If you could classify your project as "AI," you were safe and prestigious. If you couldn't, you were nobody. Overnight, most engineers got rebranded as "not AI talent."

And then came the final insult: everyone was forced to use Microsoft's AI tools whether they worked or not. Copilot for Word. Copilot for PowerPoint. Copilot for email. Copilot for code. Worse than the tools they replaced. Worse than competitors' tools. Sometimes worse than doing the work manually. But you weren't allowed to fix them—that was the AI org's turf. You were supposed to use them, fail to see productivity gains, and keep quiet.

Meanwhile, AI teams became a protected class. Everyone else saw comp stagnate, stock refreshers evaporate, and performance reviews tank. And if your team failed to meet expectations? Clearly you weren't "embracing AI."

Bring up AI in a Seattle coffee shop now and people react like you're advocating asbestos. Amazon folks are slightly more insulated, but not by much. The old Seattle deal—Amazon treats you poorly but pays you more—only masks the rot.

This belief system—that AI is useless and that you're not good enough to work on it anyway—hurts three groups:

1. The companies.
2. The engineers.
3. Anyone trying to build anything new in Seattle.

And the loop feeds itself: my former coworker—a composite of three people, for anonymity—now believes she's both unqualified for AI work and that AI isn't worth doing anyway. She's wrong on both counts, but the culture made sure she'd land there.

Seattle has talent as good as anywhere. But in San Francisco, people still believe they can change the world—so sometimes they actually do.
Summarized

Jonathon Ready's essay, "Everyone in Seattle Hates AI," presents a compelling and increasingly urgent critique of prevailing attitudes, and their consequences, within the technology industry, focusing on the experiences of engineers in Seattle. The piece reveals a complex interplay of corporate restructuring, technological panic, and a self-limiting belief system that is actively hindering innovation and stifling engineers' potential. Ready's account illustrates a significant divergence from other tech hubs like San Francisco and Tokyo, where a more open and exploratory approach to AI development is still prevalent.

The core of Ready's argument is that Seattle's tech culture has undergone a radical shift, triggered primarily by a corporate response to broader anxieties surrounding artificial intelligence. The PM layoff—attributed to a failure to "use Copilot 365 effectively"—demonstrates this shift vividly. That incident, and the subsequent directive to prioritize AI-centric projects above all else, exemplify a response driven by external pressures rather than genuine technological assessment. The director's framing of the layoff as a consequence of inadequate Copilot usage reveals a dangerously simplistic, arguably misinformed, approach to assessing value.

Ready's account highlights a broader cultural phenomenon: a reflexive, often hostile reaction to the mere mention of "AI" within Seattle's engineering community. This isn't simply a disagreement about technology; it's a deeply ingrained resistance fueled by perceived threats to established roles, professional status, and career trajectories. The experience with Ready's former coworker underscores this vividly, illustrating how the narrative that "AI isn't worth doing anyway"—a self-fulfilling prophecy—can take hold among even highly skilled engineers.

The piece further details how this resistance became institutionalized through corporate restructuring. The shift to prioritizing AI-centric projects created a bifurcated landscape: a protected "AI org" and a workforce effectively redefined and sidelined. Engineers labeled "not AI talent" faced stagnant compensation, diminished opportunities, and professional marginalization. This restructuring wasn't simply strategic realignment; it represented a systematic devaluing of competence and initiative, punishing engineers for exploring areas outside the approved AI framework. The forced adoption of Microsoft's Copilot tools, presented as mandatory rather than optional, exacerbated the situation, creating a sense of enforced compliance and stifling genuine experimentation.

Ready also describes a critical feedback loop in which engineers' self-doubt, reinforced by the prevailing cultural narrative, fuels further stagnation. The fear of being deemed "not AI talent" discourages proactive exploration and innovative thinking, leading to a cycle of reduced effort, negative outcomes, and ultimately confirmation of the initial belief. This dynamic extends beyond individual engineers, harming both companies and the broader Seattle tech ecosystem and hindering the potential for genuinely transformative breakthroughs.

Ultimately, Ready's essay is a stark warning about the dangers of letting external anxieties dictate technological direction. It points to a broader cultural problem in which the pursuit of innovation is actively discouraged by a collective belief in its own inadequacy. The situation in Seattle is a cautionary tale: a reactive, fear-driven response to AI can lead to a stifled ecosystem, diminished talent, and a missed opportunity to shape the future of technology.