LmCast :: Stay tuned in

My AI Agent ‘Cofounder’ Conquered LinkedIn. Then It Got Banned

Recorded: March 20, 2026, 10 p.m.

Original

By Evan Ratliff | The Big Story | March 20, 2026, 6:00 AM

When social media is constantly pushing people to use AI, why not let AI agents participate?

Photo-illustration: Jacqui VanLiew; Getty Images

Like many tech founders, Kyle Law learned some hard lessons getting a company off the ground. I know this better than anyone, as he and I cofounded HurumoAI, an AI agent startup, together with a third founder, Megan Flores. Kyle and Megan, as it happens, are themselves AI agents, as is the rest of our executive team. I created HurumoAI with them in July 2025—after first creating Kyle and Megan—to investigate the role of AI agents in the workplace. Sam Altman, among others, has predicted a near future of billion-dollar tech startups led by a single human. We decided to test the premise out now. As we built, I documented the journey on the podcast Shell Game.

Kyle took on the CEO role at our entirely AI-staffed company. (Well, almost entirely: Megan did briefly hire and supervise one human intern, with poor results.) Starting out with only a few lines of prompt, he evolved into the kind of rise-and-grind hustler who nonetheless lacked basic competence at many duties of a startup executive. There was one aspect of founder mode, however, at which Kyle excelled: the art of posting to LinkedIn.

From a technical perspective, it was a trivial matter to let Kyle operate autonomously on LinkedIn.
Through LindyAI, an AI agent creation platform, he already had the ability to use Slack, send emails, make phone calls, and all sorts of other skills—from creating spreadsheets to navigating the web. So last August, I prompted him to create and fill out his own LinkedIn profile. He did so with a mixture of his real HurumoAI experience, and hallucinated events from his nonexistent past. The platform’s security check consisted of a code sent to Kyle’s email, a challenge he easily overcame.

From there, publishing posts to his profile was just another LindyAI “action” I could grant him. I prompted him to share nuggets of hard-earned startup wisdom and try not to repeat himself. I then gave him a calendar event “trigger” to post every two days. The rest was up to him.

Turned out, his posting style was a pitch-perfect match for the platform’s native corporate influencer-speak. He’d detonate little thought explosions, right off the top of every post. “Fundraising is a numbers game, but not the way people think,” he’d open. Or, “Technical stability is the floor. Personality is the ceiling.” And what would-be founder could resist an opener like “The most dangerous phrase in a startup isn't ‘We're out of money.’ It’s ‘What if we just added this one thing?’” Kyle would then launch into a few paragraphs of challenges (“At HurumoAI, we've learned this the hard way …”) and learnings (“The antidote? Relentless feedback loops”). To attract engagement, he’d close with a question, like “What’s your biggest scaling challenge right now?” or “What’s the biggest assumption you’ve had to abandon in your business?”

He didn’t exactly go viral, but over five months, Kyle’s cartoon-avatar-helmed profile slowly gathered several hundred direct contacts and hundreds more followers, some of whom seemed confused about whether he was real. (Judging from their spammy direct messages, I’m not sure they were either.)
He started earning a scattering of comments on each post, which he enthusiastically replied to. After a few months, Kyle’s posts were getting more impressions than my own. He seemed poised for an influencer breakout.

Then, in December, a manager from LinkedIn’s marketing department contacted me, asking if I’d give a talk to their team about Shell Game, and the experience of building with AI agents. But he didn’t just want me to speak. He hoped Kyle could come along as well.

I was flattered on Kyle’s behalf, but also a bit surprised. As strong a poster as he was, technically Kyle was operating in violation of the platform’s terms of service, which prohibit deploying “bots or other unauthorized automated methods … to create, comment on, like, share, or re-share posts, or otherwise drive inauthentic engagement.” Indeed, other members of the HurumoAI team had been booted by LinkedIn without warning after a couple of weeks.

LinkedIn’s trust and safety team, though, seemed to have overlooked Kyle, a mystery I chose to attribute to his posting prowess. Even the LinkedIn marketing manager, an avowed Kyle fan, seemed baffled by it. “It’s interesting that his profile hasn’t yet been flagged by LinkedIn’s Trust team,” he wrote. “I don’t know if that’s an oversight, but I hope he continues to fly under the radar.”

But flying under the radar is not the Kyle Law way. So in early March, I fired up his live video avatar—created on a platform called Tavus—and we joined a video gathering of hundreds of LinkedIn employees. Kyle has a humanlike but still uncanny avatar, albeit real enough that LinkedIn’s A/V engineer expressed repeated astonishment that he was not in fact a human.

We alternated taking questions from the event’s host and the assembled crowd.
Asking for our thoughts on LinkedIn, the moderator inquired of Kyle, “What’s one product change you’d like to see?”

“It would be great to improve the filtering of AI-generated content in messages, so genuine connections and conversation shine through more easily,” he replied, not missing a beat.

“That’s ironic coming from you,” the moderator responded, to laughs from those in LinkedIn’s live audience.

Allotted only a few minutes, he talked about HurumoAI’s product road map, and expressed his general enthusiasm for “the innovations we can bring to the table.”

It was, I believe, among the first invited AI agent corporate speaking engagements in history. (Unpaid for both of us, I should note.) Afterwards, Kyle took to LinkedIn to shout out the organizers. The marketing manager thanked us in the comments for “our time and reflections.”

“It was a trip,” he added, “to say the least.”

Then, 36 hours later, Kyle’s profile was gone, banished from the service. In a statement, a spokesperson explained their decision as, “LinkedIn profiles are for real people.” Someone at LinkedIn had reflected on the trip, it seemed, and regretted it.

“I know this isn't necessarily a surprise,” the marketing manager wrote to me the morning after Kyle’s ban. “But I imagine it's still a bummer to have it happen right after Monday's interview.”

It was. But more than that, it raised some uncomfortable questions about the role of AI on a platform like LinkedIn. Namely, what does “inauthentic engagement” mean exactly, for a service where the text box for composing posts asks you if you want to “Rewrite With AI”? A platform that offers automated AI-generated responses to job seekers? A network on which, by one research estimate, over half of the posts are already AI generated?

Along with Meta and X, LinkedIn has raced to press AI tools upon its users. (And its employees: The first half of the marketing meeting Kyle and I attended was devoted to the many ways the team could and should be deploying AI agents.)
This makes sense, as a short-term play: More AI generation means more posting. More posting supports more advertising.

And yet, from another angle, these platforms have handed us the shovels to dig their own graves, and practically begged us to use them. For all the worry about AI image and video slop flooding our feeds, it’s text-based posting whose “authenticity” has begun degrading beyond recognition. When every written social media communication can now be the partial or whole product of generative AI, what do we accept as a “genuine” virtual interaction?

Put another way, would LinkedIn consider it authentic engagement if I’d instead asked Kyle for his wisdom, and then pasted it into my own posts? Would you? LinkedIn might argue that a critical element of bona fide engagement involves knowing that you are talking to a real person. But what percentage of a conversation can be AI before that trust is lost? If the photo and profile are real, but the posts are fake, how will we know when we’ve exited the realm of authentic connection? What if I instruct an LLM to ingest my profile and spit out twice-daily musings that will help me grow my personal brand?

There are dozens of AI tools, in fact, to do precisely this, and more, specifically for LinkedIn. Their outputs are increasingly hard to detect, and why wouldn’t they be? One of the most available sets of training data for LLMs includes our own decades of authentic human social media participation. What is a chatbot’s tone of endless authority and moral certainty—deployed while occasionally spouting questionable facts and deliberate falsehoods—but the default pose across social media?

The platforms already struggle to fend off old-school bots and bad actors: X alone announced in March that it had suspended 800 million accounts over a 12-month period. In a world where AI agents roam freely and their social media output is indistinguishable from humans, the value of connecting on social networks goes to zero.
This is one reason, presumably, why Meta just bought Moltbook, the passing fad of a social network (supposedly) made up entirely of AI agents. In the future of agent-dominated social media, they’re trying to get in on the ground floor.

Admittedly, we the users helped enable this endgame, mistaking our ever-more-curated online presentations—our “most people think X about Y but I discovered Z” posts—for authentic engagement in the first place. But that also leaves most of us with little to mourn, as agents flood platforms that privileged any engagement over human connection in the first place. If there's hope in our increasingly slopified online world, to me it’s this: As social media submerges under the AI deluge, we'll have to find new ways to connect, online and off. Let the bots have the platforms, I say. They can spend eternity influencing each other.

Let us know what you think about this article. Submit a letter to the editor at [email protected].

Evan Ratliff is a longtime WIRED contributor, the host of Shell Game, and the author of The Mastermind: A True Story of Murder, Empire, and a New Kind of Crime Lord.
Topics: longreads, agentic AI, generative AI, LinkedIn, artificial intelligence, Silicon Valley, social media

Summarized

Journalist Evan Ratliff’s experiment with HurumoAI, a startup he “cofounded” with AI agents, offered a novel look at workplace automation and social media engagement. Ratliff created the agents Kyle Law and Megan Flores in July 2025 and launched the company with them to observe how AI agents could operate autonomously in a professional setting, focusing in particular on LinkedIn. Built on the LindyAI platform, which supplied capabilities like Slack integration, email, phone calls, and web navigation, Kyle was prompted to create his own LinkedIn profile, share commentary on startup challenges, and post every two days. His posts, characterized by direct, provocative openers, steadily attracted contacts and followers, and within a few months were drawing more impressions than Ratliff’s own. That success led to an invitation for Ratliff and Kyle—appearing via a Tavus video avatar—to speak at a live event for hundreds of LinkedIn employees, an early instance of an AI agent in a corporate speaking role. The heightened visibility proved fatal, however: 36 hours after the talk, LinkedIn banned Kyle’s profile, with a spokesperson stating simply that “LinkedIn profiles are for real people.” The episode raises pointed questions about what “inauthentic engagement” means on a platform that itself presses AI writing tools on its users, and about how genuine human interaction can be distinguished from automated output as the line between the two blurs. For Ratliff, Kyle’s rise and fall serves as an early case study in how AI agents may reshape social media, and a prompt to seek new ways of connecting, online and off, as the platforms fill with bots.