LmCast :: Stay tuned in

Why People Create AI “Workslop”—and How to Stop It

Recorded: Jan. 17, 2026, 10:04 a.m.

Original

Why People Create AI “Workslop”—and How to Stop It
by Kate Niederhoffer, Alexi Robichaux, and Jeffrey T. Hancock
Harvard Business Review | Generative AI | January 16, 2026

Summary: As AI tools have proliferated in workplaces and pressure to use them has mounted, employees have had to contend with the scourge of workslop: low-effort, AI-generated work that looks plausibly polished but ends up wasting time and effort as it offloads cognitive work onto the recipient. For the person on the receiving end, it can be a confusing and infuriating experience.

About the authors: Kate Niederhoffer is Chief Scientist at BetterUp and a social psychologist. Her research explores the role of AI in workplace adoption, human development, and interpersonal communication. Alexi Robichaux is the CEO and co-founder of BetterUp. Jeffrey T. Hancock is the Harry and Norman Chandler Professor of Communication at Stanford University. He is also the Founding Director of the Stanford Social Media Lab and Director of the Cyber Policy Center at Stanford. His research focuses on psychological aspects of technology use, including AI and social media.

Summarized

The article “Why People Create AI ‘Workslop’—and How to Stop It” by Kate Niederhoffer, Alexi Robichaux, and Jeffrey T. Hancock examines a growing workplace problem: employees increasingly rely on generative AI tools to produce content, often resulting in what the authors term “workslop.” The term refers to low-effort, AI-generated work that appears polished and professional but requires significant additional effort from recipients to correct or refine. The authors describe workslop as a byproduct of the pressure to adopt AI: the speed and efficiency these tools promise often fail to translate into real utility in collaborative or high-stakes settings. Employees seeking to meet deadlines or demonstrate productivity may generate AI-assisted outputs that lack depth, accuracy, or alignment with specific goals, creating a cycle in which the burden of fixing subpar work falls on others and breeds frustration and inefficiency. Niederhoffer (a social psychologist who studies AI in the workplace), Robichaux (CEO of BetterUp), and Hancock (a Stanford professor who studies the psychology of technology use) argue that workslop is not merely a technical issue but a systemic challenge rooted in organizational culture, training gaps, and the misperception of AI as a substitute for human judgment.

The authors identify several drivers of workslop, beginning with the rapid proliferation of generative AI tools in professional settings. As organizations push employees to adopt these technologies, many workers lack the training or guidance needed to use them effectively. Without that guidance, individuals may prioritize speed over quality, producing AI-generated content that superficially meets requirements but fails to address nuanced needs. For example, a team member might use AI to draft a report or presentation, assuming the tool will handle the complexity, only for colleagues to discover inaccuracies, irrelevant data, or formatting problems that require manual correction. The authors note that this dynamic disproportionately affects teams where AI adoption is mandated without clear protocols, widening the gap between the intended benefits of automation and its practical outcomes. They also point to the psychological pressure on employees to appear productive, which can incentivize uses of AI that favor quantity over quality; that pressure is amplified by managerial expectations to demonstrate efficiency, even when AI-generated outputs are subpar or incomplete.

A key insight from the article is that workslop often stems from a misunderstanding of AI’s capabilities. Generative tools can streamline routine tasks, such as drafting emails or summarizing documents, but they are not designed to replace critical thinking or contextual understanding. The authors emphasize that AI’s value lies in augmenting human work rather than substituting for it, a distinction that is frequently overlooked. When employees treat AI as a shortcut, relying on it to generate content without critical evaluation, they risk producing outputs that are technically adequate but lack the depth or nuance required for meaningful collaboration. Compounding the problem, many AI tools are optimized for speed and surface-level coherence, which can create a false sense of accomplishment. An employee might, for instance, generate a polished PowerPoint slide with AI, only to find that the content is too generic or poorly structured for an audience requiring detailed analysis. Such scenarios show how workslop can undermine productivity rather than enhance it: the time saved in creation is offset by the effort needed to revise or rebuild the work.

The authors also critique the broader organizational culture that enables workslop, noting that many companies lack frameworks for evaluating the quality and appropriateness of AI-generated outputs. Without clear guidelines, employees may default to using AI for tasks where it is poorly suited, such as complex problem-solving or creative ideation. This lack of oversight can foster a “set it and forget it” mentality in which AI-generated work is shared without scrutiny, on the assumption that the tool has already addressed every necessary consideration. Hancock’s research on technology use underscores how this dynamic reflects a broader tendency to conflate automation with efficiency, even when the former does not deliver the latter. The authors argue that organizations must move beyond superficial adoption of AI tools and invest in training programs that teach employees how to use these technologies responsibly. That includes fostering a culture in which critical evaluation of AI outputs is encouraged and teams are held accountable for ensuring that generated work meets quality standards.

To address workslop, the authors propose a multifaceted approach that combines education, policy development, and cultural change. First, they advocate comprehensive training programs that help employees understand the limitations of AI and how to integrate it effectively into their workflows; such training should emphasize human oversight and teach workers to recognize when AI-generated content requires further refinement. Second, organizations should establish clear guidelines for AI use, including standards for quality, accuracy, and relevance. These policies could include checkpoints where AI-generated work is reviewed by stakeholders before being shared or finalized (a sketch of one such checkpoint appears below). Finally, the authors stress the need for a cultural shift that values thoughtful, high-quality work over superficial efficiency, which means redefining productivity metrics to prioritize depth and accuracy rather than the speed of output. By fostering an environment where employees feel empowered to question AI-generated content and seek improvements, organizations can mitigate the risks of workslop while unlocking the true potential of generative AI.
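To make the checkpoint idea concrete, here is a minimal sketch in Python of how a team might encode such a pre-share review as a simple checklist gate. This is our illustration, not a mechanism described in the article; every field name and check below is invented for the example, and real teams would tailor the criteria to their own quality standards.

# Hypothetical illustration, not from the article: a lightweight gate that
# holds back an AI-assisted draft until a human reviewer has signed off on
# a few basic quality checks. All names below are invented for this sketch.

from dataclasses import dataclass, field


@dataclass
class DraftReview:
    """Checks a reviewer records before an AI-assisted draft is circulated."""
    facts_verified: bool           # a human checked every claim and figure
    fits_audience: bool            # content addresses this audience's actual needs
    sources_included: bool         # underlying data or citations are attached
    reviewer: str = ""             # who signed off, if anyone
    notes: list[str] = field(default_factory=list)

    def ready_to_share(self) -> bool:
        # The draft clears the gate only when every check passes.
        return all((self.facts_verified, self.fits_audience, self.sources_included))


if __name__ == "__main__":
    review = DraftReview(
        facts_verified=True,
        fits_audience=False,  # e.g., the slides are too generic for this client
        sources_included=True,
        reviewer="j.doe",
        notes=["Add Q3 numbers; replace boilerplate intro with client context."],
    )
    if review.ready_to_share():
        print("Draft cleared for sharing.")
    else:
        print("Hold the draft. Unresolved items:", "; ".join(review.notes))

The point is not the code itself but the pattern it illustrates: making the review step explicit and blocking by default, so that unreviewed AI output cannot quietly become someone else’s problem.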

Niederhoffer, Robichaux, and Hancock also highlight the role of leadership in shaping attitudes toward AI. Managers must model responsible use by demonstrating how to critically engage with AI tools and by encouraging open discussion of their limitations, including creating safe spaces for employees to voice concerns about AI-generated work without fear of retribution. Leaders should also recognize and reward efforts to refine or improve AI outputs, reinforcing the idea that collaboration between humans and machines is essential for meaningful outcomes. The authors suggest that organizations can further reduce workslop by investing in AI tools with built-in quality checks or collaborative features that invite human input; for example, platforms that allow real-time feedback or integration with subject-matter experts could help ensure that AI-generated work aligns with specific goals.

A critical aspect of the authors’ analysis is their emphasis on the psychological impact of workslop. The frustration of receiving poorly crafted AI outputs can erode trust in both the technology and the colleagues who produced them, which is particularly damaging in team settings where collaboration depends on mutual confidence in the quality of shared work. The authors note that workslop can also contribute to burnout, as employees are forced to spend extra time correcting errors or rebuilding content that should have been done properly the first time. By addressing workslop, organizations not only improve efficiency but also foster a more positive and sustainable work environment. This aligns with Niederhoffer’s research on human development, which underscores the importance of empowering employees to take ownership of their work and feel supported in navigating new technologies.

Ultimately, the article positions workslop as a symptom of a larger challenge: the need to balance technological innovation with human-centric practices. While generative AI offers significant potential to enhance productivity, its benefits are contingent on how it is implemented and integrated into workflows. The authors argue that organizations must move beyond a transactional view of AI adoption, instead focusing on cultivating a culture where technology serves as a tool for empowerment rather than a source of inefficiency. This requires ongoing dialogue, investment in education, and a commitment to redefining success in ways that prioritize quality over speed. By addressing the root causes of workslop, companies can ensure that AI technologies are used in ways that genuinely support their employees and drive meaningful outcomes.