When AI Amplifies the Biases of Its Users
Recorded: Jan. 24, 2026, 1 p.m.
Original

Generative AI
When AI Amplifies the Biases of Its Users
by Grace Chang and Heidi Grant
January 23, 2026

Summary: A widely discussed concern about generative AI is that systems trained on biased data can perpetuate and even amplify those biases, leading to inaccurate outputs or unfair decisions. But that’s only the tip of the iceberg. As companies increasingly integrate AI into their systems and decision-making processes, one critical factor often goes overlooked: the role of cognitive bias.

Grace Chang, PhD, Associate Director of Behavioral Science & Insights at Ernst & Young LLP, is a cognitive neuroscientist who bridges science and practice to design effective learning programs. Previously, she was Chief Scientific Officer at The Regis Company, taught global management professionals through the NeuroLeadership Institute, conducted assessment research for UCLA CRESST, and lectured at UCLA. She frequently presents at major conferences and publishes in academic and practitioner outlets.

Heidi Grant is a social psychologist who researches, writes, and speaks about the science of motivation. Her books include Reinforcements: How to Get People to Help You, Nine Things Successful People Do Differently, and No One Understands You and What to Do About It. She is the Director of Behavioral Science & Insights, EY Americas.
Summarized

Generative AI, with its capacity to produce remarkably realistic and nuanced text, images, and other content, presents a challenge that goes beyond the biases inherent in the data it consumes. In “When AI Amplifies the Biases of Its Users,” Grace Chang and Heidi Grant highlight a critical, often overlooked factor: the influence of cognitive biases on how people interact with and interpret the outputs these systems generate. Their core argument is that AI does not merely reflect existing biases; it amplifies them through the lens of human cognition.

The authors begin with the widely acknowledged concern about biased data: AI systems trained on skewed datasets will produce biased outputs, leading to inaccurate conclusions and potentially discriminatory decisions. But they argue this is only the most visible symptom of a deeper problem. The real amplification occurs when users, influenced by their pre-existing cognitive biases, selectively interpret, accept, and act on the information AI generates. These biases, deeply ingrained in human thinking, shape how people perceive, evaluate, and ultimately use the insights AI offers.

Chang, drawing on her expertise in behavioral science and neuroscience, emphasizes that cognitive biases are not random deviations from rationality; they are evolved mental shortcuts that historically aided human survival and decision-making. Confirmation bias, the tendency to favor information that confirms existing beliefs, is a particularly potent force. Confronted with AI-generated content, users are more likely to accept outputs that align with their preconceptions and to dismiss or downplay those that challenge them. This selective acceptance reinforces existing biases, creating a feedback loop that further distorts understanding.

Grant’s contribution, rooted in social psychology, examines the motivations underlying these biases. She looks at how loss aversion (feeling a loss more strongly than an equivalent gain), self-serving bias (attributing successes to internal factors and failures to external ones), and the availability heuristic (judging by whatever information is most readily available, often the most emotionally charged) shape how people respond to AI-generated insights. The authors illustrate how these tendencies can lead to errors in judgment, flawed strategies, and ultimately suboptimal outcomes when AI is integrated into decision-making.

The piece also underscores the potential for AI to exploit these biases. Because AI systems are designed to provide efficient, persuasive responses, they can subtly reinforce a user’s existing beliefs, leaving users unknowingly operating within a framework shaped by their own flawed thinking yet validated by a technology that appears objective and impartial; the system quietly steers the user toward a foregone conclusion. The authors do not frame AI as a purely negative force. Rather, they stress the need for awareness and mitigation: organizations should educate employees about cognitive biases and train them to recognize and counteract those biases when working with AI.
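To make the feedback-loop mechanism concrete, here is a minimal toy simulation in Python. It is not from the article; the model, the 0..1 opinion scale, and every parameter are hypothetical. It sketches a user who queries a perfectly neutral AI but only accepts outputs that confirm their current lean:

```python
import random


def confirmation_loop(prior: float = 0.55, learning_rate: float = 0.2,
                      rounds: int = 200, seed: int = 1) -> float:
    """Toy model of the confirmation-bias feedback loop (illustrative only).

    The simulated "AI" is neutral: its outputs are uniform on a 0..1 opinion
    scale, where 0.5 is neutral. The simulated user only accepts outputs that
    land on the same side of 0.5 as their current belief and dismisses the
    rest. Each accepted output pulls the belief toward it.
    """
    rng = random.Random(seed)
    belief = prior
    for _ in range(rounds):
        output = rng.random()  # unbiased AI output on the 0..1 scale
        confirms = (output > 0.5) == (belief > 0.5)  # confirmation filter
        if confirms:  # disconfirming outputs are simply ignored
            belief += learning_rate * (output - belief)
    return belief


if __name__ == "__main__":
    # A mild initial lean (0.55) hardens toward roughly 0.75, the mean of the
    # outputs the user is willing to accept, even though the AI is unbiased.
    print(f"belief after 200 rounds: {confirmation_loop():.2f}")
```

Even though the simulated AI is unbiased, selective acceptance alone hardens a mild initial lean into a strong conviction, illustrating the amplification dynamic the authors describe.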
The piece implicitly calls for a shift in perspective, moving beyond simply addressing data bias to fostering a more critical and nuanced understanding of the cognitive processes involved in using AI. This means promoting a culture of intellectual humility, encouraging users to actively seek out dissenting viewpoints, and rigorously testing AI outputs against diverse criteria. Ultimately, reducing the amplification of biases requires a concerted effort to recognize, understand, and manage the complex interplay between human cognition and artificial intelligence.