OPINION: Data-Driven Thinking

Saying The Quiet Part Out Loud: AI Isn’t Neutral, So Let’s Stop Pretending

By Joanna Gerber, Associate Editor
Wednesday, March 25th, 2026 – 10:55 am
It’s not every day that a journalist on the AI beat for an ad tech trade pub gets to watch a play about a fictional software company set in the agentic age. That experience is all the more jarring when the fictional company is developing a morally questionable AI tool and conversations begin cropping up among the characters about whether to blow the whistle and involve the press.

It was very (lowercase “m”) meta.

Collecting Data

Last week, several AdExchanger reporters went to see “Data,” an off-Broadway play about AI, surveillance, data tracking and predictive modeling that parallels many of the ways ad technology is used to profile and target people.

The fictional software company in the play, called Athena, is based heavily on Palantir, playwright Matthew Libby said during a post-performance talkback. But the story makes it very clear that even those who are working on AI with good intentions aren’t safe from becoming implicated in more harmful uses.

The play revolves around a recent college graduate, Maneesh, who works at Athena as a designer on the UX team despite his talent for programming and data science. He’s quickly recruited by the data science team, whose leader seems intent on getting his hands on a powerful algorithm that Maneesh developed in college.
Maneesh, for his part, is apprehensive about joining the team and adamant that the algorithm remain closed-source. Otherwise, what started as a somewhat innocuous school project to predict rare events in baseball games could easily be used for more sinister purposes. Without spoiling what happens, this proves to be the case.

Quiet bias

The play repeatedly challenges the idea that humans can be defined by a series of data points. As one character points out, it’s all too easy to hide behind mathematical code and call AI “objective.” But even if you were to eliminate the use of AI and automation, humans are still imperfect creatures with the same biases that informed the code in the first place.

That’s the quandary I found myself stuck on as the play drew to a close, and I asked Libby about it during the talkback. AI doesn’t create bias, per se, but it does exacerbate existing biases that have now been built into purportedly objective algorithms.

Dehumanization in any form – digital or otherwise – poses a great danger to identity and safety, Libby said. It’s tempting to think you can know who someone is, what they’ll become or “what their value is” if you collect enough data about them, he added. But that’s a dangerous assumption, regardless of whether you make it face-to-face or through a predictive algorithm.

At what cost?

But these sorts of predictions power every corner of ad tech. Advertising is all about data collection and predictive modeling, which is not so different from the tools and algorithms created by companies like the fictional Athena.

For instance, Palantir has been developing a mapping tool for ICE to target immigrants for detention and deportation based on details like geographic location. And the parallels between the data collected for advertising and for government purposes like immigration aren’t just abstract analogies. Earlier this year, ICE put out an RFI asking data providers and tech vendors to share how their tools and services could help with investigations. Just because data is initially collected for one purpose doesn’t mean it can’t be used (or misused) for another.

For advertisers, the stakes aren’t as high. If you target the wrong person, maybe you’ll waste some media and see a lower-than-expected ROAS. But algorithms may not create bias so much as scale it at warp speed. And when they’re used to make major decisions about people’s futures, even a small mistake can upend someone’s entire life – which Maneesh learns firsthand when he tests his algorithm for a more personal use case, with distressing results.

And that’s not to mention the ambiguities of consent. Consumers legally consent to data tracking all the time without thinking twice, whether they’re signing up for social media platforms or accessing websites with cookie opt-in widgets. Still, for most people outside of the marketing world, “opting in” is a vague term that doesn’t make clear just how much personal data they’re relinquishing.

Now imagine that same data being handed over to immigration enforcement without direct consent. According to 404 Media, ICE is collecting addresses from the Department of Health and Human Services, effectively turning information people share to access basic services into a tool for surveillance.

Even when backed by good intentions, the development of technology can quickly lead to harm.
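To make that mechanism concrete, here is a minimal, hypothetical Python sketch – not anything from the play, from Palantir or from any real ad platform, and with every variable name and number invented for illustration. Two groups have identical underlying “merit,” the historical human decisions used as training labels were slightly harsher on one group, and the model never even sees group membership, only a correlated proxy feature.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical underlying "merit" distributions.
group = rng.integers(0, 2, size=n)
merit = rng.normal(0.0, 1.0, size=n)

# Historical labels: humans decided mostly on merit but were
# slightly harsher on group 1. The bias lives in the labels.
noise = rng.normal(0.0, 0.5, size=n)
approved = (merit - 0.3 * group + noise) > 0

# The model never sees `group` directly -- only a correlated
# proxy feature (think zip code or browsing history).
proxy = group + rng.normal(0.0, 0.3, size=n)
X = np.column_stack([merit, proxy])

model = LogisticRegression().fit(X, approved)

# Audit the "objective" model: the human skew survives,
# now automated and applied to everyone at scale.
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"group {g}: predicted approval rate = {rate:.1%}")
```

Run it and the two groups’ approval rates diverge, even though their merit distributions are identical. The model didn’t invent the bias; it just industrialized it.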
And we’ve seen it happen before, outside of toy sketches. Facial recognition technology consistently misidentifies Black people, and ChatGPT often assumes that female users in male-dominated fields (think leadership roles, cybersecurity, etc.) are men.

If you’re building technology, it’s imperative that you address potential biases and speak out against possibly harmful use cases. AI isn’t a magic wand. It’s a powerful and sometimes dangerous tool.

Which isn’t to say AI doesn’t have its place. “Data” acknowledges that AI’s role is complicated, and Libby is “very sympathetic” to Maneesh’s AI-enthused supervisor, who eagerly points out all of the clerical errors and latencies that automation can bypass.

But we can’t talk about the wins without the losses – or without acknowledging the risks. The ad tech world likes to talk about AI solely as progress, but progress comes with a cost. How much are we willing to pay?

“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media. This column is part of a series of perspectives from AdExchanger’s editorial team.
Tagged in: data // ICE // Matthew Libby // Palantir // surveillance // US Immigration and Customs Enforcement