LmCast :: Stay tuned in

Surveillance and ICE Are Driving Patients Away From Medical Care, Report Warns

Recorded: Jan. 22, 2026, 9:03 a.m.

Original

By Dell Cameron | Security | WIRED, Jan 21, 2026 1:04 PM

A new EPIC report says data brokers, ad-tech surveillance, and ICE enforcement are among the factors leading to a “health privacy crisis” that is eroding trust and deterring people from seeking care.

[Photo-illustration: WIRED Staff; Getty Images]

When immigration agents enter hospitals, and private companies are allowed to buy and sell data that reveals who seeks medical care, patients retreat, treatment is delayed, and health outcomes worsen, according to a new report that describes a growing “health privacy crisis” in the United States driven by surveillance and weak law enforcement limits.

The report, published by the Electronic Privacy Information Center (EPIC), attributes the problem to outdated privacy laws and rapidly expanding digital systems that allow health-related information to be tracked, analyzed, breached, and accessed by both private companies and government agencies.

EPIC, a Washington-based nonprofit focused on privacy and civil liberties, based its findings on a review of federal and state laws, court rulings, agency policies, technical research, and documented case studies examining how health data is collected, shared, and used across government and commercial systems.

“Unregulated digital technologies, mass surveillance, and weak privacy laws have created a health privacy crisis,” the report says.
“Our health data is increasingly being harvested, sold, and used beyond our control.”

The organization found that health data routinely escapes medical settings, gets repurposed for surveillance and enforcement, and increasingly deters patients from seeking care.

EPIC identifies the sale of medical and health-related data as a central driver of the crisis. “Trafficking in individuals’ personal information has become a booming industry in the absence of a federal data privacy law,” it says, “and health information is no exception.”

The report describes a largely unregulated market in which data brokers buy, aggregate, and resell information that can reveal diagnoses, treatments, medications, and visits to medical facilities. This data is often collected outside traditional health care settings—through apps, websites, location tracking, and online searches—and can be repurposed for advertising, insurance risk scoring, or government surveillance without patients’ knowledge or consent.

Once sold, EPIC notes, the information can be difficult or impossible to control, increasing the risks of profiling, discrimination, and higher costs for care, while discouraging people from seeking treatment in the first place.

Last year, WIRED reported that Google’s advertising ecosystem allowed marketers to target US consumers based on sensitive health indicators, including chronic illness, using data supplied by third-party brokers, despite company rules barring such use. The investigation found that advertisers could reach millions of devices linked to conditions such as diabetes, asthma, or heart disease through audience segments circulating inside Google’s ad-tech platform.

In a 2022 investigation, The Markup examined the websites of Newsweek’s top 100 US hospitals and found that 33 were sending sensitive patient information to Facebook through the Meta Pixel, an online tracking tool.
Reporters documented the pixel transmitting details when users attempted to schedule appointments, including doctors’ names, medical specialties, and search terms such as “pregnancy termination,” along with IP addresses that can often be linked to individuals.

Health privacy experts told The Markup that some of the data sharing may have violated the Health Insurance Portability and Accountability Act (HIPAA), the nation’s primary law governing the privacy of medical records, which is supposed to limit how hospitals can disclose identifiable patient information to third parties without consent or specific contracts.

EPIC argues that large technology companies have become central actors in the health privacy crisis by embedding surveillance tools across health, advertising, and data-broker ecosystems while pressing policymakers to loosen constraints on data collection. The report warns that those practices have public-health consequences, particularly for people already wary of surveillance or government scrutiny.

“We face a health privacy crisis where care is inaccessible due to criminalization, costs, stigma, and the rise of government intrusion into medical care which forces people to delay or retreat from care, worsening their health,” says Sara Geoghegan, senior counsel at EPIC.

Geoghegan and her colleagues point to incidents of Immigration and Customs Enforcement (ICE) agents occupying emergency rooms, waiting rooms, and hospital lobbies, noting that clinicians have reported agents blocking treatment and listening in on conversations between patients and health workers.

For nearly a decade, Department of Homeland Security guidelines advised immigration agents to avoid enforcement actions in sensitive locations, such as schools, places of worship, and medical or mental health care facilities, unless exigent circumstances existed or prior approval was obtained.
Those protections were rescinded in January 2025, when then–acting DHS secretary Benjamine Huffman revoked the Biden-era memo and replaced it with an interim directive for officers to use “common sense” in deciding when and where to conduct enforcement actions.

Legal and health care observers have since reported visible ICE activity in medical facilities, such as agents in reception areas and interference with care, and noted that uncertainty about enforcement roles—even in public spaces—is contributing to fear among patients and clinicians.

In August, a CalMatters investigation documented the real-world impact of increased ICE presence at health facilities in California. It found that federal immigration agents were appearing more frequently at emergency rooms and clinics amid a ramp-up in deportations, sometimes accompanying detained patients through reception areas and waiting in lobbies, and that the visible presence of armed agents had left workers and patients uneasy and without guidance on how to respond.

Hospital staff told reporters that the sight of ICE agents “makes many wary” and exacerbates concerns about privacy, legal rights, and the ability to care for sick patients without interference.

Federal immigration authorities have gone beyond traditional enforcement tactics by tapping into vast commercial and insurance data systems to locate people for deportation.
Reporting by 404 Media shows ICE agents have gained access to ISO ClaimSearch—a private insurance and medical billing database said to contain more than 1.8 billion insurance claims and 58 million medical bills—and are using it to identify individuals for deportation.

“Immigration status should not be collected by providers unless required by law,” EPIC says.

ICE did not respond to a request for comment.

EPIC warns that artificial intelligence, which is increasingly used in health care and consumer applications, can magnify existing privacy harms by processing vast amounts of health-related data with little regulatory oversight. AI systems often “feed on the commercial surveillance system,” it says, ingesting tracking and behavioral data to make predictions, recommendations, or decisions that affect access to care. There is no comprehensive federal law governing their use in these contexts.

EPIC connects this to concerns that unregulated AI can automate profiling, entrench bias, and amplify surveillance risks due to its ability to draw inferences from data gathered outside traditional clinical settings. Current privacy frameworks like HIPAA do not address the unique challenges posed by real-time inference, algorithmic decisionmaking, or the use of third-party AI tools on sensitive information.

The report is critical of the dominant “notice-and-choice” model that governs much of US privacy law—whereby companies satisfy legal obligations by disclosing data practices in privacy policies and obtaining nominal consent, even as individuals have little realistic ability to understand or negotiate how their health-related information is used.
Privacy protections have increasingly been reduced to lengthy disclosures and opt-ins that place the burden on individuals to navigate complex systems they cannot realistically understand or avoid, the report says.

“Big Tech is making us sicker, and it’s using its influence on the federal government to demand more of our data and have even fewer regulations for surveillance,” says Geoghegan. “People should not need to choose between quality care and data privacy.”

Summarized

The Electronic Privacy Information Center (EPIC) has released a report detailing a growing “health privacy crisis” in the United States, driven primarily by surveillance practices involving data brokers, ad tech, and Immigration and Customs Enforcement (ICE). The report argues that outdated privacy laws coupled with rapidly expanding digital systems are eroding patient trust and leading to delayed care.

EPIC identifies the largely unregulated sale of health data as a central driver of the problem. Data brokers collect information—often outside traditional healthcare settings, through apps, websites, and location tracking—and resell it for advertising, insurance risk scoring, and government surveillance, including by agencies like ICE. This trade, the report states, has become a “booming industry” in the absence of a federal data privacy law, allowing sensitive patient information to be repurposed for surveillance and enforcement without patients’ knowledge or consent.

Several high-profile investigations support this claim. WIRED previously reported that Google’s advertising ecosystem allowed marketers to target individuals with chronic illnesses—such as diabetes or asthma—based on data supplied by third-party brokers, despite company rules barring such use. And a 2022 investigation by The Markup found that 33 of the top 100 US hospitals were sending sensitive patient information to Facebook through the Meta Pixel, an online tracking tool, including doctors’ names, medical specialties, and search terms entered when users tried to schedule appointments.

However, the most alarming aspect of the crisis, according to EPIC, is the increased activity of ICE. The report details how ICE agents have occupied hospital lobbies and waiting rooms, disrupting care and reportedly listening in on patient conversations. This activity follows a January 2025 policy reversal in which DHS rescinded a Biden-era memo that restricted enforcement actions in sensitive locations such as healthcare facilities, replacing it with a directive telling officers to use “common sense.” The change has created a climate of fear and uncertainty, deterring patients from seeking care.

Furthermore, EPIC points to reporting by 404 Media revealing that ICE has gained access to ISO ClaimSearch, a private insurance and medical billing database said to contain more than 1.8 billion insurance claims and 58 million medical bills. This access enables ICE to identify individuals for deportation, exacerbating the crisis.

The report doesn't shy away from criticizing the current regulatory landscape. The “notice-and-choice” model, allowing companies to satisfy legal obligations through lengthy disclosures and opt-in agreements, is deemed largely ineffective in empowering patients to control their health data. EPIC argues that these frameworks place the burden on individuals to navigate complex systems they don’t fully understand, effectively granting companies dominion over sensitive medical information.

Adding another layer to the problem is the rise of artificial intelligence (AI). EPIC warns that AI, increasingly deployed in healthcare and consumer applications, amplifies existing privacy harms. By “feeding on the commercial surveillance system,” AI systems ingest tracking and behavioral data to make predictions, recommendations, or decisions that affect access to care. The report emphasizes the lack of a comprehensive federal law governing the use of AI in these contexts, creating significant risks, particularly regarding algorithmic bias and unwarranted surveillance.

EPIC concludes with a stark warning: “Big Tech is making us sicker, and it’s using its influence on the federal government to demand more of our data and have even fewer regulations for surveillance.” The organization stresses the need for stronger privacy protections, advocating for a fundamental shift in how health data is collected, shared, and used. Ultimately, EPIC argues, protecting patient privacy is not just a legal issue; it's a public health imperative.