Europe’s new war on privacy
Chat Control will violate fundamental rights
The EU wants more power to access people’s messages. Image: Getty composite
Thomas Fazi | 27 Nov 2025 | 6 min read
In theory, Chat Control should have been buried last month. The EU’s ominous plan to mass-scan citizens’ private messages was met with overwhelming public resistance in Germany, with the country’s government refusing to approve it. But Brussels rarely retreats merely because the public demands it. And so, true to form, a reworked version of the text is already being pushed forward — this time out of sight, behind closed doors.

Chat Control, formally known as the Child Sexual Abuse Regulation, was first proposed by the European Commission in 2022. The original plan would have made it mandatory for email and messenger providers to scan private, even encrypted, communications — with the purported aim of detecting child sexual abuse material. The tool was sold as a noble crusade against some of the world’s most horrific crimes. But critics argued that it risked becoming a blueprint for generalised surveillance, by essentially giving states and EU institutions the ability to scan every private message. Indeed, a public consultation preceding the proposal revealed that a majority of respondents opposed such obligations, with over 80% explicitly rejecting their application to end-to-end encrypted communications.

Yet despite repeated blockages, and widespread criticism for violating privacy and fundamental rights, the text was never abandoned. Instead, it was repackaged and pushed forward from one Council presidency to the next. Each time democratic resistance stopped the original plan, it returned in a new form, under a new label, dressed up as a “necessary” and “urgent” tool to protect children online, yet always preserving its core logic: normalising government-mandated monitoring of private communications on an unprecedented scale.

In May, the European Commission once again presented its proposal. Yet several states objected: not only Germany, but also Poland, Austria and the Netherlands. As a result, Denmark, which currently holds the rotating presidency of the Council of the European Union, immediately began drafting a new version, known as “Chat Control 2.0” and unveiled earlier this month, which removed the requirement for general monitoring of private chats; the searches would now remain formally voluntary for providers.

All this happened under the auspices of Coreper, the Committee of Permanent Representatives — one of the most powerful, but least visible, institutions in the EU decision-making process. It is where most EU legislation is actually negotiated; if Coreper agrees on a legislative file, member states almost always rubber-stamp it.

The gamble worked. Yesterday, this revised version was quietly greenlit by Coreper, essentially paving the way for the text’s adoption by the Council, possibly as early as December. As digital rights campaigner and former MEP Patrick Breyer put it, this manoeuvre amounts to “a deceptive sleight of hand” aimed at bypassing meaningful democratic debate and oversight.

While the removal of mandatory on-device detection is an improvement on the first draft, the new text still contains two extremely problematic features. First, it encourages “voluntary” mass scanning by online platforms — a practice already allowed in “temporary” form, which would now become a lasting feature of EU law. Second, it effectively outlaws anonymous communication by introducing mandatory age-verification systems.
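It is worth pausing on what scanning “private, even encrypted, communications” means in practice. The mechanism usually discussed is client-side scanning: content is checked on the user’s device before end-to-end encryption is applied, so the scan coexists with encryption rather than breaking the cipher itself. The sketch below is a deliberately simplified illustration of that idea, not any real provider’s implementation; the function names and the hash watchlist are hypothetical, and real deployments typically use perceptual matching (PhotoDNA-style hashes) rather than the exact cryptographic hash used here.

```python
import hashlib

# Hypothetical sketch of client-side scanning. The key architectural point:
# the plaintext is inspected on the user's own device BEFORE end-to-end
# encryption, so the transmitted ciphertext stays unreadable even though the
# content has been scanned.

KNOWN_HASHES: set[str] = set()  # placeholder watchlist of known-material hashes


def report_match(digest: str) -> None:
    """Hypothetical reporting hook; a real client would notify the provider."""
    print(f"match reported: {digest}")


def send_message(plaintext: bytes, encrypt) -> bytes:
    # Hash the plaintext and compare it against the watchlist before encrypting.
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in KNOWN_HASHES:
        report_match(digest)
    return encrypt(plaintext)  # encryption happens only after the local scan


# Example: a stand-in "encryption" function just to exercise the flow.
ciphertext = send_message(b"hello", encrypt=lambda m: m[::-1])
```

Note that an exact cryptographic hash only matches known files byte for byte; detecting new material or “grooming” behaviour, as the proposal also envisages, would require AI classifiers running over free-form conversation, which is where the experts’ false-positive concerns below come in.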
An open letter signed by 18 of Europe’s leading cybersecurity and privacy academics warned that the latest proposal poses “high risks to society without clear benefits for children”. The first, in their view, is the expansion of “voluntary” scanning, including automated text analysis using AI to identify ambiguous “grooming” behaviours. This approach, they argue, is deeply flawed. Current AI systems are incapable of properly distinguishing between innocent conversation and abusive behaviour. As the experts explain, AI-driven grooming detection risks sweeping vast numbers of normal, private conversations into a dragnet, overwhelming investigators with false positives and exposing intimate communications to third parties.

Breyer further emphasised this danger, noting that no AI can reliably distinguish between innocent flirtation, humorous sarcasm and criminal grooming. He warned that this amounts to a form of digital witch-hunt, whereby the mere appearance of words like “love” or “meet” in a conversation between family members, partners or friends could trigger intrusive scrutiny. This is not child protection, Breyer has argued, but mass suspicion directed at the entire population.

Even under the existing voluntary regime, German federal police warn that roughly half of all reports received are criminally irrelevant, representing tens of thousands of leaked legal chats annually. According to the Swiss Federal Police, meanwhile, 80% of machine-reported content is not illegal: it might, for example, be nothing more than harmless holiday photos of nude children playing on a beach. The new text would expand these risks dramatically.

Further concerns arise from Article 4 of the new compromise proposal, which requires providers to implement “all appropriate risk mitigation measures”. This clause could allow authorities to pressure encrypted messaging services to enable scanning, even if this undermines their core security model. In practice, it could mean requiring providers such as WhatsApp, Signal or Telegram to scan messages on users’ devices before encryption is applied. The Electronic Frontier Foundation has noted that this approach risks creating a permanent surveillance infrastructure, one which could gradually become universal. Meta, Google and Microsoft already scan unencrypted content voluntarily; extending this practice to encrypted content would merely require technical changes. Moreover, what begins as a voluntary option can easily become compulsory in practice, as platforms face reputational, legal and market pressure to “cooperate” with the authorities.

Nor does this affect only people in the EU: it affects everyone around the world, including in the United States. If platforms decide to stay in the EU, they will be forced to scan the conversations of everyone in the bloc. And if you are not in the EU but chat with someone who is, your privacy is compromised too.

Another major danger is the introduction of mandatory age-verification systems for app stores and private messaging services. Though the Council claims these systems can be designed to “preserve privacy”, critics insist that the very concept is technologically unworkable. Age assessments inevitably rely on biometric and behavioural data, both of which require invasive data collection. Far from protecting children, these systems would increase the volume of sensitive personal information being stored and potentially exploited.
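Before turning to the identity question in more detail, it is worth making the experts’ false-positive warning concrete. The rough arithmetic below is purely illustrative: the message volume, prevalence and accuracy figures are assumptions chosen for the example, not numbers from the proposal or from police data. Even so, it shows why a classifier that sounds impressively accurate would still bury investigators in flags pointing at innocent chats.

```python
# Illustrative base-rate arithmetic; every figure here is an assumption.
# Suppose a scanner is 99% accurate in both directions, and that 1 in 10,000
# scanned messages actually contains abusive material.

messages_scanned = 1_000_000_000   # hypothetical daily volume on a large platform
prevalence = 1 / 10_000            # assumed share of genuinely abusive messages
sensitivity = 0.99                 # assumed true-positive rate
false_positive_rate = 0.01         # assumed share of innocent messages wrongly flagged

abusive = messages_scanned * prevalence
innocent = messages_scanned - abusive

true_flags = abusive * sensitivity
false_flags = innocent * false_positive_rate

share_false = false_flags / (true_flags + false_flags)
print(f"Flagged messages: {true_flags + false_flags:,.0f}")
print(f"Share of flags that are innocent chats: {share_false:.0%}")
# With these assumptions, roughly 99 in every 100 flags point at innocent
# conversations.
```

The driver is the base rate: genuinely abusive messages are vanishingly rare relative to the billions of innocent ones scanned, so even a small error rate on the innocent majority dwarfs the true detections, which is roughly the pattern the German and Swiss police figures describe.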
Requiring official identity documents for online verification would exclude millions of people who lack easy access to digital IDs, or who won’t hand over such sensitive documentation merely to use a messaging service. In practice, this would spell the end of anonymous communication online, forcing users to present ID or face scans simply to open an email or messaging account. Breyer has warned that such measures would be particularly disastrous for whistleblowers, journalists, political activists and others reliant on online anonymity. It would also push under-16s towards less safe, poorly regulated alternatives that lack encryption or basic safety protections.

Ultimately, critics argue that mass surveillance is simply the wrong approach to combating child sexual exploitation. Scanning private messages does not stop the circulation of child abuse material. Platforms such as Facebook have used scanning technologies for years, yet the number of automated reports continues to rise. Moreover, mandatory scanning would still fail to detect perpetrators who distribute material through decentralised secret forums, or via encrypted archives shared using only links and passwords — methods that scanning algorithms cannot penetrate. The most effective strategy would be to delete known abuse material from online hosts, something Europol has repeatedly failed to do.

Chat Control, in short, would do little to actually help victims of child sexual exploitation while harming everyone else. Every message would become subject to surveillance, without any judicial oversight, contrary to long-standing guarantees of private correspondence.

There’s a legal question here too. The EU Court of Justice has previously ruled that general and automatic analysis of private communications violates fundamental rights, yet the EU is now poised to adopt legislation that contravenes this precedent. Once adopted, it could take years for a new judicial challenge to overturn it. The confidentiality of electronic communication — essential for personal privacy, business secrecy and democratic participation — would be sacrificed.

Sensitive conversations could be read, analysed, wrongly flagged or even misused, as past scandals involving intelligence officials and tech employees have shown. One of the most notorious cases of intelligence abuse came from the US National Security Agency, where multiple NSA employees were caught using the agency’s powerful surveillance tools to spy on romantic partners and ex-lovers. Leaked documents have also shown that the UK intelligence agency GCHQ captured and stored images from Yahoo webcam chats, including millions of sexually explicit images of completely innocent users. There have also been several cases of Big Tech employees, from Google to Facebook, using internal tools to spy on unsuspecting users.

Furthermore, secure encryption, a foundation of cybersecurity, would be compromised by the introduction of backdoors or client-side scanning tools that foreign intelligence services or criminal actors could exploit. At the same time, responsibility for criminal investigations would shift from democratically accountable authorities to opaque corporate algorithms, with minimal transparency or oversight.

Opponents therefore argue that the EU should instead adopt a fundamentally different approach: one that protects children without undermining fundamental rights.
They propose ending the current voluntary scanning of private messages by US internet companies — restoring the principle that targeted surveillance requires a judicial warrant and must be limited to individuals reasonably suspected of wrongdoing — and maintain that secure end-to-end encryption, and the right to anonymous communication, must be preserved.

Particularly worrying is the issue of function creep: the process by which a technology introduced for a narrowly defined purpose gradually expands to serve broader, and sometimes entirely different, purposes over time. The UK’s Online Safety Act, passed in October 2023, obliges firms to develop child sexual abuse detection systems even though the British government itself admits that such infrastructure is not yet technically available, creating legal authority in advance of technical capability. In the United States, “temporary” surveillance measures introduced under the post-9/11 Patriot Act became permanent, and indeed expanded in scope.

Once a technological infrastructure for comprehensive online surveillance exists, it can easily be repurposed and is hard to dismantle. Technologies designed to detect harmful content can quickly be extended to political repression; examples from authoritarian states demonstrate how similar systems are used to identify and target dissidents.

Breyer summarised this pattern starkly: “They are selling us security but delivering a total surveillance machine. They promise child protection but punish our children and criminalise privacy.”

The implications are ominous. Europe effectively stands on the threshold of building a machine that can see everything. Once constructed, it will serve not only the current political authorities — the idea of Ursula von der Leyen spying on everyone’s messages is disturbing enough — but whoever wields power next. With yet another vote approaching, the window to stop Chat Control is narrowing.
Thomas Fazi is an UnHerd columnist and translator. His latest book is The Covid Consensus, co-authored with Toby Green.