AI-Powered Disinformation Swarms Are Coming for Democracy
Recorded: Jan. 23, 2026, 10 a.m.
| Original | Summarized |
AI-Powered Disinformation Swarms Are Coming for Democracy

By David Gilbert | Politics | Jan 22, 2026, 2:00 PM

Advances in artificial intelligence are creating a perfect storm for those seeking to spread disinformation at unprecedented speed and scale. And it's virtually impossible to detect.

In 2016, hundreds of Russians filed into a modern office building on 55 Savushkina Street in St. Petersburg every day; they were part of the now-infamous troll farm known as the Internet Research Agency. Day and night, seven days a week, these employees would manually comment on news articles, post on Facebook and Twitter, and generally seek to rile up Americans about the then-upcoming presidential election.

When the scheme was finally uncovered, there was widespread media coverage and Senate hearings, and social media platforms changed the way they verified users. But for all the money and resources poured into the IRA, its impact was minimal—certainly compared to that of another Russia-linked campaign that saw Hillary Clinton's emails leaked just before the election.

A decade on, while the IRA is no more, disinformation campaigns have continued to evolve, including the use of AI technology to create fake websites and deepfake videos. A new paper, published in Science on Thursday, predicts an imminent step-change in how disinformation campaigns will be conducted. Instead of hundreds of employees sitting at desks in St. Petersburg, the paper posits, one person with access to the latest AI tools will be able to command "swarms" of thousands of social media accounts, capable not only of crafting unique posts indistinguishable from human content, but of evolving independently and in real time—all without constant human oversight.

These AI swarms, the researchers believe, could deliver society-wide shifts in viewpoint that not only sway elections but ultimately bring about the end of democracy—unless steps are taken now to prevent it.

"Advances in artificial intelligence offer the prospect of manipulating beliefs and behaviors on a population-wide level," the report says. "By adaptively mimicking human social dynamics, they threaten democracy."

The paper was authored by 22 experts from across the globe, drawn from fields including computer science, artificial intelligence, and cybersecurity, as well as psychology, computational social science, journalism, and government policy. The pessimistic outlook on how AI technology will change the information environment is shared by other experts in the field who have reviewed the paper.

"To target chosen individuals or communities is going to be much easier and powerful," says Lukasz Olejnik, a visiting senior research fellow at King's College London's Department of War Studies and the author of Propaganda: From Disinformation and Influence to Operations and Information Warfare. "This is an extremely challenging environment for a democratic society. We're in big trouble."

Even those who are optimistic about AI's potential to help humans believe the paper highlights a threat that needs to be taken seriously.

"AI-enabled influence campaigns are certainly within the current state of advancement of the technology, and as the paper sets out, this also poses significant complexity for governance measures and defense response," says Barry O'Sullivan, a professor at the School of Computer Science and IT at University College Cork.

In recent months, as AI companies seek to prove they are worth the hundreds of billions of dollars that have been poured into them, many have pointed to the most recent crop of AI agents as evidence that the technology will finally live up to the hype. But the very same technology could soon be deployed, the authors argue, to disseminate disinformation and propaganda at a scale never before seen.

The swarms the authors describe would consist of AI-controlled agents capable of maintaining persistent identities and, crucially, memory, allowing them to simulate believable online identities. The agents would coordinate to achieve shared objectives while maintaining individual personas and output to avoid detection. These systems would also be able to adapt in real time, responding to signals from social media platforms and to conversations with real humans.

"We are moving into a new phase of informational warfare on social media platforms where technological advancements have made the classic bot approach outdated," says Jonas Kunst, a professor of communication at BI Norwegian Business School and one of the coauthors of the report.

For experts who have spent years tracking and combating disinformation campaigns, the paper presents a terrifying future.

"What if AI wasn't just hallucinating information, but thousands of AI chatbots were working together to give the guise of grassroots support where there was none? That's the future this paper imagines—Russian troll farms on steroids," says Nina Jankowicz, the former Biden administration disinformation czar who is now CEO of the American Sunlight Project.

The researchers say it's unclear whether this tactic is already being used, because the systems currently in place to track and identify coordinated inauthentic behavior are not capable of detecting it.

"Because of their elusive features to mimic humans, it's very hard to actually detect them and to assess to what extent they are present," says Kunst. "We lack access to most [social media] platforms because platforms have become increasingly restrictive, so it's difficult to get an insight there. Technically, it's definitely possible. We are pretty sure that it's being tested."

Kunst added that these systems are likely to still have some human oversight while they are being developed, and predicts that while they may not have a massive impact on the 2026 US midterms in November, they will very likely be deployed to disrupt the 2028 presidential election.

Accounts indistinguishable from humans on social media platforms are only one issue. The ability to map social networks at scale will, the researchers say, also allow those coordinating disinformation campaigns to aim agents at specific communities, ensuring the biggest impact.

"Equipped with such capabilities, swarms can position for maximum impact and tailor messages to the beliefs and cultural cues of each community, enabling more precise targeting than that with previous botnets," they write.

Such systems could be essentially self-improving, using the responses to their posts as feedback to refine their reasoning and better deliver a message. "With sufficient signals, they may run millions of micro A/B tests, propagate the winning variants at machine speed, and iterate far faster than humans," the researchers write.

To combat the threat posed by AI swarms, the researchers suggest establishing an "AI Influence Observatory," made up of people from academic groups and nongovernmental organizations working to "standardize evidence, improve situational awareness, and enable faster collective response rather than impose top-down reputational penalties."

One group not included is executives from the social media platforms themselves, primarily because the researchers believe those companies incentivize engagement over everything else and therefore have little incentive to identify these swarms.

"Let's say AI swarms become so frequent that you can't trust anybody and people leave the platform," says Kunst. "Of course, then it threatens the model. If they just increase engagement, for a platform it's better to not reveal this, because it seems like there's more engagement, more ads being seen, that would be positive for the valuation of a certain company."

As well as a lack of action from the platforms, experts believe there is little incentive for governments to get involved. "The current geopolitical landscape might not be friendly for 'Observatories' essentially monitoring online discussions," Olejnik says.

Jankowicz agrees: "What's scariest about this future is that there's very little political will to address the harms AI creates, meaning [AI swarms] may soon be reality."
|
The specter of sophisticated disinformation campaigns, once exemplified by the Russian Internet Research Agency's operations, is rapidly evolving into a new, far more potent threat: AI-powered disinformation swarms. According to a recently published report in *Science*, the combination of advances in artificial intelligence and the willingness of malicious actors to exploit the technology poses an imminent and potentially catastrophic risk to democratic societies. This isn't simply about coordinated bot networks; the research posits a shift toward autonomous AI systems capable of generating and disseminating deceptive information at unprecedented scale, potentially reshaping public opinion and undermining electoral processes.

The report, authored by 22 experts from diverse fields – including computer science, cybersecurity, psychology, and journalism – argues that the shift away from manually driven disinformation operations, as seen with the Internet Research Agency, represents a critical turning point. Instead of hundreds of individuals meticulously crafting and posting fabricated content, a single person with access to the appropriate AI tools can now orchestrate an army of thousands of virtual accounts. These accounts wouldn't just generate content; they would autonomously evolve and adapt in real time, mimicking human social dynamics and engaging in dynamic conversations—all without constant human oversight. This represents a profound change in the tactics of information manipulation.

The core concern, highlighted by the authors and echoed by other experts in the field, is the potential for these AI systems to fundamentally destabilize democratic institutions. The report explicitly states that "advances in artificial intelligence offer the prospect of manipulating beliefs and behaviors on a population-wide level," suggesting that the capability to influence large-scale social trends – and, crucially, elections – is now within reach.
The very nature of these swarms – capable of simulating believable online identities and possessing memory – further magnifies the potential for deceit. These systems aren't merely hallucinating information; they are collectively constructing and propagating narratives.

Several elements contribute to the heightened threat. First, AI's ability to generate content indistinguishable from human-created material is rapidly advancing. Second, the coordinated efforts of these swarms, designed to achieve shared objectives while maintaining individual personas, are extremely difficult to detect. Third, the potential for these systems to adapt in real time to interactions with individuals and social media platforms, learning and refining their strategies in a feedback loop, makes defense nearly impossible.

The authors predict that such swarms will likely be deployed in the near future, possibly affecting the 2026 US midterms and very likely the 2028 presidential election. The systems currently designed to track and identify coordinated inauthentic behavior are, according to the researchers, ill-equipped to handle this new level of sophistication; the swarms' ability to mimic human interaction poses a considerable challenge to detection.

To address this emerging threat, the report suggests establishing an "AI Influence Observatory," composed of academics and nongovernmental organizations, to standardize evidence, improve situational awareness, and facilitate coordinated responses. Notably, the authors are skeptical that social media platforms will take decisive action themselves, citing the platforms' primary motivation as engagement rather than truth. That skepticism reflects growing concern about the influence of commercial interests over social discourse. Experts also believe this new landscape poses a significant challenge to governments, citing geopolitical constraints that may hinder collaboration and information sharing.
Nina Jankowicz, former Biden administration disinformation czar, aptly describes the situation: "What's scariest about this future is that there's very little political will to address the harms AI creates, meaning [AI swarms] may soon be reality."

The potential for these AI swarms to operate autonomously, learning from and adapting to human behavior, represents a fundamental shift in the nature of information warfare. As Lukasz Olejnik, a visiting senior research fellow at King's College London's Department of War Studies, puts it, "This is an extremely challenging environment for a democratic society." The researchers acknowledge the difficulty of detecting these swarms, noting that increasingly restrictive platform policies have made social media data hard to access.

Beyond the immediate threat to electoral processes, the report raises broader concerns about the erosion of trust and the potential for social division. The authors note that if AI swarms become so frequent that individuals cannot trust anything they encounter online, the result could be widespread disengagement and a decline in civic participation. The prediction that these swarms will likely be deployed to disrupt the 2028 presidential election underscores the urgency of addressing the threat while the technology is still relatively nascent. Ultimately, the AI Influence Observatory isn't just about countering disinformation; it's about mitigating a fundamental shift in the dynamics of information – one where human judgment and control are increasingly challenged by autonomous, adaptive intelligence. |