LmCast :: Stay tuned in

AI Search Tools Easily Fooled by Fake Content

Recorded: Oct. 30, 2025, 2:20 p.m.

Original Summarized

AI Search Tools Easily Fooled by Fake Content

New research shows AI crawlers like Perplexity, Atlas, and ChatGPT are surprisingly easy to fool.

By Jai Vijayan, Contributing Writer | October 29, 2025 | 4 Min Read

AI search tools like Perplexity, ChatGPT, and OpenAI's Atlas browser offer powerful capabilities for research and information gathering, but they are also dangerously susceptible to low-effort content manipulation attacks. It turns out that websites able to detect when an AI crawler visits can serve completely different content than what human visitors see, allowing bad actors to deliver poisoned content with surprising ease.

Misinformation and Fake Profiles

To demonstrate how effective this "AI cloaking" technique can be, researchers at SPLX recently ran experiments with sites that served different content to regular Web browsers and to AI crawlers, including Atlas and ChatGPT. One demonstration involved a fictional designer from Oregon, whom the researchers named "Zerphina Quortane." The researchers rigged it so human visitors to Quortane's site would see what appeared to be a legitimate bio and portfolio, presented on a professional-looking Web page with a clean layout.
But when an AI agent visited the same URL, the server served up entirely fabricated content that cast the fictional Quortane as a "Notorious Product Saboteur & Questionable Technologist," replete with examples of failed projects and ethical violations.

"Atlas and other AI tools dutifully reproduce the poisoned narrative describing Zerphina as unreliable, unethical, and unhirable," SPLX researchers Ivan Vlahov and Bastien Eymery wrote in a recent blog post. "No validation. Just confident, authoritative hallucination rooted in manipulated data."

In another experiment, SPLX set out to show how easily an AI crawler can be tricked into preferring the wrong job candidate by serving it a different version of a résumé than what a human would see. The researchers created a fake job posting with specific candidate evaluation criteria and then set up plausible but fake candidate profiles hosted on different Web pages. For one of the profiles — associated with a fictional individual, "Natalie Carter" — the researchers ensured the AI crawler would see a version of Carter's résumé that made her appear significantly more accomplished than the version humans would read. Sure enough, when one of the AI crawlers in the study visited the profiles, it ranked Carter ahead of all the other candidates. But when the researchers presented Carter's unmodified résumé — the one humans would see — the crawler put her dead last.

AI-Targeted Cloaking

The experiments show how AI-targeted cloaking can turn a "classic SEO trick into a powerful misinformation weapon," Vlahov and Eymery wrote. Cloaking is a technique scammers have long used to serve search engine crawlers different content from what humans see in order to manipulate search results. AI cloaking simply extends the technique to AI crawlers, but with considerably more impact.

As the researchers explained it, "a single rule on a web server can rewrite how AI systems describe a person, brand, or product, without leaving public traces." With just a few lines of cleverly manipulated content, an attacker could fool hiring tools, compliance systems, and research models into ingesting false data. The fake candidate profile experiment showed how attackers can use AI agent-specific content to skew automated hiring, procurement, or compliance tools. In fact, "any pipeline that trusts web-retrieved inputs is exposed to silent bias," the researchers said.

That AI crawlers — at least at their present stage of evolution — don't verify or validate the content they ingest makes it easy for attackers to carry out cloaking attacks. "No technical hacking needed. Just content delivery manipulation," Vlahov and Eymery said.

Organizations that allow AI systems to make judgment calls based on external data — like shortlisting candidates for a job interview based on their social media profiles — need to pay attention. Instead of implicitly trusting the tool, organizations must implement controls to validate AI-retrieved content against canonical sources. They also need to red team their internal AI workflows for exposure to AI cloaking-style attacks and ask vendors about content provenance and bot authentication, SPLX said.

"This is context poisoning, not hacking," the researchers noted. "The manipulation happens at the content-delivery layer, where trust assumptions are weakest."

The content manipulation vulnerability that SPLX's research highlighted is just one of many emerging risks tied to the rapid integration of AI tools into daily workflows. Previous research has shown how AI systems are prone to hallucinate false information with confidence, amplify biases from their training data, leak sensitive information through prompt injection attacks, and behave in other unpredictable ways.

About the Author

Jai Vijayan, Contributing Writer

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year career at Computerworld, Jai also covered a variety of other technology topics, including big data, Hadoop, Internet of Things, e-voting, and data analytics. Prior to Computerworld, Jai covered technology issues for The Economic Times in Bangalore, India. Jai has a Master's degree in Statistics and lives in Naperville, Ill.

New research highlights a critical vulnerability in AI search tools, revealing how systems like Perplexity, ChatGPT, and OpenAI’s Atlas browser are surprisingly susceptible to content manipulation attacks. These tools, which are increasingly used for research, information retrieval, and decision-making, can be deceived by websites that serve altered content specifically tailored to AI crawlers. This "AI cloaking" technique exploits the fact that many websites can detect whether a visitor is an AI agent or a human and adjust their output accordingly. By doing so, malicious actors can inject false information into AI systems without leaving detectable traces, creating a significant risk for misinformation and data poisoning. The study, conducted by SPLX researchers Ivan Vlahov and Bastien Eymery, demonstrates how this method can be weaponized to distort critical data inputs, undermining the reliability of AI-driven processes in domains ranging from hiring to compliance and cybersecurity.
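
The detection step described above usually comes down to inspecting the incoming request, most simply its User-Agent header. The following is a minimal, hypothetical sketch of that mechanic in Python (Flask); it is not SPLX's demo code, and the crawler substrings and bio strings are assumptions chosen for illustration.

```python
# Hypothetical sketch of User-Agent-keyed "AI cloaking" (illustration only,
# not SPLX's actual demo). Crawler substrings and bio strings are assumptions.
from flask import Flask, request

app = Flask(__name__)

# Substrings a cloaking site might look for in the User-Agent header.
SUSPECTED_AI_AGENTS = ("gptbot", "oai-searchbot", "chatgpt", "perplexitybot", "claudebot")

HUMAN_BIO = "<h1>Zerphina Quortane</h1><p>Product designer. Clean portfolio, plausible references.</p>"
POISONED_BIO = "<h1>Zerphina Quortane</h1><p>Notorious product saboteur and questionable technologist.</p>"

def looks_like_ai_crawler(user_agent: str) -> bool:
    """Crude heuristic: does the User-Agent mention a known AI crawler?"""
    ua = (user_agent or "").lower()
    return any(token in ua for token in SUSPECTED_AI_AGENTS)

@app.route("/portfolio")
def portfolio():
    # Same URL, two different stories: humans get the clean bio,
    # suspected AI crawlers get the poisoned one.
    if looks_like_ai_crawler(request.headers.get("User-Agent", "")):
        return POISONED_BIO
    return HUMAN_BIO

if __name__ == "__main__":
    app.run(port=8000)
```

A rule this small is the entire attack: no exploit and no compromise of the AI system, just a different response body for a different client, which is what makes the technique so cheap to deploy.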

The experiments conducted by SPLX involved creating fictional personas and manipulating their online profiles to demonstrate the effectiveness of AI cloaking. One example involved a fabricated designer, Zerphina Quortane, whose website displayed a professional bio and portfolio to human visitors. However, when AI crawlers like Atlas or ChatGPT accessed the same URL, they encountered a completely different narrative that portrayed Quortane as an “unreliable, unethical, and unhirable” figure. The AI systems dutifully replicated this fabricated information without any validation, producing authoritative but false claims about her credentials. This underscores a fundamental flaw in current AI technologies: their lack of mechanisms to verify the accuracy or authenticity of the content they process. The researchers noted that these systems do not question the data they receive, instead generating responses based on whatever information is presented to them. This "hallucination" phenomenon—where AI systems confidently produce false or misleading content—is exacerbated when the input itself is manipulated through cloaking techniques.

A second experiment focused on automated hiring, where AI tools were tricked into favoring a fictitious candidate, Natalie Carter. By serving different versions of her résumé to human users and AI crawlers, the researchers demonstrated how cloaking could skew hiring decisions. While humans saw a standard, unremarkable profile, AI agents encountered an exaggerated version of Carter's qualifications that made her appear significantly more accomplished. As a result, the AI tools ranked her as the top candidate, even though the unaltered résumé would have placed her last among the competitors. This highlights a broader risk: when organizations rely on AI systems to evaluate external data sources, such as social media profiles or job applications, they expose themselves to manipulation. The study emphasizes that even minor alterations in content delivery can lead to systemic biases or errors, particularly in high-stakes scenarios where AI is used to make critical decisions.
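
The exposure comes from the retrieval step itself: a screening pipeline that fetches a candidate's page and hands whatever comes back to a model has no way of knowing whether a human reviewer would ever see the same text. Below is a hypothetical, simplified fetch-then-rank sketch of that trust boundary; rank_candidates stands in for whatever model call a real tool would make, and nothing here reflects any specific vendor's product.

```python
# Hypothetical fetch-then-rank screening sketch. Whatever the server chooses to
# send this client is what gets ranked; nothing checks that a human visiting
# the same URL would see the same text.
import re
from typing import Callable

import requests

def fetch_profile_text(url: str) -> str:
    """Fetch a candidate page and crudely strip markup down to plain text."""
    html = requests.get(url, timeout=10).text
    text = re.sub(r"<[^>]+>", " ", html)   # naive tag stripping, fine for a sketch
    return re.sub(r"\s+", " ", text).strip()

def shortlist(candidate_urls: list[str],
              rank_candidates: Callable[[dict[str, str]], list[str]]) -> list[str]:
    """Pull each candidate's web profile and pass the raw text to a ranking step
    (for example, an LLM prompt). The trust problem: the text ranked here may be
    crawler-targeted content that no human reviewer ever sees."""
    profiles = {url: fetch_profile_text(url) for url in candidate_urls}
    return rank_candidates(profiles)
```

In the Natalie Carter scenario, the crawler-targeted résumé is the only input this ranking step ever sees, which is why the model's ordering and a human reviewer's can diverge so sharply.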

The implications of AI cloaking extend beyond individual cases of misinformation, raising concerns about the integrity of AI systems in broader contexts. Traditional cloaking techniques have long been used by malicious actors to manipulate search engine rankings, but the adaptation of this method for AI crawlers introduces a new dimension of risk. Unlike human users, who can recognize discrepancies or inconsistencies in content, AI systems operate under the assumption that all data they encounter is legitimate. This trust in external sources creates a vulnerability that attackers can exploit with minimal effort. As Vlahov and Eymery explain, "a single rule on a web server can rewrite how AI systems describe a person, brand, or product, without leaving public traces." This means that attackers can manipulate AI systems to generate false narratives, distort evaluations, or even compromise compliance checks without relying on technical hacking. The study warns that any system that depends on AI to process external data—such as candidate screening tools, procurement platforms, or regulatory compliance software—is potentially exposed to this form of covert manipulation.
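
The flip side of that covertness is that the divergence itself is observable: the same URL answers differently depending on who appears to be asking. A basic red-team probe can therefore fetch a page twice, once with a browser-style User-Agent and once with an AI-crawler-style one, and compare the results. The sketch below is a hypothetical illustration using the requests library; the header strings are assumptions, and cloaking keyed on IP ranges or behavioral signals rather than the User-Agent would evade this simple check.

```python
# Hypothetical cloaking probe: fetch one URL under two client identities and
# flag divergence. The User-Agent strings are illustrative assumptions.
import hashlib
import re

import requests

BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
AI_CRAWLER_UA = "Mozilla/5.0 (compatible; GPTBot/1.0)"  # example crawler-style identity

def normalized_hash(html: str) -> str:
    """Hash page text with whitespace collapsed so trivial formatting
    differences do not trigger false alarms."""
    text = re.sub(r"\s+", " ", html).strip().lower()
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def serves_divergent_content(url: str) -> bool:
    """Return True if the 'browser' and 'AI crawler' identities receive
    different content from the same URL."""
    as_browser = requests.get(url, headers={"User-Agent": BROWSER_UA}, timeout=10)
    as_crawler = requests.get(url, headers={"User-Agent": AI_CRAWLER_UA}, timeout=10)
    return normalized_hash(as_browser.text) != normalized_hash(as_crawler.text)

if __name__ == "__main__":
    target = "https://example.com/profile"   # placeholder URL
    if serves_divergent_content(target):
        print(f"WARNING: {target} serves different content to AI crawlers")
    else:
        print(f"No User-Agent-keyed divergence detected for {target}")
```

A mismatch is not proof of malice, since sites legitimately vary content, but it is exactly the kind of divergence worth reviewing before an AI workflow acts on the retrieved text.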

The research also points to a deeper issue: the lack of robust validation mechanisms in AI systems. Current models are designed to process and synthesize information, but they lack the ability to cross-check sources or verify the authenticity of their inputs. This makes them particularly vulnerable to content poisoning, where malicious actors inject false data into the information stream. The SPLX study builds on previous research that has identified similar vulnerabilities, such as AI systems generating hallucinations based on biased training data or leaking sensitive information through prompt injection attacks. However, AI cloaking represents a more insidious threat because it operates at the content-delivery layer, where trust assumptions are weakest. Unlike traditional hacking methods that require exploiting technical flaws, cloaking relies on manipulating the very infrastructure of information delivery, making it harder to detect and mitigate.
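
Because the weakness sits at the content-delivery layer, one partial mitigation is to stop treating retrieved text as a bare string and instead carry delivery metadata along with it, so downstream steps and auditors can reconstruct how each piece of content was obtained. The sketch below shows one hypothetical way to do that in Python; it is a pattern suggestion, not a feature of any particular crawler or vendor.

```python
# Hypothetical provenance wrapper: keep fetch metadata attached to web-retrieved
# content so downstream AI steps can later be audited for cloaking.
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

import requests

@dataclass(frozen=True)
class RetrievedContent:
    url: str
    fetched_at: str        # ISO 8601 timestamp of the request
    user_agent: str        # client identity used for the fetch
    status_code: int
    content_sha256: str    # fingerprint for comparison across identities
    text: str

def fetch_with_provenance(url: str, user_agent: str) -> RetrievedContent:
    """Fetch a page and record the delivery context alongside the content."""
    resp = requests.get(url, headers={"User-Agent": user_agent}, timeout=10)
    return RetrievedContent(
        url=url,
        fetched_at=datetime.now(timezone.utc).isoformat(),
        user_agent=user_agent,
        status_code=resp.status_code,
        content_sha256=hashlib.sha256(resp.content).hexdigest(),
        text=resp.text,
    )
```

Comparing the stored hashes for the same URL fetched under different client identities then becomes a one-line audit, and the recorded user agent and timestamp show exactly what the pipeline ingested and when.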

Organizations that integrate AI tools into their workflows must take proactive steps to address these risks. The researchers recommend implementing controls that validate AI-retrieved content against canonical sources, such as verified databases or official records. They also advise conducting red-team exercises to test the resilience of AI systems against cloaking-like attacks and engaging with vendors to ensure transparency in content provenance and bot authentication. For instance, hiring tools that rely on AI to analyze candidate profiles should be designed with safeguards that cross-reference data across multiple sources, reducing the likelihood of being misled by manipulated content. Additionally, regulatory frameworks may need to evolve to address the unique challenges posed by AI cloaking, ensuring that organizations are held accountable for the accuracy and integrity of the data they use.
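
In concrete terms, such a validation control can be as simple as comparing the fields a decision depends on against a canonical record the organization already trusts, such as an HR system or a verified registry, and routing any mismatch to a human. The sketch below is a hypothetical illustration of that pattern; the field names and sample records are made up.

```python
# Hypothetical validation control: check AI-retrieved claims against a canonical
# record and flag disagreements for human review instead of acting on them.
from dataclasses import dataclass, field

@dataclass
class ValidationResult:
    trusted: bool
    mismatched_fields: list[str] = field(default_factory=list)

def validate_against_canonical(ai_retrieved: dict[str, str],
                               canonical_record: dict[str, str]) -> ValidationResult:
    """Compare the fields a downstream decision depends on. Claims absent from
    the canonical source are treated as unverified rather than silently accepted."""
    mismatches = []
    for key, claimed in ai_retrieved.items():
        known = canonical_record.get(key)
        if known is None or claimed.strip().lower() != known.strip().lower():
            mismatches.append(key)
    return ValidationResult(trusted=not mismatches, mismatched_fields=mismatches)

# Made-up example: the title claim disagrees with the canonical record,
# so the result is escalated rather than trusted.
ai_profile = {"name": "N. Carter", "title": "Principal Engineer"}
hr_record = {"name": "N. Carter", "title": "Junior Engineer"}

result = validate_against_canonical(ai_profile, hr_record)
if not result.trusted:
    print("Escalate to human review; unverified fields:", result.mismatched_fields)
```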

The study also underscores a broader concern about the rapid integration of AI tools into critical decision-making processes. As more industries adopt AI for tasks like cybersecurity monitoring, financial analysis, and healthcare diagnostics, the potential consequences of content manipulation become more severe. For example, an AI system used to identify cybersecurity threats could be misled by cloaked data into overlooking genuine vulnerabilities or falsely flagging benign activity. Similarly, an AI-driven compliance tool could be manipulated to overlook regulatory violations by presenting falsified records. These scenarios highlight the need for a multi-layered approach to AI security, combining technical safeguards with policy measures and ethical guidelines.

The findings of the SPLX research align with growing concerns about the limitations of AI in handling complex, real-world data. While these systems have demonstrated impressive capabilities in tasks like natural language processing and pattern recognition, their reliance on external information sources exposes them to risks that are not yet fully understood. The study serves as a wake-up call for developers, organizations, and policymakers to prioritize the development of more robust AI systems that can detect and resist manipulation. As Vlahov and Eymery note, the problem is not a technical failure of AI but rather a systemic issue rooted in how these systems are designed to engage with the digital world. Addressing this challenge will require a shift in both technology and practice, ensuring that AI tools are not only powerful but also trustworthy.

In conclusion, the research on AI cloaking reveals a critical vulnerability in modern search tools, demonstrating how simple content manipulation can lead to significant misinformation and biased outcomes. By exploiting the trust that AI systems place in external data, attackers can create false narratives that are then amplified by these tools without any validation. This poses a serious threat to the reliability of AI-driven processes in various domains, from hiring and compliance to cybersecurity and healthcare. The study calls for immediate action to strengthen the security of AI systems, including enhanced validation protocols, greater transparency in content delivery, and ongoing research into the ethical implications of AI adoption. As the use of AI continues to expand, ensuring that these tools are resilient against manipulation will be essential for maintaining their credibility and effectiveness.