Prompt Injections Loom Large Over ChatGPT's Atlas Browser
Recorded: Nov. 26, 2025, 7:06 p.m.
| Original | Summarized |
Prompt Injections Loom Large Over ChatGPT Atlas Browser TechTarget and Informa Tech’s Digital Business Combine.TechTarget and InformaTechTarget and Informa Tech’s Digital Business Combine.Together, we power an unparalleled network of 220+ online properties covering 10,000+ granular topics, serving an audience of 50+ million professionals with original, objective content from trusted sources. We help you gain critical insights and make more informed decisions across your business priorities.Dark Reading Resource LibraryBlack Hat NewsOmdia CybersecurityAdvertiseNewsletter Sign-UpNewsletter Sign-UpCybersecurity TopicsRelated TopicsApplication SecurityCybersecurity CareersCloud SecurityCyber RiskCyberattacks & Data BreachesCybersecurity AnalyticsCybersecurity OperationsData PrivacyEndpoint SecurityICS/OT SecurityIdentity & Access Mgmt SecurityInsider ThreatsIoTMobile SecurityPerimeterPhysical SecurityRemote WorkforceThreat IntelligenceVulnerabilities & ThreatsRecent in Cybersecurity TopicsApplication SecurityPrompt Injections Loom Large Over ChatGPT's Atlas BrowserPrompt Injections Loom Large Over ChatGPT's Atlas BrowserbyAlexander CulafiNov 26, 20256 Min ReadVulnerabilities & ThreatsCritical Flaw in Oracle Identity Manager Under ExploitationCritical Flaw in Oracle Identity Manager Under ExploitationbyRob WrightNov 24, 20252 Min ReadWorld Related TopicsDR GlobalMiddle East & AfricaAsia PacificRecent in World See AllApplication SecurityLINE Messaging Bugs Open Asian Users to Cyber EspionageLINE Messaging Bugs Open Asian Users to Cyber EspionagebyTara SealsNov 21, 20257 Min ReadEndpoint SecurityChina's 'PlushDaemon' Hackers Infect Routers to Hijack Software UpdatesChina's 'PlushDaemon' Hackers Infect Routers to Hijack Software UpdatesbyNate Nelson, Contributing WriterNov 20, 20253 Min ReadThe EdgeDR TechnologyEventsRelated TopicsUpcoming EventsPodcastsWebinarsSEE ALLResourcesRelated TopicsResource LibraryNewslettersPodcastsReportsVideosWebinarsWhite Papers Partner PerspectivesDark Reading Resource LibraryApplication SecurityCyber RiskCybersecurity OperationsСloud SecurityNewsPrompt Injections Loom Large Over ChatGPT's Atlas BrowserIt's the law of unintended consequences: equipping browsers with agentic AI opens the door to an exponential volume of prompt injections.Alexander Culafi, Senior News Writer, Dark ReadingNovember 26, 20256 Min ReadSource: Michael Brooks via Alamy Stock PhotoAs a new AI-powered Web browser brings agentics closer to the masses, questions remain regarding whether prompt injections, the signature LLM attack type, could get even worse.ChatGPT Atlas is OpenAI's large language model (LLM)-powered Web browser launched Oct. 21 and based on Chromium. Currently available for macOS (with other platforms to come), Atlas comes with native ChatGPT functionality including text generation, Web page summarization, and agent capabilities.OpenAI advertises the agent as being able to "book appointments, create slideshows, and more, handling complex tasks from start to finish." ChatGPT's agentic capabilities are only available in Plus (for $20 per month) and Pro ($200 per month), though that is a fair bit more accessible than many of the far more premium agents seen earlier this year. And it's not alone. A quick search on Google shows a range of similar agentic browsers and extensions at various price levels.But here's where things start to get dicey with AI and LLMs. Prompt injections refer to the practice of using a natural language prompt to get an LLM, such as a chatbot, to do something otherwise not intended by the entity responsible for it.Prompt injections also exist in two forms: direct and indirect. A direct prompt injection, for example, might be to ask a chatbot a question that gets it to divulge sensitive company documentation. An indirect prompt injection is more complex because it involves the attacker inserting a prompt in a situation that does not directly instruct the LLM. This could mean the attacker sends the target an email with a malicious prompt hidden inside the body that an AI assistant reads and follows, or it could mean including a malicious prompt as a hidden element on a Web page that an agent could inadvertently take in as it works.Related:Infamous Shai-hulud Worm Resurfaces From the DepthsAI vendors have made progress over the years to curb the prompt injection problem, in part by stacking guardrails on top of models to make them less trusting. But with agents, an emerging category of LLM tools that can autonomously use tools and complete tasks, the issue gets so much more complex. Agents can do rudimentary coding, analysis, research, security, and other kinds of tasks, including those that require it to work with other agents. When you take LLMs, which already have a history of leaking sensitive data, and give them access to tools, it opens up an organization to an immense attack vector for prompt injections. OWASP's list of agentic AI threats is startling, as prompt injections enable models to use coding tools to create new vulnerabilities, conduct remote code execution attacks, and compromise entire networks of agents. While these might not be the types of attacks we see threat actors exploit every day, agents are an emerging LLM category (a technology which in itself remains nascent).Related:LINE Messaging Bugs Open Asian Users to Cyber EspionageAgents were first presented in security as an experimental tool to assist (or perhaps replace) SOC staff and could cost as much as an engineer's salary. Now, through products like the Atlas browser, the technology could go much wider, therefore opening up the potential for more agentic prompt injections. Opening Up the AtlasIn late October, Web browser security firm LayerX reported what it described as the first Atlas browser vulnerability, which would have enabled an attacker to inject malicious instructions into the browser's memory. The firm tells Dark Reading it is working on additional research involving the browser for publication in the future."We need to pay bigger attention to how AI is getting embedded into the browser," Or Eshed, LayerX cofounder and CEO, explains. "The big problem is what we're going to see in the next half of a year, where these browsers become more and more powerful. Whatever makes these products more successful is also what will make attackers happier. It's a kind of double-edged sword."Related:The AI Attack Surface: How Agents Raise the Cyber StakesThe day after Atlas launched, OpenAI chief information security officer (CISO) Dane Stuckey published a post to X both celebrating the launch while noting that prompt injection "remains a frontier, unsolved security problem, and our adversaries will spend significant time and resources to find ways to make ChatGPT agent fall for these attacks." This note was presented alongside a commitment to security, privacy, and safety as well as a list of relevant new features, but it must be said that if OpenAI hasn't cracked prompt injections yet — for the billions of dollars funneled into it — almost certainly no one else has either. And that's before one considers how shared responsibility for AI security is kind of a mess.Amit Chita, field chief technology officer (CTO) for application security vendor Mend.io, tells Dark Reading that even if prompt injection has gotten better in some respects, with agents it's not so simple. Every tool an agent has access to — and every interaction between tools — represents an additional vector for a prompt injection to exploit. Moreover, agents can't be held accountable the way human staff can."[Agentic AI] just makes the problem more complex, because every tool can take actions that cause data leakage or just harm to the organization," he says. "The more tools you have, the more opportunity for issues that you have."Suresh Batchu, COO and co-founder of browser security vendor Seraphic Security, tells Dark Reading that he expects wider agent availability to make prompt injections "worse in the near-to-medium term.""As agents gain autonomy and tool access, prompt injection shifts from 'make the model say something weird' to 'make the model do something dangerous,'" he says. "Cloud providers are already warning that agent toolchains enable exfiltration and remote code execution (RCE) when indirect injections land. More agents mean more targets, more varied implementations, and a long tail of smaller orgs deploying them without mature security. Long term, pressure from repeated incidents will improve defaults, but we're not there yet."Protect Your Agents, Atlas or OtherwiseLet's say you're either running an organization and want to try agentic AI or, possibly, a CISO tasked with implementing an agent. Chita advises organizations to conduct regular manual reviews to determine what tools and data the agent can access and take an inventory to ensure the agent only has access to what it needs. As he puts it, sometimes an organization may determine there is risk in doing something and are still willing to take the risk, "but you need to do it thoughtfully."Batchu recommends strict least-privilege tool access, executing tools in a locked sandbox, placing guardrails at every hop rather than user input and output, and including a human in the loop for high-risk actions. "Prompt injections aren't going away through 'better prompts,'" he says. "The issue improves when agents are architecturally constrained, tool use is least privileged and sandboxed, and untrusted content is treated as hostile by default."About the AuthorAlexander CulafiSenior News Writer, Dark ReadingAlex is an award-winning writer, journalist, and podcast host based in Boston. After cutting his teeth writing for independent gaming publications as a teenager, he graduated from Emerson College in 2016 with a Bachelor of Science in journalism. He has previously been published on VentureFizz, Search Security, Nintendo World Report, and elsewhere. In his spare time, Alex hosts the weekly Nintendo podcast Talk Nintendo Podcast and works on personal writing projects, including two previously self-published science fiction novels.See more from Alexander CulafiMore InsightsIndustry Reports2025 State of Threat Intelligence: What it means for your cybersecurity strategyGartner Innovation Insight: AI SOC AgentsState of AI and Automation in Threat IntelligenceGuide to Network Analysis Visibility SolutionsOrganizations Require a New Approach to Handle Investigation and Response in the CloudAccess More ResearchWebinarsIdentity Security in the Agentic AI EraHow AI & Autonomous Patching Eliminate Exposure RisksSecuring the Hybrid Workforce: Challenges and SolutionsCybersecurity Outlook 2026Threat Hunting Tools & Techniques for Staying Ahead of Cyber AdversariesMore WebinarsYou May Also LikeBlack Hat Middle East & AfricaCybersecurity OperationsAs Gen Z Enters Cybersecurity, Jury Is Out on AI's ImpactAs Gen Z Enters Cybersecurity, Jury Is Out on AI's ImpactbyRobert Lemos, Contributing WriterNov 25, 20254 Min ReadKeep up with the latest cybersecurity threats, newly discovered vulnerabilities, data breach information, and emerging trends. Delivered daily or weekly right to your email inbox.SubscribeWebinarsIdentity Security in the Agentic AI EraTues, Dec 9, 2025 at 1pm ESTHow AI & Autonomous Patching Eliminate Exposure RisksOn-DemandSecuring the Hybrid Workforce: Challenges and SolutionsTues, Nov 4, 2025 at 1pm ESTCybersecurity Outlook 2026Virtual Event | December 3rd, 2025 | 11:00am - 5:20pm ET | Doors Open at 10:30am ETThreat Hunting Tools & Techniques for Staying Ahead of Cyber AdversariesTuesday, Oct 21, 2025 at 1pm ESTMore WebinarsWhite PapersSecure SAST. Innovate Fast: The future of SaaS and Cloud SecurityWhat Can an AI-Powered AppSec Engineer Do?How Squarespace and Semgrep Scaled Secure Development Across Thousands of ReposMissing 88% of Exploits: Rethinking KEV in the AI EraThe Straightforward Buyer's Guide to EDRExplore More White PapersDiscover MoreBlack HatOmdiaWorking With UsAbout UsAdvertiseReprintsJoin UsNewsletter Sign-UpFollow UsCopyright © 2025 TechTarget, Inc. d/b/a Informa TechTarget. This website is owned and operated by Informa TechTarget, part of a global network that informs, influences and connects the world’s technology buyers and sellers. All copyright resides with them. Informa PLC’s registered office is 5 Howick Place, London SW1P 1WG. Registered in England and Wales. TechTarget, Inc.’s registered office is 275 Grove St. Newton, MA 02466.Home|Cookie Policy|Privacy|Terms of Use |
Prompt injections pose a significant and escalating threat to ChatGPT’s Atlas browser, driven by the emerging category of agentic AI. As highlighted by Dark Reading’s Alexander Culafi, this vulnerability stems from the increasing integration of AI agents—tools capable of autonomous task completion—into web browsers. These agents, designed to handle complex tasks like scheduling appointments and generating slideshows, are inadvertently opening up a vast attack surface for prompt injections. These injections, once a relatively minor concern with traditional chatbots, now threaten to leverage the expanded capabilities of these agents to execute malicious commands and compromise entire networks. The core issue lies in the agentic nature of these tools. Unlike static AI models, agents can utilize coding tools, conduct research, and interact with other agents, effectively amplifying the potential damage caused by a successful injection. Culafi notes that the combination of AI, which already struggles with data leakage, and agentic tools creates a dramatically increased attack vector. Multiple sources, including LayerX’s initial security research and subsequent commentary from OpenAI CISO Dane Stuckey, reinforce this risk. Stuckey’s direct acknowledgment of prompt injection as an “unsolved security problem” coupled with the substantial investment in Atlas development, underscores the severity of the challenge. Several individuals, including Mend.io’s Amit Chita and Seraphic Security’s Suresh Batchu, emphasize that simply “better prompts” are insufficient to mitigate this threat. Instead, the solution lies in architectural constraints—limiting agent tool access and employing least-privilege security models. Chita advocates for manual reviews, strict inventory management of agent capabilities, and a sandbox environment for tool interaction, while Batchu stresses the importance of guardrails at every step of interaction and treating untrusted content as hostile by default. This expanded attack surface is further compounded by the widespread deployment of agentic AI across various price points, introducing a “long tail” of potentially insecure organizations. The potential for increased vulnerability, rather than reduced vulnerability, is a key concern. As agents gain autonomy and tool access, Culafi explains, prompt injection shifts from simply causing a “weird response” to facilitating “dangerous actions,” such as remote code execution. The risks extend beyond individual organizations. Cloud providers have already issued warnings about the capability of agent toolchains to enable data exfiltration and remote code execution. Therefore, controlling the spread of agentic AI and its potential for misuse is of paramount importance. The implications are such that the broader the deployment of these agents, the greater the risks and exposure. |