LmCast :: Stay tuned in

AI Agent Security: Whose Responsibility Is It?

Recorded: Oct. 17, 2025, 7:01 p.m.

Original Summarized

AI Agent Security: Whose Responsibility Is It? TechTarget and Informa Tech’s Digital Business Combine.TechTarget and InformaTechTarget and Informa Tech’s Digital Business Combine.Together, we power an unparalleled network of 220+ online properties covering 10,000+ granular topics, serving an audience of 50+ million professionals with original, objective content from trusted sources. We help you gain critical insights and make more informed decisions across your business priorities.Dark Reading Resource LibraryBlack Hat NewsOmdia CybersecurityAdvertiseNewsletter Sign-UpNewsletter Sign-UpCybersecurity TopicsRelated TopicsApplication SecurityCybersecurity CareersCloud SecurityCyber RiskCyberattacks & Data BreachesCybersecurity AnalyticsCybersecurity OperationsData PrivacyEndpoint SecurityICS/OT SecurityIdentity & Access Mgmt SecurityInsider ThreatsIoTMobile SecurityPerimeterPhysical SecurityRemote WorkforceThreat IntelligenceVulnerabilities & ThreatsRecent in Cybersecurity TopicsApplication SecurityAI Chat Data Is History’s Most Thorough Record of Enterprise Secrets, Secure it WiselyAI Chat Data Is History’s Most Thorough Record of Enterprise Secrets, Secure it WiselybyRob T. LeeOct 17, 20255 Min ReadCyberattacks & Data BreachesCyberattackers Target LastPass, Top Password ManagersCyberattackers Target LastPass, Top Password ManagersbyNate Nelson, Contributing WriterOct 16, 20255 Min ReadWorld Related TopicsDR GlobalMiddle East & AfricaAsia PacificRecent in World See AllThreat IntelligenceSilver Fox APT Blurs the Line Between Espionage & CybercrimeSilver Fox APT Blurs the Line Between Espionage & CybercrimebyNate Nelson, Contributing WriterAug 8, 20253 Min ReadThreat IntelligenceIran-Israel War Triggers a Maelstrom in CyberspaceIran-Israel War Triggers a Maelstrom in CyberspacebyNate Nelson, Contributing WriterJun 19, 20255 Min ReadThe EdgeDR TechnologyEventsRelated TopicsUpcoming EventsPodcastsWebinarsSEE ALLResourcesRelated TopicsLibraryNewslettersPodcastsReportsVideosWebinarsWhite papers Partner PerspectivesSEE ALLCybersecurity OperationsCyber RiskApplication SecurityСloud SecurityNewsAI Agent Security: Whose Responsibility Is It?AI Agent Security: Whose Responsibility Is It?AI Agent Security: Whose Responsibility Is It?The shared responsibility model of data security, familiar from cloud deployments, is key to agentic services, but cybersecurity teams and corporate users often struggle with awareness and managing that risk.Alexander Culafi, Senior News Writer, Dark ReadingOctober 17, 20256 Min ReadSource: Brain Light via Alamy Stock PhotoAgentic AI deployments are becoming an imperative for organizations of all sizes looking to boost productivity and streamline processes, especially as major platforms like Microsoft and Salesforce build agents into their offerings. In the rush to deploy and use these helpers, it's important that businesses understand that there's a shared security responsibility between vendor and customer that will be critical to the success of any agentic AI project.The stakes in ignoring security are potentially high: last month for instance, AI security vendor Noma detailed how it discovered "ForcedLeak," a critical severity vulnerability chain in Salesforce's agentic AI offering Agentforce, which could have allowed a threat actor to exfiltrate sensitive CRM data from a customer with improper security controls through an indirect prompt injection attack. Although Salesforce addressed the issue through updates and access control recommendations, ForcedLeak is but one example of the potential for agents to leak sensitive data, either through improper access controls, ingested secrets, or a prompt injection attack. It's not an easy task to add agentic AI security to the mix; it's already challenging enough to determine where responsibility and culpability lie with traditional software and cloud deployments. With something like AI, where the technology can be hastily rolled out (by both vendor and customer alike) and is constantly evolving, establishing those barriers can prove even more complex. Related:Financial, Other Industries Urged to Prepare for Quantum ComputersMoreover, organizations are addressing other security awareness challenges like phishing, and have had to contend with determining the best way to offload as much risk as possible from the user, rather than relying on said user to catch every single malicious email. For phishing, that may take the form of physical FIDO keys and secure email gateways. This is similarly relevant for AI agents, which are imperfect autonomous processes that users may rely on to access sensitive information, or grant excessive permissions, or use to route insecure processes without proper oversight. Training users on how to use — or not use — their agent helpers is thus just one more layer of difficulty for security teams.Who Is Responsible for My AI Agents?Shared responsibility is a complex issue. On one hand, software vendors have long been criticized for putting too much of the security burden onto users while facing little to no culpability for selling insecure products. On the other, if an organization neglects basic security hygiene, it may not be fair to blame the vendor. Related:Generation AI: Why Today's Tech Graduates Are At a DisadvantageItay Ravia, head of Aim Labs at Aim Security (an AI security vendor that previously disclosed a Copilot data exfiltration exploit dubbed "EchoLeak"), tells Dark Reading that in an ideal world, an AI agent would have gone through rigorous security testing. But the AI boom has created a "race to make AI smarter, stronger, and more capable at the expense of security."Ravia says agentic AI customers must take ownership of their security posture by ensuring the right guardrails are in place. Varonis field CTO Brian Vecci explains that the first thing to understand is, data isn't stored in an AI agent directly, but rather within the enterprise data repositories that agents are granted access to. "That access control can be individual to the agent or the user(s) that are prompting it, and it's the responsibility of the enterprise — not the agent's vendor or the hyperscaler provider — to secure that data appropriately," he says. "The shared responsibility model of data security is core to cloud and agentic services, and customers often struggle with managing that risk."Melissa Ruzzi, director of AI at AppOmni, says that keeping data secure for AI agents should be considered akin to keeping data secure in software-as-a-service (SaaS) applications. Related:Chinese Hackers Use Velociraptor IR Tool in Ransomware Attacks"The provider is responsible for the security of the infrastructure itself, and the customer is responsible for securing the data and users," she says. "The most important aspect to understand in AI applications is about the data flow and access, such as where data is coming from, where it’s going and who has access to it. Just because the data is being used by AI does not mean that a rigorous security review process can be skipped."Though Ravia, Vecci, and Ruzzi offer different perspectives on the responsibility angle, together they paint a larger, more complex picture. As data is stored separately from the AI agent, it's up to the user to ensure proper data security controls are in place when they hand over the keys to a non-human user. That said, vendors selling AI should ensure all measures are in place to limit the potential for an agent to leak sensitive data, a responsibility where some vendors falter. Security Awareness: Saving AI Users From ThemselvesRegardless of who's to blame, AI-powered data exposures happen often enough that, like in the case of phishing, it's worth asking whether AI vendors should begin to protect customers from themselves with things like mandatory multifactor authentication (MFA) and secrets scanning. Though by no means an industry standard, there are instances of such things happening; a Salesforce spokesperson tells Dark Reading that the company now requires all customers to use MFA with Salesforce products. Aim Security's Ravia says that although in recent months large vendors have begun to take their first steps toward deploying such protections, "unfortunately, they are still well behind attackers and do not account for novel bypass methods." And, as Varonis' Vecci puts it, vendors like Salesforce and Microsoft that offer built-in AI agents can only secure perimeters. David Brauchler, technical director and head of AI and machine learning security at consultancy NCC Group, tells Dark Reading that while responsibility for applying proper authentication and authorization controls falls on the organization using the agent, and the vendor is responsible for providing appropriate tools to enable the customer to manage access to AI systems — this is where it gets murky, from a user-awareness perspective."AI vendors may reasonably enforce certain security best practices, but none of the tools available to vendors fundamentally solve the underlying data access problem," he says. "Tools like secrets scanning and [data loss prevention] often lead to a false sense of security and a greater proliferation of vulnerabilities, resulting from a 'the vendor will handle it' thought process. These problems fundamentally cannot be solved within the agentic model itself and need to be handled by the architecture of the customer’s AI infrastructure."Bottom line? There have plenty of issues with the rollout of the AI in the last few years, and issues with how agentic AI products in particular are deployed into customer environments. So before an organization decides to invest in an agent or other LLM products to become an "AI winner," it should put security first. The organization should know what data the agents have access to, ensure best practices and guardrails are in place, and have a full understanding of the risks of using the AI product and mitigating any risks. About the AuthorAlexander CulafiSenior News Writer, Dark ReadingAlex is an award-winning writer, journalist, and podcast host based in Boston. After cutting his teeth writing for independent gaming publications as a teenager, he graduated from Emerson College in 2016 with a Bachelor of Science in journalism. He has previously been published on VentureFizz, Search Security, Nintendo World Report, and elsewhere. In his spare time, Alex hosts the weekly Nintendo podcast Talk Nintendo Podcast and works on personal writing projects, including two previously self-published science fiction novels.See more from Alexander CulafiMore InsightsIndustry ReportsHow Enterprises Are Harnessing Emerging Technologies in CybersecurityWorldwide Security Information and Event Management Forecast, 2025--2029: Continued Payment for One's SIEMsQualys Named a Market & Product Leader in CNAPPDimensional Research Report: AI agents: The new attack surfaceESG Research: Organizations seek modern, continuous and integrated pentestingAccess More ResearchWebinarsSecuring the Hybrid Workforce: Challenges and SolutionsCybersecurity Outlook 2026Threat Hunting Tools & Techniques for Staying Ahead of Cyber AdversariesMeasuring Ransomware Resilience: What Hundreds of Security Leaders RevealedManaging Identities Across the Enterprise CloudMore WebinarsYou May Also LikeEditor's ChoiceVulnerabilities & ThreatsMicrosoft Drops Terrifyingly Large October Patch UpdateMicrosoft Drops Terrifyingly Large October Patch UpdatebyJai Vijayan, Contributing WriterOct 14, 20255 Min ReadKeep up with the latest cybersecurity threats, newly discovered vulnerabilities, data breach information, and emerging trends. Delivered daily or weekly right to your email inbox.SubscribeNov 13, 2025During this event, we'll examine the most prolific threat actors in cybercrime and cyber espionage, and how they target and infiltrate their victims.Secure Your SeatWebinarsSecuring the Hybrid Workforce: Challenges and SolutionsTues, Nov 4, 2025 at 1pm ESTCybersecurity Outlook 2026Virtual Event | December 3rd, 2025 | 11:00am - 5:20pm ET | Doors Open at 10:30am ETThreat Hunting Tools & Techniques for Staying Ahead of Cyber AdversariesTuesday, Oct 21, 2025 at 1pm ESTMeasuring Ransomware Resilience: What Hundreds of Security Leaders RevealedThu, Oct 23, 2025 at 11am ESTManaging Identities Across the Enterprise CloudThu, Oct 9, 2025 at 1pm ESTMore WebinarsWhite PapersSecuring Unmanaged Devices: Extending Visibility, Trust, & Control Beyond Corporate PerimetersEliminating Identity-Based Attacks: A Device-Bound Approach to Making Account Takeovers ImpossibleFrom Breached to Bound: A CISO's Guide to Identity Defense in a Credential-Driven Threat WorldWinning the AI Arms Race: Defeating the AI-Powered Phishing EpidemicIntroducing The First True Threat-Led Defense PlatformExplore More White PapersDiscover MoreBlack HatOmdiaWorking With UsAbout UsAdvertiseReprintsJoin UsNewsletter Sign-UpFollow UsCopyright © 2025 TechTarget, Inc. d/b/a Informa TechTarget. This website is owned and operated by Informa TechTarget, part of a global network that informs, influences and connects the world’s technology buyers and sellers. All copyright resides with them. Informa PLC’s registered office is 5 Howick Place, London SW1P 1WG. Registered in England and Wales. TechTarget, Inc.’s registered office is 275 Grove St. Newton, MA 02466.Home|Cookie Policy|Privacy|Terms of Use

The article examines the fragmented security responsibilities that arise as enterprises deploy agentic artificial intelligence (AI) tools, particularly when those agents integrate with platforms such as Microsoft, Salesforce, and other SaaS providers. Drawing parallels to the traditional shared‑responsibility model of cloud security, the author argues that the introduction of autonomous agents complicates the question of who protects data, who detects threats, and who mitigates vulnerabilities.

Early adoption of agentic AI has revealed a spectrum of security weaknesses. A prominent case study references Noma’s discovery of “ForcedLeak,” a critical vulnerability chain in Salesforce’s Agentforce that allowed malicious actors to exfiltrate CRM data via indirect prompt injection. Although Salesforce issued patches and tightened access controls, the incident underscores the potential for agents to leak sensitive data through inappropriate access controls, ingested secrets, or prompt‑based exploits. Notably, this is not an isolated event; other vendors have faced similar exploits, such as Aim Security’s “EchoLeak” on Copilot, further evidencing a trend where the rush to enhance AI capabilities outpaces security scrutiny.

The article delves into the layered nature of responsibility. On one side, vendors, especially hyperscalers, maintain that they secure the infrastructure and provide the AI engine; on the other, customers must guard the data that these agents can access. Expert voices illustrate this duality: Itay Ravia encourages organizations to implement rigorous guardrails, Varonis’s Brian Vecci emphasizes that data resides in enterprise repositories rather than the agent itself, and Melissa Ruzzi equates AI security to the principles of SaaS, stressing data flow, access, and authentication control. Collectively, these perspectives point to a shared‑responsibility model that is often poorly understood and mismanaged in practice.

The discussion further highlights the inadequacy of conventional security controls applied to AI agents. Security awareness training, commonly effective against phishing, may be insufficient for AI agents because users can unknowingly grant excessive permissions or feed the agent sensitive data. The author suggests that vendors could mitigate this risk through mandatory multifactor authentication and secrets scanning, but notes that such measures may be insufficient to prevent sophisticated bypass techniques. Moreover, tools like data‑loss prevention and secrets scanners can foster a false sense of security, masking deeper architectural issues that cannot be rectified within the agentic layer alone.

To navigate these challenges, the article recommends a proactive, security‑first approach before adopting agentic AI solutions. Organizations should identify what data the agent can access, establish stringent guardrails and access controls, and continually assess the evolving attack surface introduced by AI. The piece concludes by urging a careful weighing of benefits against risks, reinforcing that successful AI integration hinges on disciplined data security practices and a clear delineation of responsibilities between vendor and user.