AI Security Agents Get Personas to Make Them More Appealing
Recorded: Nov. 7, 2025, 7:03 p.m.
New synthetic security staffers promise to bring artificial intelligence comfortably into the security operations center, but they will require governance to protect security.

Robert Lemos, Contributing Writer
November 7, 2025 | 5 Min Read
Source: ImageFlow via Shutterstock

A handful of cybersecurity firms are leaning heavily into wrapping artificial intelligence agents in synthetic personas, creating digital AI employees that interact with security teams and act as entry-level analysts, autonomously investigating and resolving issues. While this makes humans more comfortable with their agentic coworkers, it opens security issues that organizations will need to address.

Startup Cyn.Ai introduced Ethan, one of many synthetic intelligence (SI) workers the company plans to develop to take on specific tasks for customers, such as brand protection, vulnerability management, and asset discovery. The digital workers aim to replace other providers' security-as-a-service offerings and have personas that wrap the AI agents. The synthetic cybersecurity specialist even has a LinkedIn page.

The goal is to let companies pull in individual workers on an as-needed basis and gain a virtual security team that can assist and act as a guide, says Gil Levy, co-founder and CEO of Cyn.Ai. Digital employees promise to augment security teams, enabling them to detect threats faster, respond to breaches more quickly, and maintain a better view of the status of a company's digital assets.

"It's like your digital twin or your peer that you can talk to — you can ask questions, [and you are] not dealing with all these issues on your own," he says.
Levy acknowledges these agents are "a markedly new psychological interface model for users," so the LinkedIn personas are an experiment to make users comfortable.

Yet current security controls will need to adapt to dealing with autonomous non-human identities, giving humans the ability to understand what the agents are doing and to manage them, says Geoff Cairns, a principal analyst in the security and risk group at business intelligence firm Forrester Research.

"The biggest concern for organizations is how to move into that future without undermining trust and effectiveness in critical security operations," he says, adding: "Although these types of AI agents will introduce new risks of their own, they can make security more effective by enhancing continuous monitoring and threat detection, automating incident response, enforcing least privilege dynamically, assisting with human risk management, and improving regulatory compliance."

Digital Employees More Than Sum of Agents

AI digital employees are not single agents but a group of AI agents working together to — in the case of a digital security analyst — find issues, investigate an incident, prioritize actions, and resolve the incident. Where AI agents complete well-defined tasks in a narrow area, an AI digital employee brings a great deal of context to its decisions and actions, says Ben Ofer, co-founder and CEO of Twine Security, an AI-first security firm and finalist in Black Hat USA's startup competition.

"Where a traditional AI agent might complete a single task or assist within narrow boundaries — what I call horizontal skills — an AI digital employee is deeply vertical in its expertise and operates with context," he says.
"It doesn't just understand the issue, it knows the full color and background surrounding it, enabling context-based decisions rather than static, predefined workflows."

Twine's first AI digital employee is called Alex, a synthetic worker that focuses on identity and access management (IAM) tasks, coordinates response activities, and reports on results. Cyn.Ai kicked off its offering with a synthetic intelligence focused on brand protection, which monitors its clients' online assets, scans for impersonations, and looks for information stealers — simplifying detection and aiding in response. The AI security agent reduces false positives by 85% and executes takedowns in under a minute, according to Cyn.Ai's Levy.

"The nice thing here is that, first of all, the agent is learning, so that means if a certain pattern of problem follows the same playbook, the agent will offer to take the work off you and automate it, escalating on your behalf and following up on things," he says.

While the agents are capable of automating a lot of work, the goal is not to replace members of the security team but to augment their efforts, he says. "We're not going to remove the human factor altogether, but we're definitely going to reduce the dependency" on employees for everyday tasks, Levy says. "We will see more and more agents taking an active role in day-to-day cybersecurity operations."

A Less Privileged AI Agent

Such personas and agents bring risks, however. While agents allow for more human interaction, and personas can camouflage the automation with context and personality, human workers need to limit their trust in digital AI workers.

Companies need to make sure that they are managing security agents — and AI agents of all types — by implementing transparent audit trails and always keeping a human in the loop, says Forrester's Cairns.
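The controls Cairns describes (tightly scoped agent permissions, a human approver for every action, and a transparent audit trail) can be sketched in a few lines of code. The sketch below is a hypothetical illustration of the pattern, not any vendor's actual implementation; every name in it (AgentGrant, AuditEntry, AgentGateway, the "alex-iam" identifier) is invented for this example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone


@dataclass(frozen=True)
class AgentGrant:
    """A permission for one AI agent, bounded by action set, scope, and expiry."""
    agent_id: str
    allowed_actions: frozenset
    scope: str
    expires_at: datetime

    def permits(self, action: str, scope: str) -> bool:
        # Deny anything outside the granted actions, scope, or time window.
        return (action in self.allowed_actions
                and scope == self.scope
                and datetime.now(timezone.utc) < self.expires_at)


@dataclass
class AuditEntry:
    """Records what was done, why, and which human approved it."""
    agent_id: str
    action: str
    scope: str
    rationale: str
    approved_by: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


class AgentGateway:
    """Mediates every agent action: checks the grant, requires a human
    approver, and writes an audit entry before anything executes."""

    def __init__(self):
        self.audit_log: list = []

    def execute(self, grant: AgentGrant, action: str, scope: str,
                rationale: str, approver: str) -> str:
        if not grant.permits(action, scope):
            raise PermissionError(
                f"{grant.agent_id} has no grant for {action} on {scope}")
        if not approver:
            raise PermissionError("a human approver is required")
        self.audit_log.append(
            AuditEntry(grant.agent_id, action, scope, rationale, approver))
        return f"{action} executed on {scope}"


# Hypothetical usage: a one-hour grant letting an IAM agent disable
# accounts in a single scope, with a named human approving the action.
grant = AgentGrant(
    agent_id="alex-iam",
    allowed_actions=frozenset({"disable_account"}),
    scope="iam/contractors",
    expires_at=datetime.now(timezone.utc) + timedelta(hours=1),
)
gateway = AgentGateway()
result = gateway.execute(grant, "disable_account", "iam/contractors",
                         rationale="dormant account flagged for review",
                         approver="j.doe")
```

The point of the design is the distinction the article draws: the grant bounds decisions and actions, not just access, and nothing executes without a logged rationale and a named human approver, so the audit trail captures the "why" as well as the "what."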
Digital employees taking on security roles need to be managed as first-class objects within identity and access management (IAM) systems and governed by the principle of "least agency," he says.

"Least agency builds on the principle of least privilege," Cairns explains. "AI agents within agentic architectures must receive the minimum set of permissions, capabilities, tools, and decision-making to complete specific tasks bound by time and scope of approval. Least privilege focuses on access; least agency places boundaries on decisions and actions."

In creating the AI worker Alex, Twine focused on transparency and keeping a human in the loop, claiming that every action taken by the synthetic intelligence is transparent, traceable, and fully auditable, says Ofer.

"Managers can review not only what was done, but why it was done, with a complete record of the context, logic, and data sources behind each decision," he says. "This collaborative approach serves a dual purpose: It builds confidence by maintaining human control, while gradually demonstrating the AI's reliability."

The hope is that as trust is gained, companies will delegate more responsibilities to the AI agents and digital employees.

About the Author

Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R.
Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.