LmCast :: Stay tuned in

The AI Trust Paradox: Why Security Teams Fear Automated Remediation

Recorded: Oct. 30, 2025, 11:03 p.m.

Original

Security teams invest in AI for automated remediation but hesitate to trust it fully due to fears of unintended consequences and lack of transparency.

Tyler Shields, Principal Analyst, Omdia
October 28, 2025 | 7 Min Read

COMMENTARY

With the volume of threats and the complexity of the modern digital attack surface, it's no surprise that cybersecurity teams are overwhelmed. Risk has outstripped the human capacity required to remediate. As attackers embrace automation via AI, the quantity of vulnerabilities has skyrocketed, and the number of unique tools required to detect and eradicate threats and exposures in the enterprise has become untenable.

The mean time to discover and remediate vulnerabilities and exposures is going the wrong way, and enterprises today find themselves buried in security debt that just keeps compounding over time. This graphic from CVE.ICU sums it up nicely — we are being buried in risk, and the only way out must be AI-driven automation.

[Figure: CVE growth. Source: Jerry Gamblin, CVE.ICU]

The only way we can scale ourselves out of this problem is by using AI to automate the human bottleneck that exists within the risk reduction process.
The venture capital market is backing cybersecurity-related AI companies with massive sums of money. According to research from Mike Privette, founder of the Return on Security newsletter, AI-focused cybersecurity investment doubled from 2023 ($181.5 million) to 2024 ($369.9 million). This is likely an underestimate, given the tight definition of "AI security" in his research, but it is directionally accurate and drives home the point that AI is where investors think we can see the broadest impact on cybersecurity efficacy.

But here lies the problem: Research conducted by Omdia on automated remediation in threat and exposure management reveals a critical paradox. While we are creating the tools to support AI-driven remediation of vulnerabilities, we're still unwilling to give them the freedom to execute well. We're buying a race car but insisting on leaving the speed limiter attached to the engine. The problem is a fundamental lack of trust in automated remediation.

Why We Should Bet on AI Cybersecurity

AI brings so much to the table that human analysts alone can't currently match. AI can leverage data points that non-AI systems can't, at least not at the same volume and speed. AI systems built upon a broad set of asset, exposure, threat, and risk data can find sophisticated behavioral patterns of risk that would be difficult, if not impossible, to surface with human analysis alone.

With AI, we can finally scale our analysis capabilities to find contextual relationships between what we are protecting and the state and actions that impact it. The result is more accurate risk scoring and prioritization than traditional methods deliver (see the sketch below), with outcomes such as real-time exposure detection, accurate risk prioritization, and, most critically, automated remediation. We have a gold mine of potential in front of us if we would just start to trust the system to execute.

Unpacking the Crisis of Trust

However, not everything is roses and sunshine in the race to adopt AI-based cybersecurity platforms. Security and infrastructure leaders currently have an adverse reaction when it comes to putting their trust in AI recommendations and remediation capabilities. This fear of AI is not irrational. Practitioners are afraid of the "black box," the unexplainable, and the "magic" of AI results. Technologies that don't attach transparency and explainability to their AI results are a non-starter for the cynical, seasoned cybersecurity professional.

There is a very real fear of unintended consequences. The ultimate roadblock for automated remediation is the question: "What if an AI 'fix' takes down a production application?" Today, enterprise cybersecurity leaders are adopting AI cybersecurity technologies, but they aren't unleashing them into the wild. They are deploying them in specific locations and systems, focusing them on low-risk patching of limited consequence, and applying limits to what agentic AI can and can't do automatically.

[Figure: The Evolution of Risk Reduction: Contextual Analysis and Automated Remediation in Threat and Exposure Management. Source: Tyler Shields, Omdia]

I honestly don't blame them. We're in the infancy of these capabilities, and the last thing you want is a rogue agent causing havoc in your environment that wouldn't have been there if you had just used a human instead. We have a clear crisis of trust when it comes to the execution of agentic systems.
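As a concrete illustration of the contextual risk scoring described above, here is a minimal Python sketch that blends base severity with asset and threat context. The signal names, weights, and formula are assumptions invented for this article, not Omdia's model or any vendor's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss_base: float          # 0.0-10.0 base severity
    asset_criticality: float  # 0.0-1.0 business importance of the asset
    internet_exposed: bool    # is the asset reachable from the internet?
    exploit_observed: bool    # threat intel: exploitation seen in the wild?

def contextual_risk_score(f: Finding) -> float:
    """Blend base severity with asset and threat context.

    Illustrative formula only: real platforms derive weights from far
    richer asset, exposure, and threat data than this toy example.
    """
    score = f.cvss_base / 10.0                # normalize severity to 0-1
    score *= 0.5 + 0.5 * f.asset_criticality  # crown-jewel assets rank higher
    if f.internet_exposed:
        score *= 1.5                          # reachable attack surface
    if f.exploit_observed:
        score *= 2.0                          # active exploitation dominates
    return min(score, 1.0)                    # clamp to 0-1

# Prioritize the remediation queue by contextual risk, not raw CVSS.
findings = [
    Finding("CVE-2025-0001", 9.8, 0.2, False, False),  # critical CVSS, low-value host
    Finding("CVE-2025-0002", 7.5, 0.9, True, True),    # lower CVSS, exploited crown jewel
]
for f in sorted(findings, key=contextual_risk_score, reverse=True):
    print(f.cve_id, round(contextual_risk_score(f), 2))
```

In this toy example, a moderate-severity flaw on an exploited, internet-facing crown jewel outranks a critical CVE on a low-value host, which is exactly the kind of context-aware prioritization that is hard to apply manually at scale.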
Moving from Human-in-the-Loop to Human Orchestration of AI

The lack of trust in agentic AI remediation reminds me of the original launch of the Windows auto-update feature in the year 2000. The immediate response from nearly every IT and security team was, "No way we auto-remediate — it's going to break things!" And at first, it did. But over time it improved, caused fewer issues, and eventually became a highly effective way to ensure that your systems were kept up to date and secure. Adoption happened over time, as trust was gained and patching results proved consistently stable. In essence, trust was earned.

To achieve a similar path to trusted adoption in the world of agentic AI cybersecurity remediation, organizations must crawl, walk, and then run.

Phase 1 (Crawl): Mandate Explainability. This is where most companies are today with AI cybersecurity adoption. Start by using AI only for detection, prioritization, and recommendations. Ignore automated remediation capabilities in favor of building trust over time. Ask your security technology vendors for total transparency around the decisions the AI system makes, and dig into the explainability of its recommendations. Dive deep into the output and verify its accuracy.

Phase 2 (Walk): Supervised Automation. Implement a "human approval" workflow for remediation. Focus on critical actions that solve real problems, and attach human oversight to the process to ensure that the correct steps are taken and to reduce the execution risk of the AI agents. This creates a human bottleneck that you will want to reduce over time as you build trust in the AI systems. Automate low-risk fixes first and build your way up to higher-risk remediations. Start with foundational patching and configuration changes before even considering code-level or identity modifications.

Phase 3 (Run): Policy-Driven Autonomy. This is the end state of the human-in-the-loop journey. Over time, we transition to Phase 3, where humans are no longer responsible for approving every action but instead set the policies and guardrails within the AI system. Agentic AI operators reference and follow those guardrails, resulting in operations that are well formed and secure. (A minimal sketch of such a policy gate appears below.)

At this stage, the role of the SOC analyst changes completely. SOC analysts will no longer be directly responsible for the day-to-day tactical operations of execution. Instead, they will own the orchestration of an army of AI agents that execute with autonomy, driving us closer to our longer-term goal of a self-healing system. SOC analysts will focus on the more complex edge cases that the agents can't quite grasp and will become experts in AI training and tuning to solve these problems.

Your Real ROI Is Unleashing Your People

The biggest barrier to leveraging AI in cybersecurity isn't the technology itself; it's our ability to trust AI with the execution of tasks. Overcoming this fear requires a deliberate, phased approach focused on building confidence in the new technologies we've built. The true ROI of agentic AI deployments in cybersecurity programs won't be measured in the quantity of headcount saved, but in the level of elevation we achieve with the headcount we currently have.

It's about freeing your most valuable resources and security experts from the day-to-day noise so that they can focus on the novel, complex threats that machines can't yet handle.
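To make the Phase 3 end state more concrete, here is a minimal sketch of a policy gate that maps remediation classes to autonomy levels, with one guardrail that demotes auto-execution on production systems to human approval. The remediation classes, policy table, and rules are hypothetical examples, not a description of any real product's behavior.

```python
from enum import Enum

class Autonomy(Enum):
    RECOMMEND_ONLY = "crawl"   # Phase 1: surface the fix, a human executes
    REQUIRE_APPROVAL = "walk"  # Phase 2: agent stages the fix, a human approves
    AUTO_EXECUTE = "run"       # Phase 3: agent executes within guardrails

# Hypothetical guardrail policy: remediation classes mapped to autonomy levels.
POLICY = {
    "os_patch": Autonomy.AUTO_EXECUTE,         # low-risk, well-understood fixes
    "config_change": Autonomy.REQUIRE_APPROVAL,
    "code_change": Autonomy.RECOMMEND_ONLY,    # highest blast radius, human-led
    "identity_change": Autonomy.RECOMMEND_ONLY,
}

def gate_remediation(fix_class: str, target_is_production: bool) -> Autonomy:
    """Decide how much autonomy the agent gets for a proposed fix.

    The guardrails encode the policy-driven end state: humans set the
    rules once instead of approving every individual action.
    """
    autonomy = POLICY.get(fix_class, Autonomy.RECOMMEND_ONLY)  # default to safest
    # Extra guardrail: never auto-execute against production without approval.
    if target_is_production and autonomy is Autonomy.AUTO_EXECUTE:
        return Autonomy.REQUIRE_APPROVAL
    return autonomy

print(gate_remediation("os_patch", target_is_production=False))  # Autonomy.AUTO_EXECUTE
print(gate_remediation("os_patch", target_is_production=True))   # Autonomy.REQUIRE_APPROVAL
```

Under this model, crawl, walk, and run become policy tiers: humans orchestrate the agents by tightening or loosening the table rather than approving each individual action.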
As we approach a world where AI agents take over the daily operations of our security teams, I want you to ask yourself one question: "What is the single automated remediation action I am most afraid to let an AI platform in my environment handle today, and why?" From there, plan a path to grow your trust in the agents so that they can eventually help solve that scary and difficult problem.

Over time, you'll burn down these fears, resulting in a highly efficient, AI-driven cybersecurity program that scales well beyond anything you've ever seen before. The result will be a real decrease in risk and a burn-down of that security debt chart.

About the Author

Tyler Shields, Principal Analyst, Omdia

Tyler Shields is a veteran market analyst with more than 25 years of experience in cybersecurity technologies and markets. He advises cybersecurity vendors on product strategy, market opportunities, and customer alignment, leveraging his expertise in vulnerability management, risk analysis, and offensive security. Previously, he was VP of Marketing at Traceable.AI, CMO at JupiterOne and Signal Sciences, and VP of Strategy at Sonatype. A thought leader in cybersecurity and innovation, Tyler holds a master's in computer science from James Madison University and an MBA from UNC Kenan-Flagler, where he also teaches as an adjunct professor.

Summarized

The increasing complexity of the modern cyber threat landscape is creating a significant challenge for security teams, leading to a paradox: while embracing artificial intelligence (AI) for automated remediation, teams simultaneously exhibit a reluctance to fully trust these systems. This hesitancy, as highlighted by Omdia Principal Analyst Tyler Shields, stems from a core fear of unintended consequences and a lack of transparency, the "black box" problem in AI decision-making. The volume of vulnerabilities, amplified by attackers' use of AI-driven automation, has created a massive backlog of risk, driving up "security debt" and underscoring the need for scalable solutions. The market is witnessing a major influx of investment in cybersecurity AI, with AI-focused funding doubling from $181.5 million in 2023 to $369.9 million in 2024, but that investment is predicated on the ability to actually solve the critical issue of remediation at scale.

To successfully integrate AI into security operations, a phased approach is crucial, transitioning from a human-in-the-loop model to a more autonomous system. Shields outlines three distinct phases: "Crawl," establishing trust through explainable AI used only for detection, prioritization, and recommendations; "Walk," implementing a supervised automation workflow with human approval for critical remediation; and "Run," achieving policy-driven autonomy, where AI agents operate within human-set guardrails with minimal direct intervention. This progression is designed to mitigate the initial fear of unchecked automation and to build confidence over time. The key is recognizing that the true ROI of AI in cybersecurity isn't solely measured by headcount reduction, but by the elevation of skilled security professionals, allowing them to focus on complex, novel threats rather than routine operational tasks.

Ultimately, overcoming the trust hurdle requires a fundamental shift in mindset: security teams must move beyond simply purchasing AI-powered remediation tools and instead embrace a strategic, iterative approach. As Shields points out, the most significant barrier isn't the technology itself, but our confidence in its execution. By carefully managing the transition to automated remediation, organizations can begin to address the massive backlog of risk, scale their security operations, and move towards a future where AI agents proactively manage the complexities of the modern cyber threat landscape.