AI-Developed Code: 5 Critical Security Checkpoints for Human Oversight
To write secure code with LLMs, developers must have the skills to use AI as a collaborative assistant rather than an autonomous tool, Madou argues.

Matias Madou, Co-Founder & CTO, Secure Code Warrior
November 3, 2025 | 4 Min Read | Opinion

There are several best-practice recommendations to help organizations mitigate the risks inherent in AI-generated code, and most highlight the importance of human-AI collaboration, with human developers taking a hand regularly (and literally) in the process. However, those recommendations also hinge on developers having a medium-to-high level of security proficiency, an area where many developers fall short. It's up to organizations to ensure developers have current, verified security skills so they can work effectively with AI assistants and agents.

Vulnerabilities Increase as LLM Iterations Grow

LLMs have been a boon for developers since OpenAI's ChatGPT was publicly released in November 2022, with other AI models fast on its heels. Developers were quick to adopt the tools, which significantly increased productivity for overtaxed development teams. But that productivity boost came with security concerns, such as AI models trained on flawed code from internal or publicly available repositories.
Those models introduced vulnerabilities that sometimes spread throughout the entire software ecosystem. One way to address the problem was to use LLMs to iteratively improve code-level security during the development process, on the assumption that LLMs, given the task of correcting mistakes, would do so quickly and effectively. However, several studies (and extensive real-world experience, including our own data) have demonstrated that an LLM can introduce vulnerabilities into the code it generates during this process.

There is no shortcut. Developers must maintain control of the development process, viewing AI as a collaborative assistant rather than an autonomous tool. Tool designers need to build in security features that detect potential vulnerabilities and raise alerts when they are identified. And chief information security officers (CISOs), together with other security leaders in the business, can give the development cohort a solid foundation for success with these five steps (illustrative sketches for Checkpoints 2 through 5 follow the list):

Checkpoint 1: Code review by security-proficient developers is non-negotiable. This step leverages human expertise as the first line of defense, providing a level of quality control that can't be automated. To make it possible, security leaders must place developer upskilling at the heart of their security programs. Adaptive learning, verification of security skills, traceability of LLM tool usage, and data-backed risk metrics should all form part of the modern, AI-augmented security program.

Checkpoint 2: Apply secure rulesets. AI coding assistants might be powerful, but they need guidance. A contextual rule file steers them toward safe, standardized output, reducing the risk of non-compliant configuration or insecure coding patterns (sketched below).

Checkpoint 3: Review each iteration. Using both human experts and automated tools, organizations should check security at each step (sketched below). While security-focused prompts generally produce more secure code than prompts that don't explicitly request it, the output still often contains vulnerabilities.

Checkpoint 4: Apply AI governance best practices. Automate policy enforcement to ensure AI-enabled developers meet secure coding standards before their contributions are accepted into critical repos (sketched below).

Checkpoint 5: Monitor code complexity. The likelihood of new vulnerabilities increases with code complexity, so human reviewers need to be especially alert when complexity rises (sketched below).
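To make Checkpoint 2 concrete, here is a minimal Python sketch of the ruleset idea: a reviewed set of secure-coding rules is prepended to every prompt sent to the assistant. The rule text and the with_ruleset helper are illustrative assumptions, not any vendor's API; in practice the same content would live in whatever native rules mechanism (rule files, instruction files) your assistant supports.

```python
# A few example rules; a real ruleset would be written and reviewed by the
# AppSec team and versioned alongside the codebase.
SECURE_RULES = """\
- Use parameterized queries; never build SQL by string concatenation.
- Never hardcode credentials, tokens, or keys; load them from a secret store.
- Validate and encode all user-supplied input at trust boundaries.
- Pin dependency versions and prefer well-maintained libraries.
"""

def with_ruleset(user_prompt: str) -> str:
    """Prefix every prompt sent to the coding assistant with the ruleset,
    steering generations toward compliant, standardized output."""
    return (
        "Follow these security rules in all generated code:\n"
        f"{SECURE_RULES}\n"
        f"Task: {user_prompt}"
    )

# Example: every call site wraps its prompt before sending it to the model.
print(with_ruleset("Write a login endpoint in Flask."))
```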
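For Checkpoint 3, per-iteration review can be partially automated by scanning every generated revision before it is accepted. The following is a minimal sketch, assuming Python output and the open source Bandit scanner installed on the PATH (pip install bandit); the scan_iteration helper and the directory layout are hypothetical, and a clean scan does not replace human review.

```python
import json
import subprocess
from pathlib import Path

def scan_iteration(code: str, iteration: int,
                   workdir: Path = Path("llm_iterations")) -> list[dict]:
    """Write one LLM-generated revision to disk and run Bandit against it.
    Returns Bandit's findings; an empty list means the scan was clean,
    which still does not guarantee the code is secure."""
    workdir.mkdir(exist_ok=True)
    target = workdir / f"iteration_{iteration}.py"
    target.write_text(code)

    # -q suppresses informational chatter; -f json emits machine-readable output.
    proc = subprocess.run(
        ["bandit", "-q", "-f", "json", str(target)],
        capture_output=True, text=True,
    )
    return json.loads(proc.stdout).get("results", [])

# Example: surface any findings before accepting the iteration.
for finding in scan_iteration("import pickle\ndata = pickle.loads(blob)", 1):
    print(finding["test_id"], finding["issue_severity"], finding["issue_text"])
```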
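For Checkpoint 4, automated policy enforcement can run as a merge gate in continuous integration. This is a sketch under stated assumptions: the critical-path prefixes, the scan_results.json findings file, and its severity field are hypothetical stand-ins for whatever scanner and policy store an organization actually uses.

```python
import json
import subprocess
import sys

# Hypothetical policy: repository paths considered critical, and the number
# of high-severity findings tolerated before a merge is blocked.
CRITICAL_PREFIXES = ("services/payments/", "libs/auth/")
MAX_HIGH_FINDINGS = 0

def changed_files(base: str = "origin/main") -> list[str]:
    """List files changed on this branch relative to the merge target."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()

def enforce_policy(findings_path: str = "scan_results.json") -> int:
    """Exit nonzero when a change touches a critical path and the scanner
    (whichever tool produced scan_results.json) reported high findings."""
    if not any(f.startswith(CRITICAL_PREFIXES) for f in changed_files()):
        return 0  # the policy only gates the critical paths
    with open(findings_path) as fh:
        findings = json.load(fh)
    high = [f for f in findings if f.get("severity") == "HIGH"]
    if len(high) > MAX_HIGH_FINDINGS:
        print(f"Blocking merge: {len(high)} high-severity finding(s) in a critical path")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(enforce_policy())
```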
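For Checkpoint 5, complexity can be tracked mechanically so reviewers know where to look. This self-contained sketch approximates McCabe cyclomatic complexity with Python's standard ast module and flags functions above a threshold; the threshold of 10 is a common rule of thumb, not a standard.

```python
import ast
import sys

def cyclomatic_complexity(func: ast.AST) -> int:
    """Approximate McCabe complexity: 1 plus one per decision point."""
    score = 1
    for node in ast.walk(func):
        if isinstance(node, (ast.If, ast.For, ast.AsyncFor, ast.While,
                             ast.ExceptHandler, ast.IfExp)):
            score += 1
        elif isinstance(node, ast.BoolOp):
            score += len(node.values) - 1  # each extra and/or operand branches
        elif isinstance(node, ast.comprehension):
            score += 1 + len(node.ifs)     # the loop plus each filter clause
    return score

def flag_complex_functions(source: str, threshold: int = 10):
    """Yield (name, line, score) for functions whose score exceeds the threshold."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            score = cyclomatic_complexity(node)
            if score > threshold:
                yield node.name, node.lineno, score

if __name__ == "__main__":
    path = sys.argv[1]
    with open(path) as fh:
        src = fh.read()
    for name, line, score in flag_complex_functions(src):
        print(f"{path}:{line}: {name}() has complexity {score}; review closely")
```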
The common thread in these recommendations is the need for human expertise, which is far from guaranteed. Software engineers typically receive very little security upskilling, if any at all, and have traditionally concentrated on quickly shipping applications, upgrades, and services while leaving security teams to chase down flaws later. As AI tools accelerate DevOps, organizations must equip developers with the skills to keep code secure throughout the software development life cycle (SDLC). To achieve this, they need to implement ongoing adaptive learning programs that build the necessary skills.

Developers Must Have the Skills to Keep AI in Check

Forward-thinking organizations are working with developers to apply a security-first mindset to the SDLC, in line with the goals of the US Cybersecurity and Infrastructure Security Agency's (CISA's) Secure-by-Design initiative. This includes a continuous program of agile, hands-on upskilling in sessions designed to meet developers' needs. For example, training is tailored to the work they do in the programming languages they use, and it is available on a schedule that fits their busy workdays.

Better still, the security proficiency of humans and their AI coding assistants should be benchmarked, with security leaders able to access data-driven insights on both developer security proficiency and the security accuracy of any commits made with the assistance of AI tooling and agents. Wouldn't it be beneficial to know who used which tool, both to better manage code review and to spot when a particular LLM is failing at specific tasks or vulnerability classes? One lightweight possibility is sketched below.
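As a sketch of monitoring who used what: one hypothetical convention is to have a commit-msg hook or IDE plugin add an "AI-Assisted-By:" trailer to commits made with an assistant, then aggregate those trailers per tool. The trailer name is our invention, not a git standard, and the sketch assumes it is applied consistently.

```python
import re
import subprocess
from collections import Counter

# Hypothetical convention: commits made with an AI assistant carry a trailer
# such as "AI-Assisted-By: <tool name>" in the commit message. Nothing in git
# enforces this; a commit-msg hook or IDE plugin would need to add it.
TRAILER = re.compile(r"^AI-Assisted-By:\s*(.+)$", re.MULTILINE)

def ai_commit_counts(rev_range: str = "origin/main..HEAD") -> Counter:
    """Count commits per AI tool over a revision range, so reviewers can
    weight their attention and correlate findings with specific assistants."""
    # %B is the raw commit body; %x1e appends a record separator between commits.
    log = subprocess.run(
        ["git", "log", "--format=%B%x1e", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = Counter()
    for message in log.split("\x1e"):
        for tool in TRAILER.findall(message):
            counts[tool.strip()] += 1
    return counts

print(ai_commit_counts())
```

Counts like these could then be joined with scan findings to see whether a particular assistant correlates with particular vulnerability classes.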
An effective upskilling program not only helps ensure that developers can create secure code, but also that they are equipped to review AI-generated code, identifying and correcting flaws as they appear. Even in this new era of AI-generated coding, skilled human supervision remains essential, and CISOs must prioritize equipping their critical human workforce with those skills.

About the Author

Matias Madou, Co-Founder & CTO, Secure Code Warrior

Matias Madou is a researcher and developer with more than 15 years of hands-on software security experience. He has developed solutions for companies such as Fortify Software and his own company, Sensei Security. Over his career, Matias has led multiple application security research projects that became commercial products, and he holds more than 10 patents. Away from his desk, he has served as an instructor for advanced application security courses and regularly speaks at global conferences, including RSA Conference, Black Hat, DEF CON, BSIMM, OWASP AppSec, and BruCon. Matias holds a Ph.D. in computer engineering from Ghent University, where he studied application security through program obfuscation to hide the inner workings of an application.