AI-Generated Code Poses Security, Bloat Challenges
Recorded: Oct. 29, 2025, 3:40 p.m.
Original
AI-Generated Code Poses Security, Bloat Challenges

Development teams that fail to create processes around AI-generated code face more technical and security debt as vulnerabilities get replicated.

By Robert Lemos, Contributing Writer | October 29, 2025 | 6 Min Read

Developers using large language models (LLMs) to generate code perceive significant benefits, yet the reality is often less rosy.

Programmers who adopted AI for code generation estimate, for example, that their individual effectiveness improved by 17%, according to the "State of AI-assisted Software Development" report published by Google's DevOps Research and Assessment (DORA) team in late September. Yet the same report finds that software delivery instability climbed by nearly 10%. Overall, 60% of developers work on teams that suffer from lower development speeds, greater software delivery instability, or both.
The problem? AI tends to amplify flaws in the codebases it uses for training, and because it produces a greater volume of code, developers do not have time to scrutinize the output the way they would if they were writing it themselves, says Matt Makai, vice president of developer relations at cloud platform DigitalOcean.

"If you have technical debt or security vulnerabilities, how you use the tools has a big impact on whether they're going to replicate those same problems elsewhere," he says. "They absolutely are verbose on their first shot, in part, because they're trying to solve the stated problem. The thing that's been missing from a lot of the practices today is ... what's your checklist after you've solved the problem?"

Using AI to generate code has already become nearly ubiquitous among developers: depending on the study, 84% to 97% of developers use AI to generate code, and the Google DORA report found that 90% of developers use AI in their work. Yet generating code with AI without adequate scrutiny and testing can easily lead to bloated codebases and software with significant vulnerabilities. These two outcomes are examples of technical debt and security debt, respectively, because they represent extra work that must eventually be done.

More Code Added, More Checks Needed

For coders who rely heavily on LLMs to produce significant portions of their codebases without firm oversight, the quality and security implications have become all too apparent: more code, more vulnerabilities, and more security debt.

In 2025, the average developer checked in 75% more code than they did in 2022, according to an Oct. 1 analysis of GitHub data conducted by software engineering platform vendor GitClear. The same analysis concludes that while a "10% productivity gain look[s] real ... so are the costs," and that the increase in output "applies as much or more to the metrics that quantify 'how much code will the team need to maintain?' as it does 'how much output will each developer gain?'"

Chart: While the syntax of AI-generated code has improved greatly, security vulnerabilities continue to be a problem. (Source: Veracode)

For the most part, AI-generated code increasingly passes both syntactic and functional inspections. However, research conducted by application security firm Veracode found that 45% of the code generated by AI models had known security flaws.

Two years ago, Chris Wysopal, chief security evangelist for Veracode, predicted that the 45% vulnerability rate would improve. It hasn't, he says.

"It's been completely flat," he says. "So that study is still applicable today — the developers using AI-assisted coding are creating slightly worse code than the ones that are not."

Code Slop Cometh

Social media has been overrun by AI-generated content, dubbed "AI slop." Workers are increasingly seeing "work slop" — AI-generated work delivered by co-workers and managers that passes as a reasonable deliverable but fails to advance a given task. Similarly, poor development practices can result in "code slop" — code that may compile and produce output but is verbose, brittle, and flawed.

One reason for these issues: LLMs are not able to keep the context of large codebases in memory. As a result, developers are seeing massive duplication of code, such as importing an entirely new package — for logging, for example — even if another package is already being used to accomplish the task, Wysopal says.

"That's one of the worst engineering things you could do is start to duplicate all of that code," he says. "Now I have to keep two packages updated. Now I have to fix things in two places. And so the [code] volume problem is there, but I think it just manifests itself a little bit differently."
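To make the pattern concrete, here is a minimal, hypothetical Python sketch of the kind of duplication Wysopal describes: the project already has one shared, configured logger, but a generated addition wires up its own parallel logging stack for the same task. The module, function, and logger names are invented for illustration, and only the standard library is used.

```python
import logging

# --- Existing project convention: one shared, configured logger -----------
shared_handler = logging.StreamHandler()
shared_handler.setFormatter(
    logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
)
app_log = logging.getLogger("billing")
app_log.addHandler(shared_handler)
app_log.setLevel(logging.INFO)


def record_payment(amount: float) -> None:
    """Existing code path: reuses the shared logger and its format."""
    app_log.info("payment recorded: %.2f", amount)


# --- Generated addition: builds a second logging stack from scratch -------
def record_refund(amount: float) -> None:
    """Generated code path: same task, but a parallel logger to maintain."""
    dup_log = logging.getLogger("refunds")            # new, unrelated logger
    dup_log.propagate = False                         # detached from shared config
    handler = logging.StreamHandler()                 # second handler to keep updated
    handler.setFormatter(logging.Formatter("[refund] %(message)s"))
    dup_log.addHandler(handler)                       # now two formats, two configs
    dup_log.setLevel(logging.INFO)
    dup_log.info("refund recorded: %.2f", amount)


if __name__ == "__main__":
    record_payment(120.00)   # one consistent log line
    record_refund(30.00)     # divergent format; fixes now happen in two places
```

Both functions work, which is why this kind of output tends to pass review; the cost shows up later, when every logging change has to be made in two places.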
Without processes in place to reduce the voluminous code produced by AI systems and to scan code before commits, developers will find themselves dedicating their time to rework, says Sarit Tager, vice president of product management at Palo Alto Networks.

"AI has enabled developers to move faster than ever, but security hasn't been able to keep pace," she says. "The 'shift-left' movement — intended to bring security earlier into development — has mostly concentrated on detection, not prevention. Many teams hesitate to enforce guardrails and prevention rules for fear of slowing down innovation."

A Problem, but Also a Solution

The first step is to get developers to commit to understanding the code they are submitting. But the problem is unlikely to get better soon if developers shift from creating code to merely curating it, says Palo Alto Networks' Tager.

"When developers prompt an AI model, they're accepting or rejecting output rather than writing logic themselves — reducing their understanding of how the code actually works," she says. "Over time, this erodes code ownership and makes it harder to identify or fix security flaws."

Chart: Software development teams that do not handle AI-generated code properly will face slower throughput and possibly more instability. (Source: Google DORA)

Yet the problems AI creates can likely be solved with AI as well, provided development teams have the right processes in place. The Google DORA study found that two of the seven types of development teams it identified — what the report dubs "Pragmatic Performers" and "Harmonious High-Achievers" — deliver on the promise of AI: both higher software delivery throughput and lower software delivery instability.

"[The two groups'] existence provides an empirical anchor for what is possible — a benchmark that organizations can strive for," the Google report states. "While achieving this state is clearly difficult, these groups serve as a powerful testament to the fact that high-velocity, high-quality software delivery is not a theoretical ideal but an observable reality."

DigitalOcean's Makai calls this shift in dealing with AI-generated code a move from "vibe coding" to "vibe engineering."

"Make sure that you are asking these tools not just to spit out some code to create a feature, but, hey, what are the potential security vulnerabilities of this feature? How do I rewrite it? How do I make this code more efficient?" he says. "The tools are all capable of that, but if you don't prompt the tool for the security review or to make your code more efficient or optimize that database query, it's not going to do that for you."
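Tager's point about scanning code before commits can be turned into a lightweight guardrail. The sketch below is not from the article; it is a hypothetical pre-commit hook that assumes a Python codebase, git, and the open source Bandit scanner (which exits nonzero when it reports findings), and it blocks a commit when staged Python files trip the scan. Any comparable static analysis tool could be substituted.

```python
#!/usr/bin/env python3
"""Minimal pre-commit sketch: scan staged Python files before they land.

Assumes git is on PATH and Bandit is installed (pip install bandit).
Save as .git/hooks/pre-commit and mark it executable, or run it as a CI step.
"""
import subprocess
import sys


def staged_python_files() -> list[str]:
    """Return added/changed .py files staged for the current commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]


def main() -> int:
    files = staged_python_files()
    if not files:
        return 0  # nothing to scan, allow the commit

    # Scan exactly the files being committed; Bandit exits nonzero on findings.
    result = subprocess.run(["bandit", "-q", *files])
    if result.returncode != 0:
        print("Commit blocked: review the findings above or fix the flagged code.")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

A check like this is one way to institutionalize the "checklist after you've solved the problem" that Makai describes: the generated code still arrives quickly, but it cannot be committed without at least a baseline security review.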
About the Author

Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.
Summarized

The article examines the growing reliance on AI-generated code among developers and the associated challenges in maintaining security, efficiency, and code quality. While large language models (LLMs) offer significant productivity gains — such as a reported 17% increase in individual developer effectiveness, according to Google's DevOps Research and Assessment (DORA) team — AI code generation introduces risks that threaten software stability, security, and long-term maintainability. The core issue is the tension between the speed and scale of AI output and the limits of human oversight, which often fails to keep pace with the volume and complexity of generated code. Developers who integrate AI into their workflows without structured processes face heightened technical debt, security vulnerabilities, and inefficiencies that undermine the very benefits AI is meant to provide.

The DORA report highlights a paradox: while 90% of developers use AI in their work, 60% work on teams that experience slower development speeds, greater software delivery instability, or both. This duality reflects AI's two-sided impact — its capacity to accelerate coding tasks while simultaneously amplifying existing flaws in codebases. Matt Makai, vice president of developer relations at DigitalOcean, emphasizes that AI does not inherently improve code quality; instead, it replicates and magnifies the imperfections present in its training data. If a codebase contains technical debt or security vulnerabilities, AI systems are likely to reproduce those issues, producing output that is functionally correct but riddled with hidden flaws. The problem is compounded by the sheer volume of generated code, which outpaces developers' ability to manually review and validate each line. Makai argues that the absence of a systematic checklist for evaluating AI-generated code — identifying security risks, optimizing performance, ensuring maintainability — leaves teams exposed to long-term consequences.

The security implications are particularly concerning. Research by Veracode found that 45% of AI-generated code contains known security vulnerabilities, a rate that has remained flat for roughly two years despite advances in LLM capabilities. Chris Wysopal, Veracode's chief security evangelist, notes that this stagnation suggests developers are not using AI to address security concerns proactively; the focus on speed often yields superficially functional code that lacks robustness. The trend is exacerbated by the fact that LLMs, despite producing syntactically correct code, struggle to maintain context across large codebases. As a result, developers frequently encounter redundant or duplicated code — such as importing multiple packages for the same task — which increases maintenance overhead and introduces new points of failure. The article describes this phenomenon as "code slop": AI-generated code that compiles and runs but is inefficient, brittle, and difficult to debug.

The article also underscores the cultural shift in development practices driven by AI adoption. Sarit Tager, vice president of product management at Palo Alto Networks, points out that AI has enabled developers to move faster than ever, but that this speed comes at the cost of security. The "shift-left" movement, which advocates integrating security earlier in the development lifecycle, has largely focused on detection rather than prevention. Many teams hesitate to implement strict guardrails or code review processes for fear of slowing innovation, creating a gap between the rapid output of AI tools and the slower, more deliberate practices required to ensure quality. Tager warns that this imbalance erodes code ownership and hinders developers' ability to identify and fix security flaws, because reliance on AI reduces their understanding of how the code functions.

Despite these challenges, the article identifies potential solutions in proactive, structured approaches to AI integration. The DORA report highlights two types of development teams — "Pragmatic Performers" and "Harmonious High-Achievers" — that effectively harness AI to achieve both high throughput and low instability. These teams prioritize processes that mitigate the risks of AI-generated code, such as rigorous testing, security audits, and optimization reviews. Makai describes this approach as a transition from "vibe coding" to "vibe engineering," in which developers explicitly prompt AI tools to address specific concerns such as security vulnerabilities, performance bottlenecks, or code efficiency. By treating AI as a collaborator rather than a replacement, teams can leverage its capabilities while maintaining control over the final output.

The article also touches on broader implications for the future of software development. As AI-generated code becomes increasingly prevalent, the need for standardized practices around its use will grow more urgent. Developers must balance the efficiency gains of AI with the responsibility of keeping their codebases secure, maintainable, and scalable. This requires a cultural shift toward accountability, in which teams invest in tools and processes that enable thorough scrutiny of AI output. Automated code analysis can catch vulnerabilities or redundancies that human reviewers might miss, while continuous integration pipelines can enforce quality checks before code is committed. These measures are only effective, however, if developers actively engage with them rather than treating AI as a black box that generates code without question.

The challenges outlined in the article are not unique to any single organization or industry; they reflect systemic issues in how AI is being adopted across the software development ecosystem. Reliance on LLMs to generate code has created a feedback loop in which increased output leads to greater complexity, which in turn demands more resources for maintenance. This dynamic is particularly evident in the rise of "code slop," code that passes initial review but fails to meet long-term standards. Social media and workplace discussions frequently highlight AI-generated work that appears functional but lacks substance, mirroring the broader code-quality problem. In development teams, this manifests as duplicated functionality, inefficient algorithms, or poorly structured logic that hampers future development.

Ultimately, the article concludes that while AI has the potential to transform software development, its risks cannot be ignored. The key to mitigating them is a disciplined approach that combines human oversight with AI's strengths. Developers must move beyond viewing AI as a tool for generating code and instead treat it as part of a broader workflow that includes security, efficiency, and maintainability. This requires not only technical solutions but also a shift in mindset, in which teams prioritize quality over speed and recognize that the long-term success of their projects depends on how well they manage the trade-offs AI introduces. As the article's author, Robert Lemos, notes, the future of software development will hinge on whether teams can balance innovation with responsibility, ensuring that AI's promise is realized without compromising the integrity of their codebases.