LmCast :: Stay tuned in

GitHub adds AI-powered bug detection to expand security coverage

Recorded: March 26, 2026, 3 a.m.


GitHub adds AI-powered bug detection to expand security coverage

By Bill Toulas

March 25, 2026
07:23 PM

GitHub is adopting AI-based scanning in its Code Security tool to expand vulnerability detection beyond CodeQL static analysis and cover more languages and frameworks.
The developer collaboration platform says that the move is meant to uncover security issues "in areas that are difficult to support with traditional static analysis alone."
CodeQL will continue to provide deep semantic analysis for supported languages, while AI detections will provide broader coverage for Shell/Bash, Dockerfiles, Terraform, PHP, and other ecosystems.
The new hybrid model is expected to enter public preview in early Q2 2026, possibly as soon as next month.
Finding bugs before they bite
GitHub Code Security is a set of application security tools integrated directly into GitHub repositories and workflows.
It is available for free (with limitations) for all public repositories. However, paying users can access the full set of features for private/internal repositories as part of the GitHub Advanced Security (GHAS) add-on suite.
It offers code scanning for known vulnerabilities, dependency scanning to pinpoint vulnerable open-source libraries, secrets scanning to uncover credentials leaked in public assets, and security alerts with Copilot-powered remediation suggestions.
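To illustrate the idea behind secrets scanning, here is a minimal sketch, not GitHub's implementation, that matches a couple of well-known credential formats. The pattern set and function name are hypothetical simplifications; real scanners use far larger, validated pattern libraries.

```python
import re

# Toy illustration of secrets scanning: match well-known credential
# formats. These two patterns are simplified examples, not exhaustive.
PATTERNS = {
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_for_secrets(text: str) -> list[str]:
    """Return the names of credential types found in the text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

sample = 'AWS_KEY = "AKIAIOSFODNN7EXAMPLE"'
print(scan_for_secrets(sample))  # ['aws_access_key']
```

A production scanner would also verify candidate matches (for example, by checksum or by probing the issuing service) to cut false positives.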
The security tools operate at the pull request level, with the platform selecting the appropriate tool (CodeQL or AI) for each case, so issues are caught before potentially problematic code is merged.
Detected issues, such as weak cryptography, misconfigurations, or insecure SQL, are presented directly in the pull request.
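As a concrete example of the "insecure SQL" class of finding, the sketch below contrasts string-concatenated SQL, the classic pattern a code scanner flags, with a parameterized query, using Python's built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"

# Insecure: string concatenation lets attacker-controlled input
# alter the query structure -- the injected OR clause matches every row.
rows = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(rows)  # [('admin',)]

# Safe: a parameterized query treats the input as data, not SQL.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- no user is literally named "alice' OR '1'='1"
```

Parameterized queries are the standard remediation a tool like Copilot Autofix would suggest for this class of issue.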
GitHub’s internal testing showed that the system processed over 170,000 findings in 30 days, with 80% positive developer feedback indicating that the flagged issues were valid.
These results showed “strong coverage” of the target ecosystems that had not been sufficiently scrutinized before.
GitHub also highlights the importance of Copilot Autofix, which suggests solutions for the problems detected through GitHub Code Security.
Stats from 2025 comprising over 460,000 security alerts handled by Autofix show that resolution was reached in 0.66 hours on average, compared to 1.29 hours when Autofix wasn’t used.
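Taken at face value, those averages imply Autofix cuts mean resolution time roughly in half:

```python
# Average resolution times reported in the article (hours).
with_autofix = 0.66
without_autofix = 1.29

reduction = (without_autofix - with_autofix) / without_autofix
print(f"{reduction:.0%}")  # 49%
```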
GitHub’s adoption of AI-powered vulnerability detection marks a broader shift toward security that is AI-augmented and natively embedded in the development workflow itself.



Bill Toulas
Bill Toulas is a tech writer and infosec news reporter with over a decade of experience working on various online publications, covering open-source, Linux, malware, data breach incidents, and hacks.

