LotL Attack Hides Malware in Windows Native AI Stack
Recorded: Oct. 30, 2025, 11:03 p.m.
Original
Security programs trust AI data files, but they shouldn't: they can conceal malware more stealthily than most file types.

By Nate Nelson, Contributing Writer | October 30, 2025 | 5 Min Read
Source: Eugene Sergeev via Alamy Stock Photo

A researcher has demonstrated that Windows' native artificial intelligence (AI) stack can serve as a vector for malware delivery.

In a year when clever and complex prompt injection techniques have been growing on trees, security researcher hxr1 identified a much more traditional way of weaponizing rampant AI. In a proof-of-concept (PoC) shared exclusively with Dark Reading, he described a living-off-the-land (LotL) attack that uses trusted Open Neural Network Exchange (ONNX) model files to bypass security engines.

"All those different living off the land binaries [we're familiar with] have been there now for so many years," hxr1 says. "They're old and all well known, and most of the [endpoint detection and response systems, or EDRs] and antivirus [filters] are good enough to capture the kinds of attacks using them. So attackers always look for new living-off-the-land binaries so that they can bypass these existing defenses and get their payloads on the targeted system. That's where this ONNX model comes into the picture."

A Primer on Windows AI

Cybersecurity programs are only as effective as cybersecurity developers design them to be. They might catch undue volumes of data exfiltrating from a network, or a foreign .exe file that starts running, because these are known indicators of suspicious behavior. They likely won't notice, though, if malware arrives on a system in a form they've never seen before.

That's what makes AI such a headache. As new systems, software, and workflows tack on AI capabilities, they open up new, unseen vectors through which cyberattacks may be transmitted.

For example, since 2018, the Windows operating system has been steadily adding functionality that allows applications to perform AI inference locally, without having to connect to a cloud service. Windows Hello, Photos, and Office applications all use inbuilt AI to perform facial recognition, object detection, and productivity functions, respectively. They do so by calling the Windows Machine Learning (ML) application programming interface (API), which loads ML models in the form of ONNX files.

Windows and security programs inherently trust ONNX files. Why wouldn't they? Malware comes in EXEs, PDFs, and other formats, but to date no threat actors in the wild have demonstrated that they intend to, or can, weaponize neural networks for malicious ends. It's certainly possible, though, by any number of means.
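The trust problem is easy to see in practice. Below is a minimal sketch of local inference, using the onnxruntime Python package rather than the Windows ML API described above (an assumption made for brevity); "model.onnx" is a placeholder path. The point is that the runtime loads whatever model it is handed, as data, with no signing or provenance check of its own.

```python
# Minimal sketch: running local inference on an arbitrary ONNX file.
# Assumes a file named "model.onnx" exists (placeholder); uses onnxruntime
# for illustration rather than the Windows ML API from the article's PoC.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")  # loaded as data; no signature check

# Build a dummy input matching whatever shape the model declares.
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # fill dynamic dims with 1
dummy = np.zeros(shape, dtype=np.float32)  # assumes a float32 input for simplicity

outputs = session.run(None, {inp.name: dummy})
print("Ran inference; output shapes:", [getattr(o, "shape", None) for o in outputs])
```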
Encoding Malware in an AI Model

An easy method for poisoning a neural network would be to plant a malicious payload in its metadata. The tradeoff is that this malware would sit in plaintext, where a security program could more easily notice it.

It would be more difficult but more subtle to embed malware piecemeal among the model's named components (nodes, inputs, and outputs). Or an attacker could use advanced steganography to conceal a payload within the very weights that comprise the neural network.

All three methods work, as long as there is a loader nearby that can call the relevant Windows APIs to unpack the payload, reconstruct it in memory, and run it. And the latter two methods are extremely stealthy: trying to reconstruct a fragmented payload from a neural network would be like trying to reconstruct a needle from bits of it spread through a haystack.
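To make the first of those methods concrete, here is a small sketch of how arbitrary bytes can ride along in a model's metadata and be read back later by a separate loader. It assumes the onnx Python package; the file names and the "calibration_data" key are hypothetical, and the embedded value is a harmless marker string rather than a payload.

```python
# Sketch of the metadata technique described above: arbitrary bytes can be
# attached to an ONNX model's metadata_props and recovered later.
# "model.onnx" and "tampered.onnx" are placeholder paths.
import base64
import onnx

model = onnx.load("model.onnx")

entry = model.metadata_props.add()   # key/value string pairs; no validation applied
entry.key = "calibration_data"       # innocuous-looking key (hypothetical)
entry.value = base64.b64encode(b"BENIGN-MARKER-NOT-A-PAYLOAD").decode()

onnx.save(model, "tampered.onnx")

# A separate "loader" needs only the same trusted library to get the bytes back.
reloaded = onnx.load("tampered.onnx")
for prop in reloaded.metadata_props:
    if prop.key == "calibration_data":
        blob = base64.b64decode(prop.value)
        print("recovered", len(blob), "bytes from model metadata")
```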
Let's say an attacker manages to sneak malware into an ONNX file. They then have a variety of options for how they might transmit it to a victim. A phishing email would do, carrying an ONNX file and a loader. Or an attacker could take advantage of the widespread trust users have in AI software across the board, by publishing a malicious model on an open source platform like Hugging Face.

But there's a crucial difference between a PDF and an ONNX file in a phishing email, or a software download from Hugging Face versus GitHub.

Evading Detection Tools

"When you download a GitHub repo, that'll always be, like, a Python script, or .NET code, or something like that. And EDR engines are good enough to scan those types of files," hxr1 notes.

By contrast, when a security program sees a process loading an ONNX file, it will read it as benign AI inference. Doubly so because of how difficult it would be to find a payload in such a complex, binary file.

Triply so because the ONNX file is supposed to contain only data, so "these models don't have to be signed binaries. You can download any models, you can use native libraries to extract them, and there are no validations or signature checks happening there," hxr1 points out. They'll skirt right by analysis tools focused on executable behavior.

Quadruply so because of how the file gets loaded and executed, hxr1 says. "You can hide a payload in any file format. Like, you can put it in an audio file. But how are you going to extract it? What API are you going to use? Are EDRs good enough to monitor your suspicious APIs as they retrieve, read the file, and extract data from the file?" That's why his PoC worked so well: the dynamic link libraries (DLLs) that operate on ONNX files are signed by Microsoft and built into Windows. So when a malicious ONNX file is loaded on a target's system, all any security program will see is trusted Windows DLLs reading model data to perform an AI task.

From hxr1's perspective, there isn't any issue with how Windows AI works. Rather, the cybersecurity community at large needs to adjust, and security tools need to be reworked to look for threats couched in AI files.

"EDRs should monitor who loads them, what has been extracted, where the extracted data is being passed, and those paths need to be monitored," he suggests. "On top of that we have static analyzers, like YARA rules, that we can use to monitor for suspicious strings in data. Also, we can use application controls like AppLocker. All those things we could do as part of a mitigation and detection strategy."
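A defender-side sketch of that static-analysis suggestion follows. It checks the same places the article says a payload could hide (metadata entries, oversized node and tensor names, and raw weight buffers); the keyword list, length cutoff, and entropy threshold are illustrative assumptions standing in for real YARA rules or EDR telemetry.

```python
# Crude static check over an ONNX file for the hiding spots discussed above.
# Thresholds and keywords are illustrative, not a production ruleset.
import math
import sys
import onnx

SUSPICIOUS = (b"powershell", b"cmd.exe", b"http://", b"https://", b"MZ")

def entropy(data: bytes) -> float:
    if not data:
        return 0.0
    counts = [data.count(bytes([b])) for b in set(data)]
    return -sum(c / len(data) * math.log2(c / len(data)) for c in counts)

def scan(path: str) -> None:
    model = onnx.load(path)
    # 1. Metadata key/value pairs are plain strings: cheap to inspect.
    for prop in model.metadata_props:
        blob = prop.value.encode(errors="ignore")
        if any(s in blob for s in SUSPICIOUS) or entropy(blob) > 5.5:
            print(f"suspicious metadata entry: {prop.key!r}")
    # 2. Unusually long node/tensor names could carry fragmented payloads.
    graph = model.graph
    names = [n.name for n in graph.node] + [i.name for i in graph.initializer]
    for name in names:
        if len(name) > 200:
            print(f"oversized identifier ({len(name)} chars): {name[:40]}...")
    # 3. Raw weight buffers: flag known magic bytes such as an embedded PE header.
    for init in graph.initializer:
        if init.raw_data[:2] == b"MZ":
            print(f"initializer {init.name!r} starts with a PE header")

if __name__ == "__main__":
    scan(sys.argv[1])
```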
If nothing else, he says, "the main goal here is to prove that these models are not trustworthy. Don't blindly trust any model sitting on the Internet."

About the Author

Nate Nelson, Contributing Writer
Nate Nelson is a writer based in New York City. He formerly worked as a reporter at Threatpost, and wrote "Malicious Life," an award-winning Top 20 tech podcast on Apple and Spotify. Outside of Dark Reading, he also co-hosts "The Industrial Security Podcast."
Summarized

The proliferation of artificial intelligence (AI) within the Windows operating system presents a novel and concerning attack vector, according to a recent demonstration by researcher hxr1. This "living off the land" attack leverages Windows' native AI stack, specifically the Open Neural Network Exchange (ONNX) model format, to bypass traditional security measures. Traditional defenses, reliant on recognizing known executable behaviors, struggle to detect malware hidden within AI inference processes. The core of the threat lies in the inherent trust Windows places in AI models, a trust that malicious actors can exploit.

The attack hinges on the fact that Windows has steadily integrated AI capabilities since 2018, utilizing the Windows Machine Learning (ML) API and ONNX files for applications like Windows Hello, Photos, and Office. Because these processes provide functionality within the OS, they are treated as legitimate and trusted. That trust creates an opportunity for attackers to embed malware within the models themselves, making it exceptionally difficult for endpoint detection and response (EDR) systems and antivirus software to identify it. Standard detection methods, which focus on observable executable behavior, simply won't recognize the malicious activity hidden within the AI inference process.

hxr1's proof-of-concept demonstrates several methods for weaponizing ONNX models: embedding a payload in the model's metadata, inserting it piecemeal among the model's named components (nodes, inputs, and outputs), or using steganography to conceal it within the neural network's weights. Regardless of the chosen method, the core requirement is a supporting loader that can unpack and execute the payload once the ONNX file is loaded. The stealth of these attacks is amplified by the fact that ONNX files are not signed binaries, undergo no signature checks, and can be downloaded and used freely. The attacker's ability to deliver the malware through various vectors, such as phishing emails carrying malicious ONNX files or poisoned models published on open source platforms like Hugging Face, further exacerbates the risk. A key distinction is that a download from GitHub typically consists of files like Python scripts or .NET code, which EDR systems are generally able to scan; an ONNX model, by contrast, is read as benign data. The attack's success still depends on the loader component.

EDRs should therefore monitor who loads ONNX files, what is extracted from them, and where the extracted data is passed; those paths need to be watched. Static analysis tools like YARA rules can be used to look for suspicious strings in model data, and application controls like AppLocker can be applied. The researcher underscores the need for a fundamental shift in security strategies: traditional tooling that relies on recognizing known executable behaviors is ill-equipped for this type of attack. Instead, security programs should focus on monitoring the loading process, the extracted data, and the paths through which that data is passed. It is a call for a more dynamic and context-aware approach to threat detection. Ultimately, the demonstration highlights the critical importance of questioning the trust placed in AI systems and moving beyond reliance on traditional security methods.
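As a concrete starting point for the "monitor who loads them" recommendation, the sketch below enumerates processes that currently have common ONNX/Windows ML runtime DLLs mapped. The DLL names and the use of the psutil package are assumptions for illustration; a production EDR would typically rely on kernel-level image-load telemetry instead.

```python
# Sketch: list processes that have the ONNX/Windows ML runtime DLLs loaded,
# as a rough answer to "who is loading these models?". DLL names are assumed;
# expect AccessDenied for protected processes.
import psutil

AI_RUNTIME_DLLS = {"onnxruntime.dll", "windows.ai.machinelearning.dll", "directml.dll"}

def processes_using_ai_runtime():
    hits = []
    for proc in psutil.process_iter(["pid", "name"]):
        try:
            maps = proc.memory_maps()
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue
        loaded = {m.path.lower().rsplit("\\", 1)[-1] for m in maps if m.path}
        if loaded & AI_RUNTIME_DLLS:
            hits.append((proc.info["pid"], proc.info["name"], loaded & AI_RUNTIME_DLLS))
    return hits

if __name__ == "__main__":
    for pid, name, dlls in processes_using_ai_runtime():
        print(f"{pid:>6}  {name:<30} {', '.join(sorted(dlls))}")
```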