The Dawn of AI-Powered Malware: PromptLock Ransomware and APT28's LameHug Signal a New Era in Cyber Threats


The cybersecurity landscape is witnessing a paradigm shift as artificial intelligence transitions from defensive tool to offensive weapon. In recent months, two groundbreaking discoveries have demonstrated how threat actors are weaponizing large language models (LLMs): the emergence of PromptLock ransomware and APT28's deployment of the LameHug malware. These developments mark a critical inflection point where AI-powered attacks move from theoretical possibility to operational reality.

PromptLock: The First AI-Powered Ransomware

On August 26, 2025, ESET Research announced the discovery of what they termed "the first known AI-powered ransomware," dubbed PromptLock. This malware represents a significant evolution in ransomware capabilities, leveraging AI to generate malicious scripts dynamically rather than relying on static, pre-coded attack sequences.

Technical Architecture and Capabilities

PromptLock runs OpenAI's gpt-oss:20b model locally via the Ollama API to generate malicious Lua scripts on the fly, which it then executes across multiple operating systems. The ransomware is written in Go and relies on gpt-oss:20b, an open-weight model that can be used without proprietary restrictions.
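To ground what "local inference via the Ollama API" means in practice, the sketch below shows what a request to Ollama's documented `/api/generate` endpoint looks like. This is a benign illustration, not PromptLock's code: the prompt text and the `build_generate_payload` helper are hypothetical, and only the endpoint URL and JSON fields come from Ollama's public API.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_generate_payload(model: str, prompt: str) -> bytes:
    """Build the JSON body that Ollama's /api/generate endpoint expects."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # request a single JSON response instead of a token stream
    }).encode("utf-8")

# Hypothetical benign prompt -- PromptLock's actual hard-coded prompts differ.
payload = build_generate_payload(
    "gpt-oss:20b",
    "Write a short Lua script that prints the names of files in the current directory.",
)

# Sending the request requires a running Ollama daemon, so it is left commented out:
# req = urllib.request.Request(OLLAMA_URL, data=payload,
#                              headers={"Content-Type": "application/json"})
# with urllib.request.urlopen(req) as resp:
#     lua_script = json.loads(resp.read())["response"]
```

The detection-relevant point is that this traffic never leaves the host: Ollama listens on localhost port 11434 by default, so network egress monitoring alone will not see it.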

The malware's operational methodology involves several sophisticated steps:

Cross-Platform Script Generation: PromptLock uses Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption. These Lua scripts are cross-platform, functioning on Windows, Linux, and macOS.

Dynamic Decision Making: The AI component analyzes file contents and, based on predefined text prompts, determines whether to exfiltrate or encrypt the data, allowing context-aware attack decisions that traditional malware cannot make.

Encryption and Exfiltration: PromptLock uses the SPECK 128-bit encryption algorithm to encrypt files, though researchers note this is considered a relatively weak encryption cipher, suggesting the malware remains in proof-of-concept stages.
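Speck is publicly specified by its NSA designers (Beaulieu et al., 2013), and its extreme simplicity is part of why researchers consider it a lightweight, unusual choice for ransomware. A minimal sketch of the variant with 128-bit blocks and a 128-bit key (an assumption; the article's sources say only "SPECK 128-bit") shows the entire cipher fits in a few lines, checked against the designers' published test vector:

```python
MASK64 = (1 << 64) - 1  # Speck128 operates on two 64-bit words

def ror(x, r):
    """Rotate a 64-bit word right by r bits."""
    return ((x >> r) | (x << (64 - r))) & MASK64

def rol(x, r):
    """Rotate a 64-bit word left by r bits."""
    return ((x << r) | (x >> (64 - r))) & MASK64

def speck128_128_encrypt(x, y, key_hi, key_lo, rounds=32):
    """Encrypt one 128-bit block (x = upper word, y = lower word)."""
    k, l = key_lo, key_hi
    for i in range(rounds):
        # Round function: rotate-add-xor on the two block halves
        x = ((ror(x, 8) + y) & MASK64) ^ k
        y = rol(y, 3) ^ x
        # Key schedule, computed on the fly for the next round
        l = ((k + ror(l, 8)) & MASK64) ^ i
        k = rol(k, 3) ^ l
    return x, y
```

Speck was designed for constrained hardware, not for the authenticated, hardened file encryption mature ransomware families use, which supports the researchers' reading of PromptLock as a proof of concept.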

Detection Challenges

One of PromptLock's most concerning aspects is its potential to evade traditional detection methods. Because the malware relies on scripts generated by AI, ESET researcher Anton Cherepanov said one difference between PromptLock and other ransomware "is that indicators of compromise (IoCs) may vary from one execution to another." This variability could "significantly complicate detection and make defenders' jobs more difficult," as security tools typically rely on consistent behavioral patterns or signatures.

APT28's LameHug: State-Sponsored AI Integration

Prior to PromptLock's discovery, another significant AI-powered threat emerged from a more familiar source. In July 2025, Ukraine's CERT-UA identified a novel malware family named LameHug that uses a large language model (LLM) to generate commands for execution on compromised Windows systems. The agency attributed the attacks to Russian state-backed threat group APT28, also known as Fancy Bear.

Operational Methodology

LameHug demonstrates a different approach to AI integration compared to PromptLock. The malware is written in Python and relies on the Hugging Face API to interact with the Qwen 2.5-Coder-32B-Instruct LLM, which can generate commands according to the given prompts.

The malware's attack sequence follows a systematic pattern:

System Reconnaissance: The threat actors instructed LameHug to execute reconnaissance commands gathering hardware, process, service, and network-connection information, alongside data-theft commands, all dynamically generated via prompts to the integrated LLM.

Data Collection and Staging: These AI-generated commands were used by LameHug to collect system information and save it to a text file (info.txt), recursively search for documents in key Windows directories (Documents, Desktop, Downloads), and exfiltrate the data using SFTP or HTTP POST requests.

Adaptive Command Generation: Unlike traditional malware that executes pre-programmed functions, LameHug can take text instructions (written in natural language) and, with the help of an AI model, translate them into actual system commands, providing unprecedented flexibility in attack execution.
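CERT-UA's description of LameHug boils down to serializing a natural-language instruction into a hosted-inference request. The sketch below shows what such a request body for a Hugging Face-hosted model looks like; it is a benign illustration for defenders, not LameHug's code, and the instruction text, the `build_hf_payload` helper, and the exact payload shape LameHug uses are assumptions (only the hosted-inference URL pattern and the `inputs`/`parameters` fields come from Hugging Face's public API).

```python
import json

HF_ENDPOINT = (  # Hugging Face hosted-inference URL pattern (illustrative)
    "https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct"
)

def build_hf_payload(instruction: str) -> bytes:
    """Serialize a natural-language instruction into an inference request body."""
    return json.dumps({
        "inputs": instruction,                   # the plain-text tasking
        "parameters": {"max_new_tokens": 256},   # bound the generated reply
    }).encode("utf-8")

# Hypothetical benign instruction -- nothing like LameHug's actual prompts.
payload = build_hf_payload("Write one shell command that prints the OS name and version.")
```

For defenders, the useful takeaway is the observable: repeated authenticated POSTs from an unexpected process to api-inference.huggingface.co are exactly the kind of traffic the monitoring recommendations below target.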


Strategic Implications and Analysis

The emergence of both PromptLock and LameHug represents more than isolated incidents—they signal fundamental changes in how cyber attacks are conceived and executed.

Lowering the Barrier to Entry

"With the help of AI, launching sophisticated attacks has become dramatically easier — eliminating the need for teams of skilled developers," added Cherepanov. This democratization of advanced attack capabilities means that less technically skilled threat actors can now deploy sophisticated, adaptive malware.

Operational Flexibility

LameHug is the first malware publicly documented to include LLM support to carry out the attacker's tasks. From a technical perspective, it could usher in a new attack paradigm where threat actors can adapt their tactics during a compromise without needing new payloads. This adaptability represents a significant evolution from traditional malware that follows predetermined execution paths.

State-Sponsored Testing Grounds

Security experts believe these campaigns serve as testing grounds for more advanced AI-powered attacks. Based on analysis of the malware code and its operational characteristics, several factors suggest APT28 is testing new LLM capabilities rather than executing a sophisticated operational deployment. The relatively simple implementation of both PromptLock and LameHug suggests we're witnessing the early stages of AI weaponization rather than mature operational deployments.


Current Limitations and Proof-of-Concept Status

Both malware families show clear signs of being experimental rather than production-ready threats:

PromptLock Limitations: Multiple indicators suggest the sample is a proof-of-concept (PoC) or work in progress rather than fully operational malware deployed in the wild, including the use of weak encryption and hardcoded Bitcoin addresses linked to Satoshi Nakamoto.

LameHug Testing Nature: The Python scripts are relatively basic, lacking the sophisticated evasion techniques typically associated with APT28 operations, and the LLM integration is implemented straightforwardly, with no attempt to obfuscate or hide the AI service usage.

Future Threat Evolution

The discovery of these AI-powered malware families provides insight into future threat development. Russo believes that we're soon going to see attack campaigns where an LLM or other AI-based control system is given "more reasoning and even decision-making capacity."

MITRE researchers have developed frameworks to study these emerging threats. The OCCULT framework initiative started in the spring of 2024 and aimed to measure autonomous agent behaviors and evaluate the performance of large language models (LLMs) and AI agents in offensive cyber capabilities.


Defensive Considerations and Recommendations

The emergence of AI-powered malware requires security teams to adapt their defensive strategies:

Behavioral Analysis Enhancement: Traditional signature-based detection may prove insufficient against AI-generated attack variations. Organizations should invest in behavioral analysis tools that can identify malicious activities regardless of the specific commands used.

API Monitoring: Since LameHug is one of the first publicly observed malware strains to incorporate LLM capabilities for generating and executing system commands, it's important to monitor outbound connections to known LLM service endpoints.
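One concrete, low-cost version of this control is checking egress destinations against a list of known LLM-service hosts. A minimal sketch follows; the host list is illustrative and deliberately incomplete, and real deployments would maintain it from threat intelligence feeds.

```python
from urllib.parse import urlparse

# Illustrative, incomplete set of LLM-service hosts relevant to these campaigns.
LLM_HOSTS = {
    "api-inference.huggingface.co",        # Hugging Face hosted inference (used by LameHug)
    "api.openai.com",                      # OpenAI API
    "generativelanguage.googleapis.com",   # Google Gemini API
}

def flag_llm_destination(url: str) -> bool:
    """Return True if an outbound URL points at a known LLM-service host."""
    host = (urlparse(url).hostname or "").lower()
    # Exact-match known hosts, plus per-tenant Azure OpenAI subdomains.
    return host in LLM_HOSTS or host.endswith(".openai.azure.com")
```

Note the limitation: PromptLock's model traffic goes to a local Ollama daemon (port 11434 by default) and never crosses the network perimeter, so egress lists like this must be paired with host-level telemetry.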

Network Segmentation: The success of a PromptLock attack also depends on the victim having poor network segmentation, underscoring the importance of robust network architecture.

AI Security Awareness: Organizations should apply strict input controls, monitor AI usage, block access to LLM services where not required, and validate the integrity of any LLMs integrated into their workflows to ensure these technologies are not turned against them.


Conclusion: The New Reality of AI-Powered Threats

The discovery of PromptLock and LameHug marks the beginning of a new era in cybersecurity where artificial intelligence serves both as shield and sword. While both malware families currently appear to be proof-of-concept implementations, they demonstrate the feasibility and potential of AI-powered attacks.

As "The emergence of tools like PromptLock highlights a significant shift in the cyber threat landscape," security professionals must prepare for a future where AI enhances not just defensive capabilities but also the sophistication and adaptability of cyber attacks. The race between AI-powered offense and defense has begun, and the stakes have never been higher.

Organizations must recognize that the threat landscape is evolving rapidly. The integration of AI into malware represents not just a technological advancement but a fundamental shift in how cyber attacks are conceived, developed, and executed. Preparation for this new reality requires investment in advanced defensive technologies, enhanced security awareness, and a deep understanding of how artificial intelligence can be both leveraged and weaponized in the cybersecurity domain.


This analysis is based on public research from ESET, Ukraine's CERT-UA, and various cybersecurity firms. The information presented is intended for defensive awareness and educational purposes within the cybersecurity community.
