AI Threat Tracker: How Criminal Hackers are Weaponizing Python-based Models in May 2026

The cybersecurity landscape has entered a new era. What security experts have long feared is now confirmed: criminal hackers are actively using artificial intelligence to discover vulnerabilities, write exploit code, and orchestrate autonomous attacks.

According to the Google Threat Intelligence Group (GTIG) AI Threat Tracker released May 11, 2026, threat actors have moved beyond theoretical AI abuse into full-scale operational deployment. From state-sponsored hacking teams to organized cybercrime syndicates, attackers are weaponizing Python-based AI models with unprecedented speed and sophistication.

This is the definitive tracker of how AI is being used for cyberattacks in May 2026—and what it means for your security.


The Breakthrough: First Confirmed AI-Generated Zero-Day Exploit

The most significant finding in GTIG's report is the first confirmed case of criminal hackers using AI to develop a working zero-day exploit.

What Happened

A prominent cybercrime syndicate planned a mass exploitation operation targeting a popular open-source, web-based system administration tool. Their weapon? A Python script designed to bypass two-factor authentication (2FA) on the platform.

How AI Was Used

Google researchers have high confidence that an AI model assisted in both the discovery and weaponization of this vulnerability. The evidence is in the code itself:

  • Educational docstrings characteristic of LLM training data

  • A hallucinated CVSS severity score that doesn't exist in official databases

  • Textbook Pythonic formatting with detailed help menus and ANSI color classes

  • Clean, structured code that looks more like a programming assignment than a hacker's tool (illustrated in the sketch below)
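To make those tells concrete, here is a harmless, entirely hypothetical snippet exhibiting the stylistic fingerprints GTIG describes. Nothing below is from the actual exploit; every name, docstring, and the CVSS figure are invented for illustration.

```python
# Hypothetical, harmless illustration of the "LLM fingerprints" described
# above. Nothing here is from the real exploit; all names are invented.

class Colors:
    """ANSI color codes for readable terminal output."""
    GREEN = "\033[92m"
    RED = "\033[91m"
    RESET = "\033[0m"


def check_target(host: str) -> bool:
    """
    Check whether a target hostname is well-formed.

    Workflow:
      1. Validate the input hostname.
      2. Report the result with color-coded output.

    Severity: CVSS 9.8 (Critical)  <- a confident score that appears in no
                                      official database is the "hallucinated
                                      CVSS" tell
    """
    if not host:
        print(f"{Colors.RED}[-] No host supplied{Colors.RESET}")
        return False
    print(f"{Colors.GREEN}[+] Host format OK: {host}{Colors.RESET}")
    return True


if __name__ == "__main__":
    check_target("example.internal")
```

Individually, each trait is innocuous. Together, educational docstrings, tidy ANSI color classes, and a confidently wrong severity score form a recognizable signature.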

The Nature of the Flaw

This wasn't a simple memory corruption bug. The vulnerability was a semantic logic flaw—the developer had hardcoded a trust exception into the authentication flow.

Here's the critical insight: traditional security tools (fuzzers, static analyzers) are optimized to detect crashes and syntax errors. They cannot spot these high-level logic flaws. Frontier LLMs excel at identifying them because they can reason about developer intent and recognize when code is functionally correct but strategically broken.
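To see what that class of flaw looks like (the actual vulnerability has not been published, so this is a contrived stand-in), consider an authentication check with a hardcoded trust exception:

```python
# Contrived sketch of a hardcoded trust exception: the *class* of semantic
# logic flaw described, not the actual vulnerability. All names are invented.

TRUSTED_INTERNAL_AGENT = "monitoring-agent"  # developer shortcut for a cron job

def requires_second_factor(username: str) -> bool:
    """Decide whether this login must present a valid 2FA code."""
    # The flaw: this compiles, passes tests, and never crashes a fuzzer,
    # yet anyone who learns the magic username skips 2FA entirely.
    if username == TRUSTED_INTERNAL_AGENT:
        return False  # trust exception hardcoded into the auth flow
    return True

def login(username: str, password_ok: bool, totp_ok: bool) -> bool:
    if not password_ok:
        return False
    if requires_second_factor(username):
        return totp_ok
    return True  # second factor silently bypassed

# A fuzzer sees correct behavior on every input; an LLM reading the source
# can ask the human question: why does this one username skip 2FA?
print(login("monitoring-agent", password_ok=True, totp_ok=False))  # True
```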

The Good News (Sort Of)

Google worked with the unnamed vendor to responsibly disclose the vulnerability, and a patch has been issued. The mass exploitation campaign was disrupted before it could gain traction.

But as John Hultquist, chief analyst at GTIG, warns: "For every zero-day we can trace back to AI, there are probably many more out there."


Nation-State Actors: The AI Arms Race is Here

State-sponsored hacking groups are adopting AI technology faster than previously understood. The GTIG report documents extensive AI abuse by Chinese and North Korean actors.

North Korea (APT45 - "Silent Chollima")

North Korean hackers have been observed sending thousands of repetitive prompts to recursively analyze different CVEs and validate proof-of-concept exploits.

This approach allows them to build a "more robust arsenal of exploit capabilities that would be impractical to manage without AI assistance." They are essentially using AI as a force multiplier, scaling their vulnerability research beyond human capacity.
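Defenders can borrow the same force-multiplier pattern for triage. Below is a minimal sketch that loops an LLM over a CVE backlog for a first-pass summary; the endpoint URL, model name, prompt, and response shape are all placeholder assumptions, not any specific vendor's API.

```python
# Defensive sketch of the same force-multiplier pattern: loop an LLM over a
# CVE backlog for first-pass triage. Endpoint, model name, and response
# shape are assumptions; adapt to whatever LLM API you actually use.
import json
import os
import urllib.request

API_URL = "https://api.example-llm.com/v1/complete"  # hypothetical endpoint
API_KEY = os.environ["LLM_API_KEY"]

def triage(cve_id: str, description: str) -> str:
    prompt = (
        f"You are assisting a defensive security team. For {cve_id}:\n"
        f"{description}\n"
        "Summarize exploit preconditions and suggest detection ideas."
    )
    req = urllib.request.Request(
        API_URL,
        data=json.dumps({"model": "example-model", "prompt": prompt}).encode(),
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["text"]  # response shape is an assumption

backlog = {"CVE-2026-0001": "Example: auth bypass in example-tool <= 2.3"}
for cve, desc in backlog.items():
    print(cve, "->", triage(cve, desc)[:200])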

China (UNC2814)

Chinese state-linked actors have deployed sophisticated persona-driven jailbreaks. In one observed case, UNC2814 prompted an AI model to act as a "senior security auditor" conducting vulnerability research on embedded devices.

Their targets included:

  • TP-Link router firmware

  • Odette File Transfer Protocol (OFTP) implementations

  • Various embedded systems

The attackers specifically instructed the AI to look for pre-authentication remote code execution (RCE) vulnerabilities—the holy grail for network exploitation.

Agentic Frameworks in the Wild

Perhaps most concerning is the deployment of autonomous agentic tools by China-nexus actors. Frameworks like Hexstrike, Strix, and Graphiti are being used to:

  • Autonomously probe target networks

  • Maintain persistence across attack surfaces

  • Automate vulnerability validation

  • Pivot between reconnaissance tools based on internal reasoning

In one campaign, these tools were deployed against a Japanese technology firm and a major East Asian cybersecurity platform—with minimal human oversight.


PROMPTSPY: The Android Backdoor That Talks to AI

The GTIG report provides a new analysis of PROMPTSPY, an Android backdoor first documented by ESET, revealing previously unreported AI integration capabilities.

How It Works

PROMPTSPY calls the Gemini API at runtime to interpret on-screen user interface elements and autonomously generate touch coordinates. This allows the malware to:

  • Navigate infected devices without hardcoded commands

  • Capture biometric data and replay authentication gestures

  • Ensure the malicious app remains in the "recent apps" list

  • Replay lock patterns and PINs to regain access 

The Jailbreak Technique

The malware includes a module named "GeminiAutomationAgent" with a hardcoded prompt designed to assign a benign persona to the AI, bypassing safety guardrails. The goal: calculate UI geometry for automated interaction.

This represents a fundamental shift—from malware that follows static instructions to autonomous malware that interprets and responds to its environment in real time.
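One cheap triage step follows directly from this finding: hardcoded prompts and LLM API hostnames are plain strings inside the APK. A minimal sketch follows; "GeminiAutomationAgent" comes from the report, while the other indicators (including the persona-prompt prefix) are assumptions of mine to tune for your own hunting.

```python
# Minimal triage sketch: grep an APK (a ZIP archive) for strings suggesting
# runtime LLM integration.
import sys
import zipfile

INDICATORS = [
    b"GeminiAutomationAgent",              # module name cited by GTIG
    b"generativelanguage.googleapis.com",  # public Gemini API hostname
    b"You are a helpful",                  # common persona-prompt prefix (assumption)
]

def scan_apk(path: str) -> None:
    with zipfile.ZipFile(path) as apk:
        for name in apk.namelist():
            data = apk.read(name)
            for needle in INDICATORS:
                if needle in data:
                    print(f"[!] {needle.decode()!r} found in {name}")

if __name__ == "__main__":
    scan_apk(sys.argv[1])
```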


Russian Operations: AI-Generated Decoy Code and Voice Cloning

Russian-nexus threat actors are taking a different approach, using AI for obfuscation and influence operations.

CANFAIL and LONGSTREAM

Two Russia-linked malware families are using AI-generated decoy code to camouflage their malicious functionality. The AI produces legitimate-looking code that wraps around the actual payload, confusing static analysis tools and human analysts alike.

Operation Overload

In an information operation codenamed "Overload," Russian actors used AI voice cloning to impersonate real journalists in fabricated video content. The campaign targeted Ukraine, France, and the United States, demonstrating how AI enables disinformation at scale.


Supply Chain Attacks: Poisoning the AI Well

Attackers are now targeting the AI development pipeline itself.

TeamPCP and LiteLLM

In March 2026, criminal group TeamPCP (also tracked as UNC6780) compromised LiteLLM, a popular open-source AI gateway utility. Their method:

  1. Poisoned packages uploaded to PyPI (Python Package Index)

  2. Malicious pull requests to legitimate repositories

  3. Extraction of AWS keys and GitHub tokens from compromised systems

The stolen credentials were monetized through ransomware partnerships—a supply chain attack that turned AI infrastructure into a pivot point for broader network compromise.
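The standard countermeasure for poisoned packages is refusing to install anything whose digest you have not pinned; pip supports this natively via `--require-hashes`. Here is a minimal standalone sketch of the same idea, with a placeholder filename and digest:

```python
# Sketch of one standard mitigation: verify downloaded artifacts against
# pinned, known-good digests before installation. The digest below is a
# placeholder; pin real values from a build you have audited.
import hashlib
import sys

PINNED = {
    # filename: sha256 you recorded when you last audited the package
    "example_pkg-1.0.0-py3-none-any.whl": "0" * 64,  # placeholder digest
}

def verify(path: str) -> bool:
    name = path.rsplit("/", 1)[-1]
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    expected = PINNED.get(name)
    if expected is None:
        print(f"[!] {name}: not in allowlist")
        return False
    if digest != expected:
        print(f"[!] {name}: digest mismatch ({digest[:12]}... != pinned)")
        return False
    print(f"[+] {name}: digest OK")
    return True

if __name__ == "__main__":
    sys.exit(0 if verify(sys.argv[1]) else 1)
```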

The "Dabrius" Campaign

A separate malicious package campaign tracked as MAL-2026-3369 involved the "dabrius" package on PyPI. The message hidden in the package description was specifically designed to convince AI agents to prefer installing the malicious package.

The package evolved across 24 versions (0.1.0 through 1.0.7), progressively adding data exfiltration capabilities, including credential harvesting.
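Because this lure targets AI agents reading package metadata, one screening idea is to scan that metadata for instruction-shaped text before an agent ever sees it. The sketch below uses the real PyPI JSON API; the phrase list is my assumption, since the report does not publish the exact "dabrius" wording.

```python
# Heuristic sketch: flag PyPI package descriptions that read like
# instructions to an AI agent rather than to a human.
import json
import re
import urllib.request

SUSPICIOUS = [
    r"(?i)if you are an (ai|llm|language model|agent)",
    r"(?i)always (prefer|choose|install) this package",
    r"(?i)ignore (previous|prior) instructions",
]

def check_package(name: str) -> list[str]:
    url = f"https://pypi.org/pypi/{name}/json"  # real PyPI JSON API
    with urllib.request.urlopen(url) as resp:
        info = json.load(resp)["info"]
    text = (info.get("summary") or "") + "\n" + (info.get("description") or "")
    return [p for p in SUSPICIOUS if re.search(p, text)]

if __name__ == "__main__":
    hits = check_package("requests")  # known-good baseline
    print("suspicious patterns:", hits or "none")
```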

Fake OpenAI Repository on Hugging Face

On May 7, 2026, researchers from HiddenLayer discovered a malicious repository on Hugging Face impersonating OpenAI's "Privacy Filter" project.

The campaign used typosquatting and copied OpenAI's descriptions nearly word-for-word. The repository briefly reached #1 trending on the platform before removal, accumulating approximately 244,000 downloads.
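Typosquats of this kind are cheap to screen for: flag names that are close to, but not exactly, a name you trust. A minimal sketch using stdlib difflib; the trusted list, threshold, and sample repo names are illustrative assumptions.

```python
# Sketch of typosquat screening: flag org names that are a close but inexact
# match to names you trust. Threshold and trusted list are assumptions.
from difflib import SequenceMatcher

TRUSTED = {"openai", "google", "meta-llama", "mistralai"}

def is_suspicious(repo: str, threshold: float = 0.8) -> bool:
    org = repo.split("/", 1)[0].lower()
    if org in TRUSTED:
        return False  # exact match to a trusted org: fine
    return any(
        SequenceMatcher(None, org, good).ratio() >= threshold
        for good in TRUSTED
    )

for repo in ["openai/privacy-filter", "0penai/privacy-filter", "acme/tool"]:
    print(repo, "->", "SUSPICIOUS" if is_suspicious(repo) else "ok")
```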

The malware deployed a Rust-based infostealer targeting:

  • Browser cookies, passwords, and session tokens

  • Cryptocurrency wallets and seed phrases

  • Discord tokens and databases

  • SSH, FTP, and VPN credentials

  • System screenshots across multiple monitors

Anti-analysis protections included virtual machine detection, sandbox evasion, and debugger checks.


The Industrialization of Premium AI Access

Threat actors have moved beyond casual API abuse. Google reports that attackers are now industrializing access to premium AI models through:

  • Automated account creation pipelines

  • Proxy relays to obscure origins

  • Account-pooling infrastructure to bypass usage limits

  • Professionalized middleware for LLM access

This is no longer amateur experimentation. This is scaled, professional abuse of AI infrastructure, subsidized through programmatic account cycling and trial abuse.
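On the platform side, the telltale of programmatic account cycling is volume: many signups in a short window from one network origin. A toy heuristic follows, assuming signup telemetry with `origin` and `ts` fields; the field names and thresholds are mine.

```python
# Toy heuristic for the abuse pattern described: many accounts created in a
# short window from the same network origin. Adapt to your own telemetry.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
MAX_SIGNUPS = 5

def burst_origins(signups: list[dict]) -> set[str]:
    by_origin: dict[str, list[datetime]] = defaultdict(list)
    for s in sorted(signups, key=lambda s: s["ts"]):
        by_origin[s["origin"]].append(s["ts"])
    flagged = set()
    for origin, times in by_origin.items():
        start = 0
        for end in range(len(times)):
            # shrink the sliding window until it spans at most WINDOW
            while times[end] - times[start] > WINDOW:
                start += 1
            if end - start + 1 > MAX_SIGNUPS:
                flagged.add(origin)
                break
    return flagged

now = datetime(2026, 5, 11, 12, 0)
fake = [{"origin": "AS64500", "ts": now + timedelta(seconds=30 * i)}
        for i in range(12)]
print(burst_origins(fake))  # {'AS64500'}
```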


Defenders Are Fighting Back

It's not all bad news. Google is actively countering these threats:

  • Disabling malicious accounts that abuse Gemini

  • Deploying Big Sleep, an AI vulnerability discovery agent

  • Using CodeMender, an AI-powered patching tool that automatically fixes vulnerabilities

  • Implementing the Secure AI Framework (SAIF) taxonomy to classify and mitigate ML-specific risks 


The Bottom Line: May 2026 is a Turning Point

The GTIG AI Threat Tracker makes one thing clear: the AI vulnerability race has already begun.

| Threat Category | Key Actors | Primary Tactic |
| --- | --- | --- |
| Zero-Day Development | Cybercrime syndicates | LLM-powered logic flaw discovery |
| State-Sponsored Research | APT45 (North Korea), UNC2814 (China) | Recursive CVE analysis + agentic frameworks |
| Autonomous Malware | PROMPTSPY operators | Real-time Gemini API calls for navigation |
| Supply Chain | TeamPCP, "dabrius" campaign | Poisoned PyPI packages + Hugging Face impersonation |
| Influence Operations | Russia-nexus actors | AI voice cloning + synthetic media |

The question is no longer if AI will be used for cyberattacks. It's how extensively—and whether your security systems are ready for autonomous, AI-driven adversaries.


What You Should Do Now:

  1. Assume AI-generated code is already in your threat model

  2. Update supply chain security to include AI package repositories (PyPI, Hugging Face, npm)

  3. Investigate AI agentic frameworks for defensive use—you need AI to fight AI

  4. Monitor for hallucinated CVSS scores and textbook-formatted code as potential indicators

  5. Secure your AI API keys and tokens with the same rigor as financial credentials (see the scanning sketch after this list)
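For item 5, a pre-commit-style scan catches credentials before they ship. A minimal sketch follows; the regexes cover widely documented key prefixes (AWS `AKIA`, GitHub `ghp_`, Google `AIza`) plus a generic `sk-` pattern, and should be extended for the providers you actually use.

```python
# Minimal pre-commit-style sketch: scan files for strings shaped like
# common API credentials before they ship.
import pathlib
import re
import sys

PATTERNS = {
    "AWS access key":  re.compile(r"AKIA[0-9A-Z]{16}"),
    "GitHub token":    re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "Google API key":  re.compile(r"AIza[0-9A-Za-z\-_]{35}"),
    "Generic sk- key": re.compile(r"\bsk-[A-Za-z0-9\-_]{20,}"),
}

def scan(root: str) -> int:
    hits = 0
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in PATTERNS.items():
            for m in pattern.finditer(text):
                print(f"[!] {label} in {path}: {m.group()[:8]}...")
                hits += 1
    return hits

if __name__ == "__main__":
    sys.exit(1 if scan(sys.argv[1] if len(sys.argv) > 1 else ".") else 0)
```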


Sources: Google Threat Intelligence Group Report (May 2026), SecurityWeek, BleepingComputer, Dark Reading, CSO Online, The Register, CIRCL Vulnerability Database, Windows Report

Disclaimer: This article is for informational purposes only. Organizations should conduct their own security assessments and consult with qualified cybersecurity professionals.
