CYBER SYRUP
Delivering the sweetest insights on cybersecurity.

Threat Actors Probe Misconfigured Proxies to Access LLM APIs, GreyNoise Warns

Threat intelligence firm GreyNoise has observed large-scale probing activity targeting misconfigured proxy servers that could expose access to commercial large language model (LLM) APIs. The activity, recorded between October 2025 and January 2026, includes more than 91,000 attack sessions across two distinct campaigns. While some indicators suggest possible security research or bug-hunting activity, GreyNoise warns that the reconnaissance patterns are consistent with preparation for broader exploitation.

Context

As organizations rapidly integrate LLMs into applications and workflows, access to model APIs has become both valuable and sensitive. Misconfigured proxies, server-side request forgery (SSRF) flaws, and poorly protected connectors can inadvertently expose backend credentials, allowing unauthorized use of paid or restricted AI services. This emerging attack surface is increasingly attractive to threat actors seeking scalable abuse or monetization opportunities.
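To make the failure mode concrete, the sketch below shows a deliberately naive pass-through proxy of the kind this class of misconfiguration describes. It is a hypothetical illustration, not code recovered from the campaigns: the upstream URL, port, and environment variable are assumptions. The proxy performs no authentication on inbound callers, yet attaches the operator's own API key to every forwarded request, so anyone who can reach it can spend the operator's LLM budget.

```python
import os
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical upstream and credential; stand-ins for any commercial LLM API.
UPSTREAM = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ.get("UPSTREAM_API_KEY", "sk-EXAMPLE")

class NaiveProxy(BaseHTTPRequestHandler):
    """Forwards any POST to the upstream LLM API -- with no caller auth."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)

        # The misconfiguration: the server's own credential is injected into
        # every request, and nothing checks who the original caller is.
        upstream_req = urllib.request.Request(
            UPSTREAM,
            data=body,
            headers={
                "Authorization": f"Bearer {API_KEY}",
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(upstream_req) as resp:
            payload = resp.read()

        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Bound to all interfaces -- exactly the exposure being probed for.
    HTTPServer(("0.0.0.0", 8080), NaiveProxy).serve_forever()
```

Any such server reachable from the internet effectively republishes the operator's paid API access to the world, which is why scanners hunt for this pattern.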

What Happened

GreyNoise honeypots detected two major probing campaigns over a three-month period.

The first campaign began in October 2025 and leveraged ProjectDiscovery’s out-of-band application security testing (OAST) infrastructure. Activity spiked during the Christmas holiday period, with highly uniform request patterns suggesting automated tooling. Based on the infrastructure used, GreyNoise assesses that this campaign may be linked to security researchers or bug hunters, though grey-hat activity cannot be ruled out.

The second campaign started on December 28, 2025, and generated 80,469 attack sessions over just 11 days. This wave focused specifically on identifying misconfigured proxies capable of exposing access to LLM APIs.

Technical Breakdown

The attackers performed reconnaissance against more than 70 LLM endpoints, issuing benign-looking test queries designed to identify which models responded without triggering security alerts. Targeted models included offerings from OpenAI, Anthropic, Meta, Google, Mistral, Alibaba, and xAI.
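GreyNoise has not published the exact queries, so the following is a schematic reconstruction of the fingerprinting pattern described, which defenders can also run against their own infrastructure. The target address, endpoint path, model name, and prompt are all assumptions for illustration; what matters is the shape of the probe: a cheap, harmless prompt whose response reveals whether a live model sits behind the proxy.

```python
import json
import urllib.error
import urllib.request

# Placeholder target (TEST-NET address) and endpoint path -- assumptions
# for illustration, not indicators from the GreyNoise report.
CANDIDATES = ["http://203.0.113.10:8080/v1/chat/completions"]

# A benign probe: innocuous content, minimal tokens, no alarming keywords.
PROBE = json.dumps({
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Reply with the word OK."}],
    "max_tokens": 5,
}).encode()

for url in CANDIDATES:
    req = urllib.request.Request(
        url, data=PROBE, headers={"Content-Type": "application/json"}
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            reply = json.loads(resp.read())
        # A "model" field in the reply fingerprints a live LLM backend
        # reachable without credentials.
        print(f"{url} -> exposed model: {reply.get('model', 'unknown')}")
    except (urllib.error.URLError, ValueError, TimeoutError):
        continue  # closed, filtered, or not an LLM proxy
```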

Both campaigns originated from two IP addresses previously associated with exploitation attempts against more than 200 known vulnerabilities, including CVE-2025-55182 (React2Shell) and CVE-2023-1389, a command injection flaw affecting TP-Link routers. This overlap suggests shared tooling or infrastructure rather than isolated testing.

Impact Analysis

While no confirmed API compromises have been publicly disclosed, successful exploitation of misconfigured proxies could allow attackers to:

  • Consume paid LLM APIs at scale

  • Extract sensitive prompts or responses

  • Abuse AI services for downstream attacks, such as phishing or malware generation

Even limited exposure could result in significant financial costs or data leakage for affected organizations.
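To put rough numbers on the cost risk: at an assumed $10 per million tokens (a plausible price point for a premium commercial model, not a figure from the report), an attacker sustaining 1,000 hijacked requests per minute at roughly 1,000 tokens each would burn about one million tokens per minute, or around $600 of the victim's API budget per hour.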

Why It Matters

This activity highlights a growing security blind spot in AI adoption: infrastructure surrounding LLMs is often less mature than the models themselves. As organizations race to deploy AI capabilities, configuration errors can quietly introduce high-impact risks that traditional security controls may not yet fully address.

Expert Commentary

GreyNoise notes that the deliberate use of harmless test queries strongly suggests reconnaissance rather than immediate exploitation. Such behavior is commonly observed when attackers are mapping viable targets ahead of a larger campaign, allowing them to scale attacks rapidly once weaknesses are confirmed.

Key Takeaways

  • Threat actors are actively probing for misconfigured proxies exposing LLM APIs.

  • Over 91,000 attack sessions were observed across two coordinated campaigns.

  • Reconnaissance targeted dozens of commercial and open-source AI models.

  • Innocuous test queries were used to avoid detection while fingerprinting systems.

  • Organizations should review proxy configurations, enforce strict egress controls, and audit API credential handling (a minimal audit sketch follows below).
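As a starting point for that review, the sketch below probes your own proxy from an external vantage point and flags it if an unauthenticated LLM-style request succeeds. The proxy URL, endpoint path, and model name are assumptions to adapt to your environment.

```python
import json
import urllib.error
import urllib.request

# Adjust to your own proxy's external address and path (assumptions here).
PROXY_URL = "http://your-proxy.example.com:8080/v1/chat/completions"

probe = json.dumps({
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "ping"}],
    "max_tokens": 1,
}).encode()

req = urllib.request.Request(
    PROXY_URL, data=probe, headers={"Content-Type": "application/json"}
)

try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("OPEN: proxy completed an unauthenticated LLM request "
              f"(HTTP {resp.status}) -- lock down auth and egress now.")
except urllib.error.HTTPError as e:
    # A 401/403 here is the healthy outcome: the proxy demanded credentials.
    print(f"Proxy rejected the unauthenticated probe (HTTP {e.code}).")
except urllib.error.URLError as e:
    print(f"Proxy unreachable from this vantage point: {e.reason}")
```

Running such a check from outside the perimeter, rather than from an internal host, matters: many of these misconfigurations are only visible from the attacker's side of the network boundary.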
