- Cyber Syrup
- Posts
- Man-in-the-Prompt: New Threat Targeting AI Chatbots via Browser Extensions
Man-in-the-Prompt: New Threat Targeting AI Chatbots via Browser Extensions
Cybersecurity researchers at LayerX have unveiled a novel attack technique called Man-in-the-Prompt, which enables malicious browser extensions to interact with generative AI chatbots

CYBER SYRUP
Delivering the sweetest insights on cybersecurity.
Looking for unbiased, fact-based news? Join 1440 today.
Join over 4 million Americans who start their day with 1440 – your daily digest for unbiased, fact-centric news. From politics to sports, we cover it all by analyzing over 100 sources. Our concise, 5-minute read lands in your inbox each morning at no cost. Experience news without the noise; let 1440 help you make up your own mind. Sign up now and invite your friends and family to be part of the informed.
Man-in-the-Prompt: New Threat Targeting AI Chatbots via Browser Extensions

Cybersecurity researchers at LayerX have unveiled a novel attack technique called Man-in-the-Prompt, which enables malicious browser extensions to interact with generative AI chatbots like ChatGPT, Gemini, Claude, Copilot, and DeepSeek—silently exfiltrating sensitive data through prompt manipulation.
Unlike traditional browser-based exploits, this attack doesn’t rely on elevated permissions. Instead, it takes advantage of the fact that most AI tools’ prompt input fields are part of the web page’s Document Object Model (DOM), allowing any extension with scripting access to read or write into the AI prompt field.
How the Attack Works
The attack begins when a user installs a malicious browser extension—intentionally or unknowingly. Once installed, the extension can open a background tab, initiate an AI session (e.g., ChatGPT), and inject crafted prompts asking the AI to divulge sensitive information. This data is then silently exfiltrated to a command-and-control (C&C) server. The extension can even delete the chat history afterward to cover its tracks.
No elevated permissions are needed for this attack. The vulnerability arises from the open interaction surface of LLMs combined with common browser behaviors.
Risks to Enterprises
While consumer users are at risk, enterprise-customized AI assistants face the greatest threat. These tools often have access to sensitive internal information, such as:
Intellectual property
Internal documents
HR records
Financial data
Emails and calendar invites
A LayerX proof-of-concept demonstrated how Google’s Gemini, when integrated with Google Workspace, could be manipulated into extracting documents, meeting summaries, and contact lists through prompt injection by a rogue extension.
Why This Isn’t a CVE-Level Vulnerability
Although the implications are severe, this issue is not considered a software vulnerability by vendors like Google. Instead, it's seen as a systemic design weakness, exploiting the trust model of browser extensions and the open prompt architecture of LLMs.
Recommendations
To mitigate this threat, LayerX suggests:
Monitor DOM activity: Track scripts and event listeners interacting with AI input fields.
Restrict browser extensions: Use behavioral risk assessment tools to block or sandbox extensions that access LLMs.
Educate users: Train staff on the risks of browser extensions and use managed extension policies.
Segregate AI access: Limit which systems and user roles can access internal LLMs or integrate them with sensitive tools like Google Workspace.
Final Thoughts
The Man-in-the-Prompt method is a clear example of how modern browser environments and AI integrations can be manipulated. While not technically a software flaw, it reveals the fragility of trust boundaries between user inputs, browser extensions, and AI agents—raising new challenges for both AI security and browser hygiene.