A recently patched vulnerability in Microsoft 365 Copilot was found to potentially allow the theft of sensitive user information through a technique known as ASCII smuggling.
“ASCII Smuggling is a novel technique that uses special Unicode characters that mirror ASCII but are actually not visible in the user interface,” explained security researcher Johann Rehberger. “This means that an attacker can have the [large language model] render, to the user, invisible data, and embed them within clickable hyperlinks. This technique basically stages the data for exfiltration!”
The attack involves a series of steps that form a reliable exploit chain:
- Initiating a prompt injection via malicious content hidden in a document shared in the chat to gain control of the chatbot.
- Utilizing a prompt injection payload to instruct Copilot to search for additional emails and documents, a method called automatic tool invocation.
- Exploiting ASCII smuggling to lure the user into clicking a link that exfiltrates valuable data to a third-party server.
As a result, sensitive information within emails, including multi-factor authentication (MFA) codes, could be transmitted to an attacker-controlled server. Microsoft addressed the vulnerability following responsible disclosure in January 2024.
This incident highlights the risks associated with AI tools like Microsoft Copilot, as proof-of-concept (PoC) attacks have shown how attackers can manipulate responses, extract private data, and bypass security measures. Zenity’s research detailed methods that enable attackers to perform retrieval-augmented generation (RAG) poisoning and indirect prompt injection, leading to remote code execution attacks that can fully control Copilot and other AI applications.
One of the more innovative attack methods involves turning the AI into a spear-phishing tool, dubbed LOLCopilot, allowing attackers with access to a victim’s email account to send phishing messages that mimic the compromised user’s style.
Microsoft also acknowledged that publicly exposed Copilot bots, created using Microsoft Copilot Studio without authentication protections, could be exploited by threat actors to extract sensitive information if they know the Copilot name or URL.
“Enterprises should evaluate their risk tolerance and exposure to prevent data leaks from Copilots (formerly Power Virtual Agents), and enable Data Loss Prevention and other security controls accordingly to control creation and publication of Copilots,” Rehberger advised.