Cybersecurity researchers have revealed a newly identified attack technique that shows how artificial intelligence chatbots can be manipulated to leak sensitive information with minimal user involvement. The method, known as Reprompt, demonstrates how attackers could extract data from AI assistants such as Microsoft Copilot through a single click on a legitimate-looking link, while bypassing standard enterprise security protections.
According to researchers, the attack requires no malicious software, plugins, or continued interaction. Once a user clicks the link, the attacker can retain control of the chatbot session even if the chat window is closed, allowing information to be quietly transmitted without the user’s awareness.
The issue was disclosed responsibly, and Microsoft has since addressed the vulnerability. The company confirmed that enterprise users of Microsoft 365 Copilot are not affected.
At a technical level, Reprompt relies on a chain of design weaknesses. Attackers first embed instructions into a Copilot web link using a standard query parameter. These instructions are crafted to bypass safeguards that are designed to prevent direct data exposure by exploiting the fact that certain protections apply only to the initial request. From there, the attacker can trigger a continuous exchange between Copilot and an external server, enabling hidden and ongoing data extraction.
In a realistic scenario, a target might receive an email containing what appears to be a legitimate Copilot link. Clicking it would cause Copilot to execute instructions embedded in the URL. The attacker could then repeatedly issue follow-up commands remotely, prompting the chatbot to summarize recently accessed files, infer personal details, or reveal contextual information. Because these later instructions are delivered dynamically, it becomes difficult to determine what data is being accessed by examining the original prompt alone.
Researchers note that this effectively turns Copilot into an invisible channel for data exfiltration, without requiring user-entered prompts, extensions, or system connectors. The underlying issue reflects a broader limitation in large language models: their inability to reliably distinguish between trusted user instructions and commands embedded in untrusted data, enabling indirect prompt injection attacks.
The Reprompt disclosure coincides with the identification of multiple other techniques targeting AI-powered tools. Some attacks exploit chatbot connections to third-party applications, enabling zero-interaction data leaks or long-term persistence by injecting instructions into AI memory. Others abuse confirmation prompts, turning human oversight mechanisms into attack vectors, particularly in development environments.
Researchers have also shown how hidden instructions can be planted in shared documents, calendar invites, or emails to extract corporate data, and how AI browsers can be manipulated to bypass built-in prompt injection defenses. Beyond software, hardware-level risks have been identified, where attackers with server access may infer sensitive information by observing timing patterns in machine learning accelerators.
Additional findings include abuses of trusted AI communication protocols to drain computing resources, trigger hidden tool actions, or inject persistent behavior, as well as spreadsheet-based attacks that generate unsafe formulas capable of exporting user data. In some cases, attackers could manipulate AI development platforms to alter spending controls or leak access credentials, enabling stealthy financial abuse.
Taken together, the research underlines that prompt injection remains a persistent and evolving risk. Experts recommend layered security defenses, limiting AI privileges, and restricting access to sensitive systems. Users are also advised to avoid clicking unsolicited AI-related links and to be cautious about sharing personal or confidential information in chatbot conversations.
As AI systems gain broader access to corporate data and greater autonomy, researchers warn that the potential impact of a single vulnerability increases substantially, underscoring the need for careful deployment, continuous monitoring, and ongoing security research.
But, is the name change the only new thing the users will be introduced with? The answer could be a little ambiguous.
What is New with Bing Chat (now Copilot)? Honestly, there are no significant changes in Copilot, previously called Bing Chat. “Refinement” might be a more appropriate term to characterize Microsoft's perplexing activities. Let's examine three modifications that Microsoft made to its AI chatbot.
Here, we are listing some of these refinements:
Copilot, then Bing Chat, now has its own standalone webpage. One can access this webpage at https://copilot.microsoft.com
This means that the user will no longer be required to visit Bing in order to access Microsoft’s AI chat experience. One can simply visit the aforementioned webpage, without Bing Search and other services interfering with your experience. Put differently, it has become much more "ChatGPT-like" now.
Notably, however, the link seems to only function with desktop versions of Microsoft Edge and Google Chrome.
While Microsoft has made certain visual changes in the rebranded Bing Chat, they are however insignificant.
This new version has smaller tiles but still has the same prompts: Write, Create, Laugh, Code, Organize, Compare, and Travel.
However, the users can still choose the conversation style, be it Creative, Balanced and Precise. The only big change, as mentioned before, is the new name (Copilot) and the tagline: "Your everyday AI companion."
Though the theme colour switched from light blue to an off-white, the user interface is largely the same.
Users can access DALLE-3 and GPT-4 for free with Bing Chat, which is now called Copilot. But in order to utilize Copilot on platforms like Word, Excel, PowerPoint, and other widely used productivity tools, users will have to pay a membership fee for what Microsoft refers to as "Copilot for Microsoft 365."
With Copilot, users can access DALLE-3 and GPT-4 for free. But in order to utilize Copilot on platforms like Word, Excel, PowerPoint, and other widely used productivity tools, users will have to pay a membership fee for what Microsoft refers to as "Copilot for Microsoft 365."
This way, users who have had a Bing Chat Enterprise account, or pay for a Microsoft 365 license, will get an additional benefit of more data protection./ Copilot will be officially launched on December 1.
Microsoft plans to gradually add commercial data protection for those who do not pay. However, Copilot currently stores information from your interactions and follows the same data policy as the previous version of Bing Chat for free users. Therefore, the name and domain change is the only difference for casual, non-subscribing Bing Chat users. OpenAI's GPT-4 and DALL-E 3 models are still available, but users need to be careful about sharing too much personal data with the chatbot.
In summary, there is not much to be excited about for free users: Copilot is the new name for Bing Chat, and it has a new home.
Microsoft introduced Copilot – its workplace assistant – earlier this year, labelling the product as a “copilot for work.”
Copilot which will be made available for the users from November 1, will be integrated to the subscribers of Microsoft 365 apps such as Word, Excel, Teams and PowerPoint – with a subscription worth $30 per user/month.
Additionally, as part of the new service, employees at companies who use Microsoft's Copilot could theoretically send their AI helpers to meetings in their place, allowing them to miss or double-book appointments and focus on other tasks.
With businesses including General Motors, KPMG, and Goodyear, Microsoft has been testing Copilot, which assists users with tasks like email writing and coding. Early feedback from those companies has revealed that it is used to swiftly respond to emails and inquire about meetings.
According to Jared Spataro, corporate vice president of modern work and business applications at Microsoft, “[Copilot] combines the power of large language models (LLMs) with your data…to turn your words into the most powerful productivity tool on the planet,” he said in a March blog post.
Spataro promised that the technology would “lighten the load” for online users, stating that for many white-collar workers, “80% of our time is consumed with busywork that bogs us down.”
For many office workers, this so-called "busywork" includes attending meetings. According to a recent British study, office workers waste 213 hours annually, or 27 full working days, in meetings where the agenda could have been communicated by email.
Companies like Shopify are deliberately putting a stop to pointless meetings. When the e-commerce giant introduced an internal "cost calculator" for staff meetings, it made headlines during the summer. According to corporate leadership, each 30-minute meeting costs the company between $700 and $1,600.
Copilot will now help in reducing this expense. The AI assistant's services include the ability to "follow" meetings and produce a transcript, summary, and notes once they are over.
Microsoft, in July, noted that “the next wave of generative AI for Teams,” which included incorporating Copilot further into Teams calls and meetings.
“You can also ask Copilot to draft notes for you during the call and highlight key points, such as names, dates, numbers, and tasks using natural language commands[…]You can quickly synthesize key information from your chat threads—allowing you to ask specific questions (or use one of the suggested prompts) to help get caught up on the conversation so far, organize key discussion points, and summarize information relevant to you,” the company noted.
In regard to the same, Spataro states that “Every meeting is a productive meeting with Copilot in Teams[…]It can summarize key discussion points—including who said what and where people are aligned and where they disagree—and suggest action items, all in real-time during a meeting.
However, Microsoft is not the only tech giant working on making meeting tolerant, as Zoom and Google have also introduced AI-powered chatbots for the online workforce that can attend meetings on behalf of the user, and present its conclusions during the get-together.