Black Hat 2024: Researcher exposes Microsoft Copilot AI vulnerabilities

At the 2024 Black Hat cybersecurity conference in Las Vegas, security researcher Michael Bargury demonstrated five severe vulnerabilities with the Microsoft Copilot Generative AI system. The proof-of-concept demos used powerpwn, a red-teaming kit, to exploit Copilot to conduct automated spear-phishing, exfiltrate private data, bypass Microsoft security controls, and cite false sources (ā€œphantom sourcingā€).

Bargury is the co-founder and CTO of cybersecurity company Zenity. He published his findings and released numerous demonstration videos on YouTube (we include a few below). They have since released powerpwn as an open-source red-teaming platform for hardening an enterpriseā€™s Microsoft 365 Power Platform and Copilot usage.

According to Wired and a statement on Zenityā€™s blog, Bargury is collaborating with Microsoft and its AI teams to improve security.

ā€œI can do this with everyone you have ever spoken to, and I can send hundreds of emails on your behalfā€

Michael Bargury

Copilot exploited for creating spear-phishing email campaigns

Using Microsoft Copilot, Bargary explained that powerpwn can generate hundreds of emails impersonating an identity. It can also determine who you have emailed and the frequency with which you contact specific individuals to automate creating spear-phishing email content.

What would take a hacker or malicious cyber actor days to carefully craft an effective spear-phishing email can now be automated in seconds. Hundreds of spear-phishing emails can target a personā€™s contacts, using language and nuance that Copilot assesses the sender uses in prior emails.

Yes, weā€™re a long way past when tech companies like OpenAI initially tried to block malicious ChatGPT requests, like creating malware.

Living off Microsoft Copilot at BHUSA24: Automated spear phishing with powerpwn abusing Copilot
Michael Bargury demonstrates using his red-teaming toolkit powerpwn for the Microsoft 365 Power Platform to automate spear-phishing attacks using Microsoft Copilot. (source: YouTube)

Using Copilot for data poisoning attacks

Perhaps most troublesome is Bargaryā€™s demonstration of using Copilot in a data poisoning attack on the systemā€™s AI database. Using powerpwn, Bargury showed how an attacker without access to the organizationā€™s 365 accounts can send a malicious email to poison the database. The attacker can then perform data exfiltration and manipulation on important financial information, for example.

Access to publicly traded company financial information could be disastrous, as companies must adhere to strict financial and regulatory policies. Bargury believes an attacker could theoretically determine whether an upcoming quarterly financial performance will be positive or negative, aiding illegal insider trading activities.

Living off Microsoft Copilot at BHUSA24: Financial transaction hijacking with Copilot as an insider
Using powerpwn to send a malicious email to a targeted Copilot AI deployment. The malicious email will perform a data poisoning attack on the AI systemā€™s database, allowing an attacker to exfiltrate or manipulate important data within the 365 organizationā€™s deployment. (source: YouTube)

Bypassing Microsoft Copilot AI safety controls

Threatening words to any organizationā€™s security teams and security executives, Bargury commented that bypassing the built-in Microsoft Copilot AI safety controls was primarily achieved with creative prompt engineering.

ā€œYou talk to Copilot and itā€™s a limited conversation, because Microsoft has put a lot of controls. But once you use a few magic words, it opens up and you can do whatever you want.ā€

Then, ā€œyouā€™ve turned Microsoft Copilot into a malicious insider,ā€ Bargury says.

ā€œEvery time you give AI access to data, that is a way for an attacker to get inā€

Michael Bargury

Bargary emphasizes that publishing the ease of abusing or exploiting enterprise AI and generative AI platforms is necessary to improve overall enterprise security. Zenity has published a complete summary of the powerpwn Copilot attacks, methodologies, and Black Hat participation on its blog.

Microsoft continues collaborating with Bargury and the Zenity organization to remediate the vulnerabilities. However, it is unknown whether all identified use cases and vulnerabilities Bargury demonstrated at Black Hat remain.

A cautionary example of rapid enterprise generative AI adoption

Microsoft Copilot may be the generative AI platform exploited in this example, but itā€™s hardly a security concern isolated to just one technology company.

Generative AI is progressing rapidlyā€“perhaps faster than any other modern-day technology. ChatGPT by OpenAI reached 1 million users in 5 days, and as of May 2024, it averages over 260 million users per month, according to Similarweb.

In 2023, leading tech companies, including Amazon, Anthropic, Microsoft, Google, Meta, and OpenAI, pledged to support safe, secure, and trustworthy AI principles with the Biden-Harris White House Administration.

Many of the same tech companies pledged to ban their generative AI platforms from being used for election interference globally, as numerous important primaries and national elections occur between 2023 and 2024.

Still, advanced persistent threat groups often funded and tied to nation-state governments abuse generative AI to create disinformation and influence operations and malware.


Discover more from Cybersecurity Careers Blog

Subscribe to get the latest posts sent to your email.

Join the Discussion