At the 2024 Black Hat cybersecurity conference in Las Vegas, security researcher Michael Bargury demonstrated five severe vulnerabilities in the Microsoft Copilot generative AI system. The proof-of-concept demos used powerpwn, a red-teaming kit, to exploit Copilot to conduct automated spear-phishing, exfiltrate private data, bypass Microsoft security controls, and cite false sources (“phantom sourcing”).
Bargury is the co-founder and CTO of cybersecurity company Zenity. He published his findings and released numerous demonstration videos on YouTube (we include a few below). Zenity has since released powerpwn as an open-source red-teaming platform for hardening an enterprise’s Microsoft 365 Power Platform and Copilot usage.
According to Wired and a statement on Zenity’s blog, Bargury is collaborating with Microsoft and its AI teams to improve security.
“I can do this with everyone you have ever spoken to, and I can send hundreds of emails on your behalf”
Michael Bargury
Copilot exploited for creating spear-phishing email campaigns
Bargury explained that powerpwn can use Microsoft Copilot to generate hundreds of emails that impersonate an identity. It can also determine whom you have emailed and how frequently you contact specific individuals, then use that context to automate the creation of spear-phishing email content.
Crafting an effective spear-phishing email, a task that would take a hacker or malicious cyber actor days, can now be automated in seconds. Hundreds of spear-phishing emails can target a person’s contacts, mimicking the language and nuance that Copilot observes in the sender’s prior emails.
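One way defenders can respond to this kind of automation is to watch for the burst pattern it creates: a single account suddenly sending far more mail in a short window than its historical baseline. The sketch below is a hypothetical detection heuristic of our own, not part of powerpwn or any Microsoft control, and the thresholds are illustrative.

```python
from datetime import datetime, timedelta

def flag_send_bursts(events, window_minutes=10, threshold=50):
    """Flag senders whose outbound email volume within any short window
    exceeds a threshold -- a crude signal of an automated campaign.

    events: list of (sender, datetime) tuples for outbound mail.
    Returns the set of suspicious sender addresses.
    """
    window = timedelta(minutes=window_minutes)
    by_sender = {}
    for sender, ts in events:
        by_sender.setdefault(sender, []).append(ts)

    suspicious = set()
    for sender, times in by_sender.items():
        times.sort()
        start = 0
        # Sliding window: advance `start` until the window fits,
        # then check how many sends fall inside it.
        for end in range(len(times)):
            while times[end] - times[start] > window:
                start += 1
            if end - start + 1 >= threshold:
                suspicious.add(sender)
                break
    return suspicious
```

In practice this signal would be one of many; a real mail-security pipeline would also weigh recipient novelty, content similarity across messages, and authentication results.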
Yes, we’re a long way past the days when tech companies like OpenAI first tried to block malicious ChatGPT requests, such as creating malware.
Using Copilot for data poisoning attacks
Perhaps most troublesome is Bargury’s demonstration of a data poisoning attack on the data Copilot draws from. Using powerpwn, Bargury showed how an attacker without access to the organization’s Microsoft 365 accounts can send a malicious email that poisons Copilot’s retrieval data. The attacker can then exfiltrate and manipulate important financial information, for example.
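This attack class is often called indirect prompt injection: instructions hidden in attacker-supplied content (here, an inbound email) get ingested and later influence the AI assistant. One common defensive idea is to screen content before it enters the retrieval index. The sketch below is a purely illustrative keyword heuristic of our own devising, not how Microsoft or Zenity address the issue, and real attacks use obfuscation that defeats simple pattern lists.

```python
import re

# Phrases frequently seen in naive injection payloads.
# Illustrative only; a production defense cannot rely on keyword matching.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard .{0,40}system prompt",
    r"do not (mention|reveal|tell)",
]

def safe_to_index(text: str) -> bool:
    """Return True if the text looks safe to add to a retrieval index,
    False if it matches a known injection pattern and should be quarantined
    for human review instead."""
    lowered = text.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

The deeper fix, as Bargury’s demos suggest, is architectural: treating all retrieved content as untrusted input rather than trusting anything that lands in a user’s mailbox.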
For a publicly traded company, exposure of financial information could be disastrous, as such companies must adhere to strict financial and regulatory policies. Bargury believes an attacker could theoretically determine whether an upcoming quarterly financial report will be positive or negative, aiding illegal insider trading.
Bypassing Microsoft Copilot AI safety controls
In words that should unsettle any organization’s security teams and executives, Bargury noted that bypassing Microsoft Copilot’s built-in AI safety controls was achieved primarily through creative prompt engineering.
“You talk to Copilot and it’s a limited conversation, because Microsoft has put a lot of controls. But once you use a few magic words, it opens up and you can do whatever you want.”
Then, “you’ve turned Microsoft Copilot into a malicious insider,” Bargury says.
“Every time you give AI access to data, that is a way for an attacker to get in”
Michael Bargury
Bargury emphasizes that publicizing how easily enterprise AI and generative AI platforms can be abused or exploited is necessary to improve overall enterprise security. Zenity has published a complete summary of the powerpwn Copilot attacks, methodologies, and its Black Hat participation on its blog.
Microsoft continues collaborating with Bargury and Zenity to remediate the vulnerabilities. However, it is unknown whether all of the vulnerabilities Bargury demonstrated at Black Hat have been fixed.
A cautionary example of rapid enterprise generative AI adoption
Microsoft Copilot may be the generative AI platform exploited in this example, but the security concern is hardly isolated to a single technology company.
Generative AI is progressing rapidly, perhaps faster than any other modern technology. OpenAI’s ChatGPT reached 1 million users in five days, and as of May 2024 it averages over 260 million users per month, according to Similarweb.
In 2023, leading tech companies, including Amazon, Anthropic, Microsoft, Google, Meta, and OpenAI, pledged to support safe, secure, and trustworthy AI principles with the Biden-Harris White House Administration.
Many of the same tech companies pledged to ban their generative AI platforms from being used for election interference globally, as numerous important primaries and national elections occur between 2023 and 2024.
Still, advanced persistent threat groups, often funded by or tied to nation-state governments, abuse generative AI to create disinformation, run influence operations, and develop malware.