Teams of LLM agents can exploit zero-day vulnerabilities
Could generative AI agents be the future of cybersecurity automation? (image credit: Cybersecurity Careers Blog / Adobe Firefly)

Generative AI and large language models (LLMs) show great potential for improving cybersecurity defense, but they can also be turned to malicious offensive cyberattacks. Research scientists at the University of Illinois found that teams of autonomous LLM agents can exploit real-world zero-day vulnerabilities. While individual agents perform poorly against novel cyber threats, teams of agents coordinated by a planning agent that distributes the workload improved performance 4.5-fold (450%).

The research team has named the novel approach HPTSA (Hierarchical Planning and Task-Specific Agents). This hierarchically structured multi-agent framework leverages the power of LLMs to exploit zero-day vulnerabilities in web applications. It marks the first recorded multi-agent system to accomplish meaningful cybersecurity exploits.


How HPTSA detects zero-day vulnerabilities in web applications

HPTSA architecture consists of three main components:

  • Hierarchical Planner: Explores the target system and identifies potential vulnerability types and locations.
  • Team Manager: Dispatches and manages the task-specific agents based on the planner’s instructions.
  • Task-Specific Agents: Expert agents trained on specific vulnerability types using relevant documentation and tools, enhancing their exploitation efficiency.

First, HPTSA uses its hierarchical planner to identify potential attack vectors on a website. Next, the team manager agent deploys specialized task-specific agents (e.g., for XSS, SQLi, or CSRF) to exploit those vulnerabilities efficiently. These task-specific expert agents, the "TSA" in the acronym, are vital to HPTSA's effectiveness.
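The planner-manager-expert flow described above can be sketched as follows. This is a minimal, hypothetical illustration of the hierarchy, not the researchers' actual implementation: all class names, methods, and the example paths are invented for clarity, and the stubbed methods stand in for LLM-driven crawling and exploitation logic.

```python
class TaskSpecificAgent:
    """Expert agent for one vulnerability class (e.g., XSS, SQLi, CSRF)."""
    def __init__(self, vuln_type, documentation):
        self.vuln_type = vuln_type
        self.documentation = documentation  # vuln-specific docs and tools

    def attempt_exploit(self, target, location):
        # A real agent would drive an LLM plus browser/HTTP tooling here.
        return {"type": self.vuln_type, "location": location, "success": False}


class TeamManager:
    """Dispatches expert agents according to the planner's findings."""
    def __init__(self, agents):
        self.agents = {a.vuln_type: a for a in agents}

    def run(self, target, findings):
        results = []
        for vuln_type, location in findings:
            agent = self.agents.get(vuln_type)
            if agent:
                results.append(agent.attempt_exploit(target, location))
        return results


class HierarchicalPlanner:
    """Explores the target and proposes (vuln_type, location) candidates."""
    def plan(self, target):
        # A real planner would explore the site with an LLM in the loop;
        # these findings are placeholder examples.
        return [("XSS", "/search"), ("SQLi", "/login"), ("CSRF", "/settings")]


planner = HierarchicalPlanner()
manager = TeamManager([
    TaskSpecificAgent("XSS", "docs..."),
    TaskSpecificAgent("SQLi", "docs..."),
    TaskSpecificAgent("CSRF", "docs..."),
])
findings = planner.plan("https://example.test")
results = manager.run("https://example.test", findings)
```

The key design point is the separation of concerns: the planner never attempts exploits itself, and each expert agent only ever sees tasks matching its specialty.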

This approach proves significantly more effective than single-agent methods, achieving a 4.5x (or 450%) improvement in overall success rate compared to a standalone GPT-4 agent without vulnerability descriptions.

“Our findings suggest that cybersecurity, on both the offensive and defensive side, will increase in pace. Now, black-hat actors can use AI agents to hack websites.”

Researchers from the University of Illinois

Benchmark tests using real-world vulnerabilities demonstrate that HPTSA significantly outperforms single-agent LLMs and traditional vulnerability scanners, achieving a 53% success rate. While HPTSA shows promise in automating the discovery and exploitation of zero-day vulnerabilities, further research is needed to address its limitations and explore broader implications for cybersecurity.

However, the research proves that, at least for website vulnerabilities, large language models can be weaponized for malicious use.

Implications of the HPTSA framework

Using LLMs as autonomous AI agents to exploit website vulnerabilities

The research carries several implications for the future of cybersecurity and large language models. To summarize the key findings:

Large language models (LLMs) are becoming increasingly sophisticated, raising concerns about their potential for malicious use. Researchers have been exploring whether AI agents can exploit cybersecurity vulnerabilities. While AI agents have succeeded in hacking simulated websites and known vulnerabilities, their ability to exploit unknown, real-world vulnerabilities, also known as zero-day vulnerabilities, has remained an open question until now.

New research indicates that teams of AI agents can successfully exploit real-world zero-day vulnerabilities. This research introduces HPTSA, a novel multi-agent framework designed for cybersecurity exploits. HPTSA leverages a hierarchical planner to analyze the target system and identify potential vulnerabilities. It then dispatches specialized agents, each trained on specific vulnerability types like cross-site scripting (XSS), SQL injection (SQLi), and cross-site request forgery (CSRF) to attempt exploitation.

This research demonstrates, for the first time, that teams of LLM agents can successfully find and exploit previously unknown vulnerabilities in web applications.

To evaluate HPTSA’s effectiveness, researchers created a benchmark of 15 real-world web vulnerabilities with varying severity levels. These vulnerabilities were all discovered after GPT-4’s knowledge cutoff date to ensure a realistic zero-day scenario. The results were impressive: HPTSA successfully exploited 53% of the vulnerabilities, significantly outperforming single-agent approaches and achieving results within 1.4 times of an agent with prior knowledge of the vulnerabilities.

The success of HPTSA highlights the evolving landscape of cybersecurity. The ability of AI agents to autonomously identify and exploit zero-day vulnerabilities poses a significant threat, as it enables malicious actors, even those without advanced technical skills, to potentially compromise systems. However, it’s important to note that this technology can also be used for good. Security professionals can leverage AI agents like HPTSA to proactively identify and address vulnerabilities before they can be exploited by malicious actors, strengthening system defenses.

Cost-effectiveness will likely increase, resulting in more autonomous AI cyberattacks. While the cost of using HPTSA is comparable to that of human penetration testers today, scientists predict a significant cost reduction soon due to the rapidly decreasing cost of LLMs.

What future research may uncover about autonomous AI cyber threats

While alarming, HPTSA’s effectiveness remains confined to identifying and exploiting website vulnerabilities. This is a meaningful but small fraction of the overall cybersecurity attack surface.

Large language models continue to develop rapidly, and exploring HPTSA’s effectiveness against other vulnerability types beyond web applications should be a priority. Evaluating HPTSA across multiple LLMs such as Google Gemini, Meta Llama 3, or security-tuned models like Google’s SecLM could produce dramatically different results.

Further investigation of techniques to improve HPTSA’s performance against complex zero-day vulnerabilities is also needed.

As autonomous AI cybersecurity capabilities emerge, developing robust defense mechanisms against AI-powered cyberattacks is paramount.

Technology vendors like Microsoft, Google, AWS, and even the U.S. Department of Defense maintain Safety and Responsible AI tenets. Examining AI agents’ broader societal and ethical implications in cybersecurity will surely be a controversial topic.

