CISA Director Jen Easterly: 'AI may be most powerful weapon of our time'

In a speech at the Vanderbilt University Security Summit, Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly warns that artificial intelligence (AI) could be used by adversaries as a powerful weapon to carry out cyberattacks.

In fact, AI could be both “the most powerful capability of our time” and “the most powerful weapon of our time,” according to Easterly.

“And that’s not even the worst case scenario,” Easterly continued.

Easterly cites the rapid advances in technologies such as OpenAI’s ChatGPT, which can generate human-quality text, as a cause for concern. Easterly believes ChatGPT can easily be “repurposed by adversaries to carry out cyberattacks.”

AI companies should break the “decades-long vicious cycle of technological innovation at the expense of security,” according to Easterly. The potential benefits of AI also come with severe threats: a “pretty steep but not existential threat…AI is different,” she added.

Easterly advocates for a “smart use of regulation”, believing that “technology does not have to come at the expense of safety or security.”

OpenAI GPT-5’s potential impact on the 2024 US Presidential Election

She also called on the private sector to work with the government to develop security measures for AI systems.

The cybersecurity threats posed by AI systems such as ChatGPT and GPT-4 include generating misinformation, creating malware, crafting phishing email campaigns, and even confidently providing incorrect information.

Easterly also points out that OpenAI will likely release GPT-5 before the 2024 election. The impact GPT-5 and other large language models may have on that election is impossible to predict, but many fear a repeat, or worse, of the Cambridge Analytica-era social media misinformation campaigns leveraged to help secure a win for Donald Trump.

Many fear that the AI tools available today can produce convincing deepfakes and help automate social media misinformation campaigns that, only a few years ago, required significantly more time, money, and effort to execute.

“While one person will use this technology to plan an extravagant dinner party, another person will use the capability to plan a cyberattack or terror attack or to deploy shockingly realistic deep fakes,” Easterly said.

A temporary pause on AI development? Not while China advances

Despite many calls across the industry to implement a “temporary pause” on US-based AI development for further scrutiny, it increasingly seems impossible to enact such a pause across Silicon Valley giants and the broader AI community.

An open letter signed by over 1,100 individuals, including AI experts, scientists, CEOs, and even Elon Musk, advocates for the pause.

The group argues that the six-month pause should be used to “develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”

But Department of Defense (DoD) CIO John Sherman sees no value in, or possibility of, an AI pause with China gunning for dominance.

“Some have argued for a six-month pause — which I personally don’t advocate towards, because if we stop, guess who’s not going to stop: potential adversaries overseas. We’ve got to keep moving,” Sherman said.

AI technologies such as OpenAI’s ChatGPT and Google’s Bard have been quickly rolled out to the public for experimental testing. Generative AI is now being integrated into business productivity software such as Microsoft 365 and Google Workspace in what some refer to as an AI arms race.

The US intelligence community is also assessing generative AI and its capabilities to understand the technology's impact.

Any sort of pause in AI development, implementation, or experimentation seems out of reach.

Disclaimer: The author of this article is a current employee of Google. This article does not represent the views or opinions of his employer and is not meant to be an official statement for Google, or Google Cloud.

