Elon Musk is angry over OpenAI's direction. We should all agree.

OpenAI has dominated the headlines since the fall of 2022, when ChatGPT took the internet by storm. But many probably don’t realize that OpenAI was originally funded in part by Elon Musk when it was created in 2015 as a non-profit organization. At the time, Musk invested $100 million to help support OpenAI’s original mission of open, democratized artificial intelligence.

Elon Musk announced the creation of OpenAI on Twitter, back in 2015.

The Non-Profit, “AI to Benefit Humanity” OpenAI

At its founding, OpenAI was backed by a group of engineers and prominent Silicon Valley investors such as Peter Thiel, Reid Hoffman, and Sam Altman (then of Y Combinator fame). Altman, who is now the CEO, joined as a Co-Chair along with Musk.

The company’s original mission statement was drastically different from the profit-driven, Microsoft-cozy “Big Tech” posture it has today:

OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.

OpenAI, “Introducing OpenAI” blog post, December 11, 2015

OpenAI switched to a hybrid “capped-profit” model in 2019 and is now worth over $30 billion. Microsoft remains the company’s most significant investor, with over $11 billion committed to date to fund product development and research and to integrate OpenAI’s capabilities into Microsoft offerings. Microsoft even built special-purpose supercomputers to power OpenAI’s models, and it laid off its own ethical AI team as it increasingly doubles down on OpenAI technology.

Microsoft has now announced Copilot, an AI content-generation, data-analysis, and productivity assistant similar to ChatGPT or Bing AI that will power its Microsoft 365 productivity suite for enterprises, students, and consumers alike.

While the original intent was to democratize AI and counter other tech giants, OpenAI has become closed-source and is funded by one of the largest tech giants in the world.

OpenAI: Not So Open Anymore

Since announcing GPT-4, OpenAI has continued to amaze businesses, consumers, and the media alike with the huge technical leap the latest iteration of its generative AI represents.

But many within the AI and computer science communities are increasingly frustrated by OpenAI’s lack of transparency.

Ironically, there doesn’t seem to be anything “open” about OpenAI anymore.

Ben Schmidt, VP of Nomic AI, commented on the lack of transparency and the closed-source nature of OpenAI technologies. (Source: Twitter)
Walid Magdy, professor at the School of Informatics, University of Edinburgh. (Source: Twitter)
Lex Fridman, research scientist at MIT and podcaster, will be interviewing OpenAI CEO Sam Altman. A number of the proposed questions concern OpenAI’s change from a non-profit and its lack of transparency. (Source: Twitter)

Even those within OpenAI admit that being open was not in the company’s best interest. In a statement to The Verge on Wednesday, Ilya Sutskever, OpenAI’s chief scientist and cofounder, addressed the controversy.

“We were wrong. Flat out, we were wrong. If you believe, as we do, that at some point, AI—AGI—is going to be extremely, unbelievably potent, then it just does not make sense to open-source. It is a bad idea…I fully expect that in a few years, it’s going to be completely obvious to everyone that open-sourcing AI is just not wise.”

He added, “at some point it will be quite easy, if one wanted, to cause a great deal of harm with those models. And as the capabilities get higher it makes sense that you don’t want to disclose them.”

A prophetic but alarming statement.

So how does Musk feel about the change in strategy and transparency at OpenAI, the organization he helped fund?

Musk is not happy that OpenAI has switched corporate strategy and direction; the company is now worth over $30B, with over $11B in investment from Microsoft. (Source: Twitter)

“Initially it was created as an open-source nonprofit. Now it is closed-source and for-profit. I don’t have an ownership stake in OpenAI, nor am I on the board, nor do I control it in any way.”

That statement was made back in February 2023. Now his opinion of OpenAI’s direction is even harsher, given the $10 billion investment Microsoft recently committed to OpenAI:

“OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all,” Musk said in a recent statement to Fortune.

Part of the reason Musk helped fund and create OpenAI was to develop AI responsibly, something Musk believes tech giants like Google weren’t doing. “Google was not paying enough attention to AI safety,” he said.

Elon Musk Calls for AI Regulation

So where do we go from here?

Musk believes that artificial intelligence is “far more dangerous than nuclear warheads,” a bold statement in today’s context, given the ongoing threat of nuclear escalation in Russia’s war on Ukraine.

And thus, in Musk’s opinion, it needs regulation.

“I think we need to regulate AI safety, frankly,” Musk said. “It is, I think, actually a bigger risk to society than cars or planes or medicine.”

Musk thinks regulating AI is the most sensible path forward to avoid what everyone fears: a sentient, Skynet-style AI.

“I’m a little worried about the A.I. stuff. We need some kind of, like, regulatory authority or something overseeing A.I. development. Make sure it’s operating in the public interest. It’s quite dangerous technology. I fear I may have done some things to accelerate it.”

Musk calling for AI regulation or oversight of what he describes as a “major problem.” (Source: Twitter)

For now, we’re relying on big tech companies like Google, Microsoft, and OpenAI to adopt AI ethics standards and develop AI solutions responsibly. Even the US Department of Defense has developed its own AI ethics standards.

That is great in principle, but Microsoft laying off its entire AI ethics team is alarming in practice. The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design.

Without independent, third-party oversight, we may never know what tech companies develop and deploy for their customers, or how they monetize consumer data in proprietary ways.

Article updated 3/19/2023 for continuity and flow. Added additional Twitter comments on OpenAI strategy.

Disclaimer: The author of this article is a current employee of Google. This article does not represent the views or opinions of his employer and is not meant to be an official statement for Google, or Google Cloud.

