May 23, 2023

OpenAI Raises Alarm on Superintelligence and AI’s Potential to Surpass Human Capabilities in the Next Decade

In Brief

OpenAI issues a call for regulation of superintelligence, highlighting the need for governance in light of AI’s rapid advancements.

According to OpenAI, AI systems are projected to exceed expert skill levels and the productivity of the largest corporations within a decade.

OpenAI emphasizes the importance of public oversight and democratic control for powerful AI systems.


OpenAI, the creator of ChatGPT, has made a thought-provoking call for the regulation of superintelligence, drawing parallels to nuclear energy regulation. In a recent blog post, OpenAI highlighted the potential implications of AI’s rapid advancements and emphasized the pressing need for governance in this evolving landscape. The company stated that within ten years, AI systems could surpass experts in skill and the largest corporations in productivity.

“We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination,” Sam Altman, Greg Brockman, and Ilya Sutskever from OpenAI emphasized. 

Superintelligence describes an entity whose capabilities exceed overall human intelligence, or human ability in specific domains. According to the authors, AI superintelligence will wield an unparalleled level of power, with both positive and negative consequences.

The Development and Risks of the Inevitable Superintelligence

OpenAI has identified three ideas it considers pivotal to navigating the development of superintelligence successfully: coordination among leading development efforts, the establishment of an international authority akin to the International Atomic Energy Agency (IAEA), and the development of the technical capability to make superintelligence safe.

While OpenAI acknowledges that today’s AI systems come with risks, it considers them comparable to those of other internet technologies, and Altman, Brockman, and Sutskever express confidence that society’s current approaches to managing these risks are suitable. Their main concern is future systems with unprecedented power.

“By contrast, the systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar,” the blog post read.

The authors argue that powerful AI systems need public oversight and democratic control. They also explain why OpenAI is building this technology at all: they believe it will lead to a better world, and they see attempting to stop it as carrying risks of its own. AI is already helping in areas such as education, creativity, and productivity, as well as driving broader economic growth.

OpenAI also considers it difficult and risky to stop superintelligence from being created: the technology offers considerable benefits, its cost falls every year, the number of actors building it keeps growing, and it is inherently part of the company’s technological path.

Ilman Shazhaev, an AI tech entrepreneur and co-founder of Farcana Labs, shared a few comments on the news. Projections indicate that, if not properly managed, superintelligence could become one of humanity’s most destructive inventions. However, conversations about deploying the technology remain divisive, as it has not yet been developed, and pushing to halt development based on feared predictions may deprive humanity of the opportunities the new technology might have in store.

“OpenAI’s decentralized governance approach can help maintain its broad safety. With the right regulations, the program could be shut down in the event it poses a threat. Should these safeguards be in place, then Superintelligence may be an innovation worth exploring,” said Shazhaev. 

By openly discussing its views on AI superintelligence and proposed regulatory measures, OpenAI appears to be fostering informed discussion and inviting diverse perspectives.

Sam Altman strongly believes in making AI widely available to the public. Acknowledging that it is impossible to anticipate every problem in advance, he advocates addressing issues at the earliest possible stage. At the same time, Altman emphasizes the importance of independent audits of systems like ChatGPT before release, and acknowledges the possibility of measures such as limiting the pace at which new models are created or establishing a committee to assess the safety of AI models before they reach the market. Notably, Altman predicts that the quantity of intelligence in the universe will double every 18 months.



About The Author

Agne is a journalist who covers the latest trends and developments in the metaverse, AI, and Web3 industries for the Metaverse Post. Her passion for storytelling has led her to conduct numerous interviews with experts in these fields, always seeking to uncover exciting and engaging stories. Agne holds a Bachelor’s degree in literature and has an extensive background in writing about a wide range of topics, including travel, art, and culture. She has also volunteered as an editor for an animal rights organization, where she helped raise awareness about animal welfare issues. Contact her at [email protected].
