OpenAI forms new team to address ‘superintelligent’ AI systems

OpenAI, the company behind the popular AI chatbot ChatGPT, has announced that it is creating a team to manage the risks associated with superintelligent AI systems. In a blog post on July 5, the company said the new team will be tasked with steering and controlling AI systems that are much smarter than humans. While OpenAI believes superintelligence will be a groundbreaking technology capable of solving many problems, it acknowledges the dangers it poses: the immense power of superintelligence could lead to the disempowerment or even extinction of humanity. OpenAI anticipates that superintelligence could become a reality within this decade.

As part of the effort, the company will dedicate 20% of its computing power to the initiative, with the goal of developing a human-level automated alignment researcher that would help keep superintelligence safe and aligned with human intent. Ilya Sutskever, OpenAI’s chief scientist, and Jan Leike, the research lab’s head of alignment, will co-lead the initiative, and OpenAI has invited machine learning researchers and engineers to join the team.

In related news, governments worldwide are considering measures to regulate the development, deployment, and use of AI systems. The European Union has made significant progress in this regard, with the European Parliament passing the EU AI Act on June 14. Among other provisions, the act would require tools like ChatGPT to disclose all AI-generated content. However, the legislation still needs to clear further negotiations before it can take effect, and it has drawn criticism from AI developers concerned about its potential impact on innovation. OpenAI CEO Sam Altman visited Brussels in May to discuss with EU regulators the potential negative effects of excessive regulation.

In the United States, lawmakers have introduced the National AI Commission Act, which aims to establish a body responsible for determining the country’s approach to AI. U.S. regulators have also signaled their intent to oversee the technology: Senator Michael Bennet drafted a letter on June 30 urging major tech companies, including OpenAI, to label AI-generated content.
