OpenAI’s Superalignment Team Struggles to Control ‘Superintelligent’ AI Amid Resource Shortage
OpenAI’s Superalignment team, led by co-founder and chief scientist Ilya Sutskever, was established to develop ways to steer, regulate, and govern superintelligent AI systems. But despite being promised 20% of the company’s computing resources, the team faced major obstacles: its requests for compute were often denied, delaying its critical work.
This issue, among others, prompted several team members to resign, including co-lead Jan Leike, a former DeepMind researcher who worked on ChatGPT, GPT-4, and ChatGPT’s predecessor, InstructGPT. Leike publicly explained his reasons for resigning, stressing the need to prioritize safety, security, and alignment in AI development.
OpenAI formed the Superalignment team last July with the aim of solving the core technical challenges of controlling superintelligent AI within four years. Despite publishing safety research and channeling grants to outside researchers, the team found itself fighting for more upfront investment.
That investment, the team argued, was critical to OpenAI’s mission of developing superintelligent AI that benefits humanity. Leike said that safety culture and processes had taken a backseat to product launches, emphasizing the inherent danger of building smarter-than-human machines.