Ilya Sutskever, a renowned figure in artificial intelligence (AI) and a co-founder of OpenAI, has launched a new venture dedicated to developing safe AI. His new company, Safe Superintelligence Inc. (SSI), is a lab focused solely on building safe superintelligent systems.
SSI, a U.S.-based company, reflects Sutskever's long-standing commitment to the field, rooted in his graduate studies under AI pioneer Geoffrey Hinton at the University of Toronto. There, Sutskever, Hinton, and Alex Krizhevsky developed AlexNet, a groundbreaking neural network for image recognition; Google acquired their startup, DNNResearch, in 2013.
Joining Sutskever in this new endeavor are Daniel Levy, a former technical staff member at OpenAI, and Daniel Gross, previously Apple's AI lead. According to Sutskever’s LinkedIn, he will serve as both co-founder and chief scientist at SSI.
In a statement on its newly launched website, SSI introduced itself as “the world’s first dedicated SSI lab, with one goal and one product: a safe superintelligence.” The company emphasized that its narrow focus allows it to avoid distraction from management overhead and product cycles, ensuring that safety, security, and progress are insulated from short-term commercial pressures.
Superintelligence refers to a form of AI that surpasses human intelligence. Despite its theoretical benefits, numerous AI experts, including Canadian pioneers like Hinton and Yoshua Bengio, have expressed concerns about the potential dangers of advanced AI systems. They have consistently warned about the existential risks these systems could pose to humanity.
OpenAI, Sutskever’s former company, has also acknowledged these concerns. In 2023 it established an internal Superalignment team, co-led by Sutskever, dedicated to controlling potential superintelligent AI systems. However, TechCrunch reported in May that the team had been denied the computing resources it was promised, and OpenAI has since disbanded it.
OpenAI has experienced a wave of departures among employees focused on AI safety. The group includes Sutskever, who left in May; Jan Leike, who co-led the Superalignment team and was instrumental in developing ChatGPT and GPT-4; and Daniel Kokotajlo, who publicly said he had lost trust in OpenAI’s leadership and its ability to manage AI safety risks.
Sutskever’s final months at OpenAI were marked by controversy. He took part in the board’s decision to remove CEO Sam Altman in November 2023, but later reversed course, publicly expressing regret and backing Altman’s return to the position.
In its mission statement, SSI stressed its commitment to being a safety-first organization. The company aims to “advance capabilities as fast as possible while ensuring our safety always remains ahead.”
With SSI, Sutskever and his team are stepping into a crucial role in the AI community. By prioritizing safety in the development of superintelligence, they aim to balance the rapid advancement of AI with the imperative to mitigate potential risks. As AI continues to evolve, SSI’s focused approach could set new standards for the responsible development of powerful AI systems.