What did Ilya see?
This question became a meme within the AI community last year after Ilya Sutskever, the former Chief Scientist and co-founder of OpenAI, co-led a failed attempt to oust CEO Sam Altman.
Well, whatever it was, it certainly has Sutskever convinced that highly advanced AI systems are on the horizon. Today, a little over a month since he departed OpenAI, Sutskever has announced the launch of his new venture, Safe Superintelligence Inc. (SSI). The company was incorporated in Delaware on June 6 and will have offices in Palo Alto and Tel Aviv.
SSI is being described as the world's first "straight-shot superintelligence lab." Reportedly, it will focus solely on building a safe and powerful AI system, and has no near-term plans to sell AI products or services.
"This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then," Sutskever explained in an exclusive interview with Bloomberg. "It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race."
Joining Sutskever in this endeavor are two co-founders: Daniel Gross, a prominent investor and former Apple Inc. AI lead, and Daniel Levy, an AI researcher who worked alongside Sutskever at OpenAI.
Maginative's Take
SSI's emphasis on prioritizing safety over short-term commercial gains is a notable stance. This singular focus immediately sets it apart from other AI research labs, which often find themselves juggling multiple projects and products.
However, the economic realities of the AI industry, including ever-growing computational demands and the need for substantial funding, make SSI a gamble for investors. And while Sutskever and his team are confident they can secure the capital needed to get started, it remains to be seen how long that appetite will last.
Additionally, the very nature of their endeavor, creating a superintelligence, will invite scrutiny and skepticism. The state of the art in AI today is still far from achieving artificial general intelligence (AGI), let alone superintelligence, and the concept of "safety" in AI remains elusive and open to interpretation.
Despite these challenges, the pedigree and expertise of SSI's founding team cannot be overlooked. Sutskever, Gross, and Levy bring a wealth of experience and knowledge to the table, and their decision to embark on this ambitious project is a testament to their belief in the potential of superintelligence and the importance of developing it safely.