OpenAI co-founder Ilya Sutskever has announced a new artificial intelligence company dedicated to developing “safe superintelligence.” The company, Safe Superintelligence Inc. (SSI), has two other co-founders: Daniel Levy, a former OpenAI researcher, and Daniel Gross, who previously led AI efforts at Apple.
Safe Superintelligence Inc. believes the emergence of “superintelligence” is imminent and that ensuring its safety for humanity is the most important technical challenge of this era. The company’s mission is to operate as a research lab focused exclusively on safe superintelligence, advancing its technology while keeping safety the top priority.
“We are assembling a lean, elite team of the world’s best engineers and researchers, dedicated to building safe superintelligence and nothing else,” Safe Superintelligence Inc. stated in a post on X.
According to Bloomberg, Safe Superintelligence Inc. will operate as a pure research organization: it has no plans to sell commercial AI products or services in the near term, and its sole aim is to build a safe, powerful AI system.
In an interview with Bloomberg, Sutskever declined to name the company’s financial backers or disclose how much it has raised, though Gross said fundraising “will not” be a problem. Safe Superintelligence Inc. is headquartered in Palo Alto, California, with an office in Tel Aviv, Israel.
Sutskever left OpenAI in May 2024 following internal controversy at the company. He was a central figure in the November 2023 boardroom “coup” in which the board voted to dismiss OpenAI CEO Sam Altman. Sutskever favors pure scientific research and technological innovation, with an emphasis on AI safety for the public rather than commercial interests alone; Altman, by contrast, excels at commercializing technology and promoting it in the market, turning OpenAI’s research into tangible products and services. The two ultimately parted ways over these differences in strategic direction and technical priorities.
Furthermore, Vox reported that OpenAI researchers Jan Leike and Gretchen Krueger recently left the company over concerns about AI safety, and that at least five “safety-conscious employees” have departed OpenAI since November 2023.
With the establishment of Safe Superintelligence Inc., Sutskever has once again drawn attention to AI safety. Building a powerful yet safe AI system is a major challenge for technological innovation, and a critical preparation that must be in place before AI becomes woven into everyday human life.
Sources:
CryptoSlate, Bloomberg, Vox