Ilya Sutskever, co-founder and former chief scientist of OpenAI, has launched a new company laser-focused on building safe artificial intelligence (AI). Safe Superintelligence Inc. (SSI) aims to create a single product: a powerful and secure AI system.
In a Wednesday post, SSI outlined its approach. The company emphasizes developing “safety and capabilities in tandem,” ensuring progress doesn’t outpace safety measures. It criticizes the external pressures faced by AI teams at large tech companies, arguing that its “singular focus” lets it avoid distractions and prioritize safety above all else.
“Our business model shields safety, security, and progress from short-term financial pressures,” the announcement states. “This allows us to scale responsibly.”
Joining Sutskever are co-founders Daniel Gross (former Apple AI lead) and Daniel Levy (ex-OpenAI technical staff). Sutskever’s departure from OpenAI in May followed his reported involvement in the brief ousting of CEO Sam Altman. That episode, coupled with the resignations of AI researcher Jan Leike and policy researcher Gretchen Krueger (both citing safety concerns), cast a spotlight on OpenAI’s internal priorities.
While OpenAI forges partnerships with tech giants like Apple and Microsoft, SSI is taking a different path. In a Bloomberg interview, Sutskever stated that SSI’s sole focus is creating safe superintelligence and “nothing else” until that goal is achieved.