Safe Superintelligence (SSI), the AI company co-founded by OpenAI co-founder Ilya Sutskever, reached a staggering $32 billion valuation after raising another $2 billion in funding. The latest round was led by Greenoaks with participation from tech giants Alphabet, Nvidia, Andreessen Horowitz, and Lightspeed Venture Partners.
The round ranks among the largest ever raised by an AI startup, particularly striking for a company that has not yet released a public product. SSI's valuation has climbed more than sixfold since its original $1 billion seed round, which valued the company at $5 billion just seven months earlier, in September 2024.
A Mission Focused on Safety
SSI was founded in June 2024 by Sutskever alongside a high-profile team: Daniel Gross, a former Apple AI lead, and Daniel Levy, a well-known researcher who previously worked at OpenAI. SSI's sole focus is building what it terms a "safe superintelligence" — an AI that is smarter than humans but remains aligned with human values.
“We will pursue safe superintelligence in a direct manner, maintaining a singular focus, a clear objective, and a unified product,” Sutskever stated when launching the venture. This mission represents a deliberate departure from the commercial pressures that influence many tech startups.

The company’s website remains minimal, functioning primarily as a placeholder without revealing technical details or timelines for product development. This secretive approach has become increasingly common among frontier AI labs as competition intensifies.
In a surprising move that has raised eyebrows across the industry, SSI has opted to develop its models using Google’s Tensor Processing Units (TPUs) rather than the industry-standard Nvidia GPUs. This choice makes SSI Google Cloud’s largest external TPU customer, despite Nvidia also being among the company’s investors.
The decision to use TPUs rather than GPUs suggests SSI is willing to optimize its infrastructure around its specific research goals, even when that means taking a less conventional path.
Rising From OpenAI Turmoil
Sutskever’s journey to founding SSI follows significant drama at his previous company. He left OpenAI in May 2024, just months after his involvement in a failed attempt to remove CEO Sam Altman in November 2023 — a crisis that nearly destroyed the pioneering AI lab.
While Sutskever later expressed regret for his role in the leadership struggle, his concerns about AI safety have remained consistent and now form the philosophical foundation of SSI. The startup operates with a lean team of approximately 20 employees across dual hubs in Palo Alto and Tel Aviv.
The company has successfully recruited top talent, including respected researcher Dr. Yair Carmon from Tel Aviv University. Sources familiar with SSI describe an organizational structure that intentionally avoids traditional management hierarchies to foster what they call “revolutionary engineering” without bureaucratic obstacles.
Skepticism Amid Massive Investment
SSI's extraordinary valuation exceeds that of most AI startups at comparable stages, surpassing even Anthropic's $18 billion valuation and tracking closer to OpenAI's growth trajectory. Supporters argue that Sutskever's expertise in "superalignment" — ensuring advanced AI systems remain aligned with human intentions — justifies the premium investors are willing to pay.
However, skeptics question whether SSI can deliver on its ambitious goals without eventually facing the same commercialization pressures that have influenced other AI labs. The company has not publicly disclosed technical roadmaps or specific safety frameworks, instead relying heavily on Sutskever’s reputation as a co-inventor of transformative AI architectures.
SSI’s funding highlights an interesting trend in the AI sector: major tech companies increasingly spread their investments across competing AI labs. Both Alphabet and Nvidia now back SSI and Anthropic, while Microsoft maintains its exclusive partnership with OpenAI.
This fragmentation of the AI ecosystem reflects the high-stakes competition to develop the first generally intelligent AI system, with powerful players hedging their bets across multiple ventures.
For now, SSI remains steadfastly focused on its moonshot goal. As Sutskever told Bloomberg: “Our first product will be the safe superintelligence. We will not do anything else until then.”
Whether this singular focus will lead to breakthrough success remains to be seen, but investors are clearly betting billions that Sutskever and his team can deliver on their ambitious vision of creating superintelligent AI that remains beneficial and controllable.