AI's New Era: Veteran Founders Launch Next-Gen Foundation Models
By admin | Jan 24, 2026 | 4 min read
We are experiencing a distinctive period for AI firms developing their own foundation models. For one, a wave of seasoned professionals who built their reputations at large technology corporations is now embarking on independent ventures. Simultaneously, there are celebrated researchers with deep expertise but unclear commercial goals. There is a genuine possibility that several of these new entities will grow to the scale of an OpenAI, yet there is also space for them to focus on intriguing research without the immediate pressure to monetize. The consequence is that it is becoming difficult to discern which organizations are genuinely pursuing profitability.
To clarify this landscape, I suggest applying a sliding scale to any company creating a foundation model. This five-tier framework does not assess current revenue, but rather the intent to generate it. The goal is to gauge ambition, not achievement. Consider the levels as follows:
- **Level 5:** We are already earning millions of dollars daily.
- **Level 4:** We possess a detailed, multi-phase strategy to become among the wealthiest entities.
- **Level 3:** We have numerous promising product concepts that will be unveiled in due course.
- **Level 2:** We have the preliminary sketch of a potential plan.
- **Level 1:** True fulfillment comes from non-commercial pursuits.
Established leaders like OpenAI, Anthropic, and Google DeepMind (the lab behind Gemini) firmly occupy Level 5. The scale becomes more revealing when applied to the newer generation of labs, which often have grand visions but less transparent ambitions. Importantly, the founders of these labs can essentially select their desired level. The current abundance of AI investment means few will pressure them for a conventional business plan. Even a purely research-oriented lab can attract satisfied investors.
An individual not driven by extreme wealth might find greater contentment operating at Level 2 than at Level 5. Tensions emerge, however, because a lab's position on this scale is often ambiguous, and much of the industry's current turmoil stems from this uncertainty. The unease surrounding OpenAI's transition from a non-profit stemmed from its rapid shift from Level 1 to Level 5. Conversely, one could argue Meta's early AI research was at Level 2, while the corporate ambition was always Level 4.
With this framework in mind, here is an assessment of four prominent contemporary AI labs.
**Humans&** The launch of Humans& was a major event this week and partly inspired this scale. Its founders present a compelling vision for next-generation AI, shifting focus from scaling laws to tools for communication and coordination. Despite positive coverage, the company has been vague about how this translates into marketable products. The team indicates a desire to build products but avoids specifics, mentioning only a future AI workplace tool intended to replace platforms like Slack and Jira and fundamentally redefine their operation. This description is just concrete enough to place Humans& at Level 3.
**Thinking Machines Lab** This lab presents a challenging rating. Typically, a $2 billion seed round raised by OpenAI's former CTO, who oversaw projects like ChatGPT, suggests a specific roadmap. Mira Murati appears methodical, so entering 2026, one might confidently assign TML to Level 4. However, recent events complicate this. The departure of CTO and co-founder Barret Zoph, along with at least five other employees citing concerns about the company's direction, has been significant. Nearly half the founding executives have left within a year. This suggests the initial plan for a world-class lab (Level 4) may have been less firm than believed, potentially revealing a reality closer to Level 2 or 3. While not yet definitive, this trend points toward a potential downgrade.
**World Labs** Founded by Fei-Fei Li, a towering figure in AI research known for establishing the ImageNet challenge, World Labs initially seemed like a Level 2 endeavor when it announced a $230 million raise in 2024. However, the AI field evolves rapidly. Since then, the company has released both a comprehensive world-generating model and a commercial product built upon it. Concurrently, clear demand for world-modeling has emerged from industries like gaming and special effects, with no major labs offering direct competition. This trajectory strongly resembles a Level 4 company, possibly approaching Level 5.
**Safe Superintelligence (SSI)** Founded by former OpenAI chief scientist Ilya Sutskever, SSI appears to be a quintessential Level 1 startup. Sutskever has meticulously shielded the company from commercial pressures, even declining an acquisition offer from Meta. There are no product cycles, and beyond its foundational model research, no product seems planned. With this purely scientific pitch, SSI raised $3 billion. All signs indicate Sutskever's primary focus is the science of AI.
Nonetheless, the AI landscape changes quickly, and it would be unwise to completely exclude SSI from future commercial activity. In a recent interview, Sutskever suggested two scenarios that could trigger a pivot: if research timelines become very long, or if the value of deploying powerful AI to impact the world becomes compelling. In essence, if the research progresses extremely well or poorly, SSI could rapidly ascend several levels on this scale.