Recursive Superintelligence Launches with $650M from AI Visionaries Socher, Norvig, and Shi



By admin | May 14, 2026 | 5 min read



Richard Socher has long been a prominent name in artificial intelligence, best known for founding the AI search engine You.com and, before that, for his contributions to ImageNet. Now he is joining the wave of research-focused AI startups with Recursive Superintelligence, a San Francisco-based company that emerged from stealth on Wednesday with $650 million in funding. Socher is joined in the new venture by a group of notable AI researchers, including Peter Norvig and Cresta co-founder Tim Shi. Together, they aim to build a recursively self-improving AI model: one that can autonomously identify its own weaknesses and redesign itself to address them, without human intervention. The idea has long been considered a holy grail of AI research.

After the launch, I spoke with Socher over Zoom about Recursive’s distinctive technical approach and why he doesn’t view the new project as a “neolab,” the informal term for a new generation of AI startups that prioritize research over building products. This interview has been edited for clarity and length.

We hear a lot about recursion these days. It seems like a common goal across different labs. What do you see as your unique approach?

Our unique approach is to use open-endedness to achieve recursive self-improvement, which no one has yet accomplished. It’s an elusive goal for many. A lot of people assume it already happens when you simply do auto-research. You can take an AI and ask it to improve something else, like a machine learning system, a letter you’re writing, or whatever it might be. But that’s not recursive self-improvement; that’s just improvement. Our main focus is to build truly recursive, self-improving superintelligence at scale. That means the entire process of ideation, implementation, and validation of research ideas would be automatic. Initially it would automate AI research; eventually, any kind of research, even in physical domains. But it’s particularly powerful when the AI works on itself, developing a new kind of self-awareness of its own shortcomings.
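
To make the ideation, implementation, and validation loop concrete, here is a minimal sketch in Python. It is an illustration under loose assumptions, not Recursive’s actual system: propose_change, evaluate, and the toy weight-vector “model” are hypothetical stand-ins, and the implementation step is collapsed into a random perturbation that a real system would replace with generated code or training changes.

```python
# Minimal sketch of an ideate -> implement -> validate loop.
# All names here are hypothetical placeholders, not Recursive
# Superintelligence's actual API.

import copy
import random


def evaluate(model: dict) -> float:
    """Score the model on a fixed validation benchmark (higher is better)."""
    return sum(model["weights"]) / len(model["weights"])  # toy stand-in


def propose_change(model: dict) -> dict:
    """Ideation: the system proposes a modification to itself.
    Here a random perturbation; in practice, a learned research policy."""
    candidate = copy.deepcopy(model)
    i = random.randrange(len(candidate["weights"]))
    candidate["weights"][i] += random.gauss(0.0, 0.1)
    return candidate


def self_improve(model: dict, iterations: int = 1000) -> dict:
    """Validation: keep a change only if it measurably improves the system.
    The loop closes on itself, which is what makes it recursive rather
    than a one-shot improvement of some external artifact."""
    best_score = evaluate(model)
    for _ in range(iterations):
        candidate = propose_change(model)
        score = evaluate(candidate)
        if score > best_score:  # adopt only validated improvements
            model, best_score = candidate, score
    return model


if __name__ == "__main__":
    model = {"weights": [random.gauss(0.0, 1.0) for _ in range(16)]}
    print(f"score before: {evaluate(model):.3f}")
    improved = self_improve(model)
    print(f"score after:  {evaluate(improved):.3f}")
```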

You used the term “open-ended.” Does that have a specific technical meaning?

It does. In fact, Tim Rocktäschel, one of our co-founders, led the open-endedness and self-improvement teams at Google DeepMind and worked in particular on the world model Genie 3, which is a great example of open-endedness. You can tell it any concept, any world, any agent, and it just creates it, and it’s interactive. In biological evolution, animals adapt to the environment, and then others counter-adapt to those adaptations. It’s a process that can evolve for billions of years, and interesting stuff keeps happening. That’s how we ended up with eyes in our heads. Another example is rainbow teaming, from another paper by Tim. Have you heard of red teaming?

In cybersecurity, it means—

So, red teaming also has to be done in an LLM context. Basically, you try to get the LLM to tell you how to build a bomb, and you want to make sure it doesn’t. Humans can sit there for a long time and come up with interesting examples of what the AI shouldn’t say. But what if you tested this first AI with a second AI, and that second AI now has the task of making the first AI try to say all the possible bad things? Then they can go back and forth for millions of iterations. You can actually allow two AIs to co-evolve: one keeps attacking the other, and it comes up with not just one angle but many different angles, hence the rainbow analogy. Then you can inoculate the first AI, and it becomes safer and safer. This was an idea from Tim Rocktäschel, and it’s now used in all the major labs.
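
The back-and-forth Socher describes can be sketched as a simple attacker/defender loop. The Python below is a loose, hypothetical illustration with the LLM calls stubbed out: attack_llm, defend_llm, and judge are placeholders, and the actual rainbow-teaming work frames this as a quality-diversity search over an archive of attack categories rather than the toy loop shown here.

```python
# Loose illustration of the attacker/defender co-evolution loop described
# above. The model calls are stubs; this is not the real algorithm.

import random

# The "rainbow": distinct attack angles the attacker must cover.
ANGLES = ["jailbreak roleplay", "encoding tricks", "multi-step persuasion"]


def attack_llm(angle: str, archive: list[str]) -> str:
    """Attacker proposes a new adversarial prompt for a given angle,
    mutating a previously successful attack when one exists."""
    seed = random.choice(archive) if archive else f"attack via {angle}"
    return seed + " (variant)"


def defend_llm(prompt: str, patched: set[str]) -> str:
    """Defender answers; it refuses attacks it has been inoculated against."""
    return "refusal" if prompt in patched else "unsafe answer"


def judge(response: str) -> bool:
    """Returns True when the defender's response is unsafe."""
    return response == "unsafe answer"


def co_evolve(rounds: int = 100) -> set[str]:
    archive: dict[str, list[str]] = {a: [] for a in ANGLES}  # one cell per angle
    patched: set[str] = set()  # stands in for fine-tuning the defender
    for _ in range(rounds):
        angle = random.choice(ANGLES)
        prompt = attack_llm(angle, archive[angle])
        if judge(defend_llm(prompt, patched)):
            archive[angle].append(prompt)  # attacker found a new hole...
            patched.add(prompt)            # ...and the defender is patched
    return patched


if __name__ == "__main__":
    print(f"defender inoculated against {len(co_evolve())} attacks")
```

In the real setting, "patching" would mean fine-tuning the defender on the discovered failures, and the per-angle archive is what pushes the attacker toward many different angles instead of one.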

How do you know when it’s done? I suppose it’s never done.

Some of these things will never be done. You can always get more intelligent. You can always get better at programming, math, and so on. There are some bounds on intelligence; I’m actually trying to formalize those right now, but they’re astronomical. We’re very far away from those limits.

As a neolab, it feels like you’re supposed to be doing something that the major labs aren’t doing. Part of the implication is that you don’t think the major labs are going to reach RSI (recursive self-improvement) by doing what they’re doing. Is that fair to say?

I can’t really comment on what they’re doing, but I do think we’re approaching it differently. We really embrace the concept of open-endedness, and our team is entirely focused on that vision. The team has been researching and publishing papers in this space for the last decade. They have a track record of pushing the field forward significantly and of shipping real products. For instance, Tim Shi built Cresta into a unicorn. Josh Tobin was one of the first people at OpenAI and eventually led their Codex and deep research teams. I actually sometimes struggle a little bit with this neolab category. I feel like we’re not just a lab. I want us to become a really viable company, to have amazing products that people love to use, with a positive impact on humanity.

When do you plan to ship your first product?

I’ve thought about that a lot. The team has made so much progress that we may actually move up the timelines from what we initially assumed. But yes, there will be products, and you’ll have to wait quarters, not years.

One of the ideas around recursive self-improvement is that, once we have this sort of system, compute becomes the only important resource. The faster you run the system, the faster it improves, and no outside human activity will really make a difference. So the race just becomes: how much processing power can we throw at this? Do you think that’s the world we’re headed toward?

Compute is not to be underestimated. I think in the future a really important question will be: how much compute does humanity want to spend to solve which problems? Here’s this cancer and here’s that virus; which one do you want to solve first? How much compute do you want to give it? It eventually becomes a matter of resource allocation. It’s going to be one of the biggest questions in the world.



