Flapping Airplanes Launches With $180M to Pioneer Data-Efficient AI Training
By admin | Feb 16, 2026 | 8 min read
The landscape of AI research has seen a surge of new, ambitious labs in recent months, and Flapping Airplanes stands out as a particularly intriguing entrant. Led by a young, inquisitive founding team, the lab is dedicated to discovering methods for training artificial intelligence with far less data, a pursuit that could reshape both the economics and the capabilities of AI models. A substantial $180 million seed round gives it significant resources to explore this path. I recently connected with the lab's three co-founders, brothers Ben and Asher Spector and Aidan Smith, to discuss why now is a compelling time to launch an AI research lab and why the human brain continually informs their thinking.
My first question centered on timing. Given the immense resources already deployed by established entities like OpenAI and DeepMind, the competitive landscape appears formidable. What made this moment feel right for starting a new foundation model company?
Ben emphasized the vast frontier of unexplored work. While acknowledging the spectacular advances of the past decade, he questioned whether current approaches represent the full scope of possibility. After careful consideration, the team concluded there is much more to achieve. They identified data efficiency as the critical problem to investigate, noting a significant gap: current frontier models are trained on nearly the entirety of human knowledge, whereas humans themselves learn effectively with far less. Their venture is a concentrated bet on three convictions: that the data efficiency problem is a crucial and tractable new direction, that solving it will be highly valuable commercially and beneficial for the world, and that a creative, somewhat inexperienced team is ideally suited to re-examine these challenges from first principles.
Aidan agreed, stating they don't view themselves as direct competitors to other labs because they are tackling a fundamentally different set of problems. He highlighted the stark contrast between how the human mind learns and how transformer-based models operate. Large language models possess an incredible capacity for memorization and breadth of knowledge but are slow to acquire new skills, requiring "rivers and rivers of data" to adapt. The algorithms in the brain are fundamentally different from gradient descent and contemporary AI training techniques. This insight is why they are assembling a new cohort of researchers to address these issues and rethink the AI space.
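For concreteness, the training procedure Aidan is contrasting the brain with is, at its core, stochastic gradient descent, in which every parameter update consumes a fresh batch of data:

$$\theta_{t+1} = \theta_t - \eta \, \nabla_\theta \mathcal{L}(\theta_t; \mathcal{B}_t)$$

where $\theta$ are the model's parameters, $\eta$ the learning rate, and $\mathcal{B}_t$ a fresh mini-batch. Acquiring a skill takes many such updates, and every update costs data; hence the "rivers and rivers of data."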
Asher added that the scientific question itself is deeply fascinating: why are the intelligent systems we've built so different from human cognition? Understanding this difference could lead to better systems. He also stressed the commercial and practical viability of this research. Many critical domains, such as robotics or scientific discovery, are highly data-constrained. Even in enterprise applications, a model that is a million times more data-efficient would be far easier to integrate into the economy. The prospect of taking a fresh perspective to create vastly more efficient models and exploring their potential applications was a major motivator.
This led to a question about the company's philosophical direction, hinted at by its name, Flapping Airplanes. Given Aidan's background at Neuralink, a company deeply focused on the brain, does the team pursue a more neuromorphic approach to AI?
Aidan clarified that he views the brain as an "existence proof": evidence that algorithms beyond the current orthodoxy are possible. The brain also operates under severe constraints; a neuron needs roughly a millisecond to fire an action potential, a span in which a modern processor executes millions of operations. An approach superior to both the brain and the transformer therefore likely exists. The team is inspired by the brain's mechanisms but not constrained by them.
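To put rough numbers on that constraint (the figures are mine, assuming a 3 GHz processor, not Aidan's):

$$3 \times 10^{9}\ \tfrac{\text{cycles}}{\text{s}} \times 10^{-3}\ \text{s} = 3 \times 10^{6}\ \text{cycles per action potential}$$

Three million clock cycles of silicon work in the time a single neuron takes to fire once.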
Ben expanded on this, referencing their name directly. He analogized current AI systems to large jetliners like Boeing 787s. The goal is not to build biological birds but to create a novel "flapping airplane." From a computer systems perspective, the constraints of biological wetware and silicon are so different that the resulting intelligent systems should not be expected to look identical. Different trade-offs regarding compute cost and data locality will lead to different architectures. However, this doesn't mean we shouldn't draw inspiration from the brain to improve our own systems.
The conversation shifted to the apparent freedom for new labs to prioritize pure research over immediate product development, a notable characteristic of this generation of AI companies.
Asher admitted he cannot provide a concrete timeline for commercialization. The team is fundamentally searching for truth and doesn't yet have all the answers. However, all founders have commercial backgrounds and are genuinely excited about eventually bringing technology to market, believing it's beneficial to put valuable creations into the hands of users. The key is starting with research; pursuing large enterprise contracts from the outset would be a distraction from doing the foundational work.
Aidan added that they aim to explore radically different ideas, acknowledging that such explorations sometimes yield results worse than the current paradigm. They are investigating a different set of trade-offs, hoping they will prove superior in the long run.
Ben stressed that startups must focus intensely on their most valuable task. For now, that is solving fundamental problems. He is optimistic that meaningful progress could soon allow them to "touch grass in the real world" and learn from its feedback. The recent shift in financing economics enables companies to maintain this deep focus for longer periods, which he believes is crucial for producing truly differentiated work.
Given the clear investor enthusiasm—evidenced by the $180 million seed round for a young, new team—I asked about their experience navigating the fundraising process.
Ben described it as a mixture of expectation and discovery. While the market's heat was no secret, one never knows how investors will respond to specific ideas. The fundraising process itself provided valuable feedback, leading them to refine their priorities and commercialization timelines. They were somewhat surprised by how strongly their message resonated, feeling fortunate to find investors who said, "This is exactly what we've been looking for."
Aidan observed a growing "thirst for the age of research," and they increasingly see themselves positioned as the entity to pursue that era and test radical ideas.
With the enormous compute costs associated with scaling foundation models, I inquired how much compute limitations might affect their runway.
Ben explained a paradoxical advantage of fundamental research: testing radical new ideas is often cheaper than incremental work. Incremental improvements require expensive scaling to see if benefits persist, whereas a fundamentally new architecture or optimizer idea might fail quickly at small scale, saving resources. Scale remains an important tool for them, but their work allows them to test many ideas at a small scale before considering larger deployments.
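A minimal sketch of the fail-fast screening loop Ben describes, assuming PyTorch: pit a candidate idea against a baseline at tiny scale and discard it quickly if it can't keep up. This is purely illustrative; the "ideas" here are off-the-shelf optimizers standing in for actual research directions.

```python
import torch
import torch.nn as nn

def small_scale_screen(make_optimizer, steps=500, seed=0):
    """Train a tiny MLP on synthetic data; return the final loss."""
    torch.manual_seed(seed)
    X = torch.randn(1024, 32)
    y = torch.randn(1024, 1)
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = make_optimizer(model.parameters())
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

# Screen a candidate against a baseline; only winners earn a larger run.
baseline = small_scale_screen(lambda p: torch.optim.AdamW(p, lr=1e-3))
candidate = small_scale_screen(lambda p: torch.optim.SGD(p, lr=1e-3))
print(f"baseline={baseline:.4f} candidate={candidate:.4f}",
      "-> scale up" if candidate < baseline else "-> discard")
```

The economics Ben describes fall out of this shape: each small-scale screen is cheap, so a fundamentally new idea that fails does so before any expensive compute is committed.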
Asher succinctly captured their view: "You should be able to use all the internet. But you shouldn't *need* to." They find it perplexing that achieving human-level intelligence seems to require the entire internet's data.
This prompted a question about the potential outcomes of more data-efficient training. Would it lead to better out-of-distribution generalization, or models that master tasks with less experience?
Asher outlined three scientific hypotheses. First, current models exist on a spectrum between statistical pattern matching and deep understanding. Training on less data might force models toward deeper understanding, potentially making them more intelligent and better at reasoning, even if they know fewer facts. Second, it could vastly improve post-training efficiency, allowing a model to adapt to new domains with just a few examples. Third, it might unlock entirely new verticals for AI, such as robotics or scientific discovery, where limited data, not hardware, is the current bottleneck.
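One toy way to quantify the second hypothesis (my construction, not the team's): count how many labeled examples a learner needs before it clears a target score on a new task. A sketch with scikit-learn on synthetic data:

```python
# Toy sample-efficiency probe: the smallest number of labeled examples
# a learner needs to reach a target held-out accuracy. Hypothetical
# metric, illustrated on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, y_train = X[:1000], y[:1000]
X_test, y_test = X[1000:], y[1000:]

def examples_to_threshold(threshold=0.85):
    """Return the smallest training-set size whose held-out accuracy
    reaches `threshold`, or None if even 1000 examples fall short."""
    for n in (4, 8, 16, 32, 64, 128, 256, 512, 1000):
        if len(np.unique(y_train[:n])) < 2:
            continue  # need both classes present to fit a classifier
        clf = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
        if clf.score(X_test, y_test) >= threshold:
            return n
    return None

# A more data-efficient learner clears the bar with fewer examples.
print("examples needed:", examples_to_threshold())
```

Under this framing, Asher's second hypothesis is that a data-efficient foundation model would push this number down to a handful of examples for a new domain.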
Ben expanded on the broader impact. Beyond being a deflationary technology that automates tasks, he finds a more exciting vision: AI enabling new sciences and technologies that humans aren't smart enough to conceive. Achieving this requires models to be on the "creativity side of the spectrum," capable of true generalization rather than just data interpolation. He is mission-oriented around enabling AI to do things humans fundamentally couldn't do before.
This perspective led to a question about their stance on AGI and generalization.
Asher expressed uncertainty about the term "AGI" but acknowledged capabilities are advancing rapidly with tremendous economic value being created. He does not believe we are close to a "God-in-a-box" or a near-term singularity where humans become obsolete. He agrees with Ben's initial point: it's a big world with a lot of work to do, and they are excited to contribute.
Given the recurring brain analogy, I noted that comparing LLMs to the human brain seems more relevant than comparing them to earlier deterministic computers.
Aidan emphasized that the brain is not the ceiling but the floor. He sees no evidence that the brain is an unknowable system; it follows physical laws and operates under constraints. Therefore, we should expect to create capabilities that are much more interesting, different, and potentially better than the brain in the long run.
Asher agreed the brain is a relevant comparison because it illustrates the vastness of the unexplored space, preventing us from thinking "we're almost done."
Ben reiterated that the goal is not necessarily to be better, but to be *different*. Different systems will have different trade-offs, excelling in some areas at the cost of others. A world with a diversity of fundamental AI technologies will allow for more effective and rapid diffusion of AI across various domains.
The team has also distinguished itself through a unique hiring approach, often recruiting very young talent, sometimes still in college or high school. I asked what qualities signal a promising candidate.
For Aidan, it's when someone "dazzles" you with new ideas and a thinking style unburdened by the context of thousands of existing research papers. Creativity is the paramount quality.
Ben's primary signal is whether a candidate teaches him something new during a conversation. That suggests they can bring novel insights to the company's work. His experience co-founding an incubator showed him that young people can compete at the highest levels of industry; a major unlock is simply realizing it's possible. While they value experience and have hired seasoned professionals, their key criterion is a willingness to change the paradigm and imagine entirely new systems.
I then wondered how different the resulting AI systems might actually be, as it's easier to imagine a 20% improvement than a completely alien capability.
Asher recalled strange emergent capabilities in base models like GPT-4, such as identifying an author from a text snippet. Future models will be smarter in even stranger, less fathomable ways. When seeking 1000x wins in data efficiency, one should expect similarly unknowable, "alien" changes in capabilities.
Ben broadly agreed but tempered expectations regarding how end-users might experience these advances, as raw capabilities are often shaped for usability. However, he affirmed their research agenda is fundamentally about building capabilities quite different from what exists today.
Finally, I asked how people can engage with Flapping Airplanes.
Asher shared two email addresses: `hi@flappingairplanes.com` for general contact and `disagree@flappingairplanes.com` for substantive critiques, noting they've had engaging conversations from the latter. He also emphasized they are actively seeking exceptional people who want to change the field and the world.
Ben added that unorthodox backgrounds are welcome; candidates don't need multiple PhDs. They are genuinely looking for individuals who think differently.