Campbell Brown’s Forum AI Launches to Fix AI’s Accuracy Crisis
By admin | May 14, 2026 | 3 min read
Campbell Brown has dedicated her career to the pursuit of accurate information—first as a celebrated television journalist, and later as Facebook’s first and only head of news partnerships. Now, as artificial intelligence reshapes how people access information, she sees a familiar pattern emerging. This time, however, she isn’t waiting for others to step in.
“The idea is to find the world’s foremost experts, have them architect benchmarks, then train AI judges to evaluate models at scale,” Brown explains. For Forum AI’s work in geopolitics, she has recruited prominent figures including Niall Ferguson, Fareed Zakaria, former Secretary of State Antony Blinken, former House Speaker Kevin McCarthy, and Anne Neuberger, who led cybersecurity policy in the Biden administration. The goal is roughly 90% consensus between the AI judges and these human experts—a threshold she says Forum AI has already reached.
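Forum AI has not published its scoring internals, but the “90% consensus” figure can be read as a simple agreement rate between an AI judge’s verdicts and expert verdicts on the same benchmark items. A minimal sketch, with entirely hypothetical data and function names:

```python
# Hypothetical sketch: the "consensus" metric read as a plain agreement
# rate between an AI judge and a human expert on shared benchmark items.
# Forum AI's actual methodology is not public; this is illustrative only.

def agreement_rate(judge_verdicts, expert_verdicts):
    """Fraction of items where the AI judge's label matches the expert's."""
    if len(judge_verdicts) != len(expert_verdicts):
        raise ValueError("verdict lists must be the same length")
    matches = sum(j == e for j, e in zip(judge_verdicts, expert_verdicts))
    return matches / len(judge_verdicts)

# Ten made-up pass/fail verdicts on geopolitics benchmark items.
judge  = ["pass", "fail", "pass", "pass", "fail",
          "pass", "pass", "fail", "pass", "pass"]
expert = ["pass", "fail", "pass", "fail", "fail",
          "pass", "pass", "fail", "pass", "pass"]

print(agreement_rate(judge, expert))  # → 0.9
```

In practice, evaluators often prefer chance-corrected statistics such as Cohen’s kappa over raw agreement, since two raters who mostly say “pass” will agree often by luck alone.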
Brown traces the origin of Forum AI, founded 17 months ago in New York, to a specific moment. “I was at Meta when ChatGPT was first released publicly,” she recalled. “I remember really shortly after realizing this is going to be the funnel through which all information flows. And it’s not very good.” The implications for her own children made the moment feel almost existential. “My kids are going to be really dumb if we don’t figure out how to fix this,” she remembers thinking. What frustrated her most was that accuracy didn’t seem to be anyone’s priority. Foundation model companies, she said, are “extremely focused on coding and math,” while news and information are more difficult to handle. But harder, she argued, doesn’t mean optional.
When Forum AI began evaluating leading models, the results were far from encouraging. She cited Gemini pulling from Chinese Communist Party websites “for stories that have nothing to do with China,” and noted a left-leaning political bias across nearly all models. Subtler failures are also widespread, she said, including missing context, missing perspectives, and straw-manning arguments without acknowledgment. “There’s a long way to go,” she said. “But I also think that there are some very easy fixes that would vastly improve the outcomes.”
Brown spent years at Facebook witnessing what happens when a platform optimizes for the wrong metrics. “We failed at a lot of the things we tried,” she told Fernholz. The fact-checking program she built no longer exists. The lesson, even if social media has ignored it, is that optimizing for engagement has been detrimental to society and left many people less informed. Her hope is that AI can break that cycle. “Right now it could go either way,” she said; companies could give users what they want, or they could “give people what’s real and what’s honest and what’s truthful.” She acknowledged that the idealistic version—AI optimizing for truth—might sound naive. But she believes enterprise could be an unlikely ally. Businesses using AI for credit decisions, lending, insurance, and hiring care about liability, and “they’re going to want you to optimize for getting it right.”
That enterprise demand is also what Forum AI is betting its business on, though turning compliance interest into consistent revenue remains a challenge. Much of the current market is still satisfied with checkbox audits and standardized benchmarks that Brown considers inadequate. The compliance landscape, she said, is “a joke.” When New York City passed the first law requiring bias audits of AI hiring tools, the state comptroller found that more than half of those audited had violations that went undetected. Real evaluation, she said, requires domain expertise to work through not just known scenarios but edge cases that “can get you into trouble that people don’t think about.” And that work takes time. “Smart generalists aren’t going to cut it.”
Brown—whose company raised $3 million last fall in a round led by Lerer Hippeau—is uniquely positioned to describe the disconnect between the AI industry’s self-image and the reality for most users. “You hear from the leaders of the big tech companies, ‘This technology is going to change the world,’ ‘it’s going to put you out of work,’ ‘it’s going to cure cancer,’” she said. “But then a normal person who’s just using a chatbot to ask basic questions is still getting a lot of slop and wrong answers.”
Trust in AI sits at extraordinarily low levels, and she believes that skepticism is, in many cases, justified. “The conversation is sort of happening in Silicon Valley around one thing, and a totally different conversation is happening among consumers.”