
Arcee AI Launches Trinity: A Massive 400B-Parameter Open Source Foundation Model



By admin | Jan 28, 2026 | 6 min read



A prevailing view in the tech sector is that the AI model landscape is already settled, with dominance belonging to major players like Google, Meta, Microsoft, and Amazon, alongside their preferred model creators such as OpenAI and Anthropic. However, the small 30-person startup Arcee AI challenges this assumption. The company has launched a new, permanently open-source foundation model named Trinity, released under the permissive Apache license. Arcee asserts that with 400 billion parameters, Trinity ranks among the largest open-source foundation models ever developed and publicly released by a U.S.-based company.

According to benchmark tests performed on base models with minimal post-training, Trinity demonstrates performance comparable to Meta’s Llama 4 Maverick 400B and GLM-4.5 from Z.ai, a leading Chinese open-source lab spun out of Tsinghua University.

[Image: Arcee AI benchmarks for its Trinity Large LLM (preview version, base model). Image credits: Arcee AI]

Similar to other cutting-edge models, Trinity is optimized for coding and multi-step agentic workflows. However, despite its scale, it is not yet a full state-of-the-art competitor because it currently processes text only. In contrast, Meta’s Llama 4 Maverick already supports multimodal input, handling both text and images. Arcee explains that before expanding into additional AI modalities, the team prioritized creating a powerful base large language model designed to appeal to its core audience: developers and academic researchers.

A key strategic goal is to attract U.S. companies of all sizes, steering them away from relying on open models originating from China. Mark Atkins, who led the model development, emphasized the importance of technical superiority, stating, “Ultimately, the winners of this game, and the only way to really win over the usage, is to have the best open-weight model. To win the hearts and minds of developers, you have to give them the best.”

Current benchmarks indicate that the Trinity base model, still in preview as further post-training continues, is competitive. In some evaluations covering coding, mathematics, common sense, knowledge, and reasoning, it slightly outperforms Llama 4 Maverick. The speed with which Arcee has established itself as a competitive AI lab is notable.

The large Trinity model follows two smaller models released in December: Trinity Mini, a fully post-trained 26-billion-parameter model for tasks ranging from web applications to agents, and Trinity Nano, a 6-billion-parameter experimental model designed to explore the limits of compact yet conversational AI. Remarkably, Arcee trained all three models within six months at a total cost of $20 million, utilizing 2,048 Nvidia Blackwell B300 GPUs. This expenditure was drawn from the approximately $50 million the company has raised to date, according to founder and CEO Mark McQuade.
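For a rough sense of scale, here is a back-of-envelope calculation, in Python, of the compute rate implied by those figures. The utilization assumptions are ours, not Arcee’s: the $20 million likely covers more than raw GPU time, and the cluster was not necessarily fully busy for the whole run.

    # Implied compute rate from the figures above: $20M total spend,
    # 2,048 Nvidia B300 GPUs, roughly six months of training.
    # Back-of-envelope only; the budget likely also covered data,
    # staff, and failed experiments.
    total_cost_usd = 20_000_000
    num_gpus = 2_048
    hours = 6 * 30 * 24  # ~six months of wall-clock time

    gpu_hours = num_gpus * hours
    print(f"{gpu_hours:,} GPU-hours")                         # 8,847,360
    print(f"${total_cost_usd / gpu_hours:.2f} per GPU-hour")  # ~$2.26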

Atkins acknowledged that while $20 million was “a lot for us,” it is still far less than the investments being made by larger AI labs. The accelerated six-month timeline was intentional. Atkins, whose background prior to LLMs involved developing voice agents for automotive applications, explained, “We are a younger startup that’s extremely hungry. We have a tremendous amount of talent and bright young researchers who, when given the opportunity to spend this amount of money and train a model of this size, we trusted that they’d rise to the occasion. And they certainly did, with many sleepless nights, many long hours.”

McQuade, an early employee at the open-source platform Hugging Face before founding Arcee, noted that the company did not initially set out to become a new U.S. AI lab. Originally, Arcee focused on model customization for large enterprise clients such as SK Telecom. “We were only doing post-training. So we would take the great work of others: We would take a Llama model, we would take a Mistral model, we would take a Qwen model that was open source, and we would post-train it to make it better for a company’s intended use,” he said, which included implementing reinforcement learning.

As their client portfolio expanded, however, the need for an in-house model became apparent. McQuade grew concerned about dependence on other companies’ models. Compounding this, many of the highest-performing open models were coming from China, which made U.S. enterprises either hesitant or legally restricted from using them. The choice to develop their own model was a significant one. McQuade pointed out, “I think there’s less than 20 companies in the world that have ever pre-trained and released their own model” at the scale and sophistication Arcee targeted.

The company began cautiously, first collaborating with training specialist DatologyAI on a small 4.5-billion-parameter model. The success of that project gave them the confidence to pursue more ambitious efforts. But with models like Llama already available, why is another open-weight model necessary? Atkins highlights Arcee’s commitment to the Apache license, which ensures its models will remain permanently open-source. This stands in contrast to Meta, whose CEO Mark Zuckerberg suggested last year that the company might not always open-source its most advanced models.

“Llama can be looked at as not truly open source, as it uses a Meta-controlled license with commercial and usage caveats,” Atkins says, noting that some open-source advocates argue Llama does not fully comply with open-source principles. McQuade framed Arcee’s mission clearly: “Arcee exists because the U.S. needs a permanently open, Apache-licensed, frontier-grade alternative that can actually compete at today’s frontier.”

All Trinity models, regardless of size, are available for free download. The largest version will be offered in three variants: Trinity Large Preview, a lightly post-trained instruct model tuned to follow human instructions and suited for general chat; Trinity Large Base, the base model without any post-training; and Trinity Large TrueBase, a version trained with no instruct data at all, giving enterprises and researchers a clean slate on which to apply their own customization without first undoing prior adjustments.
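For readers who want to try the weights, the following is a minimal sketch of loading one of the variants with the Hugging Face transformers library. The repository id used here is a hypothetical placeholder, not a confirmed name; check Arcee AI’s Hugging Face organization for the actual listings, and note that a 400-billion-parameter checkpoint requires a multi-GPU cluster to run.

    # Minimal sketch of loading a Trinity variant with Hugging Face
    # transformers. The repo id below is a hypothetical placeholder;
    # check Arcee AI's Hugging Face organization for the actual names.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "arcee-ai/Trinity-Large-Preview"  # hypothetical repo id

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",   # shard across available GPUs; 400B params needs many
        torch_dtype="auto",  # keep the checkpoint's native precision
    )

    prompt = "Summarize the Apache 2.0 license in one sentence."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))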

Looking ahead, Arcee AI plans to offer a hosted version of the model once it is generally available, with what it describes as competitively priced API access. That release is expected within six weeks as the startup continues to refine the model’s reasoning capabilities. For the already-available Trinity Mini, API pricing is set at $0.045 per million input tokens and $0.15 per million output tokens, with a rate-limited free tier also on offer. Alongside the new models, Arcee continues to provide its established post-training and customization services for enterprise clients.
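At the Trinity Mini rates quoted above, estimating a monthly bill is straightforward arithmetic. Here is a small sketch; the traffic volumes are invented purely for illustration.

    # Monthly cost estimate at the published Trinity Mini API rates:
    # $0.045 per million input tokens, $0.15 per million output tokens.
    INPUT_RATE = 0.045 / 1_000_000   # dollars per input token
    OUTPUT_RATE = 0.15 / 1_000_000   # dollars per output token

    def monthly_cost(input_tokens: int, output_tokens: int) -> float:
        """Estimated monthly bill in dollars."""
        return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

    # Hypothetical workload: 500M input tokens, 100M output tokens per month.
    print(f"${monthly_cost(500_000_000, 100_000_000):,.2f}")  # -> $37.50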



